31305228
https://en.wikipedia.org/wiki/ACE%20Encrypt
ACE Encrypt
ACE (Advanced Cryptographic Engine) is a collection of units implementing both a public-key encryption scheme and a digital signature scheme, named «ACE Encrypt» and «ACE Sign» respectively. The schemes are based on the Cramer-Shoup public-key encryption scheme and the Cramer-Shoup signature scheme. The variants introduced here are intended to achieve a good balance between the performance and the security of the whole encryption system. Authors All the algorithms implemented in ACE are based on algorithms developed by Victor Shoup and Ronald Cramer. The full algorithm specification was written by Victor Shoup. The algorithms were implemented by Thomas Schweinberger and Mehdi Nassehi and are supported and maintained by Victor Shoup. Thomas Schweinberger participated in drafting the ACE specification document and also wrote a user manual. Ronald Cramer is currently at the University of Aarhus, Denmark; he worked on the ACE Encrypt project during his stay at ETH in Zürich, Switzerland. Mehdi Nassehi and Thomas Schweinberger worked on the ACE project at the IBM research lab in Zürich, Switzerland, where Victor Shoup also works. Security The encryption scheme in ACE can be proven secure under reasonable and natural intractability assumptions. These four assumptions are: the Decisional Diffie-Hellman (DDH) assumption; the strong RSA assumption; SHA-1 second-preimage collision resistance; and MARS sum/counter-mode pseudo-randomness. Basic Terminology and Notation Here we introduce the notation used in this article. Basic mathematical notation — The set of integers. — The set of univariate polynomials with coefficients in the finite field of cardinality 2. — the integer such that for integer and . — the polynomial with such that with . Basic string notation — The set of all strings. — The set of all strings of length n. For — the length of the string . The string of length zero is denoted . 
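The string and byte conventions in this section (string length, the empty string, concatenation, and zero-padding a byte string out to whole words) can be made concrete in a short sketch. This is an illustrative Python rendering, not part of the ACE specification; the 4-byte word size is an assumption for illustration.

```python
# Illustrative sketch of the string/byte notation: length, the empty
# string, concatenation, and zero-padding to a whole number of words.
# The 4-byte (32-bit) word size is an assumption, not taken from ACE.

EMPTY = b""          # the string of length zero
WORD_SIZE = 4        # assumed word size in bytes

def pad_to_words(s: bytes) -> bytes:
    """Zero-pad s so that its length is a multiple of WORD_SIZE."""
    remainder = len(s) % WORD_SIZE
    if remainder == 0:
        return s
    return s + b"\x00" * (WORD_SIZE - remainder)

def to_words(s: bytes) -> list[int]:
    """Convert a byte string into a list of 32-bit words (big-endian)."""
    p = pad_to_words(s)
    return [int.from_bytes(p[i:i + WORD_SIZE], "big")
            for i in range(0, len(p), WORD_SIZE)]

# Concatenation of byte strings is plain +; length is len().
assert len(EMPTY) == 0
print(to_words(b"\x00\x00\x00\x01\xff"))  # [1, 4278190080]
```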
For — the result of concatenating and . Bits, Bytes, Words — The set of bits. Let us take all sets of the form . For such a set A we define the "zero element": We define as the set of bytes, and as the set of words. For with and we define a padding operator: Conversion operator The conversion operator converts between elements . Encryption Scheme Encryption Key Pair The encryption scheme employs two key types: ACE public key: . ACE private key: . For a given size parameter , such that , the key components are defined as: — a 256-bit prime number. — an m-bit prime number, such that . — elements (whose multiplicative order modulo divides ). — elements . — elements with and , where and . Key Generation Algorithm. Key generation for the ACE encryption scheme. Input: a size parameter , such that . Output: a public/private key pair. Generate a random prime , such that . Generate a random prime , , such that . Generate a random integer , such that . Generate random integers and Compute the following integers in : Generate random byte strings and , where and . Return the public key/private key pair Ciphertext Representation A ciphertext of the ACE encryption scheme has the form where the components are defined as: — integers from (whose multiplicative order modulo divides ). — element . — element . We call the preamble, and the cryptogram. If a cleartext is a string consisting of bytes, then the length of is equal to . We need to introduce the function , which maps a ciphertext to its byte-string representation, and the corresponding inverse function . For the integer , word string , integers , and byte string , For integer , byte string , such that , Encryption Process Algorithm. ACE asymmetric encryption operation. Input: public key and byte string . Output: byte string — the ciphertext of . Generate at random. Generate the ciphertext preamble: Generate at random. Compute , . Compute ; note that . Compute . Compute the key for the symmetric encryption operation: , . Compute . 
Compute the cryptogram . Encode the ciphertext: Return . Before the symmetric encryption process starts, the input message is divided into blocks , where each block, except possibly the last one, is 1024 bytes long. Each block is encrypted with the stream cipher, and for each encrypted block a 16-byte message authentication code is computed. We get the cryptogram Note that if , then . Algorithm. ACE symmetric encryption process. Input: Output: , . If , then return . Initialize a pseudo-random generator state: Generate the key : . While , do the following: . Generate mask values for the encryption and MAC: . . Encrypt the plaintext: . Generate the message authentication code: If , then ; else . . Update the ciphertext: . . Return . Decryption Process Algorithm. ACE decryption process. Input: public key and corresponding private key , byte string . Output: the decrypted message . Decrypt the ciphertext: If , then return . Compute: note that , where . Verify the ciphertext preamble: If or or , then return . If , then return . . If , then . Compute ; note that . If , then . If , then return . Compute the key for the symmetric decryption operation: , . Compute . Compute ; note that can return . Return . Algorithm. Decryption operation . Input: Output: the decrypted message . If , then return . Initialize a pseudo-random generator state: Generate the key : . While , do the following: . If , then return . Generate mask values for the encryption and MAC: . . Verify the message authentication code: If , then ; else . . If , then return . Update the plaintext: . . Return . Signature Scheme The signature scheme employs two key types: ACE signature public key: . ACE signature private key: . For a given size parameter , such that , the key components are defined as follows: — a -bit prime number such that — is also a prime number. — a -bit prime number such that — is also a prime number. — and has either or bits. — elements (quadratic residues modulo ). — a 161-bit prime number. 
— element . — elements . — elements . Key Generation Algorithm. Key generation for the ACE public-key signature scheme. Input: a size parameter , such that . Output: a public/private key pair. Generate random prime numbers , such that and — is also a prime number, and and . Set . Generate a random prime number , where . Generate random , taking into account and , and compute . Generate random and compute . Generate random byte strings , and . Return the public key/private key pair Signature Representation A signature in the ACE signature scheme has the form , where the components are defined as follows: — element . — an integer such that . — elements . — element ; note that , where is the message being signed. We need to introduce the function , which maps a signature to its byte-string representation, and the corresponding inverse function . For integer , byte string , integers and , and byte string , For integer , byte string , where , Signature Generation Process Algorithm. ACE signature generation process. Input: public key , the corresponding private key , and a byte string , . Output: byte string — the digital signature . Perform the following steps to hash the input data: Generate a hash key at random, such that . Compute . Select at random, and compute . Compute . Generate a random prime , , and its certificate of correctness : . Repeat this step until . Set ; note that . Compute , where and where and . Encode the signature: Return Notes The definitions of the ACE encryption and signature processes use several auxiliary functions (e.g. UOWHash, ESHash and some others) whose definitions go beyond the scope of this article; more details can be found in the full ACE specification. Implementation, Utilization and Performance The ACE encryption scheme was recommended by NESSIE (New European Schemes for Signatures, Integrity and Encryption) as an asymmetric encryption scheme; the press release is dated February 2003. Both schemes were implemented in ANSI C, using the GNU GMP library. 
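The overall shape of the encryption operation described above (a random exponent producing a Cramer-Shoup-style preamble of group elements, from which the symmetric key is derived by hashing) can be sketched in a heavily simplified, insecure toy form. The group parameters, SHA-1 as the key-derivation hash, and the single-generator key recovery below are all illustrative assumptions, not the actual ACE construction:

```python
import hashlib
import secrets

# Toy group: a Mersenne prime modulus. ACE instead mandates a 256-bit
# prime q dividing P - 1 and elements of multiplicative order dividing q.
P = 2**127 - 1
g1, g2 = 3, 5                        # stand-in generators
w = secrets.randbelow(P - 2) + 1     # stand-in private exponent
h = pow(g1, w, P)                    # corresponding public element

def encapsulate() -> tuple[tuple[int, int], bytes]:
    """Produce a preamble (u1, u2) and a derived symmetric key."""
    r = secrets.randbelow(P - 2) + 1
    u1, u2 = pow(g1, r, P), pow(g2, r, P)
    shared = pow(h, r, P)            # h^r = g1^(w*r)
    key = hashlib.sha1(shared.to_bytes(16, "big")).digest()
    return (u1, u2), key

def decapsulate(u1: int) -> bytes:
    """Recover the same key from the preamble with the private exponent."""
    shared = pow(u1, w, P)           # u1^w = g1^(r*w), the same value
    return hashlib.sha1(shared.to_bytes(16, "big")).digest()

preamble, key = encapsulate()
assert decapsulate(preamble[0]) == key
```

The real scheme additionally binds the second generator and the preamble into the derivation so that invalid ciphertexts can be rejected; this sketch only shows why encryptor and decryptor arrive at the same symmetric key.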
Tests were done on two platforms: a PowerPC 604 model 43P running AIX and a 266 MHz Pentium running Windows NT. Result tables: Literature External links http://www.alphaworks.ibm.com/tech/ace http://www.zurich.ibm.com/security/ace/ NESSIE Portfolio of recommended cryptographic primitives Cryptographic software
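The symmetric layer described in the Encryption Process section above (the message split into 1024-byte blocks, each block encrypted with a stream-cipher mask and followed by a 16-byte message authentication code) can be sketched as follows. SHA-256 and HMAC here stand in for the MARS-based generator and MAC of the real scheme, so this is only a structural illustration:

```python
import hashlib
import hmac

BLOCK = 1024   # block length used by the ACE symmetric layer
MAC_LEN = 16   # per-block MAC length

def mask_for(key: bytes, offset: int, length: int) -> bytes:
    """Keyed mask stream for one block (stand-in for the ACE generator)."""
    mask = b""
    counter = 0
    while len(mask) < length:
        mask += hashlib.sha256(
            key + offset.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return mask[:length]

def encrypt_blocks(key: bytes, message: bytes) -> bytes:
    """XOR each block with its mask, then append a 16-byte MAC per block."""
    out = b""
    for i in range(0, len(message), BLOCK):
        block = message[i:i + BLOCK]
        cipher = bytes(a ^ b for a, b in zip(block, mask_for(key, i, len(block))))
        tag = hmac.new(key, cipher, hashlib.sha256).digest()[:MAC_LEN]
        out += cipher + tag
    return out

def decrypt_blocks(key: bytes, ciphertext: bytes) -> bytes:
    """Verify each block's MAC, then strip it and undo the mask."""
    out, i, offset = b"", 0, 0
    while i < len(ciphertext):
        chunk = ciphertext[i:i + BLOCK + MAC_LEN]
        cipher, tag = chunk[:-MAC_LEN], chunk[-MAC_LEN:]
        expected = hmac.new(key, cipher, hashlib.sha256).digest()[:MAC_LEN]
        if not hmac.compare_digest(tag, expected):
            raise ValueError("MAC verification failed")
        out += bytes(a ^ b for a, b in zip(cipher, mask_for(key, offset, len(cipher))))
        i += BLOCK + MAC_LEN
        offset += len(cipher)
    return out
```

As in the article's decryption process, an empty message yields an empty cryptogram, and any corrupted block is rejected by its MAC before the plaintext is released.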
31305379
https://en.wikipedia.org/wiki/41st%20Canadian%20Parliament
41st Canadian Parliament
The 41st Canadian Parliament was in session from June 2, 2011 to August 2, 2015, with the membership of its House of Commons having been determined by the results of the 2011 federal election held on May 2, 2011. Parliament convened on June 2, 2011, with the election of Andrew Scheer as Speaker, followed the next day by the Speech from the Throne. There were two sessions in this Parliament. On August 2, 2015, Prime Minister Stephen Harper asked the Governor General to dissolve Parliament and issue the writ of election, leading to an 11-week election campaign period for the 2015 federal election. Party standings Major bills and motions First session The parliament's first session ran between June 2, 2011, and September 13, 2013, and saw 83 bills adopted. In June 2011, immediately following the election, the first six bills were given royal assent. These were the enabling legislation for the 2011 Canadian federal budget, the Canada Post back-to-work legislation titled the Restoring Mail Delivery for Canadians Act (Bill C-6), and the Fair and Efficient Criminal Trials Act (Bill C-2), authorizing federal judges to hear all pretrial motions at once during mega-trials. When the parliament re-convened in September 2011, the Minister of Justice introduced the Safe Streets and Communities Act (Bill C-10), an omnibus bill of nine separate measures. The measures include replacing the pardon system with 'record suspensions', mandatory minimum sentences and/or penalties for certain drug and sexual offences, increasing prison sentences for marijuana offences, making it illegal to make sexually explicit information available to a child, reducing the ability of judges to sentence certain offenders to house arrest, allowing immigration officers to deny work permits to foreigners who are at risk of being sexually exploited, and enabling Canadians to sue state sponsors of terrorism for losses due to an act of terrorism. 
The bill was reviewed by the 'House Standing Committee on Justice and Human Rights' throughout October and November, chaired by Oxford MP Dave MacKenzie, and passed by the House of Commons on December 5, 2011, on a 157 to 127 vote, with only the Conservative Party voting in favour. The Senate made six amendments and it was given royal assent on March 13, 2012. On September 29 the Minister of Industry introduced the Copyright Modernization Act (Bill C-11) — the same bill that was introduced in the 3rd session of the previous parliament and referred to the 'Legislative Committee on Bill C-32'. The bill is the first major copyright reform since 1997 and brings Canadian copyright law in line with modern digital rights management. The act enables copyright holders to sue operators of peer-to-peer file sharing sites, makes circumventing technological protection measures (e.g. digital locks, encryption, etc.) illegal except when in the public interest, makes it illegal to remove rights management information (e.g. digital watermarks), extends moral rights to performers, makes legal the practice of copying for the purpose of backup, format shifting (CD to mp3), and time shifting (recording to watch later), and expands fair dealing to include use in education, parody, and satire. However, the proposed law was criticized as "irredeemably flawed" due to a contradiction between consumer rights and digital locks, American interference, a requirement for students to destroy copyrighted digital content after a course ends, and a provision making notice-and-notice mandatory for all ISPs, including disclosing the identity and activity of customers suspected of copyright infringement. The bill finally passed the House of Commons on June 18 and was given royal assent on June 29. 
The Minister of Agriculture introduced the Marketing Freedom for Grain Farmers Act (Bill C-18), which repealed the Canadian Wheat Board Act, eliminating the requirement for farmers to sell wheat and barley produce to the Canadian Wheat Board. The new act also appoints a new board of directors that must either privatize or dismantle the wheat board. The bill was studied by the 'Legislative Committee on Bill C-18' chaired by Wetaskiwin MP Blaine Calkins between October 31 and November 4. The bill was subject to a lawsuit by the wheat board's existing board of directors claiming that the government cannot change the mandate of the wheat board without the consent of its members, and a counter-suit which sought to prevent the board of directors from using wheat board revenue for legal action against the government. A federal trial court decided that for the bill to be legal the government required the consent of the affected farmers, via a vote or plebiscite, as provided for in the 1998 Canadian Wheat Board Act, although that case is under appeal. Nevertheless, on November 28, the bill was passed by the House of Commons, with only the Conservative Party voting in favour. The bill was reviewed by the Standing Senate Committee on Agriculture and Forestry in December and passed by the Senate on December 15, 2011. Despite the ruling of the judicial branch, Governor General David Johnston gave royal assent to the bill on the same day. The Minister of Public Safety introduced the Ending the Long-gun Registry Act (Bill C-19), which amends the Criminal Code and the Firearms Act to remove the requirement to register firearms that are neither prohibited nor restricted, and requires that the existing records relating to non-restricted firearms in the Canadian Firearms Registry be destroyed. The registration of long guns had been a divisive issue since its inception in 1995. 
The bill was introduced on October 25 and reviewed by the 'House Standing Committee on Public Safety and National Security' throughout November, chaired by Crowfoot MP Kevin Sorenson. With no amendments made to the bill in committee, it was passed on February 15 by the House of Commons on a 159 to 130 vote, with only two opposition MPs voting in favour. The bill was passed by the Senate on April 5, 2012, and given royal assent the next day. The Minister of Public Safety also introduced the Protecting Children from Internet Predators Act (Bill C-30), which proposed to amend the Criminal Code to grant law enforcement agencies new powers, such as online surveillance or warrantless wiretapping, to combat criminal activity on the internet. The bill met with criticism from privacy groups, opposition MPs and the public over charges that the law would infringe on the privacy rights of Canadian citizens. Public Safety Minister Vic Toews responded to the opposition by stating, addressing a Liberal MP, "He can either stand with us or stand with the child pornographers", which was received negatively. The bill was introduced on February 14, 2012, and declared dead a year later when the Response to the Supreme Court of Canada Decision in R. v. Tse Act (Bill C-55) was introduced, which also makes provisions for online surveillance and warrantless wiretapping. Senate leader Marjory LeBreton introduced the Safe Food for Canadians Act (Bill S-11), which was part of a response to tainted meat discovered coming from the XL Foods processing plant in September 2012. The act made numerous changes to the food regulatory system, including requiring better tracking of products, providing food inspectors more authority and increasing penalties for violations. The Minister of Justice introduced the Not Criminally Responsible Reform Act (Bill C-54) on February 8, 2013. 
The legislation proposes to create a "high risk" designation for people found guilty of a crime but not criminally responsible due to a mental disorder, and enshrines in law that the safety of the public is paramount in deciding whether and how such a person can re-enter society. Omnibus bills On April 26, 2012, the Minister of Finance introduced the Jobs, Growth and Long-term Prosperity Act (Bill C-38), an omnibus bill that amends over 50 laws. The bill makes numerous amendments to the environmental assessment process, including increasing the threshold at which reviews are required, limiting the scope of the reviews, shortening review times, moving environmental reviews of pipeline projects to the National Energy Board and nuclear projects to the Canadian Nuclear Safety Commission, enabling the delegation of reviews to provincial agencies, limiting reviews of fish habitats to only the fish used for commercial, recreational or First Nations purposes, making reviews of migratory birds optional (at the discretion of cabinet), and limiting public participation to only those individuals who are directly impacted by a proposal or are specifically sought by the review agency for their specialized knowledge. The omnibus bill also repeals the Kyoto Protocol Implementation Act and the Fair Wages and Hours of Labour Act; eliminates the National Council of Welfare, the International Centre for Human Rights and Democratic Development, the regulatory agency Assisted Human Reproduction Canada, the Public Appointments Commission, the National Roundtable on the Environment and the Economy, and the Canadian Artists and Producers Professional Relations Tribunal; and eliminates the office of the inspector general at the Canadian Security Intelligence Service and certain reviews by the Auditor General. It creates a new department called Shared Services Canada and replaces the Employment Insurance Board of Referees with the Social Security Tribunal. 
The bill also raises the Old Age Security eligibility age from 65 to 67 years old, and provides for the elimination of the penny and of social insurance number cards. The government was criticized for limiting debate on the 420-page bill to only seven days. The bill was passed by the House of Commons on June 18 and the Senate on June 29, and given royal assent the same day. The second omnibus bill was the Jobs and Growth Act (Bill C-45), introduced on October 18, 2012, by the Minister of Finance and adopted on December 14. The 443-page bill makes 65 amendments to 24 laws. Among the financial measures in the bill were the elimination of the Overseas Employment Tax Credit and corporate tax credits for mining exploration and development; moving the Atlantic Investment Tax Credit away from oil, gas, and mining towards electricity generation; making provisions for Pooled Registered Pension Plans; various amendments to Registered Disability Savings Plans, Retirement Compensation Arrangements, Employees Profit Sharing Plans, and thin capitalisation rules; reducing the Scientific Research and Experimental Development Tax Credit Program; adding a requirement that employers report as part of an employee's income any contributions to a group sickness or accident insurance plan; and increasing the salaries of federal judges and making the income of the Governor General subject to income tax. 
Non-financial measures added into the bill included renaming the Navigable Waters Protection Act to the Navigation Protection Act and reducing its scope from all navigable waters to only 159 rivers and lakes, plus three oceans; creating the Bridge to Strengthen Trade Act, which exempts a proposed new bridge between Windsor, Ontario and Detroit, Michigan from the Environmental Assessment Act, the Fisheries Act, and the new Navigation Protection Act; and eliminating the Merchant Seamen Compensation Board, the Hazardous Materials Information Review Commission, and the Canada Employment Insurance Financing Board. The portion of the bill that dealt with political pensions was taken out after first reading and re-introduced as the Pension Reform Act (Bill C-46). Fifteen private member bills received royal assent. Six private member bills were adopted in 2012: Geoff Regan's Purple Day Act (Bill C-278) designates March 26 as Purple Day John Carmichael's National Flag of Canada Act (Bill C-288) encourages the display of the flag of Canada on multiple-residence buildings and gated communities Joy Smith's An Act to amend the Criminal Code (trafficking in persons) (Bill C-310) enables the prosecution of Canadians who engage in human trafficking while outside Canada Dan Albas's An Act to amend the Importation of Intoxicating Liquors Act (interprovincial importation of wine for personal use) (Bill C-311) allows Canadians to import wine for personal use across provincial borders Harold Albrecht's Federal Framework for Suicide Prevention Act (Bill C-300) requires the federal government to operate a program for suicide prevention Patricia Davidson's An Act to amend the Food and Drugs Act (non-corrective contact lenses) (Bill C-313) makes cosmetic contact lenses subject to the Food and Drugs Act. In 2013, another nine private member bills were adopted: Gord Brown's An Act to amend the Canada National Parks Act (St. Lawrence Islands National Park of Canada) (Bill C-370) changes the name of St. 
Lawrence Islands National Park to Thousand Islands National Park Roxanne James's An Act to amend the Corrections and Conditional Release Act (vexatious complainants) (Bill C-293) allows the Commissioner of the Correctional Service to dismiss offender complaints believed to be frivolous Larry Miller's Transboundary Waters Protection Act (Bill C-383) limits the bulk removal of water from the Canadian side of transboundary bodies of water Merv Tweed's An Act to amend the Canada Post Corporation Act (library materials) (Bill C-321) allows Canada Post to provide reduced postage rates for mailing library materials Blake Richards's Preventing Persons from Concealing Their Identity during Riots and Unlawful Assemblies Act (Bill C-309) makes concealing identity (e.g. wearing a mask) during an unlawful assembly a criminal offence punishable by up to 10 years imprisonment Dick Harris's An Act to amend the Employment Insurance Act (incarceration) (Bill C-316) removes time spent in prison from qualifying and benefit periods for employment insurance Brian Storseth's An Act to amend the Canadian Human Rights Act (protecting freedom) (Bill C-304) repealed section 13 of the Canadian Human Rights Act, which had prohibited dissemination of hate speech by telephone or internet David Wilks's An Act to amend the Criminal Code (kidnapping of young person) (Bill C-299) creates mandatory sentencing for an offender convicted of kidnapping a person under 16 years old Alexandrine Latendresse's Language Skills Act (Bill C-419) requires that holders of certain appointed public offices be fluent in both English and French. Second session The second session ran between October 16, 2013, and August 2, 2015, and saw 86 bills receive royal assent. The Prohibiting Cluster Munitions Act implemented Canada's commitments made under the Convention on Cluster Munitions. 
The Canadian Museum of History Act changed the name and purpose of the Canadian Museum of Civilization to the Canadian Museum of History. The Combating Counterfeit Products Act created a new criminal offence for possessing or exporting counterfeit goods and allows customs officers to detain goods that they suspect infringe copyright or trade-marks. The Red Tape Reduction Act required that a federal government regulation be eliminated for every new regulation created affecting a business. The Minister of Aboriginal Affairs introduced the First Nations Elections Act, which created an alternative electoral system to the one under the Indian Act that First Nations may opt in to for electing chiefs and councils. The Minister of Justice sponsored seven bills. The Protecting Canadians from Online Crime Act made revenge porn illegal. The Tackling Contraband Tobacco Act created a new criminal offence for selling, distributing or delivering contraband tobacco products. The Not Criminally Responsible Reform Act allows those found guilty of an offence but not criminally responsible to be deemed high-risk offenders. The Tougher Penalties for Child Predators Act increases mandatory minimum penalties and maximum penalties for sexual offences against children, creates a publicly accessible database of such offenders, and requires them to report international travel to police, border guards and officials in destination countries. The Victims Bill of Rights Act creates the "Canadian Victims Bill of Rights" and provides for a right to present a victim impact statement, a right to the protection of identity, a right to participate in the criminal justice process and a right to seek restitution. The Justice for Animals in Service Act makes it a criminal offence to kill or injure a law enforcement animal or a military animal while the animal is carrying out its duties. 
The Protection of Communities and Exploited Persons Act, which makes purchasing sexual services and communicating in public places or online for the purpose of selling sexual services criminal offences, was adopted in response to a Supreme Court decision that found the existing laws against prostitution in Canada unconstitutional. The Minister of Public Safety sponsored four bills. The Protection of Canada from Terrorists Act allows the Canadian Security Intelligence Service (CSIS) to act outside Canadian borders, share information with foreign intelligence agencies and guarantee anonymity to informants. The Anti-terrorism Act, 2015 makes promoting terrorism a criminal offence, allows for preventative arrests, allows for easier information sharing, inclusive of confidential data, between federal organizations for the purpose of detecting threats, and provides new powers to CSIS. The Common Sense Firearms Licensing Act simplifies firearms licensing, provides a six-month amnesty for renewing a licence, eases rules on transporting restricted guns, gives the cabinet power to classify guns, and creates new limits to the power of the chief firearms officer. The Drug-Free Prisons Act gives the Parole Board of Canada permission to cancel parole after a positive drug test. The Minister of Health's Respect for Communities Act requires extensive consultation and letters of approval to allow supervised injection sites like Insite. The Protecting Canadians from Unsafe Drugs Act allows the Minister of Health to require studies regarding the effects of a therapeutic product (except natural health products), require label changes, and require healthcare institutions to report adverse drug reactions and medical device incidents. 
The Minister of Transport introduced the Safeguarding Canada's Seas and Skies Act, which implemented the International Convention on Liability and Compensation for Damage in Connection with the Carriage of Hazardous and Noxious Substances by Sea, extends civil and criminal immunity to oil spill response operations, and adds new reporting requirements for oil handling facilities. The same minister also introduced the Safe and Accountable Rail Act, which establishes minimum liability insurance levels for railway companies and creates a new compensation fund, financed by shippers, to cover damages from railway accidents. The Minister of Natural Resources's Energy Safety and Security Act and Pipeline Safety Act increase the no-fault liability for companies involved in oil and gas pipelines and offshore oil facilities to $1 billion, with unlimited liability if found at fault, and implement parts of the Vienna Convention on Civil Liability for Nuclear Damage. Nineteen private member bills were adopted in the second session. Cheryl Gallant's Disability Tax Credit Promoters Restrictions Act (Bill C-462) prevents tax consultants from charging fees to claim the Disability Tax Credit on behalf of someone. David Tilson's An Act to amend the Criminal Code (mischief relating to war memorials) (Bill C-217) makes committing mischief in relation to a war memorial or cenotaph a criminal offence. Parm Gill's An Act to amend the Criminal Code and the National Defence Act (criminal organization recruitment) (Bill C-394) makes recruiting, soliciting, encouraging, coercing or inviting a person to join a criminal organization a criminal offence. Mark Warawa's An Act to amend the Criminal Code and the Corrections and Conditional Release Act (restrictions on offenders) (Bill C-489) allows courts to require an offender, as a condition of probation, to stay 2 kilometres away from a victim's residence and to refrain from communicating with the victim or a witness. 
Earl Dreeshen's An Act to amend the Criminal Code (personating peace officer or public officer) (Bill C-444) makes personating a police officer or a public officer while committing a crime an aggravating circumstance. Rick Norlock's National Hunting, Trapping and Fishing Heritage Day Act (Bill C-501) makes the third Saturday in September National Hunting, Trapping and Fishing Heritage Day. Dave MacKenzie's An Act to amend the Corrections and Conditional Release Act (escorted temporary absence) (Bill C-483) transfers the authority to grant or cancel escorted temporary absences of prisoners convicted of first- or second-degree murder from the Correctional Service of Canada to the Parole Board of Canada. Canadian Ministry With the 28th Canadian Ministry continuing, Harper largely kept the same cabinet as before the election, with Jim Flaherty as Minister of Finance, Peter MacKay as Minister of National Defence, Vic Toews as Minister of Public Safety, Leona Aglukkaq as Minister of Health, and Gerry Ritz as Minister of Agriculture. Five ministers were lost to retirement or defeat in the election. In the 18 May cabinet shuffle Harper promoted Steven Blaney, Ed Fast, Joe Oliver and Peter Penashue to ministerial positions, as well as promoting Denis Lebel and Julian Fantino from Minister of State roles to ministerial positions. He also promoted Bernard Valcourt, Tim Uppal, Alice Wong, Bal Gosal, and Maxime Bernier to Minister of State roles, replacing the two who had been promoted to Minister, one who had been defeated in the election, and Rob Merrifield and Rob Moore, who were demoted. Upon the retirement of Bev Oda in July 2012, Harper promoted Julian Fantino to replace her as Minister for International Cooperation, with Bernard Valcourt replacing Fantino as Associate Minister. In preparing for the second session, Harper shuffled his cabinet in July 2013. Kellie Leitch, Chris Alexander, Shelly Glover and Kerry-Lynne Findlay were promoted to ministerial positions. 
Vic Toews, Keith Ashfield, Peter Kent and Gordon O'Connor were removed from cabinet. Michelle Rempel, Pierre Poilievre, Greg Rickford, Candice Bergen and Rob Moore were promoted from Parliamentary Secretaries to Ministers of State. Kevin Sorenson was added to cabinet as a Minister of State. John Duncan had resigned as Minister of Aboriginal Affairs and Northern Development a couple of months previously but was added back into cabinet as a Minister of State. In the shuffle Leona Aglukkaq became the new Minister of Environment, Rona Ambrose the new Minister of Health, Rob Nicholson the new Minister of National Defence, Gail Shea the new Minister of Fisheries and Oceans, and Peter MacKay the new Minister of Justice and Attorney-General. Senate In total during the 41st Parliament, Prime Minister Harper appointed 21 senators, all of whom caucused with the Conservative Party. On May 18, 2011, two weeks after the election, Harper appointed Fabian Manning, Larry Smith, and Josée Verner, all of whom were defeated Conservative Party candidates in the general election. Manning and Smith had resigned from the Senate to run in the election, and they became the first senators to be reappointed to the Senate since John Carling in April 1896. On January 6, 2012, Harper appointed seven new senators, all Conservative Party members: Alberta Senator-in-waiting Betty Unger, former Ottawa police chief Vernon White, former MP Norman Doyle, the 2011 Conservative Party nominee in Saint-Hyacinthe—Bagot Jean-Guy Dagenais, as well as JoAnne Buth, Ghislain Maltais, and Asha Seth. A third batch of senators was appointed on September 6, 2012. They included the first Vietnamese-Canadian, Thanh Hai Ngo, and the first Filipino-Canadian, Tobias C. Enverga, to be appointed as senators, as well as Diane Bellemare of Montreal, Tom McInnis of Halifax, and Paul McIntyre. In early 2013, Harper appointed a final batch, including Denise Batters, David Wells of St. 
John's, Victor Oh of Mississauga, Lynn Beyak of Dryden, Ontario, plus Alberta Senators-in-waiting Doug Black and Scott Tannas. Of those who left the Senate during the 41st Parliament, 22 had reached the mandatory retirement age, including 12 Conservative Party members and one of the two remaining Progressive Conservatives. Three senators (Fred Dickson, Doug Finley, and Pierre Claude Nolin) died while in office. Of the remaining, 13 voluntarily resigned for various reasons, including 7 who had caucused with the Liberal Party and 6 with the Conservative Party. The Senate suspended three members (Mike Duffy, Pamela Wallin and Patrick Brazeau) for the remainder of the 41st Parliament after allegations of misuse of expense accounts were presented; evidence of misspending was also presented against Mac Harb, but he voluntarily resigned before the Senate could consider disciplinary measures. A comprehensive audit of all senator expenses was released in June 2015 which identified 21 senators who claimed and were paid for invalid expenses, amounting to $978,627. In addition to Duffy, Wallin, Brazeau and Harb, the audit recommended criminal investigations be conducted into the expense claims of 9 other senators who had served during the 41st Parliament. In January 2014, the Liberal Party removed its Senate members from its national party caucus. From then on, the members of the new Senate caucus were referred to as "Independent Liberals" and referred to themselves as the "Senate Liberal Caucus", though they were no longer formally affiliated with the Liberal Party of Canada. 
Members Committees House Standing Committee on Aboriginal Affairs and Northern Development Standing Committee on Access to Information, Privacy and Ethics Standing Committee on Agriculture and Agri-Food Standing Committee on Canadian Heritage Standing Committee on Citizenship and Immigration Standing Committee on Environment and Sustainable Development Standing Committee on Finance Standing Committee on Fisheries and Oceans Standing Committee on Foreign Affairs and International Development Standing Committee on Government Operations and Estimates Standing Committee on Health Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities Standing Committee on Industry, Science and Technology Standing Committee on International Trade Standing Committee on Justice and Human Rights Standing Committee on National Defence Standing Committee on Natural Resources Standing Committee on Official Languages Standing Committee on Procedure and House Affairs Standing Committee on Public Accounts Standing Committee on Public Safety and National Security Standing Committee on Status of Women Standing Committee on Transport, Infrastructure and Communities Standing Committee on Veterans Affairs Senate Standing Committee on Aboriginal Peoples Standing Committee on Agriculture and Forestry Standing Committee on Banking, Trade and Commerce Standing Committee on Conflict of Interest for Senators Standing Committee on Energy, the Environment and Natural Resources Standing Committee on Fisheries and Oceans Standing Committee on Foreign Affairs and International Trade Standing Committee on Human Rights Standing Committee on Internal Economy, Budgets and Administration Standing Committee on Legal and Constitutional Affairs Standing Committee on National Finance Standing Committee on National Security and Defence Subcommittee on Veterans Affairs Standing Committee on Official Languages Standing Committee on Rules, Procedures and the Rights 
of Parliament Selection Committee Standing Committee on Social Affairs, Science and Technology Standing Committee on Transport and Communications Joint Committees Standing Joint Committee on the Library of Parliament Standing Joint Committee on Scrutiny of Regulations Officeholders The current and former officers of Parliament during the 41st Parliament are set out below. Speakers Speaker of the Senate of Canada: Hon. Noël Kinsella, Conservative Senator for New Brunswick. (until November 26, 2014) Hon. Pierre Claude Nolin, Conservative Senator for Quebec. (until April 23, 2015) Hon. Leo Housakos, Conservative Senator for Quebec. (until December 3, 2015) Speaker of the House of Commons of Canada: Hon. Andrew Scheer, Conservative member for Regina—Qu'Appelle, Saskatchewan Other Chair occupants Senate Speaker pro tempore of the Canadian Senate: Hon. Donald H. Oliver, Conservative senator for Nova Scotia (until November 16, 2013) Hon. Pierre Claude Nolin, Conservative senator for Salaberry, Quebec (November 20, 2013 – November 27, 2014) Hon. Leo Housakos, Conservative senator for Wellington, Quebec (from November 27, 2014) House of Commons House of Commons Deputy Speaker and Chair of Committees of the Whole: Denise Savoie, NDP member for Victoria, British Columbia (June 6, 2011 – August 31, 2012) Joe Comartin, NDP member for Windsor—Tecumseh, Ontario (from September 17, 2012) Deputy Chair of Committees of the Whole: Barry Devolin, Conservative member for Haliburton—Kawartha Lakes—Brock, Ontario Assistant Deputy Chair of Committees of the Whole: Bruce Stanton, Conservative member for Simcoe North, Ontario Leaders Prime Minister of Canada: Rt. Hon. Stephen Harper (Conservative) Leader of Her Majesty's Loyal Opposition (NDP): Hon. Jack Layton (May 2, 2011 – August 22, 2011) Nycole Turmel (August 23, 2011 – March 23, 2012 as Opposition Leader; July 28, 2011 to March 24, 2012 as interim NDP leader) Hon. Thomas Mulcair (from March 24, 2012) Liberal Party of Canada: Hon. 
Bob Rae (interim, May 25, 2011 – April 14, 2013) Justin Trudeau (from April 14, 2013) Bloc Québécois leader (all acting from outside the House): Vivian Barbot (May 2, 2011 – December 11, 2011) Daniel Paillé (December 11, 2011 – December 16, 2013) Mario Beaulieu (June 14, 2014 – July 1, 2015) Gilles Duceppe (July 1, 2015 – present) Green Party of Canada leader: Elizabeth May Strength in Democracy leader: Jean-François Fortin (from October 21, 2014) Floor leaders Senate Leader of the Government in the Senate: Hon. Marjory LeBreton (until July 14, 2013) Hon. Claude Carignan (from August 20, 2013) Leader of the Opposition in the Senate: Hon. Jim Cowan House of Commons Government House Leader: Hon. Peter Van Loan Opposition House Leader: Thomas Mulcair (May 26, 2011 – October 12, 2011) Joe Comartin (October 13, 2011 – April 18, 2012) Nathan Cullen (April 19, 2012 – March 19, 2014) Peter Julian (March 20, 2014 – present) Liberal House Leader: Marc Garneau (May 26, 2011 – November 27, 2012) Dominic LeBlanc (from November 28, 2012) Bloc Québécois House Leader: Louis Plamondon (May 2, 2011 – 2013) (acting) André Bellavance (December 16, 2013 – February 25, 2014) (acting) Jean-François Fortin (February 26, 2014 – August 12, 2014) (acting) Louis Plamondon (August 26, 2014 – present) (acting) Whips Senate Government Whip in the Senate: Hon. Elizabeth Marshall Deputy Government Whip in the Senate: Hon. Yonah Martin (until September 30, 2013) Hon. Stephen Greene (from October 1, 2013) Opposition Whip in the Senate: Hon. Jim Munson Deputy Opposition Whip in the Senate: Hon. 
Libbe Hubley House of Commons Chief Government Whip: Gordon O'Connor (until July 15, 2013) John Duncan (from July 15, 2013) Deputy Government Whip: Harold Albrecht (until January 27, 2013) Dave MacKenzie (from January 28, 2013) Official Opposition Whip: Chris Charlton (May 26, 2011 – April 18, 2012) Nycole Turmel (from April 19, 2012) Liberal Whip: Judy Foote Shadow cabinets Official Opposition Shadow Cabinet of the 41st Parliament of Canada Liberal Shadow Cabinet of the 41st Parliament of Canada Bloc Québécois Shadow Cabinet of the 41st Parliament of Canada Changes to party standings The following by-elections have been held during the 41st Canadian Parliament: The party standings in the House of Commons have changed as follows: The party standings in the Senate have changed during the 41st Canadian Parliament as follows: Notes References External links 41st Canadian Parliament Parliament of Canada website 2011 establishments in Canada 2015 disestablishments in Canada Stephen Harper
31314339
https://en.wikipedia.org/wiki/Proteus%20%28programming%20language%29
Proteus (programming language)
Proteus (PROcessor for TExt Easy to USe) is a fully functional, procedural programming language created in 1998 by Simone Zanella. Proteus incorporates many functions derived from several other languages: C, BASIC, Assembly, Clipper/dBase; it is especially versatile in dealing with strings, having hundreds of dedicated functions; this makes it one of the richest languages for text manipulation. Proteus owes its name to the Greek sea god Proteus, who tended Neptune's herds and gave prophecies; he was renowned for being able to transform himself, assuming different shapes. Transforming data from one form to another is the main usage of this language. Introduction Proteus was initially created as a multiplatform (DOS, Windows, Unix) system utility, to manipulate text and binary files and to create CGI scripts. The language was later focused on Windows, by adding hundreds of specialized functions for: network and serial communication, database interrogation, system service creation, console applications, keyboard emulation, ISAPI scripting (for IIS). Most of these additional functions are available only in the Windows flavour of the interpreter, even though a Linux version is still available. Proteus was designed to be practical (easy to use, efficient, complete), readable and consistent. Its strongest points are: powerful string manipulation; comprehensibility of Proteus scripts; availability of advanced data structures: arrays, queues (single or double), stacks, bit maps, sets, AVL trees. The language can be extended by adding user functions written in Proteus or DLLs created in C/C++. 
Language features At first sight, Proteus may appear similar to Basic because of its straightforward syntax, but similarities are limited to the surface: Proteus has a fully functional, procedural approach; variables are untyped, do not need to be declared, can be local or public and can be passed by value or by reference; all the typical control structures are available (if-then-else; for-next; while-loop; repeat-until; switch-case); new functions can be defined and used as native functions. Proteus supports only three data types: integer numbers, floating-point numbers and strings. Access to advanced data structures (files, arrays, queues, stacks, AVL trees, sets and so on) takes place by using handles, i.e. integer numbers returned by item creation functions. Type declaration is unnecessary: a variable's type is determined by the function applied to it – Proteus converts every variable on the fly when needed and holds the previous data renderings, to avoid the performance degradation caused by repeated conversions. There is no need to add parentheses in expressions to determine the evaluation order, because the language is fully functional (there are no operators). Proteus includes hundreds of functions for: accessing the file system; sorting data; manipulating dates and strings; interacting with the user (console functions); calculating logical and mathematical expressions. Proteus supports associative arrays (called sets) and AVL trees, which make it possible to sort and look up values quickly. Two types of regular expressions are supported: extended (Unix-like); basic (DOS-like, having just the wildcards "?" and "*"). Both types of expressions can be used to parse and compare data. The functional approach and the extensive library of built-in functions make it possible to write very short but powerful scripts; to keep them comprehensible, medium-length keywords were adopted. 
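The difference between the two pattern styles mentioned above can be illustrated in Python, whose standard library happens to provide both kinds of matching (a sketch for comparison only; Proteus uses its own built-in functions for this):

```python
import fnmatch   # basic, DOS-like wildcards: "?" and "*"
import re        # extended, Unix-like regular expressions

# Basic pattern: "?" matches exactly one character, "*" any run.
assert fnmatch.fnmatch("report1.txt", "report?.txt")
assert not fnmatch.fnmatch("report12.txt", "report?.txt")
assert fnmatch.fnmatch("report12.txt", "report*.txt")

# Extended pattern: character classes, repetition, full anchoring.
assert re.fullmatch(r"report[0-9]+\.txt", "report12.txt") is not None
assert re.fullmatch(r"report[0-9]+\.txt", "reportA.txt") is None
```

The basic style is easier to write but can only express "any one character" and "any run of characters", while the extended style can constrain which characters may appear and how many times.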
The user, besides writing new high-level functions in Proteus, can add new functions in C/C++ by following the guidelines and using the templates available in the software development kit; the new functions can be invoked exactly the same way as the predefined ones, passing expressions by value or variables by reference. Proteus is an interpreted language: programs are loaded into memory, pre-compiled and run; since the number of built-in functions is large, execution speed is usually very good and often comparable to that of compiled programs. One of the most interesting features of Proteus is the possibility of running scripts as services or ISAPI scripts. Running a Proteus script as a service, started as soon as the operating system has finished loading, gives many advantages: no user needs to log in to start the script, and a service can be run with different privileges so that it cannot be stopped by a user. This is very useful for protecting critical processes in industrial environments (data collection, device monitoring), or for preventing the operator from inadvertently closing a utility (keyboard emulation). The ISAPI version of Proteus can be used to create scripts run through Internet Information Services and is equipped with specific functions to cooperate with the web server. For intellectual property protection Proteus provides: script encryption; digital signature of the scripts, by using the development key (which is unique); the option to enable or disable the execution of a script (or part of it) by using the customer's key. Proteus is appreciated because it is relatively easy to write short, powerful and comprehensible scripts; the large number of built-in functions, together with the examples in the manual, keeps the learning curve low. The development environment includes a source code editor with syntax highlighting and a context-sensitive guide. 
Proteus does not need to be installed: the interpreter is a single executable (below 400 KB) that does not require additional DLLs to run on recent Windows systems. Synopsis and licensing The main features of this language are: fully functional, procedural language; multi-language support: Proteus is available in several languages (keywords and messages); no data types: all variables can be used as integer numbers, floating-point numbers or strings; variables are interpreted according to the functions being applied – Proteus keeps different representations of their values between calls, to decrease execution time in case of frequent conversions between one type and the other; no pre-allocated structures: all data used by Proteus are dynamically allocated at execution time; there are no limits on: recursion, maximum data size, number of variables, etc.; no operators: Proteus is a completely functional language; thus, there is no ambiguity when evaluating expressions and parentheses are not needed; large library of predefined functions: Proteus is not a toy language; it comes with hundreds of library functions ready to be used for working on strings, dates, numbers, for sorting, searching and so on; advanced data access (DAO), pipes, Windows sockets, serial ports: in the Windows version, Proteus includes hundreds of system calls which are operating-system-specific; clear and comprehensible syntax: the names of the library functions resemble those of corresponding functions in C, Clipper/Flagship and Assembly; by using medium-length keywords, Proteus programs are very easy to understand; native support for high-level data structures: arrays, queues (single or double), stacks, bit maps, sets, AVL trees are already available in Proteus and do not require additional code or libraries to be used; ISAPI DLL and Windows Service versions: Proteus is available as a Windows service or as an ISAPI DLL (for use together with Microsoft Internet 
Information Server); user libraries: it is possible to write user-defined functions (UDFs) in separate files and include them (even conditionally and recursively) inside new programs; UDFs can be referenced before or after their definition; it is also possible to write external functions in Visual C++ and invoke them from a Proteus script; native support for MS-DOS/Windows, Macintosh and Unix text files (all versions); three models for dates (English, American, Japanese), with functions to check them and to do calculations according to the Gregorian calendar; epoch setting for 2-digit-year dates; support for time in 12- and 24-hour format; support for simple (DOS-like) and extended (Unix-like) regular expressions, in all versions; intellectual property protection, by using digital signature and cryptography; extensive library of functions to write interactive console programs. Proteus is available in a demo version (script execution limited to three minutes) and a registered version, protected by a USB dongle. At the moment, it is available as a Windows or Ubuntu package and is distributed by SZP. Example programs Hello World The following example prints out "Hello world!". CONSOLELN "Hello World!" Extract two fields The following example reads the standard input (CSV format, separator ";") and prints out the first two fields separated by "|": CONSOLELN TOKEN(L, 1, ";") "|" TOKEN(L, 2, ";") Proteus scripts by default work on an input file and write to an output file; the predefined identifier L gets the value of every line in input. The function TOKEN returns the requested item of the string; the third parameter represents the delimiter. String concatenation is implicit. The same program can be written in this way: H = TOKNEW(L, ";") CONSOLELN TOKGET(H, 1) "|" TOKGET(H, 2) TOKFREE(H) In this case, we used TOKNEW, which builds the list of the tokens in the line, and TOKGET, which retrieves a single token from that list; this is more efficient if we need to access several items in the string. 
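For readers unfamiliar with Proteus, the field-extraction script above can be approximated in Python (an illustrative sketch, not part of the Proteus distribution; the function name is hypothetical):

```python
# Mirrors the Proteus one-liner
#   CONSOLELN TOKEN(L, 1, ";") "|" TOKEN(L, 2, ";")
# which prints the first two ";"-separated fields of each input line,
# joined by "|".
def extract_two_fields(line, sep=";"):
    tokens = line.rstrip("\n").split(sep)   # like TOKNEW: build the token list once
    return tokens[0] + "|" + tokens[1]      # concatenation is implicit in Proteus

print(extract_two_fields("John;Smith;42"))   # John|Smith
```

Note that where Python needs an explicit loop over standard input, Proteus applies the script line by line automatically through the predefined identifier L.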
External links Proteus user manual Text-oriented programming languages
31344199
https://en.wikipedia.org/wiki/UT-VPN
UT-VPN
University of Tsukuba Virtual Private Network, UT-VPN is a free and open-source software application that implements virtual private network (VPN) techniques for creating secure point-to-point or site-to-site connections in routed or bridged configurations and remote access facilities. It uses SSL/TLS security for encryption and is capable of traversing network address translators (NATs) and firewalls. It was written by Daiyuu Nobori and SoftEther Corporation, and is published under the GNU General Public License (GPL) by the University of Tsukuba. UT-VPN is compatible with the PacketiX VPN product of SoftEther Corporation. UT-VPN was developed on the basis of PacketiX VPN, but some functions were removed. For example, the RADIUS client is supported by PacketiX VPN Server, but it is not supported by UT-VPN Server. Architecture Encryption UT-VPN uses the OpenSSL library to encrypt packets. Authentication UT-VPN offers username/password-based authentication. Networking UT-VPN consists of UT-VPN Server and UT-VPN Client, and functions as an L2 VPN (over SSL/TLS). UT-VPN Client A 'Virtual NIC' (virtual network interface card) is installed in the operating system on which UT-VPN Client is installed. The Virtual NIC is recognized by the OS as a physical NIC. UT-VPN encapsulates the L2 frames coming from the Virtual NIC into TCP (or SSL/TLS) packets. UT-VPN Client connects to UT-VPN Server; if authentication with the UT-VPN Server succeeds, UT-VPN Client establishes a connection with a Virtual HUB. UT-VPN Server UT-VPN Server hosts one or more 'Virtual HUBs', which function as virtual L2 switches. A Virtual HUB handles the frames received from UT-VPN Clients; if necessary, UT-VPN Server forwards encapsulated L2 frames on to other UT-VPN Clients. A Virtual HUB on one UT-VPN Server can make a cascading connection to a Virtual HUB on another UT-VPN Server; site-to-site connections can be realized with cascading connections. L2 Bridge UT-VPN Server can bridge between any NIC of the host operating system and a Virtual HUB. 
L3 Switch UT-VPN Server has a Virtual L3 Switch function, which performs L3 switching between Virtual HUBs on the UT-VPN Server. Operational Environment UT-VPN Server Windows Windows 98 / Millennium Edition Windows NT 4.0 Windows 2000 Windows XP Windows Server 2003 Windows Vista Windows Server 2008 Hyper-V Server Windows 7 Windows Server 2008 R2 * Supported for x86/x64 UNIX Linux (2.4 or later) FreeBSD (6.0 or later) Solaris (8.0 or later) Mac OS X (Tiger or later) * UT-VPN Server works in any environment where the source code can be compiled. UT-VPN Client Windows Windows 98 Windows ME Windows 2000 Windows XP Windows Server 2003 Windows Vista Windows Server 2008 Hyper-V Server Windows 7 Windows Server 2008 R2 * Supported for x86/x64 UNIX Linux (2.4 or later) * The Virtual NIC does not work in other UNIX operating systems. Community The primary method for community support is through the SoftEther mailing lists. See also University of Tsukuba SoftEther Corporation OpenVPN, a well-known open-source VPN application. References External links Official links UT-VPN OpenSource Project (Japanese) UT-VPN Download (Japanese, requires email address) Computer network security Tunneling protocols Free security software Unix network-related software
31349396
https://en.wikipedia.org/wiki/DMA%20attack
DMA attack
A DMA attack is a type of side channel attack in computer security, in which an attacker can penetrate a computer or other device, by exploiting the presence of high-speed expansion ports that permit direct memory access (DMA). DMA is included in a number of connections, because it lets a connected device (such as a camcorder, network card, storage device or other useful accessory or internal PC card) transfer data between itself and the computer at the maximum speed possible, by using direct hardware access to read or write directly to main memory without any operating system supervision or interaction. The legitimate uses of such devices have led to wide adoption of DMA accessories and connections, but an attacker can equally use the same facility to create an accessory that will connect using the same port, and can then potentially gain direct access to part or all of the physical memory address space of the computer, bypassing all OS security mechanisms and any lock screen, to read all that the computer is doing, steal data or cryptographic keys, install or run spyware and other exploits, or modify the system to allow backdoors or other malware. Preventing physical connections to such ports will prevent DMA attacks. On many computers, the connections implementing DMA can also be disabled within the BIOS or UEFI if unused, which depending on the device can nullify or reduce the potential for this type of exploit. Examples of connections that may allow DMA in some exploitable form include FireWire, CardBus, ExpressCard, Thunderbolt, USB 4.0, PCI, PCI-X, and PCI Express. Description In modern operating systems, non-system (i.e. user-mode) applications are prevented from accessing any memory locations not explicitly authorized by the virtual memory controller (called memory management unit (MMU)). 
In addition to containing damage that may be caused by software flaws and allowing more efficient use of physical memory, this architecture forms an integral part of the security of the operating system. However, kernel-mode drivers, many hardware devices, and user-mode vulnerabilities allow direct, unimpeded access of the physical memory address space. The physical address space includes all of the main system memory, as well as memory-mapped buses and hardware devices (which are controlled by the operating system through reads and writes as if they were ordinary RAM). The OHCI 1394 specification allows devices, for performance reasons, to bypass the operating system and access physical memory directly without any security restrictions. But SBP2 devices can easily be spoofed, making it possible to trick an operating system into allowing an attacker to both read and write physical memory, and thereby to gain unauthorised access to sensitive cryptographic material in memory. Systems may still be vulnerable to a DMA attack by an external device if they have a FireWire, ExpressCard, Thunderbolt or other expansion port that, like PCI and PCI Express in general, connects attached devices directly to the physical rather than virtual memory address space. Therefore, systems that do not have a FireWire port may still be vulnerable if they have a PCMCIA/CardBus/PC Card or ExpressCard port that would allow an expansion card with a FireWire to be installed. Uses An attacker could, for example, use a social engineering attack and send a "lucky winner" a rogue Thunderbolt device. Upon connecting to a computer, the device, through its direct and unimpeded access to the physical address space, would be able to bypass almost all security measures of the OS and have the ability to read encryption keys, install malware, or control other system devices. The attack can also easily be executed where the attacker has physical access to the target computer. 
In addition to the abovementioned nefarious uses, there are some beneficial uses too, as the DMA features can be used for kernel debugging purposes. There is a tool called Inception for this attack, which only requires a machine with an expansion port susceptible to the attack. Another application known to exploit this vulnerability to gain unauthorized access to running Windows, Mac OS and Linux computers is the spyware FinFireWire. Mitigations DMA attacks can be prevented by physical security against potentially malicious devices. Kernel-mode drivers have many powers to compromise the security of a system, and care must be taken to load trusted, bug-free drivers. For example, recent 64-bit versions of Microsoft Windows require drivers to be tested and digitally signed by Microsoft, and prevent any non-signed drivers from being installed. An IOMMU is a technology that applies the concept of virtual memory to such system buses, and can be used to close this security vulnerability (as well as increase system stability). Intel brands its IOMMU as VT-d; AMD brands its IOMMU as AMD-Vi. Linux and Windows 10 support these IOMMUs and can use them to block I/O transactions that have not been allowed. Newer operating systems may take steps to prevent DMA attacks. Recent Linux kernels include the option to disable DMA by FireWire devices while allowing other functions. Windows 8.1 can prevent access to DMA ports of an unattended machine if the console is locked. But as of 2019, the major OS vendors had not taken into account the variety of ways that a malicious device could take advantage of complex interactions between multiple emulated peripherals, exposing subtle bugs and vulnerabilities. Never allowing sensitive data to be stored in RAM unencrypted is another mitigation avenue against DMA attacks. However, protection against reading the RAM's content is not enough, as writing to RAM via DMA may compromise seemingly secure storage outside of RAM by code injection. 
An example of the latter kind of attack is TRESOR-HUNT, which exposes cryptographic keys that are never stored in RAM (but only in certain CPU registers); TRESOR-HUNT achieves this by overwriting parts of the operating system. Microsoft recommends changes to the default Windows configuration to prevent this if it is a concern. See also FireWire security issue Cold boot attack Pin control attack References External links 0wned by an iPod - hacking by Firewire presentation by Maximillian Dornseif from the PacSec/core04 conference, Japan, 2004 Physical memory attacks via Firewire/DMA - Part 1: Overview and Mitigation (Update) Side-channel attacks
31395652
https://en.wikipedia.org/wiki/Formal%20semantics%20%28natural%20language%29
Formal semantics (natural language)
Formal semantics is the study of grammatical meaning in natural languages using formal tools from logic and theoretical computer science. It is an interdisciplinary field, sometimes regarded as a subfield of both linguistics and philosophy of language. It provides accounts of what linguistic expressions mean and how their meanings are composed from the meanings of their parts. The enterprise of formal semantics can be thought of as that of reverse-engineering the semantic components of natural languages' grammars. Overview Formal semantics studies the denotations of natural language expressions. High-level concerns include compositionality, reference, and the nature of meaning. Key topic areas include scope, modality, binding, tense, and aspect. Semantics is distinct from pragmatics, which encompasses aspects of meaning which arise from interaction and communicative intent. Formal semantics is an interdisciplinary field, often viewed as a subfield of both linguistics and philosophy, while also incorporating work from computer science, mathematical logic, and cognitive psychology. Within philosophy, formal semanticists typically adopt a Platonistic ontology and an externalist view of meaning. Within linguistics, it is more common to view formal semantics as part of the study of linguistic cognition. As a result, philosophers put more of an emphasis on conceptual issues while linguists are more likely to focus on the syntax-semantics interface and crosslinguistic variation. Central concepts Truth conditions The fundamental question of formal semantics is what you know when you know how to interpret expressions of a language. A common assumption is that knowing the meaning of a sentence requires knowing its truth conditions, or in other words knowing what the world would have to be like for the sentence to be true. 
For instance, to know the meaning of the English sentence "Nancy smokes" one has to know that it is true when the person Nancy performs the action of smoking. However, many current approaches to formal semantics posit that there is more to meaning than truth-conditions. In the formal semantic framework of inquisitive semantics, knowing the meaning of a sentence also requires knowing what issues (i.e. questions) it raises. For instance "Nancy smokes, but does she drink?" conveys the same truth-conditional information as the previous example but also raises an issue of whether Nancy drinks. Other approaches generalize the concept of truth conditionality or treat it as epiphenomenal. For instance in dynamic semantics, knowing the meaning of a sentence amounts to knowing how it updates a context. Pietroski treats meanings as instructions to build concepts. Compositionality The Principle of Compositionality is the fundamental assumption in formal semantics. This principle states that the denotation of a complex expression is determined by the denotations of its parts along with their mode of composition. For instance, the denotation of the English sentence "Nancy smokes" is determined by the meaning of "Nancy", the denotation of "smokes", and whatever semantic operations combine the meanings of subjects with the meanings of predicates. In a simplified semantic analysis, this idea would be formalized by positing that "Nancy" denotes Nancy herself, while "smokes" denotes a function which takes some individual x as an argument and returns the truth value "true" if x indeed smokes. Assuming that the words "Nancy" and "smokes" are semantically composed via function application, this analysis would predict that the sentence as a whole is true if Nancy indeed smokes. Phenomena Scope Scope can be thought of as the semantic order of operations. 
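The simplified function-application analysis of "Nancy smokes" given under Compositionality can be written out as a small program, with denotations encoded as Python values and functions (an illustrative sketch; the encoding is a pedagogical assumption, not tied to any particular formal framework):

```python
# Proper names denote individuals; intransitive verbs denote
# functions from individuals to truth values.
smokers = {"Nancy"}            # the extension of "smokes" in our toy model

def den_name(name):
    return name                # [[Nancy]] = the individual Nancy herself

def den_smokes(individual):
    return individual in smokers   # [[smokes]](x) = True iff x smokes

def apply(function, argument):
    # The mode of composition: function application.
    return function(argument)

# Compositionality: the sentence's truth value is determined by the
# denotations of its parts plus their mode of composition.
print(apply(den_smokes, den_name("Nancy")))   # True, since Nancy smokes
```

Changing the model (the set of smokers) changes the truth value without changing the meanings of the parts or the mode of composition, which is exactly the division of labour the principle describes.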
For instance, in the sentence "Paulina doesn't drink beer but she does drink wine," the proposition that Paulina drinks beer occurs within the scope of negation, but the proposition that Paulina drinks wine does not. One of the major concerns of research in formal semantics is the relationship between operators' syntactic positions and their semantic scope. This relationship is not transparent, since the scope of an operator need not directly correspond to its surface position and a single surface form can be semantically ambiguous between different scope construals. Some theories of scope posit a level of syntactic structure called logical form, in which an item's syntactic position corresponds to its semantic scope. Other theories compute scope relations in the semantics itself, using formal tools such as type shifters, monads, and continuations. Binding Binding is the phenomenon in which anaphoric elements such as pronouns are grammatically associated with their antecedents. For instance in the English sentence "Mary saw herself", the anaphor "herself" is bound by its antecedent "Mary". Binding can be licensed or blocked in certain contexts or syntactic configurations, e.g. the pronoun "her" cannot be bound by "Mary" in the English sentence "Mary saw her". While all languages have binding, restrictions on it vary even among closely related languages. Binding was a major topic for the government and binding theory paradigm. Modality Modality is the phenomenon whereby language is used to discuss potentially non-actual scenarios. For instance, while a non-modal sentence such as "Nancy smoked" makes a claim about the actual world, modalized sentences such as "Nancy might have smoked" or "If Nancy smoked, I'll be sad" make claims about alternative scenarios. The most intensely studied expressions include modal auxiliaries such as "could", "should", or "must"; modal adverbs such as "possibly" or "necessarily"; and modal adjectives such as "conceivable" and "probable". 
However, modal components have been identified in the meanings of countless natural language expressions including counterfactuals, propositional attitudes, evidentials, habituals and generics. The standard treatment of linguistic modality was proposed by Angelika Kratzer in the 1970s, building on an earlier tradition of work in modal logic. History Formal semantics emerged as a major area of research in the early 1970s, with the pioneering work of the philosopher and logician Richard Montague. Montague proposed a formal system now known as Montague grammar which consisted of a novel syntactic formalism for English, a logical system called Intensional Logic, and a set of homomorphic translation rules linking the two. In retrospect, Montague Grammar has been compared to a Rube Goldberg machine, but it was regarded as earth-shattering when first proposed, and many of its fundamental insights survive in the various semantic models which have superseded it. Montague Grammar was a major advance because it showed that natural languages could be treated as interpreted formal languages. Before Montague, many linguists had doubted that this was possible, and logicians of that era tended to view logic as a replacement for natural language rather than a tool for analyzing it. Montague's work was published during the Linguistics Wars, and many linguists were initially puzzled by it. While linguists wanted a restrictive theory that could only model phenomena that occur in human languages, Montague sought a flexible framework that characterized the concept of meaning at its most general. At one conference, Montague told Barbara Partee that she was "the only linguist who it is not the case that I can't talk to". Formal semantics grew into a major subfield of linguistics in the late 1970s and early 1980s, due to the seminal work of Barbara Partee. 
Partee developed a linguistically plausible system which incorporated the key insights of both Montague Grammar and Transformational grammar. Early research in linguistic formal semantics used Partee's system to achieve a wealth of empirical and conceptual results. Later work by Irene Heim, Angelika Kratzer, Tanya Reinhart, Robert May and others built on Partee's work to further reconcile it with the generative approach to syntax. The resulting framework is known as the Heim and Kratzer system, after the authors of the textbook Semantics in Generative Grammar which first codified and popularized it. The Heim and Kratzer system differs from earlier approaches in that it incorporates a level of syntactic representation called logical form which undergoes semantic interpretation. Thus, this system often includes syntactic representations and operations which were introduced by translation rules in Montague's system. However, work by others such as Gerald Gazdar proposed models of the syntax-semantics interface which stayed closer to Montague's, providing a system of interpretation in which denotations could be computed on the basis of surface structures. These approaches live on in frameworks such as categorial grammar and combinatory categorial grammar. Cognitive semantics emerged as a reaction against formal semantics, but there have recently been several attempts to reconcile the two positions. See also Alternative semantics Barbara Partee Compositionality Computational semantics Discourse representation theory Dynamic semantics Inquisitive semantics Philosophy of language Pragmatics Richard Montague Montague grammar References Further reading A very accessible overview of the main ideas in the field. Chapter 10, Formal semantics, contains the best chapter-level coverage of the main technical directions. The most comprehensive reference in the area. One of the first textbooks. Accessible to undergraduates. Reinhard Muskens. Type-logical Semantics.
Routledge Encyclopedia of Philosophy Online. Barbara H. Partee. Reflections of a formal semanticist as of Feb 2005. Ample historical information. (An extended version of the introductory essay in Barbara H. Partee: Compositionality in Formal Semantics: Selected Papers of Barbara Partee. Blackwell Publishers, Oxford, 2004.) Semantics Formal semantics (natural language) Grammar
31422319
https://en.wikipedia.org/wiki/Pure%20Storage
Pure Storage
Pure Storage is an American publicly traded technology company headquartered in Mountain View, California, United States. It develops all-flash data storage hardware and software products. Pure Storage was founded in 2009 and developed its products in stealth mode until 2011. Afterwards, the company grew in revenues by about 50% per quarter and raised more than $470 million in venture capital funding, before going public in 2015. Initially, Pure Storage developed the software for storage controllers and used generic flash storage hardware. Pure Storage finished developing its own proprietary flash storage hardware in 2015. Corporate history Pure Storage was founded in 2009 under the code name Os76 Inc. by John Colgrove and John Hayes. Initially, the company was set up within the offices of Sutter Hill Ventures, a venture capital firm, and funded with $5 million in early investments. Pure Storage raised another $20 million in venture capital in a series B funding round. The company came out of stealth mode as "Pure Storage" in August 2011. Simultaneously, Pure Storage announced it had raised $30 million in a third round of venture capital funding. Another $40 million was raised in August 2012, in order to fund Pure Storage's expansion into European markets. In May 2013, the venture capital arm of the American Central Intelligence Agency (CIA), In-Q-Tel, made an investment in Pure Storage for an undisclosed amount. That August, Pure Storage raised another $150 million in funding. By this time, the company had raised a total of $245 million in venture capital investments. The following year, in 2014, Pure Storage raised $225 million in a series F funding round, valuing the company at $3 billion. Annual revenues for Pure Storage grew by almost 50% per quarter, from 2012 to 2014. It had $6 million in revenues in fiscal 2013, $43 million in fiscal 2014, and $174 million in fiscal 2015.
Pure Storage sold 100 devices its first year of commercial production in 2012 and 1,000 devices in 2014. By late 2014, Pure Storage had 750 employees. Although it was growing, the company was not profitable. It lost $180 million in 2014. In 2013, EMC sued Pure Storage and 44 of its employees who were former EMC employees, alleging theft of EMC's intellectual property. EMC also claimed that Pure Storage infringed some of their patents. Pure Storage counter-sued, alleging that EMC illegally obtained a Pure Storage appliance for reverse engineering purposes. In 2016, a jury initially awarded $14 million to EMC. A judge reversed the award and ordered a new trial to determine whether the EMC patent at issue was valid. Pure Storage and EMC subsequently settled the case for $30 million. Pure Storage filed a notification of its intent to go public with the Securities and Exchange Commission in August 2015. That October, 25 million shares were sold for a total of $425 million. The company hosted its first annual user conference in 2016. The following year, the Board of Directors appointed Charles Giancarlo as CEO, replacing Scott Dietzen. In 2017 (2018 fiscal year), Pure Storage was profitable for the first time and surpassed $1 billion in annual revenue. In August 2018, Pure Storage made its first acquisition with the purchase of a data deduplication software company called StorReduce, for $25 million. In April the following year, they announced a definitive agreement to acquire Compuverde, a software-based file storage company, for an undisclosed amount. In September 2020, Pure Storage acquired Portworx, a provider of a cloud-native storage and data-management platform based on Kubernetes, for $370 million. Products Pure Storage develops flash-based storage for data centers using consumer-grade solid state drives. Flash storage is faster than traditional disk storage, but more expensive.
Pure Storage develops proprietary de-duplication and compression software to improve the amount of data that can be stored on each drive. It also develops its own flash storage hardware. Pure Storage has three primary product lines: FlashBlade for unstructured data, FlashArray//C which uses QLC flash, and the higher-end NVMe FlashArray//X. Its products use an operating system called Purity. Most of Pure's revenues come from IT resellers that market its products to data center operators. Product history The first commercial Pure Storage product was the FlashArray 300 series. It was one of the first all-flash storage arrays for large data centers. It used generic consumer-grade, multi-level cell (MLC) solid-state drives from Samsung, paired with Pure Storage's proprietary controllers and software. The second generation product was announced in 2012. It added encryption, redundancies, and the ability to replace components like flash drives or RAM modules. In 2014, Pure Storage added two third-generation products to the 400 series. It also announced FlashStack, a converged infrastructure partnership with Cisco, in order to integrate Pure Storage's flash storage devices with Cisco's blade servers. In 2015, Pure Storage introduced a flash memory appliance built on Pure Storage's own proprietary hardware. The new hardware also used 3D-NAND and had other improvements. In 2017, Pure Storage added artificial intelligence software that configures the storage array. An expansion add-on appliance was introduced in 2017. The intended uses of Pure Storage expanded as the product developed over time. It was initially intended primarily for server virtualization, desktop virtualization, and database programs. By 2017, 30 percent of Pure Storage's revenue came from software as a service providers and other cloud customers. FlashBlade, introduced in 2016, was intended for rapid restore, unstructured data, and analytics.
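Pure Storage's deduplication software is proprietary, but the general idea behind block-level deduplication described above can be sketched as a content-addressed store that keeps only one copy of each unique block. In this illustrative sketch the block size and the choice of SHA-256 are arbitrary; production systems use much larger or variable-size blocks.

```python
import hashlib

BLOCK_SIZE = 4  # illustrative; real systems use e.g. 4 KiB or variable-size blocks

def dedup_store(data, store):
    """Split data into blocks, store each unique block once (keyed by its hash),
    and return the list of block hashes needed to reconstruct the data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # duplicate blocks are stored only once
        recipe.append(digest)
    return recipe

store = {}
recipe = dedup_store(b"AAAABBBBAAAA", store)
print(len(recipe), len(store))  # 3 2
```

Here three blocks are referenced but only two are stored, since the first and last blocks are identical; compression would then further shrink each stored block.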
In 2018, Pure Storage and Nvidia jointly developed and marketed AIRI, an appliance specifically for running artificial intelligence workloads. In December 2021, Pure Storage introduced FlashArray//XL, a high-capacity 5U version of FlashArray. References External links Official website Companies listed on the New York Stock Exchange Computer companies established in 2009 2015 initial public offerings Computer storage companies Cloud storage Storage software Computer data storage Data storage Information technology companies of the United States Technology companies based in the San Francisco Bay Area Companies based in Mountain View, California Companies based in Silicon Valley
31429763
https://en.wikipedia.org/wiki/Niels%20Provos
Niels Provos
Niels Provos is a German-American researcher in security engineering, malware, and cryptography. He received a PhD in computer science from the University of Michigan. From 2003 to 2018, he worked at Google as a Distinguished Engineer on security for Google Cloud Platform. In 2018, he left Google to join Stripe as its new head of security. For many years, Provos contributed to the OpenBSD operating system, where he developed the bcrypt adaptive cryptographic hash function. He is the author of numerous software packages, including the libevent event driven programming system, the Systrace access control system, the honeyd honeypot system, the StegDetect steganography detector, the bcrypt password hashing function, and many others. Provos has been an outspoken critic of the effect of the DMCA and similar laws on security researchers, arguing that they threaten to make criminals of people conducting legitimate security research. Provos has also served as the Program Chair of the Usenix Security Symposium, on the program committees of the Network and Distributed System Security Symposium, ACM SIGCOMM, and numerous other conferences, and served on the board of directors of Usenix from 2006 to 2010. Provos's hobbies include swordsmithing, and he has forged swords in both Japanese and Viking styles. His interest began with his father's collection of sabres. Niels routinely posts videos of his blacksmithing activities online. In his words: "At work, we try to fight the bad guys and make the world safer for our users. And swords are maybe an expression in a similar way. You create weapons to defend yourself against the hordes of barbarians." Education Ph.D., Computer Science & Engineering, August 2003, the University of Michigan (Dissertation: "Statistical Steganalysis") Diplom in Mathematics, August 1998, Universität Hamburg, Hamburg, Germany. (Masters in Mathematics).
(Thesis: "Cryptography, especially the RSA algorithm on elliptic curves and Z/nZ") Vordiplom in Mathematics, March 1995, Universität Hamburg, Hamburg, Germany. Vordiplom in Physics, March 1995, Universität Hamburg, Hamburg, Germany. Selected publications All Your iFrames Point to Us Niels Provos, Panayiotis Mavrommatis, Moheeb Rajab and Fabian Monrose, 17th USENIX Security Symposium, August 2008. The Ghost in the Browser: Analysis of Web-based Malware Niels Provos, Dean McNamee, Panayiotis Mavrommatis, Ke Wang, and Nagendra Modadugu, USENIX Workshop on Hot Topics in Understanding Botnets, April 2007. Detecting Steganographic Content on the Internet Niels Provos and Peter Honeyman, ISOC NDSS'02, San Diego, CA, February 2002 Improving Host Security with System Call Policies Niels Provos, 12th USENIX Security Symposium, Washington, DC, August 2003 Detecting pirated applications (Oct 2014) Ashish Bhatia, Min Gyung Kang, Monirul Islam Sharif, Niels Provos, Panayiotis Mavrommatis, and Sruthi Bandhakavi References External links Provos' CV Wired Profile of Provos Modern cryptographers Cypherpunks Living people Year of birth missing (living people) University of Michigan College of Engineering alumni OpenBSD people Google employees
31469655
https://en.wikipedia.org/wiki/Computer%20crime%20countermeasures
Computer crime countermeasures
Cyber crime, or computer crime, refers to any crime that involves a computer and a network. The computer may have been used in the commission of a crime, or it may be the target. Netcrime refers, more precisely, to criminal exploitation of the Internet. Issues surrounding this type of crime have become high-profile, particularly those surrounding hacking, copyright infringement, identity theft, child pornography, and child grooming. There are also problems of privacy when confidential information is lost or intercepted, lawfully or otherwise. On the global level, both governments and non-state actors continue to grow in importance, with the ability to engage in such activities as espionage and other cross-border attacks sometimes referred to as cyber warfare. The international legal system is attempting to hold actors accountable for their actions, with the International Criminal Court among the few addressing this threat. A cyber countermeasure is defined as an action, process, technology, device, or system that serves to prevent or mitigate the effects of a cyber attack against a victim, computer, server, network or associated device. Recently there has been an increase in the number of international cyber attacks. In 2013 there was a 91% increase in targeted attack campaigns and a 62% increase in security breaches. A number of countermeasures exist that can be effectively implemented in order to combat cyber-crime and increase security. Types of threats Malicious code Malicious code is a broad category that encompasses a number of threats to cyber-security. In essence it is any "hardware, software, or firmware that is intentionally included or inserted in a system for a harmful purpose." Commonly referred to as malware, it includes computer viruses, worms, Trojan horses, keyloggers, bots, rootkits, and any software security exploits.
Malicious code also includes spyware: deceptive programs, installed without authorization, "that monitor a consumer's activities without their consent." Spyware can be used to send users unwanted popup ads, to usurp the control of a user's Internet browser, or to monitor a user's online habits. However, spyware is usually installed along with something that the user actually wishes to install. The user consents to the installation, but does not consent to the monitoring tactics of the spyware. The consent for spyware is normally found in the end-user license agreement. Network attacks A network attack is considered to be any action taken to disrupt, deny, degrade, or destroy information residing on a computer and computer networks. An attack can take four forms: fabrication, interception, interruption, and modification. A fabrication is the "creation of some deception in order to deceive some unsuspecting user"; an interception is the "process of intruding into some transmission and redirecting it for some unauthorized use"; an interruption is the "break in a communication channel, which inhibits the transmission of data"; and a modification is "the alteration of the data contained in the transmissions." Attacks can be classified as either active or passive. Active attacks involve modification of the transmission or attempts to gain unauthorized access to a system, while passive attacks involve monitoring transmissions. Either form can be used to obtain information about a user, which can later be used to steal that user's identity. Common forms of network attacks include Denial of Service (DoS) and Distributed Denial of Service (DDoS), man-in-the-middle attacks, packet sniffing, TCP SYN floods, ICMP floods, IP spoofing, and even simple web defacement. Network abuse Network abuses are activities which violate a network's acceptable use policy and are generally considered fraudulent activity that is committed with the aid of a computer.
SPAM is one of the most common forms of network abuse, where an individual will email a list of users, usually with unsolicited advertisements or phishing attacks attempting to use social engineering to acquire sensitive information, such as usernames, passwords, and other information useful in identity theft, by posing as a trustworthy individual. Social engineering Social engineering is the act of manipulating people into performing actions or divulging confidential information, rather than by breaking in or using technical cracking techniques. This method of deception is commonly used by individuals attempting to break into computer systems, by posing as an authoritative or trusted party and capturing access information from the naive target. Email phishing is a common example of social engineering's application, but it is not limited to this single type of attack. Technical There are a variety of different technical countermeasures that can be deployed to thwart cybercriminals and harden systems against attack. Firewalls, network or host based, are considered the first line of defense in securing a computer network by setting Access Control Lists (ACLs) that determine which services and traffic can pass through the checkpoint. Antivirus software can be used to prevent propagation of malicious code. Most computer viruses have similar characteristics which allow for signature based detection. Heuristics such as file analysis and file emulation are also used to identify and remove malicious programs. Virus definitions should be regularly updated, in addition to applying operating system hotfixes, service packs, and patches, to keep computers on a network secure. Cryptography techniques can be employed to encrypt information using an algorithm commonly called a cipher to mask information in storage or transit.
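The idea of a cipher masking information can be shown with a deliberately simplified example. The construction below (a keystream derived by hashing the key with a counter, XORed against the data) is a toy for illustration only and is not a substitute for a vetted implementation of a standard cipher such as AES.

```python
import hashlib

def keystream(key, length):
    """Derive a pseudo-random keystream by hashing key||counter.
    A toy construction for illustration; use a vetted AES library in practice."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key, data):
    # XOR with the keystream; the same operation both encrypts and decrypts.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ct = xor_cipher(b"secret key", b"attack at dawn")
print(xor_cipher(b"secret key", ct))  # b'attack at dawn'
```

Applying the function a second time with the same key recovers the plaintext, since XOR is its own inverse; real stream and block ciphers add crucial details such as nonces and authentication.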
Tunneling, for example, will take a payload protocol such as Internet Protocol (IP) and encapsulate it in an encrypted delivery protocol over a Virtual Private Network (VPN), Secure Sockets Layer (SSL), Transport Layer Security (TLS), Layer 2 Tunneling Protocol (L2TP), Point-to-Point Tunneling Protocol (PPTP), or Internet Protocol Security (IPSec) to ensure data security during transmission. Encryption can also be employed on the file level using encryption protocols like Data Encryption Standard (DES), Triple DES, or Advanced Encryption Standard (AES) to ensure security of information in storage. Additionally, network vulnerability testing performed by technicians or automated programs can be run at full scale or targeted specifically at the devices, systems, and passwords used on a network to assess their degree of security. Furthermore, network monitoring tools can be used to detect intrusions or suspicious traffic on both large and small networks. Physical deterrents such as locks, card access keys, or biometric devices can be used to prevent criminals from gaining physical access to a machine on a network. Strong password protection, both for access to a computer system and the computer's BIOS, is also an effective countermeasure against cyber-criminals with physical access to a machine. Another deterrent is to use a bootable bastion host that executes a web browser in a known clean and secure operating environment. The host is devoid of any known malware, data is never stored on the device, and the media cannot be overwritten. The kernel and programs are guaranteed to be clean at each boot. Some solutions have been used to create secure hardware browsers to protect users while accessing online banking.
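The strong password protection mentioned above depends on how passwords are stored as well as on how they are chosen. Since adaptive password hashes such as bcrypt are not in the Python standard library, this sketch uses the standard library's PBKDF2 as a stand-in to show the underlying idea of salting plus a tunable work factor; the iteration count shown is purely illustrative.

```python
import hashlib, hmac, os

def hash_password(password, iterations=100_000):
    # The iteration count plays the role of bcrypt's work factor:
    # it can be raised over time as attackers' hardware gets faster.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify(password, salt, iterations, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, n, digest = hash_password("hunter2")
print(verify("hunter2", salt, n, digest))  # True
print(verify("wrong",   salt, n, digest))  # False
```

The per-user random salt defeats precomputed lookup tables, and the stored iteration count lets old hashes remain verifiable after the default cost is raised.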
Counter-Terror Social Network Analysis and Intent Recognition The Counter-Terror Social Network Analysis and Intent Recognition (CT-SNAIR) project uses the Terrorist Action Description Language (TADL) to model and simulate terrorist networks and attacks. It also models links identified in communication patterns compiled from multimedia data, and terrorists' activity patterns are compiled from databases of past terrorist threats. Unlike other proposed methods, CT-SNAIR constantly interacts with the user, who uses the system both to investigate and to refine hypotheses. Multimedia data, such as voice, text, and network session data, is compiled and processed. Through this compilation and processing, names, entities, relationships, and individual events are extracted from the multimedia data. This information is then used to perform a social network analysis on the criminal network, through which the user can detect and track threats in the network. The social network analysis directly influences and is influenced by the intent recognition process, in which the user can recognize and detect threats. In the CT-SNAIR process, data and transactions from prior attacks, or forensic scenarios, are compiled to form a sequential list of transactions for a given terrorism scenario. The CT-SNAIR process also includes generating data from hypothetical scenarios. Since they are imagined and computer-generated, hypothetical scenarios do not have any transaction data representing terrorism scenarios. Different types of transactions combine to represent the types of relationships between individuals. The final product, or target social network, is a weighted multiplex graph in which the types of edges (links) are defined by the types of transactions within the social network.
The weights within these graphs are determined by the content-extraction algorithm, in which each type of link is thought of as a separate graph and "is fed into social network algorithms in part or as a whole." Links between two individuals can be determined by the existence of (or lack of) the two people being mentioned within the same sentence in the compiled multimedia data or in relation to the same group or event. The final component in the CT-SNAIR process is Intent Recognition (IR). The goal of this component is to indicate to an analyst the threats that a transaction stream might contain. Intent Recognition breaks down into three subcategories: detection of "known or hypothetical target scenarios," prioritization of these target scenarios, and interpretation "of the resulting detection." Economic The optimal level of cyber-security depends largely on the incentives facing providers and the incentives facing perpetrators. Providers make their decision based on the economic payoff and cost of increased security, whereas perpetrators' decisions are based on the economic gain and cost of cyber-crime. Potential prisoner's dilemma, public goods, and negative externalities become sources of cyber-security market failure when private returns to security are less than the social returns. Therefore, the higher the ratio of public to private benefit, the stronger the case for enacting new public policies to realign incentives for actors to fight cyber-crime with increased investment in cyber-security. Legal In the United States a number of legal statutes define and detail the conditions for prosecution of a cyber-crime and are used not only as a legal counter-measure, but also function as a behavioral check against the commission of a cyber-crime. Many of the provisions outlined in these acts overlap with each other. The Computer Fraud and Abuse Act The Computer Fraud and Abuse Act passed in 1986 is one of the broadest statutes in the US used to combat cyber-crime.
It has been amended a number of times, most recently by the USA PATRIOT Act and the Identity Theft Enforcement and Restitution Act of 2008. Within it is the definition of a "protected computer," used throughout the US legal system to further define computer espionage, computer trespassing, and taking of government, financial, or commerce information, trespassing in a government computer, committing fraud with a protected computer, damaging a protected computer, trafficking in passwords, threatening to damage a protected computer, conspiracy to commit a cyber-crime, and the penalties for violation. The 2002 update on the Computer Fraud and Abuse Act expands the act to include the protection of "information from any protected computer if the conduct involved an interstate or foreign communication." The Digital Millennium Copyright Act The Digital Millennium Copyright Act passed in 1998 is a United States copyright law that criminalizes the production and dissemination of technology, devices, or services intended to circumvent Digital Rights Management (DRM), and the circumvention of access controls. The Electronic Communications Privacy Act The Electronic Communications Privacy Act of 1986 extends the government restrictions on wiretaps beyond telephone calls. This law is generally thought of in terms of what law enforcement may do to intercept communications, but it also pertains to how an organization may draft its acceptable use policies and monitor communications. The Stored Communications Act The Stored Communications Act passed in 1986 is focused on protecting the confidentiality, integrity and availability of electronic communications that are currently in some form of electronic storage. This law was drafted with the purpose of protecting the privacy of e-mails and other electronic communications.
Identity Theft and Aggravated Identity Theft The Identity Theft and Aggravated Identity Theft statute is a subsection of the Identification and Authentication Fraud statute. It defines the conditions under which an individual has violated identity theft laws. Identity Theft and Assumption Deterrence Act Identity theft was declared unlawful by the federal Identity Theft and Assumption Deterrence Act of 1998 (ITADA). The act prohibits knowingly transferring or using, without lawful authority, "a means of identification of another person with the intent to commit, or to aid or abet, any unlawful activity that constitutes a violation of federal law, or that constitutes a felony under any applicable State or local law." Penalties under the ITADA include up to 15 years in prison and a maximum fine of $250,000, and directly reflect the amount of damage caused by the criminal's actions and their amount of planning and intent. Gramm-Leach-Bliley Act The Gramm-Leach-Bliley Act (GLBA) requires that financial institutions and credit agencies increase the security of systems that contain their customers' personal information. It mandates that all financial institutions "design, implement, and maintain safeguards to protect customer information." Internet Spyware Prevention Act The Internet Spyware Prevention Act (I-SPY) prohibits the implementation and use of spyware and adware. I-SPY also includes a sentence for "intentionally accessing a computer with the intent to install unwanted software." Access Device Fraud Statutes 18 U.S.C. § 1029 outlines 10 different offenses concerning access device fraud.
These offenses include:
Knowingly trafficking in a counterfeit access device
Trafficking in a counterfeit access device with the intent to commit fraud
Possessing more than 15 devices with intent to defraud
Production, possession, or trafficking in equipment to create access devices if the intent is to defraud
Receiving payment in excess of $1,000 in a one-year period from an individual found using illegal access devices
Solicitation of another individual with offers to sell illegal access devices
Distributing or possessing an altered telecommunication device for the purpose of obtaining unauthorized telecommunication services
Production, possession, or trafficking in a scanning receiver
Using or possessing a telecommunication device that has been knowingly altered to provide unauthorized access to a telecommunication service
Using a credit card which was illegally obtained and used to purchase goods and services
CAN-SPAM Act The CAN-SPAM Act of 2003 establishes the United States' first national standards for the sending of commercial e-mail and requires the Federal Trade Commission (FTC) to enforce its provisions. Wire Fraud Statute The wire fraud statute outlined in 18 U.S.C. § 1343 applies to crimes committed over different types of electronic media such as telephone and network communications. Communications Interference Statutes The communications interference statute listed in 18 U.S.C.
§ 1362 defines a number of acts under which an individual can be charged with a telecommunications-related crime, including:
Maliciously destroying property, such as cable, systems, or other means of communication, that is operated or controlled by the United States
Maliciously destroying property, such as cable, systems, or other means of communication, that is operated or controlled by the United States military
Willfully interfering in the working or use of a communications line
Willfully obstructing or delaying communication transmission over a communications line
Conspiracy to commit any of the above listed acts
Behavioral Behavioral countermeasures can also be an effective tool in combating cyber-crime. Public awareness campaigns can educate the public on the various threats of cyber-crime and the many methods used to combat it. It is also here that businesses can make use of IT policies to help educate and train workers on the importance of, and the practices used to ensure, electronic security, such as strong password use, regular patching of security exploits, and recognizing signs of phishing attacks and malicious code. California, Virginia, and Ohio have implemented services for victims of identity theft, though not well publicized. California has a registry for victims with a confirmed identity theft. Once registered, people can request law enforcement officers call a number staffed 24 hours, year round, to "verify they are telling the truth about their innocence." In Virginia and Ohio, victims of identity theft are issued a special passport to prove their innocence. However, these passports run the same risk as every other form of identification in that they can eventually be duplicated. Financial agencies such as banks and credit bureaus are starting to require verification of data that identity thieves cannot easily obtain. This data includes users' past addresses and income tax information.
In the near future, it will also include data obtained through the use of biometrics. Biometrics is the use "of automated methods for uniquely recognizing humans based upon … intrinsic physical or behavioral traits." These methods include iris scans, voice identification, and fingerprint authentication. The First Financial Credit Union has already implemented biometrics in the form of fingerprint authentication in its automated teller machines to combat identity theft. With a similar purpose, Great Britain has announced plans to incorporate computer chips with biometric data into its passports. However, the greatest problem with the implementation of biometrics is the possibility of privacy invasion. US agents Government Federal Trade Commission (FTC) Federal Bureau of Investigation (FBI) Bureau of Alcohol Tobacco and Firearms (ATF) Federal Communications Commission (FCC) Private organizations Antivirus/security firms Internet service providers (ISPs) Messaging Anti-Abuse Working Group (MAAWG) IT consultants Computer emergency response teams Public–private partnerships CERT Coordination Center, Carnegie Mellon University United States Computer Emergency Readiness Team (US-CERT) See also Government resources References External links Carnegie Mellon University CSIRT Empirical Study of Email Security Threats and Countermeasures
31471299
https://en.wikipedia.org/wiki/StealthNet
StealthNet
StealthNet is an anonymous P2P file-sharing application based on the original RShare client, with a number of enhancements. It was first named 'RShare CE' (RShare Community Edition). It uses the same network and protocols as RShare. In 2011 a fork named DarkNode was released, but one year later its website was taken down along with the source code. History Development stopped in March 2011, with version 0.8.7.9, with no official explanation on the web site. In 2012, the developers had an incident with their Apache Subversion repository, and part of the StealthNet source history was lost: versions 0.8.7.5 (February 2010) to 0.8.7.9 (October 2010). However, the source code is still available as files. Features Some of the features of StealthNet: Mix network (used for the pseudonymous routing process) Easy to use (same principles as eMule) Multi-source download: 'swarming' (segmented file transfer) Resumption of interrupted downloads Ability to filter the file types searched (allowing searches among only video/archive/music/... files) SNCollection: a file type containing a list of files shared on StealthNet, similar to "eMule collection" files and torrent files Point-to-point traffic encryption with the AES standard (Advanced Encryption Standard, 256 bits) End-to-end traffic encryption with the RSA standard (1024 bits) Strong file hashes based on the SHA-512 algorithm Anti-flooding measures Text-mode client available for operating systems with Mono support, such as Linux, OS X and others Drawbacks No support for UPnP (Universal Plug and Play); the user must open a port (e.g. 6097) on their router. Anonymity is unproven. The source code contains no documentation at all. The encryption level (AES 256 bits and RSA 1024 bits in v0.8.7.9), considered strong in the 2000s, has been regarded as only moderate since the 2010s. 
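StealthNet's "strong file hashes" rely on SHA-512, which identifies a shared file by its content rather than its name. A minimal Python sketch of how such a content hash can be computed (the chunked reading is an implementation choice for large shared files, not taken from the StealthNet source):

```python
import hashlib

def sha512_file(path, chunk_size=1 << 20):
    """Compute a SHA-512 content hash, reading the file in 1 MiB chunks
    so that large shared files are never loaded into memory at once."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Two files with identical bytes produce the same 128-hex-character digest, so peers can verify a multi-source ('swarming') download against a single identifier regardless of which peer supplied each segment.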
See also Anonymous P2P I2P References External links Official web site StealthNet upgrade (2017) Anonymous file sharing networks Free file sharing software File sharing software for Linux Windows file sharing software Free software programmed in C Sharp
31483635
https://en.wikipedia.org/wiki/Nokia%20E6
Nokia E6
The Nokia E6-00 is a smartphone running the Symbian^3 operating system. It supersedes the Nokia E72 as the new Symbian business mobility solution from Nokia following its announcement on 12 April 2011 (the same day as the Nokia X7-00). It shipped with the new "Symbian Anna" version of Symbian^3, and originally retailed for 340 euros before taxes. The smartphone is notable for its backlit 4-row QWERTY keyboard and touch screen input methods, for its long battery life (talk time: 7.5 to 14.8 h; standby: 28 to 31 days), for its out-of-the-box access to Microsoft Exchange ActiveSync, Microsoft Communicator Mobile and Microsoft SharePoint, and for the high pixel density of its VGA display (326 ppi). Like its predecessors (Nokia E71/E72), the Nokia E6-00 integrates a stainless steel and glass design. The removable back cover, the raised panel for the back camera, dual LED flash and loudspeaker, and the contour of the front are made of stainless steel. The front of the phone (except for the QWERTY keyboard, shortcut buttons and Navi key) is covered with Corning Gorilla Glass. Its casing has three color options (black, silver and white). The E6 would also be the last Symbian-based device with a QWERTY keyboard, as the later QWERTY devices would be Series 40 models from the Asha line. The Nokia Asha 302 from 2012 bears strong design similarities to the E6. In October 2012, Vertu released the Constellation Quest Blue, based on the E6. History and availability The predecessor of the E6-00 in the Eseries, consisting of business-oriented smartphones, was the Nokia E72, which shipped in November 2009. As with the E71, the E72 received mostly praise from the press. It is worth noting that Nokia released the E7, a landscape QWERTY slider smartphone in the Eseries based on Symbian^3, which shipped in February 2011. The first hints that the Nokia E6-00 was being developed came, in early January 2011, from a Nokia XML file and pictures from a Picasa album taken with the device. 
Various information could be retrieved from the XML, such as the 8 MP camera, VGA display and QWERTY keyboard. The device was not officially announced at the Mobile World Congress held in Barcelona (14–17 February 2011). Various pictures and videos of the Nokia E6-00 leaked during the months of February and March. It was officially announced at a special event, named Discover Symbian, on 12 April 2011 along with the Nokia X7 and the latest update of the Symbian software. It was expected to be released in Q2 2011 in Europe at a price of €340 (before taxes and subsidies) and in Q3 2011 in North America. In May 2011, the Nokia E6 became available at the Nokia Deutschland online shop for preorder at the price of €429. Hardware Processors The Nokia E6-00 is powered by the same processor found in other contemporary Symbian devices such as the Nokia N8, E7 and C7: an ARM11 clocked at 680 MHz with a Broadcom BCM2727 GPU supporting OpenVG 1.1 and OpenGL ES 2.0. Screen and input The Nokia E6-00 has a 62.5 mm (diagonally) capacitive touchscreen with a resolution of 640 × 480 pixels (VGA, 326 ppi). According to Nokia, it is capable of displaying up to 16.7 million colours. This pixel density was the highest among the smartphones launched at the time, until the launch of the Nokia Lumia 920. The screen brightness of the E6-00 is "more than double the brightness of the E72" when measured in candelas. There is a proximity sensor which deactivates the display and touchscreen when the device is brought near the face during a call. The Nokia E6 also understands the PictBridge protocol, so it is possible to print directly from the phone to a printer without using a computer to handle the data transfer in between. The optical Navi key of the E72 has been replaced by a regular Navi key on the E6-00. It also has an ambient light sensor that adjusts the display brightness and activates the backlight of the 4-row keyboard. 
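The quoted pixel density follows directly from the resolution and diagonal given above; a quick check of the arithmetic (the millimetre-to-inch conversion is standard, the 62.5 mm diagonal is Nokia's figure):

```python
import math

width_px, height_px = 640, 480        # VGA resolution
diagonal_mm = 62.5                    # screen diagonal quoted by Nokia

diagonal_px = math.hypot(width_px, height_px)   # 800.0 px for a 4:3 VGA panel
diagonal_in = diagonal_mm / 25.4                # ~2.46 in
ppi = diagonal_px / diagonal_in                 # pixels per inch
print(round(ppi))                               # prints 325
```

This works out to about 325 ppi; the commonly cited 326 ppi figure comes from rounding the diagonal slightly differently.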
A 3-axis accelerometer is present but will not switch the display to portrait mode when the device is turned sideways. It will, however, take pictures in portrait and show them the right way up in the photo gallery. The device has an autonomous GPS with optional A-GPS functionality, Wi-Fi network positioning and Cell-ID, and comes pre-loaded with the Ovi Maps application. Ovi Maps for Symbian^3 provides free lifetime, turn-by-turn, voice-guided car and pedestrian navigation. If the map is already downloaded to the device, Ovi Maps does not require an active data connection and can work as a stand-alone GPS navigator. For other services, for example Google Maps, a data connection is required. The 8-megapixel (3264 × 2448 px) back camera has an extended depth of field feature (no autofocus), dual LED flash, 2X digital zoom (3X in video mode) and offers high definition (720p, 16:9 aspect ratio) video recording at 25 frame/s, or 4:3 aspect ratio recording. The 0.3-megapixel front camera is capable of video recording (176 × 144 px at 15 frame/s) for video calling. The Nokia E6 has a loudspeaker and two microphones. The microphone at the front of the device picks up the user's voice; another microphone at the back of the device picks up environmental noise for active noise cancellation, which makes the user's voice sound clearer in noisy environments to the person at the other end of the line. Noise cancellation is not available when using the loudspeaker or a headset. Buttons On the front of the device, there are a QWERTY keyboard, call creation and call termination keys, a home (menu) key, calendar, contact and email shortcut keys with short and long press features, and a 5-way scrolling Navi key. On the top there is the power/lock button; on the right-hand side there is the lock/unlock slider, which also turns on the torch (the dual LED flash of the camera). 
Above that button, there are three keys: (1) volume down, (2) volume up and (3) a middle key for activating the voice commands (long press) and the voice recorder (short press). When the device is locked, pressing the Navi key will also bring up a menu which allows the user to unlock the Nokia E6-00 from the touch screen. The QWERTY keyboard comes in 24 language versions, including Arabic, Thai, Russian and Chinese. Audio and output The E6-00 has a microphone and a loudspeaker located on the back of the device. There is a 3.5 mm four-contact audio jack which simultaneously provides stereo audio output and either microphone input or video output. PAL and NTSC TV out is possible using a Nokia Video Connectivity Cable (not included upon purchase) or a standard 3.5 mm audio jack to RCA cable. There is a High-Speed USB 2.0 Micro-B connector provided for data synchronization, mass storage mode (client) and battery charging. The Nokia E6-00 supports USB On-The-Go 1.3 (the ability to act as a USB host) using a Nokia Adapter Cable for USB OTG CA-157 (not included upon purchase). The built-in Bluetooth v3 supports wireless earpieces and headphones through the HSP profile. The Nokia E6-00 is also capable of stereo audio output with the A2DP profile. Built-in car hands-free kits are also supported with the HFP profile. File transfer is supported (FTP) along with the OPP profile for sending/receiving objects. It is possible to remote control the device with the AVRCP profile. The DUN profile permits access to the Internet from a laptop by dialing up wirelessly through the mobile phone (tethering). Other profiles are also supported (BIP, GAP, GAVDP, GOEP, HSP, PBAP, SAP, SDP and SPP). The device has an 87.5-108 MHz (76-90 MHz in Japan) FM receiver with RDS support. It has Wi-Fi b/g/n connectivity (single band) with support for WEP, WPA and WPA2 (AES/TKIP) security protocols. 
Battery and SIM The BP-4L 1500 mAh Li-Ion battery performance, as quoted by Nokia, is up to 14.8 h of talk time, 681 h of standby, 9 h of video playback, 4.7 h of video recording and up to 75 h of music playback. Storage The Nokia E6-00 has 8 GB of internal storage, which can be expanded with a microSDHC card up to 32 GB in size. There is 1 GB of ROM, of which 350 MB is available to the user to install applications. Criticism and issues Notification LED visibility Some new users of the Nokia E6 have experienced problems with the notification LED, which has been placed under the D-pad navigation key, as in previous models (E71, E72). However, it is apparent that Nokia has not allowed sufficient space between the D-pad key and the front face for the light to be visible. Nokia have acknowledged the problem but have not indicated whether a fix is possible in software by adjusting the brightness of the component. One workaround, devised by Aniket Patil (aniketroxx), redirects the notification light to the red mute key; it was used by many people as an alternative to a D-pad fix, and it also lets users see the light without opening the E6's cover. A revised Nokia E6 with a hardware change to resolve the notification light issue started shipping on 22 October 2011. Vibration alert not strong enough Many users of the Nokia E6 reported that the vibration alert is not strong enough. Software Symbian^3 "Anna" Along with the Nokia X7, the E6-00 shipped with the updated Symbian^3 software user experience. This update, nicknamed "Anna", offers, amongst others: New icons. Improved text input with a split screen portrait QWERTY when entering text into web pages and applications. A new web browser (v7.3) with an improved user interface (URL entry bar, always visible 'Go Back' and extended toolbar buttons), search-integrated address field, and faster navigation and page loading. 
An updated Ovi Maps (v3.06) application with search for public transport routes, the ability to download full country maps via WLAN or Nokia Ovi Suite, and check-in to Facebook, Twitter and Foursquare. Hardware accelerated encryption. Increased file size limits for downloading over WLAN with Ovi Store (v2.06). Improved Social (v1.3) with status updates in the contact card and the ability to retweet and view follower lists in Twitter. Flash Lite 4, which supports some Flash Player 10.1 content such as YouTube. Java Runtime 2.2, Qt Mobility 1.1 and Qt 4.7. Preinstalled applications on the Nokia E6-00 include JoikuSpot Premium (mobile WiFi hotspot), World Traveler, F-Secure Mobile Security and QuickOffice. Nokia Belle The stock OS version, Symbian Anna, was specially adapted to the E6. The February 2012 Nokia Belle update was heavily criticised for its failure to meet special demands of the E6 which had been met by the original stock firmware. In August 2012, Nokia Belle Refresh was made available. Compatibility Due to its VGA (640 × 480) resolution, compared to the nHD (640 × 360) of the other Symbian^3 devices, some applications may be incompatible with the E6-00 if they are designed for nHD displays. For example, Nokia guidelines stipulate that the user interface elements of the touch screen (accept buttons, virtual numpad, etc.) should be at least 7 mm × 7 mm. The high pixel density of the Nokia E6-00 means that the elements need to be proportionately larger in pixels. Compared to other recent Symbian^3 devices, the Nokia E6-00 has two extra homescreens, bringing the total to 5 homescreens. This is to give users the same opportunity for customization as other devices with smaller displays accommodating 3 widgets per homescreen. The Nokia E6 has WiFi coverage of the full spectrum of channels 1–13 in the crowded 2.4 GHz waveband (compared to just channels 1–11 on the E63, for example). It does not support the newer 5 GHz WiFi frequencies. 
See also List of Nokia products List of Symbian devices Comparison of smartphones Comparison of Symbian devices Charging LED workaround for notifications References External links Nokia E6 Screenshots Nokia E6-00 Online User Guide Nokia official website Active noise control mobile phones Mobile phones with an integrated hardware keyboard Nokia ESeries
31511915
https://en.wikipedia.org/wiki/Hewlett%20Packard%20Enterprise%20Networking
Hewlett Packard Enterprise Networking
Hewlett Packard Enterprise and its predecessor entities have a long history of developing and selling networking products. Today it offers campus and small business networking products through its wholly owned company Aruba Networks, which was acquired in 2015. Prior to this, HP Networking was the entity within HP offering networking products. Refer to Aruba Networks for the latest networking initiatives of Hewlett Packard Enterprise. History HP has been in the networking and switching business for decades. HP's networking division was previously known as HP ProCurve. The HP division that became the HP ProCurve division began in Roseville, CA, in 1979. Originally it was part of HP's Data Systems Division (DSD) and known as DSD-Roseville. Later, it was called the Roseville Networks Division (RND), then the Workgroup Networks Division (WND), before becoming the ProCurve Networking Business (PNB). The trademark filing date for the ProCurve name was February 25, 1998. On August 11, 2008, HP announced the acquisition of Colubris Networks, a manufacturer of wireless networking products, including 802.11n equipment. The acquisition was completed on October 1, 2008. On November 11, 2009, HP announced its intent to acquire 3Com Corporation for $2.7B. In April 2010, HP completed its acquisition. In April 2010, following HP's acquisition of 3Com Corporation, HP combined the ProCurve and 3Com entities as HP Networking. HP ProCurve. Based in Roseville, CA, USA. Developer of networking switches and wireless solutions. Global sales. The acquired 3Com Corporation. Based in Marlborough, MA, USA. Global sales outside of China. The 3Com division H3C Technologies Co., Ltd. Based in Hangzhou, China. Developer of networking switches, routers, telephony and wireless solutions. Sales within China. The 3Com division TippingPoint. Based in Austin, Texas. Developer of networking security solutions, particularly intrusion prevention systems. Global sales. 
On May 19, 2015, HP completed the acquisition of Aruba Networks and subsequently moved all its networking business into the Aruba Networking entity. Past networking initiatives and technologies Network architecture Network architecture encompasses the entire framework of an organization's computer network, including hardware components that are used for communication, network layout and topologies, physical and wireless connections, and cabling and device types, as well as software rules and protocols. The core and aggregation layers of a traditional three-tier, hierarchical model provide built-in redundancy, but this design can be inefficient for virtualized environments. The flat layout of the HP FlexNetwork Architecture is designed to provide more agility to the network and to support functionality such as virtualization, convergence, and automation. HP FlexNetwork Architecture unites an organization's networks in the data center, campus, and branch offices through a cost-efficient, consistent architecture, according to published reports. Four product groups make up the architecture: FlexFabric, for data centers with physical and virtual environments composed of converged computing, storage, and networking resources; FlexCampus, for converged wired and wireless networks; FlexBranch, for providing branch offices with networking and security; and Flex Management, which provides one unified management interface for the entire FlexNetwork and includes the HP Intelligent Management Center (IMC). The HP Intelligent Resilient Framework (IRF) software virtualization technology is designed to provide rapid recovery from failure to the FlexNetwork, and to improve vMotion performance in VMware environments. Software-defined networking The focus by enterprise data center networking technologies on virtualization has caused organizations' networks to become more automated and simplified. 
Several factors are driving these changes: the recognition by IT that network operations can be aligned with an organization's business goals; the request from an organization's leaders for the data center to respond rapidly to variations in demand; changes in application network traffic patterns; and changes in size and density of the data center, due to some services being offloaded to cloud computing resources, greater compute density, and an increased use of virtual technology. In turn, these changes have led to an increased demand for software-defined networking (SDN) technology from organizations. In 2007 HP collaborated with Stanford University to develop Ethane, an early version of the open-source standard OpenFlow upon which SDN is based. HP is a founding member of the nonprofit Open Networking Foundation. Organized in March 2011, the foundation provides support for SDN and manages the OpenFlow standard. HP is also a founding member of the Open Daylight Project, which was announced on April 8, 2013, by the Linux Foundation as an industry-supported collaboration to further the open development of SDN and Network Functions Virtualization. Other founding members include Arista Networks, Big Switch Networks, Brocade, Cisco, Citrix, Ericsson, IBM, Juniper Networks, Microsoft, NEC, Nuage Networks, PLUMgrid, Red Hat, and VMware. Because OpenFlow is based on open standards, there is little risk of vendor lock-in when using OpenFlow-enabled products. It is claimed that networks using SDN will result in a more efficient and reliable data center infrastructure. An SDN controller serves as the core of an SDN network, managing flow controls based on protocols such as OpenFlow, and relaying communications between applications and network devices. In 2012, HP introduced the Virtual Application Networks (VAN) SDN OpenFlow controller, which is available in a software format. 
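OpenFlow itself is an open, published protocol: every message exchanged between a controller (such as the HP VAN SDN controller) and a switch begins with the same fixed 8-byte header. A sketch of that header per the OpenFlow 1.0 specification (generic protocol illustration, not HP controller code):

```python
import struct

OFP_VERSION_1_0 = 0x01   # wire protocol version for OpenFlow 1.0
OFPT_FLOW_MOD = 14       # message type used to install or modify a flow entry

def ofp_header(msg_type, payload=b"", xid=1):
    """Prefix a message body with the fixed OpenFlow header:
    version (1 byte), type (1 byte), total length (2 bytes), xid (4 bytes)."""
    length = 8 + len(payload)
    return struct.pack("!BBHI", OFP_VERSION_1_0, msg_type, length, xid) + payload

msg = ofp_header(OFPT_FLOW_MOD)  # an empty FLOW_MOD frame: header only
```

The xid (transaction id) lets the controller match switch replies to its requests; in a real FLOW_MOD, the flow-match fields and actions would follow the header.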
The HP SDN Manager application is intended to allow administrators to configure, monitor, and manage policies for SDN switches and controllers. In 2013 HP introduced its SDN Developer Kit and announced the SDN App Store, as well as integration with VMware NSX. The SDN App Store can be used to browse, search, purchase, and download SDN applications onto the HP VAN SDN Controller. HP certifies that applications offered in the SDN App Store will function reliably on HP network infrastructure. New HP network applications will be run on or integrated with the HP VAN SDN Controller and made available through the SDN App Store. In 2014, HP was producing more than 50 models of OpenFlow-enabled switches, including the FlexFabric 7900 switch series, which is optimized for SDN deployment. The FlexFabric 12900 switch series, also optimized for SDN deployment, was awarded SearchNetworking's Network Innovation Award in December 2013. The HP Virtual Cloud Networking (VCN) SDN Application is designed to provide virtual network overlays to the OpenStack technology open source cloud computing software, serving as a bridge between the HP Helion OpenStack cloud computing platform and the HP VAN SDN controller. According to published reports, the HP VCN SDN Application will help organizations transition from legacy networks to the cloud. Mobility/BYOD/WLAN Mobility/bring your own device (BYOD) refers to the practice of employees using their privately owned mobile devices such as laptops, tablet computers, and smartphones for work purposes. This practice allows employees to perform work functions from these devices both in the office and remotely, increasing worker satisfaction and boosting productivity, according to a study by IBM. Wired and wireless network technologies enable organizations to provide connectivity for these mobile devices throughout an office space. 
To provide wired and wireless access, legacy IT infrastructure requires two individual networks, each with its own management applications. HP provides a unified BYOD solution that includes an SDN security application, which provides real-time threat detection and simplifies operations, reducing costs by up to 38 percent, according to published reports. The HP IMC Smart Connect includes integrated mobile network–access control to manage enterprise access to mobile devices. To help administrators oversee the use of mobile devices on enterprise networks, HP has integrated into IMC support for the Citrix XenMobile and MobileIron mobile device management applications. In March 2014 HP renamed its SDN BYOD security application from Sentinel to Network Protector. HP Network Protector sits on top of the HP SDN VAN Controller. When employees use mobile devices to download files or stream rich media applications such as video, the network traffic can consume much of the bandwidth on the company’s core network. One way to reduce the impact of this increased traffic is to create a separate guest network for mobile devices that is completely segregated from the corporate network, and to set network access control (NAC) policies that limit access to certain sites. In addition to the VAN SDN controller, HP provides a number of SDN products that can help reduce the occurrence of a network bottleneck and enable mobile voice over Internet Protocol (VoIP), video, and other rich media apps. HP offers a pay-per-use cloud service model designed for small and mid-sized businesses and distributed offices. The HP Cloud Managed Network Wireless LAN solution is designed to enable organizations to manage wireless infrastructure without having to have an on-premises controller. The HP Cloud Managed Network Wireless LAN works only with HP 300 series Cloud-Managed access points, which provide cloud management capabilities for distributed organizations. 
In the event of a loss of connectivity to a cloud management service, the access points can keep a local wireless network up and running, allowing businesses to continue to operate. The HP 870 Unified Wired-WLAN Appliance is designed to help administrators bridge the gap between wired and wireless networks. According to published reports, the appliance simplifies management and access and supports up to 30,000 communication endpoints. The HP 850 Unified Wired-WLAN Appliance supports up to 10,000 endpoints. Network virtualization Network virtualization involves the process of combining available resources in a network by dividing available bandwidth into independent channels that can be dynamically assigned to a specified device or server. The hardware and software network functionality and resources can be merged into one software-based administrative entity. Network virtualization enables the automation of many network management tasks, and allows the network administrator to centrally manage files, images, programs, and folders from a single physical site. The technology is designed to make networks faster and more flexible, scalable, and reliable. Virtualization enables administrators to run multiple operating systems and multiple applications simultaneously on one server. It is the technology that underlies cloud computing. At HP Discover in June 2014, HP announced the Virtual Cloud Networking (VCN) SDN Application, which provides a multitenant network virtualization service for KVM and VMware ESX multi-hypervisor data center applications. Expected in fall 2014, the initial version is an enhanced OpenStack-technology module in HP Helion OpenStack. Centrally orchestrated virtual LAN (VLAN) or VXLAN-based virtual networks provide multitenant isolation. 
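The VXLAN-based virtual networks mentioned above work by wrapping each Layer 2 frame in an 8-byte VXLAN header (defined in RFC 7348) carrying a 24-bit network identifier, which is what allows roughly 16 million isolated tenant networks instead of the 4094 of classic VLANs. A sketch of that header (a generic RFC 7348 illustration, not HP VCN code):

```python
import struct

VXLAN_FLAG_I = 0x08  # 'I' bit: marks the VNI field as valid

def vxlan_header(vni):
    """Build the 8-byte VXLAN header from RFC 7348: flags byte, 3 reserved
    bytes, 24-bit VXLAN Network Identifier (VNI), 1 reserved byte."""
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI is a 24-bit field")
    return struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)

header = vxlan_header(5000)  # e.g. tenant network 5000
```

The header is prepended to the original Ethernet frame and the result is carried in a UDP datagram (well-known destination port 4789), so overlay traffic can cross any ordinary IP underlay between data centers.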
The HP VAN Resource Automation Manager is designed to increase the speed at which network services are rolled out by improving service deployment and provisioning accuracy, providing policy-driven resource management from access to core, according to published reports. The HP IRF software virtualization technology is intended to allow administrators to connect multiple devices through physical IRF ports, configure the devices, and then virtualize those devices into a distributed device. According to published reports, IRF simplifies switch configuration and management, providing horizontal scaling that reduces network hops and delivering support for technology such as Shortest Path Bridging (SPB) and transparent interconnection of lots of links (TRILL). Unified communications Unified communications (UC) products integrate multiple interactive, real-time enterprise communication methods, such as instant messaging, desktop sharing, and telephony with non-real-time communication services such as unified messaging (integrated voicemail, e-mail, SMS, and fax). UC products can enable administrators to control and manage these methods. The HP Network Optimizer SDN Application for Microsoft Lync functions as a unified communications-and-collaboration (UC&C) application that is designed to improve voice quality with Lync; in March 2014 it received a NetEvents Cloud Innovation award in the category of SDN Solution for the Enterprise. Networking professional services Companies engage networking professional services to help them plan how to build networks that support their business needs. HP Trusted Network Transformation is designed to help organizations that want to use private cloud. These networking professional services include workshops, consultation, network assessment, and architectural design services involving network virtualization and SDN. 
Product and technology highlights Hewlett Packard Enterprise through Aruba Networks sells HP Networking Products for businesses, schools, and government entities. Products Switches: HP offers a range of networking switch series for various locations and configurations: data center core, data center access, HP BladeSystem blade switch, campus LAN core/distribution, and campus/branch LAN access, as well as small business—smart web managed and small business—unmanaged. Network security: HP security modules and appliances include intrusion prevention systems, traditional firewalls, centralized module and appliance management, centralized network access management, and centralized threat management. HP also provides security research, delivered as actionable security intelligence. Network management: The HP Intelligent Management Center provides network monitoring and configuration management functions for a heterogeneous network. Wireless LAN (WLAN): HP provides several different series of unified wired-WLAN enterprise switches, 802.11ac and 802.11n enterprise access points, Unified Wired-WLAN Modules and stand-alone Wireless Controller series, WLAN client bridges, Unified Walljacks, wireless adaptors, WLAN security, and an RF planning tool. Routers: HP routers include series of products for branch locations (fixed port, modular, and virtual), campus (modular), and data center (modular). Transceivers and accessories: Transceivers include series for SFP 1G/100M, SFP+ 10G, X2 10G, XFP 10G, and GBIC. Cables include CAT 5e, CX4, direct attach copper, and fiber optic. Miscellaneous adapters, external power supplies, interconnect kits, and rack-mounting kits are also available. Technologies Bring your own device (BYOD): The HP BYOD solution provides a secure way for users to access an organization's network and applications from mobile devices such as laptops, tablets, and smartphones. 
It includes a number of switch series for unified wired and wireless networks, as well as these BYOD HP products: IMC User Access Manager, IMC Endpoint Admission Defense, Intelligent Management Center (IMC) Standard Software, IMC Smart Connect WLAN, IMC Smart Connect, and IMC Wireless Services Manager. Dynamic Virtual Private Network (DVPN): The HP DVPN solution interconnects data centers, campuses, and branch offices with standards-based IPsec VPN encryption. It includes HP 6600 router series, HP MSR series routers, and Intelligent Management Center. Software defined networking: HP software-defined networking products are designed to provide an end-to-end solution to automate the network from data center to campus and branch. The SDN Ecosystem includes the HP Network Protector SDN Application and the HP Network Optimizer Application for Microsoft Lync, as well as SDN-related products from third-party developers that integrate with HP SDN products. The HP SDN Dev Center provides resources for developers to produce applications for HP SDN products; it includes the HP SDN App Store and an SDN developer community forum. The SDN App Store can be used to browse, search, purchase, and download SDN applications onto the HP VAN SDN controller. HP SDN infrastructure technologies, including over 50 OpenFlow-enabled switches such as the HP FlexFabric 5930 Switch Series, support the Virtual Extensible LAN (VXLAN), an encapsulation protocol for running virtual networks across the network. The HP Virtual Cloud Networking (VCN) SDN Application provides a multitenant network virtualization service for KVM and VMware ESX multi-hypervisor data center applications and acts as a bridge between the HP Helion OpenStack cloud computing platform and the HP VAN SDN controller. Data Center Interconnect (DCI): Data center interconnect solutions are intended to extend the benefits of multi-tenant private clouds across multiple data centers. 
HP Unified Wired and Wireless Access: Products are intended to unify campus networks and include routers, SDN applications, switches, IEEE 802.11ac access points and unified wall jacks, and controllers. Intelligent Resilient Framework (IRF): The HP IRF switch platform virtualization technology is intended to simplify the design and operations of data center and campus Ethernet networks. Training and certification HP Networking Training covers product-, solution-, and sales-oriented topics. The HP ExpertOne networking training and certification program covers a range of networking curricula, from beginning-level courses to Master engineer classes, on three separate tracks: technical, sales, and partner-restricted. Fast-track programs are designed for participants to build upon current industry certifications from Cisco and other companies. In early 2014, HP initiated eight new sales certifications for its technology partners, designed to lower the cost and simplify the training process by narrowing the focus and making the certifications more specific, though no less deep. The new certifications are role-based. HP AllianceOne Program In January 2009 Hewlett Packard launched the ProCurve Open Network Ecosystem (ONE) Alliance, and a programmable module which hosts partner applications from IP telephony to network management. This multivendor alliance program's objective was to optimize the performance of enterprise-class applications with the infrastructure of the then ProCurve (now HP Networking). In April 2010 HP combined the ProCurve ONE alliance program with the programs from 3Com and Tipping Point, and programs from the rest of HP's Enterprise Business, to create a new program called HP AllianceOne. The HP Networking Specialization program of HP AllianceOne works with alliance partners who develop applications or services that capitalize on integrated network capabilities for business purposes. 
Support HP Networking provides a lifetime warranty on some of its products with next-business-day advance shipment. This was seen as a unique selling point until other networking vendors offered similar warranties on parts of their product lines. User community The HP Enterprise Business Community page provides resources for HP Networking users, including announcements, tips and tricks, community feedback and suggestions, and events. Forums include discussion boards and blogs. Open Networking Foundation HP Networking is a founding member of the Open Networking Foundation, started on March 23, 2011. Other founding companies include Google, Microsoft, Yahoo, Verizon, Deutsche Telekom and 17 other companies. The nonprofit organization is focused on providing support for software-defined networking. The initiative is meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centers and other networking areas. References External links HPE Networking Web Content Networking hardware Networking hardware companies Telecommunications equipment vendors Networking companies of the United States Hewlett-Packard Networking
31567052
https://en.wikipedia.org/wiki/International%20cybercrime
International cybercrime
There is no commonly agreed single definition of "cybercrime". It refers to illegal internet-mediated activities that often take place in global electronic networks. Cybercrime is "international" or "transnational" – there are 'no cyber-borders between countries'. International cybercrimes often challenge the effectiveness of domestic and international law and law enforcement. Because existing laws in many countries are not tailored to deal with cybercrime, criminals increasingly conduct crimes on the Internet in order to take advantage of less severe punishments or the difficulty of being traced. In developing and developed countries alike, governments and industries have gradually realized the colossal threat that cybercrime poses to economic and political security and to public interests. However, the complexity of the types and forms of cybercrime increases the difficulty of fighting back. In this sense, fighting cybercrime calls for international cooperation. Various organizations and governments have already made joint efforts in establishing global standards of legislation and law enforcement both on a regional and on an international scale. China–United States cooperation is one of the most striking recent developments, because the two are the top source countries of cybercrime. Information and communication technology (ICT) plays an important role in helping ensure interoperability and security based on global standards. General countermeasures have been adopted to crack down on cybercrime, such as legal measures to perfect legislation and technical measures to track down crimes over the network: Internet content control, the use of public or private proxies, computer forensics, encryption and plausible deniability, etc. Due to the heterogeneity of law enforcement and technical countermeasures of different countries, this article will mainly focus on legislative and regulatory initiatives of international cooperation. 
Typology Cybercrime is often associated with various forms of Internet attacks, such as hacking, Trojans, malware (keyloggers), botnets, denial-of-service (DoS), spoofing, phishing, and vishing. Though cybercrime encompasses a broad range of illegal activities, it can be generally divided into five categories: Intrusive Offenses Illegal Access: "Hacking" is one of the major forms of offense and refers to unlawful access to a computer system. Data Espionage: Offenders can intercept communications between users (such as e-mails) by targeting communication infrastructure such as fixed lines or wireless, and any Internet service (e.g., e-mail servers, chat or VoIP communications). Data Interference: Offenders can violate the integrity of data and interfere with them by deleting, suppressing, or altering data and restricting access to them. Content-related offenses Pornographic Material (Child Pornography): Sexually related content was among the first content to be commercially distributed over the Internet. Racism, Hate Speech, Glorification of Violence: Radical groups use mass communication systems such as the Internet to spread propaganda. Religious Offenses: A growing number of websites present material that is in some countries covered by provisions related to religious offenses, e.g., anti-religious written statements. Spam: Offenders send out bulk mail from unidentified sources; the mail often contains useless advertisements and pictures. Copyright and trademark-related offenses Common copyright offenses: cyber copyright infringement of software, music or films. Trademark violations: A well-known aspect of global trade. The most serious offenses include phishing and domain or name-related offenses, such as cybersquatting. Computer-related offenses Fraud: online auction fraud, advance fee fraud, credit card fraud, Internet banking fraud. Forgery: manipulation of digital documents. 
Identity theft: It refers to stealing private information including Social Security numbers (SSN), passport numbers, dates of birth, addresses, phone numbers, and passwords for non-financial and financial accounts. Combination offenses Cyberterrorism: Its main purposes are propaganda, information gathering, preparation of real-world attacks, publication of training material, communication, terrorist financing and attacks against critical infrastructure. Cyberwarfare: It describes the use of ICTs in conducting warfare using the Internet. Cyberlaundering: Conducting crime through the use of virtual currencies, online casinos, etc. Threats Similar to conventional crime, economic benefits, power, revenge, adventure, ideology and lust are the core driving forces of cybercrime. Major threats caused by those motivations can be categorized as follows: Economic security, reputation and social trust are severely challenged by cyber fraud, counterfeiting, impersonation and concealment of identity, extortion, electronic money laundering, copyright infringement and tax evasion. Public interest and national security are threatened by the dissemination of offensive material (e.g., pornographic, defamatory or inflammatory/intrusive communication), cyber stalking/harassment, child pornography and paedophilia, and electronic vandalism/terrorism. Privacy, domestic and even diplomatic information security are harmed by unauthorized access and misuse of ICT, denial of services, and illegal interception of communication. Domestic as well as international security is threatened by cybercrime due to its transnational character. No single country can handle this issue on its own; it is imperative to collaborate and defend against cybercrime on a global scale. 
International trends As more and more criminals become aware of the potentially large economic gains that can be achieved with cybercrime, they tend to switch from simple adventure and vandalism to more targeted attacks, especially on platforms where valuable information is highly concentrated, such as computers, mobile devices and the Cloud. There are several emerging international trends of cybercrime. Platform switch: Cybercrime is switching its battleground from Windows-system PCs to other platforms, including mobile phones, tablet computers, and VoIP, because a significant threshold in vulnerabilities has been reached: PC vendors are building better security into their products by providing faster updates, patches and user alerts to potential flaws. Besides, the number of mobile devices accessing the Internet, from smartphones to tablet PCs, was expected to surpass 1 billion by 2013, creating more opportunities for cybercrime. The massively successful banking Trojan Zeus is already being adapted for the mobile platform. Smishing, or SMS phishing, is another method cyber criminals use to exploit mobile devices: its malware, which users download after falling prey to a social engineering ploy, is designed to defeat the SMS-based two-factor authentication most banks use to confirm online funds transfers by customers. VoIP systems are being used to support vishing (telephone-based phishing) schemes, which are now growing in popularity. Social engineering scams: This refers to a non-technical kind of intrusion, in the form of e-mails or social networking chats, that relies heavily on human interaction and often involves fooling potential victims into downloading malware or leaking personal data. Social engineering is nevertheless highly effective for attacking well-protected computer systems by exploiting trust. Social networking has become an increasingly important tool for cyber criminals to recruit money mules to assist their money laundering operations around the globe. 
Spammers are not only spoofing social networking messages to persuade targets to click on links in emails; they are also taking advantage of users' trust in their social networking connections to attract new victims. Highly targeted: The newest twist in "hypertargeting" is malware that is meant to disrupt industrial systems, such as the Stuxnet network worm, which exploits zero-day vulnerabilities in Microsoft Windows. The first known copy of the worm was discovered in a plant in Germany. A subsequent variant led to a widespread global outbreak. Dissemination and use of malware: Malware generally takes the form of a virus, a worm, a Trojan horse, or spyware. In 2009, the majority of malware connected to host Web sites registered in the U.S.A. (51.4%), with China second (17.2%), and Spain third (15.7%). A primary means of malware dissemination is email. It is truly international in scope. Intellectual property theft (IP theft): It is estimated that 90% of the software, DVDs, and CDs sold in some countries are counterfeit, and that the total global trade in counterfeit goods is more than $600 billion a year. In the USA alone, IP theft costs businesses an estimated $250 billion annually, and 750,000 jobs. International legislative responses and cooperation International responses G8 The Group of Eight (G8) is made up of the heads of eight industrialized countries: the U.S., the United Kingdom, Russia, France, Italy, Japan, Germany, and Canada. In 1997, the G8 released a Ministers' Communiqué that includes an action plan and principles to combat cybercrime and protect data and systems from unauthorized impairment. The G8 also mandates that all law enforcement personnel must be trained and equipped to address cybercrime, and designates all member countries to have a point of contact on a 24 hours a day/7 days a week basis. United Nations In 1990 the UN General Assembly adopted a resolution dealing with computer crime legislation. 
In 2000 the UN GA adopted a resolution on combating the criminal misuse of information technology. In 2002 the UN GA adopted a second resolution on the criminal misuse of information technology. ITU The International Telecommunication Union (ITU), as a specialized agency within the United Nations, plays a leading role in the standardization and development of telecommunications and cybersecurity issues. The ITU was the lead agency of the World Summit on the Information Society (WSIS). In 2003, the Geneva Declaration of Principles and the Geneva Plan of Action were released, which highlight the importance of measures in the fight against cybercrime. In 2005, the Tunis Commitment and the Tunis Agenda for the Information Society were adopted. Council of Europe The Council of Europe is an international organisation focusing on the development of human rights and democracy in its 47 European member states. In 2001, the Convention on Cybercrime, the first international convention aimed at Internet criminal behaviors, was co-drafted by the Council of Europe together with the USA, Canada, and Japan, and signed by its 46 member states. However, only 25 countries later ratified it. It aims at providing the basis of an effective legal framework for fighting cybercrime, through the harmonization of the qualification of cybercriminal offenses, provisions empowering law enforcement, and the enabling of international cooperation. Regional responses APEC Asia-Pacific Economic Cooperation (APEC) is an international forum that seeks to promote open trade and practical economic cooperation in the Asia-Pacific Region. In 2002, APEC issued its Cybersecurity Strategy, which is included in the Shanghai Declaration. The strategy outlined six areas for co-operation among member economies, including legal developments, information sharing and co-operation, security and technical guidelines, public awareness, and training and education. 
OECD The Organisation for Economic Co-operation and Development (OECD) is an international economic organisation of 34 countries founded in 1961 to stimulate economic progress and world trade. In 1990, the Information, Computer and Communications Policy (ICCP) Committee created an Expert Group to develop a set of guidelines for information security, which was completed in 1992 and then adopted by the OECD Council. In 2002, the OECD announced the completion of "Guidelines for the Security of Information Systems and Networks: Towards a Culture of Security". European Union In 2001, the European Commission published a communication titled "Creating a Safer Information Society by Improving the Security of Information Infrastructures and Combating Computer-related Crime". In 2002, the EU presented a proposal for a "Framework Decision on Attacks against Information Systems". The Framework Decision takes note of the Convention on Cybercrime, but concentrates on the harmonisation of substantive criminal law provisions that are designed to protect infrastructure elements. Commonwealth In 2002, the Commonwealth of Nations presented a model law on cybercrime that provides a legal framework to harmonise legislation within the Commonwealth and enable international cooperation. The model law was intentionally drafted in accordance with the Convention on Cybercrime. ECOWAS The Economic Community of West African States (ECOWAS) is a regional group of West African countries founded in 1975; it has fifteen member states. In 2009, ECOWAS adopted the Directive on Fighting Cybercrime in ECOWAS, which provides a legal framework for the member states and includes substantive criminal law as well as procedural law. GCC In 2007, the Arab League and the Gulf Cooperation Council (GCC) recommended at a conference a joint approach that takes international standards into consideration. 
Voluntary industry response During the past few years, public-private partnerships have emerged as a promising approach for tackling cybersecurity issues around the globe. Executive branch agencies (e.g., the Federal Trade Commission in the US), regulatory agencies (e.g., the Australian Communications and Media Authority), separate agencies (e.g., ENISA in the EU) and industry (e.g., MAAWG, …) are all involved in partnership. In 2004, the London Action Plan was founded, which aims at promoting international spam enforcement cooperation and addressing spam-related problems, such as online fraud and deception, phishing, and the dissemination of viruses. Case analysis U.S. According to Sophos, the U.S. remains the top-spamming country and the source of about one-fifth of the world's spam. Cross-border cyber-exfiltration operations are in tension with international legal norms, so U.S. law enforcement efforts to collect foreign cyber evidence raise complex jurisdictional questions. Since fighting cybercrime involves a great number of sophisticated legal and other measures, only milestones rather than full texts are provided here. Legal and regulatory measures The first federal computer crime statute was the Computer Fraud and Abuse Act of 1984 (CFAA). In 1986, the Electronic Communications Privacy Act (ECPA) was enacted as an amendment to the federal wiretap law. The National Infrastructure Protection Act followed in 1996, the Cyberspace Electronic Security Act in 1999, and the Patriot Act in 2001. The Digital Millennium Copyright Act (DMCA) was enacted in 1998. The Cyber Security Enhancement Act (CSEA) was passed in 2002. The CAN-SPAM Act was issued in 2003, and subsequent implementation measures were made by the FCC and FTC. In 2005 the USA passed the Anti-Phishing Act, which added two new crimes to the US Code. In 2009, the Obama Administration released its Cybersecurity Report and policy. The Cybersecurity Act of 2010 is a bill seeking to increase collaboration between the public and the private sector on cybersecurity issues. 
A number of agencies have been set up in the U.S. to fight against cybercrime, including the FBI, the National Infrastructure Protection Center, the National White Collar Crime Center, the Internet Fraud Complaint Center, the Computer Crime and Intellectual Property Section of the Department of Justice (DoJ), the Computer Hacking and Intellectual Property Unit of the DoJ, and the Computer Emergency Readiness Team/Coordination Center (CERT/CC) at Carnegie Mellon. CyberSafe is a public service project designed to educate end users of the Internet about the critical need for personal computer security. Technical measures Cloud computing: It can make infrastructures more resilient to attacks and functions as data backup as well. However, as the Cloud concentrates more and more sensitive data, it becomes increasingly attractive to cybercriminals. Better encryption methods are being developed to deal with phishing, smishing and other illegal data interception activities. The Federal Bureau of Investigation has set up special technical units and developed Carnivore, a computer surveillance system which can intercept all packets that are sent to and from the ISP where it is installed, to assist in the investigation of cybercrime. Industry collaboration Public-private partnership: in 2006, the Internet Corporation for Assigned Names and Numbers (ICANN) signed an agreement with the United States Department of Commerce under which they partnered through the multistakeholder model of consultation. In 2008, the second annual Cyber Storm Exercise conference was held, involving nine states, four foreign governments, 18 federal agencies and 40 private companies. In 2010, the National Cyber Security Alliance's public awareness campaign was launched in partnership with the U.S. Department of Homeland Security, the Federal Trade Commission, and others. 
Incentives for ISPs: Though the cost of security measures increases, Internet service providers (ISPs) are encouraged to fight against cybercrime to win consumer support and a good reputation and brand image among consumers and peer ISPs as well. International cooperation The USA has signed and ratified the Convention on Cybercrime. The United States has actively participated in G8/OECD/APEC/OAS/U.S.-China cooperation in cracking down on international cyber crime. Future challenges Privacy in tracking down cybercrime is being challenged and has become a controversial issue. Public-private partnership: as the U.S. government gets more involved in the development of IT products, many companies worry this may stifle their innovation, even undermining efforts to develop more secure technology products. New legislative proposals now being considered by the U.S. Congress could be potentially intrusive on private industry, which may prevent enterprises from responding effectively to emerging and changing threats. Cyber attacks and security breaches are increasing in frequency and sophistication; they are targeting organizations and individuals with malware and anonymization techniques that can evade current security controls. Current perimeter-intrusion detection, signature-based malware detection, and anti-virus solutions are providing little defense. Relatively few organizations have recognized organized cyber criminal networks, rather than hackers, as their greatest potential cyber security threat; even fewer are prepared to address this threat. China In January 2009, China was ranked the No. 3 spam-producing country in the world, according to data compiled by security vendor Sophos. Sophos now ranks China as spam producer No. 20, right behind Spain. China's underground economy is booming, with an estimated 10 billion RMB in 2009. Hacking, malware and spam are immensely popular. With patriotic hacktivism, people hack to defend the country. 
Legal and regulatory measures Criminal Law – the basic law that defines law enforcement concerning cybercrime. In 2000, the Decision on Internet Security of the Standing Committee of the NPC was passed. In 2000, China issued a series of Internet rules that prohibit anyone from propagating pornography, viruses and scams. In 2003, China signed UN General Assembly Resolution 57/239 on the "Creation of a global culture of cybersecurity". In 2003, China signed the Geneva Declaration of Principles of the World Summit on the Information Society. In 2006, an anti-spam initiative was launched. In July 2006, the ASEAN Regional Forum (ARF), which included China, issued a statement that its members should implement cybercrime and cybersecurity laws "in accordance with their national conditions and by referring to relevant international instruments". In 2009, the ASEAN-China framework agreement on network and information security emergency response was adopted. In 2009, an agreement within the Shanghai Cooperation Organization on information security was made. Technical measures Internet censorship: China has made it tougher to register new Internet domains and has put in place stricter content control to help reduce spam. "Golden Shield Project" or "The Great Firewall of China": a national Internet control and censorship project. In 2009, the Green Dam software was introduced: it restricts access to a secret list of sites and monitors users' activity. Operating system change: China is trying to get around this by using Linux, though many technical impediments remain. Industry collaboration The Internet Society of China — the group behind China's anti-spam effort — is working on standards and better ways of cooperating to fight cybercrime. ISPs have become better at working with customers to cut down on the spam problem. International cooperation In 2005, China signed up for the London Action Plan on spam, an international effort to curb the problem. 
The Anti-Spam "Beijing Declaration" was adopted at the 2006 International Anti-Spam Summit. The APEC Working Group on Telecommunications agreed on an action plan for 2010–2015 that included "fostering a safe and trusted ICT environment". In January 2011, the United States and China committed for the first time at head of state level to work together on a bilateral basis on issues of cybersecurity. "Fighting Spam to Build Trust" will be the first effort to help overcome the trust deficit between China and the United States on cybersecurity. The Cyber Security China Summit 2011 will be held in Shanghai. Achievements and future challenges China successfully cracked down on spam volume in 2009. However, insufficient criminal laws and regulations are great impediments in fighting cybercrime. A lack of electronic evidence laws or regulations, the low legal rank of existing Internet control regulations and technological impediments together limit the efficiency of the Chinese government's law enforcement. See also Computer crime Computer security Convention on Cybercrime Cyberethics Cyberstalking Identity theft Internet fraud Legal aspects of computing References External links ITU Global Cybersecurity Agenda Convention on Cybercrime Sophos Security Reports US-China Joint Efforts in Cybercrime, EastWest Institute Computer Crime & Intellectual Property Section, United States Department of Justice Handbook of Legal Procedures of Computer and Network Misuse in EU Countries Cybercrime International criminal law
31590539
https://en.wikipedia.org/wiki/Content%20centric%20networking
Content centric networking
In contrast to the IP-based, host-oriented Internet architecture, content centric networking (CCN) emphasizes content by making it directly addressable and routable. Endpoints communicate based on named data instead of IP addresses. CCN is characterized by the basic exchange of content request messages (called "Interests") and content return messages (called "Content Objects"). It is considered an information-centric networking (ICN) architecture. The goals of CCN are to provide a more secure, flexible and scalable network, thereby addressing the Internet's modern-day requirements for secure content distribution on a massive scale to a diverse set of end devices. CCN embodies a security model that explicitly secures individual pieces of content rather than securing the connection or "pipe". It provides flexibility by using data names instead of host names (IP addresses). Additionally, named and secured content resides in distributed caches automatically populated on demand or selectively pre-populated. When requested by name, CCN delivers named content to the user from the nearest cache, traversing fewer network hops, eliminating redundant requests, and consuming fewer resources overall. CCN began as a research project at the Palo Alto Research Center (PARC) in 2007. The first software release (CCNx 0.1) was made available in 2009. CCN is the ancestor of related approaches, including named data networking. The CCN technology and its open-source code base were acquired by Cisco in February 2017. History The principles behind information-centric networks were first described in the original 17 rules of Ted Nelson's Project Xanadu in 1979. In 2002, Brent Baccala submitted an Internet Draft differentiating between connection-oriented and data-oriented networking and suggested that the Internet web architecture was rapidly becoming more data-oriented. 
In 2006, the DONA project at UC Berkeley and ICSI proposed an information centric network architecture, which improved TRIAD by incorporating security (authenticity) and persistence as first-class primitives in the architecture. On August 30, 2006, PARC Research Fellow Van Jacobson gave a talk titled "A new way to look at Networking" at Google. The CCN project was officially launched at PARC in 2007. In 2009, PARC announced the CCNx project (Content Centric Network), publishing the interoperability specifications and an open source implementation on the Project CCNx website on September 21, 2009. The original CCN design was described in a paper published at the International Conference on emerging Networking EXperiments and Technologies (CoNEXT) in December 2009. Annual CCNx Community meetings were held in 2011, 2012, 2013 and 2015. The protocol specification for CCNx 1.0 has been made available for comment and discussion. Work on CCNx happens openly in the ICNRG IRTF research group. Specification The CCNx specification was published in some IETF drafts. The specifications included: draft-irtf-icnrg-ccnxsemantics-01 draft-irtf-icnrg-ccnxmessages-01 draft-mosko-icnrg-ccnxurischeme-00 Seamless data integration within an open-run environment was proposed as a major contributing factor in protecting the security of cloud-based analytics and key network encryption. The driving force in adopting these heuristics was twofold: Batch-interrupted data streams remaining confined to an optimal run environment; and secure shared cloud access depending upon integrative analytic processes. Software The CCNx software was available on GitHub. Motivation and benefits The functional goal of the Internet Protocol as conceived and created in the 1970s was to enable two machines, one comprising resources and the other desiring access to those resources, to have a conversation with each other. 
The operating principle was to assign addresses to end-points, thereby enabling these end-points to locate and connect with one another. Since those early days, there have been fundamental changes in the way the Internet is used — from the proliferation of social networking services to viewing and sharing digital content such as videos, photographs, documents, etc. Instead of providing basic connectivity, the Internet has become largely a distribution network with massive amounts of video and web page content flowing from content providers to viewers. Internet users of today are demanding faster, more efficient, and more secure access to content without being concerned with where that content might be located. Networks are also used in many environments where the traditional TCP/IP communication model doesn't fit. The Internet of Things (IoT) and sensor networks are environments where the source-destination communication model doesn't always provide the best solution. CCN was designed to work in many environments, from high-speed data centers to resource-constrained sensors. CCN aims to be: Secure - The CCN communication model secures data and not the communication pipe between two specific end-hosts. However, ubiquitous content caching and the absence of a secure communication pipe between end hosts make it challenging to protect content against unauthorized access, which requires extra care and dedicated solutions. Flexible - CCN uses names to communicate. Names can be location independent and are much more adaptable than IP addresses. Network elements can make more advanced choices based on the named requests and data. Scalable - CCN enables the network to scale by allowing caching, enabling native multicast traffic, providing native load balancing and facilitating resource planning. Basic concepts Content Object messages are named payloads that are the network-sized chunks of data. 
Names are a hierarchical series of binary name segments that are assigned to Content Objects by content publishers. Signatures are cryptographic bindings between a name, a payload, and the Key Id of the publisher. This is used for provenance. Interest messages are requests for Content Objects that match the name along with some optional restrictions on that object. The core protocol operates as follows: Consumers issue a request for content by sending an Interest message with the name of the desired content. The network routes the interest based on the name using longest prefix match. The interest leaves state as it traverses the network. This state is stored in the Pending Interest Table (PIT). When a match is found (when an Interest matches a Content Object), the content is sent back on the reverse path of the Interest, following the PIT state created by the Interest. Because the content is self-identifiable (via the name and the security binding), any Content Object can be cached. Interest messages may be matched against caches along the way, not only at the publishers. Distributed caching within a content-centric network is also possible, requiring multifunctional access parameters across the database. This essentially enables shared network encryption algorithms to employ role-based access limitations to users based on defined authorization levels. CCNx releases CCNx 0.x Interests match Content Objects based on name prefixes. For example, an Interest for /a/b would match a Content Object named /a/b/c/d or /a/b. Interests include restrictions in the form of selectors. These help the network select which of the possible prefix matches are an actual match. For example, an Interest might exclude certain names, ask for a minimum or maximum number of extra name segments, etc. Content Objects have an implicit final name component that is equal to the hash of the Content Object. This may be used for matching to a name. 
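The Interest/Content Object exchange and PIT behavior described above can be illustrated with a toy forwarder. This is a hedged sketch, not PARC's implementation: names are modeled as tuples of segments, "faces" (network interfaces) are simple stubs, and timers, selectors, and signature verification are all omitted:

```python
class Face:
    """Minimal stand-in for a network interface ("face")."""
    def __init__(self):
        self.delivered = []   # Content Objects handed to this face
        self.sent = []        # Interests forwarded out of this face

    def deliver(self, name, content):
        self.delivered.append((name, content))

    def send_interest(self, name):
        self.sent.append(name)


class Forwarder:
    """Toy CCN node: Content Store, Pending Interest Table, prefix FIB."""
    def __init__(self, fib):
        self.content_store = {}   # name -> cached Content Object payload
        self.pit = {}             # pending name -> set of requesting faces
        self.fib = fib            # name prefix (tuple) -> upstream face

    def receive_interest(self, name, from_face):
        # 1. Content Store hit: answer directly from the nearest cache.
        if name in self.content_store:
            from_face.deliver(name, self.content_store[name])
        # 2. Already pending: aggregate in the PIT, forward nothing.
        elif name in self.pit:
            self.pit[name].add(from_face)
        # 3. Otherwise forward via longest prefix match, leaving PIT state.
        else:
            upstream = self._longest_prefix_match(name)
            if upstream is not None:
                self.pit[name] = {from_face}
                upstream.send_interest(name)

    def receive_content(self, name, content):
        # Cache the object, then follow the PIT state back to all requesters.
        self.content_store[name] = content
        for face in self.pit.pop(name, ()):
            face.deliver(name, content)

    def _longest_prefix_match(self, name):
        for length in range(len(name), -1, -1):
            if name[:length] in self.fib:
                return self.fib[name[:length]]
        return None
```

With a FIB entry for the prefix ("parc",), an Interest for ("parc", "videos", "clip") is forwarded upstream; a second Interest for the same name from another consumer is merely aggregated in the PIT, and when the Content Object returns, both consumers receive it on the reverse path and later requests are served from the Content Store.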
Packet encoding is done using ccnb, a proprietary format based on a type of binary XML. The last version of this branch is 0.8.2. Software is available under a GPL license; specifications and documentation are also available.

CCNx 1.x

CCNx 1.x differs from CCNx 0.x in the following ways:
Interests match Content Objects on exact names, not name prefixes. Therefore, an Interest for /a/b will only match a Content Object with the name /a/b.
Interests can restrict matches on the publisher KeyId or the object's ContentObjectHash.
A nested type-length-value (TLV) format is used to encode all messages on the wire. Each message is composed of a set of packet headers and a protocol message that includes the name, the content (or payload), and information used to cryptographically validate the message, all contained in nested TLVs.

The specification of CCNx 1.0 is available at: http://blogs.parc.com/ccnx/specifications/

Derivative works

Named data networking is an NSF-funded project based on the original CCNx 0.x code. CCN-lite is a lightweight version of CCNx functionally interoperable with CCNx 0.x.

Related projects

GreenICN is a project focused on disaster-recovery scenarios using an Information Centric Networking paradigm.

See also

Information-centric networking
Named data networking
Information-centric networking caching policies
David L. Aaron
David Laurence Aaron (born August 21, 1938) is an American diplomat and writer who served in the Jimmy Carter and Bill Clinton administrations. He graduated from Occidental College with a BA and from Princeton University with an MPA, and later received an honorary Ph.D. from Occidental College. He is currently director of the RAND Corporation's Center for Middle East Public Policy.

Background and early career

Aaron was born in Chicago, Illinois, United States. He entered the U.S. foreign service in 1962, serving as a political and economic officer in Guayaquil, Ecuador. In 1964 he was assigned to the NATO desk at the Department of State. He subsequently served as a political officer to NATO, where he worked on the Nuclear Planning Group and on the Non-Proliferation Treaty. He then joined the Arms Control and Disarmament Agency, where he served as a member of the U.S. delegation to the Strategic Arms Limitation Talks (SALT); during the talks, Aaron was a key negotiator of an agreement with the Soviet Union to reduce the risk of nuclear weapon accidents.

He was then recruited to serve on Henry Kissinger's National Security Council staff during the Nixon administration, from 1972 to 1974. During that time, Aaron drafted NSSM 242 on nuclear strategy, which came to be known as the Schlesinger Doctrine. In 1974, on the recommendation of Zbigniew Brzezinski, Aaron became Senator Walter Mondale's legislative assistant. The following year, Aaron was task force leader of the Senate's Select Committee on Intelligence and was the principal architect of the committee's recommendations. Aaron later followed Mondale to Jimmy Carter's presidential campaign.

Deputy National Security Advisor

In 1977, Aaron was asked by Brzezinski, who had been appointed National Security Advisor, to become Deputy National Security Advisor in the administration of Jimmy Carter. Aaron was one of several former Kissinger aides appointed by Carter to foreign policy and defense positions.
During his time at the White House, Aaron made a name for himself in foreign policy circles and was recognized as a rising star in the Democratic Party. He served as a special envoy to Africa, Latin America, China, Israel, and Europe, and became a trusted envoy on presidential missions. Shortly after Carter's inauguration, Aaron attended the Bilderberg Conference, where he held lengthy private discussions with German Chancellor Helmut Schmidt. In Israel, Aaron worked with Moshe Dayan on the concept of "autonomy" for the Palestinians. This concept helped to open the door for the Camp David Accords, which are understood to have structured peace between Egypt and Israel. Aaron also represented the White House in talks with the office of French President Valéry Giscard d'Estaing in Paris, as well as with the Cabinet Office at 10 Downing Street in London.

President Carter tapped Aaron to lead an inter-agency mission to structure an agreement with European nations to deploy U.S. Pershing missiles and ground-launched cruise missiles in Europe, in response to the Soviet Union's deployment of SS-20 intermediate-range missiles. He persuaded key governments to accept the U.S. deployments, as well as to seek negotiations with the U.S.S.R. for the future bilateral elimination of the deployments.

Aaron was also seen as a tough and sometimes controversial figure. The U.S. ambassador in Paris complained that Aaron was going behind his back in secret dealings with President Giscard d'Estaing's office. In 1978, he came head to head with Director of Central Intelligence Stansfield Turner over Turner's cutbacks at the CIA. Aaron's image as a "tough customer" was intensified during an attack on North Yemen by South Yemen, which was backed by the Soviet Union. While President Carter, Brzezinski, and Cyrus Vance were on a mission to Egypt and Israel, Aaron remained in Washington to coordinate the U.S. response.
Aaron's hard line against Communist expansion led him to push for the dispatch of $400 million in arms to North Yemen. White House staff commented on his tough rule; one staff member was quoted as saying, "Believe it or not, people were relieved when Brzezinski got back to town".

Post-government career

When Reagan became president in 1981, Aaron moved into the private sector, becoming Vice President for Mergers and Acquisitions at Oppenheimer and Co. and Vice Chairman of Oppenheimer International. Aaron left Oppenheimer in 1985 to write and lecture, but went on to serve on the board of directors of Oppenheimer's Quest for Value Dual Purpose Fund. Over the next several years he published three novels (State Scarlet, Agent of Influence, and Crossing By Night), which were translated into ten languages. He also wrote a television documentary, "The Lessons of the Gulf War", hosted by former Chairman of the Joint Chiefs of Staff William J. Crowe, and was a consultant for the 20th Century Fund from 1990 to 1992.

Aaron was involved in the election campaigns of Walter Mondale and Bill Clinton. In Mondale's campaign, Aaron played a leading role as senior consultant on foreign policy and defense; he later served on Clinton's foreign policy team during his election campaign.

OECD Ambassador tenure and aftermath

In 1993 he became United States Permanent Representative to the Organization for Economic Cooperation and Development (OECD) in Paris, and in 1996 was assigned the additional job of White House Special Envoy for Cryptography. At the OECD he successfully negotiated the Convention to Prohibit Bribery in International Business Transactions. As Special Envoy for Cryptography, Aaron pushed for a global standard that would require computer users with high-grade encryption to submit the keys to their data-scrambling codes to an independent authority, which would hold them in escrow and make them available to law enforcement only under a court order.
At the time, he argued that unbreakable codes in the hands of terrorists would threaten every country's security. However, he was attacked by advocates of privacy rights, who said that the compromise could easily be misused by governments and corporations. In 1997 he was appointed Under Secretary of Commerce for International Trade, where, ironically, he negotiated privacy rules with the European Union on the handling of personal data.

After Clinton's second term in office, Aaron became senior international advisor at Dorsey & Whitney. He left Dorsey & Whitney in 2003 to join the RAND Corporation as a senior fellow. At RAND, he directs the Center for Middle East Public Policy and produced a non-fiction book, "In Their Own Words: Voices of Jihad", published by the RAND Corporation.

Personal life

David married Chloe Aaron in 1962, with whom he had a son; his wife died in early 2020. He is a member of the American Ditchley Foundation, the Atlantic Council, the Council on Foreign Relations, the International League of Human Rights, the National Democratic Institute, and the Pacific Council on International Policy.

References

The Other Side of the Story, Jody Powell, Morrow, 1984

External links

David L. Aaron Papers at the Seeley G. Mudd Manuscript Library, Princeton University
Privacy concerns with social networking services
Since the arrival of early social networking sites in the early 2000s, online social networking platforms have expanded exponentially, with the biggest names in social media in the mid-2010s being Facebook, Instagram, Twitter, and Snapchat. The massive influx of personal information that has become available online and stored in the cloud has put user privacy at the forefront of discussion regarding databases' ability to safely store such personal information. The extent to which users and social media platform administrators can access user profiles has become a new topic of ethical consideration, and the legality, awareness, and boundaries of subsequent privacy violations are critical concerns of the technological age.

A social network is a social structure made up of a set of social actors (such as individuals or organizations), sets of dyadic ties, and other social interactions between actors. Privacy concerns with social networking services are a subset of data privacy, involving the right to mandate personal privacy concerning the storing, re-purposing, provision to third parties, and displaying of information pertaining to oneself via the Internet. Social network security and privacy issues result from the large amounts of information these sites process each day. Features that invite users to participate (messages, invitations, photos, open-platform applications, and others) are often the venues for others to gain access to a user's private information. In addition, the technologies needed to deal with users' information may intrude on their privacy.

The advent of Web 2.0 has facilitated social profiling and is a growing concern for internet privacy. Web 2.0 is the system that facilitates participatory information sharing and collaboration on the Internet, in social networking media websites like Facebook and MySpace. These social networking sites saw a boom in popularity beginning in the late 2000s.
Through these websites many people give out their personal information on the internet. These social networks keep track of all interactions on their sites and save them for later use. Issues include cyberstalking, location disclosure, social profiling, third-party personal information disclosure, and government use of social network websites in investigations without the safeguard of a search warrant.

History

Before social networking sites exploded over the past decade, there were earlier forms of social networking dating back to 1997, such as Six Degrees and Friendster. Alongside these two platforms, additional forms of social networking included online multiplayer games, blog and forum sites, newsgroups, mailing lists, and dating services. They created a backbone for the new modern sites. Since the start of these sites, privacy has been a public concern.

In 1996, a young woman in New York City was on a first date with an online acquaintance and later sued him for sexual harassment, after her date tried to play out some of the sexual fantasies they had discussed while online. This is just an early example of many more issues to come regarding internet privacy.

In the past, social networking sites primarily consisted of the capability to chat with others in a chat room, which was far less popular than social networks today. People using these sites were seen as "techies", unlike users in the current era. One of the early privacy cases concerned MySpace, due to "stalking of minors, bullying, and privacy issues", which inevitably led to the adoption of "age requirements and other safety measures". It is very common in society now for events such as stalking and "catfishing" to occur.

According to Kelly Quinn, "the use of social media has become ubiquitous, with 73% of all U.S. adults using social network sites today and significantly higher levels of use among young adults and females."
Social media sites have grown in popularity over the past decade, and they only continue to grow. A majority of the United States population uses some sort of social media site.

Causes

Several causes contribute to the invasion of privacy throughout social networking platforms. It has been recognized that "by design, social media technologies contest mechanisms for control and access to personal information, as the sharing of user-generated content is central to their function." This shows that social networking companies need private information to become public so their sites can operate: they require people to share and connect with each other. This may not necessarily be a bad thing; however, one must be aware of the privacy concerns.

Even with privacy settings, posts on the internet can still be shared with people beyond a user's followers or friends. One reason for this is that "English law is currently incapable of protecting those who share on social media from having their information disseminated further than they intend." Information always has the chance of being unintentionally spread online. Once something is posted on the internet, it becomes public and is no longer private. Users can turn privacy settings on for their accounts; however, that does not guarantee that information will not go beyond its intended audience. Pictures and posts can be saved, and posts may never really be deleted. In 2013, the Pew Research Center found that "60% of teenage Facebook users have private profiles." This shows that privacy is definitely something that people still wish to have.

A person's life becomes much more public because of social networking. Social media sites have allowed people to connect with many more people than would be possible through in-person interactions alone. People can connect with users from all across the world whom they may never have the chance to meet in person.
This can have positive effects; however, it also raises many concerns about privacy. Information can be posted about a person that they do not want getting out. In the book It's Complicated, the author, Danah Boyd, explains that some people "believe that a willingness to share in public spaces—and, most certainly, any act of exhibitionism and publicity—is incompatible with a desire for personal privacy." Once something is posted on the internet, it becomes accessible to multiple people and can even be shared beyond just assumed friends or followers.

Many employers now look at a person's social media before hiring them for a job or position. Social media has become a tool that people use to find out information about a person's life. Someone can learn a lot about a person based on what they post, before ever meeting them in person. The ability to achieve privacy is a never-ending process. Boyd writes that "achieving privacy requires the ability to control the social situation by navigating complex contextual cues, technical affordances, and social dynamics." Society is constantly changing; therefore, the ability to read social situations in order to maintain privacy has to change with it.

Various levels of privacy offered

Social networking sites vary in the levels of privacy offered. On some social networking sites, like Facebook, providing real names and other personal information is encouraged by the site (on a page known as a 'Profile'). This information usually consists of the birth date, current address, and telephone number(s). Some sites also allow users to provide more information about themselves, such as interests, hobbies, favorite books or films, and even relationship status. However, there are other social network sites, such as Match.com, where most people prefer to be anonymous. Thus, linking users to their real identity can sometimes be rather difficult. Nevertheless, individuals can sometimes be identified with face re-identification.
Studies have been done on two major social networking sites, and it was found that, with an overlap of as little as 15% between similar photographs, profile pictures across multiple sites can be matched to identify the users.

Public concern

"According to research conducted by the Boston Consulting Group, privacy of personal data is a top issue for 76 percent of global consumers and 83 percent of U.S. consumers." Six in ten Americans (61%) have said they would like to do more to protect their privacy. For sites that do encourage information disclosure, it has been noted that a majority of users have no trouble disclosing their personal information to a large group of people.

In 2005, a study was performed analyzing the data of 540 Facebook profiles of students enrolled at Carnegie Mellon University. It revealed that 89% of the users gave genuine names, and 61% gave a photograph of themselves for easier identification. A majority of users also had not altered their privacy settings, allowing a large number of unknown users to have access to their personal information (the default setting originally allowed friends, friends of friends, and non-friends of the same network to have the full view of a user's profile). It is possible for users to block other users from locating them on Facebook, but this must be done on an individual basis and would therefore appear not to be commonly used by a wide number of people. Most users do not realize that, while they may make use of the security features on Facebook, the default setting is restored after each update. All of this has led to many concerns that users are displaying far too much information on social networking sites, which may have serious implications for their privacy. Facebook has been criticized for the perceived laxity regarding privacy in its default settings for users.
The "Privacy Paradox" is a phenomenon that occurs when individuals who state that they have concerns about their privacy online take no action to secure their accounts. Furthermore, while individuals may take extra security steps for other online accounts, such as those related to banking or finance, this does not extend to social media accounts. Some of these basic or simple security steps would include deleting cookies and browser history or checking one's computer for spyware. Some may attribute this lack of action to "third-person bias": people are aware of risks but do not believe that these risks apply or relate to them as individuals. Another explanation is a simple risk-reward analysis: individuals may be willing to risk their privacy to reap the rewards of being active on social media. Oftentimes, the risk of being exploited for the private information shared on the internet is overshadowed by the rewards of sharing personal information that bolsters the appeal of the social media user.

In the study by Van der Velden and El Emam, teenagers are described as "active users of social media, who seem to care about privacy, but who also reveal a considerable amount of personal information." This raises the issue of what should be managed privately on social media and is an example of the Privacy Paradox. This study in particular looked at teenagers with mental illness and how they interact on social media. Researchers found that "it is a place where teenage patients stay up-to-date about their social life—it is not seen as a place to discuss their diagnosis and treatment." Therefore, social media is a forum that requires self-protection and privacy. Privacy should be a main concern, especially for teens who may not be entirely informed about the importance and consequences of public versus private use.
This gap between stated attitudes and behavior has been described as the "discrepancy between stated privacy concerns and the disclosure of private information."

User awareness in social networking sites

Users are often the targets as well as the source of information in social networking. Users leave digital imprints while browsing social networking sites or services. Several online studies have identified that users trust websites and social networking sites. Trust is defined by Mayer, Davis, and Schoorman (1995) as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" (p. 712). In a survey conducted at Carnegie Mellon University, a majority of users provided their home city and phone numbers, among other personal information, while clearly unaware of the consequences of sharing certain information. Adding to this insight, social networking users come from various cities, remote villages, towns, cultures, traditions, religions, backgrounds, economic classes, educational backgrounds, time zones, and so on, which highlights the significant gap in awareness.

The survey results of the paper suggest: "These results show that the interaction of trust and privacy concern in social networking sites is not yet understood to a sufficient degree to allow accurate modeling of behavior and activity. The results of the study encourage further research in the effort to understand the development of relationships in the online social environment and the reasons for differences in behavior on different sites."

A survey conducted among social networking users at Carnegie Mellon University indicated the following reasons for the lack of user awareness:
1) People disregard privacy risks because they trust the privacy and protection offered by social networking sites.
2) The availability of users' personal details to third-party tools and applications.
3) APIs and frameworks that enable any user with a fair amount of knowledge to extract other users' data.
4) Cross-site forgery and other possible website threats.

There is hence a dire need for swiftly improving users' awareness in order to address the growing security and privacy concerns caused merely by users' unawareness. Social networking sites themselves can take responsibility and make such awareness possible through participatory, virtual online means. One possible way to improve users' awareness is privacy-related training that helps people understand the privacy concerns that come with the use of social media websites or apps. Such training can include information on how certain companies or apps help secure users' privacy, and skills for protecting one's privacy.

Data access methods

There are several ways for third parties to access user information. Flickr is an example of a social media website that provides geotagged photos, allowing users to view the exact location where a person is visiting or staying. Geotagged photos make it easy for third-party users to see where an individual is located or traveling to. There is also growing use of phishing, which reveals sensitive information through secretive links and downloads in email, messages, and other communications. Social media has opened up an entirely new realm for hackers to get information from normal posts and messages.

Share it with third parties

Nearly all of the most popular applications on Facebook, including Farmville, Causes, and Quiz Planet, have been sharing users' information with advertising and tracking companies. Even though Facebook's privacy policy says that only "any of the non-personally identifiable attributes we have collected" may be provided to advertisers, this sharing violates that policy.
If a user clicks a specific ad on a page, Facebook sends the address of that page to advertisers, which leads directly to a profile page; in this case, it is easy to identify users' names. For example, Take With Me Learning is an app that allows teachers and students to keep track of their academic process. The app requires personal information that includes school name, user's name, email, and age. But Take With Me Learning was created by a company known for illegally gathering students' personal information without their knowledge and selling it to advertising companies. The company had violated the Children's Online Privacy Protection Act (COPPA), which is used to keep children safe from identity theft while using the internet.

Most recently, Facebook has been scrutinized for the collection of users' data by Cambridge Analytica. Cambridge Analytica was collecting data from Facebook users after they agreed to take a psychology questionnaire. Not only could Cambridge Analytica access the data of the person who took the survey, it could also access all of the data of that person's Facebook friends. This data was then used in the hope of swaying people's beliefs so that they would vote for a certain politician. While Cambridge Analytica's collection of the data may or may not have been illegal, the firm then transferred the data it acquired to third parties so that it could be used to sway voters. Facebook was fined £500,000 in the UK and $5bn (£4bn) in the US, and in 2020 the company was taken to court by Australia's privacy regulator with the prospect of a fine of A$1.7m (£860,000).

API

An application programming interface (API) is a set of routines, protocols, and tools for building software applications. By using a query language, sharing content and data between communities and applications has become much easier.
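As a purely hypothetical sketch of the API idea (the scope names, fields, and function below are invented and do not correspond to any real platform's API), an API can expose only the subset of a profile that a third-party application has been granted access to:

```python
# Toy illustration of scope-limited API access: the profile holds many
# fields, but only fields covered by the granted scopes are returned.
# All names here are invented for illustration.

SCOPE_FIELDS = {
    "public_profile": {"name", "profile_picture"},
    "email":          {"email"},
    "friends":        {"friend_list"},
}

def api_get_profile(profile, granted_scopes):
    """Return only the profile fields the granted scopes allow."""
    allowed = set()
    for scope in granted_scopes:
        allowed |= SCOPE_FIELDS.get(scope, set())
    return {k: v for k, v in profile.items() if k in allowed}

user = {
    "name": "Alice",
    "profile_picture": "alice.png",
    "email": "alice@example.com",
    "friend_list": ["Bob", "Carol"],
    "private_notes": "never exposed via any scope",
}
```

In this sketch, `api_get_profile(user, ["public_profile"])` returns only the name and picture; the privacy question raised in the text arises when a loophole or an over-broad scope lets an application collect far more than the user expected.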
APIs simplify all that by limiting outside program access to a specific set of features, often requests for data of one sort or another. APIs clearly define exactly how a program will interact with the rest of the software world, saving time. An API allows software to "speak with other software." Furthermore, an API can collect and provide information that is not publicly accessible. This is extremely enticing for researchers because of the greater number of possible avenues of research.

The use of an API for data collection can be a focal point of the privacy conversation, because while the data can be anonymous, the difficulty is understanding when it becomes an invasion of privacy. Personal information can be collected en masse, but the debate over whether it breaches personal privacy stems from the inability to match this information with specific people. There have, however, been some concerns with APIs because of the scandal between Facebook and the political consulting firm Cambridge Analytica. What happened was that "Facebook allowed a third-party developer to engineer an application for the sole purpose of gathering data. And the developer was able to exploit a loophole to gather information on not only people who used the app but all their friends — without them knowing."

Search engines

Search engines are an easy way to find information without scanning every site yourself. Keywords typed into a search box lead to the results, so it is necessary to make sure that the keywords typed are precise and correct. There are many such search engines, some of which may lead the user to fake sites that obtain personal information or are laden with viruses. Furthermore, some search engines, like DuckDuckGo, will not violate the user's privacy.
Location data

On most social media websites, users' geographical location can be gathered either by users (through voluntary check-in applications like Foursquare and Facebook Places) or by applications (through technologies like IP-address geolocation, cellphone network triangulation, RFID, and GPS). The approach used matters less than the result: the content produced is coupled with the geographical location where the user produced it. Additionally, many applications attach other forms of information, such as OS language, device type, and capture time. The result is that by posting, tweeting, or taking pictures, users produce and share an enormous amount of personal information.

Email and phone number leaks

Many large platforms reveal a part of a user's email address or phone number when the 'forgotten password' function is used. Often the whole email address can be derived from this hint, and phone digits can be compared with known numbers.

Benefit from data

Combined with data-mining technology, this accessible data allows users' information to be used in different ways to improve customer service. Based on what a user retweets, likes, and hashtags, Twitter can recommend topics and advertisements; Twitter's suggestions for whom to follow are produced by this recommendation system. Commerce sites, such as Amazon, make use of users' information to recommend items, with recommendations based on at least prior purchases, the shopping cart, and the wishlist. Affinity analysis is a data-mining technique used to understand the purchase behavior of customers. Using machine learning methods, it can be predicted, for example, whether a user is a potential follower of Starbucks. In such cases, it is possible to improve the quality and coverage of applications. In addition, user profiles can be used to identify similar users.
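The affinity-analysis idea mentioned above can be shown with a minimal sketch (the basket data is invented for illustration): counting how often items co-occur in purchases is the core of "customers who bought X also bought Y" recommendations.

```python
from collections import Counter
from itertools import combinations

# Minimal affinity-analysis sketch: count pairwise co-occurrence of
# items across purchase baskets, then rank items most often bought
# together with a given item.

def pair_counts(baskets):
    counts = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            counts[(a, b)] += 1
    return counts

def recommend(item, baskets):
    counts = pair_counts(baskets)
    related = Counter()
    for (a, b), n in counts.items():
        if a == item:
            related[b] += n
        elif b == item:
            related[a] += n
    return [it for it, _ in related.most_common()]

baskets = [
    ["coffee", "filter", "mug"],
    ["coffee", "filter"],
    ["coffee", "mug"],
    ["tea", "mug"],
]
```

Real systems add support/confidence thresholds and far richer signals (views, likes, profiles), but the privacy point stands: every basket-like record a platform logs feeds this kind of inference.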
According to Gary Kovacs's speech about Tracking our online trackers, when he used the internet to find an answer to a question, "We are not even 2 bites into breakfast and there are already nearly 25 sites that are tracking me", and he was navigated by 4 of them. Privacy concerns Studies have shown that people's belief in the right to privacy is the most pivotal predictor in their attitudes concerning online privacy. Social profiling and 3rd party disclosure The Privacy Act of 1974 (a United States federal law) states: "No agency shall disclose any record which is contained in a system of records by any means of communication to any person, or to another agency, except pursuant to a written request by, or with the prior written consent of, the individual to whom the record pertains [subject to 12 exceptions]." 5 U.S.C. § 552a(b). Disclosure in this context refers to any means of communication, be it written, oral, electronic or mechanical. This states that agencies are forbidden to give out, or disclose, the information of an individual without being given consent by the individual to release that information. However, it falls on the individual to prove that a wrongful disclosure, or disclosure in general, has occurred. Because of this social networking sites such as Facebook ask for permission when a third-party application is requesting the user's information. Although The Privacy Act of 1974 does a lot to limit privacy invasion through third party disclosure, it does list a series of twelve exceptions that deem disclosure permissible: 1. For members of an agency who need such information "in the performance of their duties". 2. If the Freedom of Information Act requires such information 3. If the information that is disclosed "is compatible with the purpose for which it was collected". 4. If the Bureau of Census needs such information to complete a particular census. 5. 
If the third party explicitly informs the individual that the information collected will serve only as a form of "statistical research" and is not "individually identifiable". 6. If it is historically relevant to be added to the National Archives and Records Administration. 7. If such information was requested by a law enforcement agency. 8. If such information is deemed beneficial to the "health or safety of an individual". 9. If such information is requested by a House of Congress or by one of its subcommittees. 10. If such information is requested by the head of the Government Accountability Office or by one "of his authorized representatives". 11. If such information is requested through a court order. 12. If such information is requested through the Debt Collection Act. Social profiling allows Facebook and other social networking websites to filter advertisements, assigning specific ones to specific age groups, gender groups, and even ethnicities. Data aggregation sites like Spokeo have highlighted the feasibility of aggregating social data across social sites as well as integrating it with public records. A 2011 study highlighted these issues by measuring the amount of unintended information leakage over a large number of users with varying numbers of social networks. It identified and measured information that could be used in attacks against what-you-know security. Studies have also pointed to most social networks unintentionally providing 3rd party advertising and tracking sites with personal information, raising the issue of private information inadvertently being sent to 3rd party advertising sites via referrer strings or cookies. Civil libertarians worry that social networking sites, particularly Facebook, have greatly diminished user confidentiality in numerous ways. For one thing, when social media platforms store private data, they also have complete access to that material. 
To sustain their profitability, applications like Facebook examine and market personal information by logging data through cookies, small files that stockpile the data on someone's device. Companies such as Facebook carry extensive amounts of private user information on file regarding individuals' “likes, dislikes, and preferences”, which are of high value to marketers. As Facebook reveals user information to advertising and marketing organizations, personalized endorsements appear on news feeds based on “surfing behavior, hobbies, or pop culture preferences”. For those reasons, Facebook's critics fear that social networking companies may seek business ventures with stockholders by sharing user information in exchange for profits. Additionally, they argue that since Facebook demonstrates an illusion of privacy presented by a “for-friends-only” type of platform, individuals find themselves more inclined to showcase more personal information online. According to the critics, users might notice that the sponsorships and commercials are tailored to their disclosed private data, which could result in a sense of betrayal. Institutional A number of institutions have expressed concern over the lack of privacy granted to users on social networking sites. These include schools, libraries, and government agencies. Libraries Libraries in particular, being concerned with the privacy of individuals, have debated allowing library patrons to access social networking sites on public library computers. While only 19% of librarians reportedly express real concern over social networking privacy, they have been particularly vocal in voicing their concerns. Some have argued that the lack of privacy found on social networking sites is contrary to the ethics supported by library organizations, which should thus be extremely apprehensive about dealing with such sites. 
Supporters of this view present their argument from the codes of ethics held by both the American Library Association and the UK-based Chartered Institute of Library and Information Professionals, which affirm a commitment to upholding privacy as a fundamental right. A 2008 study of fourteen public libraries in the UK found that 50% blocked access to social networking sites. Many school libraries have also blocked Facebook out of fear that children may be disclosing too much information on it. However, as of 2011, Facebook has taken efforts to combat this concern by deleting profiles of users under the age of thirteen. Potential dangers Identity theft Because so much information is provided, other things can be deduced, such as the person's social security number, which can then be used as part of identity theft. In 2009, researchers at Carnegie Mellon University published a study showing that it is possible to predict most and sometimes all of an individual's 9-digit Social Security number using information gleaned from social networks and online databases (see Predicting Social Security Numbers from Public Data by Acquisti and Gross). In response, various groups have advised that users either not display their number, or hide it from Facebook 'friends' they do not personally know. Cases have also appeared of users having photographs stolen from social networking sites in order to assist in identity theft. According to the Huffington Post, Bulgarian IT consultant Bogomil Shopov claimed in a blog post to have purchased personal information on more than 1 million Facebook users for the shockingly low price of US$5.00. The data reportedly included users' full names, email addresses, and links to their Facebook pages. Such information could be used to steal users' identities: full names including middle name, date of birth, hometown, relationship status, residential information, and other hobbies and interests. 
Preteens and early teenagers Among all age groups, the most vulnerable victims of private-information-sharing behavior are generally preteens and early teenagers. According to research, many teens report that social media and social networking services are important to building relationships and friendships. With this fact come privacy concerns such as identity theft, stealing of personal information, and data usage by advertising companies. Besides using social media to connect, teenagers use social networking services for political purposes and obtaining information. However, sometimes social media can become the place for harassment and disrespectful political debates that fuel resentment and raise privacy concerns. Age restrictions have been put on numerous websites, but how effective they are is debatable. Findings have unveiled that informative opportunities regarding internet privacy, as well as concerns from parents, teachers, and peers, play a significant role in influencing internet users' behavior in regards to online privacy. Additionally, other studies have found that heightening adolescents' concern about their privacy also leads to a greater probability that they will utilize privacy-protecting behaviors. In the technological culture that society is developing into, not only should adolescents' and parents' awareness be raised, but society as a whole should acknowledge the importance of online privacy. Preteens and early teenagers are particularly susceptible to social pressures that encourage young people to reveal personal data when posting online. Teens often post information about their personal life, such as activities they are doing, their current locations, and who they spend time with, as well as their thoughts and opinions. They tend to share this information because they do not want to feel left out or judged by other adolescents who are already practicing these sharing activities. 
Teens are motivated to keep themselves up to date with the latest gossip, current trends, and trending news. In doing so they allow themselves to become victims of cyberbullying and stalking, potentially harm their future pursuit of job opportunities, and, in the context of privacy, become more inclined to share their private information with the public. This is concerning because preteens and teenagers are the least educated on how public social media is, how to protect themselves online, and the detrimental consequences that could come from sharing too much personal information online. As more and more young individuals join social media sites, they believe it is acceptable to post whatever they are thinking, as they do not realize the potential harm that information can do to them and how they are sacrificing their own privacy. "Teens are sharing more information about themselves on social media sites than they did in the past." Preteens and teenagers are sharing information on social media sites such as Facebook, Snapchat, Instagram, Twitter, Pinterest, and more by posting pictures and videos of themselves, unaware of the privacy they are sacrificing. Adolescents post their real names, birthdays, and email addresses to their social media profiles. Children have less mobility than they have had in the past. Everything these teenagers do online is aimed at staying in the loop of social opportunities, and the concern is that they do this in a way that is not only traceable but situated in a very persistent environment that motivates people to continue sharing information about themselves. Consequently, they continue to use social media sites such as Facebook, despite knowing potential privacy risks exist. California is also taking steps to protect the privacy of some social media users from the users' own judgments. 
In 2013, California enacted a law that would require social media sites to allow young registered users to erase their own comments from sites. This is a first step in the United States toward the “right to be forgotten” that has been debated around the world over the past decade. Sexual predators Most major social networking sites are committed to ensuring that use of their services is as safe as possible. However, due to the high content of personal information placed on social networking sites, as well as the ability to hide behind a pseudo-identity, such sites have become increasingly popular for sexual predators online. Further, the lack of age verification mechanisms is a cause of concern on these social networking platforms. However, it has also been suggested that the majority of these simply transferred to using the services provided by Facebook. While the numbers may remain small, it has been noted that the number of sexual predators caught using social networking sites has been increasing, and has now reached an almost weekly basis. In the worst cases, children have become victims of pedophiles or been lured to meet strangers. Sexual predators can lurk anonymously through the wormholes of cyberspace and access victim profiles online. A number of highly publicized cases have demonstrated the threat posed to users, such as that of Peter Chapman who, under a false name, added over 3,000 friends and went on to rape and murder a 17-year-old girl in 2009. In another case, a 12-year-old girl from Evergreen was safely found by the FBI with the help of Facebook, after her mother learned of her daughter's conversation with a man she had met on the popular social networking application. Stalking The potential for stalking users on social networking sites has been noted and shared. 
Popular social networking sites make it easy to build a web of friends and acquaintances and share with them your photos, whereabouts, contact information, and interests without ever getting the chance to actually meet them. With the amount of information that users post about themselves online, it is easy for users to become victims of stalking without even being aware of the risk. 63% of Facebook profiles are visible to the public, meaning that a Google search for someone's name together with "+Facebook" will often reveal most of the person's profile. A study of Facebook profiles from students at Carnegie Mellon University revealed that about 800 profiles included current residence and at least two classes being studied, theoretically allowing viewers to know the precise location of individuals at specific times. AOL attracted controversy over its instant messenger AIM, which permits users to add 'buddies' without their knowing, and therefore track when a user is online. Concerns have also been raised over the relative ease for people to read private messages or e-mails on social networking sites. Cyberstalking is a criminal offense that comes into play under state anti-stalking laws, slander laws, and harassment laws. A cyberstalking conviction can result in a restraining order, probation, or even criminal penalties against the assailant, including jail. Some applications are explicitly centered on "cyber stalking." An application named "Creepy" can track a person's location on a map using photos uploaded to Twitter or Flickr. When a person uploads photos to a social networking site, others are able to track their most recent location. Some smartphones are able to embed the longitude and latitude coordinates into the photo and automatically send this information to the application. Anybody using the application can search for a specific person and then find their immediate location. 
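The coordinate embedding described above relies on a photo's EXIF metadata, which typically stores GPS latitude and longitude as degrees, minutes, and seconds plus a hemisphere reference. A minimal sketch of the conversion an application would perform to plot such a photo on a map (the sample coordinate values are invented for illustration):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N', 'S', 'E', 'W') into signed decimal degrees."""
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical EXIF GPS values as they might appear in a photo's metadata.
lat = dms_to_decimal(37, 46, 30.0, "N")    # 37.775
lon = dms_to_decimal(122, 25, 6.0, "W")    # about -122.4183
```

Once converted, the decimal pair can be dropped straight onto any web map, which is why stripping EXIF data before uploading photos is a common privacy recommendation.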
This poses many potential threats to users who share their information with a large group of followers. Facebook "Places" is a Facebook service which publicizes user location information to the networking community. Users are allowed to "check in" at various locations including retail stores, convenience stores, and restaurants. Also, users are able to create their own "place", disclosing personal information onto the Internet. This form of location tracking is automated and must be turned off manually. Various settings must be turned off and manipulated in order for the user to ensure privacy. According to epic.org, Facebook users are recommended to: (1) disable "Friends can check me in to Places," (2) customize "Places I Check In," (3) disable "People Here Now," and (4) uncheck "Places I've Visited." Moreover, the Federal Trade Commission has received two complaints in regards to Facebook's "unfair and deceptive" trade practices, which are used to target advertising sectors of the online community. "Places" tracks user location information and is used primarily for advertising purposes. Each location tracked allows third party advertisers to customize advertisements that suit one's interests. Currently, the Federal Trade Commission along with the Electronic Privacy Information Center are shedding light on the issues of location data tracking on social networking sites. Unintentional fame Unintentional fame can harm a person's character, reputation, relationships, chance of employment, and privacy, ultimately infringing upon a person's right to the pursuit of happiness. Many cases of unintentional fame have led its victims to take legal action. The right to be forgotten is a legal concept that includes removing one's information from media that was once available to the public. 
The right to be forgotten is currently enforced in the European Union and Argentina, and has been recognized in various cases in the United States, particularly in the case of Melvin v. Reid. However, there is controversy surrounding the right to be forgotten in the United States, as it conflicts with the public's right to know and the Constitution's First Amendment, restricting one's “right to freedom of speech and freedom of expression” (Amendment I). Privacy concerns have also been raised over a number of high-profile incidents which can be considered embarrassing for users. Various internet memes have been started on social networking sites or been used as a means towards their spread across the internet. In 2002, a Canadian teenager became known as the Star Wars Kid after a video of him using a golf club as a light sabre was posted on the internet without his consent. The video quickly became a hit, much to the embarrassment of the teenager, who claims to have suffered as a result. Along with other incidents of videos being posted on social networking sites, this highlights the ability for personal information to be rapidly transferred between users. Employment Issues relating to privacy and employment are becoming a concern with regard to social networking sites. As of 2008, CareerBuilder.com estimated that one in five employers search social networking sites in order to screen potential candidates (up from only 11% in 2006). For the majority of employers, such action is to acquire negative information about candidates. For example, 41% of managers considered information relating to candidates' alcohol and drug use to be a top concern. Other concerns investigated via social networking sites included poor communication skills, inappropriate photographs, inaccurate qualifications and bad-mouthing former employers or colleagues. 
However, 24% of managers claimed that information found on a social networking site persuaded them to hire a candidate, suggesting that a user image can be used in a positive way. While there is little doubt that employers will continue to use social networking sites as a means of monitoring staff and screening potential candidates, it has been noted that such actions may be illegal in some jurisdictions. According to Workforce.com, employers who use Facebook or Myspace could potentially face legal action: if a potential employer uses a social networking site to check out a job candidate and then rejects that person based on what they see, he or she could be charged with discrimination. On August 1, 2012, Illinois joined the state of Maryland (which passed such a law in March 2012) in prohibiting employer access to the social media web sites of their employees and prospective employees. A number of other states are also considering such prohibitory legislation (California, Delaware, Massachusetts, Michigan, Minnesota, Missouri, New Jersey, New York, Ohio, South Carolina and Washington), as is the United States Congress. In April 2012, the Social Networking Online Protection Act (2012 H.R. 5050) was introduced in the United States House of Representatives, and the Password Protection Act of 2012 (2012 S. 3074) was introduced in the United States Senate in May 2012; both would prohibit employers from requiring access to their employees' social media web sites. With the recent concerns about new technologies, the United States is now developing laws and regulations to protect certain aspects of people's information on different media. For example, 12 states in the US currently have laws specifically restricting employers from demanding access to their employees' social media sites when those sites are not fully public. 
(The states that have passed these laws are Arkansas, California, Colorado, Illinois, Maryland, Michigan, New Jersey, New Mexico, Nevada, Oregon, Utah, and Washington.) Monitoring of social networking sites is not limited to potential workers. Issues relating to privacy are becoming an increasing concern for those currently in employment. A number of high-profile cases have appeared in which individuals have been sacked for posting comments on social networking sites which have been considered disparaging to their current employers or fellow workers. In 2009, sixteen-year-old Kimberley Swann was sacked from her position at Ivell Marketing and Logistics Limited after describing her job as 'boring'. In 2008, Virgin Atlantic sacked thirteen cabin crew staff after it emerged that they had criticized the company's safety standards and called passengers 'chavs' on Facebook. There is no known federal law that an employer breaks by monitoring employees on social networking sites. In fact, employers can even hire third-party companies to monitor online employee activity for them. According to an article by ReadWriteWeb, employers use such services to "make sure that employees don't leak sensitive information on social networks or engage in any behavior that could damage a company's reputation." While employers may have found such usages of social networking sites convenient, complaints have been put forward by civil liberties groups and trade unions about the invasive approach adopted by many employers. In response to the Kimberley Swann case, Brendan Barber of the TUC stated: "Most employers wouldn't dream of following their staff down the pub to see if they were sounding off about work to their friends. Just because snooping on personal conversations is possible these days, it doesn't make it healthy." 
Monitoring of staff's social networking activities is also becoming an increasingly common method of ensuring that employees are not browsing websites during work hours. It was estimated in 2010 that an average of two million employees spent over an hour a day on social networking sites, costing potentially £14 billion. Online victimization Social networks are designed for individuals to socially interact with other people over the Internet. However, some individuals engage in undesirable online social behaviors, which negatively impact other people's online experiences. This has created a wide range of online interpersonal victimization. Some studies have shown that social network victimization appears largely among adolescents and teens, and that the types of victimization include sexual advances and harassment. Recent research has reported that approximately 9% of online victimization involves social network activities. It has been noted that many of these victims are girls who have been sexually victimized over these social network sites. Research concludes that many social network victimizations are associated with user behaviors and interaction with one another. Negative social behaviors such as aggressive attitudes and discussing sexually related topics motivate the offenders to achieve their goals. All in all, positive online social behaviors are promoted to help reduce and avoid online victimization. Surveillance While the concept of a worldwide communicative network seems to adhere to the public sphere model, market forces control access to such a resource. In 2010, an investigation by The Wall Street Journal found that many of the most popular applications on Facebook were transmitting identifying information about users and their friends to advertisers and internet tracking companies, which is a violation of Facebook's privacy policy. 
The Wall Street Journal analyzed the ten most popular Facebook apps, including Zynga's FarmVille with 57 million users and Zynga's Mafia Wars with 21.9 million users, and found that they were transmitting Facebook user IDs to data aggregators. Every online move leaves cyber footprints that are rapidly becoming fodder for research without people ever realizing it. Using social media for academic research is accelerating and raising ethical concerns along the way, as vast amounts of information collected by private companies — including Google, Microsoft, Facebook and Twitter — are giving new insight into all aspects of everyday life. A user's social media "audience" is bigger than they actually know; their followers or friends are not the only ones who can see information about them. Social media sites collect data from users simply through searches such as "favorite restaurant" on a search engine. "Facebook is transformed from a public space to a behavioral laboratory," says one study, which cites a Harvard-based research project of 1,700 college-based Facebook users in which it became possible to "deanonymize parts of the data set," or cross-reference anonymous data to make student identification possible. Some of Facebook's research on user behavior found that 71% of people drafted at least one post that they never posted. Another study analyzed 400,000 posts and found that children's communication with parents decreases in frequency from age 13 but then rises when they move out. Law enforcement prowling the networks The FBI has dedicated undercover agents on Facebook, Twitter, MySpace, and LinkedIn. One example of investigators using Facebook to nab a criminal is the case of Maxi Sopo. Charged with bank fraud, and having escaped to Mexico, he was nowhere to be found until he started posting on Facebook. Although his profile was private, his list of friends was not; one of those friends was a former official of the Justice Department, and through this connection he was eventually caught. 
In recent years, some state and local law enforcement agencies have also begun to rely on social media websites as resources. Although obtaining records of information not shared publicly by or about site users often requires a subpoena, public pages on sites such as Facebook and MySpace offer access to personal information that can be valuable to law enforcement. Police departments have reported using social media websites to assist in investigations, locate and track suspects, and monitor gang activity. On October 18, 2017, the Department of Homeland Security (DHS) was scheduled to begin using personal information collected from social media platforms to screen immigrants arriving in the U.S. The department made this new measure known in a posting to the Federal Register in September 2017, noting that “...social media handles, aliases, associated identifiable information and search results...” would be included in an applicant's immigration file. This announcement, which was made relatively quietly, has received criticism from privacy advocates. The Department of Homeland Security issued a statement in late September 2017 asserting that the planned use of social media is nothing new, with one department spokesperson saying DHS has been using social media to collect information for years. According to a statement made to National Public Radio, DHS uses “...social media handles, aliases, associated identifiable information, and search results” to keep updated records on persons of interest. According to the DHS, the posting to the Federal Register was an effort to be transparent regarding information about social media that is already being collected from immigrants. Government use of SMMS is also growing: “Social media monitoring software can be used to geographically track us as we communicate. It can chart out our relationships, networks, and associations. 
It can monitor protests, identify the leaders of political and social movements, and measure our influence.” SMMS is also a growing industry. SMMS “products like XI Social Discovery, Geofeedia, Dataminr, Dunami, and SocioSpyder (to name just a few) are being purchased in droves by Fortune 500 companies, politicians, law enforcement, federal agencies, defense contractors, and the military. Even the CIA has a venture fund, In-Q-Tel, that invests in SMMS technology.” Mob rule The idea of 'mob rule' can be described as a situation in which control is held by those outside the conventional or lawful realm. In response to the News International phone hacking scandal involving News of the World in the United Kingdom, a report was written to enact new media privacy regulations. The British author of the Leveson Report on the ethics of the British press, Lord Justice Leveson, has drawn attention to the need to take action on protecting privacy on the internet. This movement is described by Lord Justice Leveson as a global megaphone for gossip: "There is not only a danger of trial by Twitter, but also of an unending punishment, and no prospect of rehabilitation, by Google". Location updates Foursquare, Facebook, and Loopt are applications that allow users to check in, and these capabilities allow a user to share their current location information with their connections. Some users even update their travel plans on social networking applications. However, the disclosure of location information within these networks can cause privacy concerns among mobile users. Foursquare defines another framework of action for the user. It appears to be in the interest of Foursquare that users provide a large amount of personal data that is set as public. This is illustrated, among other things, by the fact that, although all the respondents want high control over their (location) privacy settings, almost none of them had ever checked the Foursquare privacy settings before. 
Although there are protection algorithms using encryption, k-anonymity and noise injection, it is better to understand how location sharing works in these applications to see whether they have good algorithms in place to protect location privacy. Invasive privacy agreements Another privacy issue with social networks is the privacy agreement. The privacy agreement states that the social network owns all of the content that users upload. This includes pictures, videos, and messages, which are all stored in the social network's database even if the user decides to terminate his or her account. Privacy agreements oftentimes say that they can track a user's location and activity based on the device used for the site. For example, the privacy agreement for Facebook states that "all devices that a person uses to access Facebook are recorded such as IP addresses, phone numbers, operating system and even GPS locations". One main concern about privacy agreements is their length, because they take a lot of time to fully read and understand. Most privacy agreements state the most important information at the end because it is assumed that people will not read it completely. The ethical dilemma lies in the fact that, upon agreeing to register for SNSs, the personal information disclosed is legally accessible and managed by the sites' privately established online security operators and operating systems, leaving access to user data "under the discretion" of the site operators. This gives rise to the moral obligation and responsibility of the site operators to keep private information within user control. However, because outsourcing of user data upon registration, without prior discretion, is legal, SNS operating systems have frequently outsourced data regardless of user privacy settings. Data outsourcing has been proven to be consistently exploited since the emergence of SNSs. 
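The k-anonymity and noise-injection protections mentioned above can be sketched minimally: one common approach generalizes a precise coordinate by snapping it to a coarse grid (so that many users in the same cell report an identical location), while another perturbs the coordinate with random noise before sharing it. The grid cell size, noise scale, and sample latitude below are illustrative assumptions, not any particular application's parameters.

```python
import random

def snap_to_grid(coord, cell=0.01):
    """Generalize a coordinate to a grid cell (0.01 degrees is roughly
    1 km of latitude), so every user inside the cell reports the same
    location, a simple form of spatial cloaking toward k-anonymity."""
    return round(coord / cell) * cell

def add_noise(coord, scale=0.005):
    """Perturb a coordinate with bounded uniform noise (noise injection),
    hiding the exact position while keeping the report roughly useful."""
    return coord + random.uniform(-scale, scale)

exact_lat = 48.85837            # hypothetical precise user latitude
blurred = snap_to_grid(exact_lat)   # snapped to the 48.86 grid cell
noisy = add_noise(exact_lat)        # within 0.005 degrees of the truth
```

The design trade-off is the same in both cases: larger cells or noise scales give stronger privacy but make location-based features (nearby friends, check-in suggestions) less accurate.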
Employers have often been found to hire individuals or companies to search deep into the SNS user database to find "less than pleasant" information regarding applicants during the review process. Reading a privacy statement in terms and conditions One of the main concerns that people have with their security is the lack of visibility of policies and settings in the social networks. These are often located in areas that are hard to see, like the top left or right of the screen. Another concern is the lack of information that users get from the companies when there is a change in their policies. Companies always inform users that there are new updates, but it is difficult to get information about what these changes are. Most social networking sites require users to agree to Terms of Use policies before they use their services. Controversially, these Terms of Use declarations that users must agree to often contain clauses permitting social networking operators to store data on users, or even share it with third parties. Facebook has attracted attention over its policies regarding data storage, such as making it difficult to delete an account, holding onto data after an account is de-activated, and being caught sharing personal data with third parties. This section explains how to read the privacy statement in the terms and conditions while signing up for any social networking site. What to look for in the privacy policy: Who owns the data that a user posts? What happens to the data when the user account is closed? How are changes in the privacy policy made known to users? Where is the effective privacy policy located? Will the profile page be completely erased when a user deletes the account? Where and how can a user complain in case of any breach of privacy? For how long is personal information stored? The answers to these questions will give an indication of how safe the social networking site is. 
Key points to protect social networking privacy

Realize the threats that will always exist
There are people out there who want—and will do just about anything—to get someone's private information. It is essential to realize that it is difficult to keep your privacy secured all the time. Among other factors, data loss has been observed to correlate positively with risky online behavior and with forgoing the antivirus and anti-spyware programs needed to defend against breaches of private information via the internet.

Be thorough all the time
Logging off after every session can help protect account security. It is dangerous to keep your device logged in, since others may have access to your social profiles while you are not paying attention. Full names and addresses are typically considered personal information. Children's safety may be compromised if their parents post their whereabouts on a site where others know their real identities.

Know the sites
Read the social networking site's fine print. Many sites push their users to agree to terms that are best for the sites, not the users. Users should be aware of the terms in case of emergencies; exactly how to read the terms is explained above in the "Reading a privacy statement in terms and conditions" section. Make sure the social networking site is safe before sharing information. Users should not share information if they do not know who is using the website, since their personally identifiable information could be exposed to other users of the site. Be familiar with the privacy protection provided. Users should take the extra time to get to know the privacy protection systems of the various social networks they are or will be using. Only friends should be allowed to access their information. Check the privacy or security settings on every social networking site that they might have to use.

Protect devices
Encrypt devices.
Users should use complex passwords on their computers and cell phones and change them from time to time. This will protect users' information in case these devices are stolen. Install anti-virus software: without it, installing something unsafe can let others use viruses and other means to invade a user's computer. Use devices that can disable the camera and microphone, which are often used for privacy invasion.

Be careful about taking drastic actions
Any of the following actions may threaten users' privacy and need special attention. (1) Adding a new friend. Facebook reports that 8.7% of its total profiles are fake. A user should be sure who a person is before adding them as a new friend. (2) Clicking on links. Many links that look attractive, such as gift cards, are specially designed by malicious users. Clicking on these links may result in losing personal information or money. (3) Posting revealing photos. Think twice: a revealing photo could attract the attention of potential criminals.

Social networks

Facebook
Facebook has been scrutinized for a variety of privacy concerns due to changes in its privacy settings on the site generally over time as well as privacy concerns within Facebook applications. When Mark Zuckerberg, CEO of Facebook, first launched Facebook in 2004, it was focused on universities and only those with a .edu address could open an account. Furthermore, only those within one's university network could see their page. Some argue that initial users were much more willing to share private information for these reasons. As time went on, Facebook became more public, allowing those outside universities, and furthermore, those without a specific network, to join and see pages of those in networks that were not their own. In 2006 Facebook introduced the News Feed, a feature that would highlight recent friend activity. By 2009, Facebook made "more and more information public by default".
For example, in December 2009, "Facebook drastically changed its privacy policies, allowing users to see each other's lists of friends, even if users had previously indicated they wanted to keep these lists private". Also, "the new settings made photos publicly available by default, often without users' knowledge". Facebook recently updated its profile format, allowing people who are not "friends" of others to view personal information about other users, even when the profile is set to private. However, as of January 18, 2011, Facebook reversed its decision to make home addresses and telephone numbers accessible to third-party members, but it is still possible for third-party members to have access to less exact personal information, like one's hometown and employment, if the user has entered the information into Facebook. EPIC Executive Director Marc Rotenberg said "Facebook is trying to blur the line between public and private information. And the request for permission does not make clear to the user why the information is needed or how it will be used." Breakup Notifier is an example of a Facebook "cyberstalking" app, which was taken down on 23 February 2011. The app was later unblocked. The application notifies the user when the person they selected changes their relationship status. The concept became very popular, with the site attracting 700,000 visits in the first 36 hours and the app being downloaded 40,000 times. Before the app was blocked, it had more than 3.6 million downloads and 9,000 Facebook likes. In 2008, four years after the first introduction of Facebook, Facebook created an option to permanently delete information. Until then, the only option was to deactivate one's Facebook account, which still left the user's information within Facebook servers. After thousands of user complaints, Facebook obliged and created a tool, which was located in the Help Section but was later removed.
To locate the tool to permanently delete a user's Facebook, he or she must manually search through Facebook's Help section by entering the request to delete the Facebook in the search box. Only then will a link be provided to prompt the user to delete his or her profile. These new privacy settings enraged some users, one of whom claimed, "Facebook is trying to dupe hundreds of millions of users they've spent years attracting into exposing their data for Facebook's personal gain." However, other features like the News Feed faced an initial backlash but later became a fundamental and very much appreciated part of the Facebook experience. In response to user complaints, Facebook continued to add more and more privacy settings resulting in "50 settings and more than 170 privacy options." However, many users complained that the new privacy settings were too confusing and were aimed at increasing the amount of public information on Facebook. Facebook management responded that "there are always trade offs between providing comprehensive and precise granular controls and offering simple tools that may be broad and blunt." It appears as though users sometimes do not pay enough attention to privacy settings and arguably allow their information to be public even though it is possible to make it private. Studies have shown that users actually pay little attention to "permissions they give to third party apps." Most users are not aware that they can modify the privacy settings and unless they modify them, their information is open to the public. On Facebook privacy settings can be accessed via the drop down menu under account in the top right corner. There users can change who can view their profile and what information can be displayed on their profile. In most cases profiles are open to either "all my network and friends" or "all of my friends." 
Also, information that shows on a user's profile, such as birthday, religious views, and relationship status, can be removed via the privacy settings. Users under 13 years old are not able to make a Facebook or MySpace account; however, this is not regulated. Although Zuckerberg, the Facebook CEO, and others in the management team usually respond in some manner to user concerns, they have been unapologetic about the trend towards less privacy. They have stated that they must continually "be innovating and updating what our system is to reflect what the current social norms are." Their statements suggest that the Internet is becoming a more open, public space, and changes in Facebook privacy settings reflect this. However, Zuckerberg did admit that in the initial release of the News Feed, they "did a bad job of explaining what the new features were and an even worse job of giving you control of them." Facebook's privacy settings have greatly evolved and are continuing to change over time. Zuckerberg "believes the age of privacy is 'over,' and that norms have evolved considerably since he first co-founded the social networking site". Additionally, Facebook has been under fire for keeping track of one's Internet usage whether users are logged into the social media site or not. A user may notice personalized ads under the 'Sponsored' area of the page. "The company uses cookies to log data such as the date, time, URL, and your IP address whenever you visit a site that has a Facebook plug-in, such as a 'Like' button." Facebook claims this data is used to help improve one's experience on the website and to protect against 'malicious' activity. Another privacy issue concerns Facebook's facial recognition software, which identifies photos that users are tagged in by developing a template based on one's facial features.
Similar to Rotenberg's claim that Facebook users are unclear about how or why their information has gone public, recently the Federal Trade Commission and Commerce Department have become involved. The Federal Trade Commission has recently released a report claiming that Internet companies and other industries will soon need to increase their protection for online users. Because online users often unknowingly opt in to making their information public, the FTC is urging Internet companies to make privacy notices simpler and easier for the public to understand, thereby increasing the option to opt out. Perhaps this new policy should also be implemented in the Facebook world. The Commerce Department claims that Americans "have been ill-served by a patchwork of privacy laws that contain broad gaps". Because of these broad gaps, Americans are more susceptible to identity theft and to having their online activity tracked by others.

Internet privacy and Facebook advertisements
Illegal activities on Facebook are very widespread; in particular, phishing attacks allow attackers to steal other people's passwords. Facebook users are led to a page where they are asked for their login information, and their personal information is stolen in that way. According to an April 22, 2010 report from PC World Business Center, a hacker named Kirllos illegally stole and sold 1.5 million Facebook IDs to business companies wanting to attract potential customers through advertisements on Facebook. The illegal approach was to use accounts bought from hackers to send advertisements to users' friends, who tend to trust them: "People will follow it because they believe it was a friend that told them to go to this link," said Randy Abrams, director of technical education with security vendor Eset.
About 2.2232% of the population on Facebook believed or followed the advertisements of their friends. Even though the percentage is small, Facebook has more than 400 million users worldwide, so the influence of advertisements on Facebook is huge. According to the blog of one advertiser, Alan, who posted advertisements on Facebook, he earned $300 over 4 days; that is, he earned $3 for every $1 put in. This profit attracts hackers to steal users' login information on Facebook, and business people to buy accounts from hackers and send advertisements to users' friends on Facebook. A leaked document from Facebook revealed that the company was able to identify "insecure, worthless, stressed or defeated" emotions, especially in teenagers, and then proceeded to inform advertisers. While similar issues have arisen in the past, this continues to make individuals' emotional states seem more like a commodity. Advertisers are able to target certain age groups depending on the time that their advertisements appear. Recently, there have been allegations against Facebook accusing the app of listening in on its users through their smartphone's microphone in order to gather information for advertisers. These rumors have been proven to be false as well as impossible. For one, because it does not have a specific buzzword to listen for like the Amazon Echo, Facebook would have to record everything its users say. This kind of "constant audio surveillance would produce about 33 times more data daily than Facebook currently consumes". Additionally, it would become immediately apparent to the user, as their phone's battery life would be swiftly drained by the amount of power it would take to record every conversation. Finally, it is clear that Facebook doesn't need to listen in on its users' conversations because it already has plenty of access to their data and internet search history through cookies.
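The figures quoted above can be checked with quick back-of-the-envelope arithmetic. This sketch just restates the numbers reported in the text (which are as reported, not independently verified):

```python
facebook_users = 400_000_000   # reported worldwide user count at the time
follow_rate = 2.2232 / 100     # reported share who followed friends' ad links

# Even a small percentage of a 400-million user base is a large audience.
reached = facebook_users * follow_rate
print(f"{reached:,.0f} users")   # about 8.9 million people

# The blogger reportedly earned $300 at a return of $3 per $1 spent,
# implying roughly $100 of ad spend over the 4 days.
revenue = 300
return_per_dollar = 3
spend = revenue / return_per_dollar
print(f"${spend:.0f} spent over 4 days")
```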
Facebook specifically states in its Cookies Policy that it uses cookies to help display ads that will pique the user's interest. It then uses this information to help make recommendations for numerous businesses, organizations, associations, etc. to individuals who may be interested in the products, services or causes they offer.

Security Breach
In September 2018, there was a security breach within Facebook. Hackers were able to access and steal personal information from nearly half of the 30 million affected accounts. The company initially believed that even more, around 50 million users, were affected in an attack that gave the hackers control of accounts.

Facebook friends study
A study was conducted at Northeastern University by Alan Mislove and his colleagues at the Max Planck Institute for Software Systems, where an algorithm was created to try to discover personal attributes of a Facebook user by looking at their friends list. They looked for information such as high school and college attended, major, hometown, graduation year and even what dorm a student may have lived in. The study revealed that only 5% of people thought to change their friends list to private. Of other users, 58% displayed the university attended, 42% revealed employers, 35% revealed interests and 19% gave viewers public access to where they were located. Due to the correlation between Facebook friends and the universities they attend, it was easy to discover where a Facebook user was based from their list of friends. This fact has become very useful to advertisers targeting their audiences, but it is also a big risk for the privacy of all those with Facebook accounts.

Facebook Emotion Study
Facebook knowingly agreed to and facilitated a controversial experiment; the experiment blatantly bypassed user privacy and demonstrates the dangers and complex ethical nature of the current networking management system.
In the "one week study in January of 2012", over 600,000 users were randomly selected to unknowingly partake in a study of the effect of "emotional alteration" by Facebook posts. Apart from the ethical issue of conducting such a study with human emotion in the first place, this is just one of the means by which data outsourcing has been used as a breach of privacy without user disclosure. Many issues surrounding Facebook are due to privacy concerns. An article titled "Facebook and Online Privacy: Attitudes, Behaviors, and Unintended Consequences" examines the awareness that Facebook users have of privacy issues. This study shows that the gratifications of using Facebook tend to outweigh the perceived threats to privacy. The most common strategy for privacy protection, decreasing profile visibility by restricting access to friends, is also a very weak mechanism: a quick fix rather than a systematic approach to protecting privacy. This study suggests that more education about privacy on Facebook would be beneficial to the majority of the Facebook user population. The study also offers the perspective that most users do not realize that restricting access to their data does not sufficiently address the risks resulting from the amount, quality, and persistence of data they provide. Although Facebook users in the study reported familiarity with and use of privacy settings, they still accepted people as "friends" that they had only heard of through others or did not know at all; therefore, most had very large groups of "friends" with access to widely uploaded information such as full names, birthdates, hometowns, and many pictures. This study suggests that social network privacy does not merely exist within the realm of privacy settings; privacy control is much within the hands of the user. Commentators have noted that online social networking poses a fundamental challenge to the theory of privacy as control.
The stakes have been raised because digital technologies lack "the relative transience of human memory," and can be trolled or data-mined for information. For users who are unaware of all privacy concerns and issues, further education on the safety of disclosing certain types of information on Facebook is highly recommended.

Instagram
Instagram tracks users' photos even if they do not post them using a geotag. It does this through the information within metadata, which is in all photos. Metadata contains information such as the lens type, location, and time of the photo, so users can be tracked through metadata without the use of geotags. The app geotags an uploaded image regardless of whether the user chose to share its location or not. Therefore, anybody can view the exact location where an image was uploaded on a map. This is concerning because most people upload photos from their home or other locations they frequent, and the ease with which locations are shared raises privacy concerns about stalkers and sexual predators being able to find their target in person after discovering them online. The new Search function on Instagram combines the search of places, people, and tags to look at nearly any location on earth, allowing users to scout out a vacation spot, look inside a restaurant, and even experience an event as if they were there in person. The privacy implication is that people and companies can now see into every corner of the world, culture, and people's private lives. Additionally, this is concerning for individual privacy, because when someone searches through these features on Instagram for a specific location or place, Instagram shows them the personal photos that its users have posted, along with the likes and comments on those photos, regardless of whether the posters' accounts are private or not.
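The location data embedded in photo metadata is stored in EXIF GPS tags as degrees, minutes, and seconds plus a hemisphere reference. The following sketch (a hypothetical helper, not Instagram's own code) shows how those fields reduce to the signed decimal coordinates a map service can plot:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS coordinates (degrees/minutes/seconds plus
    an N/S/E/W reference) into signed decimal degrees, the form used by
    mapping services. EXIF stores each component as a rational number."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative by convention.
    return -decimal if ref in ("S", "W") else decimal

# A hypothetical photo tagged at 40° 26' 46.3" N, 79° 58' 56.0" W
lat = dms_to_decimal(40, 26, 46.3, "N")   # ≈ 40.4462
lon = dms_to_decimal(79, 58, 56.0, "W")   # ≈ -79.9822
```

Anyone with a copy of the original image file can run a conversion like this on its EXIF tags, which is why stripping metadata before sharing is a common privacy recommendation.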
With these features, completely random people, businesses, and governments can see aspects of Instagram users' private lives. The Search and Explore pages, which collect data based on user tagging, illustrate how Instagram was able to create value out of the databases of information it collects on users throughout its business operations.

Swarm
Swarm is a mobile app that lets users check in to a location and potentially make plans and set up future meetings with people nearby. This app has made it easier for people in online communities to share their locations, as well as interact with others in this community by collecting rewards such as coins and stickers through competitions with other users. If a user is on Swarm, their exact location may be broadcast even if they didn't select their location to be "checked in." When users turn on their "Neighborhood Sharing" feature, their location is shared as the specific intersection that they are at, and this location in current time can be viewed simply by tapping their profile image. This is concerning because Swarm users may believe they are being discreet by sharing only which neighborhood they are in, while in fact they are sharing the exact pinpoint of their location. The privacy implication is that people are inadvertently sharing their exact location without knowing that they are. This plays into the privacy concerns of social media in general, because it makes it easier for other users, as well as the companies this location data is shared with, to track Swarm members. This tracking makes it easier for people to find their next targets for identity theft, stalking, and sexual harassment.

Spokeo
Spokeo is a "people-related" search engine with results compiled through data aggregation. The site contains information such as age, relationship status, estimated personal wealth, immediate family members and home address of individual people.
This information is compiled from what is already on the internet or in other public records, but the website does not guarantee accuracy. Spokeo has faced potential class action lawsuits from people who claim that the organization breaches the Fair Credit Reporting Act. In September 2010, Jennifer Purcell claimed that the FCRA was violated by Spokeo marketing her personal information. Her case is pending in court. Also in 2010, Thomas Robins claimed that his personal information on the website was inaccurate and he was unable to edit it for accuracy. The case was dismissed because Robins did not claim that the site directly caused him actual harm. On February 15, 2011, Robins filed another suit, this time stating that Spokeo had caused him "imminent and ongoing" harm.

Twitter
In January 2011, the US government obtained a court order to force the social networking site Twitter to reveal information about certain subscribers involved in the WikiLeaks cases. The outcome of this case is questionable because it deals with the user's First Amendment rights. Twitter moved to reverse the court order, supporting the idea that internet users should be notified and given an opportunity to defend their constitutional rights in court before those rights are compromised. Twitter's privacy policy states that information is collected through its different web sites, applications, SMS, services, APIs, and other third parties. When users use Twitter's service, they consent to the collection, transfer, storage, manipulation, disclosure, and other uses of this information. In order to create a Twitter account, one must give a name, username, password, and email address. Any other information added to one's profile is completely voluntary. Twitter's servers automatically record data such as IP address, browser type, the referring domain, pages visited, mobile carrier, device and application IDs, and search terms.
Any common account identifiers such as full IP address or username will be removed or deleted after 18 months. Twitter allows people to share information with their followers. Any messages that are not switched from the default privacy setting are public, and thus can be viewed by anyone with a Twitter account. The most recent 20 tweets are posted on a public timeline. Despite Twitter's best efforts to protect its users' privacy, personal information can still be dangerous to share. There have been incidents of leaked tweets on Twitter. Leaked tweets are tweets that have been published from a private account but have been made public. This occurs when friends of someone with a private account retweet, or copy and paste, that person's tweet, and so on until the tweet is made public. This can make private information public and could possibly be dangerous. Another issue involving privacy on Twitter deals with users unknowingly disclosing their information through tweets. Twitter has location services attached to tweets, which some users don't even know are enabled. Many users tweet about being at home and attach their location to their tweet, revealing their personal home address. This information is represented as a latitude and longitude, which is completely open for any website or application to access. People also tweet about going on vacation, giving the times and places of where they are going and how long they will be gone. This has led to numerous break-ins and robberies. Twitter users can avoid location services by disabling them in their privacy settings.

Teachers and MySpace
Teachers' privacy on MySpace has created controversy across the world. They are forewarned by The Ohio News Association that if they have a MySpace account, it should be deleted. Eschool News warns, "Teachers, watch what you post online." The ONA also posted a memo advising teachers not to join these sites.
Teachers can face consequences of license revocations, suspensions, and written reprimands. The Chronicle of Higher Education wrote an article on April 27, 2007, entitled "A MySpace Photo Costs a Student a Teaching Certificate" about Stacy Snyder. She was a student at Millersville University of Pennsylvania who was denied her teaching degree because of an allegedly unprofessional photo posted on MySpace, which involved her drinking with a pirate's hat on and a caption of "Drunken Pirate". As a substitute, she was given an English degree.

Other sites
Sites such as Sgrouples and Diaspora have attempted to introduce various forms of privacy protection into their networks, while companies like Safe Shepherd have created software to remove personal information from the net. Certain social media sites such as Ask.fm, Whisper, and Yik Yak allow users to interact anonymously. The problem with websites such as these is that “despite safeguards that allow users to report abuse, people on the site believe they can say almost anything without fear or consequences—and they do." This is a privacy concern because users can say whatever they choose and the receiver of the message may never know whom they are communicating with. Sites such as these allow a large chance of cyberbullying or cyberstalking occurring. People seem to believe that since they can be anonymous, they have the freedom to say anything no matter how mean or malicious.

Internet privacy and Blizzard Entertainment
On July 6, 2010, Blizzard Entertainment announced that it would display the real names tied to user accounts in its game forums. On July 9, 2010, CEO and cofounder of Blizzard Mike Morhaime announced a reversal of the decision to force posters' real names to appear on Blizzard's forums. The reversal was made in response to subscriber feedback.

Snapchat
Snapchat is a mobile application created by Stanford graduates Evan Spiegel and Bobby Murphy in September 2011.
Snapchat's main feature is that the application allows users to send a photo or video, referred to as a "snap", to recipients of choice for up to ten seconds before it disappears. If a recipient of a snap tries to screenshot the photo or video sent, a notification is sent to the original sender that it was screenshotted and by whom. Snapchat also has a "stories" feature where users can post photos to their "story", and friends can view the story as many times as they want until it disappears after twenty-four hours. Users have the ability to make their Snapchat stories viewable to all of the friends on their friends list, only specific friends, or the public: a public story can be viewed by anyone with a Snapchat account. In addition to the stories feature, messages can be sent through Snapchat. Messages disappear after they are opened unless manually saved by the user by holding down on the message until a "saved" notification pops up. No notification is sent to users that their message has been saved by the recipient; however, a notification is sent if the message is screenshotted.

2015 Snapchat privacy policy update
In 2015, Snapchat updated its privacy policy, causing outrage from users because of changes in its ability to save user content. These rules were put in place to help Snapchat create new features such as the ability to replay a Snapchat and the idea of “live” Snapchat stories. These features require saving content to Snapchat servers in order to release it to other users at a later time. The update stated that Snapchat has the rights to reproduce, modify, and republish photos, as well as save those photos to Snapchat servers. Users felt uncomfortable with the idea that all photo content was saved, and that the “disappearing photos” advertised by Snapchat didn't actually disappear. There is no way to control what content is saved and what isn't.
Snapchat responded to the backlash by saying it needed this license to access users' information in order to create new features, like the live Snapchat feature.

Live Stories
With the 2015 update, Snapchat users are able to contribute to "Live Stories", which are a "collection of crowdsourced snaps for a specific event or region." By doing so, a user allows Snapchat to share their location not just with friends, but with everyone. According to Snapchat, once users pick the option of sharing their content through a Live Story, they provide the company an "unrestricted, worldwide, perpetual right and license to use your name, likeness, and voice in any and all media and distribution channels."

Privacy concerns with Snapchat
A new feature incorporated into Snapchat in 2017, called Snap Maps, allows users to track other users' locations; when people "first use the feature, users can select whether they want to make their location visible to all of their friends, a select group of connections or to no one at all, which Snapchat refers to as 'ghost mode.'" This feature has raised privacy concerns, however: "It is very easy to accidentally share everything that you've got with more people than you need to, and that's the scariest portion," cybersecurity expert Charles Tendell told ABC News of the Snapchat update. For protecting younger users of Snapchat, "Experts recommend that parents stay aware of updates to apps like Snapchat. They also suggest parents make sure they know who their kids' friends are on Snapchat and also talk to their children about who they add on Snapchat." An additional concern users have with the privacy of Snapchat is the deletion of Snapchats after 30 days. Many users become confused as to why it looks like someone has gotten into their account and opened all of their Snapchats, which then increases their Snapscore.
This has caused great concern over hackers getting into personal Snapchat accounts. To reassure users, Snapchat has added a Support webpage explaining the expiration of Snapchats after 30 days, yet it is still very unclear. To clarify, this is exactly what happens: after 30 days, any unopened Snapchats will automatically be deleted or expire (which appears to the user the same as if they had been opened automatically). Therefore, this changes the user's Snapscore. After snaps expire, it will look as if all of the Snapchats have been opened, shown by many unfilled or open boxes.

Snapchat Spectacles
In 2016, Snapchat released a new product called "Snapchat Spectacles": sunglasses featuring a small camera that allows users to take photos and record up to 10 seconds of footage. The cameras in the Spectacles are connected to users' existing Snapchat accounts, so they can easily upload their content to the application. This new product has received negative feedback because the Spectacles do not stand out from normal sunglasses beyond the small cameras on the lenses. Therefore, users have the ability to record strangers without them knowing. Furthermore, the simplistic design may result in people using the glasses accidentally, mistaking them for regular glasses. Critics of Snapchat Spectacles argue that the product is an invasion of privacy for people who do not know they are being recorded by individuals wearing the glasses. Many people believe that these Spectacles pose a risk in that a wearer's physical location might be disclosed to various parties, making the user vulnerable. Proponents disagree, saying that the glasses are distinguishable enough that users and people around them will notice them. Another argument in favor of the glasses is that people are already exposing themselves to similar scenarios by being in public.
2016 Amnesty International Report
In October 2016, Amnesty International released a report ranking Snapchat along with ten other leading social media applications, including Facebook, iMessage, FaceTime, and Skype, on how well they protect users' privacy. The report assessed Snapchat's use of encryption and ranked it poorly because it does not use end-to-end encryption; as a result, third parties can access Snapchats while they are being transferred from one device to another. The report also noted that Snapchat's privacy policy does not explicitly inform users of the application's level of encryption or of any threats the application may pose to users' rights, which further reduced its overall score. Despite the report, Snapchat is currently considered the most trustworthy social media platform among users.

The FTC
In 2014, the Federal Trade Commission (FTC) alleged that Snapchat had deceived users about its privacy and security measures. Snapchat's main appeal is its marketed ability to have users' photos disappear completely after the one-to-ten-second time frame, selected by the sender, is up. The FTC made a case claiming this was false, putting Snapchat in violation of regulations implemented to prevent deceptive consumer information. One focus of the case was that a "snap" actually lives longer than most users perceive: the app's privacy policy stated that Snapchat itself temporarily stored all snaps sent, but it did not tell users how long snaps remained retrievable before being permanently deleted. As a result, many third-party applications were easily created that let consumers save snaps sent by users and screenshot snaps without notifying the sender.
The FTC also claimed that Snapchat collected information from its users, such as location and contact information, without their consent. Despite this not being stated in its privacy policy, Snapchat transmitted location information from mobile devices to its analytics tracking service provider. Although "Snapchat's privacy policy claimed that the app collected only your email, phone number, and Facebook ID to find friends for you to connect with, if you're an iOS user and entered your phone number to find friends, Snapchat collected the names and phone numbers of all the contacts in your mobile device address books without your notice or consent." It was disclosed that the Gibsonsec security group had warned Snapchat of potential issues with its security; however, no action was taken to reinforce the system. In early 2014, 4.6 million matched usernames and phone numbers of users were publicly leaked, adding to the application's existing privacy controversy. Finally, the FTC claimed that Snapchat failed to secure its "find friends" feature by not requiring phone number verification during registration. Users could register accounts with numbers other than their own, allowing them to impersonate anyone they chose. Snapchat had to release a public statement of apology to alert users of the misconduct and restate its purpose as a "fast and fun way to communicate with photos".

WhatsApp
WhatsApp, created in 2009, is a platform that allows users to communicate via text and voice message, video chatting, and document sharing for free. WhatsApp was acquired by Facebook in 2014, but the brand continues to be promoted as a secure and reliable form of communication. The app can be downloaded and used on Android, iPhone, Mac or Windows PC, and Windows Phone devices without SMS fees or charges from a carrier.
While asterisks across the WhatsApp website note potential fees and additional charges, it has become a popular application for consumers who communicate with people overseas.

Privacy and security with WhatsApp
In 2019, WhatsApp introduced new privacy and security measures for its users, including Hide Muted Status and Frequently Forwarded. The Hide Muted Status feature allows users to hide specific updates or interactions from specific users; however, if the user later decides to "unhide" their status or updates, a list of all updates will be shown to the previously blocked user, including the previously hidden ones. Similar to apps such as Snapchat and Instagram, users are notified when a story is forwarded, viewed, screenshotted, or shared. WhatsApp developers added the Frequently Forwarded feature, which notifies users when a message, status, or update has been forwarded four or more times.

Response to criticism
Many social networking organizations have responded to the criticism and privacy concerns raised over time. It is claimed that changes to default settings, the storage of data, and sharing with third parties have all been updated and corrected in light of criticism and/or legal challenges. However, many critics remain unsatisfied, noting that fundamental changes to privacy settings in many social networking sites remain minor and at times inaccessible, and argue that social networking companies prefer to criticize users rather than adapt their policies. Some suggest that individuals can obtain privacy by reducing or ending their own use of social media, but this alone does not succeed, since their information is still revealed by posts from their friends. There is also ambiguity about how private IP addresses are.
The Court of Justice of the European Union has ruled that IP addresses must be treated as personally identifiable information if the business receiving them, or a third party such as a service provider, knows the name or street address of the IP address holder, which would be true for static IP addresses but not for dynamic addresses. California regulations say IP addresses must be treated as personal information if the business itself, not a third party, can link them to a name and street address. In 2020, an Alberta court ruled that police can obtain the IP addresses, and the names and addresses associated with them, without a search warrant; an investigation found the IP addresses that initiated online crimes, and the service provider gave police the names and addresses associated with those IP addresses.

See also
Anonymous social media
Criticism of Facebook
Facebook
Index of Articles Relating to Terms of Service and Privacy Policies
Information privacy
Issues relating to social networking services
Myspace
Social media measurement
Social networking service
Surveillance capitalism
Unauthorized access in online social networks

References

External links
Data Protection and Freedom of Information Advice
Privacy Law and Data Protection Law
American Library Association Privacy

Ethics Internet privacy Social networking services Terms of service Privacy controversies and disputes
31607666
https://en.wikipedia.org/wiki/2011%20PlayStation%20Network%20outage
2011 PlayStation Network outage
The 2011 PlayStation Network outage (sometimes referred to as the PSN Hack) was the result of an "external intrusion" on Sony's PlayStation Network and Qriocity services, in which personal details from approximately 77 million accounts were compromised and users of PlayStation 3 and PlayStation Portable consoles were prevented from accessing the service. The attack occurred between April 17 and April 19, 2011, forcing Sony to turn off the PlayStation Network on April 20. On May 4, Sony confirmed that personally identifiable information from each of the 77 million accounts had been exposed. The outage lasted 23 days. At the time, with 77 million registered PlayStation Network accounts, it was one of the largest data security breaches in history, surpassing the 2007 TJX hack, which affected 45 million customers. Government officials in various countries voiced concern over the theft and over Sony's one-week delay before warning its users. Sony stated on April 26 that it was attempting to get online services running "within a week." On May 14, Sony released PlayStation 3 firmware version 3.61 as a security patch; the firmware required users to change their account's password upon signing in. At the time the firmware was released, the network was still offline. Regional restoration was announced by Kazuo Hirai in a video from Sony, and a map of regional restoration of the network within the United States was shared as the service came back online.

Timeline of the outage
On April 20, 2011, Sony acknowledged on the official PlayStation Blog that it was "aware certain functions of the PlayStation Network" were down. Upon attempting to sign in via the PlayStation 3, users received a message indicating that the network was "undergoing maintenance". The following day, Sony asked its customers for patience while the cause of the outage was investigated and stated that it might take "a full day or two" to get the service fully functional again.
The company later announced that an "external intrusion" had affected the PlayStation Network and Qriocity services. This intrusion occurred between April 17 and April 19. On April 20, Sony suspended all PlayStation Network and Qriocity services worldwide. Sony expressed regret for the downtime and said that, while repairing the system would be "time-consuming", the work would lead to a stronger network infrastructure and additional security. On April 25, Sony spokesman Patrick Seybold reiterated on the PlayStation Blog that fixing and enhancing the network was a "time intensive" process with no estimated time of completion. However, the next day Sony stated that there was a "clear path to have PlayStation Network and Qriocity systems back online", with some services expected to be restored within a week. Furthermore, Sony acknowledged the "compromise of personal information as a result of an illegal intrusion on our systems." On May 1, Sony announced a "Welcome Back" program for customers affected by the outage. The company also confirmed that some PSN and Qriocity services would be available during the first week of May. The list of services expected to become available included:
On May 2, Sony issued a press release stating that Sony Online Entertainment (SOE) services had been taken offline for maintenance due to potentially related activities during the initial criminal hack. Over 12,000 credit card numbers, albeit in encrypted form, from non-U.S. cardholders, and additional information from 24.7 million SOE accounts, may have been accessed. During the week, Sony sent a letter to the US House of Representatives answering questions and concerns about the event. In the letter, Sony announced that it would provide identity theft insurance policies in the amount of US$1 million per user of the PlayStation Network and Qriocity services, despite there being no reports of credit card fraud.
This was later confirmed on the PlayStation Blog, where it was announced that the service, AllClear ID Plus powered by Debix, would be available to users in the United States free for 12 months and would include Internet surveillance, complete identity repair in the event of theft, and a $1 million identity theft insurance policy for each user. On May 6, Sony stated it had begun the "final stages of internal testing" of the rebuilt PlayStation Network. However, the following day Sony reported that it would not be able to bring services back online within the one-week timeframe given on May 1, because "the extent of the attack on Sony Online Entertainment servers" had not been known at the time. SOE confirmed on its Twitter account that its games would not be available until some time after the weekend. Reuters began reporting the event as "the biggest Internet security break-in ever". A Sony spokesperson said:
Sony had removed the personal details of 2,500 people stolen by hackers and posted on a website
The data included names and some addresses, which were in a database created in 2001
No date had been fixed for the restart
On May 14, various services began coming back online on a country-by-country basis, starting with North America. These services included: sign-in for PSN and Qriocity services (including password resetting), online game-play on PS3 and PSP, playback of rental video content, the Music Unlimited service (PS3 and PC), access to third-party services (such as Netflix, Hulu, Vudu and MLB.tv), friends lists, chat functionality and PlayStation Home. The changes came with firmware update 3.61 for the PS3. As of May 15, service in Japan and East Asia had not yet been approved. On May 18, SOE shut down the password reset page on its site following the discovery of another exploit that allowed users to reset other users' passwords using the other user's email address and date of birth.
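The reset exploit worked because the flow depended only on facts an attacker could plausibly learn: an email address and a date of birth. For contrast, here is a minimal sketch of the standard alternative, a random, single-use, expiring reset token. The in-memory store and all names here are illustrative, not a description of Sony's actual systems.

```python
import secrets
import time

# Illustrative in-memory store: token -> (account id, expiry timestamp).
_reset_tokens = {}

TOKEN_TTL = 15 * 60  # tokens expire after 15 minutes

def issue_reset_token(account_id):
    """Generate an unguessable, single-use reset token for an account."""
    token = secrets.token_urlsafe(32)  # ~256 bits of randomness
    _reset_tokens[token] = (account_id, time.time() + TOKEN_TTL)
    return token  # in practice, delivered by email to the address on file

def redeem_reset_token(token):
    """Return the account id if the token is valid; consume it either way."""
    entry = _reset_tokens.pop(token, None)  # single use: always removed
    if entry is None:
        return None
    account_id, expires = entry
    if time.time() > expires:
        return None  # expired tokens are rejected
    return account_id
```

Because the token is random and delivered out of band, knowing a victim's email address and birth date is no longer sufficient to take over the account.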
Sign-in using PSN details to various other Sony websites was also disabled, but console sign-ins were not affected. On May 23, Sony stated that the outage costs were $171 million.

Sony response
US House of Representatives
Sony reported on May 4 to the PlayStation Blog that:
Sony relayed via the letter that:

Explanation of delays
On April 26, 2011, Sony explained on the PlayStation Blog why it took so long to inform PSN users of the data theft:

Sony investigation
Possible data theft led Sony to provide an update on the criminal investigation in a blog post on April 27: "We are currently working with law enforcement on this matter as well as a recognized technology security firm to conduct a complete investigation. This malicious attack against our system and against our customers is a criminal act and we are proceeding aggressively to find those responsible." On May 3, Sony Computer Entertainment CEO Kazuo Hirai reiterated this and said the "external intrusion" which had caused the shutdown of the PlayStation Network constituted a "criminal cyber attack". Hirai expanded further, claiming that Sony systems had been under attack "for the past month and half" prior to the outage, suggesting a concerted attempt to target Sony. On May 4, Sony announced that it was adding Data Forte to the investigation team of Guidance Software and Protiviti in analysing the attacks. Legal aspects of the case were handled by Baker & McKenzie. Sony stated its belief that Anonymous, a decentralized and loosely affiliated group of hackers and activists, may have performed the attack; no members of Anonymous claimed involvement. Upon learning that a breach had occurred, Sony launched an internal investigation and reported, in its letter to the United States Congress:

Inability to use PlayStation 3 content
While most games remained playable in their offline modes, the PlayStation 3 was unable to play certain Capcom titles in any form.
Streaming video providers in different regions, such as Hulu, Vudu, Netflix and LoveFilm, displayed the same maintenance message. Some users claimed to be able to use Netflix's streaming service, but others were unable to.

Criticism of Sony
Delayed warning of possible data theft
On April 26, nearly a week after the outage began, Sony confirmed that it "cannot rule out the possibility" that personally identifiable information such as PlayStation Network account username, password, home address, and email address had been compromised. Sony also mentioned the possibility that credit card data had been taken, after claiming that encryption had been placed on the databases, which would partially satisfy PCI compliance for storing credit card information on a server. Following the announcement on both the official blog and by e-mail, users were asked to safeguard credit card transactions by checking bank statements. This warning came nearly a week after the initial "external intrusion", while the network was turned off. Some disputed this explanation, arguing that if Sony deemed the situation severe enough to turn off the network, it should have warned users of possible data theft sooner than April 26. Concerns were raised over violations of PCI compliance and the failure to immediately notify users. US Senator Richard Blumenthal wrote to Sony Computer Entertainment America CEO Jack Tretton questioning the delay. Sony replied in a letter to the subcommittee:

Unencrypted personal details
Credit card data was encrypted, but Sony admitted that other user information was not encrypted at the time of the intrusion. The Daily Telegraph reported that "If the provider stores passwords unencrypted, then it's very easy for somebody else – not just an external attacker, but members of staff or contractors working on Sony's site – to get access and discover those passwords, potentially using them for nefarious means."
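The risk the Telegraph describes is why services are expected to store only salted, deliberately slow one-way hashes of passwords, so that even someone with database access cannot read them. A minimal sketch using Python's standard-library scrypt; the cost parameters here are illustrative, not a production recommendation, and this is not a description of Sony's implementation.

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)  # unique per user, defeats precomputed tables
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)  # deliberately expensive
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

With this scheme a leaked database yields only salts and digests; an attacker must brute-force each password through the slow hash, rather than reading credentials directly as the Telegraph's scenario allows.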
On May 2, Sony clarified the "unencrypted" status of users' passwords, stating that:

British Information Commissioner's Office
Following a formal investigation of Sony for breaches of the UK's Data Protection Act 1998, the Information Commissioner's Office issued a statement highly critical of the security Sony had in place: Sony was fined £250,000 ($395,000) for security measures so poor that they did not comply with British law.

Sony Online Entertainment outage
On May 3, Sony stated in a press release that there might be a correlation between the attack that had occurred on April 16 towards the PlayStation Network and the one that compromised Sony Online Entertainment on May 2. This portion of the attack resulted in the theft of information on 24.6 million Sony Online Entertainment account holders. The database involved contained 12,700 credit card numbers, particularly those of non-U.S. residents, and had not been in use since 2007, as much of the data applied to expired cards and deleted accounts. Sony updated this information the following day, stating that only 900 cards on the database were still valid. The attack resulted in the suspension of SOE servers and Facebook games. SOE granted 30 days of free time, plus one day for each day the servers were down, to users of Clone Wars Adventures, DC Universe Online, EverQuest, EverQuest II, EverQuest Online Adventures, Free Realms, Pirates of the Burning Sea, PlanetSide, Poxnora, Star Wars Galaxies and Vanguard: Saga of Heroes, as well as other forms of compensation for all other Sony Online games.

Security experts
Eugene Lapidous of AnchorFree, Chester Wisniewski of Sophos Canada and Avner Levin of Ryerson University criticized Sony, questioning its methods of securing user data. Lapidous called the breach "difficult to excuse" and Wisniewski called it "an act of hubris or simply gross incompetence".

Reaction
Compensation to users
Sony hosted special events after the PlayStation Network returned to service.
Sony stated that it had plans for PS3 versions of DC Universe Online and Free Realms to help alleviate some of its losses. In a press conference in Tokyo on May 1, Sony announced a "Welcome Back" program. As well as "selected PlayStation entertainment content", the program included 30 days of free PlayStation Plus membership for all PSN members, while existing PlayStation Plus members received an additional 30 days on their subscription; Qriocity subscribers also received 30 days. Sony promised other content and services over the coming weeks, and offered one year of free identity theft protection to all users, with details forthcoming. Hulu compensated PlayStation 3 users for the inability to use its service during the outage by offering one week of free service to Hulu Plus members. On May 16, 2011, Sony announced that two PlayStation 3 games and two PSP games would be offered for free, chosen from lists of five and four titles respectively. The games available varied by region and were only available in countries which had access to the PlayStation Store prior to the outage. On May 27, 2011, Sony announced the "Welcome Back" package for Japan and the Asia region (Hong Kong, Singapore, Malaysia, Thailand and Indonesia). In the Asia region, a theme (Dokodemo Issyo Spring Theme) was offered for free in addition to the games in the "Welcome Back" package. Five PSP games were offered in the Japanese market, and the version of Killzone: Liberation offered did not include online gameplay functionality.

Government reaction
The data theft concerned authorities around the world. Graham Cluley, senior technology consultant at Sophos, said the breach "certainly ranks as one of the biggest data losses ever to affect individuals". The British Information Commissioner's Office stated that Sony would be questioned and that an investigation would take place to discover whether Sony had taken adequate precautions to protect customer details.
Under the UK's Data Protection Act, Sony was fined £250,000 for the breach. Privacy Commissioner of Canada Jennifer Stoddart confirmed that the Canadian authorities would investigate; the Commissioner's office expressed concern as to why the authorities in Canada were not informed of the security breach earlier. US Senator Richard Blumenthal of Connecticut demanded answers from Sony about the data breach, writing to SCEA CEO Jack Tretton to criticize the delay in informing customers and to insist that Sony do more for its customers than just offer free credit reporting services. Blumenthal later called for an investigation by the US Department of Justice to find the person or persons responsible and to determine whether Sony was liable for the way it handled the situation. Congresswoman Mary Bono Mack and Congressman G. K. Butterfield sent a letter to Sony, demanding information on when the breach was discovered and how the crisis would be handled. Sony had been asked to testify before a congressional hearing on security and to answer questions about the breach on May 2, but sent a written response instead.

Legal action against Sony
A lawsuit was filed on April 27 by Kristopher Johns of Birmingham, Alabama, on behalf of all PlayStation users, alleging Sony "failed to encrypt data and establish adequate firewalls to handle a server intrusion contingency, failed to provide prompt and adequate warnings of security breaches, and unreasonably delayed in bringing the PSN service back online." According to the complaint, Sony failed to notify members of a possible security breach and stored members' credit card information in violation of PCI compliance, the digital security standard for the payment card industry. A Canadian lawsuit against Sony USA, Sony Canada and Sony Japan claimed damages of up to C$1 billion, including free credit monitoring and identity theft insurance.
The plaintiff was quoted as saying, "If you can't trust a huge multi-national corporation like Sony to protect your private information, who can you trust? It appears to me that Sony focuses more on protecting its games than its PlayStation users". In October 2012, a California judge dismissed a lawsuit against Sony over the PSN security breach, ruling that Sony had not violated California's consumer-protection laws and citing that "there is no such thing as perfect security". In 2013, the United Kingdom's Information Commissioner's Office imposed a £250,000 penalty on Sony for putting a large amount of personal and financial data of PSN clients at risk.

Credit card fraud
At the time, there were no verifiable reports of credit card fraud related to the outage. There were reports on the Internet that some PlayStation users had experienced credit card fraud, but these had yet to be linked to the incident. Users who registered a credit card for use only with Sony also reported credit card fraud. Sony said that the CSC codes requested by its services were not stored, but hackers may have been able to decrypt or record credit card details while inside Sony's network. Sony stated in its letter to the subcommittee:
On May 5, a letter from Sony Corporation of America CEO and President Sir Howard Stringer emphasized that there had been no evidence of credit card fraud and that a $1 million identity theft insurance policy would be available to PSN and Qriocity users:

Change to terms and conditions
It has been suggested that a change to the PSN terms and conditions announced on September 15, 2011, was motivated by the large damages being claimed in class action suits against Sony, in an effort to minimise the company's losses. The new agreement required users to give up their right to join together in a class action to sue Sony over any future security breach without first trying to resolve the dispute with an arbitrator.
This included any ongoing class action suits initiated prior to August 20, 2011. Another clause, which removed a user's right to trial by jury unless the user opted out of the clause (by sending a letter to Sony), says:
Sony guaranteed that a court of law in the respective country, in this case the US, would hold jurisdiction over any rules or changes in the Sony PSN ToS:

References

2011 crimes
PlayStation Network
Sony Interactive Entertainment
2010s internet outages
31611383
https://en.wikipedia.org/wiki/FreeBSD%20version%20history
FreeBSD version history
FreeBSD 1
Released in November 1993. 1.1.5.1 was released in July 1994.

FreeBSD 2
2.0-RELEASE was announced on 22 November 1994. The final release of FreeBSD 2, 2.2.8-RELEASE, was announced on 29 November 1998. FreeBSD 2.0 was the first version of FreeBSD claimed to be legally free of AT&T Unix code, with the approval of Novell, and the first version to be widely used at the beginning of the spread of Internet servers. 2.2.9-RELEASE was released on April 1, 2006 as a fully functional April Fools' Day prank.

FreeBSD 3
FreeBSD 3.0-RELEASE was announced on 16 October 1998. The final release, 3.5-RELEASE, was announced on 24 June 2000. FreeBSD 3.0 was the first branch able to support symmetric multiprocessing (SMP) systems, using a Giant lock, and marked the transition from a.out to ELF executables. USB support was first introduced with FreeBSD 3.1, and the first Gigabit network cards were supported in 3.2-RELEASE.

FreeBSD 4
4.0-RELEASE appeared in March 2000, and the last 4-STABLE branch release was 4.11 in January 2005, supported until 31 January 2007. FreeBSD 4 was lauded for its stability, was a favorite operating system for ISPs and web hosting providers during the first dot-com bubble, and is widely regarded as one of the most stable and high-performance operating systems of the whole Unix lineage. Among the new features of FreeBSD 4 were kqueue(2), an event notification interface since adopted by the other major BSD systems, and jails, a way of running processes in separate environments. Version 4.8 was forked by Matt Dillon to create DragonFly BSD.

FreeBSD 5
After almost three years of development, the first 5.0-RELEASE in January 2003 was widely anticipated, featuring support for advanced multiprocessor and application threading, and for the UltraSPARC and IA-64 platforms. The first 5-STABLE release was 5.3 (5.0 through 5.2.1 were cut from -CURRENT). The last release from the 5-STABLE branch was 5.5 in May 2006.
The largest architectural development in FreeBSD 5 was a major change in the low-level kernel locking mechanisms to enable better symmetric multi-processor (SMP) support. This released much of the kernel from the MP lock, sometimes called the Giant lock, so that more than one process could execute in kernel mode at the same time. Other major changes included an M:N native threading implementation called Kernel Scheduled Entities (KSE), similar in principle to Scheduler Activations. Starting with FreeBSD 5.3, KSE was the default threading implementation, until it was replaced with a 1:1 implementation in FreeBSD 7.0. FreeBSD 5 also significantly changed the block I/O layer by implementing the GEOM modular disk I/O request transformation framework contributed by Poul-Henning Kamp. GEOM enables the simple creation of many kinds of functionality, such as mirroring (gmirror) and encryption (GBDE and GELI). This work was supported through sponsorship by DARPA. While the early 5.x versions were not much more than developer previews, with pronounced instability, the 5.4 and 5.5 releases confirmed that the technologies introduced in the FreeBSD 5.x branch had a future in highly stable and high-performing releases.

FreeBSD 6
FreeBSD 6.0 was released on 4 November 2005. The final FreeBSD 6 release was 6.4, on 11 November 2008. These versions extended work on SMP and threading optimization, along with more work on advanced 802.11 functionality, TrustedBSD security event auditing, significant network stack performance enhancements, a fully preemptive kernel, and support for hardware performance counters (HWPMC).
The main accomplishments of these releases include removal of the Giant lock from VFS, implementation of a better-performing optional libthr library with 1:1 threading, and the addition of a Basic Security Module (BSM) audit implementation called OpenBSM, which was created by the TrustedBSD Project (based on the BSM implementation found in Apple's open source Darwin) and released under a BSD-style license.

FreeBSD 7
FreeBSD 7.0 was released on 27 February 2008. The final FreeBSD 7 release was 7.4, on 24 February 2011. New features included SCTP, UFS journaling, an experimental port of Sun's ZFS file system, GCC 4, improved support for the ARM architecture, jemalloc (a memory allocator optimized for parallel computation, which was ported to Firefox 3), and major updates and optimizations relating to network, audio, and SMP performance. Benchmarks showed significant performance improvements over previous FreeBSD releases as well as Linux. The new ULE scheduler was much improved, but a decision was made to ship the 7.0 release with the older 4BSD scheduler, leaving ULE as a kernel compile-time tunable; in FreeBSD 7.1, ULE became the default for the i386 and AMD64 architectures. DTrace support was integrated in version 7.1, and FreeBSD 7.2 brought support for multi-IPv4/IPv6 jails. Code supporting the DEC Alpha architecture (supported since FreeBSD 4.0) was removed in FreeBSD 7.0.

FreeBSD 8
FreeBSD 8.0 was officially released on 25 November 2009. FreeBSD 8 was branched from the trunk in August 2009. It features superpages, Xen DomU support, network stack virtualization, stack-smashing protection, a TTY layer rewrite, much updated and improved ZFS support, a new USB stack (with USB 3.0 and xHCI support added in FreeBSD 8.2), multicast updates including IGMPv3, a rewritten NFS client/server introducing NFSv4, and AES acceleration on supported Intel CPUs (added in FreeBSD 8.2).
Inclusion of improved device mmap() extensions enabled implementation of a 64-bit Nvidia display driver for the x86-64 platform. A pluggable congestion control framework, and support for using DTrace on applications running under Linux emulation, were added in FreeBSD 8.3. FreeBSD 8.4, released on 7 June 2013, was the final release from the FreeBSD 8 series.

FreeBSD 9
FreeBSD 9.0 was released on 12 January 2012. Key features of the release include a new installer (bsdinstall), UFS journaling, ZFS version 28, userland DTrace, an NFSv4-compatible NFS server and client, USB 3.0 support, support for running on the PlayStation 3, Capsicum sandboxing, and LLVM 3.0 in the base system. The kernel and base system could be built with Clang, but FreeBSD 9.0 still used GCC 4.2 by default. The PlayStation 4 video game console uses a derived version of FreeBSD 9.0, which Sony Computer Entertainment dubbed "Orbis OS". FreeBSD 9.1 was released on 31 December 2012, FreeBSD 9.2 on 30 September 2013, and FreeBSD 9.3 on 16 July 2014.

FreeBSD 10
On 20 January 2014, the FreeBSD Release Engineering Team announced the availability of FreeBSD 10.0-RELEASE. Key features include the deprecation of GCC in favor of Clang, a new iSCSI implementation, VirtIO drivers for out-of-the-box KVM support, and a FUSE implementation.

FreeBSD 10.1 Long Term Support Release
FreeBSD 10.1-RELEASE was announced on 14 November 2014 and was supported for an extended term until 31 December 2016. The subsequent 10.2-RELEASE reached end of life on the same day. In October 2017, 10.4-RELEASE (the final release of this branch) was announced, and support for the 10 series was terminated in October 2018.

FreeBSD 11
On 10 October 2016, the FreeBSD Release Engineering Team announced the availability of FreeBSD 11.0-RELEASE.

FreeBSD 12
FreeBSD 12.0-RELEASE was announced in December 2018.

Version history
The following table presents a version release history for the FreeBSD operating system.
Timeline The timeline shows that the span of a single release generation of FreeBSD lasts around five years. Since the FreeBSD project makes an effort to maintain binary backward (and limited forward) compatibility within the same release generation, this gives users five or more years of support, with trivial-to-easy upgrading within the release generation. References FreeBSD History of free and open-source software Lists of operating systems Software version histories
31613712
https://en.wikipedia.org/wiki/Fundamental%20Fysiks%20Group
Fundamental Fysiks Group
The Fundamental Fysiks Group was founded in San Francisco in May 1975 by two physicists, Elizabeth Rauscher and George Weissmann, at the time both graduate students at the University of California, Berkeley. The group held informal discussions on Friday afternoons to explore the philosophical implications of quantum theory. Leading members included Fritjof Capra, John Clauser, Philippe Eberhard, Nick Herbert, Jack Sarfatti, Saul-Paul Sirag, Henry Stapp, and Fred Alan Wolf. David Kaiser argues, in How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival (2011), that the group's meetings and papers helped to nurture the ideas in quantum physics that came to form the basis of quantum information science. Two reviewers wrote that Kaiser may have exaggerated the group's influence on the future of physics research, though one of them, Silvan Schweber, wrote that some of the group's contributions are easy to identify, such as Clauser's experimental evidence for non-locality attracting a share of the Wolf Prize in 2010, and the publication of Capra's The Tao of Physics (1975) and Gary Zukav's The Dancing Wu Li Masters (1979) attracting the interest of a wider audience. Kaiser writes that the group were "very smart and very playful", discussing quantum mysticism and becoming local celebrities in the Bay Area's counterculture. When Francis Ford Coppola bought City Magazine in 1975, one of its earliest features was on the Fundamental Fysiks Group, including a photo spread of Sirag, Wolf, Herbert, and Sarfatti. Research Bell's theorem and no-cloning theorem Hugh Gusterson writes that several challenging ideas lie at the heart of quantum physics: that electrons behave like waves and particles; that you can know a particle's location or momentum, but not both; that observing a particle changes its behavior; and that particles appear to communicate with each other across great distances, known as nonlocality and quantum entanglement. 
It is these concepts that led to the development of quantum information science and quantum encryption, which has been experimentally used, for example, to transfer money and electronic votes. Kaiser argues that the Fundamental Fysiks Group saved physics by exploring these ideas. In 1981, Nick Herbert, a member of the group, proposed a scheme for sending signals faster than the speed of light using quantum entanglement. Quantum computing pioneer Asher Peres writes that the refutation of Herbert's ideas led to the development of the no-cloning theorem by William Wootters, Wojciech Zurek, and Dennis Dieks. In a review of Kaiser's book in Physics Today, Silvan Schweber challenges Kaiser's views of the importance of the Fundamental Fysiks Group. He writes that Bell's theorem was not obscure during the preceding decade, but was worked on by authors such as John Clauser (who was a member of the group) and Eugene Wigner. Schweber also mentioned the work of Alain Aspect, which preceded Nick Herbert's 1981 proposal. Remote viewing Given quantum theory's perceived implications for the study of parapsychology and telepathy, the group cultivated patrons such as the Central Intelligence Agency, Defense Intelligence Agency, and the human potential movement. In 1972, the CIA and DIA set up a research program, jokingly called ESPionage, which financed experiments into remote viewing at the Stanford Research Institute, where the Fundamental Fysiks Group became what Kaiser calls its house theorists. The group also attempted in mid-1975 to independently reproduce the experiments done by SRI in the field; in particular, an experiment featuring one subject in the laboratory attempting to draw or describe a scene observed by a different individual at a remote location outside of the laboratory. An independent panel of judges was then to determine how close the produced images were to the target location.
These experiments were determined not to be statistically significant, though Kaiser notes that one subject showed detailed descriptions of other targets than the one in question at the time. See also Epistemological Letters The Men Who Stare at Goats Parapsychology research at SRI Notes Further reading "25th reunion of the Fundamental Physics Group", quantumtantra.com, accessed August 18, 2011. "Paranormal Science", The New York Times, November 6, 1974. "How the Hippies Saved Physics (Excerpt)", David Kaiser, Scientific American, June 27, 2011. Dizikes, Peter. "Hippie Days", MIT News, June 27, 2011. Wisnioski, Matthew. "Let's Be Fysiksists Again", Science, vol 332, issue 6037, 24 June 2011, pp. 1504–1505. Books Fritjof Capra. The Tao of Physics: An Exploration of the Parallels Between Modern Physics and Eastern Mysticism. Shambhala Publications, 1975. Amit Goswami. The Self-Aware Universe. Tarcher, 1995. David Kaiser. How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival. W. W. Norton, 2011, . Jack Sarfatti. Space-Time and Beyond, with Fred Alan Wolf and Bob Toben, E. P. Dutton, 1975. Evan Harris Walker. The Physics of Consciousness: The Quantum Mind and the Meaning of Life. Da Capo Press, 2000. Ken Wilber (ed). Quantum Questions: Mystical Writings of the World's Great Physicists. Shambhala Publications, 2001 (first published 1984). Gary Zukav. The Dancing Wu Li Masters. HarperOne, 2001 (first published 1979). American physicists Quantum information science 1975 establishments in the United States Quantum mysticism
31672755
https://en.wikipedia.org/wiki/Tails%20%28operating%20system%29
Tails (operating system)
Tails, or The Amnesic Incognito Live System, is a security-focused Debian-based Linux distribution aimed at preserving privacy and anonymity. It connects to the Internet exclusively through the anonymity network Tor. The system is designed to be booted as a live DVD or live USB, and leaves no digital footprint on the machine unless explicitly told to do so. It can also be run as a virtual machine, with some additional security risks. The Tor Project provided financial support for its development in the beginnings of the project, and continues to do so alongside numerous corporate and anonymous sponsors. History Tails was first released on 23 June 2009. It is the next iteration of development on Incognito, a discontinued Gentoo-based Linux distribution. The Tor Project provided financial support for its development in the beginnings of the project. Tails also received funding from the Open Technology Fund, Mozilla, and the Freedom of the Press Foundation. Laura Poitras, Glenn Greenwald, and Barton Gellman have each said that Tails was an important tool they used in their work with National Security Agency whistleblower Edward Snowden. From release 3.0, Tails requires a 64-bit processor to run. Features Tails's pre-installed desktop environment is GNOME 3. The system includes essential software for functions such as reading and editing documents, image editing, video watching and printing. Other software from Debian can be installed at the user's behest. Tails includes a unique variety of software that handles the encryption of files and internet transmissions, cryptographic signing and hashing, and other functions important to security. It is pre-configured to use Tor, with multiple connection options for Tor. It tries to force all connections to use Tor and blocks connection attempts outside Tor. For networking, it features the Tor Browser, instant messaging, email, file transmission and monitoring local network connections for security. 
By design, Tails is "amnesic". It runs in the computer's Random Access Memory (RAM) and does not write to a hard drive or other storage medium. The user may choose to keep files or applications on their Tails drive in "persistent storage", which is not hidden and is detectable by forensic analysis. While shutting down by normal or emergency means, Tails overwrites most of the used RAM to avoid a cold boot attack. Security incidents In 2014 Das Erste reported that the NSA's XKeyscore surveillance system sets threat definitions for people who search for Tails using a search engine or visit the Tails website. A comment in XKeyscore's source code calls Tails "a comsec [communications security] mechanism advocated by extremists on extremist forums". In the same year, Der Spiegel published slides from an internal National Security Agency presentation dating to June 2012, in which the NSA deemed Tails on its own as a "major threat" to its mission and in conjunction with other privacy tools as "catastrophic". In 2017, the FBI used malicious code developed by Facebook, identifying sexual extortionist and Tails user Buster Hernandez through a zero-day vulnerability in the default video player. The exploit was never explained to or discovered by the Tails developers, but it is believed that the vulnerability was patched in a later release of Tails. It was not easy to find Hernandez: for a long time, the FBI and Facebook had searched for him with no success, resorting to developing the custom hacking tool. 
See also Crypto-anarchism Dark web Deep web Freedom of information GlobaLeaks GNU Privacy Guard I2P Internet censorship Internet privacy Off-the-Record Messaging Proxy server Security-focused operating systems Tor (anonymity network) Tor2web Whonix References External links Anonymity networks Debian-based distributions Free security software I2P Operating system distributions bootable from read-only media Privacy software Tor (anonymity network) 2009 software Linux distributions
31673876
https://en.wikipedia.org/wiki/Variably%20Modified%20Permutation%20Composition
Variably Modified Permutation Composition
VMPC (Variably Modified Permutation Composition) is a stream cipher similar to the well-known and popular cipher RC4 designed by Ron Rivest. It was designed by Bartosz Żółtak and presented in 2004 at the Fast Software Encryption conference. VMPC is a modification of the RC4 cipher. The core of the cipher is the VMPC function, a transformation of n-element permutations defined as: for x from 0 to n-1: g(x) = VMPC(f)(x) = f(f(f(x))+1). The function was designed such that inverting it, i.e. obtaining f from g = VMPC(f), would be a complex problem. According to computer simulations, the average number of operations required to recover f from g for a 16-element permutation is about 2^11; for a 64-element permutation, about 2^53; and for a 256-element permutation, about 2^260. In 2006 at Cambridge University, Kamil Kulesza investigated the problem of inverting VMPC and concluded "results indicate that VMPC is not a good candidate for a cryptographic one-way function". The VMPC function is used in an encryption algorithm – the VMPC stream cipher. The algorithm allows for efficient implementations in software; to generate the keystream, do the following (all arithmetic is performed modulo 256):

i := 0
while GeneratingOutput:
    a := S[i]
    j := S[j + a]
    output S[S[S[j]] + 1]
    swap S[i] and S[j]    (b := S[j]; S[i] := b; S[j] := a)
    i := i + 1
endwhile

where the 256-element permutation S and the integer value j are obtained from the encryption password using the VMPC-KSA (Key Scheduling Algorithm).
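The keystream loop above can be sketched in Python. This is an illustrative implementation of the output-generation loop only; in the real cipher, the permutation S and the value j must come from the VMPC key scheduling algorithm, which is not reproduced here, so the identity permutation used in the usage note is purely a placeholder.

```python
def vmpc_keystream(S, j, n):
    """Generate n keystream bytes from a 256-element permutation S and an
    initial value j, following the loop in the text. In the real cipher,
    S and j come from the VMPC-KSA (not shown here)."""
    S = list(S)
    out = []
    i = 0
    for _ in range(n):
        a = S[i]
        j = S[(j + a) % 256]
        out.append(S[(S[S[j]] + 1) % 256])
        S[i], S[j] = S[j], S[i]   # swap S[i] and S[j]
        i = (i + 1) % 256
    return bytes(out)
```

With the (placeholder) identity permutation and j = 0, `vmpc_keystream(range(256), 0, 4)` yields the bytes 1, 2, 4, 6; a real keystream would use the password-derived state instead.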
References External links VMPC Homepage Original conference paper on VMPC from okna wrocław (PDF) Kamil Kulesza: On inverting the VMPC one-way function Unofficial C implementation of VMPC Stream cipher Unofficial Delphi implementation of VMPC Stream cipher https://eprint.iacr.org/2013/768.pdf VMPC-R: Cryptographically Secure Pseudo-Random Number Generator Alternative to RC4 https://eprint.iacr.org/2014/985.pdf Statistical weakness in Spritz against VMPC-R: in search for the RC4 replacement https://eprint.iacr.org/2014/315.pdf Statistical weaknesses in 20 RC4-like algorithms and (probably) the simplest algorithm free from these weaknesses - VMPC-R https://eprint.iacr.org/2019/041.pdf Message Authentication (MAC) Algorithm For The VMPC-R (RC4-like) Stream Cipher Stream ciphers
31678049
https://en.wikipedia.org/wiki/Routing%20and%20Remote%20Access%20Service
Routing and Remote Access Service
Routing and Remote Access Service (RRAS) is a Microsoft API and server software that makes it possible to create applications to administer the routing and remote access service capabilities of the operating system, to function as a network router. Developers can also use RRAS to implement routing protocols. The RRAS server functionality follows and builds upon the Remote Access Service (RAS) in Windows NT 4.0. Overview RRAS was introduced with Windows 2000 and offered as a download for Windows NT 4.0. Multiprotocol router - The computer running RRAS can route IP, IPX, and AppleTalk simultaneously. All routable protocols are configured from the same administrative utility. RRAS included two unicast routing protocols, Routing Information Protocol (RIP) and Open Shortest Path First (OSPF) as well as IGMP routing and forwarding features for IP multicasting. Demand-dial router - IP and IPX can be routed over on-demand or persistent WAN links such as analog phone lines or ISDN, or over VPN connections. Remote access server - provides remote access connectivity to dial-up or VPN remote access clients that use IP, IPX, AppleTalk, or NetBEUI. Routing services and remote access services used to work separately. Point-to-Point Protocol (PPP), the protocol suite commonly used to negotiate point-to-point connections, has allowed them to be combined. RRAS can be used to create client applications. These applications display RAS common dialog boxes, manage remote access connections and devices, and manipulate phone-book entries. Routing and Remote Access Service Management Pack The Routing and Remote Access Service Management Pack helps a network administrator monitor the status and availability of computers running Windows Server 2008 R2. Features introduced in Windows Server 2008 Server Manager – Application used to assist system administrators with installation, configuration, and management of other RRAS features. 
Secure Socket Tunneling Protocol VPN enforcement for Network Access Protection – Limits VPN connections to defined network services. IPv6 support – added PPPv6, L2TP, DHCPv6, and RADIUS technologies allowing them to work over IPv6. New cryptographic support – strengthened encryption algorithms to comply with U.S. government security requirements, in addition to removing algorithms which could not be strengthened. Removed technologies Bandwidth Allocation Protocol (BAP) was removed from Windows Vista, and disabled in Windows Server 2008. X.25. Serial Line Internet Protocol (SLIP). SLIP-based connections will automatically be updated to PPP-based connections. Asynchronous Transfer Mode (ATM) IP over IEEE 1394 NWLink IPX/SPX/NetBIOS Compatible Transport Protocol Services for Macintosh Open Shortest Path First (OSPF) routing protocol component in Routing and Remote Access Basic Firewall in RRAS (replaced with Windows Firewall) Static IP filter APIs for RRAS (replaced with Windows Filtering Platform APIs) The SPAP, EAP-MD5-CHAP, and MS-CHAP authentication protocols for PPP-based connections. See also Remote Access Service References External links Tech FAQ Microsoft application programming interfaces Microsoft server technology
31679498
https://en.wikipedia.org/wiki/Multicast%20encryption
Multicast encryption
Multicast is a technique that enables a node on a network to address one unit of data to a specific group of receivers. In interactive multicast at the data link or network layer, such as IP multicast, Ethernet multicast or the MBMS service over cellular networks, receivers may join and leave the group using an interaction channel. Only one copy of the data is sent from the source, and multiple copies are created and then sent to the desired recipients by the network infrastructure nodes. In IP multicast, for example, a multicast group is identified by a class D IP address. A host enters or exits a group using IGMP (Internet Group Management Protocol). A message sent via multicast is sent to all nodes on the network, but only the intended nodes accept the multicast frames. Multicasting is useful in situations such as video conferencing and online gaming. Multicast was used originally in LANs, with Ethernet being the best example. A problem with multicast communication is that it is difficult to guarantee that only designated receivers receive the data being sent. This is largely because multicast groups are always changing; users come and go at any time. A solution to the problem of ensuring that only the chosen recipients obtain the data is known as multicast encryption. ISO Standards The ISO (International Organization for Standardization) states that confidentiality, integrity, authentication, access control, and non-repudiation should all be considered when creating any secure system. Confidentiality: No unauthorized party can access appropriate messages. Integrity: Messages cannot be changed during transit without being discovered. Authentication: The message needs to be sent by the person/machine who claims to have sent it. Access control: Only those users enabled can access the data. Non-repudiation: The receiver can prove that the sender actually sent the message. To be secure, members who are just being added to the group must be restricted from viewing past data.
Also, members removed from a group must be prevented from accessing future data. Theories One proposed approach to such an encryption protocol is that, ideally, each member of a group should have a key which changes upon the entrance or exit of any member of the group. Another approach suggests a primary key supplemented by additional keys belonging to legitimate group members. One protocol called UFTP (encrypted UDP-based FTP over multicast) was created in an attempt to solve this problem. The protocol is designed in three phases: announce/register, file transfer, and completion/confirmation. The latest version, 5.0, was released on 22 April 2020, and the source code is available on its website. Current alternatives Today, one alternative in multicast encryption involves the use of symmetric key encryption, where data is decoded by intended receivers using a traffic encryption key (TEK). The TEK is changed any time a member joins or leaves the group. This is not feasible for large groups: users must be continuously connected to obtain the new keys. Another, more common method involves asymmetric keys. Here, a private key is shared, and the shares are given out asymmetrically: the initial member is given a number of shares, one of which is passed to each group member. If a member has a valid share of the key, he can view the message. See also Broadcast encryption References
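The TEK lifecycle described above can be modeled in a few lines. This is a toy sketch of the bookkeeping only (no actual encryption is performed, and the class name is invented for illustration): the point is that every join or leave replaces the TEK, which is what denies new members access to past data and removed members access to future data, at the cost of a rekey on every membership change.

```python
import os

class MulticastGroup:
    """Toy model of TEK-based group rekeying (illustrative only, not real
    cryptography): every membership change replaces the traffic
    encryption key."""
    def __init__(self):
        self.members = set()
        self.tek = os.urandom(16)   # current traffic encryption key
        self.history = [self.tek]   # past TEKs, kept here for illustration

    def _rekey(self):
        self.tek = os.urandom(16)
        self.history.append(self.tek)

    def join(self, member):
        self.members.add(member)
        self._rekey()               # newcomer cannot decrypt earlier traffic

    def leave(self, member):
        self.members.discard(member)
        self._rekey()               # departed member loses future access
```

The cost this sketch makes visible is the one the text notes: in a large, churning group, every member must receive the new TEK on every change, which is why this approach does not scale.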
31698050
https://en.wikipedia.org/wiki/Homomorphic%20signatures%20for%20network%20coding
Homomorphic signatures for network coding
Network coding has been shown to optimally use bandwidth in a network, maximizing information flow, but the scheme is inherently vulnerable to pollution attacks by malicious nodes in the network. A node injecting garbage can quickly affect many receivers. The pollution of network packets spreads quickly since the output of even an honest node is corrupted if at least one of the incoming packets is corrupted. An attacker can easily corrupt a packet even if it is encrypted, by either forging the signature or by producing a collision under the hash function. This gives an attacker access to the packets and the ability to corrupt them. Denis Charles, Kamal Jain and Kristin Lauter designed a new homomorphic signature scheme for use with network coding to prevent pollution attacks. The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority. In this scheme it is computationally infeasible for a node to sign a linear combination of the packets without disclosing what linear combination was used in the generation of the packet. Furthermore, the signature scheme is provably secure under well-known cryptographic assumptions: the hardness of the discrete logarithm problem and of the computational elliptic curve Diffie–Hellman problem. Network coding Let G = (V, E) be a directed graph, where V is a set whose elements are called vertices or nodes, and E is a set of ordered pairs of vertices, called arcs, directed edges, or arrows. A source s ∈ V wants to transmit a file to a set T ⊆ V of the vertices. One chooses a vector space W (say of dimension d) over the field F_p, where p is a prime, and views the data to be transmitted as a collection of vectors w_1, ..., w_k ∈ W. The source then creates the augmented vectors v_1, ..., v_k by setting v_i = (0, ..., 0, 1, 0, ..., 0, w_{i,1}, ..., w_{i,d}), where w_{i,j} is the j-th coordinate of the vector w_i. There are i − 1 zeros before the first '1' appears in v_i. One can assume without loss of generality that the vectors v_i are linearly independent.
We denote by V the linear subspace (of F_p^{k+d}) spanned by these vectors. Each outgoing edge e computes a linear combination, y(e), of the vectors entering the vertex v where the edge originates, that is to say y(e) = Σ_{f into v} m_e(f) y(f), where the coefficients m_e(f) lie in F_p. We consider the source as having k input edges carrying the k vectors v_i. By induction, one has that the vector y(e) on any edge is a linear combination y(e) = Σ_i g_i(e) v_i and is a vector in V. The k-dimensional vector g(e) = (g_1(e), ..., g_k(e)) is simply the first k coordinates of the vector y(e). We call the matrix whose rows are the vectors g(e_1), ..., g(e_k), where the e_i are the incoming edges for a vertex t ∈ T, the global encoding matrix for t and denote it as G_t. In practice the encoding vectors are chosen at random, so the matrix G_t is invertible with high probability. Thus, any receiver, on receiving y(e_1), ..., y(e_k), can find w_1, ..., w_k by solving the linear system with matrix G_t whose right-hand sides are the vectors formed by removing the first k coordinates of the vectors y(e_i). Decoding at the receiver Each receiver t ∈ T gets k vectors y_1, ..., y_k which are random linear combinations of the v_i’s. In fact, if y_i = Σ_j a_{i,j} v_j, then the matrix (a_{i,j}) is invertible with high probability, and we can invert the linear transformation to find the v_i’s. History Krohn, Freedman and Mazières proposed a scheme in 2004: suppose we have a hash function H such that: H is collision resistant – it is hard to find x and y such that H(x) = H(y); H is a homomorphism – H(x + y) = H(x) H(y). Then the server can securely distribute the hash values H(v_i) to each receiver, and to check whether a received vector y equals Σ_i a_i v_i, one can check whether H(y) = Π_i H(v_i)^{a_i}. The problem with this method is that the server needs to transfer secure information to each of the receivers. The hash values H(v_i) need to be transmitted to all the nodes in the network through a separate secure channel. H is expensive to compute, and secure transmission of the hash values is not economical either. Advantages of homomorphic signatures Establishes authentication in addition to detecting pollution. No need for distributing secure hash digests. Smaller bit lengths in general will suffice: signatures of length 180 bits have as much security as 1024-bit RSA signatures. Public information does not change for subsequent file transmission.
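The encode/decode cycle described above can be sketched over a small prime field. This is an illustrative toy (the helper names are invented): intermediate edges take random F_p-linear combinations of the augmented vectors, and a receiver that collects k combinations whose global encoding matrix is invertible recovers the original data by Gaussian elimination on the first k coordinates.

```python
import random

P = 257  # small prime field F_p, for illustration only

def augment(w_vectors):
    """Prefix each data vector w_i with the i-th unit vector (the v_i)."""
    k = len(w_vectors)
    return [[1 if j == i else 0 for j in range(k)] + list(w)
            for i, w in enumerate(w_vectors)]

def random_combination(vectors):
    """What an edge does: a random F_p-linear combination of its inputs."""
    coeffs = [random.randrange(P) for _ in vectors]
    n = len(vectors[0])
    return [sum(c * v[j] for c, v in zip(coeffs, vectors)) % P
            for j in range(n)]

def decode(received, k):
    """The first k coordinates of each packet form the global encoding
    matrix; reduce [G_t | data] to [I | W] by Gaussian elimination mod P."""
    rows = [list(r) for r in received]
    for col in range(k):
        pivot = next(r for r in range(col, k) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)     # Fermat inverse mod P
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P
                           for a, b in zip(rows[r], rows[col])]
    return [row[k:] for row in rows]
```

For example, with `w = [[5, 9], [12, 3]]`, two random combinations of `augment(w)` decode back to `w` whenever the 2×2 encoding matrix happens to be invertible, which, as the text notes, holds with high probability.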
Signature scheme The homomorphic property of the signatures allows nodes to sign any linear combination of the incoming packets without contacting the signing authority. Elliptic curve cryptography over a finite field Elliptic curve cryptography over a finite field is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. Let F_q be a finite field such that q is not a power of 2 or 3. Then an elliptic curve E over F_q is a curve given by an equation of the form y^2 = x^3 + ax + b, where a, b ∈ F_q are such that 4a^3 + 27b^2 ≠ 0. For a field K containing F_q, the set of K-rational points of E, together with the point at infinity O, forms an abelian group with O as identity. The group operations can be performed efficiently. Weil pairing The Weil pairing is a construction of roots of unity by means of functions on an elliptic curve E, in such a way as to constitute a pairing (a bilinear form, though with multiplicative notation) on the torsion subgroup of E. Let E be an elliptic curve over K and let K̄ be an algebraic closure of K. If m is an integer relatively prime to the characteristic of the field K, then the group of m-torsion points is E[m] = {P ∈ E(K̄) : mP = O}, and E[m] ≅ (Z/mZ) × (Z/mZ). There is a map e_m : E[m] × E[m] → μ_m (the group of m-th roots of unity) such that: (Bilinear) e_m(P + R, Q) = e_m(P, Q) e_m(R, Q) and e_m(P, Q + R) = e_m(P, Q) e_m(P, R). (Non-degenerate) e_m(P, Q) = 1 for all P implies that Q = O. (Alternating) e_m(P, P) = 1. Also, e_m can be computed efficiently. Homomorphic signatures Let p be a prime and q a prime power. Let V be a vector space of dimension d + k over F_p and let E be an elliptic curve over F_q whose p-torsion points are defined over F_q. Define h : V → E(F_q) by h(u_1, ..., u_{d+k}) = u_1 P_1 + ... + u_{d+k} P_{d+k} for chosen p-torsion points P_1, ..., P_{d+k}. The function h is an arbitrary homomorphism from V to E(F_q). The server chooses s_1, ..., s_{d+k} secretly in F_p and publishes a point Q of p-torsion such that e_p(P_i, Q) ≠ 1, and also publishes the points s_i Q for 1 ≤ i ≤ d + k. The signature of the vector v = (u_1, ..., u_{d+k}) is σ(v) = u_1 s_1 P_1 + ... + u_{d+k} s_{d+k} P_{d+k}. Note: This signature is homomorphic since the computation of h is a homomorphism. Signature verification Given v = (u_1, ..., u_{d+k}) and its signature σ, verify that e_p(σ, Q) = Π_i e_p(u_i P_i, s_i Q). The verification crucially uses the bilinearity of the Weil pairing. System setup The server computes and transmits the points s_i Q. At each edge e, while computing y(e) = Σ_f m_e(f) y(f), also compute σ(y(e)) = Σ_f m_e(f) σ(y(f)) on the elliptic curve. The signature is a point on the elliptic curve with coordinates in F_q.
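The homomorphic property that lets each edge sign its outgoing combination can be illustrated with a toy additive analogue. This sketch replaces the elliptic-curve p-torsion points of the actual scheme with plain arithmetic in Z_P (so it is not secure, and all names and parameters are invented for illustration); it only demonstrates that the signature of a linear combination equals the same linear combination of the signatures, which is exactly what a forwarding node exploits without contacting the signing authority.

```python
import random

P = 7919  # toy prime; the real scheme works in a p-torsion group on a curve

def keygen(n):
    """Secret per-coordinate scalars (toy stand-ins for the s_i and P_i)."""
    return [random.randrange(1, P) for _ in range(n)]

def sign(secret, v):
    """Toy homomorphic 'signature': a secret linear functional of v mod P."""
    return sum(s * u for s, u in zip(secret, v)) % P

def combine(vectors, coeffs):
    """What an edge does: a linear combination of incoming packets mod P."""
    return [sum(c * v[j] for c, v in zip(coeffs, vectors)) % P
            for j in range(len(vectors[0]))]
```

Because `sign` is linear, `sign(secret, combine([v1, v2], [m1, m2]))` equals `(m1*sign(secret, v1) + m2*sign(secret, v2)) % P`; in the real scheme the same identity holds with point addition on the curve, and security comes from the hardness of recovering the secret scalars.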
Thus the size of the signature is on the order of log q bits (which is some constant times log p bits, depending on the relative size of p and q), and this is the transmission overhead. The computational cost of producing the signature at each vertex is proportional to the in-degree of the vertex. Proof of security An attacker can produce a collision under the hash function if, given points P_1, ..., P_r in E[p], the attacker can find distinct vectors a = (a_1, ..., a_r) and b = (b_1, ..., b_r) in F_p^r such that Σ_i a_i P_i = Σ_i b_i P_i. Proposition: There is a polynomial-time reduction from the discrete logarithm on the cyclic group of order p on elliptic curves to Hash-Collision. If r = 2, then given P and Q = xP we feed the pair (P_1, P_2) = (P, Q) to the collision finder and get a_1 P + a_2 Q = b_1 P + b_2 Q. Thus (a_1 − b_1) P = (b_2 − a_2) Q. We claim that a_2 ≠ b_2. Suppose that a_2 = b_2; then we would have (a_1 − b_1) P = O, but P is a point of order p (a prime), thus a_1 ≡ b_1 (mod p). In other words a = b in F_p^2. This contradicts the assumption that a and b are distinct pairs. Thus we have that x = (a_1 − b_1)(b_2 − a_2)^{-1}, where the inverse is taken modulo p. If we have r > 2 then we can do one of two things. Either we can take P_1 = P and P_2 = Q as before and set P_i = O for i > 2 (in this case the proof reduces to the case when r = 2), or we can take P_1 = P and P_i = r_i Q, where the r_i are chosen at random from F_p. We get one equation in one unknown (the discrete log of Q). It is quite possible that the equation we get does not involve the unknown. However, this happens with very small probability, as we argue next. Suppose the algorithm for Hash-Collision gave us a collision with difference coefficients c_i = a_i − b_i. Then as long as Σ_{i>1} c_i r_i ≢ 0 (mod p), we can solve for the discrete log of Q. But the r_i's are unknown to the oracle for Hash-Collision, and so we can interchange the order in which this process occurs. In other words: given the c_i, not all zero, what is the probability that the r_i's we chose satisfy Σ_{i>1} c_i r_i ≡ 0 (mod p)? That probability is 1/p. Thus with high probability we can solve for the discrete log of Q. We have shown that producing hash collisions in this scheme is difficult. The other method by which an adversary can foil our system is by forging a signature. This scheme for the signature is essentially the aggregate signature version of the Boneh–Lynn–Shacham signature scheme.
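The r = 2 case of the collision-to-discrete-log reduction can be checked numerically in a toy multiplicative group standing in for the elliptic-curve group (all parameters here are invented toy values): from a collision a = (a1, a2), b = (b1, b2) with g^a1 · Q^a2 = g^b1 · Q^b2 and a2 ≠ b2, the discrete log of Q is (a1 − b1)(b2 − a2)^(-1) modulo the group order.

```python
# Subgroup of prime order q = 11 inside Z_23^*, generated by g = 4 (toy).
p, q, g = 23, 11, 4
x = 7                      # the discrete log we will "recover"
Q = pow(g, x, p)           # Q = g^x

# A collision under the 2-element hash h(a1, a2) = g^a1 * Q^a2 mod p:
a1, a2 = 1, 2
b1, b2 = 8, 1
assert pow(g, a1, p) * pow(Q, a2, p) % p == pow(g, b1, p) * pow(Q, b2, p) % p

# Recover x exactly as in the reduction, inverting (b2 - a2) mod q:
recovered = (a1 - b1) * pow(b2 - a2, q - 2, q) % q
assert recovered == x
```

The same algebra goes through verbatim on an elliptic curve; only the group operation changes, which is why the reduction says nothing specific about curves beyond the prime order of the point P.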
Here it is shown that forging a signature is at least as hard as solving the elliptic curve Diffie–Hellman problem. The only known way to solve this problem on elliptic curves is via computing discrete-logs. Thus forging a signature is at least as hard as solving the computational co-Diffie–Hellman on elliptic curves and probably as hard as computing discrete-logs. See also Network coding Homomorphic encryption Elliptic curve cryptography Weil pairing Elliptic curve Diffie–Hellman Elliptic curve DSA Digital Signature Algorithm References External links Comprehensive View of a Live Network Coding P2P System Signatures for Network Coding(presentation) CISS 2006, Princeton University at Buffalo Lecture Notes on Coding Theory – Dr. Atri Rudra Finite fields Coding theory Information theory Error detection and correction
31722495
https://en.wikipedia.org/wiki/Woo%E2%80%93Lam
Woo–Lam
In cryptography, Woo–Lam refers to various computer network authentication protocols designed by Simon S. Lam and Thomas Woo. The protocols enable two communicating parties to authenticate each other's identity and to exchange session keys, and involve the use of a trusted key distribution center (KDC) to negotiate between the parties. Both symmetric-key and public-key variants have been described. However, the protocols suffer from various security flaws, and in part have been described as being inefficient compared to alternative authentication protocols. Public-key protocol Notation The following notation is used to describe the algorithm: A, B - network nodes. K_A - public key of node A. K_A^{-1} - private key of A. N_A - nonce chosen by A. ID_A - unique identifier of A. E_K - public-key encryption using key K. S_K - digital signature using key K. k - random session key chosen by the KDC. , - concatenation. It is assumed that all parties know the KDC's public key. Message exchange The original version of the protocol had the identifier omitted from lines 5 and 6, which did not account for the fact that N_A is unique only among nonces generated by A and not by other parties. The protocol was revised after the authors themselves spotted a flaw in the algorithm. See also Kerberos Needham–Schroeder protocol Otway–Rees protocol References Computer network security Authentication methods
31723736
https://en.wikipedia.org/wiki/IAIK-JCE
IAIK-JCE
IAIK-JCE is a Java-based Cryptographic Service Provider, which is being developed at the Institute for Applied Information Processing and Communications (IAIK) at the Graz University of Technology. It offers support for many commonly used cryptographic algorithms, such as hash functions, message authentication codes, symmetric, asymmetric, stream and block encryption. Its development started in 1996 and as such IAIK-JCE was one of the first Java-based cryptography providers. It is written entirely in Java and based on the same design principles as Oracle's JCA/JCE. License Next to a commercial license, IAIK-JCE can also be obtained freely for academic purposes, evaluation and open-source development. See also Java Cryptography Architecture Java Cryptography Extension External links IAIK-JCE Cryptographic software Java (programming language) libraries
31786627
https://en.wikipedia.org/wiki/Braintree%20%28company%29
Braintree (company)
Braintree is a company based in Chicago that specializes in mobile and web payment systems for e-commerce companies. Braintree provides clients with a merchant account and a payment gateway. The company was acquired by PayPal on September 26, 2013. History Braintree was founded by Bryan Johnson in 2007. By 2011, the company ranked 47th on Inc. magazine's annual list of the 500 fastest-growing companies. In that year, Bill Ready joined the company as CEO. Johnson remained as chairman. In 2012, Braintree acquired Venmo for $26.2 million. A year later, PayPal, then part of eBay, acquired Braintree for $800 million. In August 2015, PayPal acquired Chicago-based mobile commerce company Modest and rolled Modest's products into Braintree's offerings. Braintree first expanded internationally in 2012, when it announced it would begin providing services in Australia. The company began serving Europe and Canada in August 2013, and announced support in Hong Kong, Singapore, and Malaysia in 2015. By late 2015, Braintree was processing nearly $50 billion in Authorized Payment Volume, up from $12 billion at the time it was acquired by PayPal, and had 154 million cards on file, up from 56.5 million. In 2020, Braintree processed payments in 45 countries and regions. Products and services Braintree provides businesses with the ability to accept payments online or within their mobile application. On October 1, 2012, Braintree launched instant signup, streamlining the signup process for US merchants to a few minutes. Braintree announced the v.zero SDK in July 2014. It allows automatic shopping cart integration with PayPal among other payment types. In September 2014, the company announced a partnership with Coinbase to accept Bitcoin. GitHub and ParkWhiz are among the companies that launched with the v.zero SDK, which supports "One Touch Payments". PayPal's mobile also allows One Touch Payments. 
It does not require those users to create an account on an e-commerce site or enter credit card details every time they want to buy something. The concept of One Touch is based on a prior product called Venmo Touch, which was developed in conjunction with Venmo, the payment service Braintree bought in August 2012. Venmo Touch was the first one-touch mobile buying experience to hit the market.

Integration
Braintree requires some development experience and coding knowledge to integrate. Braintree provides client libraries and integration examples in Ruby, Python, PHP, Java, .NET, and Node.js; mobile libraries for iOS and Android; and Braintree.js for in-browser card encryption. Braintree also works with most of the leading ecommerce and billing platforms, including BigCommerce, WooCommerce, and Magento.

Credit card data portability
Braintree initiated the credit card data portability standard in 2010, which was accepted as an official action group of the DataPortability project. Credit card data portability is supported by an opt-in community of electronic payment processing providers that agree to provide credit card data and associated customer information to an existing merchant upon request in a PCI compliant manner.

See also
Payment service provider

References

External links
Braintree (company website)

Categories: PayPal; Financial services companies established in 2007; Companies based in Chicago; Mobile payments; Payment service providers; 2013 mergers and acquisitions; Online payments
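The integration pattern described above (card data is tokenized client-side into a single-use nonce, which the merchant's server then uses to create a transaction) can be sketched with a minimal in-memory mock. All class and method names here are hypothetical illustrations and are not Braintree's actual SDK API:

```python
import uuid

class MockGateway:
    """Hypothetical stand-in for a payment gateway, for illustration only."""

    def __init__(self):
        self._vault = {}  # nonce -> card data, held by the gateway

    def tokenize_card(self, card_number):
        """Client-side step: exchange raw card data for a one-time nonce,
        so the merchant's server never sees the card number."""
        nonce = uuid.uuid4().hex
        self._vault[nonce] = card_number
        return nonce

    def sale(self, amount, payment_method_nonce):
        """Server-side step: charge using only the nonce."""
        if payment_method_nonce not in self._vault:
            return {"success": False, "error": "unknown nonce"}
        del self._vault[payment_method_nonce]  # nonces are single-use
        return {"success": True, "amount": amount}

gateway = MockGateway()
nonce = gateway.tokenize_card("4111111111111111")
result = gateway.sale("10.00", nonce)
print(result["success"])             # True
print(gateway.sale("10.00", nonce))  # nonce already spent, so this fails
```

The design point is that the merchant's server only ever handles the nonce, which is what lets a gateway keep most of the PCI compliance burden off the merchant.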
31801377
https://en.wikipedia.org/wiki/Ultrasurf
Ultrasurf
UltraSurf is a freeware Internet censorship circumvention product created by UltraReach Internet Corporation. The software bypasses Internet censorship and firewalls using an HTTP proxy server, and employs encryption protocols for privacy. The software was developed by two different groups of Falun Gong practitioners at the same time, one starting in the US in 2002 by expatriate Chinese. The software was designed as a means of allowing internet users to bypass the Great Firewall of China. It currently boasts as many as 11 million users worldwide. The tool has been described as "one of the most important free-speech tools on the Internet" by Wired, and as the "best performing" circumvention tool by Harvard University in a 2007 study; a 2011 study by Freedom House ranked it fourth. Critics in the open-source community have expressed concern about the software's closed-source nature and alleged security through obscurity design; UltraReach says their security considerations mean they prefer third party expert review to open source review.

Overview
In 2001, UltraReach was founded by Chinese dissidents in Silicon Valley. Shortly after, UltraSurf was created to allow internet users in China to evade government censorship and monitoring. As of 2011 UltraSurf reported over eleven million users worldwide. During the Arab Spring, UltraReach recorded a 700 percent spike in traffic from Tunisia. Similar traffic spikes occur frequently during times of unrest in other regions, such as Tibet and Burma during the Saffron Revolution. Wired magazine in 2010 called UltraSurf "one of the most important free-speech tools on the Internet" for enabling citizens to access and share information from oppressed countries during times of humanitarian or human rights crises. UltraSurf is funded, in part, through contracts with the U.S. government's Broadcasting Board of Governors, which administers Voice of America and Radio Free Asia.
As of 2012, UltraReach has had difficulty serving its growing user base due to insufficient funding.

Operation

Client software
UltraSurf is free to download and requires no installation. UltraSurf does not install any files on the user's computer and leaves no registry edits after it exits. In other words, it leaves no trace of its use. To fully remove the software from the computer, a user needs only to delete the exe file named u.exe. It is only available on a Windows platform, runs through Internet Explorer by default, and has an optional plug-in for Firefox and Chrome. The UltraReach website notes that "Some anti-virus software companies misclassify UltraSurf as a malware or Trojan because UltraSurf encrypts the communications and circumvents internet censorship." Some security companies have agreed to whitelist UltraSurf. According to Appelbaum, the UltraSurf client uses anti-debugging techniques and also employs executable compression. The client acts as a local proxy which communicates with the UltraReach network through what appears to be an obfuscated form of TLS/SSL.

UltraSurf servers
The software works by creating an encrypted HTTP tunnel between the user's computer and a central pool of proxy servers, enabling users to bypass firewalls and censorship. UltraReach hosts all of its own servers. The software makes use of sophisticated, proprietary anti-blocking technology to overcome filtering and censorship online. According to Wired magazine, UltraSurf changes the "IP addresses of their proxy servers up to 10,000 times an hour." On the server-side, a 2011 analysis found that the UltraReach network employed squid and ziproxy software, as well as ISC BIND servers bootstrapping for a wider network of open recursive DNS servers, the latter not under UltraReach control. UltraSurf is designed primarily as an anti-censorship tool but also offers privacy protections in the form of industry standard encryption, with an added layer of obfuscation built in.
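The tunneling idea described above can be illustrated with a toy sketch: the client wraps a browser request in an encryption layer before it crosses the censored network, and the remote proxy unwraps it and fetches the page on the user's behalf. A simple XOR keystream stands in for the TLS-like layer here; this is purely conceptual and is not UltraSurf's actual protocol:

```python
from itertools import cycle

KEY = b"shared-session-key"  # hypothetical pre-established session key

def wrap(plaintext: bytes) -> bytes:
    """Client side: obfuscate the request so on-path keyword filters
    (e.g. matching a blocked hostname) see only ciphertext."""
    return bytes(b ^ k for b, k in zip(plaintext, cycle(KEY)))

def unwrap(ciphertext: bytes) -> bytes:
    """Proxy side: applying the same keystream recovers the request."""
    return bytes(b ^ k for b, k in zip(ciphertext, cycle(KEY)))

request = b"GET http://blocked.example/ HTTP/1.1\r\n\r\n"
on_the_wire = wrap(request)

assert b"blocked.example" not in on_the_wire  # the filter sees no keyword
assert unwrap(on_the_wire) == request         # the proxy recovers the request
```

A real circumvention tool replaces the toy XOR layer with authenticated encryption and adds the anti-blocking measures described above, such as rapidly rotating proxy addresses.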
UltraReach uses an internal content filter which blocks some sites, such as those deemed pornographic or otherwise offensive. According to Wired magazine: "That's partly because their network lacks the bandwidth to accommodate so much data-heavy traffic, but also because Falun Gong frowns on erotica." Additionally, the Falun Gong criticism website facts.org.cn, alleged to be operated by the Chinese government, is also unreachable through UltraSurf.

Evaluation
In a 2007 study, Harvard University's Berkman Center for Internet & Society found UltraSurf to be the "best performing" of all tested circumvention tools during in-country tests, and recommended it for widespread use. In particular, the report found that UltraSurf effectively bypassed various forms of censorship and blocking, including IP blocking, DNS blocking, and keyword filtering. It was also the fastest tool during in-country tests, and was noted for being easy to use and install with a simple user interface. The report noted, however, that UltraReach is designed primarily as a circumvention product, rather than as an anonymity tool, and suggested that users concerned about anonymity should disable browser support for active content when using UltraSurf. A 2011 report by the U.S.-based human rights group Freedom House ranked UltraSurf fourth overall among censorship circumvention and privacy tools, as measured by a combination of performance, usability, support and security. In particular, the tool was recommended for users interested in downloading or viewing information, who required a relatively high degree of privacy, and who favored a fast connection speed. Some technologists have expressed reservations about the UltraReach model, however. In particular, its developers have been criticized by proponents of open-source software for not allowing peer review of the tool's design, except at the discretion of its creators.
Moreover, because UltraReach operates all its own servers, its developers have access to user logs. This architecture means that users are required to trust UltraReach not to reveal user data. UltraReach maintains that it keeps logs for a short period of time, and uses them only for the purpose of analyzing traffic for signs of interference or to monitor overall performance and efficacy; the company says it does not disclose user logs to third parties. According to Jacob Appelbaum of the Tor Project, this essentially amounts to an example of "privacy by policy". In an April 2012 report, Appelbaum further criticized UltraSurf for its use of internal content filtering (including blocking pornographic websites), and for its willingness to comply with subpoenas from U.S. law enforcement officials. Appelbaum's report also noted that UltraSurf pages employed Google Analytics, which had the potential to leak user data, and that its systems were not all up to date with the latest security patches and did not make use of forward security mechanisms. Furthermore, Appelbaum claims that "The UltraSurf client uses Open and Free Software including Putty and zlib. The use of both Putty and zlib is not disclosed. This use and lack of disclosure is a violation of the licenses." In a response posted the same day, UltraReach wrote that it had already resolved these issues. It asserted that Appelbaum's report had misrepresented or misunderstood other aspects of its software. UltraReach also argued that the differences between Tor and UltraSurf were at base philosophical, representing simply different approaches to censorship circumvention. A top-secret NSA presentation revealed as part of the 2013 global surveillance disclosures dismisses this response by UltraSurf as "all talk and no show".
Due to restrictions imposed by some organizations, McAfee VirusScan flags some UltraSurf versions as a potentially unwanted program, preventing its execution on those machines.

See also
Internet censorship
Internet censorship circumvention
Internet censorship in the People's Republic of China
Bypassing content-control filters
Bypassing the Great Firewall of China
Freegate

References

External links
How to Bypass Internet Censorship, a FLOSS Manual, 10 March 2011, 240 pp.

Categories: Internet censorship; Proxy servers; Anonymity networks; Internet privacy software; Windows Internet software; Falun Gong
31808616
https://en.wikipedia.org/wiki/ISO/IEC%20JTC%201/SC%2027
ISO/IEC JTC 1/SC 27
ISO/IEC JTC 1/SC 27 Information security, cybersecurity and privacy protection is a standardization subcommittee of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). ISO/IEC JTC 1/SC 27 develops International Standards, Technical Reports, and Technical Specifications within the field of information security. Standardization activity by this subcommittee includes general methods, management system requirements, techniques and guidelines to address information security, cybersecurity and privacy. Drafts of International Standards by ISO/IEC JTC 1 or any of its subcommittees are sent out to participating national standardization bodies for ballot, comments and contributions. Publication as an ISO/IEC International Standard requires approval by a minimum of 75% of the national bodies casting a vote. The international secretariat of ISO/IEC JTC 1/SC 27 is the Deutsches Institut für Normung (DIN), located in Germany.

History
ISO/IEC JTC 1/SC 27 was founded by ISO/IEC JTC 1 in 1990. The subcommittee was formed when ISO/IEC JTC 1/SC 20, which covered standardization within the field of security techniques, namely "secret-key techniques" (ISO/IEC JTC 1/SC 20/WG 1), "public-key techniques" (ISO/IEC JTC 1/SC 20/WG 2), and "data encryption protocols" (ISO/IEC JTC 1/SC 20/WG 3), was disbanded. This allowed ISO/IEC JTC 1/SC 27 to take over the work of ISO/IEC JTC 1/SC 20 (specifically that of its first two working groups) as well as to extend its scope to other areas within the field of IT security techniques. Since 1990, the subcommittee has extended or altered its scope and working groups to meet current standardization demands. ISO/IEC JTC 1/SC 27, which started with three working groups, eventually expanded its structure to contain five. The two new working groups were added in April 2006, at the 17th Plenary Meeting in Madrid, Spain.
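The approval threshold mentioned above can be expressed directly. Note that this sketch implements only the rule as stated in this article (75% of the national bodies casting a vote); the actual ISO/IEC directives add further conditions, such as limits on negative votes:

```python
def draft_passes(votes_cast: int, approvals: int) -> bool:
    """Approval rule as described above: at least 75% of the national
    bodies casting a vote must approve. Abstentions are not cast votes."""
    return votes_cast > 0 and approvals / votes_cast >= 0.75

print(draft_passes(20, 15))  # exactly 75% approve
print(draft_passes(20, 14))  # only 70% approve
```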
Scope
The scope of ISO/IEC JTC 1/SC 27 is "The development of standards for the protection of information and ICT. This includes generic methods, techniques and guidelines to address both security and privacy aspects, such as: Security requirements capture methodology; Management of information and ICT security; in particular information security management systems, security processes, security controls and services; Cryptographic and other security mechanisms, including but not limited to mechanisms for protecting the accountability, availability, integrity and confidentiality of information; Security management support documentation including terminology, guidelines as well as procedures for the registration of security components; Security aspects of identity management, biometrics and privacy; Conformance assessment, accreditation and auditing requirements in the area of information security management systems; Security evaluation criteria and methodology. SC 27 engages in active liaison and collaboration with appropriate bodies to ensure the proper development and application of SC 27 standards and technical reports in relevant areas."

Structure
ISO/IEC JTC 1/SC 27 is made up of five working groups (WG), each of which is responsible for the technical development of information and IT security standards within the programme of work of ISO/IEC JTC 1/SC 27. In addition, ISO/IEC JTC 1/SC 27 has two special working groups (SWG): (i) SWG-M, which operates under the direction of ISO/IEC JTC 1/SC 27 with the primary task of reviewing and evaluating the organizational effectiveness of ISO/IEC JTC 1/SC 27 processes and mode of operations; and (ii) SWG-T, which operates under the direction of ISO/IEC JTC 1/SC 27 to address topics beyond the scope of the respective existing WGs or that can affect directly or indirectly multiple WGs.
ISO/IEC JTC 1/SC 27 also has a Communications Officer whose role is to promote the work of ISO/IEC JTC 1/SC 27 through different channels: press releases and articles, conferences and workshops, interactive ISO chat forums and other media channels. The focus of each working group is described in the group's terms of reference. Working groups of ISO/IEC JTC 1/SC 27 are:

Collaborations
ISO/IEC JTC 1/SC 27 works in close collaboration with a number of other organizations or subcommittees, both internal and external to ISO or IEC, in order to avoid conflicting or duplicative work. Organizations internal to ISO or IEC that collaborate with or are in liaison to ISO/IEC JTC 1/SC 27 include:
ISO/IEC JTC 1/SWG 6, Management
ISO/IEC JTC 1/WG 7, Sensor networks
ISO/IEC JTC 1/WG 9, Big Data
ISO/IEC JTC 1/WG 10, Internet of Things (IoT)
ISO/IEC JTC 1/SC 6, Telecommunications and information exchange between systems
ISO/IEC JTC 1/SC 7, Software and systems engineering
ISO/IEC JTC 1/SC 17, Cards and personal identification
ISO/IEC JTC 1/SC 22, Programming languages, their environments and system software interfaces
ISO/IEC JTC 1/SC 25, Interconnection of information technology equipment
ISO/IEC JTC 1/SC 31, Automatic identification and data capture techniques
ISO/IEC JTC 1/SC 36, Information technology for learning, education and training
ISO/IEC JTC 1/SC 37, Biometrics
ISO/IEC JTC 1/SC 38, Cloud computing and distributed platforms
ISO/IEC JTC 1/SC 40, IT Service Management and IT Governance
ISO/TC 8, Ships and marine technology
ISO/TC 46, Information and documentation
ISO/TC 46/SC 11, Archives/records management
ISO/TC 68, Financial services
ISO/TC 68/SC 2, Financial Services, security
ISO/TC 68/SC 7, Core banking
ISO/TC 171, Document management applications
ISO/TC 176, Quality management and quality assurance
ISO/TC 176/SC 3, Supporting technologies
ISO/TC 204, Intelligent transport systems
ISO/TC 215, Health informatics
ISO/TC 251, Asset management
ISO/TC 259, Outsourcing
ISO/TC 262, Risk management
ISO/TC 272, Forensic sciences
ISO/TC 292, Security and resilience
ISO/CASCO, Committee on Conformity Assessments
ISO/TMB/JTCG, Joint technical Coordination Group on MSS
ISO/TMB/SAG EE 1, Strategic Advisory Group on Energy Efficiency
IEC/SC 45A, Instrumentation, control and electrical systems of nuclear facilities
IEC/TC 57, Power systems management and associated information exchange
IEC/TC 65, Industrial-process measurement, control and automation
IEC Advisory Committee on Information security and data privacy (ACSEC)

Some organizations external to ISO or IEC that collaborate with or are in liaison to ISO/IEC JTC 1/SC 27 include:
Attribute-based Credentials for Trust (ABC4Trust)
Article 29 Data Protection Working Party
Common Criteria Development Board (CCDB)
Consortium of Digital Forensic Specialists (CDFS)
CEN/TC 377
CEN/PC 428, e-Competence and ICT professionalism
Cloud Security Alliance (CSA)
Cloud Standards Customer Council (CSCC)
Common Study Center of Telediffusion and Telecommunication (CCETT)
The Cyber Security Naming & Information Structure Groups (Cyber Security)
Ecma International
European Committee for Banking Standards (ECBS)
European Network and Information Security Agency (ENISA)
European Payments Council (EPC)
European Telecommunications Standards Institute (ETSI)
European Data Centre Association (EUDCA)
Eurocloud
Future of Identity in the Information Society (FIDIS)
Forum of Incident Response and Security Teams (FIRST)
Information Security Forum (ISF)
Latinoamerican Institute for Quality Assurance (INLAC)
Institute of Electrical and Electronics Engineers (IEEE)
International Conference of Data Protection and Privacy Commissioners
International Information Systems Security Certification Consortium ((ISC)2)
International Smart Card Certification Initiatives (ISCI)
The International Society of Automation (ISA)
INTERPOL
ISACA
International Standardized Commercial Identifier (ISCI)
ITU-T
Kantara Initiative
MasterCard
PReparing Industry to Privacy-by-design by supporting its Application in REsearch (PRIPARE)
Technology-supported Risk Estimation by Predictive Assessment of Socio-technical Security (TREsPASS)
Privacy and Identity Management for Community Services (PICOS)
Privacy-Preserving Computation in the Cloud (PRACTICE)
The Open Group
The OpenID Foundation (OIDF)
TeleManagement Forum (TMForum)
Trusted Computing Group (TCG)
Visa

Member countries
Countries pay a fee to ISO to be members of subcommittees. The 51 "P" (participating) members of ISO/IEC JTC 1/SC 27 are: Algeria, Argentina, Australia, Austria, Belgium, Brazil, Canada, Chile, China, Cyprus, Czech Republic, Côte d'Ivoire, Denmark, Finland, France, Germany, India, Ireland, Israel, Italy, Jamaica, Japan, Kazakhstan, Kenya, Republic of Korea, Luxembourg, Malaysia, Mauritius, Mexico, Netherlands, New Zealand, Norway, Peru, Poland, Romania, Russian Federation, Rwanda, Singapore, Slovakia, South Africa, Spain, Sri Lanka, Sweden, Switzerland, Thailand, the Republic of Macedonia, Ukraine, United Arab Emirates, United Kingdom, United States of America, and Uruguay. The 20 "O" (observing) members of ISO/IEC JTC 1/SC 27 are: Belarus, Bosnia and Herzegovina, Costa Rica, El Salvador, Estonia, Ghana, Hong Kong, Hungary, Iceland, Indonesia, Islamic Republic of Iran, Lithuania, Morocco, State of Palestine, Portugal, Saudi Arabia, Serbia, Slovenia, Swaziland, and Turkey.
As of August 2014, the spread of meeting locations since Spring 1990 has been as shown below:

Published standards
ISO/IEC JTC 1/SC 27 currently has 147 published standards within the field of IT security techniques, including:

See also
ISO/IEC JTC1
List of ISO standards
Deutsches Institut für Normung
International Organization for Standardization
International Electrotechnical Commission

References

External links
ISO/IEC JTC 1/SC 27 home page
ISO/IEC JTC 1/SC 27 page at ISO
ISO/IEC Joint Technical Committee 1 - Information Technology (public website)
ISO/IEC Joint Technical Committee 1 (Livelink password-protected available documents)
ISO/IEC Joint Technical Committee 1 (freely available documents), JTC 1 Supplement, Standing Documents and Templates
ISO and IEC procedural documentation
ISO DB Patents (including JTC 1 patents)
ITU-T Study Group 17 (SG17)
ISO International Organization for Standardization
IEC International Electrotechnical Commission
Access to ISO/IEC JTC 1/SC 27 Freely Available Standards

Categories: Identity management initiative; Information assurance standards
31821270
https://en.wikipedia.org/wiki/EMBRACE%20Healthcare%20Reform%20Plan
EMBRACE Healthcare Reform Plan
The Expanding Medical and Behavioral Resources with Access to Care for Everyone (EMBRACE) plan is a healthcare system reform proposal introduced by a group called Healthcare Professionals for Healthcare Reform (HPfHR). The plan incorporates elements of private health insurance, single-payer and fee-for-service models in one comprehensive system. It has been referred to as a "Single System" healthcare system. First published in the Annals of Internal Medicine in April 2009, the plan received some early discussion in the healthcare community, but appeared to have come out too late to have had any impact on the development of the Patient Protection and Affordable Care Act (PPACA), the 111th Congress' landmark health insurance reform legislation. A book outlining the EMBRACE plan in more detail was authored in 2016 by Dr. Gilead Lancaster, a cofounder of HPfHR.

The origins of EMBRACE
In 2007 HPfHR was established in an effort to advise politicians on healthcare issues from the point of view of healthcare professionals. They felt that the only effective way to fix the American healthcare system was with a complete overhaul based on science-based guidelines, also known as evidence-based medicine. The group identified five important parts of the American healthcare system that they felt needed to be addressed in their new system. These included inefficiencies in medical offices and hospitals due to a cumbersome insurance and reimbursement system; coverage of the entire United States population for basic healthcare services while preserving the quality and feel of the current delivery system of healthcare; promotion and integration of scientifically validated diagnostic and therapeutic modalities into the system so it becomes the driving force of the healthcare system; and depoliticizing healthcare and allowing for a more manageable way to finance it.
In addition, the group felt that it was important that the plan be completely portable throughout the country and not depend on income, age or employment status.

The scope of reform under EMBRACE
The EMBRACE system would require a comprehensive reorganization of the entire United States healthcare system, but would attempt to preserve important elements of the current infrastructure. Current Procedural Terminology (CPT) and International Statistical Classification of Diseases and Related Health Problems (ICD) codes that are currently used to report services and determine reimbursement to doctors, hospitals and other care providers would be maintained. There would also be an attempt to allow doctors and other healthcare providers to keep private offices and clinics as independent businesses. The new system would change four fundamental things: it would classify diseases and their therapies into three distinct tiers; separate private insurance from public insurance but keep them in the same system; create a politically quasi-independent "healthcare board" funded by Congress to supervise the U.S. healthcare system; and develop a simplified web-based electronic billing and reimbursement system. These fundamental reforms would change many other aspects of the current healthcare system. For example, healthcare coverage would be completely portable from job to job and from state to state and would not be tied to employment.

The Tier system
EMBRACE would establish three tiers of diagnoses and treatments founded on evidence-based medicine (EBM), and its funding would be tier-specific and separate: The base level (Tier 1) would cover all medical, surgical and psychiatric therapies shown to be life saving, life sustaining and/or preventative, and would cover the entire population "from cradle to grave" without registration, deductibles or fee payments.
It would also be completely portable and independent of employment status, economic status, race, gender or pre-existing conditions. Funding of Tier 1 services would be overseen by a healthcare board (see below) that is in turn funded by Congress. The method of raising this revenue could be similar to the present funding of Medicare (e.g. Federal Insurance Contributions Act tax) and Medicaid. Since there will be no requirement for employer-based insurance under EMBRACE, payroll taxes (indexed to salary), a tax on businesses based on the number of employees (and their wages) or a combination of these could also be considered. Tier 2 would cover all conditions affecting quality of life and their therapies. In addition, this tier will include all services of Tier 1 conditions and treatments that do not have sufficient evidence for a Tier 1 indication. Private insurance carriers would be invited to cover Tier 2 services through a menu of plans developed by the Board that is similar to the Medigap Plans A to N now offered through the Centers for Medicare & Medicaid Services. Although each insurance carrier does not have to offer all the plans listed on the menu, the plans that are offered by the insurance carrier must cover all the services stipulated by the Board. This assures that consumers (whether state governments, unions, employers or individuals) can compare the price of the plans and can be confident of the scope of their coverage. In addition, if an insurance provider offers a specific plan in one state, it will be required to offer it in all other states; assuring portability of all tier 2 coverage. Except for these two stipulations, the private insurance provider will be free to set their fee (on an individual basis), set deductibles and co-pays and even deny coverage. 
The Tier 2 plans can be broad (covering most Tier 2 services) or can be customized for specific groups: a geriatric plan that covers extended care facilities but not fertility care, a heavy laborer plan that includes chiropractic therapy, or a Workman's Compensation plan purchased by employers, employees or unions. Tier 3 would apply to all medical and surgical issues considered luxury or cosmetic (examples are Lasik surgery or Botox treatments). Funding for Tier 3 would not be covered under this system (as is true in the current system) and all bills would go to the patient. However, billing would still be made through the web-based universal billing form discussed below.

Pharmaceuticals would have similar tier assignments as medical coverage: Tier 1 would be formulations and therapies that have good evidence-based data for treatment or prevention of Tier 1 illnesses, and would mostly be paid for by public funds or be heavily subsidized. Tier 2 would apply to those drugs and therapies that enhance quality of life or have not yet had adequate evidence of effectiveness for a particular condition. These Tier 2 pharmaceuticals would be covered by private insurance or out of pocket. Tier 3 would be for "luxury" items and would likely be paid out of pocket.

Oversight
The entire health system would be overseen by a healthcare panel known as "The Board". Although the details of the exact composition of the Board have not been discussed in detail by HPfHR, it would be composed of physicians and other healthcare professionals, public health experts, economists specializing in health care, business representatives, insurance representatives, representatives from the pharmaceutical industry and representatives of patients. This Board's mission would be to promote the health of Americans in a socially responsible and economically sound way.
Similar to the “Federal Health Board” proposed by Tom Daschle, it would be a quasi-independent organization resembling the Federal Reserve, which it is hoped would make it less beholden to political pressures. It would be headed by a chairperson who would be appointed to a 10-year term by the president and require Senate confirmation. The Board would have oversight of a significantly revised Center for Medicare & Medicaid Services, and input into the Food and Drug Administration and the National Institutes of Health. It would use the already established Diagnosis-related group (DRG), Ambulatory Payment Classification (APC) and International Classification of Diseases (ICD) codes. The Board would decide which diagnoses and services are covered by Tier 1, 2 or 3 based on the medical importance (using evidence-based data such as practice guidelines developed by expert medical panels, Cochrane Library database reviews and other sources), public health considerations and economic impact. This would be updated periodically as more evidence and research becomes available. When evidence is not available, the Board would have the option to commission the National Institutes of Health and the Food and Drug Administration to direct research focused specifically to use in the Tier assignments. Among the prerequisites to the implementation of this system would be delineation of the specific relationships between the Board and existing agencies within the Department of Health and Human Services, in particular the Food and Drug Administration and the National Institutes of Health. Some reorganization of these government agencies might be warranted to optimize inter-agency interactions. To address local variations in health and social concerns, the health Board would establish several local health-boards (possibly in each state). These local branches would not only handle local health issues, but may be used to establish peer review boards to hear ethical and malpractice issues. 
Hospital and office billing
To simplify claim submissions by healthcare providers (physicians and hospitals), a "Universal Reimbursement Form" would be created by the Board and would be implemented electronically using a web-based tool available to hospitals and physician offices. This Universal Reimbursement Form (URF) would be the only form of billing for all providers, would be internet-based and would be simple to use. It would transmit data to a "Central Billing System" (CBS), which would decide if the condition/service is Tier 1, Tier 2 or Tier 3. Tier 1 services would be reimbursed directly to the provider. Tier 2 services would trigger a search (by the computer) for insurance coverage; if insurance is found the insurance carrier would be billed, if not the patient would be billed. Bills for Tier 3 would be sent directly to the patient. To help in cases where there is some question about which tier a particular service will be charged under, there would be a "Billing Inquiry" feature on the Central Billing System, available to providers and consumers, that allows inquiries of tier assignment in advance. Although the CBS would be secured with encryption and other anti-hacking measures, the internet platform that the URF is based on would be open-sourced and available for entrepreneurial development. Similar to the open-sourced platform of the iPhone, the URF platform would allow for the development of "Health Information Technology" on a single fully interactive web-based platform.

Financing the EMBRACE healthcare system
The budget for the EMBRACE system would be determined by the United States Congress, with one comprehensive bill a year that would fund the entire public healthcare system in the United States. Because the Healthcare Board would have to justify the budget, Congress would continue to have full control over expenditures for the healthcare system.
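The routing rule of the Central Billing System described above amounts to a simple decision function. The sketch below uses hypothetical names and data shapes; it illustrates the proposal's logic and is not part of any published specification:

```python
def route_claim(tier: int, has_tier2_insurance: bool) -> str:
    """Decide who receives the bill for a submitted claim, following
    the three-tier routing rule described in the proposal."""
    if tier == 1:
        return "reimburse provider from public funds"
    if tier == 2:
        # The CBS searches for Tier 2 coverage; bill the carrier if found.
        return "bill insurance carrier" if has_tier2_insurance else "bill patient"
    if tier == 3:
        return "bill patient"
    raise ValueError("unknown tier")

print(route_claim(1, False))  # public funds, regardless of insurance
print(route_claim(2, True))   # insurer found, so the carrier is billed
print(route_claim(3, True))   # luxury/cosmetic, always the patient
```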
References

External links
Healthcare Professionals for Healthcare Reform (HPfHR) website
EMBRACE plan in the Annals of Internal Medicine
Website for the book about EMBRACE, EMBRACE: A Revolutionary New Healthcare System for the Twenty-First Century

Categories: Health economics; Healthcare reform in the United States; Universal health care
31859710
https://en.wikipedia.org/wiki/Rule%2034%20%28novel%29
Rule 34 (novel)
Rule 34 is a near-future science fiction novel by Charles Stross. It is a loose sequel to Halting State, and was released on 5 July 2011 (US) and 7 July 2011 (UK). The title is a reference to the Internet meme Rule 34, which states that "If it exists, there is porn of it. No exceptions." Rule 34 was nominated for the 2012 Arthur C. Clarke Award and the 2012 Locus Award for Best Science Fiction Novel. Plot summary The novel is told in second-person narrative, primarily from three points of view: Edinburgh police Inspector Kavanaugh, who investigates spammers murdered in gruesome and inventive ways and learns about similar cases in other parts of Europe; Anwar, a former identity thief who becomes Scottish honorary consul for a fictional state in central Asia; and "The Toymaker", an enforcer and organizer for the criminal "Operation". Their interactions and conflicts drive the story. Critical reception Reviews have been favorable, with Cory Doctorow calling the novel "savvy, funny, viciously inventive". Kirkus Reviews gave it a starred review, saying "Dazzling, chilling and brilliant"; Publishers Weekly called it "the whole more than the sum of its parts"; and there was a generally positive review in The Guardian. Sequel cancellation Following the revelations by Edward Snowden, Stross announced that there would be no third book in the planned trilogy. "Halting State wasn't intended to be predictive when I started writing it in 2006. Trouble is, about the only parts that haven't happened yet are Scottish Independence and the use of actual quantum computers for cracking public key encryption (and there's a big fat question mark over the latter—what else are the NSA up to?)." References Novels by Charles Stross 2011 British novels British science fiction novels Ace Books books Augmented reality in fiction Novels about the Internet Novels about mass surveillance Novels set in Scotland Novels set in Edinburgh Novels set in fictional countries
31889265
https://en.wikipedia.org/wiki/OpenSC
OpenSC
OpenSC is a set of software tools and libraries for working with smart cards, with a focus on smart cards with cryptographic capabilities. OpenSC facilitates the use of smart cards in security applications such as authentication, encryption and digital signatures. OpenSC implements the PKCS #15 standard and the PKCS #11 API. It also provides some support for Common Data Security Architecture (CDSA) on Mac OS X and Microsoft CryptoAPI on Windows, though this support is still a work in progress. References External links Driver for Spanish National Card Free software Programming libraries
31897073
https://en.wikipedia.org/wiki/Mobile%20file%20management
Mobile file management
Mobile file management (MFM) is a type of information technology (IT) software that allows businesses to manage the transfer and storage of corporate files and other related items on mobile devices, and allows the business to oversee user access. Mobile file management software is typically installed on a corporate file server, such as Windows Server 2008, and on a mobile device such as a tablet computer or smartphone (e.g., Android, iPad, iPhone). Other features include the ability to remotely wipe a lost or stolen device; to access, cache and store files on a mobile device; and to integrate with file permission solutions such as Microsoft's Active Directory. A main advantage of modern mobile file management solutions is that they do not need a VPN connection for the mobile devices to connect to the corporate file servers. The connection between the mobile device and the corporate file server is established via a cloud service. This way the corporate file server does not need to open incoming ports, which would pose a security risk. Files are transferred in strongly encrypted form, e.g., using the AES 256-bit industry standard. Only the company server and the mobile device keep the encryption key needed to encrypt and decrypt the files, so nobody else, not even the mobile file management solution provider, can access the files. Third-party cloud-based companies provide solutions which can be used to manage mobile files but are not controlled by corporate IT organizations. Companies that utilize Mobile Device Management solutions can also secure content on mobile devices, but usually cannot provide direct access and connection to a corporate file server. File management is how the computer operating system keeps data organized through the use of files and folders: how they are arranged and how they are listed in hierarchical order. Mobile file management allows file management to be used on tablet computers. 
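The end-to-end model described above, where only the corporate server and the mobile device hold the key and the relaying cloud service sees only ciphertext, can be sketched as follows. This is a deliberately simplified toy using a SHA-256-based XOR keystream for illustration only; a real MFM product would use an authenticated AES-256 cipher, as the text notes, and all names here are hypothetical.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from key+nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    """Encrypt on one endpoint; returns (nonce, ciphertext)."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Decrypt on the other endpoint using the same shared key."""
    return bytes(a ^ b for a, b in zip(ciphertext, keystream(key, nonce, len(ciphertext))))

key = secrets.token_bytes(32)  # shared only by the server and the device
nonce, ct = encrypt(key, b"quarterly-report.xlsx contents")
assert ct != b"quarterly-report.xlsx contents"  # the cloud relay sees only this
assert decrypt(key, nonce, ct) == b"quarterly-report.xlsx contents"
```

The design point is that the intermediary never needs the key: it only forwards `(nonce, ct)`, so even the MFM provider cannot read the files in transit.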
By installing it both on the tablet and the corporate server, users of mobile devices can freely access corporate servers from remote locations. References External links An Overview of Mobile File Management Mobile Document Management Information technology Business software
31921181
https://en.wikipedia.org/wiki/OrientDB
OrientDB
OrientDB is an open source NoSQL database management system written in Java. It is a multi-model database, supporting graph, document, key/value, and object models, but the relationships are managed as in graph databases, with direct connections between records. It supports schema-less, schema-full and schema-mixed modes. It has a strong security profiling system based on users and roles, and supports querying with Gremlin along with SQL extended for graph traversal. OrientDB uses several indexing mechanisms based on B-trees and extendible hashing, the latter known as a "hash index"; there are plans to implement indexes based on LSM-trees and fractal tree indexes. Each record has a surrogate key which indicates its position inside an array list; links between records are stored either as a single value (the linked record's position stored inside the referrer) or as a B-tree of record positions (so-called record IDs or RIDs), which allows fast traversal (with O(1) complexity) of one-to-many relationships and fast addition/removal of links. OrientDB is the fifth most popular graph database according to the DB-Engines graph database ranking, as of December 2021. The development of OrientDB still relies on an open source community led by OrientDB LTD, a company created by its original author Luca Garulli. The project uses GitHub to manage the sources, contributors and versioning, and Google Groups and Stack Overflow to provide free support to worldwide users. OrientDB also offers a free Udemy course for those hoping to learn the basics and get started with OrientDB. Engine OrientDB is built with a multi-model graph/document engine. OrientDB feels like a graph database first, but there is no reason the key-value store cannot be used on its own. While OrientDB includes a SQL layer, the support for edges effectively means that these may be used to traverse relationships rather than employing a JOIN statement. 
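The contrast between join-style lookups and the direct record links described above can be sketched with plain data structures. This is a hypothetical illustration of the idea, not OrientDB's actual storage engine or API; records and field names are invented.

```python
# Relational style: relationships are foreign-key values, so following a
# relationship means searching (or indexing into) the other table.
users  = [{"id": 1, "name": "Ada"}]
orders = [{"id": 10, "user_id": 1}, {"id": 11, "user_id": 1}]
ada_orders = [o for o in orders if o["user_id"] == 1]  # join-like scan

# Graph style: each record stores direct references to related records
# (analogous to RIDs), so traversal is a pointer dereference, an O(1) hop.
order_a = {"id": 10}
order_b = {"id": 11}
ada = {"id": 1, "name": "Ada", "out_orders": [order_a, order_b]}
order_a["owner"] = ada  # the link can be followed in either direction

# Both approaches reach the same related records...
assert [o["id"] for o in ada["out_orders"]] == [o["id"] for o in ada_orders]
# ...but the graph-style hop needs no lookup in a separate table.
assert order_a["owner"]["name"] == "Ada"
```

The list stored under `out_orders` plays the role of a collection of record IDs in the referrer, which is what makes one-to-many traversal cheap regardless of table size.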
OrientDB handles every record / document as an object, and linking between objects / documents is not through references but through direct linking (saving a pointer to the object). This leads to quick retrieval of related data compared to joins in an RDBMS. Editions & licenses OrientDB Community Edition is free for any use (Apache 2 license). The open source software is built upon by a community of developers. Features such as horizontal scaling, fault tolerance, clustering, sharding, and replication are not disabled in the OrientDB Community Edition. OrientDB Enterprise Edition is the commercial extension of OrientDB Community Edition created to handle more robust and demanding use cases. OrientDB Enterprise Edition includes additional features such as a query profiler, distributed clustering configuration, metrics recording, a live monitor, Teleporter (a migration tool), and configurable alerts. Features Quick installation: OrientDB can be installed and running in less than 60 seconds. Fully transactional: supports ACID transactions, guaranteeing that all database transactions are processed reliably and that in the event of a crash all pending documents are recovered and committed. Graph structured data model: native management of graphs. Fully compliant with the Apache TinkerPop Gremlin (previously known as Blueprints) open source graph computing framework. SQL: supports SQL queries with extensions to handle relationships without SQL joins, and to manage trees and graphs of connected documents. Web technologies: natively supports HTTP, a RESTful protocol, and JSON without additional libraries or components. Distributed: full support for multi-master replication, including geographically distributed clusters. Run anywhere: implemented in pure Java, allowing it to run on Linux, OS X, Windows, or any system with a compliant JVM. Embeddable: local mode to use the database while bypassing the server, ideal for scenarios where the database is embedded. 
Apache 2 License: always free for any usage. No fees or royalties required to use it. Full server has a footprint of about 512 MB. Commercial support is available from OrientDB. Pattern matching: introduced in version 2.2, the MATCH statement queries the database in a declarative manner, using pattern matching. Security features introduced in OrientDB 2.2 provide an extensible framework for adding external authenticators, password validation, LDAP import of database roles and users, advanced auditing capabilities, and syslog support. OrientDB Enterprise Edition provides Kerberos authentication with full browser SPNEGO support. As for database encryption, starting with version 2.2 OrientDB can encrypt records on disk. This prevents unauthorized users from accessing database content or bypassing OrientDB security. Teleporter: allows relational databases to be quickly imported into OrientDB in a few simple steps. Cloud ready: OrientDB can be deployed in the cloud and supports the following providers: Amazon Web Services, Microsoft Azure, CenturyLink Cloud, Jelastic, DigitalOcean. Applications Banking Big Data Fraud prevention Loan management software (Floify) Master data management Non-coding RNA human interaction database Recommendation engines Social networking Traffic management systems History OrientDB was originally authored by Luca Garulli in 2010. Luca wrote it as a Java rewrite of the fast persistent layer of the Orient ODBMS database (originally developed by Luca Garulli in 1999 in C++). Between 2012 and 2014 the storage engine was redeveloped by Andrey Lomakin and given a new name, "plocal", which stands for "paginated local". The name reflects the fact that the new storage engine splits data files into pages, with a page treated as a single atomic unit of change. Since 2012, the project has been sponsored by OrientDB LTD (formerly Orient Technologies LTD), a for-profit company with Luca as its CEO and founder. 
In 2013, Andrey Lomakin joined the company as R&D lead engineer and co-owner. The term "multi-model" was first associated with databases on May 30, 2012, in Cologne, Germany, during Luca Garulli's keynote "NoSQL Adoption – What's the Next Step?". Luca Garulli envisioned the evolution of the first generation of NoSQL products into new products with more features, able to serve multiple use cases. OrientDB was the first product to embrace documents, graphs, key-value, geospatial and reactive models in the same product, at the core level. This means that the multiple models were integrated into the core without using layers; for this reason, OrientDB is a "native" multi-model database. OrientDB has been covered by media outlets and is the winner of the 2015 InfoWorld Bossie award. On September 15, 2017, the OrientDB LTD company was acquired by CallidusCloud, a public company traded on NASDAQ. On January 30, 2018, it was announced that SAP had acquired CallidusCloud for $2.4 billion, and OrientDB is therefore now supported by SAP. On September 1, 2021, the original founder Luca Garulli left SAP and forked the project into ArcadeDB after SAP decided to stop providing commercial support for OrientDB. See also XML database References External links Free database management systems Document-oriented databases Distributed computing architecture Key-value databases Structured storage NoSQL Graph databases Database-related software for Linux
31937663
https://en.wikipedia.org/wiki/DebWRT
DebWRT
DebWrt is a discontinued, niche Linux distribution mainly installed on embedded systems (e.g. residential gateways). It was built on top of an OpenWrt base, which was used to load a fully functional version of Debian from a root filesystem stored on an attached USB storage device. For easy installation and removal of packages it relied on the dpkg package management system. DebWrt used the command-line interface of Bash; there was no web-based GUI. Features DebWrt offered all of the features provided in the stock firmware for residential gateways, such as DHCP services and wireless encryption via WEP, Wi-Fi Protected Access and WPA2. In addition it offered the features of Debian that are typically not included in a standard firmware. Features included: The apt-get package manager Extensible network configuration, including VLANs, with extensive options for configuring routing Customizable methods to filter, manipulate, delay and rearrange network packets Static DHCP leases Support for other devices with available Linux drivers Regular bug fixes and updates, even for devices no longer supported by their manufacturers DebWrt had a fully writable file system, which allowed for package management via the dpkg package system, allowing users to install new software to meet their individual needs. This contrasted with Linux-based firmware built using a read-only SquashFS filesystem (or similar) that offered efficient compression but no way to modify the installed software without rebuilding and flashing a complete firmware image. Versions 2.0: Angel - 2009 February See also List of router firmware projects References External links Linksys GPL Code Center Embedded Linux distributions Linux distributions
31941849
https://en.wikipedia.org/wiki/ICloud
ICloud
iCloud is a cloud storage and cloud computing service from Apple Inc. launched on October 12, 2011. As of 2018, the service had an estimated 850 million users, up from 782 million users in 2016. iCloud enables users to store data such as documents, photos, and music on remote servers for download to iOS, macOS or Windows devices, to share and send data to other users, and to manage their Apple devices if lost or stolen. iCloud also provides the means to wirelessly back up iOS devices directly to iCloud, instead of relying on manual backups to a host Mac or Windows computer using iTunes. Service users are also able to share photos, music, and games instantly by linking accounts via AirDrop wireless. iCloud replaced Apple's MobileMe service, acting as a data syncing center for email, contacts, calendars, bookmarks, notes, reminders (to-do lists), iWork documents, photos, and other data. Apple has eleven company-owned and -operated data centers supporting iCloud services: six in the United States, two in Denmark, and three in Asia. One of Apple's original iCloud data centers is located in Maiden, North Carolina, US. From its 2011 launch, iCloud has been based on Amazon Web Services and Microsoft Azure (in its iOS Security white paper published in 2014, Apple acknowledged that encrypted iOS files are stored in Amazon S3 and Microsoft Azure). In 2016, Apple signed a deal with Google to use Google Cloud Platform for some iCloud services. In October 2016, Bloomberg reported that Apple was working on a project, codenamed Pie, that aims to improve the speed and experience of Apple's online services by operating them more directly. In June 2021, Apple introduced iCloud+, which added Private Relay, Hide My Email and custom email domains for paid users of the service, as well as an unlimited storage allowance for video from cameras added through HomeKit Secure Video. 
System requirements iCloud account creation requires either an iOS device running iOS 5 or later or a Mac running OS X Lion v10.7.5 or later, as well as an internet connection and a compatible web browser. Also, certain features have their own minimum requirements of OS versions. For example, using iCloud Photo Sharing requires OS X Mavericks v10.9 or above on a Mac. Devices running older versions of macOS (before Mavericks) or iOS (below 7) may be unable to sign into iCloud after the iCloud password has been changed: the only resolution for this issue is to upgrade the OS, which may be impossible on a device that does not meet the newer OS minimum requirements. Synchronizing with a PC requires Windows 7 or later and using the iCloud Control Panel, and optionally Outlook 2007 or later or the built-in Windows 10 Mail and Calendar apps to sync Calendar, Contacts, and Reminders. Users must own an Apple device to set up iCloud for Windows. Synchronization of bookmarks requires Safari 5.1.1 or later on macOS, and Internet Explorer 9, Firefox 22 or Google Chrome 28 or later on Windows. MobileMe account users could move their accounts to an iCloud account, keeping the same account details. History iCloud was announced on June 6, 2011, at the 2011 Apple Worldwide Developers Conference (WWDC). Apple announced that MobileMe would be discontinued after June 30, 2012, with anyone who had an account before the unveiling of iCloud having their MobileMe service extended to that date, free of charge. The official website, www.icloud.com, went live in early August for Apple Developers. On October 12, 2011, iCloud became available to use via an iTunes update. iCloud had 20 million users in less than a week after launch. The iCloud.com domain and registered trademark were bought from a Swedish company called Xcerion, who rebranded their service to CloudMe. Apple now controls major domains like iCloud.de, iCloud.fr and iCloud.es. 
A class action lawsuit by customers unhappy over the transition from MobileMe to iCloud was filed in early May 2012. In June 2019, iCloud was introduced to Windows 10 via the Microsoft Store. On June 7, 2021, during the 2021 Apple Worldwide Developers Conference, Apple introduced an upgraded version of iCloud for users who pay for additional storage, called iCloud+. iCloud+ includes Private Relay, which allows users to browse Safari without being tracked; Hide My Email, which allows users to sign up for websites and other apps with a private email address that forwards messages to their main inbox; and updates to HomeKit Secure Video, which allow iCloud+ users to add an unlimited number of HomeKit cameras that do not count against the storage limit. Announcement The first official mention of iCloud from Apple came on May 31, 2011, when a press release announced that it would demonstrate the service at the WWDC on June 6, 2011. A banner hung at the Moscone Center for WWDC revealed the iCloud logo five days before the official launch. At the WWDC 2011 keynote speech, Steve Jobs (in one of his last public appearances) announced that iCloud would replace MobileMe services and that the basic iCloud service would be free of charge. Features The cloud-based system allows users to store heterogeneous music, photos, applications, documents, bookmarks, reminders, backups, notes, Apple Books, and contacts, and provides a platform for Apple's email servers and calendars. Third-party iOS and macOS app developers can implement iCloud functionality in their apps through the iCloud API. Backup and restore iCloud allows users to back up the settings and data on iOS devices running iOS 5 or later. Data backed up includes photos and videos in the Camera Roll, device settings, app data, messages (iMessage, SMS, and MMS), ringtones, and Visual Voicemails. Backups occur daily when the device is locked and connected to Wi-Fi and a power source. 
In case of a malfunction of any Apple device, during the restoration process iCloud offers to restore all data (along with app data) only if the device was synced to iCloud and backed up. Back to My Mac Back to My Mac, also previously part of MobileMe, is now part of iCloud. As before, this service allows users to log in remotely to other computers that have Back to My Mac enabled and are configured with the same Apple ID. On August 9, 2018, Apple updated a support document to note that Back to My Mac would not be part of the upcoming macOS Mojave (10.14) release. Email An iCloud account can include an email account, much like MobileMe, .Mac, and iTools did previously. However, unlike MobileMe and its previous iterations, the email account is an optional part of an iCloud account, in that the user can choose to use a non-iCloud email address as their iCloud Apple ID. The email account can be accessed using any standard IMAP-compatible email client, as well as via web browser at iCloud.com. Additionally, on an iOS device, iCloud email is push-enabled. Users who converted existing MobileMe accounts to iCloud accounts kept their existing "@me.com" email addresses; users whose accounts pre-dated MobileMe and had both me.com and mac.com email addresses kept both. As well as retaining their previous addresses, users also received the matching "@icloud.com" address. As there is only one mailbox per account, all messages sent to any of a user's iCloud email addresses end up in the same inbox. Find My Friends Find My Friends was added to iCloud alongside the launch of iOS 5, allowing users to share their current location with their friends or family. iOS 6 added location-based alerts to notify the user when a device arrives at a certain location. On iOS 9 and 10, Find My Friends is built into iOS and cannot be removed. From iOS 11 onwards it is included, but can be deleted and then subsequently reinstalled from the iOS App Store. 
In October 2015, Find My Friends was added to iCloud.com to view other "friends" locations. Find My iPhone Find My iPhone, formerly part of MobileMe, allows users to track the location of their iOS device or Mac. A user can see the device's approximate location on a map (along with a circle showing the radius depicting the margin of error), display a message and/or play a sound on the device (even if it is set to silent), change the password on the device, and remotely erase its contents. The feature was first announced on June 10, 2009, and was included in the iOS 3.0 software update as a feature for paying MobileMe users. Find My iPhone was made free of charge with the iOS 4.2.1 software update on November 22, 2010, but only for devices introduced in 2010. An iOS app was also released by Apple on June 18, 2010, which allows users to locate their device from other iOS devices running iOS 4 or later software. In iOS 5, Find My iPhone was continued as a feature for iCloud. iOS 6 introduced Lost Mode, a new feature that allows the user to mark a device as "lost", making it easier to protect and find. The feature also allows someone that finds the user's lost iPhone to call the user directly without unlocking it. Similar phone finder services under various names are available for other families of smartphones. Activation Lock was introduced in 2013 with iOS 7. It is integrated with iCloud and Find My iPhone feature. This new feature locks the activation of any iPhone, iPad, iPod touch or Apple watch which has been restored in either DFU or Recovery mode without first disabling the Find My iPhone feature. Once restore is completed, the device will ask for the Apple ID and password that has been previously associated with it, to proceed with activation, ultimately preventing any stolen device from being usable. As of iOS 9, Find my iPhone is a built-in app, and thus cannot be removed. 
In iOS and iPadOS 13, both Find My iPhone and Find My Friends were removed in favour of Find My. Find My Find My replaced Find My iPhone and Find My Friends, merging the two apps. iCloud Keychain iCloud Keychain is a password manager developed by Apple that syncs passwords across devices and suggests secure ones when creating new accounts. iCloud Keychain backups provide different security guarantees than traditional iCloud backups. This is because iCloud Keychain uses "end-to-end encryption", meaning that iCloud Keychain backups are designed so that the provider does not have access to unencrypted data. This is accomplished through the use of a novel "key vault" design based on a hardware security module located in Apple's data centers. iTunes Match iTunes Match debuted on November 14, 2011. It was initially available to US users only. For an annual fee, customers can scan and match tracks in their iTunes music library, including tracks copied from CDs or other sources, with tracks in the iTunes Store, so customers do not have to repurchase said tracks. Customers may download up to 100,000 tracks in 256 kbit/s DRM-free AAC file format that match tracks in any supported audio file formats in customers' iTunes libraries, including ALAC and MP3. Customers also have the choice to keep their original copies stored on their computers or have them replaced by copies from the iTunes Store. Any music not available in the iTunes Store is uploaded for download onto customers' other supported devices and computers; doing this does not take storage from the customer's iCloud storage allowance. Any such tracks stored in the higher quality lossless audio ALAC, or original uncompressed PCM formats, WAV and AIFF, are transcoded to 256 kbit/s DRM-free AAC format before uploading to the customers' iCloud storage account, leaving the original higher quality local files in their original format. 
If a user stops paying for the iTunes Match service, all copies of the DRM-free AAC iTunes Store versions of tracks that have already been downloaded onto any device can be kept, whether on iOS devices or computers. From iOS 7 and OS X Mavericks, the iTunes Radio function became available across devices, including integration with the Music app, both on portable iOS devices and Apple TV (2nd generation onwards), as well as inside the iTunes app on Macintosh and Windows computers. It is included in an ad-free version for subscribers to the iTunes Match service and is currently available only in the US and Australia. The streaming Genius shuffle is not available in current versions of iOS but is available in iTunes on the Mac. On January 28, 2016, ad-free iTunes Radio was discontinued and is therefore no longer part of iTunes Match. iTunes Match is available in 116 countries, while iTunes in the Cloud is available in 155 countries. iWork for iCloud During the 2013 Apple Worldwide Developers Conference (WWDC) keynote speech, iWork for iCloud was announced for release later in the year, at the same time as the next versions of the iWork apps. The three apps for both iOS and macOS that form Apple's iWork suite (Pages, Numbers, and Keynote) were made available through a web interface (named Pages for iCloud, Numbers for iCloud, and Keynote for iCloud respectively), accessed via the iCloud website under each user's iCloud Apple ID login. They also sync with the user's iOS and macOS versions of the apps, should they have them, again via their iCloud Apple ID. This allows the user to edit and create documents on the web, using one of the supported browsers: Safari, Chrome, and Microsoft Edge. It also means that Microsoft Windows users have access to these native (previously Apple-device-only) document editing tools via the web interface. 
Photo Stream Photo Stream is a service supplied with the basic iCloud service which allows users to store the most recent 1,000 photos on the iCloud servers for up to 30 days free of charge. When a photo is taken on a device with Photo Stream enabled, it is automatically uploaded to the iCloud servers. From there, it becomes available for viewing and saving on the rest of the user's Photo Stream-enabled devices. The photo is automatically removed from the server after 30 days or when it becomes photo number 1,001 in the user's stream. Photo Stream installed on a Mac or Windows desktop computer includes an option to have all photos permanently saved on that device. The service is also integrated with Apple TV, allowing users to view their recent photos wirelessly on their HDTV. iCloud Photos iCloud Photos is a feature on iOS 8.1 or later and OS X Yosemite (version 10.10) or later, plus web app access. The service stores all of the user's photos, maintaining their original resolution and metadata. Users can access their iCloud Photos on supported devices via the new Photos app when available or via the iCloud Photos web app at iCloud.com, which helps limit the amount of local storage each device needs to use to store photos (particularly those with smaller storage capacities) by storing lower-resolution versions on the device, with the user having the option to keep some/all stored locally at a higher resolution. Storage Since its introduction in 2011, each account has 5 GB of free storage for owners of either an iOS device using iOS 5.x or later, or a Mac using OS X Lion 10.7 or later. Users can purchase additional storage for a total of 50 GB, 200 GB or 2 TB. The amount of storage is shared across all devices per iCloud Apple ID. Several native features of iCloud use each user's iCloud storage allowance, specifically, Backup and restore, and email, Contacts, and Calendars. 
On Macs, users can also store most filetypes in iCloud folders of their choosing, rather than only storing them locally on the machine. While Photo Stream uses the iCloud servers, usage does not come out of the user's iCloud storage allowance. This is also true for iTunes Match music content: even music that is not sold in the iTunes Store and is uploaded into iCloud storage does not count against the user's allowance. Other apps can optionally integrate app storage out of the user's iCloud storage allowance. Not all of a user's content counts as part of their iCloud storage allowance. Apple keeps a permanent record of every purchase a user makes under their Apple ID account, and by associating each piece of content with the user, only one copy of each Store item needs to be kept on Apple's servers. For items bought from the iTunes Store (music, music videos, movies, TV shows), Apple Books Store (books), or App Store (iOS apps), this uses a service Apple calls iTunes in the Cloud, allowing the user to automatically, or manually if preferred, re-download any of their previous purchases onto a Mac, PC, or iOS device. Downloaded (or streamed, provided the user is connected to the Internet) iTunes Store content can be used across all these devices; however, while Apple Books Store and App Store content can be downloaded to Macs and PCs for syncing to iOS devices, only iOS and Mac devices (and their respective apps) can be used to read the books. Similarly, macOS apps purchased from the Mac App Store are also linked to the Apple ID they were purchased through and can be downloaded to any Mac using the same Apple ID. Also, when a user registers any new device, all previously bought Store content can be downloaded from the Store servers, or non-Store content from the iCloud servers. 
Audiobooks and their metadata fields from non-Apple-purchased sources are not synced across devices (macOS or iOS) inside the Apple Books apps, and neither is the metadata from non-Apple-purchased books (in ebook or PDF format). A syncing mismatch between Apple-purchased and non-Apple-purchased content thus remains in effect for iCloud users on some types of media. iCloud Drive iCloud Drive is iCloud's file hosting service, which syncs files across devices running iOS 8, OS X Yosemite (version 10.10), or Windows 7 or later, plus online web app access via iCloud.com. Users can store any kind of file (including photos, videos, documents, music, and other apps' data) in iCloud Drive and access it on any Mac, iPad, iPhone, iPod Touch, or Windows PC, with any single file being a maximum of 50 GB in size (earlier it was 15 GB). This allows users to start their work on one device and continue on another. By default, users still get 5 GB of storage for free as previously, but the expandable storage plans available have increased in size (current tiers: 50 GB, 200 GB, and 2 TB), and changed to monthly subscription payment options from the yearly ones offered under the previous MobileMe service. In iOS 11, iCloud Drive was integrated into the new Files app, which gives users access to all their cloud and local on-device storage and replaced the standalone iCloud Drive app. Messages on iCloud Messages on iCloud is a feature on iOS 11.4 and macOS High Sierra 10.13.5 which keeps all of a user's iMessages and SMS texts stored in the cloud. Private Relay Private Relay, an iCloud+ feature currently in beta, allows users to browse Safari privately, similar to a virtual private network. According to Apple, "regulatory reasons" prevent the company from launching Private Relay in China, Belarus, Russia, Colombia, Egypt, Kazakhstan, Saudi Arabia, South Africa, Turkmenistan, Uganda, and the Philippines. 
Up to 5% of Wikipedia editors globally could be negatively affected by using Private Relay, because Wikipedia blocks ranges of IP addresses to combat page vandalism. Hide My Email Hide My Email is available to iCloud+ users and allows users in Mail and Safari to generate temporary Apple email addresses which forward messages to their main email address. Custom email domain Custom email domains, an iCloud+ feature, allow users to personalize their email address with a custom domain name and invite family members to use the same domain with their iCloud Mail accounts. Criticism iCloud has been criticized by third-party developers for bugs that made some features nearly unusable under earlier versions of iOS and macOS, specifically the use of Core Data in iCloud for storing and syncing larger amounts of data between third-party apps on users' devices. Third-party developers have reported that the changes implemented in the release of iOS 7 and OS X Mavericks (version 10.9) address these iCloud criticisms. Name dispute iCloud Communications, a telecommunications company in Arizona, sued Apple in June 2011 for trademark infringement shortly after Apple announced iCloud. The lawsuit was filed in the US District Court of Arizona and demanded that Apple stop using the iCloud name and pay unspecified monetary damages. iCloud Communications changed its name to Clear Digital Communications in August 2011 and dropped its lawsuit against Apple shortly thereafter. Privacy Apple's iCloud service, including iCloud Drive and iOS device backups, does not provide end-to-end encryption, also known as client-side encryption; without end-to-end encryption, users' information is left unsecured because it remains accessible to unauthorized persons. Furthermore, Apple reserves the right to, and admits to, scanning user data for illegal content. 
In August 2014, it was rumored that hackers had discovered an exploit involving the Find My iPhone service, which potentially allowed an attacker to brute-force a user's Apple ID and access their iCloud data. The exploit was later incorrectly rumored to have been used as part of an August 2014 leak of a large number of private, nude photos of celebrities that had been synced to their iCloud storage from their iPhones. Apple confirmed that it was working with law enforcement agencies to investigate the leak. Apple subsequently denied that the iCloud service itself or the alleged exploit was responsible for the leak, asserting that the leaks were the result of a very targeted phishing attack against the celebrities. On September 13, 2014, Tim Cook, while being interviewed by Charlie Rose, stated on camera that the celebrity leaks were not an iCloud exploit at all; rather, the celebrities had been tricked out of their login credentials by highly targeted phishing. Apple has been scanning iCloud Mail for CSAM since 2019. On August 5, 2021, Apple confirmed that it planned to start scanning iCloud Photos for the same reason. After a public backlash against Apple scanning private photos, Apple announced it would collect further input before releasing the new functionality. China In February 2018, Apple announced that iCloud users in China would have their data, including encryption data, stored on servers called "云上贵州" (Guizhou-Cloud Big Data) located in the country to comply with local regulations. This raised concerns from human rights activists, who claim that it may be used to track dissidents. In response, CEO Tim Cook stated that Apple encrypts "the same in every country in the world other than China". On June 7, 2021, during the WWDC event, Apple announced that iCloud's new Private Relay feature would not work in China for regulatory reasons. 
See also Comparison of file hosting services Comparison of online backup services Comparison of online music lockers Cloud backup File hosting service References External links – official site Information about iCloud on Apple.com 2011 software Apple Inc. services Cloud applications Companies' terms of service Computer-related introductions in 2011 Data synchronization File hosting for macOS File sharing services Internet properties established in 2011 IOS Storage software Webmail
31975089
https://en.wikipedia.org/wiki/ReFS
ReFS
Resilient File System (ReFS), codenamed "Protogon", is a Microsoft proprietary file system introduced with Windows Server 2012 with the intent of becoming the "next generation" file system after NTFS. ReFS was designed to overcome problems that had become significant over the years since NTFS was conceived, which are related to how data storage requirements had changed. The key design advantages of ReFS include automatic integrity checking and data scrubbing, elimination of the need for running chkdsk, protection against data degradation, built-in handling of hard disk drive failure and redundancy, integration of RAID functionality, a switch to copy/allocate on write for data and metadata updates, handling of very long paths and filenames, and storage virtualization and pooling, including almost arbitrarily sized logical volumes (unrelated to the physical sizes of the used drives). These requirements arose from two major changes in storage systems and usage – the size of storage in use (large or massive arrays of multi-terabyte drives now being fairly common), and the need for continual reliability. As a result, the file system needs to be self-repairing (to prevent disk checking from being impractically slow or disruptive), along with abstraction or virtualization between physical disks and logical volumes. ReFS was initially added to Windows Server 2012 only, with the aim of gradual migration to consumer systems in future versions; this was achieved as of Windows 8.1. The initial versions removed some NTFS features, such as disk quotas, alternate data streams, and extended attributes. Some of these were re-implemented in later versions of ReFS. In early versions (2012–2013), ReFS was similar to or slightly faster than NTFS in most tests, but far slower when full integrity checking was enabled, a result attributed to the relative newness of ReFS. 
The ability to create ReFS volumes was removed in Windows 10's 2017 Fall Creators Update for all editions except Enterprise and Pro for Workstations. The cluster size of a ReFS volume is either 4 KB or 64 KB. Microsoft Windows and Windows Server include a command-line utility that can be used to diagnose heavily damaged ReFS volumes, identify remaining files, and copy those files to another volume. Feature changes compared to NTFS Major new features Improved reliability for on-disk structures ReFS uses B+ trees for all on-disk structures, including all metadata and file data. Metadata and file data are organized into tables similar to a relational database. The file size, number of files in a folder, total volume size, and number of folders in a volume are limited by 64-bit numbers; as a result, ReFS supports a maximum file size of 16 exbibytes (2⁶⁴−1 bytes) and a maximum volume size of 35 petabytes. Built-in resilience ReFS employs an allocation-on-write update strategy for metadata, which allocates new chunks for every update transaction and uses large IO batches. All ReFS metadata have 64-bit checksums which are stored independently. File data can have an optional checksum in a separate "integrity stream", in which case the file update strategy also implements allocation-on-write for file data; this is controlled by a new "integrity" attribute applicable to both files and directories. If file data or metadata become corrupt, the file can be deleted without taking the whole volume offline for maintenance, and then restored from a backup. As a result of this built-in resiliency, administrators do not need to periodically run error-checking tools such as CHKDSK when using ReFS. Compatibility with existing APIs and technologies ReFS supports only a subset of NTFS features – and only Win32 APIs that are "widely adopted" – but does not require new system APIs, and most file system filters continue to work with ReFS volumes. 
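The combination of independently stored checksums and allocate-on-write updates can be illustrated with a toy model (hypothetical names and structure, not ReFS's actual on-disk format):

```python
import zlib

class ToyIntegrityVolume:
    """Illustrative model of checksum-protected file data.

    Checksums live apart from the data blocks, and every update
    allocates a fresh block instead of overwriting in place.
    """

    def __init__(self):
        self.blocks = {}   # block_id -> bytes (the "disk")
        self.files = {}    # name -> (block_id, checksum)
        self._next_id = 0

    def _alloc(self, data: bytes) -> int:
        block_id = self._next_id
        self._next_id += 1
        self.blocks[block_id] = data
        return block_id

    def write(self, name: str, data: bytes) -> None:
        # Allocate-on-write: a new block is used for each update;
        # the old block is never modified in place.
        self.files[name] = (self._alloc(data), zlib.crc32(data))

    def read(self, name: str) -> bytes:
        block_id, stored_crc = self.files[name]
        data = self.blocks[block_id]
        if zlib.crc32(data) != stored_crc:
            raise IOError(f"checksum mismatch: {name} is corrupt")
        return data

vol = ToyIntegrityVolume()
vol.write("report.txt", b"quarterly numbers")
assert vol.read("report.txt") == b"quarterly numbers"

# Simulate silent bit rot on the underlying block.
block_id, _ = vol.files["report.txt"]
vol.blocks[block_id] = b"quarterly numberz"
try:
    vol.read("report.txt")
except IOError as e:
    print(e)   # corruption is detected rather than silently returned
```

Because the checksum is stored separately from the data, corruption of either one makes the pair inconsistent and the read fails loudly instead of returning bad bytes.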
ReFS supports many existing Windows and NTFS features such as BitLocker encryption, Access Control Lists, USN Journal, change notifications, symbolic links, junction points, mount points, reparse points, volume snapshots, file IDs, and oplock. ReFS seamlessly integrates with Storage Spaces, a storage virtualization layer that allows data mirroring and striping, as well as sharing storage pools between machines. ReFS resiliency features enhance the mirroring feature provided by Storage Spaces and can detect whether any mirrored copies of files become corrupt using a data scrubbing process, which periodically reads all mirror copies and verifies their checksums, then replaces bad copies with good ones. Removed features Some NTFS features are not implemented in ReFS. These include object IDs, 8.3 filename, NTFS compression, Encrypting File System (EFS), transactional NTFS, extended attributes, and disk quotas. In addition, Windows cannot be booted from a ReFS volume. Dynamic disks with mirrored or striped volumes are replaced with mirrored or striped storage pools provided by Storage Spaces; however, automated error-correction is only supported on mirrored spaces. Data deduplication was missing in early versions of ReFS. It was implemented in v3.2, debuting in Windows Server v1709. Support for alternate data streams and hard links was initially not implemented in ReFS. In Windows 8.1 64-bit and Server 2012 R2 the file system reacquired support for alternate data streams only, with lengths of up to 128K, and automatic correction of corruption when integrity streams are used on parity spaces. ReFS had initially been unsuitable for Microsoft SQL Server instance allocation due to the absence of alternate data streams. Version history and compatibility ReFS has some different versions, with various degrees of compatibility between operating system versions. 
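The data-scrubbing loop described above, which verifies checksums across mirror copies and replaces bad copies with good ones, can be sketched as follows (a simplified, hypothetical model, not Storage Spaces' actual implementation):

```python
import zlib

def scrub(mirrors, checksums):
    """Toy scrubbing pass over mirrored block copies.

    mirrors:   list of dicts mapping block_id -> bytes (one per copy)
    checksums: dict mapping block_id -> expected CRC-32
    Returns the number of copies repaired.
    """
    repaired = 0
    for block_id, expected in checksums.items():
        # Find any intact copy of this block.
        good = next((m[block_id] for m in mirrors
                     if zlib.crc32(m[block_id]) == expected), None)
        if good is None:
            continue  # no intact copy survives; cannot repair
        for m in mirrors:
            if zlib.crc32(m[block_id]) != expected:
                m[block_id] = good  # overwrite the bad copy
                repaired += 1
    return repaired

data = b"payload"
crc = zlib.crc32(data)
mirror_a = {0: data}
mirror_b = {0: b"paylaod"}  # silently corrupted copy
assert scrub([mirror_a, mirror_b], {0: crc}) == 1
assert mirror_b[0] == data
```

The key point is that repair is only possible while at least one copy still matches its checksum, which is why scrubbing runs periodically rather than waiting for a read to hit the bad copy.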
Aside from development versions of the filesystem, later operating system versions can usually mount filesystems created with earlier OS versions (backwards compatibility). Some features may not be compatible with the feature set of the OS. The version, cluster size and other features of the filesystem can be queried with the command fsutil fsinfo refsinfo volumename.
1.1: The original version, formatted by Windows Server 2012.
1.2: Default version if formatted by Windows 8.1, Windows 10 v1507 to v1607, Windows Server 2012 R2, and when ReFSv1 is specified on Windows Server 2016. Can use alternate data streams under Windows Server 2012 R2.
2.2: Default version formatted by Windows 10 Preview build 10049 or earlier. Could not be mounted in 10061 and later.
2.0: Default version formatted by Windows Server 2016 TP2 and TP3. Could not be mounted in Windows 10 Build 10130 and later, or Windows Server 2016 TP4 and later.
3.0: Default version formatted by Windows Server 2016 TP4 and TP5.
3.1: Default version formatted by Windows Server 2016 RTM.
3.2: Default version formatted by Windows 10 v1703 and Windows Server Insider Preview build 16237. Can be formatted with Windows 10 Insider Preview 15002 or later (though it only became the default somewhere between 15002 and 15019). Supports deduplication in the server version.
3.3: Default version formatted by Windows 10 Enterprise v1709 (ReFS volume creation ability removed from all editions except Enterprise and Pro for Workstations starting with build 16226; read/write ability remains) and Windows Server version 1709 (starting with Windows 10 Enterprise Insider Preview build 16257 and Windows Server Insider Preview build 16257).
3.4: Default version formatted by Windows 10 Pro for Workstations/Enterprise v1803 and newer, also server versions (including the long-term support version Windows Server 2019). 
3.5: Default version formatted by Windows 10 Enterprise Insider Preview (build 19536 or newer); adds support for hard links (only on freshly formatted volumes; not supported on volumes upgraded from previous versions).
3.6: Default version formatted by Windows 10 Enterprise Insider Preview (build 21292 or newer) and Windows Server Insider Preview (build 20282 or newer).
3.7: Default version formatted by Windows 10 Enterprise Insider Preview (build 21313 or newer) and Windows Server Insider Preview (build 20303 or newer). Also the version used by Windows Server 2022 and Windows 11.
Notes:
1: The following message is recorded in the event log: 'Volume "?:" was mounted in an older version of Windows. Some features may be lost.'
2: Windows upgrades it to 3.1 when the volume is mounted with write access.
3: Windows upgrades it to 3.2 when the volume is mounted with write access.
4: Windows upgrades it to 3.3 when the volume is mounted with write access.
5: ReFS volume creation ability was removed in Windows 10 v1709 (2017's Fall Creators Update), except for the Enterprise and Pro for Workstations editions.
6: Windows upgrades it to 3.4 when the volume is mounted with write access.
Stability and known problems Issues identified or suggested for ReFS, when running on Storage Spaces, include: Adding thin-provisioned ReFS on top of Storage Spaces (according to a 2012 pre-release article) can fail in a non-graceful manner, in which the volume without warning becomes inaccessible or unmanageable. This can happen, for example, if the physical disks underlying a storage space become too full. SmallNetBuilder comments that, in such cases, recovery could be "prohibitive", as a "breakthrough in theory" is needed to identify storage space layouts and recover them, which is required before any ReFS recovery of file system contents can be started; it therefore recommends using backups as well. 
With thin-provisioned ReFS on top of Storage Spaces, problems occasionally occur if the ReFS partition is extended to the full size of the thin volume: when the thin volume is later enlarged, the ReFS partition may fail to extend to the new size, and once it has failed it can never be extended again, no matter how large the thin volume is made. The workaround is never to extend the ReFS partition to the full size of the thin volume, always leaving a few GB at the end of the volume unassigned. It is thought that, during extension of the partition, values are written into the partition table that corrupt it when the full size of the thin volume is used, preventing any further expansion; the data itself remains intact and the ReFS partition works normally. The only remedy is to create a new volume and ReFS partition and copy the data out of the old ReFS partition into the new one, requiring double the storage during the copy before the old volume can be deleted. A thin-provisioned volume formatted with ReFS also eventually expands the thin volume to the full ReFS-formatted size, nullifying the reason to use a thin volume on Windows 10. The more use the partition sees, the quicker the volume expands, even if the data is mostly static; less-used drives on Windows 10 still expand, but not at the same rate. An NTFS-formatted volume on top of a thin volume does not experience the same expansion. For example, over a seven-month window: a 4 TB ReFS two-way mirror holding 99% static data (data logs), with 1.11 TB used and data added roughly once a month and never changed afterwards, caused Storage Spaces to use the entire 8 TB of storage (around 2.2 TB should have been used); a different thin volume in the same storage space, formatted with ReFS as a two-way mirror with single data writes but used and added to less frequently, held 257 GB of data while Storage Spaces used 7.77 TB; and a thin-volume NTFS partition, used in the same manner as the 257 GB partition, used only 601 GB. 
Even when Storage Spaces is not thinly provisioned, ReFS may still be unable to dependably correct all file errors in some situations, because Storage Spaces operates on blocks and not files, and therefore some files may potentially lack necessary blocks or recovery data if part of the storage space is not working correctly. As a result, disk and data addition and removal may be impaired, and redundancy conversion becomes difficult or impossible. Third-party repair or recovery tools depend on reverse engineering the system, and few of these exist. Windows Store cannot install apps on a ReFS volume. Server 2016 updates At the Storage Developer Conference 2015, a Microsoft developer presented enhancements of ReFS expected to be released with Windows Server 2016 and included in Technical Preview 4, titled "ReFS v2". It highlighted that ReFS now included capabilities for very high-speed moving, reordering, and cloning of blocks between files (which can be done for all blocks of a file). This is particularly needed for virtualization, and is stated to allow fast provisioning, diff merging, and tiering. Other enhancements cover the redo log (for synchronous disk writes), parallelization, efficient tracking of uninitialized sparse data and files, and efficient 4K I/O. ReFS with File Integrity enabled also acts more like a log-structured file system, coalescing small random writes into large sequential ones for efficiency. Server 2022 updates Windows Server 2022 (using ReFS version 3.7) supports file-level snapshots. Performance and competitor comparisons Other operating systems have file systems competing with ReFS, of which the best known are ZFS and Btrfs, in the sense that all three are designed to integrate data protection, snapshots, and silent high-speed background healing of corruption and data errors. In 2012, Phoronix wrote an analysis of ReFS vs Btrfs, a copy-on-write file system for Linux. 
Their features are similar, with both supporting checksums, RAID-like use of multiple disks, and error detection/correction. However, ReFS lacks copy-on-write snapshots and compression, both found in Btrfs and ZFS. In 2014, a review of ReFS and assessment of its readiness for production use concluded that ReFS had at least some advantages over two of its main file system competitors. ZFS (used in Solaris, illumos, FreeBSD and others) was widely criticized for the comparatively extreme memory requirements of many gigabytes of RAM for online deduplication. However, online deduplication is not enabled by default in ZFS and was not supported at the time by ReFS (it has since been added), so leaving ZFS online deduplication disabled yielded a more even comparison between the two file systems, as ZFS then requires only a few hundred megabytes of memory. Offerings such as Drobo used proprietary methods which have no fallback if the company behind them fails. Reverse engineering and internals Microsoft has not published any specifications for ReFS, nor have any working open-source drivers been made available. A third-party open-source project to document ReFS is on GitHub. Paragon Software Group provides a closed-source driver for Windows and Linux. See also Comparison of file systems APFS WinFS References External links Analysis of detailed differences between NTFS and ReFS in Server 2012, and reasons for choosing one or the other ReFS documentation project - PDF document of the ReFS file system 2012 software Windows disk file systems
32026892
https://en.wikipedia.org/wiki/AT%20HOP%20card
AT HOP card
The AT HOP card is an electronic fare payment card that was released in two versions on Auckland public transport services, beginning in May 2011. The smart card rollout was the first phase in the introduction of an integrated ticketing and fares system (Auckland Integrated Fares System, or "AIFS") rolled out across the region. The first iteration of the card – commonly referred to as the "purple HOP card" – was discontinued in 2012 because of issues with the delivery of key technologies. The current card, called the AT HOP card, is in use on all ferry, train and bus services in Auckland. The rollout of the card to all three transport modes was completed in March 2014. Card operation The AT HOP card is a dark blue credit-card-sized stored-value contactless smartcard that can hold prepaid funds (called HOP Money) to pay for fares, or monthly passes for unlimited travel within one or more of three "transport zones". Either facility must be added to the card before travel. Passengers "tag on" and "tag off" their card on electronic terminals when entering and leaving the transport system in order to validate it or deduct funds. Cards may be topped up, or monthly passes purchased, online, at ticket machines, at ticketing offices, and at selected retail outlets such as bookshops. Top-ups may be made by credit or debit card, while the latter three channels also accept cash payment. The card is designed to reduce the number of transactions at ticket offices and the number of paper tickets. Usage is encouraged by offering cheaper fares than the cash ticket option, although there is an initial once-only fee to purchase the card. Monthly and/or multiple-trip travel is only available with the AT HOP card. The card can be used only for fare payments and only on Auckland Transport routes; it cannot be used to pay for refreshments or other items. 
It cannot be used to pay for travel on the Northern Explorer passenger train running between Auckland and Wellington, or on inter-city bus services. The AT HOP card is based on near-field communication (NFC) using DESFire technology, which supports 3DES (168-bit keys) and AES (128-bit keys). This encryption protects card holders against their card being simply cloned. History In 2008, the Auckland Regional Transport Authority announced its intention to develop an integrated ticketing system for the region's public transport services, called the Auckland Integrated Fares System (AIFS). An initial system developed with a consortium including the French Thales Group and New Zealand-based Snapper Services was announced in 2010; however, subsequent difficulties with the development of technologies for the system saw the termination of Auckland Transport's agreement with Snapper. The council-controlled organisation confirmed Thales would be contracted for ongoing development of the system across the entirety of the region's transport network. HOP/Snapper card debacle Snapper Services Ltd, a subsidiary of Infratil, made a joint bid with ANZ, New Zealand Post, Eyede, Unisys and Beca Group for the contract to develop Auckland's integrated ticketing system. However, the contract was awarded to the Thales Group. Snapper lodged a complaint, later dismissed, questioning the legitimacy of the tender process. Snapper announced in late 2009 that it would begin rolling out its Snapper card onto NZ Bus services (but no other Auckland bus company or service), in spite of the Auckland Regional Transport Authority–Thales integrated ticketing arrangement. In response, the Auckland Regional Transport Authority called the Snapper announcement "premature", citing its own integrated ticketing offering still in development with Thales and confirming that all public transport operators in Auckland, including NZ Bus, would be required to participate in ARTA's system. 
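The reason DESFire-class encryption (described under "Card operation" above) resists simple cloning is that the card proves knowledge of a secret key via challenge–response rather than exposing the key. A simplified, hypothetical sketch follows; real DESFire cards compute the response with 3DES or AES inside the chip, and HMAC-SHA-256 stands in here purely for illustration:

```python
import hmac, hashlib, secrets

# The card and the operator's back office share a secret key.
# A clone that copied only the card's visible data, without the
# key, cannot answer a fresh challenge correctly.
CARD_KEY = secrets.token_bytes(16)

def card_respond(key: bytes, challenge: bytes) -> bytes:
    # Stand-in for the cipher computed inside the card's secure chip.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def reader_authenticate(key_on_file: bytes, respond) -> bool:
    challenge = secrets.token_bytes(16)  # fresh random value per tag-on
    expected = hmac.new(key_on_file, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(respond(challenge), expected)

# A genuine card passes; a clone with a guessed key fails.
assert reader_authenticate(CARD_KEY, lambda c: card_respond(CARD_KEY, c))
clone_key = secrets.token_bytes(16)
assert not reader_authenticate(CARD_KEY, lambda c: card_respond(clone_key, c))
```

Because each challenge is random and never reused, recording one tag-on exchange does not let an attacker replay it later.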
Replacing the Auckland Regional Transport Authority in 2010, Auckland Transport announced it had invited Snapper to work with the council-controlled organisation and Thales on the ticketing system. Auckland Transport confirmed Snapper would develop a contactless smart card and supply buses with ticketing terminals that would support the Thales developed back-end, to be rolled out initially on NZ Bus services and later on ferry and train services in time for the Rugby World Cup 2011. In April 2011, Auckland Transport announced the "HOP card", developed by Snapper, with initial rollout on all NZ Bus services. This iteration of the "HOP card" was met with initial confusion as to its capabilities and the extent of Auckland Transport's integration with Snapper and Snapper's pre-existing infrastructure, which included the ability to make minor transactions with merchants and retailers. Concerns were also raised as to the ability of Auckland's ticketing system to work with Snapper cards used on Wellington's transport network and vice versa, with Auckland Transport later instructing NZ Bus drivers not to accept the Wellington implementation of the Snapper card. Auckland Transport subsequently announced in early-2012 that bus passengers would be required to "swap out" their HOP/Snapper cards for a new integrated ticketing card, also called "HOP", as the Snapper offering would not be supported on ferries, trains and on some bus services. Snapper faced difficulties in developing its technology to work with the Thales system, with Thales' New Zealand chief executive citing that the "failure of Snapper to deliver a functional bus system that meets the ratified standard has caused delays to project go-live". Snapper's "failure" to meet the November 30 deadline imposed by Auckland Transport ultimately led to the organisation severing its relationship with Snapper, citing "concerns about whether Snapper could modify its system in a suitable timeframe". 
Snapper maintained it was "wrongly blamed" for the delays, declaring "Auckland Transport is being disingenuous with its attempt to position Snapper as the reason that the [integrated ticketing] project is delayed." Auckland Transport confirmed it had commissioned Thales to provide the new iteration of the "HOP" smart card – called "AT HOP" – and its ticketing terminals, replacing the HOP/Snapper offering on NZ Bus services and introducing the new card onto ferries, trains and all other bus services. Launch The current blue AT HOP card began rolling out on public transport, starting with the rail network on 28 October 2012. The rollout for all Auckland bus, train and ferry services was completed by March 2014. Spark (then Telecom) had trialed a "virtual AT HOP card" on Android phones with NFC and intended it for release in late 2013. Post-launch operation A fee of 25 cents for each top-up was abolished in July 2014. In September 2016, it was reported that the one millionth AT HOP card had been sold, and that 42 per cent of Auckland adults had a card as of June 2016. The contract with the Thales Group runs until 2021. Auckland Transport has the option to extend the contract to 2026. The expectation is that in 2026, Auckland will implement Project NEXT, an open-loop account-based public transport payment system proposed for New Zealand. With Auckland joining Project NEXT, implementation of this system across the country should be completed, meaning that from then on the whole country will use the same system. References Contactless smart cards Fare collection systems in New Zealand Public transport in Auckland
32039577
https://en.wikipedia.org/wiki/LibreOffice%20Writer
LibreOffice Writer
LibreOffice Writer is the free and open-source word processor and desktop publishing component of the LibreOffice software package and is a fork of OpenOffice.org Writer. Writer is a word processor similar to Microsoft Word and Corel's WordPerfect, with many similar features and file format compatibility. LibreOffice Writer is released under the Mozilla Public License v2.0. As with the entire LibreOffice suite, Writer can be used across a variety of platforms, including Linux, FreeBSD, Mac OS X and Microsoft Windows. Some features Writer is capable of opening and saving to a number of formats, including OpenDocument (ODT is its default format), Microsoft Word's DOC, DOCX, RTF and XHTML.
A spelling and grammar checker (Hunspell)
Built-in drawing tools
Built-in form-building tools
Built-in calculation functions
Built-in equation editor
Export in PDF format, generation of hybrid PDFs (a standard PDF with the source ODF file attached) and creation of fillable PDF forms
The ability to import and edit PDF files
Ability to edit HTML and XHTML files visually, without using code, with WYSIWYG support
Export in HTML, XHTML, XML formats
Export in EPUB ebook format
Contents, index, bibliography
Document signing, password and public-key (GPG) encryption
Change tracking during revisions, document comparison (view changes between two files)
Database integration, including a bibliography database
Mail merge
Scriptable and remotely controllable via the UNO API
OpenType stylistic sets and character variants of fonts are not selectable from the menus, but can be specified manually in the font window. For example, fontname:ss06&cv03 will set the font to stylistic set 6 and choose character variant 3. This is based on the same syntax as Graphite font features. 
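The fontname:ss06&cv03 syntax described above is simply a font name, a colon, and ampersand-separated feature tags. A tiny illustrative parser (hypothetical code, not LibreOffice's actual implementation) makes the structure explicit:

```python
def parse_font_spec(spec: str):
    """Split a 'fontname:feat1&feat2' string into the font name
    and its list of feature tags (empty if no colon is present)."""
    name, sep, feats = spec.partition(":")
    features = feats.split("&") if sep else []
    return name, features

# The example from the text: stylistic set 6 plus character variant 3.
assert parse_font_spec("fontname:ss06&cv03") == ("fontname", ["ss06", "cv03"])
# A plain font name carries no feature tags.
assert parse_font_spec("Liberation Serif") == ("Liberation Serif", [])
```

The same name:tag&tag shape is what Graphite font features use, which is why Writer reuses it for OpenType features.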
Supported file formats Release history Versions for LibreOffice Writer include the following: See also Comparison of office suites Comparison of word processors List of word processors References External links Features page at LibreOffice.org Cross-platform free software Free word processors Writer Linux word processors MacOS word processors Windows word processors
32058867
https://en.wikipedia.org/wiki/WhatsApp
WhatsApp
WhatsApp Messenger, or simply WhatsApp, is an internationally available American freeware, cross-platform centralized instant messaging (IM) and voice-over-IP (VoIP) service owned by Meta Platforms. It allows users to send text messages and voice messages, make voice and video calls, and share images, documents, user locations, and other content. WhatsApp's client application runs on mobile devices but is also accessible from desktop computers, as long as the user's mobile device remains connected to the Internet while they use the desktop app. The service requires a cellular mobile telephone number to sign up. In January 2018, WhatsApp released a standalone business app targeted at small business owners, called WhatsApp Business, to allow companies to communicate with customers who use the standard WhatsApp client. The client application was created by WhatsApp Inc. of Mountain View, California, which was acquired by Facebook in February 2014 for approximately US$19.3 billion. It became the world's most popular messaging application by 2015, and had more than 2 billion users worldwide by February 2020. By 2016 it had become the primary means of Internet communication in regions including Latin America, the Indian subcontinent, and large parts of Europe and Africa. History 2009–2014 WhatsApp was founded by Brian Acton and Jan Koum, former employees of Yahoo!. In January 2009, after purchasing an iPhone and realizing the potential of the app industry on the App Store, Koum and Acton began visiting Koum's friend Alex Fishman in West San Jose to discuss a new type of messaging app that would show "statuses next to individual names of the people". They realized that to take the idea further, they would need an iPhone developer. Fishman visited RentACoder.com, found Russian developer Igor Solomennikov, and introduced him to Koum. Koum named the app WhatsApp to sound like "what's up". On February 24, 2009, he incorporated WhatsApp Inc. in California. 
However, when early versions of WhatsApp kept crashing, Koum considered giving up and looking for a new job. Acton encouraged him to wait for a "few more months". In June 2009, Apple launched push notifications, allowing users to be pinged when they were not using an app. Koum changed WhatsApp so that everyone in the user's network would be notified when a user's status changed. WhatsApp 2.0 was released with a messaging component, and the number of active users suddenly increased to 250,000. Although Acton was working on another startup idea, he decided to join the company. In October 2009, Acton persuaded five ex-Yahoo! friends to invest $250,000 in seed funding, and Acton became a co-founder and was given a stake. He officially joined WhatsApp on November 1. After months in beta, the application launched in November 2009, exclusively on the App Store for the iPhone. Koum then hired a friend in Los Angeles, Chris Peiffer, to develop a BlackBerry version, which arrived two months later. Subsequently, WhatsApp for Symbian OS was added in May 2010, and for Android OS in August 2010. In 2010, WhatsApp was subject to multiple acquisition offers from Google, which were declined. To cover the cost of sending verification texts to users, WhatsApp was changed from a free service to a paid one. In December 2009, the ability to send photos was added to the iOS version. By early 2011, WhatsApp was one of the top 20 apps in Apple's U.S. App Store. In April 2011, Sequoia Capital invested about $8 million for more than 15% of the company, after months of negotiation by Sequoia partner Jim Goetz. By February 2013, WhatsApp had about 200 million active users and 50 staff members. Sequoia invested another $50 million, and WhatsApp was valued at $1.5 billion. Sometime in 2013, WhatsApp acquired Santa Clara-based startup SkyMobius, the developers of Vtok, a video and voice calling app. 
In a December 2013 blog post, WhatsApp claimed that 400 million active users used the service each month.

Facebook subsidiary (since 2014)

On February 19, 2014, just one year after a venture capital financing round at a $1.5 billion valuation, Facebook, Inc. (now Meta Platforms) announced it was acquiring WhatsApp for US$19 billion, its largest acquisition to date. At the time, it was the largest acquisition of a venture-backed company in history. Sequoia Capital received an approximate 5000% return on its initial investment. Facebook, which was advised by Allen & Co, paid $4 billion in cash, $12 billion in Facebook shares, and (advised by Morgan Stanley) an additional $3 billion in restricted stock units granted to WhatsApp's founders Koum and Acton. Employee stock was scheduled to vest over four years subsequent to closing. Days after the announcement, WhatsApp users experienced a loss of service, leading to anger across social media. The acquisition was influenced by the data provided by Onavo, Facebook's research app for monitoring competitors and trending usage of social activities on mobile phones, as well as startups that are performing "unusually well". The acquisition caused a considerable number of users to try and/or move to other message services. Telegram claimed that it acquired 8 million new users; and Line, 2 million. At a keynote presentation at the Mobile World Congress in Barcelona in February 2014, Facebook CEO Mark Zuckerberg said that Facebook's acquisition of WhatsApp was closely related to the Internet.org vision. A TechCrunch article said this about Zuckerberg's vision:

The idea, he said, is to develop a group of basic internet services that would be free of charge to use – 'a 911 for the internet.' These could be a social networking service like Facebook, a messaging service, maybe search and other things like weather.
Providing a bundle of these free of charge to users will work like a gateway drug of sorts – users who may be able to afford data services and phones these days just don't see the point of why they would pay for those data services. This would give them some context for why they are important, and that will lead them to pay for more services like this – or so the hope goes.

Just three days after announcing the Facebook purchase, Koum said they were working to introduce voice calls. He also said that new mobile phones would be sold in Germany with the WhatsApp brand and that their ultimate goal was to be on all smartphones. In August 2014, WhatsApp was the most globally popular messaging app, with more than 600 million users. By early January 2015, WhatsApp had 700 million monthly users and over 30 billion messages every day. In April 2015, Forbes predicted that between 2012 and 2018, the telecommunications industry would lose $386 billion because of over-the-top (OTT) services like WhatsApp and Skype. That month, WhatsApp had over 800 million users. By September 2015, it had grown to 900 million; and by February 2016, one billion. Voice calls between two accounts were added to the app in March and April 2015. On November 30, 2015, the Android WhatsApp client made links to another message service, Telegram, unclickable and uncopyable. Multiple sources confirmed that it was intentional, not a bug, and that it had been implemented when the Android source code that recognized Telegram URLs had been identified. (The word "telegram" appeared in WhatsApp's code.) Some considered it an anti-competitive measure, but WhatsApp offered no explanation.

Since 2016

On January 18, 2016, WhatsApp's co-founder Jan Koum announced that it would no longer charge users a $1 annual subscription fee, in an effort to remove a barrier faced by users without credit cards.
He also said that the app would not display any third-party ads, and that it would have new features such as the ability to communicate with businesses. By June 2016, the company's blog reported more than 100 million voice calls per day were being placed on WhatsApp. On November 10, 2016, WhatsApp launched a beta version of two-step verification for Android users, which allowed them to use their email addresses for further protection. Also in November 2016, Facebook ceased collecting WhatsApp data for advertising in Europe. Later that month, video calls between two accounts were introduced. On February 24, 2017 (WhatsApp's 8th birthday), WhatsApp launched a new Status feature similar to Snapchat and Facebook stories. On May 18, 2017, the European Commission announced that it was fining Facebook €110 million for "providing misleading information about WhatsApp takeover" in 2014. The Commission said that in 2014, when Facebook acquired the messaging app, it "falsely claimed it was technically impossible to automatically combine user information from Facebook and WhatsApp." However, in the summer of 2016, WhatsApp had begun sharing user information with its parent company, allowing information such as phone numbers to be used for targeted Facebook advertisements. Facebook acknowledged the breach, but said the errors in their 2014 filings were "not intentional". In September 2017, WhatsApp's co-founder Brian Acton left the company to start a nonprofit group, later revealed as the Signal Foundation, which developed the WhatsApp competitor Signal. He explained his reasons for leaving in an interview with Forbes a year later. WhatsApp also announced a forthcoming business platform to enable companies to provide customer service at scale, and airlines KLM and Aeroméxico announced their participation in the testing. Both airlines had previously launched customer services on the Facebook Messenger platform.
In January 2018, WhatsApp launched WhatsApp Business for small business use. In April 2018, WhatsApp co-founder and CEO Jan Koum announced he would be leaving the company. Due to concerns about privacy, advertising, and monetization by Facebook, Acton and Koum left before November 2018, giving up $1.3 billion in unvested stock options. Facebook later announced that Koum's replacement would be Chris Daniels. Later in September 2018, WhatsApp introduced group audio and video call features. In October, the "Swipe to Reply" option was added to the Android beta version, 16 months after it was introduced for iOS. On October 25, 2018, WhatsApp announced support for stickers; unlike other platforms, however, WhatsApp requires third-party apps to add stickers to WhatsApp. On November 25, 2019, WhatsApp announced an investment of $250,000 into the startup ecosystem through a partnership with Startup India to provide 500 startups with Facebook ad credits of $500 each. In December 2019, WhatsApp announced that a new update would lock out any Apple users who hadn't updated to iOS 9 or higher, and Samsung, Huawei, Sony and Google users who hadn't updated to version 4.0, by February 1, 2020. The company also reported that Windows Phone operating systems would no longer be supported after December 31, 2019. WhatsApp was announced to be the third most downloaded mobile app of the decade from 2010 to 2019. In early 2020, WhatsApp launched its "dark mode" for iPhone and Android devices – a new design consisting of a darker palette. In March, WhatsApp partnered with the World Health Organization and UNICEF to provide messaging hotlines for people to get information on the 2019–2020 coronavirus pandemic. That same month, WhatsApp began testing a feature to help users find out more information and context about information they receive. In October 2020, WhatsApp rolled out a feature allowing users to mute both individuals and group chats forever.
The mute chat settings now show "8 hours", "1 week", and "Always" options; the "Always" option replaces the "1 year" option that was originally part of the settings. In January 2021, WhatsApp announced a new Privacy Policy which users would be forced to accept by February 8, 2021, or stop using the app. The policy would allow WhatsApp to share data with its parent company, Facebook. The policy does not apply in the EU, since it violates the principles of the GDPR. Facing pushback about Facebook data sharing and a lack of clarity, WhatsApp postponed the update to May 15, 2021, but announced it had no plans to limit the functionality of the app for those who did not approve the new terms, or to give them persistent reminders to do so. On March 1, 2021, WhatsApp started rolling out support for third-party animated stickers in Iran, Brazil and Indonesia. On March 24, 2021, WhatsApp launched third-party animated stickers worldwide. In July 2021, WhatsApp announced the development of an Android beta version update supporting the sending of uncompressed images and videos in three options: Auto, Best Quality and Data Saver. The same month, the Android beta enabled end-to-end encryption for cloud backups stored in Facebook's cloud. The backup is locked by a passcode and a 64-digit recovery key and cannot be accessed without them. The company is also testing multi-device support, which would allow users to launch WhatsApp on their desktop devices without keeping their phone session active. On October 4, 2021, Facebook had its worst outage since 2008. The outage also affected other platforms owned by Facebook, such as Instagram and WhatsApp. Security experts identified the problem as possibly being DNS-related. In December 2021, it was reported that WhatsApp had started hiding users' online status, called "Last Seen" in the app, from people who are not in the user's contacts or with whom the user has not yet had a conversation.
The option is set by default but can be changed to allow all contacts to see a user's online status.

2019 lawsuit

In May 2019, WhatsApp was attacked by hackers who installed spyware on a number of victims' smartphones. The hack, allegedly developed by Israeli surveillance technology firm NSO Group, injected malware onto WhatsApp users' phones via a remote-exploit bug in the app's Voice over IP calling functions. A Wired report noted the attack was able to inject malware via calls to the targeted phone, even if the user did not answer the call. On October 29, WhatsApp filed a lawsuit against NSO Group in a San Francisco court, claiming that the alleged cyberattack violated US laws including the Computer Fraud and Abuse Act (CFAA). According to WhatsApp, the exploit "targeted at least 100 human-rights defenders, journalists and other members of civil society" among a total of 1,400 users in 20 countries.

Platform support

After months in beta, the first official release of WhatsApp launched in November 2009, exclusively on the App Store for iPhone. In January 2010, support for BlackBerry smartphones was added; and subsequently for Symbian OS in May 2010, and for Android OS in August 2010. In August 2011, a beta for Nokia's non-smartphone OS Series 40 was added. A month later, support for Windows Phone was added, followed by BlackBerry 10 in March 2013. In April 2015, support for Samsung's Tizen OS was added. The oldest device capable of running WhatsApp was the Symbian-based Nokia N95 released in March 2007. (As of June 2017, WhatsApp is no longer compatible with it.) In August 2014, WhatsApp released an Android update, adding support for Android Wear smartwatches. On January 21, 2015, WhatsApp launched WhatsApp Web, a browser-based web client that could be used by syncing with a mobile device's connection.
On February 26, 2016, WhatsApp announced they would cease support for BlackBerry (including BlackBerry 10), Nokia Series 40, and Symbian S60, as well as older versions of Android (2.2), Windows Phone (7.0), and iOS (6), by the end of 2016. BlackBerry, Nokia Series 40, and Symbian support was then extended to June 30, 2017. In June 2017, support for BlackBerry and Series 40 was once again extended until the end of 2017, while Symbian was dropped. Support for BlackBerry, for Windows Phone 8.0 and older, and for iOS 6 and older was dropped on January 1, 2018, but was extended to December 2018 for Nokia Series 40. In July 2018, it was announced that WhatsApp would soon be available for KaiOS feature phones. In October 2019, WhatsApp officially launched a new fingerprint app-locking feature for Android users. In August 2021, WhatsApp launched a feature that allows chat history to be transferred between mobile operating systems. The feature initially launched only on Samsung phones, with plans to expand to other Android and iOS devices in the future.

WhatsApp Web

WhatsApp was officially made available for PCs through a web client, under the name WhatsApp Web, in late January 2015 through an announcement made by Koum on his Facebook page: "Our web client is simply an extension of your phone: the web browser mirrors conversations and messages from your mobile device—this means all of your messages still live on your phone". As of January 21, 2015, the desktop version was only available to Android, BlackBerry, and Windows Phone users. Later on, it also added support for iOS, Nokia Series 40, and Nokia S60 (Symbian). Previously, the WhatsApp user's handset had to be connected to the Internet for the browser application to function, but as of an update in October 2021 that is no longer the case. All major desktop browsers are supported except for Internet Explorer. WhatsApp Web's user interface is based on the default Android one and can be accessed through web.whatsapp.com.
Access is granted after the user scans their personal QR code through their mobile WhatsApp application. There are similar solutions for macOS, such as the open-source ChitChat, previously known as WhatsMac. In January 2021, the limited Android beta version allowed users to use WhatsApp Web without having to keep the mobile app connected to the Internet. In March 2021, this beta feature was extended to iOS users. However, linked devices (using WhatsApp Web, WhatsApp Desktop or Facebook Portal) become disconnected if people don't use their phone for over 14 days. The multi-device beta can only show messages from the last 3 months on the web version, which was not the case without the beta, because the web version was syncing with the phone.

Microsoft Windows and Mac

On May 10, 2016, the messaging service was introduced for both Microsoft Windows and macOS operating systems. WhatsApp has since added support for video calls and voice calls from its desktop clients. Similar to the WhatsApp Web format, the app, which is synced with a user's mobile device, is available for download on the website. It supports Windows 8 and OS X 10.10 and higher.

Apple iPad

A story circulated in 2019 that iPad support was coming. However, as of May 2021, WhatsApp does not run on the iPad. iPad users searching for WhatsApp are shown numerous third-party clients. Several top results have names and logos resembling WhatsApp itself, and some users do not realize they are using a third-party client. Per WhatsApp's policy, using third-party clients can result in the account getting permanently banned.

Technical

WhatsApp uses a customized version of the open standard Extensible Messaging and Presence Protocol (XMPP). Upon installation, it creates a user account using one's phone number as the username (Jabber ID: [phone number]@s.whatsapp.net).
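The account-identifier scheme just described can be illustrated with a short sketch. This is hypothetical code written for this article, not WhatsApp's implementation: it strips formatting from a phone number and appends the XMPP domain to form the Jabber ID.

```python
# Illustrative sketch only - not WhatsApp's actual code. It models the
# account-identifier scheme described above: the phone number, stripped of
# formatting characters, becomes the local part of an XMPP-style Jabber ID
# at the s.whatsapp.net domain.
def jabber_id(phone_number: str) -> str:
    """Return the JID derived from a phone number (country code included)."""
    digits = "".join(ch for ch in phone_number if ch.isdigit())
    return f"{digits}@s.whatsapp.net"

print(jabber_id("+1 (650) 555-0100"))  # 16505550100@s.whatsapp.net
```

The same normalization (digits only, with country code) also applies to how phone numbers appear in WhatsApp's public URLs.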
WhatsApp software automatically compares all the phone numbers from the device's address book with its central database of WhatsApp users to automatically add contacts to the user's WhatsApp contact list. Previously, the Android and Nokia Series 40 versions used an MD5-hashed, reversed version of the phone's IMEI as the password, while the iOS version used the phone's Wi-Fi MAC address instead of the IMEI. A 2012 update now generates a random password on the server side. Alternatively, a user can send a message to any contact in the WhatsApp database through the URL https://api.whatsapp.com/send/?phone=[phone number], where [phone number] is the number of the contact including the country code. Some dual-SIM devices may not be compatible with WhatsApp, though there are some workarounds for this. In February 2015, WhatsApp introduced a voice calling feature; this helped WhatsApp to attract a completely different segment of the user population. WhatsApp's voice codec is Opus, which uses the modified discrete cosine transform (MDCT) and linear predictive coding (LPC) audio compression algorithms. WhatsApp uses Opus at 8 or 16 kHz sampling rates. On November 14, 2016, WhatsApp added a video calling feature for users across Android, iPhone, and Windows Phone devices. In November 2017, WhatsApp released a new feature that would let its users delete messages sent by mistake within a time frame of 7 minutes. Multimedia messages are sent by uploading the image, audio or video to be sent to an HTTP server and then sending a link to the content along with its Base64-encoded thumbnail (if applicable). WhatsApp follows a "store and forward" mechanism for exchanging messages between two users. When a user sends a message, it first travels to the WhatsApp server, where it is stored. Then the server repeatedly requests the receiver to acknowledge receipt of the message. As soon as the message is acknowledged, the server drops the message; it is no longer available in the server's database.
The WhatsApp server keeps an undelivered message in its database for only 30 days (that is, when the receiver is not active on WhatsApp for 30 days).

End-to-end encryption

On November 18, 2014, Open Whisper Systems announced a partnership with WhatsApp to provide end-to-end encryption by incorporating the encryption protocol used in Signal into each WhatsApp client platform. Open Whisper Systems said that they had already incorporated the protocol into the latest WhatsApp client for Android, and that support for other clients, group/media messages, and key verification would be coming soon after. WhatsApp confirmed the partnership to reporters, but there was no announcement or documentation about the encryption feature on the official website, and further requests for comment were declined. In April 2015, German magazine Heise Security used ARP spoofing to confirm that the protocol had been implemented for Android-to-Android messages, and that WhatsApp messages from or to iPhones running iOS were still not end-to-end encrypted. They expressed the concern that regular WhatsApp users still could not tell the difference between end-to-end encrypted messages and regular messages. On April 5, 2016, WhatsApp and Open Whisper Systems announced that they had finished adding end-to-end encryption to "every form of communication" on WhatsApp, and that users could now verify each other's keys. Users were also given the option to enable a trust-on-first-use mechanism in order to be notified if a correspondent's key changes. According to a white paper that was released along with the announcement, WhatsApp messages are encrypted with the Signal Protocol. WhatsApp calls are encrypted with SRTP, and all client-server communications are "layered within a separate encrypted channel". The Signal Protocol library used by WhatsApp is open-source and published under the GPLv3 license. On October 14, 2021, WhatsApp rolled out end-to-end encryption for backups on Android and iOS.
The feature has to be turned on by the user and provides the option to encrypt the backup either with a password or a 64-digit encryption key.

WhatsApp Payments

WhatsApp Payments (marketed as WhatsApp Pay) is a peer-to-peer money transfer feature that is currently only available in India. In July 2017, WhatsApp received permission from the National Payments Corporation of India (NPCI) to enter into partnership with multiple banks to allow users to make in-app payments and money transfers using the Unified Payments Interface (UPI). UPI enables account-to-account transfers from a mobile app without requiring any details of the beneficiary's bank. On November 6, 2020, WhatsApp announced that it had received approval for providing a payment service, although restricted to a maximum of 20 million users initially. The service was subsequently rolled out.

WhatsApp Cryptocurrency

On February 28, 2019, The New York Times reported that Facebook was "hoping to succeed where Bitcoin failed" by developing an in-house cryptocurrency that would be incorporated into WhatsApp. The project reportedly involved over 50 engineers under the direction of former PayPal president David A. Marcus. This "Facebook coin" would reportedly be a stablecoin pegged to the value of a basket of different foreign currencies. In June 2019, Facebook formally announced that the project would be named Libra, and that the company planned for a digital wallet named "Calibra" to be integrated into Facebook and WhatsApp. After financial regulators in the US, Europe, and other regions raised concerns, Calibra was rebranded to Novi in May 2020, and Libra was rebranded to Diem in December 2020. Facebook has stated that Novi would require a government-issued ID for verification and that the wallet app would have fraud protection.
Reception and criticism

Hoaxes and fake news

Forwarding limitations

WhatsApp has repeatedly imposed limits on message forwarding in response to the spread of misinformation in countries such as India and Australia. The measure, first introduced in 2018 to combat spam, was expanded and remained active in 2021. WhatsApp has stated the forwarding limits have helped to curb the spread of misinformation regarding COVID-19.

Mob murders in India

In July 2018, WhatsApp encouraged people to report fraudulent or inciting messages after lynch mobs in India murdered innocent people because of malicious WhatsApp messages falsely accusing the victims of intending to abduct children.

2018 elections in Brazil

In an investigation on the use of social media in politics, it was found that WhatsApp was being abused for the spread of fake news in the 2018 presidential elections in Brazil. Furthermore, it has been reported that US$3 million was spent in illegal off-the-books contributions related to this practice. Researchers and journalists have called on WhatsApp's parent company, Facebook, to adopt measures similar to those adopted in India and restrict the spread of hoaxes and fake news.

Security and privacy

WhatsApp was initially criticized for its lack of encryption, sending information as plaintext. Encryption was first added in May 2012. End-to-end encryption was only fully implemented in April 2016, after a two-year process. WhatsApp is known to make extensive use of outside contractors and artificial intelligence systems to examine user messages, images and videos, and to turn over to law enforcement metadata including critical account and location information. In 2016, WhatsApp was widely praised for the addition of end-to-end encryption and earned 6 out of 7 points on the Electronic Frontier Foundation's "Secure Messaging Scorecard".
WhatsApp was criticized by security researchers and the Electronic Frontier Foundation for using backups that are not covered by end-to-end encryption and that allow messages to be accessed by third parties. In May 2019, a security vulnerability in WhatsApp was found and fixed that allowed a remote person to install spyware by making a call which did not need to be answered. In September 2019, WhatsApp was criticized for its implementation of the 'delete for everyone' feature. iOS users can elect to save media to their camera roll automatically. When a user deletes media for everyone, WhatsApp does not delete images saved in the iOS camera roll, and so those users are able to keep the images. WhatsApp released a statement saying that "the feature is working properly," and that images stored in the camera roll cannot be deleted due to Apple's security layers. In November 2019, WhatsApp released a new privacy feature that let users decide who can add them to groups. In December 2019, WhatsApp confirmed a security flaw that would allow hackers to use a malicious GIF image file to gain access to the recipient's data. When the recipient opened the gallery within WhatsApp, even without sending the malicious image, the hack was triggered, leaving the device and its contents vulnerable. The flaw was patched, and users were encouraged to update WhatsApp. On December 17, 2019, WhatsApp fixed a security flaw that allowed cyber attackers to repeatedly crash the messaging application for all members of a group chat, which could only be fixed by forcing the complete uninstall and reinstall of the app. The bug was discovered by Check Point in August 2019 and reported to WhatsApp. It was fixed in version 2.19.246 onwards. For security purposes, since February 1, 2020, WhatsApp has been made unavailable on smartphones using legacy operating systems, such as Android 2.3.7 or older and iPhone iOS 8 or older, that are no longer updated by their providers.
In April 2020, the NSO Group said its governmental clients, not the company itself, were responsible for the alleged human rights abuses reported by WhatsApp. In its court filings, the group claimed that the lawsuit brought against it by WhatsApp threatened to infringe on its clients' "national security and foreign policy concerns". However, the company did not reveal the names of its end users, which, according to research by Citizen Lab, include Saudi Arabia, Bahrain, Kazakhstan, Morocco, Mexico and the United Arab Emirates. On December 16, 2020, a claim that WhatsApp gave Google access to private messages was included in the anti-trust case against the latter. As the complaint is heavily redacted due to being an ongoing case, it doesn't disclose whether this alleges tampering with the app's end-to-end encryption or simply Google accessing user backups. In January 2021, WhatsApp announced an update to its Privacy Policy stating that WhatsApp would share user data with Facebook and its "family of companies" starting in February 2021. Previously, users could opt out of such data sharing, but the new policy removes this option. The new Privacy Policy does not apply within the EU, since it is illegal under the GDPR. Facebook and WhatsApp have been widely criticized for this move. Enforcement of the privacy policy was postponed from February 8 to May 15, 2021, but WhatsApp announced it had no plans to limit the functionality of the app for those who did not approve the new terms, or to give them persistent reminders to do so. On October 15, 2021, WhatsApp announced that it would begin offering an end-to-end encryption service for chat backups, meaning no third party (including both WhatsApp and the cloud storage vendor) will have access to a user's information. This new encryption feature adds an additional layer of protection to chat backups stored either on Apple iCloud or Google Drive.
On November 29, 2021, an FBI document was uncovered by Rolling Stone, revealing that WhatsApp responds to warrants and subpoenas from law enforcement within minutes, providing user metadata to the authorities. The metadata includes things like the user's contact information and their address book. In January 2022, an unsealed surveillance application revealed that WhatsApp had started tracking seven users from China and Macau in November 2021, based on a request from DEA investigators. The app collected data on who the users contacted and how often, as well as when and how they were using the app. This is reportedly not a singular occurrence, as federal agencies can use the Electronic Communications Privacy Act to covertly track users without submitting any probable cause or linking a user's number to their identity. At the beginning of 2022, it was revealed that the San Diego–based startup Boldend had developed tools to hack WhatsApp's encryption, gaining access to user data, at some point since the startup's inception in 2017. The vulnerability was reportedly patched in January 2021. Boldend is financed, in part, by Peter Thiel, a notable investor in Facebook.

National Health Service of the United Kingdom

In 2018, it was reported that around 500,000 National Health Service (NHS) staff used WhatsApp and other instant messaging systems at work, and around 29,000 had faced disciplinary action for doing so. Higher usage was reported by frontline clinical staff to keep up with care needs, even though NHS trust policies do not permit their use.

Mods and fake versions

In March 2019, WhatsApp released a guide for users who had installed unofficial modified versions of WhatsApp and warned that it may ban those using unofficial clients.

NSO Group

In October 2019, WhatsApp launched a lawsuit against the Israeli surveillance firm NSO Group, stating it was behind the cyber attacks on over 100 human rights activists, journalists, lawyers, and academics.
WhatsApp also claimed that the firm violated American law in an "unmistakable pattern of abuse". On July 16, 2020, a US federal judge ruled that the lawsuit brought by WhatsApp and its parent company Facebook against NSO Group could proceed. The judge denied most of the arguments made by the NSO Group.

Jeff Bezos phone hack

In January 2020, a digital forensic analysis revealed that the Amazon founder Jeff Bezos had received an encrypted message on WhatsApp from the official account of Saudi Arabia's Crown Prince Mohammed bin Salman. The message reportedly contained a malicious file, the receipt of which resulted in Bezos' phone being hacked. The United Nations special rapporteurs David Kaye and Agnes Callamard later confirmed that Jeff Bezos' phone had been hacked through WhatsApp, as he was one of the targets on a Saudi hit list of individuals close to The Washington Post journalist Jamal Khashoggi.

Tek Fog

In January 2022, an investigation by The Wire found that the BJP, an Indian political party, allegedly used an app called Tek Fog which was capable of hacking inactive WhatsApp accounts en masse in order to mass-message their contacts with propaganda. According to The Wire, a whistleblower with app access was able to hack a test WhatsApp account controlled by reporters "within minutes."

Terrorism

In December 2015, it was reported that the terrorist organization ISIS had been using WhatsApp to plot the November 2015 Paris attacks. According to The Independent, ISIS also uses WhatsApp to traffic sex slaves. In March 2017, British Home Secretary Amber Rudd said encryption capabilities of messaging tools like WhatsApp are unacceptable, as news reported that Khalid Masood had used the application several minutes before perpetrating the 2017 Westminster attack. Rudd publicly called for police and intelligence agencies to be given access to WhatsApp and other encrypted messaging services to prevent future terror attacks.
In April 2017, the perpetrator of the Stockholm truck attack reportedly used WhatsApp to exchange messages with an ISIS supporter shortly before and after the incident. The messages involved discussing how to make an explosive device and a confession to the attack. In April 2017, nearly 300 WhatsApp groups with about 250 members each were reportedly being used to mobilize stone-pelters in Jammu and Kashmir to disrupt security forces' operations at encounter sites. According to police, 90% of these groups were closed down after counselling of the groups' admins. Further, after a six-month probe which involved the infiltration of 79 WhatsApp groups, the National Investigation Agency reported that of the roughly 6,386 members and admins of these groups, about 1,000 were residents of Pakistan and Gulf nations. Further, for their help in disrupting anti-terror operations, the Indian stone-pelters were being funded through barter trade from Pakistan and other indirect means.

Scams and malware

It has been asserted that there are numerous ongoing scams on WhatsApp that let hackers spread viruses or malware. In May 2016, some WhatsApp users were reported to have been tricked into downloading a third-party application called WhatsApp Gold, which was part of a scam that infected the users' phones with malware. A message promising access to WhatsApp friends' conversations or contact lists became the most common scam targeting users of the application in Brazil; clicking on the message actually sends paid text messages. Since December 2016, more than 1.5 million people have clicked and lost money. Another application, called GB WhatsApp, is considered malicious by cybersecurity firm Symantec because it usually performs some unauthorized operations on end-user devices.

Bans

China

WhatsApp is owned by Facebook, whose main social media service has been blocked in China since 2009.
In September 2017, security researchers reported to The New York Times that the WhatsApp service had been completely blocked in China.
Iran
On May 9, 2014, the government of Iran announced that it had proposed to block access to the WhatsApp service for Iranian residents. "The reason for this is the assumption of WhatsApp by the Facebook founder Mark Zuckerberg, who is an American Zionist," said Abdolsamad Khorramabadi, head of the country's Committee on Internet Crimes. Subsequently, Iranian president Hassan Rouhani issued an order to the Ministry of ICT to stop filtering WhatsApp.
Turkey
Turkey temporarily banned WhatsApp in 2016, following the assassination of the Russian ambassador to Turkey.
Brazil
On March 1, 2016, Diego Dzodan, Facebook's vice-president for Latin America, was arrested in Brazil for not cooperating with an investigation in which WhatsApp conversations were requested. At dawn the next day, March 2, 2016, Dzodan was released because the Court of Appeal held that the arrest was disproportionate and unreasonable. On May 2, 2016, mobile providers in Brazil were ordered to block WhatsApp for 72 hours for the service's second failure to cooperate with criminal court orders. Once again, the block was lifted following an appeal, after less than 24 hours. Brazil's Central Bank issued an order to Visa and Mastercard on June 23, 2020, to stop working with WhatsApp on its new electronic payment system. A statement from the Bank asserted that the decision to block the Facebook-owned company's latest offering was taken in order to “preserve an adequate competitive environment” in the mobile payments space and to ensure the “functioning of a payment system that's interchangeable, fast, secure, transparent, open and cheap.”
Uganda
The government of Uganda banned WhatsApp and Facebook, along with other social media platforms, to enforce a tax on the use of social media.
Users are charged 200 shillings per day to access these services under the new law set by parliament.
United Arab Emirates (UAE)
The United Arab Emirates banned WhatsApp video chat and VoIP call applications as early as 2013, in what is often reported as an effort to protect the commercial interests of its home-grown, nationally owned telecom providers (du and Etisalat). The Emirati app ToTok has received press coverage suggesting it is able to spy on users.
Cuba
In July 2021, the Cuban government blocked access to several social media platforms, including WhatsApp, to curb the spread of information during the anti-government protests.
Switzerland
In December 2021, the Swiss army banned the use of WhatsApp and several other non-Swiss encrypted messaging services by army personnel. The ban was prompted by concerns that US authorities could access user data of such apps under the CLOUD Act. The army recommended that all army personnel use Threema instead, as the service is based in Switzerland.
Zambia
In August 2021, the digital rights organization Access Now reported that WhatsApp, along with several other social media apps, was being blocked in Zambia for the duration of the general election. The organization reported a massive drop-off in traffic for the blocked services, though the country's government made no official statement about the block.
Third-party clients
In mid-2013, WhatsApp Inc. filed a DMCA takedown request against the discussion thread on the XDA Developers forums about the then-popular third-party client "WhatsApp Plus". In 2015, some third-party WhatsApp clients that were reverse-engineering the WhatsApp mobile app received cease-and-desist letters demanding that they stop activities violating WhatsApp's legal terms. As a result, users of third-party WhatsApp clients were also banned.
WhatsApp Business
In September 2017, WhatsApp confirmed rumors that it was building and testing two new tools for businesses.
The apps were launched in January 2018, separated by the intended user base:
A WhatsApp Business app for small companies
An Enterprise Solution for bigger companies with global customer bases, such as airlines, e-commerce retailers and banks, who would be able to offer customer service and conversational commerce (e-commerce) via WhatsApp chat, using live agents or chatbots. (As far back as 2015, companies like Meteordesk had provided unofficial solutions for enterprises to attend to large numbers of users, but these were shut down by WhatsApp.)
In October 2020, Facebook announced the introduction of pricing tiers for services offered via the WhatsApp Business API, charged on a per-message basis.
User statistics
WhatsApp handled ten billion messages per day in August 2012, growing from two billion in April 2012, and one billion the previous October. On June 13, 2013, WhatsApp announced that it had reached a new daily record by processing 27 billion messages. According to the Financial Times, WhatsApp "has done to SMS on mobile phones what Skype did to international calling on landlines". By April 22, 2014, WhatsApp had over 500 million monthly active users, 700 million photos and 100 million videos were being shared daily, and the messaging system was handling more than 10 billion messages each day. On August 24, 2014, Koum announced on his Twitter account that WhatsApp had over 600 million active users worldwide. At that point WhatsApp was adding about 25 million new users every month, or 833,000 active users per day. In May 2017, it was reported that WhatsApp users spend over 340 million minutes on video calls each day on the app, the equivalent of roughly 646 years of video calls per day. By February 2017, WhatsApp had over 1.2 billion users globally, reaching 1.5 billion monthly active users by the end of 2017.
In January 2020, WhatsApp registered over 5 billion installs on the Google Play Store, making it only the second non-Google app to achieve this milestone. As of February 2020, WhatsApp had over 2 billion users globally.
Specific markets
India is by far WhatsApp's largest market in terms of total number of users. In May 2014, WhatsApp crossed 50 million monthly active users in India, then 70 million in October 2014, making users in India 10% of WhatsApp's total user base. In February 2017, WhatsApp reached 200 million monthly active users in India. Israel is one of WhatsApp's strongest markets in terms of ubiquitous usage. According to Globes, by 2013 the application was already installed on 92% of all smartphones, with 86% of users reporting daily use. WhatsApp's group chat feature is reportedly used by many Israeli families to stay in contact with each other.
Competition
WhatsApp competes with a number of messaging services, including iMessage (estimated 1.3 billion active users), WeChat (1.2 billion active users), Telegram (500 million users), Viber (260 million active users), LINE (217 million active users), and Signal (over 50 million active users). Both Telegram and Signal in particular were reported to see registration spikes during WhatsApp outages and controversies. WhatsApp has increasingly drawn innovations from competing services, such as a Telegram-inspired web version and features for groups. In 2016, WhatsApp was accused of copying features from a then-unreleased version of iMessage.
See also
Comparison of instant messaging clients
Comparison of user features of messaging platforms
Comparison of VoIP software
Criticism of Facebook
List of most-downloaded Google Play applications
Instagram
References
External links
Social media Meta Platforms acquisitions Meta Platforms applications 2014 mergers and acquisitions Android Auto software VoIP software Mobile applications Android (operating system) software BlackBerry software IOS software Symbian software Instant messaging clients Cross-platform software Communication software Companies based in Mountain View, California Software companies based in the San Francisco Bay Area 2009 software Software companies of the United States
32072257
https://en.wikipedia.org/wiki/Marc%20Zwillinger
Marc Zwillinger
Marc Zwillinger is the founder and managing member of the Washington, D.C.–based data privacy and information security law firm ZwillGen. Zwillinger has been active in the field of Internet law on issues such as encryption, data security, government access to user data, data breaches, and fantasy sports.
Career
Marc Zwillinger founded Zwillinger Genetski LLP (now ZwillGen PLLC), a boutique law firm specializing in data protection and information security, in March 2010. Prior to founding ZwillGen, Zwillinger was a partner at Sonnenschein Nath & Rosenthal in the firm's Internet, Communications & Data Protection Group, which he had created (originally called Information Security and Anti-Piracy). Zwillinger worked for the United States Department of Justice in the Computer Crime and Intellectual Property Section as a trial attorney from 1997 to 2000. Before entering the DOJ, Zwillinger was a litigation associate at Kirkland & Ellis from 1995 to 1997. Zwillinger started his career clerking for the Honorable Mark L. Wolf of the United States District Court for the District of Massachusetts from 1994 to 1995.
Education
Marc earned his bachelor's degree from Tufts University in 1991, and received his law degree, graduating magna cum laude, from Harvard Law School in 1994.
Work with Apple
Zwillinger has represented Apple in several cases, including those brought under the 18th-century All Writs Act involving government access to user data. In 2015, Zwillinger, representing Apple, contested unlocking an iPhone 5S belonging to a defendant accused of selling drugs in New York. Most notably, in 2016, Zwillinger represented Apple in the San Bernardino case, in which the government tried to compel Apple to unlock the personal iPhone recovered from one of the terrorists in the San Bernardino attack. The case itself was later dropped.
Work with Yahoo
In 2008 Zwillinger represented Yahoo!
over the government's efforts to force Yahoo! to comply with "surveillance orders and other types of legal process in national security investigations." Of the experience, Zwillinger said that he was proud to be one of the "lawyers who represented Yahoo in its historic challenge to the government's surveillance program in the Foreign Intelligence Surveillance Court ("FISC") and the Foreign Intelligence Court of Review ("FISCR")."
Service
Zwillinger is one of five amici curiae appointed to serve before the Foreign Intelligence Surveillance Court ("FISC"), a position stipulated under the USA Freedom Act. Amici serve staggered terms, with Zwillinger slated to serve a four-year term.
Awards
From 2007 through 2015, Zwillinger was ranked in Chambers & Partners USA as a leading lawyer in his field of Privacy & Data Security.
References
External links
Lawyers who have represented the United States government 1969 births Living people Tufts University alumni Harvard Law School alumni Kirkland & Ellis alumni
32095607
https://en.wikipedia.org/wiki/ENX%20Association
ENX Association
The ENX Association is an association of European vehicle manufacturers, suppliers and organisations.
History
The Association
The ENX Association, which was founded in 2000, is an association under the French law of 1901. Its headquarters are in Boulogne-Billancourt (France) and Frankfurt am Main. The 15 members of the association, which are all also represented on the so-called ENX board, are Audi, BMW, Bosch, Continental, Daimler, DGA, Ford, Renault, Volkswagen, as well as the automotive associations ANFAC (Spain), GALIA (France), SMMT (UK), and VDA (Germany). The association can decide to accept additional members upon request; however, the association's rules state that the total number of members is limited.
Fields of activity
The ENX Association is a non-profit organisation that acts as a legal and organisational umbrella for the ENX network standard. It provides the participating companies with a platform for the exchange of information and for the initiation of pre-competitive project cooperations in the field of information technology. The main motivation for the German and French industries in creating the standard was to protect intellectual property while at the same time reducing the costs and complexity of data exchange within the automotive industry. One cited benefit of creating a "Trusted Community" for branches of industry is that, although companies protect their own infrastructures, problems occur where encryption or authentication solutions are used across different companies and yet must be treated as confidential. An impasse is often reached, at the latest, when both sides seek to implement their own mechanisms. This is demonstrated by the example of email encryption, where safety regulations clash over shared application use and thousands of unencrypted data connections. A shared, confidential infrastructure provides a remedy here.
Ford cites the use of ENX to communicate with suppliers as an example of how considerable savings can be made through consolidation and standardisation. The implementation of industrial requirements for IT security between companies represents a further sphere of activity. The following are described as subject areas here:
Secure cloud computing (between companies)
Protecting intellectual property during development cooperations (e.g. using Enterprise Rights Management, ERM)
The ENX Association is a member of the ERM.Open project by ProSTEP iViP e.V. and was active in the forerunner project SP2 together with Adobe, BMW, FH Augsburg, Continental, Daimler, Fraunhofer IGD, Microsoft, PROSTEP, Siemens PLM, TU Darmstadt, TAC, Volkswagen and ZF Friedrichshafen. The SkIdentity project, which the ENX Association is involved with, was named as one of 12 winners of the BMWi technology competition "Secure cloud computing for medium-sized businesses and the public sector – Trusted Cloud" by the Federal Ministry of Economics and Technology (BMWi) on 1 March 2011 at the IT exhibition CeBIT in Hanover. The BMWi set up the Trusted Cloud programme to promote "the development and testing of innovative, secure and legally compliant cloud solutions".
Presidents of the ENX Association
The presidents of the ENX Association have been:
Philippe Ludet (since July 2019)
Clive Johnson (April 2013 – July 2019)
Prof. Dr. Armin Vornberger (October 2005 – April 2013)
Hans-Joachim Heister, Ford-Werke GmbH (July 2001 – October 2005)
Dr. Gunter Zimmermeyer, Verband der Automobilindustrie e.V. (July 2000 – July 2001)
ENX Association memberships
The ENX Association is a member of the following associations and organisations:
Automotive Industry Action Group (AIAG), Southfield, Michigan
Bundesverband Informationswirtschaft, Telekommunikation und neue Medien e.V. (BITKOM)
ProSTEP iViP e.V.
RIPE NCC
In addition, there are two-way affiliations with ANFAC, GALIA, and SMMT.
Use of the ENX network
Usage scenarios
The European automotive industry's communication network of the same name is based on the standards set by the ENX Association concerning security, availability and interoperability. The so-called industry network guarantees the secure exchange of development, production control and logistical data within the European automotive industry. The automotive industry is shaped by strong international cooperation and the necessity for companies to coordinate closely linked processes, which require precise alignment and a seamless exchange of data between partners. This makes "integrated global network concepts" necessary. ENX is described as a platform that creates the foundation for these types of cooperative production models. A realignment began at the end of 2002. The aim was to bring the technical development in line with user requirements on a consistent basis, particularly for small and medium-sized businesses. The implementation took several years. In June 2004, French users complained about the lack of cost-effective entry-level solutions in the France Telecom portfolio. In March 2011, over 1,500 companies within the automotive and other industries were using the network, which is available worldwide, in over 30 countries. The network can be used for all IP-compatible protocols and applications. Its uses range from classic EDI data exchange, through access to databases and secure email exchange, to the carrying out of video conferences. The use of EDI transfer protocols, such as OFTP (Odette File Transfer Protocol), OFTP2 and AS2, is widespread in the ENX network. OFTP2, which was developed from 2004, allows for use via the public Internet. According to the trade press, some vehicle manufacturers have been demanding the use of OFTP2 over the Internet since 2010. "Tens of thousands of suppliers" are affected.
In this medium, which is accessible to everyone, substantially more security is required for the transfer of sensitive data; it is difficult to estimate the implementation costs.
Registration as a pre-requisite for use
Companies must register with the ENX Association in order to use the ENX network. Registration can either be completed directly with the ENX Association or via one of its representatives.
Representatives of the ENX Association
In some countries and industries, ENX is represented by industrial associations and organisations (so-called ENX Business Centres). These organisations act as points of contact in the relevant local language, process registration applications and take responsibility for the initial authorisation of new users in their relevant area of representation. The ENX Association has chosen this model of representation to allow industrial associations and similar organisations to manage user groups on an independent basis.
Operating the ENX network
Operating the network and the data links
Operation by certified service providers
The ENX network fulfills the quality and security requirements found in company-owned networks, while also being as open and flexible for participating vehicle manufacturers, suppliers, and their development partners as the public Internet. Data exchange between ENX users takes place via the network of a communication service provider, certified for this role by the ENX Association, using an encrypted Virtual Private Network (VPN). The first certified communication service provider was the Deutsche Telekom subsidiary T-Systems. This was followed by Orange, Telefónica, Infonet and, in 2007, Verizon Business. In 2010, three additional companies successfully acquired ENX certification, namely ANXeBusiness, BCC and Türk Telekom. According to information from the ENX Association, Open Systems AG is an additional service provider currently going through the certification process.
The services provided by the certified service providers are interoperable, and are provided in a competitive environment.
Overview of the service providers certified in line with the ENX standard
Certification process
According to the ENX Association, certification is a two-stage process. The first stage, the so-called concept phase, sees the ENX Association testing whether the service provider's ENX operating model fulfills the technical ENX specifications. The second stage sees the service provider putting its operating model into practice. Besides inspecting the internal organisation, the IPSec interoperability is also tested in the so-called "ENX IPSecLab". In addition, the ENX encryption is implemented and the connection is made to already-certified providers via private peering points, so-called "ENX Points of Interconnection". Once this has been completed, the implementation of and adherence to the ENX specifications is tested in a pilot run. With suitable preparation by the service provider, the chargeable certification can be completed within approximately three to four months.
Central operational elements behind the scenes
Central services are provided on behalf of and under the control of the ENX Association. These services provide a simplified connection ("interconnectivity") between the individually certified service providers and ensure the interoperability of the encryption hardware used. They include the so-called Points of Interconnection ("ENX POIs"), the IPSec Interoperability Laboratory ("ENX IPSec Lab") and the Public Key Infrastructure ("ENX PKI") in the ENX Trust Centre. The Points of Interconnection have a geographically redundant structure, are interconnected, and are operated in data processing centres in the following regions: the Rhine-Main region, Germany; Île-de-France, France; and the East Coast of the United States. These central operational elements are not visible to the individual users.
The customer sources their own connection, including IP router, encryption hardware, key material, uninterrupted end-to-end encryption of each communication, and individual service level agreements, directly from the certified telecommunications service provider that they have chosen.
Global availability
The JNX industry network and ANXeBusiness in North America
The Japanese automotive industry has an industry network that is similar to ENX in terms of technology and organisation, namely the Japanese Network Exchange (JNX). The network is controlled from the JNX Centre, which is tied to the Japanese automotive associations JAMA and JAPIA. JNX and ENX are not linked. In contrast, there are considerable technical, organisational and commercial differences between the ENX standard and the American ANX, which was developed back in the 1990s.
Connection between Europe and North America
ENX as a mutual standard since 2010
On 26 April 2010, the ENX Association and ANXeBusiness announced that they were going to connect their networks to create a global standard in the automotive industry. The connection resulted in a transatlantic industry network with more than 1,500 connected companies. The network went live with the completion of the pilot stage on 26 May 2010. According to concordant statements by the ENX Association and the ANXeBusiness Corp., only the ENX standard is used for transatlantic connections, both in Europe and in North America. In their announcements, ANX and ENX have described the interconnection as being free of charge for the individual users.
Differences between ENX and ANX
The network for North America, the so-called Automotive Network Exchange (ANX), is operated by the ANXeBusiness Corp. Although, like ENX, it was originally initiated by the automotive industry and operated by a consortium, in contrast to ENX it was sold and, as a result, is operated as a classic profit-making service company. ANX is a physical network.
Availability is its primary focus: for the time being, ANX remains based on the operation of fixed connections with high uptime guarantees. With the additional product "TunnelZ", ANX also offers optional VPN tunnel management, which is not used by all the manufacturers and suppliers connected to the network. In the classic ANX network, key management takes place using pre-shared keys (PSK), while the encryption strength is limited to DES. ENX is set up as a managed security service, which consistently incorporates standardised tunnel management, a trust-centre-based Public Key Infrastructure (PKI), and authentication and encryption mechanisms over various networks (from private to public). Moreover, while the ANX network has a single provider for its customers, namely the ANXeBusiness company itself, ENX services are provided by various companies that are in competition with one another. In order to link the networks despite this, ANXeBusiness continues to operate its own network separately from and untouched by ENX, but provides every ANX user who wants the service with an active native ENX connection, including all required security and service features, via its own physical network. ANX has undergone certification and monitoring by the ENX Association for this purpose, and acts as an ENX-certified service provider.
Summary
With the certification of ANXeBusiness as an ENX provider, ENX and ANX use the aforementioned organisational differences between a non-profit industrial consortium (ENX) on the one hand and a service provider (ANX) on the other to connect the two networks. This is not a case of mutual interoperability, as ANX has adopted the ENX standard. There are likely to be new market perspectives for ANX as a result of its potential access to all ENX users. At the same time, it can be assumed that the bridge to ANX will make it easier for other ENX service providers to operate in the USA and, as a result, will generate competition.
References
External links
Communication networks within the automotive industry (non-profit organisations)
ENX Association: worldwide
JNX Center: Japan
Information from service providers certified in line with ENX standards (commercial solutions)
ANXeBusiness Corp.
BCC: ENX Connect
KPN: ENX – Automotive Industry Services
Open Systems: ENX Global Connect
Numlog – Orange Business Services Expert Partner for ENX data links
ICDSC – Orange Business Services Expert Partner for ENX data links
Türk Telekom: TT ENX
T-Systems: Extranet Solution – Securely integrate partners and suppliers
Verizon: Certification through ENX
ENX member organisations
ANFAC
GALIA
SMMT
VDA
Automotive industry Business organizations based in Europe Transport organizations based in Europe Trade associations based in France Trade associations based in Germany Organizations established in 2000 2000 establishments in France Transport industry associations
32109909
https://en.wikipedia.org/wiki/Access%20Now
Access Now
Access Now is a non-profit founded in 2009 with a mission to defend and extend the digital civil rights of people around the world. Access Now supports programs including an annual conference on human rights (RightsCon), an index of internet shutdowns (#KeepItOn), and the provision of exit nodes for the Tor network. As of 2020, Access Now has legal entities in Belgium, Costa Rica, Tunisia, and the United States, with its staff, operations, and activities distributed across all regions of the world. In 2018, Access Now received approximately $5.1 million in funding. Major funders include Facebook, Global Affairs Canada, the Dutch Ministry of Foreign Affairs, and the Swedish International Development Cooperation Agency.
History
Access Now was founded by Brett Solomon, Cameran Ashraf, Sina Rabbani and Kim Pham in 2009, after the contested Iranian presidential election of that year. During the protests that followed this election, Access Now played a noted role in disseminating the video footage that came out of Iran. Access Now has campaigned against internet shutdowns, online censorship, international trade agreements, and government surveillance. Access Now has also supported the use of encryption and limits on cybersecurity laws and regulations. Access Now runs an annual conference, RightsCon, a multistakeholder event. The conference was first held in Silicon Valley in 2011, followed by events in Rio de Janeiro, Brazil (2012), Silicon Valley (2014), Manila, Philippines (2015), and Silicon Valley (2016), thus alternating between Silicon Valley and a city in the Global South. After being held in Brussels and Toronto, RightsCon 2019 took place in Tunis, Tunisia (11–14 June). The 2019 RightsCon event gathered government representatives, tech giants, policy makers, NGOs and independent activists from all over the globe to discuss the intersection between human rights and digitalization.
The discussions covered hate speech and freedom of expression, artificial intelligence, privacy and data security, open government and democracy, access, and many other topics. In 2020, RightsCon was scheduled to be held in San José, Costa Rica, but as a result of the COVID-19 pandemic, the meeting instead took place in a modified online format. In 2021, the 10th edition of RightsCon was likewise held entirely online, from Monday, June 7 to Friday, June 11, 2021, due to the continued global COVID-19 pandemic, which moved several digital rights gatherings out of physical venues. The topics discussed at RightsCon 2021 by digital rights organizations and individuals included artificial intelligence (AI), automation, data protection and user control, digital futures, democracy, elections, new business models, content control, peacebuilding, censorship, internet shutdowns, freedom of the media, and many others.
#KeepItOn
Access Now produces an annual report and data set on internet shutdowns around the world as part of the #KeepItOn project. This report tracks internet shutdowns, social media blockages, and internet slowdowns in countries around the world. The report and data are published every spring.
Methodology
Access Now gathers data through the Shutdown Tracker Optimization Project (STOP). This project uses remotely sensed data to initially identify shutdowns, blockages and throttling. These instances are then confirmed using news reports, reports from local activists, official government statements, and statements from ISPs. Access Now defines internet shutdowns as "an intentional disruption of the internet or electronic communications rendering them inaccessible or effectively unusable, for a specific population or within a location, often to exert control over the flow of information." This means that the count includes shutdowns ordered by governments as well as shutdowns caused by non-governmental sources.
Individual instances are counted if the shutdown lasts longer than one hour. When compared to expert analyses of internet shutdowns, such as those tracked by the V-Dem Institute's Digital Society Project or Freedom House's Freedom on the Net, Access Now's data have been found to capture fewer false positives but more false negatives. In other words, Access Now's data are more likely to miss shutdowns that are captured by other methods, but the shutdowns they do capture are more likely to be confirmed by alternative sources.
Impact
#KeepItOn data are used to measure shutdowns by a range of organizations and academic publications. For example, the Millennium Challenge Corporation uses these data as part of the Freedom of Information indicator on its annual scorecards, used for determining aid allocations. Access Now's reports are also used in calculations of the total cost of internet shutdowns. Other articles use these data to track trends in internet censorship in various countries and regions.
Digital Security Helpline
The organization offers 24/7 advice to victims of cybercrime such as cyber-attacks, spyware campaigns, data theft, and other digital malfeasance through its helpline, which aims to protect citizens from digital attacks. Starting in 2009, Access Now had offered support and direct technical advice to activists, journalists, and other human rights campaigners, but the Digital Security Helpline was officially launched in 2013. Access Now claims to offer digital security guidance on topics such as how to protect against data and credential theft and targeted cyber-attack campaigns. The Helpline has been credited with helping to build people-first digital infrastructures, one content moderation request at a time. Some have claimed that the Helpline provides lessons on how to build comprehensive and sustainable digital infrastructures while protecting the digital rights of the people it serves.
Its major focus is on protecting the digital wellbeing of CSOs, activists, and human rights defenders. The rapid-response assistance includes working with individuals and CSOs around the world to provide emergency assistance and to help them improve their digital security practices to stay safe online. Others have criticized Access Now's method of using in-country volunteers to identify attacks from their own governments as unethical, due to the risk of government retribution it poses for those reporting via the Helpline and through other channels. While others have proposed automated systems to track these disruptions more ethically, such systems are still in the early stages and have yet to produce regular data.
References
Digital rights organizations Internet governance advocacy groups Organizations established in 2009 Internet-related organizations
32183000
https://en.wikipedia.org/wiki/App%20store
App store
An app store (or app marketplace) is a type of digital distribution platform for computer software called applications, often in a mobile context. Apps provide a specific set of functions which, by definition, do not include the running of the computer itself. Complex software designed for use on a personal computer, for example, may have a related app designed for use on a mobile device. Today apps are normally designed to run on a specific operating system—such as the contemporary iOS, macOS, Windows or Android—but in the past mobile carriers had their own portals for apps and related media content.

Basic concept

An app store is any digital storefront intended to allow search and review of software titles or other media offered for sale electronically. Critically, the application storefront itself provides a secure, uniform experience that automates the electronic purchase, decryption and installation of software applications or other digital media. App stores typically organize the apps they offer based on: the function(s) provided by the app (including games, multimedia or productivity), the device for which the app was designed, and the operating system on which the app will run. App stores typically take the form of an online store, where users can browse through these different app categories, view information about each app (such as reviews or ratings), and acquire the app (including app purchase, if necessary – many apps are offered at no cost). The selected app is offered as an automatic download, after which the app installs. Some app stores may also include a system to automatically remove an installed program from devices under certain conditions, with the goal of protecting the user against malicious software. App stores typically provide a way for users to give reviews and ratings. Those reviews are useful for other users, for developers and for app store owners.
Users can select the best apps based on ratings, developers get feedback on which features are praised or disliked, and app store owners can detect bad apps and malicious developers by automatically analyzing the reviews with data mining techniques. Many app stores are curated by their owners, requiring that submissions of prospective apps go through an approval process. These apps are inspected for compliance with certain guidelines (such as those for quality control and censorship), including the requirement that a commission be collected on each sale of a paid app. Some app stores provide feedback to developers, such as the number of installations and issues in the field (latency, crashes, etc.). Researchers have proposed new features for app stores. For instance, an app store can deliver a unique diversified version of the app for the sake of security. The app store can also orchestrate monitoring and bug fixing to detect and repair crashes in applications.

History

Precursors

The Electronic AppWrapper was the first commercial electronic software distribution catalog to collectively manage encryption and provide digital rights for apps and digital media (issue #3 was the app store originally demonstrated to Steve Jobs at NeXTWorld EXPO). While a Senior Editor at NeXTWORLD Magazine, Simson Garfinkel rated The Electronic AppWrapper 4 3/4 Cubes (out of 5) in his formal review. Paget's Electronic AppWrapper was named a finalist in the highly competitive InVision Multimedia '93 awards in January 1993 and won the Best of Breed award for Content and Information at NeXTWORLD Expo in May 1993. Prior to the Electronic AppWrapper, which first shipped in 1992, software was typically distributed via floppy disks or CD-ROMs, though one could also download it using a web browser or command-line tools.
Many Linux distributions and other Unix-like systems provide a tool known as a package manager, which allows a user to automatically manage the software installed on their systems (including both operating system components and third-party software) using command line tools: new software (and the packages required for its proper operation) can be retrieved from local or remote mirrors and automatically installed in a single process. Notable package managers in Unix-like operating systems have included FreeBSD Ports (1994), pkgsrc (1997), Debian's APT (1998), YUM, and Gentoo's Portage (which, unlike most package managers, distributes packages containing source code that is automatically compiled instead of executables). Some package managers have graphical front-end software which can be used to browse available packages and perform operations, such as Synaptic (which is often used as a front-end for APT). In 1996, the SUSE Linux distribution introduced YaST as a frontend for its own app repository. Mandriva Linux had urpmi, with a GUI frontend called Rpmdrake. Fedora and Red Hat Enterprise Linux adopted YUM in 2003 as a successor to YUP (developed at Duke University for Red Hat Linux). In 1997, BeDepot, a third-party app store and package manager (Software Valet) for BeOS, was launched; it operated until 2001 and was eventually acquired by Be Inc. BeDepot handled both commercial and free apps, as well as updates. In 1998, Information Technologies India Ltd (ITIL) launched Palmix, a web-based app store exclusively for mobile and handheld devices. Palmix sold apps for the three major PDA platforms of the time: the Palm OS based Palm Pilots, Windows CE based devices, and Psion Epoc handhelds. In 1999, NTT DoCoMo launched i-mode, the first integrated online app store for mobile phones, gaining nationwide popularity in Japanese mobile phone culture. DoCoMo used a revenue-sharing business model, allowing content creators and app providers to keep up to 91% of revenue.
Other operators outside Japan also made their own portals after this, such as Vodafone live! in 2002. At this time mobile phone manufacturer Nokia also introduced carrier-free downloadable content with Club Nokia. In December 2001, Sprint PCS launched the Ringers & More Wireless Download Service for their then-new 3G wireless network. This allowed subscribers to the Sprint PCS mobile phone network to download ringtones, wallpaper, J2ME applications and later full music tracks to certain phones. The user interface worked through a web browser on the desktop computer, and a version was available through the handset. In 2002, the commercial Linux distribution Linspire (then known as LindowsOS, which was founded by Michael Robertson, founder of MP3.com) introduced an app store known as Click'N'Run (CNR). For an annual subscription fee, users could perform one-click installation of free and paid apps through the CNR software. Doc Searls believed that the ease of use of CNR could help make desktop Linux a feasible reality. In 2003, Handango introduced the first on-device app store for finding, installing and buying software for smartphones. App downloads and purchases were completed directly on the device, so syncing with a computer was not necessary. A description, rating and screenshots were available for each app. In 2005, the Nokia 770 Internet Tablet shipped with a graphical frontend for its app repository to easily install apps (its Maemo operating system was based on Debian). Later Nokia also introduced Nokia Catalogs, later known as Nokia Download!, for Symbian smartphones, which had access to downloadable apps—originally via third parties like Handango or Jamba!, but from mid-2006 Nokia was offering its own content via the Nokia Content Discoverer. The popular Linux distribution Ubuntu (also based on Debian) introduced its own graphical software manager, known as the Ubuntu Software Center, in version 9.10 as a replacement for Synaptic.
On Ubuntu 10.10, released in October 2010, the Software Center expanded beyond only offering existing software from its repositories by adding the ability to purchase certain apps (which, at launch, was limited to Fluendo's licensed DVD codecs). Apple released iPhone OS 2.0 in July 2008 for the iPhone, together with the App Store, officially introducing third-party app development and distribution to the platform. The service allows users to purchase and download new apps for their device through either the App Store on the device, or through the iTunes Store on the iTunes desktop software. While Apple has been criticized by some for how it operates the App Store, it has been a major financial success for the company. The popularity of Apple's App Store led to the rise of the generic term "app store", as well as the introduction of equivalent marketplaces by competing mobile operating systems: the Android Market (later renamed Google Play) launched alongside the release of the first Android smartphone (the HTC Dream) in September 2008, BlackBerry's App World launched in April 2009, and Nokia's Ovi Store and Microsoft's Windows Marketplace for Mobile both launched that year.

"App Store" trademark

Due to its popularity, the term "app store" (first used by the Electronic AppWrapper and later popularized by Apple's App Store for iOS devices) has frequently been used as a generic trademark to refer to other distribution platforms of a similar nature. Apple asserted trademark claims over the phrase, and filed a trademark registration for "App Store" in 2008. In 2011, Apple sued both Amazon.com (which runs the Amazon Appstore for Android-based devices) and GetJar (which has offered its services since 2004) for trademark infringement and false advertising regarding the use of the term "app store" to refer to their services. Microsoft filed multiple objections against Apple's attempt to register the name as a trademark, considering it to already be a generic term.
In January 2013, a United States district court rejected Apple's trademark claims against Amazon. The judge ruled that Apple had presented no evidence that Amazon had attempted "to mimic Apple's site or advertising" or communicated that its service "possesses the characteristics and qualities that the public has come to expect from the Apple APP STORE and/or Apple products". In July 2013, Apple dropped its case.

See also

Software repository
Electronic commerce
Digital distribution in video games
Comparison of mobile operating systems
App store optimization
List of Android app stores
List of mobile software distribution platforms
App Store (iOS/iPadOS), iOS app approvals
Cydia
Google Play
Amazon Appstore
Aptoide
Cafe Bazaar
F-Droid
GetJar
Itch.io
Opera Mobile Store
MiKandi
XDA Labs
Microsoft Store
Desktop software distribution platforms
AppStream
Chrome Web Store
Mac App Store, Apple TV App Store
Microsoft Store
Setapp
Steam
Ubuntu Software Center

References

Brands that became generic
Software distribution platforms
32327247
https://en.wikipedia.org/wiki/Apple%20silicon
Apple silicon
Apple silicon is a series of system on a chip (SoC) and system in a package (SiP) processors designed by Apple Inc., mainly using the ARM architecture. It is the basis of most new Mac computers as well as iPhone, iPad, Apple TV, and Apple Watch, and of products such as AirPods, HomePod, iPod Touch, and AirTag. Apple announced its plan to switch Mac computers from Intel processors to Apple silicon at WWDC 2020 on June 22, 2020. The first Macs built around the Apple M1 processor were unveiled on November 10, 2020. By early 2022, most Mac models were built on Apple silicon; exceptions include the 27‑inch iMac and the Mac Pro. Apple outsources the chips' manufacture but fully controls their integration with the company's hardware and software. Johny Srouji is in charge of Apple's silicon design.

Early series

Apple first used SoCs in early versions of the iPhone and iPod touch. They combine in one package a single ARM-based processing core (CPU), a graphics processing unit (GPU), and other electronics necessary for mobile computing. The APL0098 (also 8900B or S5L8900) is a package on package (PoP) system on a chip (SoC) that was introduced on June 29, 2007, at the launch of the original iPhone. It includes a 412 MHz single-core ARM11 CPU and a PowerVR MBX Lite GPU. It was manufactured by Samsung on a 90 nm process. The iPhone 3G and the first-generation iPod touch also use it. The APL0278 (also S5L8720) is a PoP SoC introduced on September 9, 2008, at the launch of the second-generation iPod touch. It includes a 533 MHz single-core ARM11 CPU and a PowerVR MBX Lite GPU. It was manufactured by Samsung on a 65 nm process. The APL0298 (also S5L8920) is a PoP SoC introduced on June 8, 2009, at the launch of the iPhone 3GS. It includes a 600 MHz single-core Cortex-A8 CPU and a PowerVR SGX535 GPU. It was manufactured by Samsung on a 65 nm process.
The APL2298 (also S5L8922) is a 45 nm die shrunk version of the iPhone 3GS SoC and was introduced on September 9, 2009, at the launch of the third-generation iPod touch.

A series

The Apple "A" series is a family of SoCs used in certain models of the iPhone, iPad (with the exception of the fifth-generation iPad Pro), iPod Touch, and the Apple TV digital media player. They integrate one or more ARM-based processing cores (CPU), a graphics processing unit (GPU), cache memory and other electronics necessary to provide mobile computing functions within a single physical package.

Apple A4

The Apple A4 is a PoP SoC manufactured by Samsung, the first SoC Apple designed in-house. It combines an ARM Cortex-A8 CPU also used in Samsung's S5PC110A01 SoC and a PowerVR SGX 535 graphics processor (GPU), all built on Samsung's 45-nanometer silicon chip fabrication process. The design emphasizes power efficiency. The A4 commercially debuted in 2010 in Apple's iPad tablet, and was later used in the iPhone 4 smartphone, the fourth-generation iPod Touch, and the 2nd-generation Apple TV. The Cortex-A8 core used in the A4, dubbed "Hummingbird", is thought to use performance improvements developed by Samsung in collaboration with chip designer Intrinsity, which was subsequently acquired by Apple. It can run at far higher clock rates than other Cortex-A8 designs yet remains fully compatible with the design provided by ARM. The A4 runs at different speeds in different products: 1 GHz in the first iPads, 800 MHz in the iPhone 4 and fourth-generation iPod touch, and an undisclosed speed in the 2nd-generation Apple TV. The A4's SGX535 GPU could theoretically push 35 million polygons per second and 500 million pixels per second, although real-world performance may be considerably less. Other performance improvements include additional L2 cache. The A4 processor package does not contain RAM, but supports PoP installation.
The 1st-generation iPad, fourth-generation iPod touch, and the 2nd-generation Apple TV have an A4 mounted with two low-power 128 MB DDR SDRAM chips (totaling 256 MB), while the iPhone 4 has two 256 MB packages for a total of 512 MB. The RAM is connected to the processor using ARM's 64-bit-wide AMBA 3 AXI bus. To give the iPad high graphics bandwidth, the width of the RAM data bus is double that used in previous ARM11- and ARM9-based Apple devices.

Apple A5

The Apple A5 is an SoC manufactured by Samsung that replaced the A4. The chip commercially debuted with the release of Apple's iPad 2 tablet in March 2011, followed by its release in the iPhone 4S smartphone later that year. Compared to the A4, the A5 CPU "can do twice the work" and the GPU has "up to nine times the graphics performance", according to Apple. The A5 contains a dual-core ARM Cortex-A9 CPU with ARM's advanced SIMD extension, marketed as NEON, and a dual-core PowerVR SGX543MP2 GPU. This GPU can push between 70 and 80 million polygons/second and has a pixel fill rate of 2 billion pixels/second. The iPad 2's technical specifications page says the A5 is clocked at 1 GHz, though it can adjust its frequency to save battery life. The clock speed of the unit used in the iPhone 4S is 800 MHz. Like the A4, the A5 process size is 45 nm. An updated 32 nm version of the A5 processor was used in the 3rd-generation Apple TV, the fifth-generation iPod Touch, the iPad Mini, and the new version of the iPad 2 (version iPad2,4). The chip in the Apple TV has one core locked. Markings on the square package indicate that it is named APL2498, and in software, the chip is called S5L8942. The 32 nm variant of the A5 provides around 15% better battery life during web browsing, 30% better when playing 3D games and about 20% better battery life during video playback. In March 2013, Apple released an updated version of the 3rd-generation Apple TV (Rev A, model A1469) containing a smaller, single-core version of the A5 processor.
Unlike the other A5 variants, this version of the A5 is not a PoP, having no stacked RAM. The chip is very small, just 6.1×6.2 mm, but as the decrease in size is not due to a decrease in feature size (it is still on a 32 nm fabrication process), this indicates that this A5 revision is of a new design. Markings tell that it is named APL7498, and in software, the chip is called S5L8947.

Apple A5X

The Apple A5X is an SoC announced on March 7, 2012, at the launch of the third-generation iPad. It is a high-performance variant of the Apple A5; Apple claims it has twice the graphics performance of the A5. It was superseded in the fourth-generation iPad by the Apple A6X processor. The A5X has a quad-core graphics unit (PowerVR SGX543MP4) instead of the previous dual-core one, as well as a quad-channel memory controller that provides a memory bandwidth of 12.8 GB/s, roughly three times more than in the A5. The added graphics cores and extra memory channels add up to a very large die size of 165 mm², about twice the size of Nvidia's Tegra 3. This is mainly due to the large PowerVR SGX543MP4 GPU. The dual ARM Cortex-A9 cores have been shown to operate at the same 1 GHz frequency as in the A5. The RAM in the A5X is separate from the main CPU package.

Apple A6

The Apple A6 is a PoP SoC introduced on September 12, 2012, at the launch of the iPhone 5, then a year later was inherited by its minor successor the iPhone 5C. Apple states that it is up to twice as fast and has up to twice the graphics power compared to its predecessor, the Apple A5. It is 22% smaller and draws less power than the 45 nm A5. The A6 is said to use a 1.3 GHz custom Apple-designed ARMv7 based dual-core CPU, called Swift, rather than a licensed CPU from ARM as in previous designs, and an integrated 266 MHz triple-core PowerVR SGX 543MP3 graphics processing unit (GPU).
The Swift core in the A6 uses a new tweaked instruction set, ARMv7s, featuring some elements of the ARM Cortex-A15, such as support for Advanced SIMD v2 and VFPv4. The A6 is manufactured by Samsung on a high-κ metal gate (HKMG) 32 nm process.

Apple A6X

The Apple A6X is an SoC introduced at the launch of the fourth-generation iPad on October 23, 2012. It is a high-performance variant of the Apple A6. Apple claims the A6X has twice the CPU performance and up to twice the graphics performance of its predecessor, the Apple A5X. Like the A6, this SoC continues to use the dual-core Swift CPU, but it has a new quad-core GPU, quad-channel memory and a slightly higher 1.4 GHz CPU clock rate. It uses an integrated quad-core PowerVR SGX 554MP4 graphics processing unit (GPU) running at 300 MHz and a quad-channel memory subsystem. Compared to the A6, the A6X is 30% larger, but it continues to be manufactured by Samsung on a high-κ metal gate (HKMG) 32 nm process.

Apple A7

The Apple A7 is a 64-bit PoP SoC whose first appearance was in the iPhone 5S, which was introduced on September 10, 2013. The chip would also be used in the iPad Air, iPad Mini 2 and iPad Mini 3. Apple states that it is up to twice as fast and has up to twice the graphics power compared to its predecessor, the Apple A6. The Apple A7 chip is the first 64-bit chip to be used in a smartphone. The A7 features an Apple-designed 1.3–1.4 GHz 64-bit ARMv8-A dual-core CPU, called Cyclone, and an integrated PowerVR G6430 GPU in a four-cluster configuration. The ARMv8-A architecture doubles the number of registers of the A7 compared to the A6: it has 31 general-purpose registers that are each 64 bits wide and 32 floating-point/NEON registers that are each 128 bits wide. The A7 is manufactured by Samsung on a high-κ metal gate (HKMG) 28 nm process and the chip includes over 1 billion transistors on a die 102 mm² in size.

Apple A8

The Apple A8 is a 64-bit PoP SoC manufactured by TSMC.
Its first appearance was in the iPhone 6 and iPhone 6 Plus, which were introduced on September 9, 2014. A year later it would drive the iPad Mini 4. Apple states that it has 25% more CPU performance and 50% more graphics performance while drawing only 50% of the power compared to its predecessor, the Apple A7. On February 9, 2018, Apple released the HomePod, which is powered by an Apple A8 with 1 GB of RAM. The A8 features an Apple-designed 1.4 GHz 64-bit ARMv8-A dual-core CPU, and an integrated custom PowerVR GX6450 GPU in a four-cluster configuration. The GPU features custom shader cores and a custom compiler. The A8 is manufactured on a 20 nm process by TSMC, which replaced Samsung as the manufacturer of Apple's mobile device processors. It contains 2 billion transistors. Despite having double the number of transistors compared to the A7, its physical size has been reduced by 13% to 89 mm² (consistent with a shrink only, not known to be a new microarchitecture).

Apple A8X

The Apple A8X is a 64-bit SoC introduced at the launch of the iPad Air 2 on October 16, 2014. It is a high-performance variant of the Apple A8. Apple states that it has 40% more CPU performance and 2.5 times the graphics performance of its predecessor, the Apple A7. Unlike the A8, this SoC uses a triple-core CPU, a new octa-core GPU, dual-channel memory and a slightly higher 1.5 GHz CPU clock rate. It uses an integrated custom octa-core PowerVR GXA6850 graphics processing unit (GPU) running at 450 MHz and a dual-channel memory subsystem. It is manufactured by TSMC on their 20 nm fabrication process, and consists of 3 billion transistors.

Apple A9

The Apple A9 is a 64-bit ARM-based SoC that first appeared in the iPhone 6S and 6S Plus, which were introduced on September 9, 2015. Apple states that it has 70% more CPU performance and 90% more graphics performance compared to its predecessor, the Apple A8.
It is dual sourced, a first for an Apple SoC: it is manufactured by Samsung on their 14 nm FinFET LPE process and by TSMC on their 16 nm FinFET process. It was subsequently included in the first-generation iPhone SE and the iPad (5th generation). The Apple A9 was the last CPU that Apple manufactured through a contract with Samsung, as all subsequent A-series chips are manufactured by TSMC.

Apple A9X

The Apple A9X is a 64-bit SoC that was announced on September 9, 2015, released on November 11, 2015, and first appeared in the iPad Pro. It offers 80% more CPU performance and two times the GPU performance of its predecessor, the Apple A8X. It is manufactured by TSMC using a 16 nm FinFET process.

Apple A10 Fusion

The Apple A10 Fusion is a 64-bit ARM-based SoC that first appeared in the iPhone 7 and 7 Plus, which were introduced on September 7, 2016. The A10 is also featured in the sixth-generation iPad, seventh-generation iPad and seventh-generation iPod Touch. It has a new ARM big.LITTLE quad-core design with two high-performance cores and two smaller, highly efficient cores. It is 40% faster than the A9, with 50% faster graphics. It is manufactured by TSMC on their 16 nm FinFET process.

Apple A10X Fusion

The Apple A10X Fusion is a 64-bit ARM-based SoC that first appeared in the 10.5" iPad Pro and the second generation of the 12.9" iPad Pro, which were both announced on June 5, 2017. It is a variant of the A10, and Apple claims that it has 30 percent faster CPU performance and 40 percent faster GPU performance than its predecessor, the A9X. On September 12, 2017, Apple announced that the Apple TV 4K would be powered by an A10X chip. It is made by TSMC on their 10 nm FinFET process.

Apple A11 Bionic

The Apple A11 Bionic is a 64-bit ARM-based SoC that first appeared in the iPhone 8, iPhone 8 Plus, and iPhone X, which were introduced on September 12, 2017.
It has two high-performance cores, which are 25% faster than the A10 Fusion, four high-efficiency cores, which are 70% faster than the energy-efficient cores in the A10, and, for the first time, an Apple-designed three-core GPU with 30% faster graphics performance than the A10. It is also the first A-series chip to feature Apple's "Neural Engine", which enhances artificial intelligence and machine learning processes.

Apple A12 Bionic

The Apple A12 Bionic is a 64-bit ARM-based SoC that first appeared in the iPhone XS, XS Max and XR, which were introduced on September 12, 2018. It is also used in the third-generation iPad Air, fifth-generation iPad Mini, and the eighth-generation iPad. It has two high-performance cores, which are 15% faster than the A11 Bionic, and four high-efficiency cores, which have 50% lower power usage than the energy-efficient cores in the A11 Bionic. The A12 is manufactured by TSMC using a 7 nm FinFET process, the first to ship in a smartphone. It is also used in the 6th-generation Apple TV.

Apple A12X Bionic

The Apple A12X Bionic is a 64-bit ARM-based SoC that first appeared in the 11.0" iPad Pro and the third generation of the 12.9" iPad Pro, which were both announced on October 30, 2018. It offers 35% faster single-core and 90% faster multi-core CPU performance than its predecessor, the A10X. It has four high-performance cores and four high-efficiency cores. The A12X is manufactured by TSMC using a 7 nm FinFET process.

Apple A12Z Bionic

The Apple A12Z Bionic is a 64-bit ARM-based SoC based on the A12X that first appeared in the fourth-generation iPad Pro, which was announced on March 18, 2020. The A12Z is also used in the Developer Transition Kit prototype computer that helps developers prepare their software for Macs based on Apple silicon.

Apple A13 Bionic

The Apple A13 Bionic is a 64-bit ARM-based SoC that first appeared in the iPhone 11, 11 Pro, and 11 Pro Max, which were introduced on September 10, 2019.
It is also featured in the second-generation iPhone SE (released April 15, 2020) and in the 9th-generation iPad (announced September 14, 2021). The entire A13 Bionic SoC features a total of 18 cores: a six-core CPU, a four-core GPU, and an eight-core Neural Engine processor, which is dedicated to handling on-board machine learning processes. Four of the six CPU cores are low-powered cores dedicated to handling less CPU-intensive operations, such as voice calls, browsing the Web, and sending messages, while the two higher-performance cores are used only for more CPU-intensive processes, such as recording 4K video or playing a video game.

Apple A14 Bionic

The Apple A14 Bionic is a 64-bit ARM-based SoC that first appeared in the fourth-generation iPad Air and iPhone 12, released on October 23, 2020. It is the first commercially available 5 nm chipset and it contains 11.8 billion transistors and a 16-core AI processor. It includes Samsung LPDDR4X DRAM, a 6-core CPU, and a 4-core GPU with real-time machine learning capabilities.

Apple A15 Bionic

The Apple A15 Bionic is a 64-bit ARM-based SoC that first appeared in the iPhone 13, unveiled on September 14, 2021. The A15 is built on a 5-nanometer manufacturing process with 15 billion transistors. It has 2 high-performance processing cores, 4 high-efficiency cores, a new graphics processing unit (5-core for the iPhone 13 Pro series, 4-core for the iPhone 13 and 13 mini), and a new 16-core Neural Engine capable of 15.8 trillion operations per second.

S series

The Apple "S" series is a family of Systems in a Package (SiP) used in the Apple Watch. It uses a customized application processor that, together with memory, storage and support processors for wireless connectivity, sensors, and I/O, comprises a complete computer in a single package. They are designed by Apple and manufactured by contract manufacturers such as Samsung.

Apple S1

The Apple S1 is an integrated computer.
It includes memory, storage and support circuits like wireless modems and I/O controllers in a sealed integrated package. It was announced on September 9, 2014, as part of the "Wish we could say more" event. It was used in the first-generation Apple Watch.

Apple S1P

Used in the Apple Watch Series 1. It has a processor identical to the S2, with the exception of the built-in GPS receiver: it contains the same dual-core CPU and the same new GPU capabilities as the S2, making it about 50% faster than the S1.

Apple S2

Used in the Apple Watch Series 2. It has a dual-core processor and a built-in GPS receiver. The S2's two cores deliver 50% higher performance and the GPU delivers twice the performance of its predecessor; it is similar in performance to the Apple S1P.

Apple S3

Used in the Apple Watch Series 3. It has a dual-core processor that is 70% faster than the Apple S2, a built-in GPS receiver, a barometric altimeter, and the W2 wireless connectivity processor. Some models also include UMTS (3G) and LTE (4G) cellular modems served by a built-in eSIM.

Apple S4

Used in the Apple Watch Series 4. It has a custom 64-bit dual-core processor based on the A12 with up to 2× faster performance. It also contains the W3 wireless chip, which supports Bluetooth 5. The S4 introduced 64-bit ARMv8 cores to the Apple Watch. The chip contains two Tempest cores, which are the energy-efficient cores found in the A12. Despite their small size, the Tempest cores still use a 3-wide decode out-of-order superscalar design, which makes them much more powerful than previous in-order cores. The S4 contains a Neural Engine that is able to run Core ML. Third-party apps can use it starting from watchOS 6.
The SiP also includes new accelerometer and gyroscope functionality that has twice the dynamic range in measurable values of its predecessor, as well as being able to sample data at 8 times the speed. It also contains a new custom GPU, which can use the Metal API.

Apple S5

Used in the Apple Watch Series 5, Watch SE, and HomePod mini. It adds a built-in magnetometer to the custom 64-bit dual-core processor and GPU of the S4.

Apple S6

Used in the Apple Watch Series 6. It has a custom 64-bit dual-core processor that runs up to 20 percent faster than the S5. The dual cores in the S6 are based on the A13's energy-efficient "little" Thunder cores at 1.8 GHz. Like the S4 and S5, it also contains the W3 wireless chip. The S6 adds the new U1 ultra wideband chip, an always-on altimeter, and 5 GHz Wi-Fi.

Apple S7

Used in the Apple Watch Series 7. The S7 has the same T8301 identifier and quoted performance as the S6.

T series

The T series chip operates as a secure enclave on Intel-based MacBook and iMac computers released from 2016 onwards. The chip processes and encrypts biometric information (Touch ID) and acts as a gatekeeper to the microphone and FaceTime HD camera, protecting them from hacking. The chip runs bridgeOS, a purported variant of watchOS.

Apple T1

The Apple T1 chip is an ARMv7 SoC (derived from the processor in the Apple Watch's S2) that drives the System Management Controller (SMC) and Touch ID sensor of the 2016 and 2017 MacBook Pro with Touch Bar.

Apple T2

The Apple T2 Security Chip is an SoC first released in the iMac Pro 2017. It is a 64-bit ARMv8 chip (a variant of the A10, or T8010), and runs bridgeOS 2.0. It provides a secure enclave for encrypted keys, enables users to lock down the computer's boot process, handles system functions like the camera and audio control, and handles on-the-fly encryption and decryption for the solid-state drive. The T2 also delivers "enhanced imaging processing" for the iMac Pro's FaceTime HD camera.
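The on-the-fly encryption the T2 performs for the SSD can be illustrated with a toy model: software reads and writes plaintext sectors, while the backing store only ever holds ciphertext. This sketch is illustrative only — the class and function names (`TransparentlyEncryptedDisk`, `_keystream`) are invented here, and it uses a SHA-256 counter-mode keystream purely for demonstration; the real T2 uses dedicated AES hardware with keys held in its secure enclave, not this construction.

```python
import hashlib

SECTOR = 512  # bytes per logical sector

def _keystream(key: bytes, sector: int) -> bytes:
    """Derive a SECTOR-byte keystream for one sector.

    Toy construction (SHA-256 in counter mode), for illustration only --
    not AES, and not what the T2 actually implements in hardware.
    """
    out = b""
    counter = 0
    while len(out) < SECTOR:
        block = key + sector.to_bytes(8, "big") + counter.to_bytes(8, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:SECTOR]

class TransparentlyEncryptedDisk:
    """Callers read and write plaintext sectors; the backing store only
    ever holds ciphertext, mirroring how the T2 sits between the CPU
    and the SSD controller."""

    def __init__(self, key: bytes, sectors: int):
        self.key = key
        self.store = [bytes(SECTOR)] * sectors  # ciphertext at rest

    def write(self, sector: int, plaintext: bytes) -> None:
        # Pad to a full sector, then XOR with the per-sector keystream.
        padded = plaintext.ljust(SECTOR, b"\x00")
        ks = _keystream(self.key, sector)
        self.store[sector] = bytes(p ^ k for p, k in zip(padded, ks))

    def read(self, sector: int) -> bytes:
        # XOR with the same keystream transparently recovers the plaintext.
        ks = _keystream(self.key, sector)
        return bytes(c ^ k for c, k in zip(self.store[sector], ks))

disk = TransparentlyEncryptedDisk(key=b"device-unique-key", sectors=8)
disk.write(3, b"secret document")
assert disk.read(3).rstrip(b"\x00") == b"secret document"  # transparent to the caller
assert disk.store[3] != b"secret document".ljust(SECTOR, b"\x00")  # never stored in the clear
```

Because the key never leaves the class (standing in for the secure enclave), the storage layer below it sees only ciphertext, which is the property that makes the T2's encryption invisible to macOS while still protecting data at rest.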
W series

The Apple "W" series is a family of SoCs and wireless chips with a focus on Bluetooth and Wi-Fi connectivity. "W" in model numbers stands for wireless.

Apple W1

The Apple W1 is a SoC used in the 2016 AirPods and select Beats headphones. It maintains a Bluetooth Class 1 connection with a computer device and decodes the audio stream that is sent to it.

Apple W2

The Apple W2, used in the Apple Watch Series 3, is integrated into the Apple S3 SiP. Apple said the chip makes Wi-Fi 85% faster and allows Bluetooth and Wi-Fi to use half the power of the W1 implementation.

Apple W3

The Apple W3 is used in the Apple Watch Series 4, Series 5, Series 6, SE, and Series 7. It is integrated into the Apple S4, S5, S6 and S7 SiPs. It supports Bluetooth 5.0.

H series

The Apple "H" series is a family of SoCs used in headphones. "H" in model numbers stands for headphones.

Apple H1

The Apple H1 chip was first used in the 2019 version of AirPods, and was later used in the Powerbeats Pro, the Beats Solo Pro, the AirPods Pro, the 2020 Powerbeats, AirPods Max, and the AirPods (3rd generation). Specifically designed for headphones, it has Bluetooth 5.0, supports hands-free "Hey Siri" commands, and offers 30 percent lower latency than the W1 chip used in earlier AirPods.

U series

The Apple "U" series is a family of Systems in a Package (SiP) implementing ultra-wideband radio.

Apple U1

The Apple U1 is used in the iPhone 11 and later (excluding the second-generation iPhone SE), the Apple Watch Series 6 and Series 7, the HomePod mini and AirTag trackers.

M series

The Apple "M" series is a family of Systems on a Chip (SoC) used in Mac computers from November 2020 or later and iPad Pro tablets from April 2021 or later. The "M" designation was previously used for Apple motion coprocessors.

Apple M1

The M1 chip, Apple's first processor designed for use in Macs, is manufactured using TSMC's 5 nm process.
Announced on November 10, 2020, it is used in the MacBook Air (M1, 2020), Mac mini (M1, 2020), MacBook Pro (13-inch, M1, 2020), iMac (24-inch, M1, 2021), iPad Pro, 11-inch (3rd generation), and iPad Pro, 12.9-inch (5th generation). Apple M1 Pro and M1 Max The M1 Pro chip is a more powerful companion to the M1, with six to eight performance cores, two efficiency cores, 14 to 16 GPU cores, 16 Neural Engine cores, up to 32 GB unified RAM with up to 200 GB/s memory bandwidth, and more than double the transistors. It was announced on October 18, 2021, and is used in the 14- and 16-inch MacBook Pro. Apple said the CPU performance is about 70% faster than the M1, and that its GPU performance is about double. Apple claims the M1 Pro can deliver up to 20 streams of 4K or 7 streams of 8K ProRes video playback (up from the 6 offered by the Afterburner card for the 2019 Mac Pro). The M1 Max chip is a larger version of the M1 Pro chip, with eight performance cores, two efficiency cores, 24 to 32 GPU cores, 16 Neural Engine cores, up to 64 GB unified RAM with up to 400 GB/s memory bandwidth, and more than double the number of transistors. It was announced on October 18, 2021, and is used in the 14- and 16-inch MacBook Pro. Apple says it has 57 billion transistors. Apple claims the M1 Max can deliver up to 30 streams of 4K (up from the 23 offered by the Afterburner card for the 2019 Mac Pro) or 7 streams of 8K ProRes video playback. Miscellaneous devices This segment is about Apple-designed processors that are not easily sorted into another section. The 339S0196 is an ARM-based microcontroller used in Apple's Lightning Digital AV Adapter, a Lightning-to-HDMI adapter. It is a miniature computer with 256 MB of RAM that runs an XNU kernel loaded from the connected iOS device, takes a serial signal from the iOS device, and translates it into a proper HDMI signal.
List of Apple processors A series list S series list T series list W series list H series list U series list M series list Miscellaneous See also Apple motion coprocessors ARM Cortex-A9 MPCore List of iOS and iPadOS devices List of Samsung platforms (SoCs): Exynos (none have been used by Apple) historical (some were used in Apple products) PowerVR SGX GPUs were also used in the iPhone 3GS and the third-generation iPod touch PWRficient, a processor designed by P.A. Semi, a company Apple acquired to form an in-house custom chip design department Similar platforms A31 by AllWinner Atom by Intel BCM2xxxx by Broadcom eMAG and Altra by Ampere Computing Exynos by Samsung i.MX by Freescale Semiconductor Jaguar and Puma by AMD Kirin by HiSilicon MTxxxx by MediaTek NovaThor by ST-Ericsson OMAP by Texas Instruments RK3xxx by Rockchip Snapdragon by Qualcomm Tegra by Nvidia References Further reading ARM architecture Computer-related introductions in 2011 System on a chip 32-bit microprocessors 64-bit microprocessors
32377551
https://en.wikipedia.org/wiki/Bundesdatenschutzgesetz
Bundesdatenschutzgesetz
The German Bundesdatenschutzgesetz (BDSG) is a federal data protection act that, together with the data protection acts of the German federated states and other area-specific regulations, governs the exposure of personal data, which are manually processed or stored in IT systems. Historical development 1960–1970 In the early 1960s, consideration of comprehensive data protection began in the United States and developed further with advances in computer technology and its privacy risks. A regulatory framework was therefore needed to counteract the impairment of privacy in the processing of personal data. 1970–1990 In 1970, the federal state of Hesse passed the first data protection law in Germany, which was also the first data protection law in the world. In 1971, the first draft bill was submitted for a federal data protection act. Finally, on 1 January 1978, the first federal data protection act came into force. In the following years, as the BDSG took shape in practice, a technical development took place in data processing as the computer became increasingly important both at work and in the private sector. There were also significant changes in the legal field. With the Volkszählungsurteil (census verdict) of December 15, 1983, the Constitutional Court developed the right to informational self-determination (Article 2 I in conjunction with Article 1 I of the Constitution). The verdict confirmed that personal data are constitutionally protected in Germany. This means that individuals have the power to decide when and to what extent personal information is published. From 1990 In 1990, the legislature adopted a new data protection law based on the decision of the German Constitutional Court. The BDSG was amended in 2009 and 2010 with three amendments: On April 1, 2010, "Novelle I" brought into force a new regulation of the activities of credit bureaus and their counterparties (especially credit institutions) and of scoring.
The long and heavily debated "Novelle II" came into force on 1 September 2009. It changed 18 sections of the BDSG. Its content includes changes to the list privilege for address trading, new regulations for market and opinion research, opt-in, a coupling ban, employee data protection, commissioned data processing, new powers for the supervisory authorities, new or greatly expanded fines, information obligations in the event of data breaches, and dismissal protection for data protection officers. On June 11, 2010, "Novelle III", a small sub-item within the law implementing the EU Consumer Credit Directive, amended § 29 BDSG by adding two paragraphs. The legal amendment In 2009, there were three amendments to the BDSG as a result of criticism from consumer advocates and numerous privacy scandals in business. The amendments addressed the following items: Amendments I and III Strict earmarking in the enforcement of data protection rights (§ 6 III BDSG) Permissibility and transparency in automated individual decisions (§ 6a BDSG) Transmission of data to commercial agencies (§ 28a BDSG) Admissibility of scoring procedures (§ 28b BDSG) Claims for credit rejection information for cross-border credit inquiries within the EU/EEA (§ 29 VI and VII BDSG) Information on claims against responsible agencies, especially in the case of scoring and commercial agencies (§ 34 BDSG) New penalty offenses (§ 43 I No.
4a, 8b, 8c BDSG) Amendment II Introduction of a legal definition for the term "Beschäftigte" (employees) (§ 3 XI BDSG) Extension of the goals of data avoidance and data economy (§ 3a BDSG) Strengthening the position of the internal data protection officer through training and explicit dismissal protection (§ 4f III sentence 5-7 BDSG) Extension of the requirement that the content of commissioned data processing be fixed in writing, and of the control of the contractor (§ 11 II BDSG) New eligibility requirements and transparency in the use of personal data as part of the trade in addresses and for promotional purposes (§ 28 III BDSG) Tightening of the requirements for non-written consent (§ 28 IIIa BDSG) Introduction of a coupling ban in connection with consent (§ 28 IIIb BDSG) Relief for market and opinion research companies (§ 30a BDSG) Rule on the admissibility of the processing of employment data (§ 32 BDSG) Expansion of disclosure requirements for list-based transmission (§ 34 Ia BDSG) Extension of the ordering powers of supervisory authorities over data processing and use (§ 38 V BDSG) A duty of self-disclosure to the supervisory authority and the affected person upon unlawfully obtained knowledge of data (§ 42a BDSG) Introduction of new fines (§ 43 I No. 2a, 2b, 3a, 8a and II No.
5a-7 BDSG) Increase of the fine ceiling from €50,000 to €300,000 (§ 43 III BDSG) Transitional arrangements for market and opinion researchers, as well as for the promotional use of stored data recorded before September 1, 2009 (§ 47 BDSG) Emphasis on the use of encryption (Annex to § 9 sentence 1 BDSG) Overview of the BDSG First section (§§ 1-11): General and common rules Second section (§§ 12-26): Data processing by public bodies Third section (§§ 27-38a): Data processing by non-public bodies and public-law companies competing in the market Fourth section (§§ 39-42): Special provisions Fifth section (§§ 43-44): Criminal and civil penalty provisions Sixth section (§§ 45-46): Transitional provisions Purpose and scope Purpose The law is intended to protect individuals' personal rights from infringement through the handling of their personal information (§ 1 I BDSG). Scope According to § 1 II BDSG, the law applies to the collection, processing, and use of personal data by: Public bodies of the Federation Public authorities of the federal states Non-public agencies Exclusions The Central Register of Foreign Nationals, according to § 22 and § 37 of the law, is excluded from certain sections of the Bundesdatenschutzgesetz. Public bodies of the Federation Public bodies of the Federation are the federal authorities, the administration of justice and other public-law institutions of the Federation, its establishments and foundations under public law, and their associations, irrespective of their legal form (§ 2 I BDSG). Public authorities of the federal states Public authorities of the federal states are the authorities, the institutions of justice and other public-law institutions of a federal state, of a community, a community association and other legal persons under public law subject to the supervision of the federal state, and their associations, irrespective of their legal form (§ 2 II BDSG).
Non-public agencies Non-public agencies are natural and legal persons, companies, and other associations of persons under private law that do not fall under § 2 I-III BDSG (§ 2 IV BDSG). Overview of the first principles The BDSG contains seven first principles of data protection law: 1. Prohibition with reservation of permission: The collection, processing and use of personal data is prohibited unless it is permitted by law or the person concerned gives consent (§ 4 I BDSG). 2. Principle of immediacy: Personal data has to be collected directly from the person concerned. Exceptions to this principle are a legal permission or a disproportionate effort (§ 4 III BDSG). 3. Priority of special laws: Any other federal law that relates to personal information and its publication takes precedence over the BDSG (§ 1 III BDSG). 4. Principle of proportionality: The creation of standards restricts the fundamental rights of the affected person. Therefore, these laws and procedures must be appropriate and necessary, and a balancing of interests must occur. 5. Principle of data avoidance and data economy: Through the use of anonymization or pseudonymization, every data processing system should pursue the goal of using no personally identifiable data, or as little as possible. 6. Principle of transparency: If personal data is collected, the responsible entity must inform the affected person of its identity and the purposes of the collection, processing or use (§ 4 III BDSG). 7. Principle of earmarking: If data is permitted to be collected for a particular purpose, use of the data is restricted to this purpose. A new consent or law is required if the data is to be used for another purpose. Types of personal data Personal data means all data that provide information about personal relationships or facts concerning an identified or identifiable natural person.
They include: Personal relationships: name, address, occupation, e-mail, IP address, or personal number Factual circumstances: income, taxes, ownership Special kinds of personal data: racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, health, or sex life. These data are subject to special protection. Protected personal data does not include anonymized data, where the person's identity is not discernible. Pseudonymized data (where the person's name is replaced with a pseudonym) is protected by the BDSG, because the data relates to a person whose identity is discernible. The BDSG does not protect the data of legal persons, such as corporations, although some courts have extended protection to legal persons. Interaction with European law The Council of Ministers and the European Parliament adopted the Data Protection Directive on October 24, 1995; it had to be transposed into the internal law of the Member States by the end of 1998 (Directive 95/46/EC of the European Parliament and Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data). All member states have enacted their own data protection legislation. On 25 January 2012, the European Commission unveiled a draft General Data Protection Regulation intended to supersede the Data Protection Directive. Cross-border data transmission The following rules apply, in accordance with the requirements of the European Commission's Data Protection Directive, to companies domiciled in Germany and to companies based abroad. Companies domiciled in Germany For companies based in Germany, the Federal Data Protection Act regulates transfers to other EU member countries differently from transfers to third countries. Transmission from Germany to another EU member country Through the implementation of the EU Data Protection Directive, a uniform level of data protection has emerged in EU member countries.
A company domiciled in Germany is therefore entitled to transfer personal data within Europe under the same rules as if it were transferring data within Germany. Transmission from Germany to a third country Transfers to third countries must comply with the requirements of the Federal Data Protection Act (§ 4b II sentence 1 BDSG). The transmission must not take place if the person has a legitimate interest in preventing it, especially if adequate data protection in the third country is not guaranteed (§ 4b II sentence 2 BDSG). The adequacy of protection shall be assessed by taking into account all circumstances that are of importance for the data transmission (§ 4b III BDSG). These include the type of data, the purpose, the duration of processing, professional rules, and security measures. In the opinion of the European Commission, Switzerland and Canada have an adequate level of protection. A further decision by the European Commission affects data transmission into the United States. According to the decision, the U.S. Department of Commerce assured a reasonable level of data protection through the negotiated Safe Harbor Agreement. Through the Safe Harbor Agreement (invalidated 6 October 2015 by Maximillian Schrems v. Data Protection Commissioner, and its successor, Privacy Shield, invalidated on 16 July 2020), the recipient in the United States committed itself to comply with certain data protection principles by means of declarations made to the relevant U.S. authorities. No transfer framework currently applies, and transfers to and from the U.S., as with all third countries, require another approved mechanism under the GDPR (e.g. binding corporate rules, standard contractual clauses). For other third countries, it is hardly possible to determine the appropriate level of protection because of the complex criteria.
For this reason, certain exceptions (in § 4c I and II BDSG), under which data transmission to third countries is allowed even if an adequate level of data protection is not guaranteed, are important. § 4c I BDSG allows cross-border data transfer with the person's consent or where required for the fulfillment of a contract between the person and the responsible party. In all other cases, the approval-based solution (§ 4c II BDSG) allows data to be transferred to recipient countries without an adequate level of data protection, provided adequate safeguards are in place. The contractual clauses or "binding corporate rules" must offer adequate guarantees regarding the protection of personal rights and must be approved in advance by the competent authority (§ 4c II sentence 1 BDSG). For international companies, it is advisable to obtain approval for standard contractual clauses. Self-regulation through corporate policies can also enable the data flow within multinational corporations. The codes of conduct must also give data subjects legal rights and certain guarantees, as is the case with contracts. See also Volkszählungsurteil References External links Overview of the First Principles German law Privacy in Germany Data laws of Europe
32403275
https://en.wikipedia.org/wiki/Deep%20content%20inspection
Deep content inspection
Deep content inspection (DCI) is a form of network filtering that examines an entire file or MIME object as it passes an inspection point, searching for viruses, spam, data loss, key words or other content-level criteria. Deep Content Inspection is considered the evolution of Deep Packet Inspection, with the ability to look at what the actual content contains instead of focusing on individual or multiple packets. Deep Content Inspection allows services to keep track of content across multiple packets, so that the signatures being searched for can cross packet boundaries and still be found. It is an exhaustive form of network traffic inspection in which Internet traffic is examined across all seven OSI layers, and most importantly, the application layer. Background Traditional inspection technologies are unable to keep up with the recent outbreaks of widespread attacks. Unlike shallow inspection methods such as Deep Packet Inspection (DPI), where only the data part (and possibly also the header) of a packet is inspected, Deep Content Inspection (DCI)-based systems are exhaustive: network traffic packets are reassembled into their constituent objects, un-encoded and/or decompressed as required, and finally presented to be inspected for malware, right-of-use, compliance, and understanding of the traffic's intent. If this reconstruction and comprehension can be done in real time, then real-time policies can be applied to traffic, preventing the propagation of malware, spam and valuable data loss. Further, with DCI, the correlation and comprehension of the digital objects transmitted in many communication sessions leads to new ways of network performance optimization and intelligence regardless of protocol or blended communication sessions. Historically, DPI was developed to detect and prevent intrusion.
It was then used to provide Quality of Service, where the flow of network traffic can be prioritized such that latency-sensitive traffic types (e.g., Voice over IP) receive higher flow priority. New generations of network content security devices such as Unified Threat Management or Next-Generation Firewalls (Gartner RAS Core Research Note G00174908) use DPI to prevent attacks from a small percentage of viruses and worms, whose signatures fit within the payload of a DPI system's inspection scope. However, the detection and prevention of a new generation of malware such as Conficker and Stuxnet is only possible through the exhaustive analysis provided by DCI. The evolution of DPI systems Computer networks send information across a network from one point to another; the data (sometimes referred to as the payload) is 'encapsulated' within an IP packet. The IP header provides address information, namely the sender and destination addresses, while the TCP/UDP header provides other pertinent information such as the port number. As networks evolve, inspection techniques evolve, all attempting to understand the payload. Throughout the last decade there have been vast improvements, including: Packet filtering Historically, inspection technology examined only the IP header and the TCP/UDP header. Dubbed 'packet filtering', these devices would drop out-of-sequence packets, or packets that are not allowed on a network. This scheme of network traffic inspection was first used by firewalls to protect against packet attacks. Stateful packet inspection Stateful packet inspection was developed to examine header information and the packet content to increase source and destination understanding. Instead of letting packets through merely because of their addresses and ports, packets stay on the network only if the context is appropriate to the network's current 'state'.
This scheme was first used by Check Point firewalls and eventually by Intrusion Prevention/Detection Systems. Deep packet inspection Deep Packet Inspection is currently the predominant inspection tool used to analyze data packets passing through the network, including the headers and the data protocol structures. These technologies scan packet streams and look for offending patterns. To be effective, Deep Packet Inspection systems must 'string'-match packet payloads against malware signatures and specification signatures (which dictate what the request/response should be like) at wire speeds. To do so, field-programmable gate arrays (FPGAs), network processors, or even graphics processing units (GPUs) are programmed to be hardwired with these signatures, and, as a result, traffic that passes through such circuitry is quickly matched. While using hardware allows for quick and inline matches, DPI systems have the following limitations: Hardware limitations: Since DPI systems implement their pattern matching (or searches for 'offending' patterns) through hardware, these systems are typically limited by: The number of circuits a high-end DPI chip can have; as of 2011, a high-end DPI system can optimally process around 512 requests/responses per session. The memory available for pattern matches; as of 2011, high-end DPI systems are capable of matching up to 60,000 unique signatures Payload limitations: Web applications communicate content using binary-to-text encoding, compression (zipped, archived, etc.), obfuscation and even encryption. As payload structure becomes more complex, straight 'string' matching of the signatures is no longer sufficient. The common workaround is to have signatures be similarly 'encoded' or zipped, which, given the above search limitations, cannot scale to support every application type, or nested zipped or archived files.
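The packet-boundary limitation described above can be illustrated with a small sketch (the signature bytes and the packet split are invented for illustration): per-packet matching, as in DPI hardware, misses a signature split across two packets, while reassembling the stream first, as DCI does, finds it.

```python
# Sketch: why per-packet signature matching misses patterns that span
# packet boundaries, and how stream reassembly (the DCI approach) finds them.
# The "signature" and the packet split points are illustrative, not real malware.

SIGNATURE = b"EVIL_PAYLOAD"

def dpi_scan(packets):
    """Per-packet matching: each packet payload is searched in isolation."""
    return any(SIGNATURE in p for p in packets)

def dci_scan(packets):
    """Reassemble the stream into one object, then search it."""
    return SIGNATURE in b"".join(packets)

# The signature is split across two packets.
packets = [b"GET /download HTTP/1.1\r\n\r\nEVIL_PAY", b"LOAD and more data"]

print(dpi_scan(packets))  # False: no single packet contains the signature
print(dci_scan(packets))  # True: the reassembled stream does
```

The same effect explains why compressed or encoded payloads (the "payload limitations" above) defeat straight string matching: the signature bytes never appear contiguously on the wire.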
Deep content inspection Parallel to the development of Deep Packet Inspection, the beginnings of Deep Content Inspection can be traced back as early as 1995, with the introduction of proxies that stopped malware or spam. Deep Content Inspection can be seen as the third generation of network content inspection, where network content is exhaustively examined. First generation – secure web gateway or proxy-based network content inspection Proxies were deployed to provide internet caching services, retrieving objects and then forwarding them. Consequently, all network traffic is intercepted, and potentially stored. These graduated into what are now known as secure web gateways, whose proxy-based inspection retrieves and scans objects, scripts, and images. Proxies, which fetch the content first (if it is not cached) and then forward it to the recipient, introduced some form of file inspection as early as 1995, when MAILsweeper was released by Content Technologies (now Clearswift); it was replaced by MIMEsweeper in 2005. In 2006, the open-source, cross-platform antivirus software ClamAV added support for the caching proxies Squid and NetCache. Using the Internet Content Adaptation Protocol (ICAP), a proxy passes downloaded content for scanning to an ICAP server running anti-virus software. Since complete files or 'objects' were passed for scanning, proxy-based anti-virus solutions are considered the first generation of network content inspection. BlueCoat, WebWasher and Secure Computing Inc. (now McAfee, a division of Intel) provided commercial implementations of proxies, which eventually became a standard network element in most enterprise networks.
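The proxy-to-ICAP handoff mentioned above works roughly as follows. This sketch builds an ICAP RESPMOD message (RFC 3507) wrapping a downloaded HTTP response for an anti-virus server, but does not send it; the icap:// service URI, host, and object contents are illustrative assumptions, and a real deployment would use the proxy's built-in ICAP client.

```python
# Sketch: a proxy wrapping a downloaded HTTP response in an ICAP RESPMOD
# request so an anti-virus ICAP server can scan the complete object.
# Illustrative only: the service URI and object bytes are made up.

http_resp_hdr = b"HTTP/1.1 200 OK\r\nContent-Length: 11\r\n\r\n"
body = b"hello world"
# ICAP encapsulated bodies are transferred using HTTP chunked encoding.
chunked_body = b"%x\r\n%s\r\n0\r\n\r\n" % (len(body), body)

# The Encapsulated header gives the byte offset of each encapsulated part.
encapsulated = "res-hdr=0, res-body=%d" % len(http_resp_hdr)

icap_request = (
    "RESPMOD icap://av.example.com/avscan ICAP/1.0\r\n"
    "Host: av.example.com\r\n"
    "Encapsulated: %s\r\n\r\n" % encapsulated
).encode() + http_resp_hdr + chunked_body

print(icap_request.decode().splitlines()[0])
```

Because the server receives the full object rather than individual packets, it can run the same engine it would use on a local file, which is what qualifies this design as first-generation content inspection.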
Limitations: While proxies (or secure web gateways) provide in-depth network traffic inspection, their use is limited as they: require network reconfiguration, accomplished either (a) on end-devices, by pointing their browsers to these proxies, or (b) on the network routers, by routing traffic through these devices; are limited to the web (HTTP) and FTP protocols and cannot scan other protocols such as e-mail; and finally, rely on proxy architectures typically built around Squid, which cannot scale with concurrent sessions, limiting their deployment to enterprises. Second generation – gateway/firewall-based network traffic proxy-assisted deep packet inspection The second generation of network traffic inspection solutions was implemented in firewalls and/or UTMs. Given that network traffic is choked through these devices, proxy-like inspection is possible in addition to DPI inspection. This approach was first pioneered by NetScreen Technologies Inc. (acquired by Juniper Networks Inc.). However, given the expensive cost of such an operation, this feature was applied in tandem with a DPI system and was only activated on a per-need basis, or when content failed to be qualified through the DPI system. Third generation – transparent, application-aware network content inspection, or deep content inspection The third, and current, generation of network content inspection, known as Deep Content Inspection, is implemented as fully transparent devices that perform full application-level content inspection at wire speed. In order to understand a communication session's intent in its entirety, a Deep Content Inspection system must scan both the handshake and the payload. Once the digital objects (executables, images, JavaScript files, PDFs, etc.; also referred to as Data-in-Motion) carried within the payload are constructed, usability, compliance and threat analysis of the session and its payload can be achieved.
Given that the handshake sequence and complete payload of the session are available to the DCI system, exhaustive object analysis is possible, unlike in DPI systems, where only simple pattern matching and reputation searches are possible. The inspection provided by DCI systems can include signature matching, behavioral analysis, regulatory and compliance analysis, and correlation of the session under inspection with the history of previous sessions. Because of the availability of the complete payload's objects and these schemes of inspection, Deep Content Inspection systems are typically deployed where high-grade security and compliance is required, or where end-point security solutions are not possible, such as in bring-your-own-device or cloud installations. This third-generation approach to Deep Content Inspection was developed within the defence and intelligence community, first appearing in guard products such as SyBard, and later by Wedge Networks Inc. Key implementation highlights of this company's approach can be deduced from its patent USPTO# 7,630,379. The main differentiators of Deep Content Inspection are: Content Deep Content Inspection is content-focused, instead of analyzing packets or classifying traffic based on application types as in Next-Generation Firewalls. "Understanding" content and its intent is the highest level of intelligence to be gained from network traffic. This is important as information flow is moving away from packets, toward applications, and ultimately to content. Example inspection levels: Packet: random sampling to get a larger picture Application: group or application profiling. Certain applications, or areas of applications, are allowed / not allowed or scanned further. Content: look at everything. Scan everything. Subject the content to rules of inspection (such as compliance/data loss prevention rules). Understand the intent.
Multi-services inspection Because the complete objects of the payload are available to a Deep Content Inspection system, some of the service/inspection examples can include: Anti-malware Anti-spam Data loss prevention for data in motion Zero-day or unknown threats Network traffic visualization and analytics Code attacks/injection Content manipulation Applications of deep content inspection DCI is currently being adopted by enterprises, service providers and governments as a reaction to increasingly complex internet traffic, with the benefit of understanding complete file types and their intent. Typically, these organizations have mission-critical applications with rigid requirements. Obstacles to deep content inspection Network throughput This type of inspection deals with real-time protocols that only continue to increase in complexity and size. One of the key barriers to providing this level of inspection, that is, looking at all content, is dealing with network throughput. Solutions must overcome this issue while not introducing latency into the network environment. They must also be able to effectively scale up to meet tomorrow's demands and the demands envisioned by the growing cloud computing trend. One approach is to use selective scanning; however, to avoid compromising accuracy, the selection criteria should be based on recurrence. The patent USPTO# 7,630,379 provides a scheme by which Deep Content Inspection can be carried out effectively using a recurrence selection scheme. The novelty introduced by this patent is that it addresses issues such as content (e.g., an MP3 file) that could have been renamed before transmission. Accuracy of services Dealing with the amount of traffic and information and then applying services requires very high-speed lookups to be effective. Content must be compared against full service platforms; otherwise the traffic that has been captured is not utilized effectively.
An example is often found in dealing with viruses and malicious content, where solutions compare content against only a small virus database instead of a full and complete one. See also Content Threat Removal Content Disarm and Reconstruction References Deep packet inspection Network analyzers
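A minimal sketch of the recurrence-based selection idea described above, under the assumption (as in the patent's renamed-content example) that content is identified by its bytes rather than its filename; scan_object() and the byte strings are placeholders, not a real scanning engine.

```python
import hashlib

# Sketch of recurrence-based selective scanning: select content for full
# inspection by a digest of its actual bytes, not by filename, so renaming
# a file before transmission neither evades nor duplicates the scan.
# scan_object() is a stand-in for an expensive full-content scan.

seen_digests = {}  # digest -> cached verdict

def scan_object(data: bytes) -> str:
    """Placeholder for a real anti-malware engine."""
    return "malicious" if b"EICAR" in data else "clean"

def inspect(filename: str, data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    if digest not in seen_digests:            # first occurrence: full scan
        seen_digests[digest] = scan_object(data)
    return seen_digests[digest]               # recurrence: cached verdict

print(inspect("song.mp3", b"ID3... audio bytes"))     # clean (full scan)
print(inspect("renamed.bin", b"ID3... audio bytes"))  # clean (cached verdict)
```

Only the first occurrence of a given byte stream pays the full scanning cost, which is how selective scanning can keep up with throughput without compromising accuracy on recurring content.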
32444811
https://en.wikipedia.org/wiki/Dooble
Dooble
Dooble is a free and open-source Web browser that was created to improve privacy. Currently, Dooble is available for FreeBSD, Linux, OS X, OS/2, and Windows. Dooble uses Qt for its user interface and abstraction from the operating system and processor architecture. As a result, Dooble should be portable to any system that supports OpenSSL, POSIX threads, Qt, SQLite, and other libraries. Features Dooble is designed and implemented to improve privacy and usability. Dooble includes a simple bookmarking system. Users may modify bookmarks via a bookmarks browser and a popup that is accessible from the location widget. Along with standard cookie management options, Dooble also provides a mechanism that automatically removes cookies. If permitted, Dooble will occasionally remove undesired HTTP cookies. According to the news portal Hongkiat, Dooble provides an "easy to use download manager". Dooble partially integrates the distributed search engine YaCy. Most of the data that Dooble retains is stored using authenticated encryption. Dooble does not encrypt file associations and user settings. Dooble also provides a session-based model using temporary keys. The passphrase may be modified without the loss of data. Included is a non-JavaScript file manager and FTP browser. Version 1.53 introduced support for the Gopher protocol. A security passphrase can be created for the browser. The password can be set from the Safe area of the browser settings. "You need to create a master password, otherwise everything is wiped when you exit the program", points out PCAdvisor. Version 1.26 of Dooble introduced support for add-ons. The TorBrowser add-on, based on Vidalia, was added in version 1.40. The Vidalia plugin was removed in version 1.49. The add-on named InterFace expands the browser with social network functions such as a messenger with group chat, a friend list, an e-mail client, a chess game, and a forum function like a bulletin board.
InterFace, a clone of the RetroShare messenger, is written with Qt and can be integrated as a plugin; it is now considered deprecated. Configurable proxy settings provide reasonable flexibility. Dooble supports session restoration for authenticated sessions: if Dooble exits prematurely, the user may restore previous tabs and windows at the next authenticated session. Some Web sites employ iframes to distribute content from one or more third-party Web sites. Since this technology may raise privacy issues for some users, Dooble provides a means of blocking such external content.

History

The first version (0.1) was released in September 2008. Since November 5, 2017, Dooble has used Qt WebEngine. Version 2.1.6 was released on January 25, 2018. Dooble was also available on Nokia's N900.

Reception

In 2014, Dooble was rated ninth in a list of ten "top" Linux browsers by Jack Wallen. In 2015, Dooble was also named one of the top five secure browsers. Reviewing Dooble in 2015, PCWorld described it as "rendering quickly, even on image-heavy sites". The Guardian recommended Dooble in 2015 as an alternative browser against surveillance: "Try out a privacy-focused browser such as Dooble."

See also

List of web browsers
List of web browsers for Unix and Unix-like operating systems
Comparison of web browsers
Qt (software)
Timeline of web browsers
Web browser history
32447029
https://en.wikipedia.org/wiki/HTTP/1.1%20Upgrade%20header
HTTP/1.1 Upgrade header
The Upgrade header field is an HTTP header field introduced in HTTP/1.1. In the exchange, the client begins by making a cleartext request, which is later upgraded to a newer HTTP protocol version or switched to a different protocol. A connection upgrade must be requested by the client; if the server wants to enforce an upgrade, it may send a 426 Upgrade Required response. The client can then send a new request with the appropriate upgrade headers while keeping the connection open.

Use with TLS

One use is to begin a request on the normal HTTP port but switch to Transport Layer Security (TLS). In practice such use is rare, with HTTPS being a far more common way to initiate encrypted HTTP. The server returns a 426 status code to alert legacy clients that the failure was client-related (400-level codes indicate a client failure). This method for establishing a secure connection is advantageous because it:

Does not require messy and problematic URL redirection on the server side;
Enables virtual hosting of secured websites (although HTTPS also allows this using Server Name Indication); and
Reduces the potential for user confusion by providing a single way to access a particular resource.

However, if the same resources are available from the server via both encrypted secure means and unencrypted clear means, a man-in-the-middle may maintain an unencrypted and unauthenticated connection with the client while maintaining an encrypted connection with the server.

Disadvantages of this method include:

The client cannot specify the requirement for secure HTTP in the URI (though the client can require it via the upgrade negotiation); and
Since HTTP is defined on a hop-by-hop basis, HTTP tunneling may be required to bypass proxy servers.

Use with WebSocket

WebSocket also uses this mechanism to set up a connection with an HTTP server in a compatible way. The WebSocket Protocol has two parts: a handshake to establish the upgraded connection, then the actual data transfer. 
First, a client requests a WebSocket connection by using the Upgrade: WebSocket and Connection: Upgrade headers, along with a few protocol-specific headers to establish the version being used and set up the handshake. The server, if it supports the protocol, replies with the same Upgrade: WebSocket and Connection: Upgrade headers and completes the handshake. Once the handshake has completed successfully, data transfer begins.

Use with HTTP/2

The HTTP Upgrade mechanism is used to establish HTTP/2 starting from plain HTTP. The client starts an HTTP/1.1 connection and sends an Upgrade: h2c header. If the server supports HTTP/2, it replies with a 101 Switching Protocols status code. The HTTP Upgrade mechanism is used only for cleartext HTTP/2 (h2c); for HTTP/2 over TLS (h2), the ALPN TLS protocol extension is used instead.

See also

Opportunistic encryption
Secure Hypertext Transfer Protocol

External links

Hypertext Transfer Protocol (HTTP) Upgrade Token Registry at IANA
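The server's side of the WebSocket handshake can be sketched concretely. Per RFC 6455, the server proves it understood the upgrade by concatenating the client's Sec-WebSocket-Key with a fixed GUID, hashing with SHA-1, and base64-encoding the result into the Sec-WebSocket-Accept header. A minimal sketch (illustrative only, not a full WebSocket server):

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_MAGIC = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute Sec-WebSocket-Accept: base64(SHA-1(key + magic GUID))."""
    digest = hashlib.sha1((sec_websocket_key + WS_MAGIC).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

def handshake_response(sec_websocket_key: str) -> str:
    """Build the server's 101 response that completes the upgrade."""
    return (
        "HTTP/1.1 101 Switching Protocols\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Accept: {websocket_accept(sec_websocket_key)}\r\n"
        "\r\n"
    )
```

With RFC 6455's example key "dGhlIHNhbXBsZSBub25jZQ==", `websocket_accept` yields "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=", the accept value given in the RFC itself.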
32549972
https://en.wikipedia.org/wiki/Techinline
Techinline
Techinline FixMe.IT is an application for remote support, remote control, desktop sharing, remote training, and file transfer between computers. The application runs on the Microsoft Windows operating system.

Product

How it works

FixMe.IT offers two desktop applications: Expert and Client. To start a new remote support session, the expert directs a remote user to the FixMe.IT website in order to download the Client application and obtain a unique session ID. The expert can then use the ID provided by the remote user to connect to their computer via the Expert application.

Features

FixMe.IT can be used to access both on-demand and unattended machines. The local expert can chat with the remote client, view and control the client's desktop, and also allow the client to view and control their local desktop. The expert may reboot the remote machine, and the connection will be restored automatically. Files can be transferred between machines by means of copy-and-paste and drag-and-drop. The application can be integrated into any website, and the interface can be customized by adding a company logo, text, and fonts. The expert can open and control multiple remote desktops simultaneously. Other features include session recording, reporting, and multi-monitor support.

Security

FixMe.IT uses a proprietary remote desktop protocol that is transmitted via SSL/TLS using 256-bit AES encryption, and supports two-factor authentication.

Licensing policy

Techinline FixMe.IT is commercial software that uses a subscription-based licensing model. Two types of subscription plans are available: monthly and yearly. Both plans allow an unlimited number of concurrent sessions and access to up to 150 unattended computers.

See also

Comparison of remote desktop software
Remote desktop software

External links

Techinline website
32611383
https://en.wikipedia.org/wiki/Duplicati
Duplicati
Duplicati is a backup client that securely stores encrypted, incremental, compressed remote backups of local files on cloud storage services and remote file servers. Duplicati supports not only various online backup services like OneDrive, Amazon S3, Backblaze, Rackspace Cloud Files, Tahoe-LAFS, and Google Drive, but also any servers that support SSH/SFTP, WebDAV, or FTP. Duplicati uses standard components such as rdiff, zip, AESCrypt, and GnuPG. This allows users to recover backup files even if Duplicati is not available. Released under the terms of the GNU Lesser General Public License (LGPL), Duplicati is free software.

Technology

Duplicati is written mostly in C# and implemented completely within the CLR, which enables it to be cross-platform. It runs well on 32-bit and 64-bit versions of Windows, macOS and Linux using either .NET Framework or Mono. Duplicati has both a graphical user interface with a wizard-style workflow and a command-line version for use in headless environments. Both interfaces use the same core and thus have the same set of features and capabilities. The command-line version is similar to the Duplicity interface. Duplicati has some features that are usually only found in commercial systems, such as remote verification of backup files, disk snapshots, and backup of open files. The disk snapshots are performed with VSS on Windows and LVM on Linux.

History

The original Duplicati project was started in June 2008 and was intended to produce a graphical user interface for the Duplicity program. This included a port of the Duplicity code for use on Windows, but that approach was dropped in September 2008, when work on a clean re-implementation began. This re-implementation includes all the sub-programs found in Duplicity, such as rdiff, ftp, etc. Duplicati saw its initial release in June 2009. In 2012, work on Duplicati 2 started, which is a complete rewrite. 
It includes a new storage engine that allows efficient, incremental, continuous backups. The new user interface is web-based, which makes it possible to install Duplicati 2 on headless systems like servers or a NAS. As it is also responsive, it can easily be used on mobile devices.

Implementation

The Duplicati GUI and command-line interface both call a common component called Main, which serves as a binding point for all the supported operations. Currently, the encryption, compression and storage components are treated as subcomponents and are loaded at runtime, making it possible for a third-party developer to inject a subcomponent into Duplicati without access to the source or any need to modify Duplicati itself. The license is also flexible enough to allow redistribution of Duplicati with a closed-source storage provider. Duplicati is designed to be as independent of the provider as possible, which means that any storage medium that supports the common commands (GET, PUT, LIST, DELETE) can work with Duplicati. The Duplicity model, on which Duplicati is based, relies heavily on components present in the system, such as librdiff, TcFTP and others. Since Duplicati is intended to be cross-platform, and it is unlikely that all those components are available on all platforms, Duplicati re-implements the components instead. Most notably, Duplicati features rdiff and AESCrypt implementations that work on any system that supports a Common Language Runtime.

Limitations of Duplicati 1

The GUI frontend in Duplicati 1.x is intended to be used on a single machine with a display attached. However, it is also possible to install Duplicati as a Windows service or Linux daemon, and keep the Duplicati system tray icon from starting its own Duplicati service. This limitation has been addressed in Duplicati 2, which has a web interface and can be used on headless systems. Duplicati 1.x has extremely slow file listings, so browsing a file tree to do restores can take a long time. 
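The provider-independence described above rests on a very small backend contract: any medium that can GET, PUT, LIST and DELETE named volumes can serve as a backup target. The sketch below illustrates that contract with a local-directory backend; it is a hypothetical Python illustration of the idea, not Duplicati's actual C# backend interface:

```python
import os

class LocalDirBackend:
    """Illustrative storage backend exposing the four operations a
    Duplicati-style backup tool needs: GET, PUT, LIST, DELETE.
    (Hypothetical sketch; Duplicati's real backends are C# classes.)"""

    def __init__(self, root: str):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, name: str, data: bytes) -> None:
        """Upload one backup volume under the given name."""
        with open(os.path.join(self.root, name), "wb") as f:
            f.write(data)

    def get(self, name: str) -> bytes:
        """Download a previously stored volume."""
        with open(os.path.join(self.root, name), "rb") as f:
            return f.read()

    def list(self) -> list:
        """Enumerate stored volume names (needed to find backup chains)."""
        return sorted(os.listdir(self.root))

    def delete(self, name: str) -> None:
        """Remove an expired volume during retention cleanup."""
        os.remove(os.path.join(self.root, name))
```

An FTP, WebDAV or S3 backend would implement the same four methods against its own transport, which is why the core engine never needs to know where volumes physically live.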
Since Duplicati produces incremental backups, a corrupt or missing incremental volume can render all following incremental backups (up to the next full backup) useless. Duplicati 2 regularly tests the backup to detect corrupted files early. Duplicati 1.x only stores the file modification date, not metadata like permissions and attributes. This has been addressed in Duplicati 2.

See also

List of backup software
32632788
https://en.wikipedia.org/wiki/Privacy%20by%20design
Privacy by design
Privacy by design is an approach to systems engineering initially developed by Ann Cavoukian and formalized in a 1995 joint report on privacy-enhancing technologies by a joint team of the Information and Privacy Commissioner of Ontario (Canada), the Dutch Data Protection Authority, and the Netherlands Organisation for Applied Scientific Research. The privacy by design framework was published in 2009 and adopted by the International Assembly of Privacy Commissioners and Data Protection Authorities in 2010. Privacy by design calls for privacy to be taken into account throughout the whole engineering process. The concept is an example of value sensitive design, i.e., taking human values into account in a well-defined manner throughout the process. Cavoukian's approach to privacy has been criticized as vague and hard to enforce, as difficult to apply to certain disciplines, and for prioritizing corporate interests over consumers' interests and placing insufficient emphasis on minimizing data collection. The European GDPR regulation incorporates privacy by design.

History and background

The privacy by design framework was developed by Ann Cavoukian, Information and Privacy Commissioner of Ontario, following her joint work with the Dutch Data Protection Authority and the Netherlands Organisation for Applied Scientific Research in 1995. In 2009, the Information and Privacy Commissioner of Ontario co-hosted an event, Privacy by Design: The Definitive Workshop, with the Israeli Law, Information and Technology Authority at the 31st International Conference of Data Protection and Privacy Commissioners (2009). In 2010 the framework achieved international acceptance when the International Assembly of Privacy Commissioners and Data Protection Authorities unanimously passed a resolution on privacy by design recognising it as an international standard at their annual conference. 
Among other commitments, the commissioners resolved to promote privacy by design as widely as possible and foster the incorporation of the principle into policy and legislation.

Global usage

Germany released a statute (§ 3 Sec. 4 Teledienstedatenschutzgesetz [Teleservices Data Protection Act]) as early as July 1997. The new EU General Data Protection Regulation (GDPR) includes ‘data protection by design’ and ‘data protection by default’, the second foundational principle of privacy by design. Canada’s Privacy Commissioner included privacy by design in its report on Privacy, Trust and Innovation – Building Canada’s Digital Advantage. In 2012, the U.S. Federal Trade Commission (FTC) recognized privacy by design as one of its three recommended practices for protecting online privacy in its report entitled Protecting Consumer Privacy in an Era of Rapid Change, and the FTC included privacy by design as one of the key pillars in its Final Commissioner Report on Protecting Consumer Privacy. In Australia, the Commissioner for Privacy and Data Protection for the State of Victoria (CPDP) has formally adopted privacy by design as a core policy to underpin information privacy management in the Victorian public sector. The UK Information Commissioner’s Office website highlights privacy by design and data protection by design and default. In October 2014, the Mauritius Declaration on the Internet of Things was made at the 36th International Conference of Data Protection and Privacy Commissioners and included privacy by design and default. The Privacy Commissioner for Personal Data, Hong Kong held an educational conference on the importance of privacy by design. In the private sector, Sidewalk Toronto commits to privacy by design principles; Brendon Lynch, Chief Privacy Officer at Microsoft, wrote an article called Privacy by Design at Microsoft; and Deloitte relates certifiable trustworthiness to privacy by design. 
Foundational principles

Privacy by design is based on seven "foundational principles":

Proactive not reactive; preventive not remedial
Privacy as the default setting
Privacy embedded into design
Full functionality – positive-sum, not zero-sum
End-to-end security – full lifecycle protection
Visibility and transparency – keep it open
Respect for user privacy – keep it user-centric

The principles have been cited in over five hundred articles referring to the Privacy by Design in Law, Policy and Practice white paper by Ann Cavoukian.

Foundational principles in detail

Proactive not reactive; preventive not remedial

The privacy by design approach is characterized by proactive rather than reactive measures. It anticipates and prevents privacy-invasive events before they happen. Privacy by design does not wait for privacy risks to materialize, nor does it offer remedies for resolving privacy infractions once they have occurred — it aims to prevent them from occurring. In short, privacy by design comes before-the-fact, not after.

Privacy as the default

Privacy by design seeks to deliver the maximum degree of privacy by ensuring that personal data are automatically protected in any given IT system or business practice. If an individual does nothing, their privacy still remains intact. No action is required on the part of the individual to protect their privacy — it is built into the system, by default.

Privacy embedded into design

Privacy by design is embedded into the design and architecture of IT systems as well as business practices. It is not bolted on as an add-on, after the fact. The result is that privacy becomes an essential component of the core functionality being delivered. Privacy is integral to the system, without diminishing functionality. 
Full functionality – positive-sum, not zero-sum

Privacy by design seeks to accommodate all legitimate interests and objectives in a positive-sum “win-win” manner, not through a dated, zero-sum approach, where unnecessary trade-offs are made. Privacy by design avoids the pretense of false dichotomies, such as privacy versus security, demonstrating that it is possible to have both.

End-to-end security – full lifecycle protection

Privacy by design, having been embedded into the system prior to the first element of information being collected, extends securely throughout the entire lifecycle of the data involved — strong security measures are essential to privacy, from start to finish. This ensures that all data are securely retained, and then securely destroyed at the end of the process, in a timely fashion. Thus, privacy by design ensures cradle-to-grave, secure lifecycle management of information, end-to-end.

Visibility and transparency – keep it open

Privacy by design seeks to assure all stakeholders that whatever the business practice or technology involved, it is in fact operating according to the stated promises and objectives, subject to independent verification. Its component parts and operations remain visible and transparent, to users and providers alike. Remember, trust but verify.

Respect for user privacy – keep it user-centric

Above all, privacy by design requires architects and operators to keep the interests of the individual uppermost by offering such measures as strong privacy defaults, appropriate notice, and empowering user-friendly options. Keep it user-centric.

Design and standards

The International Organization for Standardization (ISO) approved the Committee on Consumer Policy (COPOLCO) proposal for a new ISO standard: Consumer Protection: Privacy by Design for Consumer Goods and Services (ISO/PC317). 
The standard will aim to specify the design process to provide consumer goods and services that meet consumers’ domestic processing privacy needs as well as the personal privacy requirements of data protection. The standard has the UK as secretariat, with thirteen participating members and twenty observing members. The Standards Council of Canada (SCC) is one of the participating members and has established a mirror Canadian committee to ISO/PC317. The OASIS Privacy by Design Documentation for Software Engineers (PbD-SE) Technical Committee provides a specification to operationalize privacy by design in the context of software engineering. Privacy by design, like security by design, is a normal part of the software development process and a risk reduction strategy for software engineers. The PbD-SE specification translates the PbD principles into conformance requirements within software engineering tasks and helps software development teams to produce artifacts as evidence of PbD principle adherence. Following the specification facilitates the documentation of privacy requirements from software conception to retirement, thereby providing a plan for adherence to privacy by design principles, and other guidance to privacy best practices, such as NIST’s 800-53 Appendix J (NIST SP 800-53) and the Fair Information Practice Principles (FIPPs) (PMRM-1.0).

Relationship to privacy-enhancing technologies

Privacy by design originated from privacy-enhancing technologies (PETs) in a joint 1995 report by Ann Cavoukian and John Borking. In 2007 the European Commission provided a memo on PETs. In 2008 the British Information Commissioner's Office commissioned a report titled Privacy by Design – An Overview of Privacy Enhancing Technologies. There are many facets to privacy by design, including software and systems engineering as well as administrative elements (e.g. legal, policy, procedural), other organizational controls, and operating contexts. 
Privacy by design evolved from early efforts to express fair information practice principles directly in the design and operation of information and communications technologies. In his publication Privacy by Design: Delivering the Promises, Peter Hustinx acknowledges the key role played by Ann Cavoukian and John Borking, then Deputy Privacy Commissioners, in the joint 1995 publication Privacy-Enhancing Technologies: The Path to Anonymity. This 1995 report focussed on exploring technologies that permit transactions to be conducted anonymously. Privacy-enhancing technologies allow online users to protect the privacy of their personally identifiable information (PII) provided to (and handled by) services or applications. Privacy by design evolved to consider the broader systems and processes in which PETs were embedded and operated. The U.S. Center for Democracy & Technology (CDT), in The Role of Privacy by Design in Protecting Consumer Privacy, distinguishes PETs from privacy by design, noting that “PETs are most useful for users who already understand online privacy risks. They are essential user empowerment tools, but they form only a single piece of a broader framework that should be considered when discussing how technology can be used in the service of protecting privacy.”

Criticism and recommendations

The privacy by design framework has attracted academic debate, particularly following the 2010 International Data Commissioners resolution. This debate provides criticism of privacy by design, with suggestions by legal and engineering experts to better understand how to apply the framework in various contexts. Privacy by design has been critiqued as "vague" and leaving "many open questions about their application when engineering systems." In 2007, researchers at K.U. 
Leuven published Engineering Privacy by Design, noting that “The design and implementation of privacy requirements in systems is a difficult problem and requires translation of complex social, legal and ethical concerns into systems requirements”. The authors claim that their statement that the principles of privacy by design "remain vague and leave many open questions about their application when engineering systems" may be viewed as criticism. However, the purpose of the paper is to propose that "starting from data minimization is a necessary and foundational first step to engineer systems in line with the principles of privacy by design". The objective of their paper is to provide an "initial inquiry into the practice of privacy by design from an engineering perspective in order to contribute to the closing of the gap between policymakers’ and engineers’ understanding of privacy by design." It has also been pointed out that privacy by design is similar to voluntary compliance schemes in industries impacting the environment, and thus lacks the teeth necessary to be effective, and may differ per company. In addition, the evolutionary approach currently taken to the development of the concept will come at the cost of privacy infringements, because evolution also implies letting unfit phenotypes (privacy-invading products) live until they are proven unfit. Some critics have pointed out that certain business models are built around customer surveillance and data manipulation, and that voluntary compliance is therefore unlikely. In 2011, the Danish National IT and Telecom Agency published a discussion paper on "New Digital Security Models"; the publication references privacy by design as a key goal in creating a security model that is compliant with privacy by design. This is done by extending the concept to "Security by Design", with the objective of balancing anonymity and surveillance by eliminating identification as much as possible. 
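The "data minimization first" starting point advocated in the Leuven paper can be made concrete with a small sketch: a service declares the fields it actually needs for its stated purpose, and everything else is dropped before storage. This is an illustrative example only (the field names and the billing purpose are hypothetical, not drawn from any cited system):

```python
# Illustrative data-minimization filter. The service declares the fields it
# needs for its declared purpose; all other submitted fields are discarded
# before the record is stored. (Hypothetical example; field names assumed.)

REQUIRED_FIELDS = {"user_id", "order_total"}  # assumed purpose: billing

def minimize(record: dict) -> dict:
    """Keep only the fields required for the declared purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

submitted = {
    "user_id": 17,
    "order_total": 49.99,
    "birthdate": "1990-01-01",      # not needed for billing: dropped
    "browsing_history": ["a", "b"], # not needed for billing: dropped
}
stored = minimize(submitted)
```

The point of the pattern is that privacy follows from what is never collected: data that was discarded at the boundary cannot later be leaked, subpoenaed, or repurposed.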
In 2013, Rubinstein and Good used Google and Facebook privacy incidents to conduct a counterfactual analysis in order to identify lessons of value for regulators when recommending privacy by design. The first was that “more detailed principles and specific examples” would be more helpful to companies. The second is that “usability is just as important as engineering principles and practices”. The third is that there needs to be more work on “refining and elaborating on design principles – both in privacy engineering and usability design”, including efforts to define international privacy standards. The final lesson learned is that “regulators must do more than merely recommend the adoption and implementation of privacy by design.” Another criticism is that current definitions of privacy by design do not address the methodological aspect of systems engineering, such as using sound system engineering methods, e.g. those which cover the complete system and data life cycle. The concept also does not focus on the role of the actual data holder, but on that of the system designer. This role is not known in privacy law, so the concept of privacy by design is not based on law. This, in turn, undermines trust by data subjects, data holders and policy-makers. The advent of the GDPR, with its maximum fine of 4% of global turnover, now provides a balance between business benefit and turnover, and addresses the voluntary-compliance criticism and Rubinstein and Good's requirement that “regulators must do more than merely recommend the adoption and implementation of privacy by design”. Rubinstein and Good also highlighted that privacy by design could result in applications that exemplified its principles, and their work was well received. 
The May 2018 paper by European Data Protection Supervisor Giovanni Buttarelli, Preliminary Opinion on Privacy by Design, states, "While privacy by design has made significant progress in legal, technological and conceptual development, it is still far from unfolding its full potential for the protection of the fundamental rights of individuals. The following sections of this opinion provide an overview of relevant developments and recommend further efforts". The executive summary makes the following recommendations to EU institutions:

To ensure strong privacy protection, including privacy by design, in the ePrivacy Regulation,
To support privacy in all legal frameworks which influence the design of technology, increasing incentives and substantiating obligations, including appropriate liability rules,
To foster the roll-out and adoption of privacy by design approaches and PETs in the EU and at the member states’ level through appropriate implementing measures and policy initiatives,
To ensure competence and resources for research and analysis on privacy engineering and privacy-enhancing technologies at EU level, by ENISA or other entities,
To support the development of new practices and business models through the research and technology development instruments of the EU,
To support EU and national public administrations to integrate appropriate privacy by design requirements in public procurement,
To support an inventory and observatory of the “state of the art” of privacy engineering and PETs and their advancement. 
The EDPS will:

Continue to promote privacy by design, where appropriate in cooperation with other data protection authorities in the European Data Protection Board (EDPB),
Support coordinated and effective enforcement of Article 25 of the GDPR and related provisions,
Provide guidance to controllers on the appropriate implementation of the principle laid down in the legal base, and
Together with the data protection authorities of Austria, Ireland and Schleswig-Holstein, award privacy-friendly apps in the mobile health domain.

Implementing privacy by design

The European Data Protection Supervisor Giovanni Buttarelli set out the requirement to implement privacy by design in his article. The European Union Agency for Network and Information Security (ENISA) provided a detailed report, Privacy and Data Protection by Design – From Policy to Engineering, on implementation. The Summer School on real-world crypto and privacy provided a tutorial on "Engineering Privacy by Design". The OWASP Top 10 Privacy Risks Project for web applications gives hints on how to implement privacy by design in practice. The OASIS Privacy by Design Documentation for Software Engineers (PbD-SE) offers a privacy extension/complement to OMG’s Unified Modeling Language (UML) and serves as a complement to OASIS’ eXtensible Access Control Mark-up Language (XACML) and Privacy Management Reference Model (PMRM). Privacy by design guidelines have been developed to operationalise some of the high-level privacy-preserving ideas into more granular, actionable advice.

See also

Consumer privacy
General Data Protection Regulation
FTC fair information practice
Internet privacy
Mesh networking
Dark web
End-to-end encryption
Personal data service
Privacy engineering
Privacy-enhancing technologies
Surveillance capitalism
User interface design
32639168
https://en.wikipedia.org/wiki/Operation%20Delego
Operation Delego
Operation Delego is a major international law enforcement investigation, launched in 2009, which dismantled an international pedophile ring that operated an invitation-only Internet site named Dreamboard. The site featured incentives for images of the violent sexual abuse of young children under twelve, including infants. Only 72 charges were filed against the approximately 600 members of Dreamboard due to the extensive encryption involved. Members were required to upload new material at least every 50 days to maintain their access and remain in good standing. Operation Delego is a spinoff investigation from leads developed through "Operation Nest Egg", the prosecution of another online group dedicated to sharing and disseminating child pornography. Operation Nest Egg was itself a spinoff investigation developed from leads related to another international investigation, "Operation Joint Hammer", which targeted transnational rings of child pornography trafficking. Dozens of law enforcement agencies worldwide were involved in Delego, including Eurojust, and arrests were made on all five continents. Twenty of those charged, however, are known only by their Internet handles, and as such were individually charged as John Does and remain at large. Some of the indictments were unsealed as of August 2011, and the names of some of those involved in Dreamboard are publicly available. United States Department of Homeland Security Secretary Janet Napolitano stated at a news conference that "The board may have been the vehicle for the distribution of up to 123 terabytes of child pornography, which is roughly equivalent to 16,000 DVDs," making this the largest child pornography bust in the DHS's history. 
Launched in 2009 by federal law enforcement, Operation Delego resulted in the arrest of 52 people in 14 countries, including Canada, Denmark, Ecuador, France, Germany, Hungary, Kenya, the Netherlands, the Philippines, Qatar, Serbia, Sweden and Switzerland, according to United States Attorney General Eric Holder. Furthermore, according to federal agents, while Dreamboard's servers were located in Shreveport, Louisiana, the site's top administrators were in France and Canada. Operation Delego is ongoing.

Arrests

Arrests have occurred in the following countries: Canada, Denmark, Ecuador, France, Germany, Hungary, Kenya, the Netherlands, the Philippines, Qatar, Serbia, Sweden, Switzerland, and the US.

Convictions

On May 10, 2011, Timothy Lee Gentry, 33, of Burlington, Ky., was sentenced to 25 years in prison.
On May 31, 2011, Michael Biggs, 32, of Orlando, Fla., was sentenced to 20 years in prison.
On June 22, 2011, Michael Childs, 49, of Huntsville, Ala., was sentenced to 30 years in prison.
On June 29, 2011, Christopher Luke, 31, of Tonawanda, N.Y., was sentenced to 20 years in prison for engaging in a child exploitation enterprise.
On July 14, 2011, Charles Christian, 49, of Tilton, Ill., was sentenced to more than 22 years in prison.
On July 17, 2012, Robert Cuff, 49, was sentenced to life in prison for producing and distributing child pornography.

Jonathan Mayer, 29, of Newport, Tenn., and Shane Micah Turner, 33, of Roy, Utah, were sentenced in connection with their participation in the Dreamboard bulletin board. Each received a sentence of 17½ years in prison followed by lifetime supervised release, as a result of both defendants pleading guilty to conspiring to advertise child pornography. The sentences were handed down by U.S. District Judge S. Maurice Hicks. 
On May 17, 2012 John Wyss, aka "Bones," 55, of Monroe, Wisconsin, was found guilty, after a four-day jury trial, of one count of engaging in a child exploitation enterprise, one count of conspiracy to advertise child pornography and one count of conspiracy to distribute child pornography. In September 2012 Wyss was sentenced to life in prison. A total of 72 individuals, including Wyss, have been charged as a result of Operation Delego. To date, 57 of the 72 charged defendants have been arrested in the United States and abroad. 42 individuals have pleaded guilty. 25 of the 42 individuals who have pleaded guilty for their roles in the conspiracy have been sentenced to prison and have received sentences ranging between 10 years and life in prison. 15 of the 72 charged individuals remain at large and are known only by their online identities. Efforts to identify and apprehend these individuals continue. On January 8, 2013, David Ettlinger, aka "ee1", 35, of Newton, Massachusetts was sentenced to serve 45 years in prison. In addition to his prison term, Ettlinger was sentenced to lifetime supervised release. On July 29, 2013, the final two defendants, of those arrested thus far, in the Dreamboard child exploitation and child pornography ring were sentenced to federal prison. Christopher Blackford, 28, of Charleston, S.C., was sentenced to 22 years in federal prison before U.S. District Court Judge S. Maurice Hicks in the Western District of Louisiana. In addition to his prison sentence, Blackford faces a lifetime of supervised release. Blackford admitted in his April guilty plea that he joined Dreamboard in December 2009 and contributed 84 posts to the online bulletin board that contained child pornography. William Davis, 39, of Bristol, N.H., was sentenced to more than 17 years in federal prison in addition to a lifetime of supervised release. 
Davis admitted in his April guilty plea that he posted advertisements offering to distribute child pornography to other members of the board.

References

External links

US DOJ Brazthumper Complaint, PDF
US DOJ Blackbart Indictment, PDF
US DOJ Hawkeye Indictment, PDF
US DOJ Twitched Indictment, PDF
32656215
https://en.wikipedia.org/wiki/Motorola%20Photon
Motorola Photon
The Motorola Photon 4G was a high-end Android-based smartphone that was distributed exclusively by Sprint. A very similar model was available as the Motorola Electrify from U.S. Cellular.

User interface

The Photon runs a customized interface similar to the standard Android interface, with several additions. Motorola provides custom widgets to toggle settings for airplane mode, Bluetooth, wireless 4G access (WiMAX 2.5 GHz) and WiFi access, as well as resizable widgets for functions such as the calendar, social networking, a world clock and more. The Photon's customized interface has seven home screens and four main onscreen buttons at the bottom of the screen. Of the bottom buttons, the left three may be customized to run a program of the user's choosing, while the right-most button opens the app drawer. Capacitive touch buttons at the bottom of the screen are Menu, Home, Back and Search. Pressing and holding the Home button brings up a "Recent Apps" view, while double tapping the Home button will either show a thumbnailed view of all seven home screens (default) or, at the user's preference, quick launch a specific function.

Connectivity

The Photon is an international-capable phone ("world phone"): besides using Sprint's CDMA/EV-DO network and other CDMA networks accessible internationally and through domestic roaming, it accepts a GSM SIM card, with quad-band GSM and tri-band UMTS/WCDMA/HSPA+ capabilities. The phone also, like most Android phones, features WiFi access, Bluetooth 2.1, GPS, a 3.5mm headset jack, and a micro-USB port for charging and data sync with a PC. An HDMI port allows for access to HDMI mirroring and separate functions.

Business and Enterprise features

The Photon supports several important business features. First, it supports access to Microsoft Exchange Server mail servers, including support for several Exchange ActiveSync profiles. Second, full encryption of the internal and external storage is supported.
This last feature requires the creation of a password that must be entered when booting the phone. Motorola provides several additional software packages for PCs and Macs, such as Motorola Media Link (to manage photos, music and video) and the Motorola Phone Portal (to manage the phone and its contents from a web browser over a network link).

Webtop

Like the Motorola Atrix 4G, the Photon includes Motorola's integrated Ubuntu-based 'Webtop' application. Webtop launches when the phone is connected to an external display through the Laptop Dock or HD Multimedia Dock. In Webtop mode, which offers a user interface similar to a typical Ubuntu desktop, the phone can run several applications on the external display, such as the Firefox web browser, social networking clients and a 'mobile view' application that provides full access to the Photon and its screen. In September 2011, Motorola released the source code of the Webtop application on SourceForge.

Detailed specifications

Also known as Motorola MB855
Nvidia Tegra 2 (dual-core 1 GHz ARM Cortex-A9 + GeForce ULP)
World phone capable of running on multiple types of networks (WiMAX/CDMA/GSM/UMTS)
Android 2.3 "Gingerbread"
1 GB LP DDR2 RAM
16 GB internal memory, expandable by microSD up to 32 GB, for a total of 48 GB
4.3-inch PenTile qHD display (540×960) with Gorilla Glass
8.0 MP camera with dual LED flash, 4x digital zoom and autofocus; 1080p video capture at 30 frame/s
VGA front-facing camera for video calls
TriColor LED notification light
1650 mAh user-changeable battery
Built-in kickstand
Motorola Webtop interface
HDMI mirroring

Android updates

In 2011, Motorola joined the Android Upgrade Alliance, promising to release operating system updates for all its phones for 18 months following initial release. However, in October 2012, Motorola announced in a forum post that several phones, including the Photon 4G, would not receive updates to Android 4.0 Ice Cream Sandwich, instead offering a rebate program for users to trade up to newer devices.
Punit Soni, who runs software product management for Motorola Mobility, stated: "I think some of them [loyal customers] have gotten a raw deal. We understand strongly and apologize for it."

See also

Motorola Xoom
Motorola Droid Bionic
Motorola Cliq 2
Galaxy Nexus
Comparison of smartphones
Other phones with the Tegra 2 SoC: LG Optimus 2X, Motorola Atrix, Samsung Galaxy R, Droid X2

References

External links

Sprint.com
MotoDev site
Review of Motorola Photon
32695816
https://en.wikipedia.org/wiki/MQTT
MQTT
MQTT is a lightweight, publish-subscribe network protocol that transports messages between devices. The protocol usually runs over TCP/IP; however, any network protocol that provides ordered, lossless, bi-directional connections can support MQTT. It is designed for connections with remote locations where resource constraints exist or network bandwidth is limited. The protocol is an open OASIS standard and an ISO recommendation (ISO/IEC 20922).

History

Andy Stanford-Clark (IBM) and Arlen Nipper (then working for Eurotech, Inc.) authored the first version of the protocol in 1999. It was used to monitor oil pipelines within the SCADA industrial control system. The goal was to have a protocol that is bandwidth-efficient, lightweight and uses little battery power, because the devices were connected via satellite link, which at that time was extremely expensive. Historically, the "MQ" in "MQTT" came from the IBM MQ (then 'MQSeries') product line, where it stands for "Message Queue". However, the protocol provides publish-and-subscribe messaging (no queues, in spite of the name). In the specification opened by IBM as version 3.1, the protocol was referred to as "MQ Telemetry Transport". Subsequent versions released by OASIS refer to the protocol strictly as just "MQTT", although the technical committee itself is named "OASIS Message Queuing Telemetry Transport Technical Committee". Since 2013, "MQTT" does not stand for anything. In 2013, IBM submitted MQTT v3.1 to the OASIS specification body with a charter that ensured only minor changes to the specification could be accepted. After taking over maintenance of the standard from IBM, OASIS released version 3.1.1 on October 29, 2014. A more substantial upgrade to MQTT version 5, adding several new features, was released on March 7, 2019. MQTT-SN (MQTT for Sensor Networks) is a variation of the main protocol aimed at battery-powered embedded devices on non-TCP/IP networks, such as Zigbee.
Overview

The MQTT protocol defines two types of network entities: a message broker and a number of clients. An MQTT broker is a server that receives all messages from the clients and then routes the messages to the appropriate destination clients. An MQTT client is any device (from a microcontroller up to a fully fledged server) that runs an MQTT library and connects to an MQTT broker over a network.

Information is organized in a hierarchy of topics. When a publisher has a new item of data to distribute, it sends a control message with the data to the connected broker. The broker then distributes the information to any clients that have subscribed to that topic. The publisher does not need to have any data on the number or locations of subscribers, and subscribers, in turn, do not have to be configured with any data about the publishers.

If a broker receives a message on a topic for which there are no current subscribers, the broker discards the message unless the publisher of the message designated the message as a retained message. A retained message is a normal MQTT message with the retained flag set to true. The broker stores the last retained message and the corresponding QoS for the selected topic. Each client that subscribes to a topic pattern that matches the topic of the retained message receives the retained message immediately after subscribing. The broker stores only one retained message per topic. This allows new subscribers to a topic to receive the most current value rather than waiting for the next update from a publisher.

When a publishing client first connects to the broker, it can set up a default message to be sent to subscribers if the broker detects that the publishing client has unexpectedly disconnected from the broker.

Clients only interact with a broker, but a system may contain several broker servers that exchange data based on their current subscribers' topics.

A minimal MQTT control message can be as little as two bytes of data.
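The routing and retained-message behaviour described above can be illustrated with a small in-memory sketch. This is not an implementation of the MQTT wire protocol; the `MiniBroker` class and its method names are invented for illustration only.

```python
# Toy sketch of MQTT-style topic routing with retained messages.
# Hypothetical names; real brokers also handle wildcards, QoS, sessions.

class MiniBroker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks
        self.retained = {}     # topic -> last retained payload

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)
        # New subscribers immediately receive the retained message, if any.
        if topic in self.retained:
            callback(topic, self.retained[topic])

    def publish(self, topic, payload, retain=False):
        if retain:
            # Only one retained message is stored per topic.
            self.retained[topic] = payload
        # Messages on topics with no subscribers are simply discarded.
        for callback in self.subscribers.get(topic, []):
            callback(topic, payload)

broker = MiniBroker()
received = []
broker.publish("sensors/temp", "21.5", retain=True)  # no subscribers yet
broker.subscribe("sensors/temp", lambda t, p: received.append(p))
broker.publish("sensors/temp", "22.0")
print(received)  # ['21.5', '22.0'] — retained value delivered on subscribe
```

Note how the late subscriber still sees the most recent retained value, which is exactly why retained messages are useful for slowly changing sensor readings.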
A control message can carry nearly 256 megabytes of data if needed. There are fourteen defined message types used to connect and disconnect a client from a broker, to publish data, to acknowledge receipt of data, and to supervise the connection between client and server.

MQTT relies on the TCP protocol for data transmission. A variant, MQTT-SN, is used over other transports such as UDP or Bluetooth.

MQTT sends connection credentials in plain text format and does not itself include any measures for security or authentication. This can be provided by using TLS to encrypt and protect the transferred information against interception, modification or forgery. The default unencrypted MQTT port is 1883; the encrypted port is 8883.

MQTT broker

The MQTT broker is a piece of software running on a computer (on-premises or in the cloud), and can be self-built or hosted by a third party. It is available in both open source and proprietary implementations.

The broker acts as a post office: MQTT clients don't use a direct connection address of the intended recipient, but use a subject line called a "topic". Anyone who subscribes receives a copy of all messages for that topic. Multiple clients can subscribe to a topic from a single broker (one-to-many capability), and a single client can register subscriptions to topics with multiple brokers (many-to-one).

Each client can both produce and receive data by both publishing and subscribing, i.e. a device can publish sensor data and still be able to receive configuration information or control commands (MQTT is a bi-directional communication protocol). This helps in both sharing data and managing and controlling devices. A client cannot broadcast the same data to a range of topics; it must publish multiple messages to the broker, each with a single topic given.

With the MQTT broker architecture, the client devices and server application become decoupled. In this way, the clients are kept unaware of each other's information.
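The "nearly 256 megabytes" limit mentioned above comes from the variable-length "Remaining Length" field in the MQTT fixed header: each length byte carries 7 bits of payload plus a continuation bit, and the specification allows at most 4 such bytes, for a maximum of 2^28 - 1 = 268,435,455 bytes. The sketch below follows the encoding scheme from the OASIS specification; function names are my own.

```python
# Encode/decode MQTT's "Remaining Length" (Variable Byte Integer):
# 7 data bits per byte, high bit = "more bytes follow", max 4 bytes.

def encode_remaining_length(n: int) -> bytes:
    if not 0 <= n <= 268_435_455:
        raise ValueError("out of range for MQTT remaining length")
    out = bytearray()
    while True:
        byte = n % 128
        n //= 128
        if n > 0:
            byte |= 0x80  # continuation bit: more length bytes follow
        out.append(byte)
        if n == 0:
            return bytes(out)

def decode_remaining_length(data: bytes) -> int:
    value, multiplier = 0, 1
    for byte in data:
        value += (byte & 0x7F) * multiplier
        if not byte & 0x80:
            return value
        multiplier *= 128
    raise ValueError("malformed remaining length")

print(encode_remaining_length(321).hex())            # 'c102'
print(decode_remaining_length(b"\xff\xff\xff\x7f"))  # 268435455
```

A two-byte minimal control message is then easy to see: one byte for packet type and flags, plus a single Remaining Length byte of 0.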
MQTT connections can use TLS encryption with username- and password-protected connections. Optionally, the connection may require certification, in the form of a certificate file that a client provides and must match the server's copy. In case of failure, broker software and clients can automatically hand over to a redundant/automatic backup broker. Backup brokers can also be set up to share the load of clients across multiple servers onsite, in the cloud, or a combination of these.

The broker can support both standard MQTT and MQTT for compliant specifications such as Sparkplug. This can be done with the same server, at the same time and with the same levels of security.

The broker keeps track of all of a session's information as the device goes on and off, in a function called "persistent sessions". In this state, a broker stores connection info for each client, the topics each client has subscribed to, and any messages for a topic with a QoS of 1 or 2.

The main advantages of an MQTT broker are:
Eliminates vulnerable and insecure client connections
Can easily scale from a single device to thousands
Manages and tracks all client connection states, including security credentials and certificates
Reduces network strain without compromising security (cellular or satellite networks)

Message types

Connect – waits for a connection to be established with the server and creates a link between the nodes.
Disconnect – waits for the MQTT client to finish any work it must do, and for the TCP/IP session to disconnect.
Publish – returns immediately to the application thread after passing the request to the MQTT client.

Version 5.0

In 2019, OASIS released the official MQTT 5.0 standard. Version 5.0 includes the following major new features:
Reason codes: acknowledgements now support return codes, which provide a reason for a failure.
Shared subscriptions: allow the load to be balanced across clients and thus reduce the risk of load problems.
Message expiry: messages can include an expiry date and are deleted if they are not delivered within this time period.
Topic alias: the name of a topic can be replaced with a single number.

Quality of service

Each connection to the broker can specify a quality of service (QoS) measure. These are classified in increasing order of overhead:
At most once – the message is sent only once and the client and broker take no additional steps to acknowledge delivery (fire and forget).
At least once – the message is re-tried by the sender multiple times until acknowledgement is received (acknowledged delivery).
Exactly once – the sender and receiver engage in a two-level handshake to ensure only one copy of the message is received (assured delivery).
This field does not affect handling of the underlying TCP data transmissions; it is only used between MQTT senders and receivers.

Applications

Several projects implement MQTT, for example:
OpenHAB, the open-source home automation platform, embeds an MQTT binding.
The Open Geospatial Consortium SensorThings API standard specification has an MQTT extension in the standard as an additional message protocol binding. It was demonstrated in a US Department of Homeland Security IoT Pilot.
Node-RED supports MQTT with TLS nodes as of version 0.14.
Home Assistant, the open-source home automation platform, is MQTT enabled and offers a Mosquitto broker add-on.
ejabberd supports MQTT as of version 19.02.
Eclipse Foundation manages a Sparkplug protocol specification compatible with MQTT. It builds on top of MQTT, adding requirements needed in real-time industrial applications.
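The practical consequence of QoS 1 ("at least once") described above is that a receiver may see duplicate messages: the sender keeps re-sending the PUBLISH until it observes a PUBACK. The toy simulation below makes that visible; the function name and the "lost acknowledgement" counter are invented for illustration and do not model real network traffic.

```python
# Toy simulation of MQTT QoS 1 delivery: the sender re-sends a PUBLISH
# until a PUBACK arrives, so lost acks produce duplicates at the receiver.

def deliver_qos1(payload, drop_first_acks=2):
    delivered = []             # what the receiver actually saw
    acks_to_drop = drop_first_acks
    attempts = 0
    while True:
        attempts += 1
        delivered.append(payload)    # the PUBLISH reaches the receiver
        if acks_to_drop > 0:
            acks_to_drop -= 1        # PUBACK lost in transit; sender retries
            continue
        return attempts, delivered   # PUBACK received: stop resending

attempts, delivered = deliver_qos1("reading=42")
print(attempts)   # 3 — two lost PUBACKs forced two retries
print(delivered)  # ['reading=42', 'reading=42', 'reading=42']
```

This is why QoS 2 adds the second handshake level: it trades extra round trips for the guarantee that the application sees exactly one copy.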
See also

Comparison of MQTT implementations
Advanced Message Queuing Protocol (AMQP)
Streaming Text Oriented Messaging Protocol (STOMP)
Constrained Application Protocol (CoAP)
Extensible Messaging and Presence Protocol (XMPP)
Apache ActiveMQ
Solace PubSub+
RabbitMQ

References

External links

Official website of Technical Committee
MQTT Specifications
Version 5.0, 2019-03-07: PDF edition, HTML edition
Version 3.1.1 Plus Errata 01, 2015-12-10: PDF edition, HTML edition
Version 3.1.1, 2014-10-29: PDF edition, HTML edition
Version 3.1, 2010: PDF edition, HTML edition
MQTT-SN Specifications
Version 1.2, 2013-11-14: PDF edition
32775124
https://en.wikipedia.org/wiki/SciEngines%20GmbH
SciEngines GmbH
SciEngines GmbH is a privately owned company founded in 2007 as a spin-off of the COPACOBANA project of the Universities of Bochum and Kiel, both in Germany. The project intended to create a platform for affordable custom hardware attacks. COPACOBANA is a massively parallel reconfigurable computer. It can be used to perform a so-called brute-force attack to recover DES-encrypted data. It consists of 120 commercially available, reconfigurable integrated circuits (FPGAs). These Xilinx Spartan3-1000 chips run in parallel and form a massively parallel system. Since 2007, SciEngines GmbH has enhanced and developed successors of COPACOBANA. Furthermore, COPACOBANA has become a well-known reference platform for cryptanalysis and custom-hardware-based attacks on symmetric ciphers, asymmetric ciphers and stream ciphers. In 2008, attacks against A5/1, a stream cipher used to encrypt voice streams in GSM, were published as the first known real-world attack utilizing off-the-shelf custom hardware. Also in 2008, the company introduced its RIVYERA S3-5000, dramatically enhancing the performance of the computer by using 128 Spartan-3 5000 FPGAs. Currently SciEngines RIVYERA holds the record in brute-force breaking of DES, utilizing 128 Spartan-3 5000 FPGAs. Current systems provide a unique density of up to 256 Spartan-6 FPGAs per single system, enabling scientific use beyond the field of cryptanalysis, such as bioinformatics.

2006 – original developers of COPACOBANA form the company
2007 – introduction of the COPACOBANA (Copacobana S3-1000) as a [COTS] system
2007 – first demonstration of COPACOBANA 5000
2008 – introduction of the RIVYERA S3-5000, the direct successor of COPACOBANA 5000 and COPACOBANA. The RIVYERA architecture introduced a new high-performance optimized bus system and a fully API-encapsulated communication framework.
2008 – demonstration of the COPACOBANA V4-SX35, a 128 Virtex-4 SX35 FPGA cluster (COPACOBANA shared bus architecture)
2008 – introduction of the RIVYERA V4-SX35, a 128 Virtex-4 SX35 FPGA cluster (RIVYERA HPC architecture)
2009 – introduction of the RIVYERA S6-LX150
2011 – introduction of 256 user-usable FPGAs per RIVYERA S6-LX150 computer

Providing a standard off-the-shelf Intel CPU and mainboard integrated into the FPGA computer, RIVYERA systems allow most standard code to execute without modifications. SciEngines aims for programmers to only have to focus on porting the most time-consuming 5% of their code to the FPGA. Therefore, the company bundles an Eclipse-like development environment which allows code implementation in hardware description languages, e.g. VHDL and Verilog, as well as in C-based languages. An application programming interface for C, C++, Java and Fortran allows scientists and programmers to adapt their code to benefit from an application-specific hardware architecture.

References

CLC bio and SciEngines collaborate on 188x acceleration of BLAST
www.sciengines.com (Official site)
www.copacobana.org

Further reading

Lars Wienbrandt, Bioinformatics Applications on the FPGA-based High-Performance Computer RIVYERA, in "High Performance Computing Using FPGAs" edited by Wim Vanderbauwhede, Khaled Benkrid, Springer, 2013.
Tim Güneysu, Timo Kasper, Martin Novotný, Christof Paar, Lars Wienbrandt, and Ralf Zimmermann, High-Performance Cryptanalysis on RIVYERA and COPACOBANA Computing Systems, in "High Performance Computing Using FPGAs" edited by Wim Vanderbauwhede, Khaled Benkrid, Springer, 2013.
Ayman Abbas, Claas Anders Rathje, Lars Wienbrandt, and Manfred Schimmler, Dictionary Attack on TrueCrypt with RIVYERA S3-5000, 2012 IEEE 18th International Conference on Parallel and Distributed Systems (ICPADS), Dec 2012, pp. 93–100.
Florian Schatz, Lars Wienbrandt, and Manfred Schimmler, Probability model for boundaries of short-read sequencing, 2012 International Conference on Advances in Computing and Communications (ICACC), Aug 2012, pp. 223–228. (best paper award)
Christoph Starke, Vasco Grossmann, Lars Wienbrandt, and Manfred Schimmler, An FPGA implementation of an Investment Strategy Processor, Procedia Computer Science, vol. 9, 2012, pp. 1880–1889.
Lars Wienbrandt, Daniel Siebert, and Manfred Schimmler, Improvement of BLASTp on the FPGA-Based High-Performance Computer RIVYERA, Lecture Notes in Computer Science, vol. 7292, 2012, pp. 275–286.
Christoph Starke, Vasco Grossmann, Lars Wienbrandt, Sven Koschnicke, John Carstens, and Manfred Schimmler, Optimizing Investment Strategies with the Reconfigurable Hardware Platform RIVYERA, International Journal of Reconfigurable Computing, vol. 2012, 10 pages.
Lars Wienbrandt, Stefan Baumgart, Jost Bissel, Florian Schatz, and Manfred Schimmler, Massively parallel FPGA-based implementation of BLASTp with the two-hit method, Procedia Computer Science, vol. 4, 2011, pp. 1967–1976.
Lars Wienbrandt, Hardware implementation and massive parallelization of BLAST, invited talk: Workshop on Theoretical Biology, Max-Planck-Institute for Evolutionary Biology, Plön 2011.
Lars Wienbrandt, and Manfred Schimmler, Collecting Statistical Information in DNA Sequences for the Detection of Special Motifs, Proceedings of BIOCOMP2010, 2010, pp. 274–278.
Manfred Schimmler, Lars Wienbrandt, Tim Güneysu, and Jost Bissel, COPACOBANA: A Massively Parallel FPGA-Based Computer Architecture, in "Bioinformatics: High Performance Parallel Computer Architectures" edited by Bertil Schmidt, CRC Press, 2010.
Lars Wienbrandt, Stefan Baumgart, Jost Bissel, Carol May Yen Yeo, and Manfred Schimmler, Using the reconfigurable massively parallel architecture COPACOBANA 5000 for applications in bioinformatics, Procedia Computer Science, vol. 1 (1), 2010, pp. 1027–1034.
Lars Wienbrandt, Massiv parallelisierte DNA-Motivsuche auf COPACOBANA - Hardware-Implementierung in VHDL und Effizienzvergleich mit einem Standard-PC, Diplomarbeit, December 2008.
Jan Schröder, Lars Wienbrandt, Gerd Pfeiffer, and Manfred Schimmler, Massively Parallelized DNA Motif Search on the Reconfigurable Hardware Platform COPACOBANA, Proceedings of the Third IAPR International Conference on Pattern Recognition in Bioinformatics (PRIB2008), 2008, pp. 436–447.
S. Baumgart, COPACOBANA RIVYERA: feasible Custom Hardware Attacks (oder der Angriff auf moderne Verschlüsselungsverfahren mittels roher Gewalt), esproject conference (23.–24.11.2010, Berlin)
S. Baumgart, Emerging Architectures to Massively Reconfigurable Computing Platforms and their Applications, JCRA 2010 - Reconfigurable Computing and Applications Conference (8th–10th Sep., Valencia, Spain)
G. Pfeiffer, S. Baumgart, J. Schröder, M. Schimmler, A Massively Parallel Architecture for Bioinformatics, ICCS 2009 - International Conference on Computational Science (9th International Conference, Baton Rouge, LA, USA, May 25–27, 2009)
S. Baumgart, Using Emerging Parallel Architectures for Computational Science, ICCS 2009 - International Conference on Computational Science (9th International Conference, Baton Rouge, LA, USA, May 25–27, 2009)
32816348
https://en.wikipedia.org/wiki/Freedom%20Flag
Freedom Flag
Freedom flag may refer to:
The Freedom of Speech flag documenting the AACS encryption key controversy
Any number of national or institution flags, including:
The Flag of the United States
The Flag of South Vietnam
The French Tricolore
The LGBT movement Rainbow flag
The Four Freedoms Flag of the United Nations
32861535
https://en.wikipedia.org/wiki/Dept.%20of%20Computer%20Science%2C%20University%20of%20Delhi
Dept. of Computer Science, University of Delhi
The Department of Computer Science, University of Delhi is a department of the University of Delhi under the Faculty of Mathematical Sciences, set up in 1981.

Courses

The department started the three-year Master of Computer Applications (MCA) program in 1982, which was among the first such programs in India. The department started the M.Sc. Computer Science course in 2004. Besides these, the department has research interests in Computer Science and offers a Doctor of Philosophy (Ph.D.) program. The university conducts a postgraduate Diploma in Computer Applications (PGDCA) program through its constituent colleges. Emphasis is laid not only on theoretical concepts but also on practical experience and industry interaction.

Classroom projects

MCA

Apart from classroom teaching, students take up case studies, presentations and small projects. The following are some projects/assignments taken up by the students:
Implementation of a Unix shell
Implementation of a chat server
Simulation of machine language code and implementation of an assembler
Simulation of the basic file system on Linux
Simulation of sliding window protocols: Go-Back-N protocol, Selective Repeat protocol
Simulation of a two-pass assembler
Projects designed, documented and coded using the SDLC: share tracker system; computerized health care system; websites on tourism, online FIR, online book store, online examination, social networking, online shipping management system, digital library system
Research and implementation of cryptographic algorithms:
Design and implementation of a new approach for searching in encrypted data using a Bloom filter
Analysis and implementation of security algorithms in cloud computing
Malware and keylogger design
Software and hardware implementation of a smart home system
Misuse, detection and prevention of advanced spamming techniques
Design and security analysis of chaotic encryption
Analysis of risks, techniques, and corporate usage of Web 2.0 technologies
Implementation of homomorphic encryption algorithms
Regional language encryption and translation
Implementation of elliptic curve cryptography
Design and implementation of self-synchronizing stream ciphers

M.Sc. Computer Science

As part of the curriculum, students give presentations, group projects and programming assignments. The following are some of the projects/assignments taken up by the students:
Implementation of robot task assignment given resources, using MATLAB
Jade programming for agent communication
Implementation of the DES encryption and decryption algorithm
Application of a genetic algorithm to the 8-queens problem
Implementation of the K-means, FP-Tree, BIRCH and DBSCAN algorithms using C++
Generating all strong association rules from a set of given frequent item sets of transactions
Implementation of a DBMS
Data preprocessing and KDD (Knowledge Discovery and Data mining) using WEKA and C4.5
Implementation of clustering techniques on the output of the fuzzy C-means algorithm as initial input, using MATLAB
Simulation of a lexical analyzer and parser using C

Infrastructure

The students of the department are affiliated with two libraries.
The Departmental Library is a reference library with over four thousand titles in the fields of Computer Science and IT and in related areas such as Electronics and Mathematics.
The Central Science Library is one of the largest science libraries in India. It was established in 1981, and has 220,000 volumes of books and periodicals. The website of the CSL provides electronic subscriptions to 27,000 e-journals, including IEEE, ACM and Springer journals and proceedings.

Internet Connection

All the labs, offices and faculty rooms of the Department are connected to the internet through the university intranet. Internet connectivity is provided using 4 switches through the university intranet. A 24-port switch is used in the LAN, providing internet to all systems in the laboratory, classrooms, seminar room and committee room.
Delhi University Computer Centre

Notable alumni

Kiran Sethi – VP, Deutsche Bank, USA
Pradeep Mathur – VP, Capgemini, UK
Gulshan Kumar – Director, Alcatel-Lucent, India
Ranjan Dhar – Director, Silicon Graphics, India
Manish Madan – VP, Perot Systems, TSI, India
Sachin Wadhwa – Head Operations, Mastech InfoTrellis Inc, USA
Kumaran Sasikanthan – Country Head, AllSight Software, India

References

External links

Official website
Admissions Information
32890549
https://en.wikipedia.org/wiki/Solid%20PDF%20Creator
Solid PDF Creator
Solid PDF Creator is proprietary document processing software which converts virtually any Windows-based document into a PDF. Suitable for home and office use, the program appears as a printer option in the Print menu of any print-capable Windows application. The same technology used in the software's Solid Framework SDK is licensed by Adobe for Acrobat X.

History

Solid Documents, the makers of Solid PDF Creator, launched the product in 2006 and have released several version updates since then, including 2.0 in 2007. A later enhancement, new in version 7, allows for the conversion of Windows-based documents into PDF/A documents in compliance with the ISO 19005-1 standard for long-term preservation and archival purposes. Version 9.0, released in June 2014, brought conversion and table reconstruction improvements, less XML output, and feature integration.

Features

Solid PDF Creator supports conversion from the following formats into PDF:
Microsoft Word (.docx, .doc)
Rich Text Format (.rtf)
Microsoft Excel (.xlsx, .xml)
Microsoft PowerPoint (.pptx, .html)
Plain text (.txt)

Solid PDF Creator provides a variety of file conversion options including password protection, encryption, permission definition, ISO 19005-1 archiving standards, and file compression capabilities. Building upon the features offered in Solid PDF Creator, Solid PDF Creator Plus, released in 2008, allows users to manipulate watermarks, rearrange pages, extract pages, and drag and drop content.

See also

List of PDF software

References

External links

Solid PDF Creator site
PC Tips 3000 Review
Flex Developer's Journal
32915226
https://en.wikipedia.org/wiki/List%20of%20Israeli%20Ashkenazi%20Jews
List of Israeli Ashkenazi Jews
This is a list of notable Israeli Ashkenazi Jews, including both original immigrants who obtained Israeli citizenship and their Israeli descendants. Although the term "Ashkenazi Jews" was traditionally used as an all-encompassing term for the Jews descended from the Jewish communities of Europe, the melting-pot effect of Israeli society has gradually made the term vaguer, as many Israeli descendants of Ashkenazi Jewish immigrants have adopted the characteristics of Israeli culture and as more descendants intermarry with descendants of other Jewish communities. The list is ordered by category of human endeavor. Persons with significant contributions in two of these are listed in both, to facilitate easy lookup.

Politicians
Shulamit Aloni – former minister
Ehud Barak – prime minister (1999–2001)
Menachem Begin – prime minister (1977–83); Nobel Peace Prize (1978)
Yossi Beilin – leader of the Meretz-Yachad party and peace negotiator
David Ben-Gurion – first Prime Minister of Israel (1948–54, 1955–63)
Yitzhak Ben-Zvi – first elected/second President of Israel (1952–63)
Gilad Erdan
Levi Eshkol – prime minister (1963–69)
Miriam Feirberg
Yael German
Teddy Kollek – former mayor of Jerusalem
Yosef Lapid – former leader of the Shinui party
Golda Meir – prime minister (1969–74)
Benjamin Netanyahu – prime minister (1996–99, 2009–2021); was minister of finance; Likud party chairman
Ehud Olmert – prime minister (2006–09); former mayor of Jerusalem
Shimon Peres – President of Israel (2007–); prime minister (1984–86, 1995–96); Nobel Peace Prize (1994)
Yitzhak Rabin – prime minister (1974–77, 1992–95); Nobel Peace Prize (1994) (assassinated November 1995)
Yitzhak Shamir – prime minister (1983–84, 1986–92)
Moshe Sharett – prime minister (1954–55)
Ariel Sharon – prime minister (2001–06)
Chaim Weizmann – first President of Israel (1949–52)
Rehavam Zeevi – founder of the Moledet party (assassinated October 2001)
Shelly Yachimovich – former leader of the opposition

Military
Yigal Allon – politician, a commander of the Palmach, and a general in the IDF
Haim Bar-Lev – former Chief of General Staff of the Israel Defense Forces
Moshe Dayan – military leader
Giora Epstein – combat pilot, modern-day "ace of aces"
Uziel Gal – designer of the Uzi submachine gun
Benny Gantz – former Chief of General Staff of the Israel Defense Forces
Wolfgang Lotz – spy
Tzvi Malkhin – Mossad agent, captured Adolf Eichmann
Yonatan Netanyahu – Sayeret Matkal commando, leader of Operation Entebbe
Yitzhak Rabin – military leader and fifth Prime Minister of Israel
Ilan Ramon – astronaut on Columbia flight STS-107
Gilad Shalit – kidnapped soldier held in Gaza
Yael Rom – first female to graduate from a full military flight course in the Western world; first woman to graduate from the Israeli Air Force

Religious figures

Religious rabbis
David Hartman
Avraham Yitzchak Kook (1865–1935) – pre-state Ashkenazic Chief Rabbi of the Land of Israel
Israel Meir Lau (1937–) – Ashkenazic Chief Rabbi of Israel (1993–2003), Chief Rabbi of Netanya (1978–88)
Aharon Lichtenstein
Yona Metzger – Ashkenazic Chief Rabbi of Israel
Shlomo Riskin – Ashkenazic Chief Rabbi of Efrat

Haredi rabbis
Yaakov Aryeh Alter – Gerrer Rebbe
Shlomo Zalman Auerbach
Yaakov Blau
Yisroel Moshe Dushinsky – second Dushinsky rebbe and Chief Rabbi of Jerusalem (Edah HaChareidis)
Yosef Tzvi Dushinsky – first Dushinsky rebbe and Chief Rabbi of Jerusalem (Edah HaChareidis)
Yosef Tzvi Dushinsky – third Dushinsky rebbe
Yosef Sholom Eliashiv
Issamar Ginzberg – Nadvorna-Kechnia Rebbe
Chaim Kanievsky
Avraham Yeshayeh Karelitz (1878–1953) – Chazon Ish
Nissim Karelitz – Head Justice of Rabbinical Court of Bnei Brak
Meir Kessler – Chief Rabbi of Modi'in Illit
Zundel Kroizer (1924–2014) – author of Ohr Hachamah
Dov Landau – rosh yeshiva of the Slabodka yeshiva of Bnei Brak
Yissachar Dov Rokeach – Belzer rebbe
Yitzchok Scheiner (born 1922) – rosh yeshiva of the Kamenitz yeshiva of Jerusalem
Elazar Menachem Shach (1899–2001) – Rav Shach
Moshe Shmuel Shapira – rosh yeshiva of Beer Yaakov
Dovid Shmidel – Chairman of Asra Kadisha
Yosef Chaim Sonnenfeld – Chief Rabbi of Jerusalem (Edah HaChareidis)
Yitzchok Tuvia Weiss – Chief Rabbi of Jerusalem (Edah HaChareidis)
Amram Zaks (1926–2012) – rosh yeshiva of the Slabodka yeshiva of Bnei Brak
Uri Zohar – former film director, actor, and comedian who left the entertainment world to become a rabbi

Activists
Uri Avnery – peace activist, Gush Shalom
Yael Dayan – writer, politician, activist
Michael Dorfman – Russian-Israeli essayist and human rights activist
Uzi Even – gay rights activist
Yehuda Glick – Israeli activist and rabbi who campaigned for expanding Jewish access to the Temple Mount
Daphni Leef – Israeli activist; in 2011 sparked one of the largest waves of mass protest in Israel's history
Rudy Rochman – Jewish, Zionist activist
Uri Savir – peace negotiator, Peres Center for Peace
Israel Shahak – political activist
Natan Sharansky – Soviet-era human rights activist

Cultural and entertainment figures

Film, TV, and stage

Popular musicians

Classical musicians

Writers

Artists
Yaacov Agam – kinetic artist
Yitzhak Danziger – sculptor
Uri Fink – comic book artist and writer
Dudu Geva – artist and comic-strip illustrator
Nachum Gutman – painter
Israel Hershberg – realist painter
Shimshon Holzman – painter

Models
Yael Bar Zohar – model
Michaela Bercu – model
Nina Brosh – model
Anat Elimelech – model and actress; murdered in 1997 by her partner
Gal Gadot – model and actress
Esti Ginzborg – model
Heli Goldenberg – former model and actress
Yael Goldman – model
Galit Gutmann – model
Adi Himmelbleu – model
Mor Katzir
Rina Mor – model
Hilla Nachshon – model
Bar Refaeli – model
Shiraz Tal – model
Pnina Rosenblum – former model

Academic figures

Physics and chemistry
Yakir Aharonov – physicist, Aharonov–Bohm effect and winner of the 1998 Wolf Prize in Physics
Jacob Bekenstein – black hole thermodynamics
David Deutsch – quantum computing pioneer; 1998 Paul Dirac Prize winner
Richard Feynman – path integral formulation, quantum theory, superfluidity; winner of the 1965 Nobel Prize in Physics
Josef Imry – physicist
Joshua Jortner – molecular energy; 1988 winner of the Wolf Prize in Chemistry
Aaron Katzir – physical chemistry
Ephraim Katzir – immobilized enzymes; Japan Prize (1985) and the fourth President of Israel
Rafi Levine – molecular energy; 1988 winner of the Wolf Prize in Chemistry
Zvi Lipkin – physicist
Mordehai Milgrom – modified Newtonian dynamics (MOND)
Yuval Ne'eman – the "Eightfold Way"
Asher Peres – quantum theory
Alexander Pines – nuclear magnetic resonance; Wolf Prize in Chemistry laureate (1991)
Giulio Racah – spectroscopy
Nathan Rosen – EPR paradox
Nathan Seiberg – string theory
Dan Shechtman – chemist; winner of the 1999 Wolf Prize in Physics and winner of the 2011 Nobel Prize in Chemistry for "the discovery of quasicrystals"
Igal Talmi – particle physics
Reshef Tenne – discovered inorganic fullerenes and inorganic nanotubes
Arieh Warshel – chemist, winner of the 2013 Nobel Prize in Chemistry; contributed to "the development of multiscale models for complex chemical systems"
Chaim Weizmann – acetone production

Biology and medicine
Ruth Arnon – developed Copaxone; Wolf Prize in Medicine (1998)
Aaron Ciechanover – ubiquitin system; Lasker Award (2000), Nobel Prize in Chemistry (2004)
Moshe Feldenkrais – invented the Feldenkrais method used in movement therapy
Lior Gepstein – received the American College of Cardiology's Zipes Award for his development of heart cells and pacemakers from stem cells
Eyal Gur – selected by Newsweek as one of the world's top microsurgeons
Avram Hershko – ubiquitin system; Lasker Award (2000), Nobel Prize in Chemistry (2004)
Gavriel Iddan – inventor of capsule endoscopy
Benjamin Kahn – marine biologist, defender of the Red Sea reef
Yona Kosashvili – orthopedic surgeon and chess grandmaster
Andy Lehrer – entomologist
Shulamit Levenberg – inventor of a muscle tissue which is not rejected by the body after transplant; selected by Scientific American as one of the 50 leading scientists in the world
Alexander Levitzki – cancer research; Wolf Prize in Medicine (2005)
Gideon Mer – malaria control
Saul Merin – ophthalmologist, author of Inherited Eye Diseases
Leo Sachs – blood cell research; Wolf Prize in Medicine (1980)
Michael Sela – developed Copaxone; Wolf Prize in Medicine (1998)
Joel Sussman – 3D structure of acetylcholinesterase; Elkeles Prize for Research in Medicine (2005)
Meir Wilchek – affinity chromatography; Wolf Prize in Medicine (1987)
Ada Yonath – structure of the ribosome; 2009 winner of the Nobel Prize in Chemistry
Amotz Zahavi – proposed the Handicap Principle

Social sciences
Yehuda Bauer – historian
Daniel Elazar – political scientist
Haim Ginott – psychologist, child psychology
Eliyahu Goldratt – business consultant, Theory of Constraints
Louis Guttman – sociologist
Elhanan Helpman – economist, international trade
Daniel Kahneman – behavioural scientist, prospect theory; Nobel Prize in Economics (2002)
Smadar Lavie – anthropologist
Amihai Mazar – archaeologist
Benjamin Mazar – archaeologist
Eilat Mazar – archaeologist
Benny Morris – historian, one of the New Historians
Erich Neumann – analytical psychologist, development, consciousness
Nurit Peled-Elhanan – educator
Renee Rabinowitz – psychologist and lawyer
Anat Rafaeli – organisational behaviour researcher
Ariel Rubinstein – economist
Amos Tversky – behavioral scientist, prospect theory with Daniel Kahneman
Yigael Yadin – archaeologist

Computing and mathematics
Ron Aharoni – mathematician, working in finite and infinite combinatorics
Noga Alon – mathematician, computer scientist, winner of the Gödel Prize (2005)
Shimshon Amitsur – mathematician, ring theory, abstract algebra
Robert Aumann – mathematical game theory; Nobel Prize in Economics (2005)
Amir Ban – computer programmer; one of the main programmers of the Junior chess program
Moshe Bar – computer programmer, creator and main developer of openMosix
Yehoshua Bar-Hillel – philosopher, mathematician, and linguist, best known for his pioneering work in machine translation and formal linguistics
Joseph Bernstein – mathematician; works in algebraic geometry, representation theory, and number theory
Eli Biham – cryptographer and cryptanalyst, specializing in differential cryptanalysis
Shay Bushinsky – computer programmer; one of the main programmers of the Junior chess program
Aryeh Dvoretzky – mathematician, eighth president of the Weizmann Institute of Science
Uriel Feige – computer scientist, winner of the Gödel Prize (2001)
Abraham Fraenkel – mathematician, known for his contributions to axiomatic set theory and the ZF set theory
Hillel Furstenberg – mathematician; Wolf Prize in Mathematics (2006/7)
Shafi Goldwasser – computer scientist, winner of the Gödel Prize (1993 and 2001)
David Harel – computer scientist; Israel Prize (2004)
Abraham Lempel – LZW compression; IEEE Richard W. Hamming Medal (2007)
Elon Lindenstrauss – mathematician; known for work in dynamics, particularly ergodic theory and its applications in number theory; Fields Medal recipient (2010)
Joram Lindenstrauss – mathematician, known for the Johnson–Lindenstrauss lemma
Michel Loève – probabilist
Joel Moses – MIT provost and writer of Macsyma
Yoram Moses – computer scientist, winner of the 1997 Gödel Prize in theoretical computer science and the 2009 Dijkstra Prize in distributed computing
Judea Pearl – computer scientist and philosopher; known for championing the probabilistic approach to artificial intelligence and the development of Bayesian networks (see the article on belief propagation); Turing Award winner (2011)
Ilya Piatetski-Shapiro – mathematician; representation theory; Wolf Prize in Mathematics winner (1990)
Amir Pnueli – temporal logic; Turing Award (1996)
Michael O. Rabin – nondeterminism, primality testing; Turing Award (1976)
Sheizaf Rafaeli – computer scientist, scholar of computer-mediated communication
Shmuel Safra – computer scientist, winner of the Gödel Prize (2001)
Adi Shamir – computer scientist; RSA encryption, differential cryptanalysis; Turing Award winner (2002)
Nir Shavit – computer scientist, winner of the Gödel Prize (2004)
Saharon Shelah – mathematician, well known for logic; Wolf Prize in Mathematics winner (2001)
Ehud Shapiro – computer scientist; Concurrent Prolog, DNA computing pioneer
Moshe Y. Vardi – computer scientist; Gödel Prize winner (2000)
Avi Wigderson – mathematician, known for randomized algorithms; Nevanlinna Prize winner (1994)
Doron Zeilberger – mathematician, known for his contributions to combinatorics
Jacob Ziv – LZW compression; IEEE Richard W. Hamming Medal (1995)

Engineering
David Faiman – solar engineer and director of the National Solar Energy Center
Liviu Librescu – Professor of Engineering Science and Mechanics at Virginia Tech; killed in the Virginia Tech massacre
Moshe Zakai – electrical engineering
Jacob Ziv – electrical engineering

Philosophy
Martin Buber – philosopher
Yeshayahu Leibowitz – philosopher
Avishai Margalit – philosopher
Joseph Raz – philosopher
Gershom Scholem (1897–1982) – philosopher, historian

Humanities
Shmuel Ben-Artzi – Bible scholar; father of psychologist Sara Netanyahu and father-in-law of Israeli Prime Minister Benjamin Netanyahu
Noam Chomsky – linguist
Aharon Dolgopolsky – linguist, Nostratic
Moshe Goshen-Gottstein – Bible scholar
Michael Oren – historian, educator, writer, and Israeli ambassador to the US
Hans Jakob Polotsky – linguist
Chaim Rabin – Bible scholar
Alice Shalvi – English literature, educator
Gershon Shaked – Hebrew literature
Shemaryahu Talmon – Bible scholar
Emanuel Tov – Bible scholar

Architecture
Richard Kauffmann – architect
Neri Oxman – architect

Entrepreneurs and businesspeople

Technology
Amnon Amir – co-founder of Mirabilis (developer of ICQ)
Moshe Bar – founder of XenSource, Qumranet
Naftali Bennett – founder of Cyota, current Member of the Knesset and leader of The Jewish Home political party
Safra Catz – president of Oracle
Yair Goldfinger – co-founder of Mirabilis (developer of ICQ)
Yossi Gross – recipient of almost 600 patents; founder of 27 medical technology companies in Israel; Chief Technology Officer of Rainbow Medical
Andi Gutmans – co-founder of Zend Technologies (developer of PHP)
Daniel M. Lewin – founder of Akamai Technologies
Bob Rosenschein – founder of Kivun Computers, Accent Software, GuruNet, Answers.com, Curiyo (Israeli-based)
Gil Schwed – founder of Check Point
Zeev Suraski – co-founder of Zend Technologies (developer of PHP)
Arik and Yossi Vardi – co-founders of Mirabilis (developer of ICQ)
Sefi Vigiser – co-founder of Mirabilis (developer of ICQ)

Other industries
Ted, Micky and Shari Arison – founder and owners of Carnival Corporation
Amir Gal-Or
Jamie Geller – celebrity chef and founder of the Kosher Media Network
Eival Gilady
Eli Hurvitz – head of Teva Pharmaceuticals
Mordecai Meirowitz – inventor of the Mastermind board game
Arnon Milchan – Hollywood film producer and founder of Regency Enterprises
Sammy Ofer – shipping magnate
Yuli Ofer – real estate mogul
Guy Oseary – talent agent, businessman, investor, and music manager; founder of Maverick Records; personal music manager of American entertainer Madonna
Stef Wertheimer – manufacturing industrialist
Josh Reinstein – director of the Knesset Christian Allies Caucus
Eyal Ofer – real estate and shipping magnate
Moris Kahn – billionaire, entrepreneur

Sports

Association football
Eyal Berkovic – midfielder (national team), Maccabi Haifa, Southampton, West Ham United, Celtic, Manchester City, Portsmouth
Ronnie Rosenthal – left winger/striker (national team), Maccabi Haifa, Liverpool, Tottenham, Watford
Giora Spiegel – midfielder (national team), Maccabi Tel Aviv
Mordechai Spiegler – Soviet Union/Israel – striker (Israel national team), manager
Nahum Stelmach – striker (national team)
Yochanan Vollach – defender (national team), Maccabi Haifa, Hapoel Haifa, HKFC; current president of Maccabi Haifa

Basketball
Miki Berkovich – Maccabi Tel Aviv
David Blu (formerly "Bluthenthal") – US and Israel, Euroleague 6' 7" forward (Maccabi Tel Aviv)
Tal Brody – US and Israel, Euroleague 6' 2" shooting guard, Maccabi Tel Aviv
Tal Burstein – Maccabi Tel Aviv
Tanhum Cohen-Mintz – Latvian-born Israeli, 6' 8" center; two-time Euroleague All-Star
Shay Doron – Israel and US, WNBA 5' 9" guard, University of Maryland (New York Liberty)
Tamir Goodman – US and Israel, 6' 3" shooting guard
Yotam Halperin – 6' 5" guard, drafted in the 2006 NBA draft by the Seattle SuperSonics (Olympiacos)
Gal Mekel – former point guard for the NBA's Dallas Mavericks
Amit Tamir – 6' 10" center/forward, University of California, PAOK Thessaloniki (Hapoel Jerusalem)

Boxing
Hagar Finer – WIBF bantamweight champion
Yuri Foreman – Belarusian-born Israeli US middleweight and World Boxing Association super welterweight champion
Roman Greenberg – International Boxing Organization's Intercontinental heavyweight champion; "The Lion from Zion"

Fencing
Boaz Ellis – foil, five-time Israeli champion
Lydia Hatoel-Zuckerman – foil, six-time Israeli champion
Andre Spitzer – fencing coach killed by terrorists at the 1972 Munich Olympics

Figure skating
Alexei Beletski – Ukrainian-born Israeli ice dancer, Olympian
Galit Chait – ice dancer; World Championship bronze 2002
Natalia Gudina – Ukrainian-born Israeli figure skater, Olympian
Tamar Katz – US-born Israeli figure skater
Lionel Rumi – ice dancer
Sergei Sakhnovsky – ice dancer, World Championship bronze 2002
Michael Shmerkin – Soviet-born Israeli figure skater
Alexandra Zaretski – Belarusian-born Israeli ice dancer, Olympian
Roman Zaretski – Belarusian-born Israeli ice dancer, Olympian

Sailing
Zefania Carmel – yachtsman, world champion (420 class)
Gal Fridman – windsurfer (Olympic gold: 2004 (Israel's first gold medalist), bronze: 1996 (Mistral class); world champion: 2002)
Lydia Lazarov – yachting world champion (420 class)

Swimming
Vadim Alexeev – Kazakhstan-born Israeli swimmer, breaststroke
Guy Barnea – swimmer who participated in the 2008 Summer Olympics
Adi Bichman – 400-m and 800-m freestyle, 400-m medley
Yoav Bruck – 50-m freestyle and 100-m freestyle
Eran Groumi – 100- and 200-m backstroke, 100-m butterfly
Judith Haspel (born "Judith Deutsch") – Austrian-born Israeli; held every Austrian women's middle and long distance freestyle record in 1935; refused to represent Austria at the 1936 Summer Olympics, protesting Hitler, stating, "I refuse to enter a contest in a land which so shamefully persecutes my people"
Dan Kutler – US-born Israeli; 100-m butterfly, 4×100-m medley relay
Keren Leibovitch – Paralympic swimmer, four-time gold medal-winner; 100-m backstroke, 50- and 100-m freestyle, 200-m individual medley
Tal Stricker – 100- and 200-m breaststroke, 4×100-m medley relay
Eithan Urbach – backstroke swimmer, European championship silver and bronze; 100-m backstroke

Tennis
Noam Behr
Ilana Berger
Gilad Bloom
Jonathan Erlich – 6 doubles titles, 6 doubles finals; won the 2008 Australian Open Men's Doubles (w/Andy Ram); highest world doubles ranking No. 5
Shlomo Glickstein – highest world singles ranking No. 22, highest world doubles ranking No. 28
Julia Glushko
Amos Mansdorf – highest world singles ranking No. 18
Shahar Pe'er – three WTA career titles; highest world singles ranking No. 11, highest world doubles ranking No. 21

Other
Alex Averbukh – pole vaulter (European champion: 2002, 2006)
Boris Gelfand – chess grandmaster; ~2700 peak Elo rating
Michael Kolganov – Soviet-born Israeli, sprint canoer/kayak paddler, world champion, Olympic bronze 2000 (K-1 500-meter)
Marina Kravchenko – Ukrainian-born Israeli table tennis player, Soviet and Israeli national teams
Sofia Polgar – Hungarian-born Israeli chess grandmaster; sister of chess grandmasters Susan Polgar and Judith Polgar
Ilya Smirin – chess grandmaster; ~2700 peak Elo rating
Emil Sutovsky – chess grandmaster; ~2700 peak Elo rating

Criminals
Hanan Goldblatt – actor, comedian and singer; convicted in 2008 of rape and other sex offenses against women in his acting class
Baruch Goldstein – massacred 29 Arabs in the Cave of the Patriarchs in 1994
Avraham Hirschson – politician, former Israeli Minister of Finance; convicted of stealing close to 2 million NIS from the National Workers Labor Federation while he was its chairman
Zeev Rosenstein – mob boss and drug trafficker
Gonen Segev – former Israeli member of Knesset and government minister; convicted of attempted drug smuggling, forgery and electronic commerce fraud
Ehud Tenenbaum – computer hacker known as "The Analyzer"; became famous in 1998 when he was caught by the FBI after hacking into the computers of NASA, the Pentagon, the Knesset and the US Army, and installing trojan horse software on some of those computers
Dudu Topaz – TV personality, comedian, actor, screenwriter, playwright, author and radio and television host; committed suicide in August 2009 after being charged with inciting violence against national media figures

See also
Israelis
List of notable Israelis
List of Ashkenazi Jews in central and eastern Europe
List of Ashkenazi Jews in northern Europe
List of Sephardic Jews in southern and western Europe
List of Sephardic Jews in the Balkans
32918034
https://en.wikipedia.org/wiki/Derek%20Mills-Roberts
Derek Mills-Roberts
Brigadier Derek Mills-Roberts, CBE, DSO and bar, MC (23 November 1908 – 1 October 1980) was a British commando who fought with the 1st Special Service Brigade during World War II. In a quirk of military history, he became the only Allied soldier to strike a German field marshal with the latter's own staff of office, when Mills-Roberts beat Erhard Milch over the head with the just-surrendered marshal's baton.

Early life
Derek Mills-Roberts was born on 23 November 1908 in England. During the 1930s, he trained as a lawyer at Liverpool College and the University of Oxford. On 3 October 1936, he was commissioned into the Irish Guards Supplementary Reserve of Officers as a second lieutenant, having been an officer cadet of the University of Oxford contingent of the Officer Training Corps. It was at Oxford that he met Lord Lovat. The two actually got off to a bad start, with a rivalry that involved a heated argument and an exchange of blows, but from then on they became close friends. After graduating from Oxford, Mills-Roberts worked for his father's law firm.

World War II
Mills-Roberts began his military service in No. 4 Commando. His friend Lord Lovat was given command of the unit, while Mills-Roberts served as second-in-command. On 3 March 1941, No. 4 Commando, with Mills-Roberts, launched a raid on the German-occupied Lofoten Islands in Norway. In the successful raid, the commandos destroyed a significant number of fish-oil factories, petrol dumps and 11 ships. They also seized encryption equipment and codebooks. In addition to the destruction of materials, the commandos captured 216 German troops, and 315 Norwegians chose to accompany the commandos back to Britain. In August 1942, Mills-Roberts was involved in the disastrous Dieppe Raid.
The raid, a small-scale invasion mounted by Canadian infantry and British commandos against Adolf Hitler's Atlantic Wall, was a complete failure, and the units involved suffered very heavy losses. Lovat's and Mills-Roberts's role in the raid was to secure the opposing flanks of the landing area and to destroy coastal batteries. By October 1942, he was a lieutenant (temporary captain) (acting major). In late 1942, Mills-Roberts was promoted to lieutenant-colonel and given command of No. 6 Commando, then stationed in North Africa. During the Normandy landings in 1944 the unit captured the port of Ouistreham and linked up with the 6th Airborne Division on the eastern flank of Sword Beach. Later in the war, among other actions, he took part in the liberation of the Bergen-Belsen concentration camp. When Luftwaffe Field Marshal Erhard Milch was captured and surrendered his command baton to Mills-Roberts, the latter vented his anger over the atrocities he had seen at Bergen-Belsen by marching Milch around the camp and demanding to know his thoughts on the terrible sights witnessed. Milch, who spoke English, replied along the lines of "these people are not human beings in the same way as you and I". This infuriated Mills-Roberts, who took the field marshal's baton from under Milch's arm and broke it over his head. Mills-Roberts went to Field Marshal Bernard Montgomery the following day to apologise for losing his temper with a senior German officer; Montgomery put his hands over his head in mock protection, jokingly saying "I hear you've got a thing about field marshals", and nothing more was said. The broken pieces were retrieved by his batman, and the remains were given to Mills-Roberts's wife, Jill, who had the baton restored at Swaine Adeney Brigg in London, although the replacement shaft was slightly longer than the original. In later years, Jill sold the baton at auction.
Before the auction, an injunction was put on the sale by the Milch family, who contested ownership, saying that the baton had been "stolen" from Milch. A local magistrate in the UK decided that the baton was legitimate war booty and the sale continued; eventually the baton went to an American collector in Florida. By June 1945, he was a brigadier (temporary).

Honours and decorations
Mills-Roberts was awarded the Military Cross on 2 October 1942 "in recognition of gallant and distinguished services in the combined attack on Dieppe". He was awarded the Distinguished Service Order (DSO) on 22 April 1943 "in recognition of gallant and distinguished services in North Africa". He was awarded a medal bar to his Distinguished Service Order (DSO and bar) on 21 June 1945 "in recognition of gallant and distinguished services in North-West Europe". In the 1950 New Year Honours, he was appointed Commander of the Order of the British Empire (CBE).

Commander of the Order of the British Empire
Distinguished Service Order and Bar
Military Cross
Légion d'honneur
Croix de Guerre

External links
TracesOfWar.com
http://www.pegasusarchive.org/normandy/derek_mills_roberts.htm
32918892
https://en.wikipedia.org/wiki/Features%20new%20to%20Windows%208
Features new to Windows 8
The transition from Windows 7 to Windows 8 introduced a number of new features across various aspects of the operating system. These include a greater focus on optimizing the operating system for touchscreen-based devices (such as tablets) and cloud computing. Development platform Language and standards support Windows 8 introduces the new Windows Runtime (WinRT) platform, which can be used to create a new type of application officially known as Windows Store apps and commonly called Metro-style apps. Such apps run within a secure sandbox and share data with other apps through common APIs. WinRT, being a COM-based API, allows for the use of various programming languages to code apps, including C++, C++/CX, C#, Visual Basic .NET, or HTML5 and JavaScript. Metro-style apps are packaged and distributed via APPX, a new file format for package management. Unlike desktop applications, Metro-style apps can be sideloaded, subject to licensing conditions. Windows 8.1 Update allows for sideloading apps on all Windows 8.1 Pro devices joined to an Active Directory domain. In Windows 8 up to two apps may snap to the side of a widescreen display to allow multi-tasking, forming a sidebar that separates the apps. In Windows 8.1, apps can continually be resized to the desired width. Snapped apps may occupy half of the screen. Large screens allow up to four apps to be snapped. Upon launching an app, Windows allows the user to pick which snapped view the app should open into. The term "Metro-style apps" referred to "Metro", a design language prominently used by Windows 8 and other recent Microsoft products. Reports surfaced that Microsoft employees were told to stop using the term due to potential trademark issues with an unspecified partner. A Microsoft spokesperson however, denied these reports and stated that "Metro-style" was merely a codename for the new application platform. 
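The snapping limits described earlier in this section (at most two side-by-side apps in Windows 8; freely resizable widths and up to four apps on large screens in Windows 8.1) can be sketched as a small model. This is an illustrative sketch, not OS code; the 500-pixel minimum width per snapped app is an assumption made for the example, not a figure stated in the text:

```python
# Illustrative model of the snap-view limits described above (assumptions,
# not actual Windows logic). MIN_SNAP_WIDTH is a hypothetical constant.
MIN_SNAP_WIDTH = 500  # assumed minimum horizontal pixels per snapped app

def max_snapped_apps(screen_width: int, windows81: bool = False) -> int:
    """Return how many apps could share the screen under this model."""
    # Windows 8 caps side-by-side apps at two; Windows 8.1 at four.
    cap = 4 if windows81 else 2
    return min(cap, screen_width // MIN_SNAP_WIDTH)
```

Under this model, a 1366-pixel-wide display fits two snapped apps on either version, while the four-app case only becomes reachable in Windows 8.1 on wider screens.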
Windows 8 introduces APIs to support near field communication (NFC) on Windows 8 devices, allowing functionality like launching URLs/applications and sharing of information between devices via NFC.

Windows Store
Windows Store is a digital distribution platform built into Windows 8 which, in a manner similar to Apple's App Store and Google Play, allows for the distribution and purchase of apps designed for Windows 8. Developers will still be able to advertise desktop software through Windows Store as well. To ensure that they are secure and of a high quality, Windows Store will be the only means of distributing WinRT-based apps for consumer-oriented versions of Windows 8. In Windows 8.1, Windows Store features a redesigned interface with improved app discovery and recommendations, and offers automatic updates for apps.

Shell and user interface
Windows 8 features a redesigned user interface built upon the Metro design language, with optimizations for touchscreens. Metro-style apps can either run in a full-screen environment or be snapped to the side of a screen alongside another app or the desktop; snapping requires a screen resolution of 1366×768 or higher. Windows 8.1 lowers the snapping requirement to a screen resolution of 1024×768. Users can switch between apps and the desktop by clicking on the top left corner or by swiping the left side of the touchscreen to invoke a sidebar that displays all currently opened Metro-style apps. Right-clicking on the upper left corner provides a context menu with options to switch between open apps. The traditional desktop is accessible from a tile on the Start screen or by launching a desktop app. The Alt+Tab shortcut cycles through all programs, regardless of type.
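The resolution requirement for snapping (1366×768 on Windows 8, lowered to 1024×768 on Windows 8.1) can be expressed directly; a minimal sketch for illustration:

```python
# Sketch of the screen-resolution requirement for snapping described above:
# Windows 8 requires at least 1366x768; Windows 8.1 lowers this to 1024x768.
def can_snap(width: int, height: int, windows81: bool = False) -> bool:
    min_width = 1024 if windows81 else 1366
    return width >= min_width and height >= 768
```

For example, a 1280×800 display cannot snap apps under Windows 8 but can under Windows 8.1.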
The interface also incorporates a taskbar on the right side of the screen known as "the charms" (lowercase), which can be accessed from any app or the desktop by sliding from the right edge of a touchscreen or compatible touchpad, by moving the mouse cursor to one of the right corners of the screen, or by pressing Win+C. The charms comprise the Search, Share, Start, Devices and Settings charms. The Start charm invokes or dismisses the Start screen. Other charms invoke context-sensitive sidebars that can be used to access app and system functionality. Because of the aforementioned changes involving the use of hot corners, user interface navigation in Windows 8 is fundamentally different from that of previous versions of Windows. To assist new users of the operating system, Microsoft incorporated a tutorial that appears during the installation of Windows 8, and also during the first sign-in of a new user account, which visually instructs users to move their mouse cursor into any corner of the screen (or swipe the corners on devices with touchscreens) to interact with the operating system. The tutorial can be disabled so that it does not appear for new user accounts. Windows 8.1 introduces navigation hints with instructions that are displayed during the first use of the operating system, and also includes a help and support app. In Windows 8.1, the aforementioned hotspots in the upper right and the upper left corners can be disabled. Pressing Win+X or right-clicking on the bottom left corner of the screen opens the Quick Link menu. This menu contains shortcuts to frequently used areas such as Control Panel, File Explorer, Programs and Features, Run, Search, Power Options and Task Manager. In Windows 8.1, the Quick Link menu includes options to shut down or restart a device.
Windows 8.1 Update introduced changes that facilitate mouse-oriented means of switching between and closing Metro-style apps, patterned upon the mechanics used by desktop programs in the Windows user interface. In lieu of the recent apps sidebar, icons for opened apps can be displayed on the taskbar; as with desktop programs, shortcuts to apps can also be pinned to the taskbar. When a mouse is connected, an auto-hiding title bar with minimize and close buttons is displayed within apps when the mouse is moved toward the top of the screen.

Bundled apps
A number of apps are included in the standard installation of Windows 8, including Mail (an email client), People (a contact manager), Calendar (a calendaring app), Messaging (an IM client), Photos (an image viewer), Music (an audio player), Video (a video player), Camera (a webcam or digital camera client), SkyDrive, Reader (an e-book reader), and six other apps that expose Bing services (Search, News, Finance, Weather, Travel and Sports). Windows 8.1 adds Calculator, Alarm Clock, Sound Recorder, Reading List, Food & Drink, Health & Fitness, Help + Tips, Scan, and a file manager integrated in the SkyDrive app. Windows 8 also includes a Metro-style system component called PC Settings which exposes a small portion of Control Panel settings. Windows 8.1 improves this component to include more options that were previously exclusive to Control Panel. Windows 8.1 Update adds additional options to PC Settings.

Start screen
Windows 8 introduces a new form of start menu called the Start screen, which resembles the home screen of Windows Phone and is shown in place of the desktop on startup. The Start screen serves as the primary method of launching applications and consists of a grid of app tiles which can be arranged into columnar groups; groups can be arranged with or without group names.
App tiles can either be small (taking up 1 square) or large (taking up 2 squares) in size and can also display dynamic content provided by their corresponding apps, such as notifications and slide shows. Users can arrange individual app tiles or entire groups. An additional section of the Start screen called "All Apps" can be accessed via a right click from the mouse or an upward swipe and will display all installed apps categorized by their names. A semantic zoom feature is available for both the Start screen and "All Apps" view which enables users to target a specific area or group on the screen. Apps can also be uninstalled directly from the Start screen. Windows 8.1 makes the following changes to the Start screen: The "All Apps" section, now accessed with a hidden downward arrow or upward touch gesture, features a visible search bar which can display results for apps or other items. The section is dismissed by a similar button with an upward arrow. An option to display the "All Apps" section automatically instead of the Start screen is available. On high-resolution display monitors with sufficiently large physical screen sizes, an option to display additional tiles on the Start screen is available. Start screen tiles can be locked in place to prevent accidental manipulation. The uninstall command allows Windows Store apps to be uninstalled from multiple computers. More size options are available for live tiles on the Start screen: small, medium, wide, and large; the "small" size is one quarter of the default size in Windows 8. Color options on the Start screen are expanded, allowing users to choose any color and shade rather than selecting from a limited palette. New background options for the Start screen include animated backgrounds and the ability to use the desktop wallpaper. Synchronization settings are enhanced, including those for app tile arrangement, tile sizes, and background. 
In a multi-monitor configuration, Windows 8.1 can optionally display the Start screen only on the primary display monitor instead of the currently active monitor when the Windows key is pressed. Multiple desktop applications can be selected from the Start screen and pinned to the taskbar at once, or multiple desktop applications and Metro-style apps can be selected from the "All Apps" view and pinned to the Start screen at once. Windows 8.1 Update augments this capability by allowing Metro-style apps to be pinned to the taskbar. The Start menu in previous versions of Windows allowed only one desktop application to be selected and/or pinned at a time. By default, Windows 8.1 no longer displays recently installed apps and their related entries on the Start screen; users must manually pin these items. Windows 8.1 introduces options to categorize apps listed within the "All Apps" section of the Start screen. Apps can be categorized by their name, the date they were installed, their frequency of use, or their category. When sorted by category, desktop applications can optionally be prioritized within the interface. Windows 8.1 Update allows additional app tiles to be displayed within the "All Apps" section of the Start screen. The ability to highlight recently installed apps has been enhanced in Windows 8.1 Update, which now displays the total number of recently installed apps within the lower-left corner of the Start screen in addition to highlighting them. In contrast, the Start menu interface included in previous versions of Windows only highlighted apps. Windows 8.1 Update also enables semantic zoom upon clicking or tapping the title of an app category. Windows 8.1 reverts two changes that were featured in Windows 8. Windows 8 removed the Start button on the taskbar in favor of other ways of invoking the Start screen; Windows 8.1 restores this button. Windows 8 also showed the Start screen upon logon, as opposed to previous versions of Windows, which show the desktop. 
In Windows 8.1, users may now choose which one to see first. Windows 8.1 Update boots to the desktop by default on non-tablet devices and introduces the ability to switch to the taskbar from the Start screen or from an open Metro-style app by directing the mouse cursor toward the bottom of the screen. Windows 8.1 introduces a new "slide to shutdown" option which allows users to drag their partially revealed lock screen image toward the bottom of the screen to shut down the operating system. Windows 8.1 Update introduces a visible power button on the Start screen. This power button does not appear on all hardware device types. By default, new account profiles in Windows 8.1 Update also receive four additional tiles pinned to the Start screen: This PC, PC Settings, Documents, and Pictures. In Windows RT, only the PC Settings tile is added. Search In Windows 8, searching from the Start screen or clicking on the Search charm will display search results within a full-screen interface. Unlike previous versions of Windows where searching from the Start menu returned results from multiple sources simultaneously, Windows 8 searches through individual categories: apps, settings, and files. By default, Windows 8 searches for apps after a user begins searching from the Start screen or Search charm, but can also search other categories from the user interface or via keyboard shortcuts. Pressing Win+Q opens the Search charm to search for apps, Win+F searches for files, and Win+W searches for settings. Search queries can also be redirected between specific categories or apps after being entered. When searching for apps, Windows 8 will display a list of apps that support the Search charm; frequently used apps will be prioritized and users can pin individual apps so that they always appear. The Search charm can also search directly within apps if a user redirects an entered search query to a specific app or presses Win+Q from within an app that is already open. 
When searching for files, Windows 8 will highlight words or phrases that match a search query and provide suggestions based on the content and properties of files that appear. Information about the files themselves, such as associated programs and sizes, appears directly beneath filenames. If a user hovers over a file with the mouse cursor or long-presses it with a finger, a tooltip will appear and display additional information. In Windows 8.1, searching no longer opens a full-screen interface; results are instead displayed in a Metro-style flyout interface. Windows 8.1 also reinstates unified local search results, and can optionally provide results from Bing. Dubbed "Smart Search," Windows 8.1 and Bing can optionally analyze a user's search habits to return relevant content that is stored locally and from the Internet. When enabled, Smart Search exposes additional search categories within the user interface: web images and web videos, and can be accessed via a new keyboard shortcut, Win+S. A new full screen "hero" interface powered by Bing can display aggregated multimedia (such as photos, YouTube videos, songs/albums on Xbox Music) and other content (such as news articles and Wikipedia entries) related to a search query. Like its predecessor, Windows 8.1 allows users to search through settings and file categories, but the option to search through a category for apps is removed from the interface; the keyboard shortcut previously associated with this functionality, Win+Q, now displays unified search results. The Search charm also can no longer search from within apps directly or display a list of compatible apps. To search for content within apps, users must first open an app and, if available, use a search feature from within that app's interface. Windows 8.1 Update enhances the Bing Smart Search feature by providing support for natural language queries, which can detect misspellings and display apps or settings relevant to a query. 
For example, typing "get apps for Windows" will display a shortcut to the Windows Store. Windows 8.1 Update also introduces a visible search button on the Start screen that acts as a shortcut to the Metro-style flyout interface. User login Windows 8 introduces a redesigned lock screen interface based on the Metro design language. The lock screen displays a customizable background image, the current date and time, notifications from apps, and detailed app status or updates. Two new login methods optimized for touch screens are also available: a four-digit PIN, and a "picture password," which allows users to log in by performing certain gestures on a selected picture. These gestures take into account the shape, the start and end points, and the direction. However, the shapes and gestures are limited to tapping and tracing a line or circle. Microsoft found that limiting the gestures increased the speed of sign-ins by three times compared to allowing freeform methods. An incorrect gesture always denies the login, and after five unsuccessful attempts the PC is locked until a text password is provided. Windows 8.1 introduces the ability to display a photo slide show on the lock screen. The feature can display images from local or remote directories, and includes additional options to use photos optimized for the current screen resolution, to disable the slide show while the device is running on battery power, and to display the lock screen slide show instead of turning off the screen after a period of user inactivity. The lock screen can also display interactive toast notifications. As examples, users can answer calls or instant messages received from Skype contacts, or dismiss alarm notifications from the lock screen. Users can also take photos without dismissing the lock screen. Notifications Windows 8 introduces new forms of notifications for Metro-style apps and for certain events in File Explorer. 
Toast notifications: alert the user to specific events, such as the insertion of removable media.
Tile notifications: display dynamic information on the Start screen, such as weather forecasts and news updates.
Badge notifications: display numeric counters with values from 1 to 99 that indicate certain events, such as the number of unread e-mail messages or the number of available updates for a particular app. Additional information may also be displayed by a badge notification, such as the status of an Xbox Music app.
The PC Settings component includes options to globally disable all toast notifications, app notifications on the lock screen, or notification sounds; notifications can also be disabled on a per-app basis. In the Settings charm, Windows 8 provides additional options to suppress toast notifications for 1-, 3-, or 8-hour intervals. Windows 8.1 introduces a Quiet Hours feature, also available on Windows Phone, that allows users to suppress notifications based on the time of day (e.g., notifications can be disabled from 12:00 AM to 6:00 PM). Microsoft account integration Windows 8 allows users to link profiles with a Microsoft account to provide additional functionality, such as the synchronization of user data and settings, including those belonging to the desktop, and allows for integration with other Microsoft services such as Xbox Live, Xbox Music, Xbox Video (for gaming and multimedia) and SkyDrive online file storage. Display screen Windows 8 includes improved support for multi-monitor configurations; the taskbar can now optionally be shown on multiple displays, and each display can also show its own dedicated taskbar. In addition, options are available which can prevent taskbar buttons from appearing on certain monitors. Wallpapers can also be spanned across multiple displays, or each display can have its own separate wallpaper. Windows 8.1 includes improved support for high-resolution monitors. 
A desktop scaling feature now helps resize items on the desktop to address visibility problems on screens with a very high native resolution. Windows 8.1 also introduces per-display DPI scaling, and provides an option to scale to 200%. File Explorer Windows Explorer, renamed File Explorer, now incorporates a ribbon toolbar, designed to bring forward the most commonly used commands for easy access. The "Up" button (which advances the user back a level in the folder hierarchy), removed from Explorer after Windows XP, has also been restored. Additionally, File Explorer features a redesigned preview pane that takes advantage of widescreen layouts. File Explorer also provides a built-in function for mounting ISO, IMG, and VHD files as virtual drives. For easier management of files and folders, Windows 8 introduces the ability to move selected files or folders via drag and drop from a parent folder into a subfolder listed within the breadcrumb hierarchy of the address bar in File Explorer. Progress windows for file operations have also been redesigned; they can show multiple operations at once, display a graph of transfer speeds, and pause and resume a file transfer. A new interface has also been introduced for managing file name collisions in a file operation, allowing users to easily control which conflicting files are copied. Libraries, introduced in Windows 7, can now have their individual icons changed through the user interface; previously, users had to change icons manually by editing configuration files. Windows 8.1, however, no longer creates any default libraries for new users, and does not display the Libraries listing in File Explorer by default. Instead, Windows 8.1 introduces shortcuts to the default user profile folders (Documents, Downloads, Pictures, etc.) within the This PC location of File Explorer. The libraries can be re-enabled from the Options menu. 
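The name-collision handling described above can be illustrated with a simple resolver. Windows does not document File Explorer's exact algorithm; this Python sketch merely demonstrates the familiar "append a counter" strategy and is not Explorer's actual implementation:

```python
import os

def resolve_collision(filename, existing):
    """Return a non-conflicting name by appending a counter,
    in the style of File Explorer's 'file (2).txt' renaming.
    (Sketch only; not Explorer's actual algorithm.)"""
    if filename not in existing:
        return filename
    stem, ext = os.path.splitext(filename)
    n = 2
    while f"{stem} ({n}){ext}" in existing:
        n += 1
    return f"{stem} ({n}){ext}"

existing = {"report.txt", "report (2).txt"}
assert resolve_collision("report.txt", existing) == "report (3).txt"
assert resolve_collision("notes.txt", existing) == "notes.txt"
```

The counter simply advances past any names already taken, which is why copying "report.txt" twice into the same folder yields "(2)" and then "(3)".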
Internet Explorer Windows 8 ships with Internet Explorer 10, which can run as either a desktop program (where it operates similarly to Internet Explorer 9), or as an app with a new full-screen interface optimized for use on touchscreens. Internet Explorer 10 also contains an integrated version of Flash Player, which is available in full on the desktop, and in a limited form within the "Metro" app. Windows 8.1 ships with Internet Explorer 11, which includes tab syncing, WebGL and SPDY support, along with expanded developer tools. The Metro version also adds access to favorites and split-screen snapping of multiple tabs; an additional option to always display the address bar and tabs is also available. The Metro version can also detect and highlight phone numbers on a web page and turn them into clickable links that, when clicked, initiate a call with a compatible app such as Skype. Task Manager Windows 8 includes an overhauled version of Task Manager, which features the following changes: Task Manager defaults to a simple view which only displays a list of programs that have a visible window. The expanded view is an updated version of the previous Task Manager with several tabs. Resource utilization in the Processes tab is shown using a heat map, with darker shades of yellow representing heavier use. The Performance tab is split into CPU, memory, disk, Ethernet, and wireless network (if applicable) sections. There are overall graphs for each, and clicking on one displays details for that particular resource. The CPU tab no longer displays individual graphs for every logical processor on the system by default; it may instead show data for each NUMA node. The CPU tab displays simple percentages on heat-mapping tiles to show utilization for systems with many (64 or more, up to 640) logical processors. 
The color used for these heat maps is blue, with a darker color again indicating heavier utilization. Hovering the cursor over any logical processor's graph shows the NUMA node of that processor and its ID. The new Startup tab lists startup programs and their impact on boot time. Windows Vista included a feature to manage startup applications that was removed in Windows 7. The Processes tab now lists application names, application status, and overall usage data for CPU, memory, hard disk, and network resources for each process. A new option to restart File Explorer upon its selection is provided. Task Manager recognizes when a Windows Runtime application is in "Suspended" status. The process information found in the Processes tab of the older Task Manager can be found in the Details tab. Touch keyboard Windows 8 introduces a revised virtual (also known as on-screen) keyboard interface optimized for touchscreen devices that includes wider spacing between keys and is designed to prevent common typing errors that occur while using touchscreens. Pressing and holding down a key reveals related keys which can be accessed via a press or swipe, and suggestions for incomplete words are available. Emoji characters are also supported. Windows 8.1 introduces the ability to swipe the space bar in the desired direction of a suggested word to switch between on-screen suggestions. Windows 8.1 Update introduces a new gesture that allows users to tap twice and hold the second tap to drag and drop highlighted text or objects. A visible option to hide or show the virtual keyboard is also available. Password input Windows 8 displays a "peek" button for password text boxes which optionally allows users to view a password as it is entered, in order to ensure that it is typed correctly. The feature can be disabled via Group Policy. Infrastructure File History File History is a continuous data protection component. 
File History automatically creates incremental backups of files stored in Libraries, including those for users participating in a HomeGroup, and in user-specified folders, to a different storage device (such as another internal or external hard drive, Storage Space, or network share). Specific revisions of files can then be tracked and restored using the "History" functions in File Explorer. File History replaces both Backup and Restore and Shadow Copy (known in Windows Explorer as "Previous Versions") as the main backup tool of Windows 8. Unlike Shadow Copy, which performs block-level tracking of files, File History utilizes the USN Journal to track changes, and simply copies revisions of files to the backup location. Unlike Backup and Restore, File History cannot back up files encrypted with EFS. Hardware support Windows 8 adds native support for USB 3.0, which allows for faster data transfers and improved power management with compatible devices. This native stack includes support for the newer, more efficient USB Attached SCSI (UAS) protocol, which is turned on by default even for USB 2.0 devices, although these must have supporting firmware and hardware to take advantage of it. Windows 8.1 enhanced support for the power-saving features of USB storage devices, though this addition was not without problems: some poorly implemented hardware degraded the user experience with hangs and disconnects. Support for Advanced Format hard drives without emulation is included for the first time. A port of Windows for the ARM architecture was also created for Windows 8. Known as Windows RT, it is specifically optimized for mobile devices such as tablets. Windows RT is only able to run third-party Windows Store apps, but comes with a preinstalled version of Office 2013 specially redesigned for touchscreen use. Windows 8.1 improves hardware support with DirectX 11.2. Windows 8.1 adds native support for NVM Express. Windows 8 adds support for UEFI Secure Boot and TPM 2.0. 
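File History's journal-based approach, described earlier in this section, copies only files whose change records appear since the last backup rather than rescanning everything. The following Python toy model illustrates that idea; the journal structure here is invented for the example (the real USN Journal is an NTFS feature with a very different format):

```python
import shutil
import tempfile
from pathlib import Path

# Toy model of journal-driven incremental backup: instead of scanning
# every file (or tracking blocks like Shadow Copy), consult a change
# journal and copy only the files it lists. The journal here is just
# a Python list of relative paths -- an illustrative stand-in.

def incremental_backup(journal, source: Path, dest: Path):
    """Copy only the files recorded as changed in the journal."""
    copied = []
    for rel_name in journal:
        target = dest / rel_name
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source / rel_name, target)
        copied.append(rel_name)
    journal.clear()            # changes are now backed up
    return copied

src = Path(tempfile.mkdtemp())
dst = Path(tempfile.mkdtemp())
(src / "a.txt").write_text("one")
(src / "b.txt").write_text("two")

journal = ["a.txt"]            # only a.txt changed since last run
assert incremental_backup(journal, src, dst) == ["a.txt"]
assert (dst / "a.txt").read_text() == "one"
assert not (dst / "b.txt").exists()
```

Because unchanged files never appear in the journal, the backup pass touches only what actually changed, which is the efficiency argument the article makes for the USN-based design.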
Installation Alongside the existing WinPE-based Windows Setup (which is used for installations that are initiated by booting from DVD, USB, or network), Upgrade Assistant is offered to provide a simpler and faster process for upgrading to Windows 8 from previous versions of Windows. The program runs a compatibility check to scan the device's hardware and software for Windows 8 compatibility, and then allows the user to purchase and download Windows 8, generate installation media on a DVD or USB flash drive, and install it. The new installation process also allows users to transfer user data into a clean installation of Windows. A similar program, branded as Windows 8 Setup, is used for installations where the user already has a product key. Windows 8 implements OEM Activation 3.0, which allows Microsoft to digitally distribute Windows licenses to original equipment manufacturers (OEMs). Windows 8 devices store product keys directly in firmware rather than printed on a Certificate of Authenticity (CoA) sticker. This new system is designed to prevent OEM product keys from being used on computers they are not licensed for, and also allows the installer to automatically detect and accept the product key in the event of re-installation. Windows 8.1 Update adds a new installation mode known as "WIMBoot", in which the WIM image that contains the Windows installation is left compressed rather than being extracted, and the system is configured to use files directly from within the system image. This installation method was primarily designed to reduce the footprint of the Windows installation on devices with small amounts of storage. The system image also doubles as the recovery image, speeding up Refresh and Reset operations. It is only supported on systems with a Unified Extensible Firmware Interface (UEFI) where Windows is located on a solid-state drive or eMMC. 
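The run-from-compressed-image idea behind WIMBoot (serving files directly out of the image rather than extracting them to disk first) can be illustrated in miniature with any archive format. This Python sketch uses a ZIP file purely as an analogy; WIM is a different, Microsoft-specific format and the file path below is invented for the example:

```python
import io
import zipfile

# Build a tiny in-memory "image" containing one file.
image = io.BytesIO()
with zipfile.ZipFile(image, "w", zipfile.ZIP_DEFLATED) as z:
    z.writestr("windows/system.ini", "boot=yes")  # hypothetical file

# WIMBoot-style access: read the file straight from the compressed
# image on demand, without extracting the archive to disk first.
with zipfile.ZipFile(image) as z:
    data = z.read("windows/system.ini")

assert data == b"boot=yes"
```

Only the compressed image occupies storage; individual files are decompressed on demand, which is the footprint saving the article describes.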
Networking Windows 8 incorporates improved support for mobile broadband as a "first-class" method of internet connectivity. Upon the insertion of a SIM card, the operating system will automatically determine the user's carrier and configure relevant connection settings using an Access Point Name database. The operating system can also monitor mobile data usage, and changes its behavior accordingly to reduce bandwidth use on metered networks. Carriers can also offer their own dedicated Windows Store apps for account management, which can also be installed automatically as a part of the connection process. This functionality was demonstrated with an AT&T app, which could also display monthly data usage statistics on its live tile. Windows 8 also reduces the need for third-party drivers and software to implement mobile broadband by providing a generic driver, and by providing an integrated airplane mode option. Windows 8 supports geolocation. Windows 8.1 adds support for NFC printing, mobile broadband tethering, auto-triggered VPN and geofencing. Windows 8.1 Update provides options for the "Network" Settings charm to show the estimated data usage for a selected network, and to designate a network as a metered connection. Startup Windows 8 defaults to a "hybrid boot" mode; when the operating system is shut down, it hibernates the kernel, allowing for a faster boot on the subsequent startup. These improvements are further compounded by using all processor cores during startup by default. To create a more seamless transition between the Power-on self-test and Windows startup process, manufacturers' logos can now be shown on the Windows boot screen on compatible systems with UEFI. The Advanced Startup menu now uses a graphical interface with mouse and touch support in place of the text-based menu used by previous versions. 
As the increased boot speed of devices with UEFI can make it difficult to access the Advanced Startup menu using keyboard shortcuts during boot, the menu can now be launched from within Windows: using the PC Settings app, by holding down Shift while clicking the Restart option in the Power menu, or by using the new "-o" switch on shutdown.exe. The legacy version of the Advanced Startup menu can still be enabled instead. UEFI firmware can be exposed to Windows via class drivers. Updated firmware capsules can be distributed as an update to this "driver" in a signed package with an INF file and security catalog, similarly to those for other devices. When the "driver" is installed, Windows prepares the update to be installed on the next boot, and Windows Boot Manager renders status information on the device's boot screen. Video subsystem Windows 8 includes WDDM 1.2 and DirectX Graphics Infrastructure (DXGI) 1.2. The Desktop Window Manager now runs at all times (even on systems with unsupported graphics cards, where DWM now also supports software rendering), and now also includes support for stereoscopic 3D content. Other major features include preemptive multitasking with finer granularity (DMA buffer, primitive, triangle, pixel, or instruction-level), reduced memory footprint, improved resource sharing, and improved timeout detection and recovery. 16-bit color surface formats (565, 5551, 4444) are mandatory in Windows 8, and Direct3D 11 Video supports YUV 4:4:4/4:2:2/4:2:0/4:1:1 video formats with 8, 10, and 16-bit precision, as well as 4 and 8-bit palettized formats. Windows 8.1 introduces WDDM 1.3 and adds support for Miracast, which enables wireless or wired delivery of compressed standard- or high-definition video to or from desktops, tablets, mobile phones, and other devices. Printing Windows 8 adds support for printer driver architecture version 4, which adds a Metro-friendly interface and changes how printer drivers are structured. 
Windows 8.1 adds support for Wi-Fi Direct printing, NFC printing, and native APIs for 3D printing through the XML-based 3D Manufacturing Format (3MF). Windows PowerShell Windows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and associated scripting language built on the .NET Framework. PowerShell provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems. Windows 8 includes Windows PowerShell v3.0. Windows 8.1 comes with Windows PowerShell v4.0, which features a host of new commands for managing the Start screen, Windows Defender, Windows components, hardware, and networking. Windows To Go Windows To Go is a feature exclusive to the Enterprise version of Windows 8 which allows an organization to provision bootable USB flash drives with a Windows installation on them, allowing users to access their managed environment on any compatible PC. Windows 8.1 updates this feature to enable booting from a USB composite device with a storage function and a smart card function. Maintenance The Action Center introduced in Windows 7 is expanded to include controls and notifications for new categories, such as SmartScreen status, drive health status, File History, device software updates, and the new Automatic Maintenance feature, which can periodically perform a number of maintenance tasks, such as diagnostics, updates, and malware scans, to improve system performance. The PC Settings app in Windows 8 can be used to interact with Windows Update, although the traditional interface from Control Panel is retained. Windows 8 is able to distribute firmware updates on compatible devices and can be configured not to automatically download Windows updates over metered networks. A new set of Windows PowerShell cmdlets enables adding or removing features of Windows, as the Programs and Features applet in Control Panel does. 
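The metered-network update behavior mentioned above amounts to a simple download-policy check before any transfer begins. The following Python sketch is illustrative only; the function and flag names are invented for the example and are not Windows APIs:

```python
def should_auto_download(update_size_bytes, network_is_metered,
                         allow_on_metered=False):
    """Decide whether to auto-download an update.
    Mirrors, in spirit, Windows 8's option to defer automatic
    downloads on metered connections; all names are illustrative."""
    if network_is_metered and not allow_on_metered:
        return False               # defer until an unmetered network
    return update_size_bytes > 0   # nothing to download otherwise

assert should_auto_download(50_000_000, network_is_metered=False)
assert not should_auto_download(50_000_000, network_is_metered=True)
assert should_auto_download(50_000_000, network_is_metered=True,
                            allow_on_metered=True)
```

The explicit opt-in flag reflects the fact that the user, not the system, decides whether a given connection may be used for bulk transfers.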
The Deployment Image Servicing and Management (DISM) utility in Windows 8 includes all features that were previously available in ImageX and is able to periodically check the component store for corruption and repair it. It can report the amount of disk space in use by the WinSxS folder and can also determine whether a cleanup should be performed. Windows 8 can now detect when a system is experiencing issues that prevent it from functioning correctly, and automatically launch the Advanced Startup menu to access diagnostic and repair functions. For system recovery, Windows 8 introduced new functions known collectively as "Push-button reset", which allow a user to re-install Windows without needing to use installation media. The feature consists of "Reset" and "Refresh" functions, accessible from within the advanced boot options menu and PC Settings. Both of these options reboot the system into the Windows Recovery Environment to perform the requested operation; Refresh preserves user profiles, settings, and Windows Store apps, while Reset performs a clean installation of Windows. The Reset function may also perform specialized disk wiping and formatting procedures for added security. Both operations will remove all installed desktop applications from the system. Users can also create a custom disk image for use with Refresh and Reset. Security Biometrics Windows 8 introduces virtual smart card support. A digital certificate of a smart card can be stored on a user's machine and protected by the Trusted Platform Module, thereby eliminating the need for the user to physically insert a smart card, though entering a PIN is still required. Virtual smart card support enables new two-factor authentication scenarios. 
Windows 8.1 improves this functionality by simplifying the device enrollment process for virtual smart cards, and introduces additional virtual smart card functionality for Metro-style applications, as well as enrollment and management features via WinRT APIs. Windows 8.1 features pervasive support for biometric authentication throughout the operating system, includes a native fingerprint registration feature, and enables the use of a fingerprint for tasks such as signing into a device, purchasing apps from the Windows Store, and consenting to authentication prompts (e.g., User Account Control). Windows 8.1 also introduces new WinRT APIs for biometrics. Device encryption On Windows RT, logging in with a Microsoft account automatically activates passive device encryption, a feature-limited version of BitLocker which seamlessly encrypts the contents of mobile devices to protect them. On Windows 8.1, device encryption is similarly available for x86-based Windows devices, automatically encrypting user data as soon as the operating system is configured. When a user signs in with a Microsoft account or on a supported Active Directory network, a recovery key is generated and saved directly to the user's account. Unlike BitLocker, device encryption on x86-based devices requires that the device meet the Connected Standby specifications (which, among other requirements, require that the device use solid-state storage and have RAM soldered directly to the motherboard) and have a Trusted Platform Module (TPM) 2.0 chip. Device lockdown Windows 8.1 introduces Assigned Access, formerly called Kiosk mode, which restricts the Windows device to running a single, predetermined Metro-style app. Windows 8.1 was slated to include a Provable PC Health feature which would allow owners to subject devices connected to a network to remote PC analysis. 
Under Provable PC Health, connected devices would periodically send various configuration-related information to a cloud service, which would provide suggestions for remediation upon detection of an issue. However, the feature was dropped before the operating system's general availability. Family Safety Windows 8 integrates Windows Live Family Safety into the operating system, allowing parents to restrict user activity via web filtering, application restriction, and computer usage time limits. Parental controls functionality, introduced in Windows Vista, was previously partially removed in Windows 7 and made a part of Windows Live Family Safety instead. A notable change in Family Safety is that administrators can now specify time periods for computer usage. For example, an administrator can restrict a user account so that it can only remain signed in for a total time period of one hour. In previous versions of Windows, administrators could only restrict accounts based on the time of day. Startup security Windows 8 introduced four new features to offer security during the startup process: UEFI secure boot, Trusted Boot, Measured Boot and Early Launch Anti-Malware (ELAM). Of the four, secure boot is not a native feature of Windows 8; it is part of UEFI. At startup, the UEFI firmware checks the validity of the digital signature present in the Windows Boot Loader (bootmgfw.efi), verifying it against Microsoft's public key. This signature check happens every time the computer is booted and prevents malware from infecting the system before the operating system loads. The UEFI firmware will only accept signatures from keys that have been enrolled into its database, and, prior to Windows 8's release, Microsoft announced that certified computers had to ship with Microsoft's public key enrolled and with secure boot enabled by default. 
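At a very high level, the boot-time check described above compares a signature on the boot loader against a database of enrolled keys and refuses to boot anything that fails verification. The following Python sketch models only that idea: HMAC tags stand in for the real Authenticode/RSA signatures, and the key names are invented, so this is a conceptual model rather than actual UEFI behavior:

```python
import hashlib
import hmac

# Enrolled signing keys. In real UEFI firmware these are X.509
# certificates in the 'db' signature database; raw byte keys here
# are a stand-in for illustration.
enrolled_keys = {b"example-platform-key"}

def sign(loader: bytes, key: bytes) -> bytes:
    """Produce a tag for the loader (HMAC stands in for RSA signing)."""
    return hmac.new(key, loader, hashlib.sha256).digest()

def secure_boot_check(loader: bytes, signature: bytes) -> bool:
    """Allow boot only if some enrolled key verifies the signature."""
    return any(hmac.compare_digest(signature, sign(loader, k))
               for k in enrolled_keys)

loader = b"bootmgfw.efi contents"
good = sign(loader, b"example-platform-key")
bad = sign(loader, b"malware-key")
assert secure_boot_check(loader, good)
assert not secure_boot_check(loader, bad)
```

The model also captures why the check runs on every boot: a tampered loader no longer matches its signature, so verification fails before any operating-system code executes.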
However, following the announcement, the company was accused by critics and free and open-source software advocates (including the Free Software Foundation) of trying to use secure boot to hinder or outright prevent the installation of alternative operating systems such as Linux. Microsoft denied that the secure boot requirement was intended to serve as a form of lock-in, and clarified that x86 certified systems (but not ARM systems) must allow secure boot to enter custom mode or be disabled. Trusted Boot is a feature of the Windows boot loader that ensures the integrity of all Microsoft components loaded into memory, including ELAM, which loads last. ELAM ensures that all third-party boot drivers are trustworthy; they are not loaded if the ELAM check fails. ELAM can use either Windows Defender or a compatible third-party antivirus. During the 2011 Build conference in Anaheim, California, Microsoft showed a Windows 8 machine that can prevent an infected USB flash drive from compromising the boot process. Measured Boot can attest to the state of a client machine by sending details about its configuration to a remote machine. The feature relies on the attestation feature of the Trusted Platform Module and is designed to verify the boot integrity of the client. Windows Platform Binary Table Windows Platform Binary Table allows executable files to be stored within UEFI firmware for execution on startup. Microsoft states this feature is meant to "allow critical software to persist even when the operating system has changed or been reinstalled in a 'clean' configuration", specifically anti-theft security software, but it has also been misused, including by Lenovo with their "Lenovo Service Engine" feature. Windows Defender In Windows 7, Windows Defender was an anti-spyware solution. Windows 8 introduced Windows Defender as an antivirus solution (and as the successor of Microsoft Security Essentials), which provides protection against a broader range of malware. 
It was the first time that a standard Windows install included an antivirus solution. Windows 8.1 augments Windows Defender with network behavior monitoring, a feature that had been present in Microsoft Security Essentials since July 2010. Keyboard shortcuts Windows 8 includes various features that can be controlled through keyboard shortcuts. Displays the Charms Bar. Opens the Search charm to search for files. Opens the Share charm. Opens the Settings charm. Switches between the active app and a snapped app. Opens the Devices charm. Locks the current display orientation. Opens the Search charm to search for apps. Shows available app commands. and respectively activate and deactivate semantic zoom. Switches the user's IME. Reverts to a previous IME. Cycles through open Metro-style apps. Cycles through open Metro-style apps and snaps them as they are cycled. Cycles through open Metro-style apps in reverse order. In a multi-monitor configuration, moves the Start screen and open Metro-style apps to the monitor on the left. In a multi-monitor configuration, moves the Start screen and open Metro-style apps to the monitor on the right. Initiates the Peek feature introduced in Windows 7. Snaps an open Metro-style app to the left side of the screen. Snaps an open Metro-style app to the right side of the screen. Takes a screenshot of the entire screen and saves it to a Screenshots folder within the Pictures directory. On a tablet, this feature can be accessed by simultaneously pressing a button with the Windows logo and a button that lowers the volume of the device. Virtualization Hyper-V, a native hypervisor previously offered only in Windows Server, is included in Windows 8 Pro, replacing Windows Virtual PC, a hosted hypervisor. 
Storage Storage Spaces Storage Spaces is a storage virtualization technology which succeeds Logical Disk Manager and allows the organization of physical disks into logical volumes, similar to Logical Volume Manager (Linux), RAID 1, or RAID 5, but at a higher abstraction level. A storage space behaves like a physical disk to the user, with thin provisioning of available disk space. The spaces are organized within a storage pool, i.e. a collection of physical disks, which can span multiple disks of different sizes, performance, or technology (USB, SATA, SAS). The process of adding new disks or replacing failed or older disks is fully automatic, but can be controlled with PowerShell commands. The same storage pool can host multiple storage spaces. Storage Spaces have built-in resiliency against disk failures, achieved by either disk mirroring or striping with parity across the physical disks. Each storage pool on the ReFS filesystem is limited to 4 PB (4096 TB), but there are no limits on the total number of storage pools or the number of storage spaces within a pool. A review in Ars Technica concluded that "Storage Spaces in Windows 8 is a good foundation, but its current iteration is simply too flawed to recommend in most circumstances." Microsoft MVP Helge Klein also criticized Storage Spaces as unsuitable for its touted market of SOHO users. Storage Spaces was further enhanced in Windows Server 2012 R2 with tiering and caching support, which can be used for caching to SSD; these new features were not added to Windows 8.1. Instead, Windows 8.1 gained support for specific features of SSHD drives, e.g. for host-hinted LBA caching (TP_042v14_SATA31_Hybrid Information). NVM Express Windows 8.1 gained support for NVM Express (NVMe), a new industry-standard protocol for PCIe-attached storage, such as PCIe flash cards. 
Windows 8.1 also supports the TRIM command for PCI Express SSDs based on NVMe; Windows 7 supported TRIM only for AHCI/SATA drives connected internally via the M.2 or SATA/IDE connectors. Windows 8.1 supports the SCSI unmap command, a full analog of the SATA TRIM command for devices that use the SCSI driver stack. If both the external SSD and the device firmware in its bridge chip support TRIM, Windows 8.1 can perform a TRIM operation on external SATA and NVMe SSDs that connect via USB, as long as they use the USB Attached SCSI Protocol (UASP). Windows 8.1 also introduces a manual TRIM function via Microsoft Drive Optimizer, which can perform an on-demand, user-requested TRIM operation on internal and external SSDs. Windows 7 only performed TRIM automatically on internal SATA SSDs as part of system operations such as delete, format, and Diskpart. See also Windows Server 2012 References External links Building Windows 8 Blog Windows 8 Windows 8
32936800
https://en.wikipedia.org/wiki/Internet%20censorship%20circumvention
Internet censorship circumvention
Internet censorship circumvention is the use of various methods and tools to bypass internet censorship. Various techniques and methods are used to bypass Internet censorship, and have differing ease of use, speed, security, and risks. Some methods, such as the use of alternate DNS servers, evade blocking by using an alternate address or address lookup system to access the site. Techniques using website mirrors or archive sites rely on other copies of the site being available at different locations. Additionally, there are solutions that rely on gaining access to an Internet connection that is not subject to filtering, often in a different jurisdiction not subject to the same censorship laws, using technologies such as proxying, Virtual Private Networks, or anonymization networks. An arms race has developed between censors and developers of circumvention software, resulting in more sophisticated blocking techniques by censors and the development of harder-to-detect tools by researchers. Estimates of adoption of circumvention tools vary substantially and are disputed. Barriers to adoption can include usability issues, difficulty finding reliable and trustworthy information about circumvention, lack of desire to access censored content, and risks from breaking the law. Circumvention methods There are many methods available that may allow the circumvention of Internet filtering, which can widely vary in terms of implementation difficulty, effectiveness, and resistance to detection. Alternate names and addresses Filters may block specific domain names, either using DNS hijacking or URL filtering. Sites are sometimes accessible through alternate names and addresses that may not be blocked. Some websites may offer the same content at multiple pages or domain names. For example, the English Wikipedia is available at Main Page, and there is also a mobile-formatted version at Wikipedia, the free encyclopedia. 
If DNS resolution is disrupted but the site is not blocked in other ways, it may be possible to access a site directly through its IP address or by modifying the hosts file. Using alternative DNS servers, or public recursive name servers (especially when used through an encrypted DNS client), may bypass DNS-based blocking. Censors may block specific IP addresses. Depending on how the filtering is implemented, it may be possible to use different forms of the IP address, such as by specifying the address in a different base. For example, the following URLs all access the same site, although not all browsers will recognize all forms: http://1.1.1.1/ (dotted decimal), http://16843009/ (decimal), http://0001.0001.0001.0001/ (dotted octal), http://0x01010101/ (hexadecimal), and http://0x01.0x01.0x01.0x01/ (dotted hexadecimal). Blockchain technology is an attempt to decentralize namespaces outside the control of a single entity. Decentralized namespaces enable censorship-resistant domains. The BitDNS discussion began in 2010 with a desire to achieve names that are decentralized, secure, and human-readable. Mirrors, caches, and copies Cached pages: Some search engines keep copies of previously indexed webpages, or cached pages, which are often hosted by search engines and may not be blocked. For example, Google allows the retrieval of cached pages by entering "cache:some-url" as a search request. Mirror and archive sites: Copies of web sites or pages may be available at mirror or archive sites such as the Internet Archive's Wayback Machine or Archive.today. RSS aggregators: RSS aggregators such as Feedly may be able to receive and pass on RSS feeds that are blocked when accessed directly. Alternative Platforms Decentralized Hosting: Content creators may publish to an alternative platform that is willing to host their content. Napster was the first peer-to-peer platform, but its centralized bootstrapping made it technically easy to shut down. 
Gnutella was the first network to sustain hosting through decentralization. Freenet's model is that "true freedom requires true anonymity." BitTorrent was later developed to allocate resources with high performance and fairness. ZeroNet was the first DHT-based network to support dynamic and updateable webpages. YaCy is the leading distributed search engine. Anonymity Networks: The anonymity provided by Tor onion services and I2P leads to more willingness to host content that would otherwise be censored. However, the hosting implementation and location may bring issues, and the content is still hosted by a single entity which can be controlled. Federated: Being semi-decentralised, federated platforms such as Nextcloud and IRC make it easier for users to find an instance where they are welcomed. Providers with a different policy: DuckDuckGo indexes results that Google has delisted, though nothing in its design guarantees this. See: Darknets Platform Beguilement Code-words: Users can use code-words whose intended meaning is known only to the in-group. This is especially effective if the code-word is a common term. Word connotations: Users may use a common word in a context that gives it a banned meaning. This relies on the censor being unwilling to ban such a common term. Link relaying: Users can link to a page which in turn links to a banned website that promotes the intended message. Linking to an intermediate page rather than to the banned site directly prevents platforms from banning the direct link, and only requires an extra click. Proxying Web proxies: Proxy websites are configured to allow users to load external web pages through the proxy server, permitting the user to load the page as if it were coming from the proxy server and not the (blocked) source. However, depending on how the proxy is configured, a censor may be able to determine the pages loaded and/or determine that the user is using a proxy server. 
For example, the mobile Opera Mini browser uses a proxy-based approach employing encryption and compression in order to speed up downloads. This has the side effect of allowing it to circumvent several approaches to Internet censorship. In 2009 this led the government of China to ban all but a special Chinese version of the browser. Domain fronting: Circumvention software can implement a technique called domain fronting, where the destination of a connection is hidden by passing the initial requests through a content delivery network or other popular site which censors may be unwilling to block. This technique was used by messaging applications including Signal and Telegram. Tor's meek uses Microsoft's Azure cloud. However, large cloud providers such as Amazon Web Services and Google Cloud no longer permit its use. Website owners can use a free account to use a Cloudflare domain for fronting. SSH tunneling: By establishing an SSH tunnel, a user can forward all their traffic over an encrypted channel, so both outgoing requests for blocked sites and the responses from those sites are hidden from the censors, to whom it appears as unreadable SSH traffic. Virtual private network (VPN): Using a VPN, a user who experiences internet censorship can create a secure connection to a more permissive country and browse the internet as if they were situated in that country. Some services are offered for a monthly fee; others are ad-supported. According to GlobalWebIndex, in 2014 over 400 million people used virtual private networks to circumvent censorship or for an increased level of privacy, although this number is not verifiable. Tor: More advanced tools such as Tor route encrypted traffic through multiple servers to make the source and destination of traffic less traceable. It can in some cases be used to avoid censorship, especially when configured to use traffic obfuscation techniques. 
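The domain-fronting technique described above can be sketched as a plain data-structure illustration: the layers a censor can observe (the DNS query and the TLS SNI field) name a permitted "front" domain, while the HTTP Host header inside the encrypted tunnel names the real destination. The hostnames below are hypothetical, and this is a conceptual sketch rather than a working client:

```python
# Conceptual sketch of domain fronting (hypothetical hostnames).
# The censor sees only the outer, permitted domain; the true target
# appears solely in the HTTP Host header, which TLS encrypts.
def fronted_request(front_domain, hidden_domain, path="/"):
    return {
        "dns_query": front_domain,   # visible to the censor
        "tls_sni": front_domain,     # visible in the TLS ClientHello
        "http_host": hidden_domain,  # only readable after decryption
        "request_line": f"GET {path} HTTP/1.1",
    }

r = fronted_request("allowed.example.com", "blocked.example.com")
print(r["tls_sni"], "->", r["http_host"])
```

A real client would open a TLS connection to the front domain and send the request over it; the CDN then routes the request internally based on the Host header, which is why providers that disallow fronting reject requests whose Host header does not match the SNI.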
Traffic obfuscation A censor may be able to detect and block the use of circumvention tools through deep packet inspection. There are efforts to make circumvention tools less detectable by randomizing the traffic, attempting to mimic a whitelisted protocol, or tunneling traffic through a whitelisted site using techniques including domain fronting or meek. Tor and other circumvention tools have adopted multiple obfuscation techniques that users can use depending on the nature of their connection, which are sometimes called "pluggable transports". Internet Alternatives The functionality people seek may overlap with non-Internet services, such as traditional post, Bluetooth, or walkie-talkies. The following are some detailed examples: Alternative Data Transport Datacasting allows transmission of Web pages and other information via satellite broadcast channels, bypassing the Internet entirely. This requires a satellite dish and suitable receiver hardware, but provides a powerful means of avoiding censorship. Because the system is entirely receive-only for the end user, a suitably air-gapped computer can be impossible to detect. Sneakernets A sneakernet is the transfer of electronic information, especially computer files, by physically carrying data on storage media from one place to another. A sneakernet can move data regardless of network restrictions simply by not using the network at all. One example of a widely adopted sneakernet is El Paquete Semanal in Cuba. Adoption of circumvention tools Circumvention tools have seen spikes in adoption in response to high-profile blocking attempts; however, studies measuring adoption of circumvention tools in countries with persistent and widespread censorship report mixed results. In response to persistent censorship Measures and estimates of circumvention tool adoption have reported widely divergent results. 
A 2010 study by Harvard University researchers estimated that very few users use censorship circumvention tools—likely less than 3% of users even in countries that consistently implement widespread censorship. Other studies have reported substantially larger estimates, but have been disputed. In China, anecdotal reports suggest that adoption of circumvention tools is particularly high in certain communities, such as universities, and a survey by Freedom House found that users generally did not find circumvention tools difficult to use. Market research firm GlobalWebIndex has reported that there are over 35 million Twitter users and 63 million Facebook users in China (both services are blocked). However, these estimates have been disputed; Facebook's advertising platform estimates 1 million users in China, and other reports of Twitter adoption estimate 10 million users. Other studies have pointed out that efforts to block circumvention tools in China have reduced adoption of those tools; the Tor network previously had over 30,000 users connecting from China but as of 2014 had only approximately 3,000 Chinese users. In Thailand, internet censorship has existed since 2002, and filtering is sporadic and inconsistent. In a small-scale survey of 229 Thai internet users, a research group at the University of Washington found that 63% of surveyed users attempted to use circumvention tools, and 90% were successful in using those tools. Users often made on-the-spot decisions about use of circumvention tools based on limited or unreliable information, and had a variety of perceived threats, some more abstract and others more concrete based on personal experiences. In response to blocking events In response to the 2014 blocking of Twitter in Turkey, information about alternate DNS servers was widely shared, as using another DNS server such as Google Public DNS allowed users to access Twitter. 
The day after the block, the total number of posts made in Turkey was up 138%, according to Brandwatch, an internet measurement firm. After an April 2018 ban on the Telegram messaging app in Iran, web searches for VPN and other circumvention software increased as much as 48× for some search terms, but there was evidence that users were downloading unsafe software. As many as a third of Iranian internet users used the Psiphon tool in the days immediately following the block, and in June 2018 as many as 3.5 million Iranian users continued to use the tool. Anonymity, risks, and trust Circumvention and anonymity are different. Circumvention systems are designed to bypass blocking, but they do not usually protect identities. Anonymous systems protect a user's identity, and while they can contribute to circumvention, that is not their primary function. It is important to understand that open public proxy sites do not provide anonymity and can view and record the location of computers making requests as well as the websites accessed. In many jurisdictions accessing blocked content is a serious crime, particularly content that is considered child pornography, a threat to national security, or an incitement of violence. Thus it is important to understand the circumvention technologies and the protections they do or do not provide and to use only tools that are appropriate in a particular context. Great care must be taken to install, configure, and use circumvention tools properly. Individuals associated with high-profile rights organizations, dissident, protest, or reform groups should take extra precautions to protect their online identities. Circumvention sites and tools should be provided and operated by trusted third parties located outside the censoring jurisdiction that do not collect identities and other personal information. 
Best are trusted family and friends personally known to the circumventor, but when family and friends are not available, sites and tools provided by individuals or organizations that are known only by their reputations or through the recommendations and endorsements of others may need to be used. Commercial circumvention services may provide anonymity while surfing the Internet, but could be compelled by law to make their records and users' personal information available to law enforcement. Software There are five general types of Internet censorship circumvention software: CGI proxies use a script running on a web server to perform the proxying function. A CGI proxy client sends the requested URL embedded within the data portion of an HTTP request to the CGI proxy server. The CGI proxy server pulls the ultimate destination information from the data embedded in the HTTP request, sends out its own HTTP request to the ultimate destination, and then returns the result to the proxy client. A CGI proxy tool's security can be trusted as far as the operator of the proxy server can be trusted. CGI proxy tools require no manual configuration of the browser or client software installation, but they do require that the user use an alternative, potentially confusing browser interface within the existing browser. HTTP proxies send HTTP requests through an intermediate proxying server. A client connecting through an HTTP proxy sends exactly the same HTTP request to the proxy as it would send to the destination server unproxied. The HTTP proxy parses the HTTP request; sends its own HTTP request to the ultimate destination server; and then returns the response back to the proxy client. An HTTP proxy tool's security can be trusted as far as the operator of the proxy server can be trusted. HTTP proxy tools require either manual configuration of the browser or client-side software that can configure the browser for the user. 
Once configured, an HTTP proxy tool allows the user to use their normal browser interface transparently. Application proxies are similar to HTTP proxies, but support a wider range of online applications. Peer-to-peer systems store content across a range of participating volunteer servers, combined with technical techniques such as re-routing to reduce the amount of trust placed on volunteer servers or on social networks to establish trust relationships between server and client users. A peer-to-peer system can be trusted as far as the operators of the various servers can be trusted, or to the extent that the architecture of the peer-to-peer system limits the amount of information available to any single server and the server operators can be trusted not to cooperate to combine the information they hold. Re-routing systems send requests and responses through a series of proxying servers, encrypting the data again at each proxy, so that a given proxy knows at most either where the data came from or where it is going, but not both. This decreases the amount of trust required of the individual proxy hosts. Below is a list of different Internet censorship circumvention software: See also Anonymous P2P Bypassing content-control filters Computer surveillance Content-control software Crypto-anarchism Cypherpunk Domain fronting Electronic Frontier Foundation - an international non-profit digital rights advocacy and legal organization Freedom of information Freedom of speech Global Internet Freedom Consortium (GIFC) - a consortium of organizations that develop and deploy anti-censorship technologies Bypassing the Great Firewall of China Internet freedom Internet privacy Meshnet Open Technology Fund (OTF) – a U.S. 
Government funded program created in 2012 at Radio Free Asia to support global Internet freedom technologies Proxy list Tactical Technology Collective – a non-profit foundation promoting the use of free and open source software for non-governmental organizations, and producers of NGO-in-A-Box Tor References External links Casting A Wider Net: Lessons Learned in Delivering BBC Content on the Censored Internet, Ronald Deibert, Canada Centre for Global Security Studies and Citizen Lab, Munk School of Global Affairs, University of Toronto, 11 October 2011 Censorship Wikia, an anti-censorship site that catalogs past and present censored works, using verifiable sources, and a forum to discuss organizing against and circumventing censorship "Circumvention Tool Evaluation: 2011", Hal Roberts, Ethan Zuckerman, and John Palfrey, Berkman Centre for Internet & Society, 18 August 2011 "Circumvention Tool Usage Report: 2010", Hal Roberts, Ethan Zuckerman, Jillian York, Robert Faris, and John Palfrey, Berkman Centre for Internet & Society, 14 October 2010 Digital Security and Privacy for Human Rights Defenders, by Dmitri Vitaliev, Published by Front Line - The International Foundation for the Protection of Human Rights Defenders "Digital Tools to Curb Snooping", New York Times, 17 July 2013 "DNS Nameserver Swapping", Methods and Scripts useful for evading censorship through DNS filtering How to Bypass Internet Censorship, also known by the titles: Bypassing Internet Censorship or Circumvention Tools, a FLOSS Manual, 10 March 2011, 240 pp. 
Translations have been published in Arabic, Burmese, Chinese, Persian, Russian, Spanish, and Vietnamese Internet censorship wiki, provides information about different methods of access filtering and ways to bypass them "Leaping over the Firewall: A Review of Censorship Circumvention Tools", by Cormac Callanan (Ireland), Hein Dries-Ziekenheiner (Netherlands), Alberto Escudero-Pascual (Sweden), and Robert Guerra (Canada), Freedom House, April 2011 "Media Freedom Internet Cookbook" by the OSCE Representative on Freedom of the Media, Vienna, 2004 "Online Survival Kit", We Fight Censorship project of Reporters Without Borders "Selected Papers in Anonymity", Free Haven Project, accessed 16 September 2011 "Ten Things to Look for in a Circumvention Tool", Roger Dingledine, The Tor Project, September 2010 Internet security Content-control software Internet censorship Internet privacy Circumvention
32941222
https://en.wikipedia.org/wiki/Avaya%201100-series%20IP%20phones
Avaya 1100-series IP phones
The 1100-series IP phones are six desktop IP clients manufactured by Avaya for Unified communications which can operate on the SIP or UNIStim protocols. The SIP firmware supports presence selection and notification along with secure instant messaging. History The 1100 series of phones was originally manufactured in 2008 as an evolution of the IP Phone 2004 series of phones from Nortel. As such it began as a UNIStim-only phone, which meant that the phone was primarily supported only with Nortel-manufactured voice PBX systems. In 2009 a firmware upgrade was made available to allow the phone to function on the SIP protocol. This meant that the phone could now be used with a wide variety of PBX systems, including those produced by Nortel and Avaya, and even open-source PBX systems such as Asterisk (PBX). In 2010 a VPN client was added to the firmware as of release 0623C7F. This makes it easy to send the phone to a remote worker location with a typical cable modem or DSL Internet connection; the phone will use the VPN capability to securely establish an IP tunnel back to the corporate network and extend a standard voice telephone extension to any location on the Internet. Other users in the global corporation can dial the user's extension and the phone will ring. Models 1110 The 1110 supports a single telephone line and is used for lobby and conference center locations, with a fully backlit monochrome display of 143 x 32 pixels. This phone has a two-port 10/100 Ethernet switch. 1120E The 1120E is a four-line phone with gigabit Ethernet ports and a 240 x 160 pixel display. This device has a USB port and Bluetooth capability; in contrast, its sister model, the 1120SA, does not have functioning USB or Bluetooth for security reasons. 1120SA The 1120SA is a special-use phone which is certified for use by Sensitive Compartmented Information Facilities (SCIF) users, without an external device. 
The National Telecommunications Security Working Group (NTSWG) has approved the Avaya IP Phone 1120SA for deployment under the Director of Central Intelligence Directive (DCID 6/9), for VoIP and VoSIP (TSG-6 Type-acceptance CNSS Class-A and Class-B Certified). Unique security features The major security functions are: Positive disconnect Off-hook visual indicator Tamper-evident labeling No hands-free microphone Signaling encryption, media encryption and user-based authentication (network access control) Lockable tools menu - VPN-capable IP phone 1140E VPN-capable IP phone 1150E This phone can support up to 12 phone lines and 12 programmable soft keys, and seven additional fixed keys for agents (In-Calls, Not Ready, Make Set Busy, Supervisor, Supervisor Listen/Talk, Emergency and Activity). This IP ACD and contact center phone with a 240 x 160 pixel display, integrates two gigabit Ethernet ports. It also integrates Bluetooth and USB, to support wired and wireless mouse, keyboard, card readers, headsets, and flash memory drive devices. VPN-capable IP phone 1165E The phone can support up to sixteen lines through the UNIStim or Session Initiation Protocol (SIP) protocols. The display is a color QVGA resolution (320 x 240 pixels) LCD display, with the option to have multiple themes, background images, or digital pictures. For additional security and privacy all traffic is encrypted, for both voice and signaling. The integrated Bluetooth and USB ports support wired and wireless mouse, keyboard, card readers, headsets, and flash memory drive devices. The phone has two gigabit Ethernet switch ports, and is powered through an IEEE 802.3af PoE device or can use a local power adapter. Unique features Four soft keys Seven specialized feature keys: Quit, Directory, Message/Inbox, Shift/Outbox, Services, Copy, Expand. Supports the WML Browser 1100 expansion module Is an expansion module that may be installed on the 1120E, 1140E, or the 1150E phones. 
It extends the phone's capacity to 18 telephone lines and provides 18 programmable feature keys. Additional information Summary of supported protocols Session Initiation Protocol UNIStim 802.1x and EAP (MD-5) Signaling Encryption (AES - 128bit) Media Encryption 802.1ab Link Layer Discovery Protocol (LLDP) 802.3af PoE PVQM (Proactive Voice Quality Monitoring) PBX platform compatibility Avaya CS 1000 AS5300 BCM 400 (release 4.0 onwards) BCM 450 BCM 50 (release 3.0 onwards) Asterisk and Trixbox E-Metrotel UCx20, UCx50, UCx450, UCx1000 IP Call Servers Pure SIP platform from SIP providers See also Avaya Government Solutions Avaya Aura Application Server 5300 (AS5300) Further reading References External links 1100 Series IP Deskphones support site Repairing a dead Nortel/Avaya 1100 Series IP Phone Avaya/Nortel 1100 series phone to work with Asterisk IP Phone VoIP hardware
32961506
https://en.wikipedia.org/wiki/RDRAND
RDRAND
RDRAND (for "read random"; known as Intel Secure Key Technology, previously known as Bull Mountain) is an instruction for returning random numbers from an Intel on-chip hardware random number generator which has been seeded by an on-chip entropy source. RDRAND is available in Ivy Bridge processors and is part of the Intel 64 and IA-32 instruction set architectures. AMD added support for the instruction in June 2015. The random number generator is compliant with security and cryptographic standards such as NIST SP 800-90A, FIPS 140-2, and ANSI X9.82. Intel also requested Cryptography Research Inc. to review the random number generator in 2012, which resulted in the paper Analysis of Intel's Ivy Bridge Digital Random Number Generator. RDSEED is similar to RDRAND and provides lower-level access to the entropy-generating hardware. The RDSEED generator and processor instruction rdseed are available with Intel Broadwell CPUs and AMD Zen CPUs. Overview The CPUID instruction can be used on both AMD and Intel CPUs to check whether the RDRAND instruction is supported. If it is, bit 30 of the ECX register is set after calling CPUID standard function 01H. AMD processors are checked for the feature using the same test. RDSEED availability can be checked on Intel CPUs in a similar manner. If RDSEED is supported, bit 18 of the EBX register is set after calling CPUID standard function 07H. The opcode for RDRAND is 0x0F 0xC7, followed by a ModRM byte that specifies the destination register and optionally combined with a REX prefix in 64-bit mode. Intel Secure Key is Intel's name for both the RDRAND instruction and the underlying random number generator (RNG) hardware implementation, which was codenamed "Bull Mountain" during development. Intel calls their RNG a "digital random number generator" or DRNG. 
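The feature checks described above reduce to single-bit tests on the registers returned by CPUID. The sketch below (Python, for illustration only; real code would execute CPUID via a compiler intrinsic or inline assembly) applies the bit arithmetic to hypothetical register values:

```python
# Bit tests for RDRAND/RDSEED support as described above.
# ECX comes from CPUID leaf 01H; EBX comes from CPUID leaf 07H.
# The register values used below are hypothetical examples.
RDRAND_ECX_BIT = 30  # CPUID.01H:ECX.RDRAND[bit 30]
RDSEED_EBX_BIT = 18  # CPUID.07H:EBX.RDSEED[bit 18]

def has_rdrand(ecx: int) -> bool:
    return bool((ecx >> RDRAND_ECX_BIT) & 1)

def has_rdseed(ebx: int) -> bool:
    return bool((ebx >> RDSEED_EBX_BIT) & 1)

# A hypothetical ECX value with bit 30 set indicates RDRAND support:
print(has_rdrand(1 << 30), has_rdseed(1 << 18))
```

In C, the register values would typically be obtained with a compiler helper such as GCC's `__get_cpuid` before applying the same masks.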
The generator takes pairs of 256-bit raw entropy samples generated by the hardware entropy source and applies them to an Advanced Encryption Standard (AES) conditioner (in CBC-MAC mode) which reduces them to a single 256-bit conditioned entropy sample. A deterministic random-bit generator called CTR_DRBG defined in NIST SP 800-90A is seeded by the output from the conditioner, providing cryptographically secure random numbers to applications requesting them via the RDRAND instruction. The hardware will issue a maximum of 511 128-bit samples before changing the seed value. Using the RDSEED operation provides access to the conditioned 256-bit samples from the AES-CBC-MAC. The RDSEED instruction was added to Intel Secure Key for seeding another pseudorandom number generator, and is available in Broadwell CPUs. The entropy source for the RDSEED instruction runs asynchronously on a self-timed circuit and uses thermal noise within the silicon to output a random stream of bits at a rate of 3 Gbit/s, slower than the effective 6.4 Gbit/s obtainable from RDRAND (both rates are shared between all cores and threads). The RDSEED instruction is intended for seeding a software PRNG of arbitrary width, whereas RDRAND is intended for applications that merely require high-quality random numbers. If cryptographic security is not required, a software PRNG such as Xorshift is usually faster. Performance On an Intel Core i7-7700K, 4500 MHz (45 × 100 MHz) processor (Kaby Lake-S microarchitecture), a single RDRAND or RDSEED instruction takes 110 ns, or 463 clock cycles, regardless of the operand size (16/32/64 bits). This number of clock cycles applies to all processors with the Skylake or Kaby Lake microarchitecture. On Silvermont microarchitecture processors, each of the instructions takes around 1472 clock cycles, regardless of the operand size; and on Ivy Bridge processors RDRAND takes up to 117 clock cycles. 
On an AMD Ryzen CPU, each of the instructions takes around 1200 clock cycles for a 16-bit or 32-bit operand, and around 2500 clock cycles for a 64-bit operand. An astrophysical Monte Carlo simulation study examined the time to generate 10^7 64-bit random numbers using RDRAND on a quad-core Intel i7-3740QM processor. The authors found that a C implementation of RDRAND ran about 2× slower than the default random number generator in C, and about 20× slower than the Mersenne Twister. Although a Python module for RDRAND has been constructed, it was found to be 20× slower than the default random number generator in Python, though a direct performance comparison between a PRNG and a CSPRNG is of limited value. A microcode update released by Intel in June 2020, designed to mitigate the CrossTalk vulnerability (see the security issues section below), negatively impacts the performance of RDRAND and RDSEED due to additional security controls. On processors with the mitigations applied, each affected instruction incurs additional latency and simultaneous execution of RDRAND or RDSEED across cores is effectively serialised. Intel introduced a mechanism to relax these security checks, thus reducing the performance impact in most scenarios, but Intel processors do not apply this security relaxation by default. Compilers Visual C++ 2015 provides intrinsic wrapper support for the RDRAND and RDSEED functions. GCC 4.6+ and Clang 3.2+ provide intrinsic functions for RDRAND when -mrdrnd is specified in the flags, also defining a preprocessor macro to allow conditional compilation. Newer versions additionally provide immintrin.h to wrap these built-ins into functions compatible with version 12.1+ of Intel's C Compiler. These functions write random data to the location pointed to by their parameter, and return 1 on success. Applications It is an option to generate cryptographically secure random numbers using RDRAND and RDSEED in OpenSSL, to help secure communications. A scientific application of RDRAND can be found in astrophysics. 
Radio observations of low-mass stars and brown dwarfs have revealed that a number of them emit bursts of radio waves. These radio waves are caused by magnetic reconnection, the same process that causes solar flares on the Sun. RDRAND was used to generate large quantities of random numbers for a Monte Carlo simulator, to model physical properties of the brown dwarfs and the effects of the instruments that observe them. The study found that about 5% of brown dwarfs are sufficiently magnetic to emit strong radio bursts. The authors also evaluated the performance of the RDRAND instruction in C and Python compared to other random number generators. Reception In September 2013, in response to a New York Times article revealing the NSA's effort to weaken encryption, Theodore Ts'o publicly posted concerns about the use of RDRAND for /dev/random in the Linux kernel. Linus Torvalds dismissed concerns about the use of RDRAND in the Linux kernel and pointed out that it is not used as the only source of entropy for /dev/random, but rather used to improve the entropy by combining the values received from RDRAND with other sources of randomness. However, Taylor Hornby of Defuse Security demonstrated that the Linux random number generator could become insecure if a backdoor is introduced into the RDRAND instruction that specifically targets the code using it. Hornby's proof-of-concept implementation works on an unmodified Linux kernel prior to version 3.13. The issue was mitigated in the Linux kernel in 2013. Developers changed the FreeBSD kernel away from using RDRAND and VIA PadLock directly with the comment "For FreeBSD 10, we are going to backtrack and remove RDRAND and Padlock backends and feed them into Yarrow instead of delivering their output directly to /dev/random. It will still be possible to access hardware random number generators, that is, RDRAND, Padlock etc., directly by inline assembly or by using OpenSSL from userland, if required, but we cannot trust them any more." 
Since FreeBSD 11, FreeBSD's /dev/random has used Fortuna, with RDRAND feeding into it as one of its entropy sources. Security issues On 9 June 2020, researchers from Vrije Universiteit Amsterdam published a side-channel attack named CrossTalk (CVE-2020-0543) that affected RDRAND on a number of Intel processors. They discovered that outputs from the hardware digital random number generator (DRNG) were stored in a staging buffer that was shared across all cores. The vulnerability allowed malicious code running on an affected processor to read RDRAND and RDSEED instruction results from a victim application running on another core of that same processor, including applications running inside Intel SGX enclaves. The researchers developed a proof-of-concept exploit which extracted a complete ECDSA key from an SGX enclave running on a separate CPU core after only one signature operation. The vulnerability affects scenarios where untrusted code runs alongside trusted code on the same processor, such as in a shared hosting environment. Intel refers to the CrossTalk vulnerability as Special Register Buffer Data Sampling (SRBDS). In response to the research, Intel released microcode updates to mitigate the issue. The updated microcode ensures that off-core accesses are delayed until sensitive operations, specifically the RDRAND, RDSEED, and EGETKEY instructions, are completed and the staging buffer has been overwritten. The SRBDS attack also affects other instructions, such as those that read MSRs, but Intel did not apply additional security protections to them due to performance concerns and the reduced need for confidentiality of those instructions' results. A wide range of Intel processors released between 2012 and 2019 were affected, including desktop, mobile, and server processors. 
The mitigations themselves resulted in negative performance impacts when using the affected instructions, particularly when executed in parallel by multi-threaded applications, due to increased latency introduced by the security checks and the effective serialisation of affected instructions across cores. Intel introduced an opt-out option, configurable via the IA32_MCU_OPT_CTRL MSR on each logical processor, which improves performance by disabling the additional security checks for instructions executing outside of an SGX enclave. See also AES instruction set Bullrun (decryption program) OpenSSL wolfSSL Notes References External links RdRand .NET Open Source Project X86 microprocessors X86 instructions Machine code Random number generation X86 architecture
32962079
https://en.wikipedia.org/wiki/Brenno%20de%20Winter
Brenno de Winter
Brenno de Winter (born 6 December 1971, in Ede) is a former Dutch ICT and investigative journalist. He writes for Linux Magazine, Computer!Totaal, NU.nl, and Webwereld, and is a commenter for the PowNews programme on PowNed TV. Brenno is also a podcaster and hosts Laura Speaks Dutch. He caused controversy by submitting requests for information on the basis of the Open Government Act (WOB), including requests to Jeltje van Nieuwenhoven (regarding her role as OV ambassador) and hundreds of WOB requests to all Dutch municipalities and provinces. Because not all agencies fulfilled the WOB requests, de Winter filed lawsuits against them. The Dutch Association of Journalists (NVJ) supported de Winter. A court in The Hague ruled in de Winter's favour on 4 May 2010. In April 2010, de Winter was involved in the disclosure of the expenditure of the FENS funds (1.3 billion euros) by the NS. After de Winter's publications and media appearances about how easily the OV-chipkaart, the public transport smart card in the Netherlands, could be compromised, the Minister of Infrastructure and the Environment secured a postponement of about one month for its introduction in the Haaglanden region. Following the disclosure, the public prosecutor opened a criminal investigation against de Winter; a legal defence fund set up to support him met its fundraising goal within an hour. Open source software Since 1993, de Winter has developed software for commercial applications. Since 1995 he has focused on projects built on open source software. Since the late 1990s, he has given lectures and training on the subject and has advised organizations on their IT security. Journalism Since 2000, de Winter has been a professional journalist. His articles focus on the business side of the IT industry and the technical aspects of open source software and IT security. In his work as an investigative journalist, he makes frequent use of the WOB. In 2011, the journalism magazine Villamedia named de Winter journalist of the year. 
In July 2012, de Winter broke a story about Dutch employers' censorship, after an employee of the Dutch company Unisys Netherlands was threatened with termination for giving a presentation about online censorship at the conference Last H.O.P.E. in New York. In September 2012, de Winter released a video and accompanying news story showing how he was able to use a fake ID to gain access to numerous Dutch and European government offices, including, amongst many others, the European Parliament and four Dutch government ministries, including the Ministry of Justice and the Ministry of the Interior, the Dutch Secret Service, and the Dutch National Cyber Security Center. OV-chipcard Security Jeltje van Nieuwenhoven wrote a report during her tenure as OV (public transport) ambassador called "The OV-chipcard, the Traveler and Confidence", which concluded that "many travelers are not very concerned about privacy; it's only an issue when the media makes an issue of it. However, a small amount of privacy is important." De Winter found that these and other statements and recommendations in the report were not substantiated, and so sent a WOB request in order to gain an understanding of the data on which the claims were based. Van Nieuwenhoven rejected this application; de Winter then took the matter to court. The OV-chipcard/MIFARE uses 48-bit encryption and offers no modern security features. Netherlands in Open Connection During the Open Standards and Open Source Software programme for government (OSOSS) (2002–2007) and its successor Netherlands in Open Connection (NOiV) (2008–2011), de Winter followed developments critically. It turned out that government IT departments were not implementing the policy adopted by the House of Representatives. De Winter issued a WOB request to the Dutch Association of Municipalities (VNG) to gain more insight into their performance. 
When the VNG refused to make the information public, de Winter filed a lawsuit, and later sent WOB requests to all individual municipalities, provinces, and many independent administrative bodies. On 16 December 2009, at the presentation of the book A Wall of Rubber, about the WOB in journalistic practice, de Winter came in for criticism from the parliamentarian Pierre Heijnen (PvdA); that criticism was in turn rejected by other attendees at the meeting, including former minister Bram Peper. Criminal proceedings and dismissal In 2011 de Winter was the subject of a criminal investigation. Because he had uncovered weaknesses in the chip card and its central system, the Public Prosecutor assumed he had been illegally cracking the cards. The company behind the chip card filed a criminal complaint, and the Public Prosecutor regarded de Winter as a suspect for manipulating value cards, possessing the means to do so, and computer intrusion. He faced imprisonment for up to six years. De Winter stated that the manipulation was performed as part of a journalistic investigation to expose the weaknesses of the OV-chipcard, and denounced the criminal investigation. Journalists from, among others, the NOS, the public broadcaster PowNed, Computerworld, and RTV Rijnmond were also involved in demonstrating the weaknesses; no complaints were filed against them. In order to pay the legal costs, NU.nl, Computerworld, PC-Active, and GeenStijl started a campaign to raise money. Within an hour, the necessary €2,500 was raised, and the funds kept coming. The case dragged on for some time and, according to de Winter, affected his work; he said he was also being hindered in his work by the prosecution. The Dutch Association of Journalists supported de Winter in his case. On 8 September, the prosecution announced that Brenno de Winter would not be prosecuted for fraud associated with the smartcard. The prosecutor noted that de Winter had acted carefully and had not exceeded the limits of what was permissible. 
Moreover, the prosecution considered it significant in this case that de Winter, a professional journalist, had exposed the vulnerability of the smartcard and had been criminally investigated for fraud at the request of the corporation associated with the card. Publications Computer!Totaal: Linux in practice, 2004, Addison Wesley Privacy and the Internet: How do I protect my identity?, 2011, Academic Service Open Map, 2011, Academic Service References External links Laura Speaks Dutch Podcast bigwobber.nl, website, 24 September 2012 1971 births Living people Dutch journalists People from Ede, Netherlands
33020823
https://en.wikipedia.org/wiki/56-bit%20encryption
56-bit encryption
In computing, 56-bit encryption refers to a key size of fifty-six bits, or seven bytes, for symmetric encryption. While stronger than 40-bit encryption, this still represents a relatively low level of security in the context of a brute force attack. Description The US government traditionally regulated encryption for reasons of national security, law enforcement and foreign policy. Encryption was regulated from 1976 by the Arms Export Control Act until control was transferred to the Department of Commerce in 1996. 56-bit refers to the size of a symmetric key used to encrypt data, with the number of unique possible keys being 2^56 (72,057,594,037,927,936). 56-bit encryption has its roots in DES, which was the official standard of the US National Bureau of Standards from 1976, and later also the RC5 algorithm. US government regulations required any users of stronger 56-bit symmetric keys to submit to key recovery through algorithms like CDMF or key escrow, effectively reducing the key strength to 40-bit, and thereby allowing organisations such as the NSA to brute-force this encryption. Furthermore, from 1996 software products exported from the United States were not permitted to use stronger than 56-bit encryption, requiring different software editions for the US and export markets. In 1999, the US allowed 56-bit encryption to be exported without key escrow or any other key recovery requirements. The advent of commerce on the Internet and faster computers raised concerns about the security of electronic transactions, initially with 40-bit, and subsequently also with 56-bit encryption. In February 1997, RSA Data Security ran a brute force competition with a $10,000 prize to demonstrate the weakness of 56-bit encryption; the contest was won four months later. In July 1998, a successful brute-force attack was demonstrated against 56-bit encryption with Deep Crack in just 56 hours. 
In 2000, all restrictions on key length were lifted, except for exports to embargoed countries. 56-bit DES encryption is now obsolete, having been replaced as a standard in 2002 by the 128-bit (and stronger) Advanced Encryption Standard. DES continues to be used as a symmetric cipher in combination with Kerberos because older products do not support newer ciphers like AES. See also 40-bit encryption Pretty Good Privacy References Symmetric-key cryptography History of cryptography
33064807
https://en.wikipedia.org/wiki/Connectify
Connectify
Connectify is an American software company that develops networking software for consumers, professionals and companies. Connectify Hotspot is virtual router software for Microsoft Windows, and Speedify is a mobile VPN service with channel bonding capabilities available for individuals, families and teams. History Connectify launched their first product, Connectify Hotspot, in October 2009. It can enable a Windows PC to serve as a router over Ethernet or Wi-Fi. Along with a Windows 7, 8 or 10 certified Wi-Fi device it can act as a wireless access point. This enables users to share files, printers, and Internet connections between multiple computing devices without the need for a separate physical access point or router. Connectify spent the next two years improving the product, first making it free and ad-supported. In 2011, Connectify switched to a freemium commercial model which included premium features for paying customers. These features included extended support of 3G/4G mobile devices, fully customizable SSIDs and premium customer support. In 2011, Connectify received funding from In-Q-Tel to begin developing a more powerful and secure remote networking platform and a connection-aggregation application. Connectify used this funding to develop the foundation of the application, and then in 2012 turned to the crowdfunding site Kickstarter to raise additional funding to develop Connectify Dispatch. Dispatch was a load balancer which could combine any number of Ethernet, Wi-Fi or mobile Internet connections. In 2014, Connectify launched Speedify, a channel bonding application for PCs running Microsoft Windows and macOS. In January 2016, Speedify for Mobile was launched at CES, adding support for iOS and Android. In December 2016, Speedify added encryption on all supported platforms, turning it into a mobile virtual private network. Connectify has released other apps: Pingify (2017), a mobile network diagnostics tool, and EdgeWise Connect (2019). 
Products Connectify Hotspot Connectify Hotspot is a virtual router software application available for PCs running Windows 7 or a later version. It was launched in 2009 by Connectify and it has 3 main functions: Wi-Fi hotspot - users can share the Internet connection from their PC through a Wi-Fi adapter. The free version of Connectify Hotspot only allows sharing of wired or Wi-Fi Internet via Wi-Fi. The paid versions allow users to share any type of Internet connection, including 4G / LTE. through Wi-Fi or Ethernet. Wired router - users can share a computer's Wi-Fi connection via Ethernet. This functionality is only available in the paid versions of Connectify Hotspot. Wi-Fi repeater - users can extend the range of a Wi-Fi network and bridge other devices on that network directly. This functionality is only available in the paid versions of Connectify Hotspot. Starting with version 2017, Connectify Hotspot also incorporates a universal ad blocker for clients connecting to the Wi-Fi or Ethernet network it creates. This feature is available for free. Speedify Speedify is a mobile VPN bonding service available for devices running Windows, macOS, Android and iOS. Speedify 1.0 was first launched in June 2014 as a channel bonding service. Speedify can combine multiple Internet connections given its link aggregation capabilities. In theory, this should offer faster Internet connection speeds and failover protection. Starting 2016, as a VPN service, Speedify's privacy policy states they do not log what users do or what sites they visit through the Speedify service. Starting with version 10, in 2020, Speedify provides QoS for live streams with its new streaming mode, which dynamically prioritizes streaming traffic. Speedify can be used for free for the first 2 GB each month; there are monthly and yearly subscription plans available. Tom's Guide picked Speedify as the fastest VPN in late 2018. Speedify became Citrix Ready in late 2019. 
Speedify for Teams Speedify for Teams is Connectify's business product. It is a multi-seat Speedify mobile VPN subscription with added centralized account management capabilities. Pingify Pingify is a free utility for testing network coverage, Internet reliability and VPN dependability. It is currently available for iOS devices and lets users run ping tests from an iPhone. Pingify was developed as an internal mobile network diagnostics tool for testing Speedify on iOS; in 2017, it was made publicly available. EdgeWise Connect EdgeWise Connect is a simple VPN available for iPhone and Android which can seamlessly move web traffic between the Wi-Fi and cellular data connections. The service uses channel bonding technology and is fully encrypted. EdgeWise Connect was launched in early 2019 and can be used for free for a few hours each day; there are monthly and yearly subscription plans available. References External links Speedify website EdgeWise Connect website Pingify website 2009 software Windows-only shareware Windows-only freeware Communication software Routing software File sharing software Servers (computing)
33074879
https://en.wikipedia.org/wiki/Vizada
Vizada
Vizada is a worldwide satellite communications services provider which operates land earth stations that connect satellite communications to terrestrial telecommunications and IP networks. Vizada provides both mobile and fixed satellite telecommunications to a wide array of markets including merchant shipping, defense and government, fishing and yachting, oil and gas, mining, and non-governmental organizations. The product offering covers maritime, land and aeronautical services. In 2011, the Vizada Group was acquired by EADS, to be integrated as a subsidiary of Astrium. History Vizada's history stems from the acquisitions and reorganizations of several leading global telecommunications entities, including France Telecom and Telenor, as well as vertically specialized companies such as Sait Communications and TDCom. 1940s: The company has been involved in maritime communications since World War II, when it began operating a network of radio communications in Europe. 1970s: Originally part of Norwegian operator Telenor, Vizada became the first to use satellites for domestic use. 1976: Eik teleport, situated on the southwest coast of Norway, was established to provide communications to oil platforms in the North Sea. 1979: Creation of the International Maritime Satellite Organisation (Inmarsat) under the auspices of the United Nations and the IMO, with France, Norway and the US among the founding members. 1991: The France Telecom group was the fifth largest shareholder in Inmarsat, the international maritime satellite provider. In 1991, Telenor Satellite Services at Eik Teleport developed the Sealink Maritime VSAT solution to provide communications services between remote locations at sea and fleet management offices on shore. 1997: Creation of Taide Network (now Vizada Networks) in Holmestrand, Norway, from an academic project initiated at universities in Oslo, Norway, and Vilnius, Lithuania in the early 1990s. 
VSAT services operations were established in the Netherlands, the Czech Republic, Slovakia, Poland and Austria. 1999: Satellite services were split from Telenor core business into independent operating units: Telenor Satellite Services (maritime Inmarsat and fixed satellite services) and Telenor Satellite Network (VSAT networks). They would later form one single company. 2001–2002: France Telecom Mobile Satellite Communications acquired DT Mobile Satellite (Germany), Glocall (Netherlands), and TDcom (France). 2002: Telenor Satellite Services acquired Comsat Mobile Communications (USA) and Sait Communications, a leading maritime communications service provider which was renamed Marlink. 2007: Apax Partners acquired FTMSC and Telenor Satellite Services and formed Vizada in 2007. 2010: Vizada, through its subsidiary Marlink, received the Inmarsat 2010 partner award recognizing Vizada as a solutions innovator for the merchant shipping market. 2011: EADS acquired all assets of the Vizada Group for approximately US$1 billion for its Astrium Services business unit. The acquisition was concluded in December 2011. 2015: Airbus DS decided to sell its Satcom division. Vizada Group Vizada Group operates offices and satellite gateways worldwide. It has three main subsidiaries: Vizada, which markets global mobile and fixed satellite communications; Vizada Networks, which specializes in fixed satellite and hybrid network solutions; and Marlink, which focuses on maritime satellite communication. Vizada Vizada is an independent provider of global satellite-based mobility services. The company, which was formed by the merger of Telenor Satellite Services and France Telecom Mobile Satellite Communications, offers mobile and fixed connectivity services from multiple satellite network operators through a network of 400 service provider partners. 
These services are packaged with Vizada Solutions, designed to add business value to basic connectivity, and delivered through Vizada's global teleport network – five state-of-the-art satellite facilities strategically positioned around the world. Vizada works with the broadest range of network providers in the industry, including: Inmarsat, Iridium, Thuraya, Eutelsat, Intelsat, Loral, SES World Skies, and SES Americom. Vizada Networks Vizada Networks provides international and regional broadband fixed satellite services, international telephony and hybrid network solutions. Vizada Networks works with the major fixed satellite operators, such as Eutelsat, Intelsat, and SES World Skies. Vizada Networks' main offices are located in Oslo and Holmestrand in Norway. Marlink Marlink is a large maritime satellite communications provider equipping vessels with mobile-satellite services and VSAT services. Products Sea For more than 25 years, Vizada has provided maritime communication products based on Inmarsat and Iridium technologies. Vizada also developed its own satellite communication solution based on VSAT technology called Pharostar. Vizada's products provide broadband data and voice communications onboard seabound vessels. Land Vizada provides land satellite communication products based on Inmarsat, Iridium and Thuraya technologies. Vizada also provides full broadband satellite solutions based on a customized VSAT network. Air Vizada provides aero satellite communications products based on the Inmarsat Swift services. These products enable high speed and IP based communication both in cockpit and cabin for voice communication, internet browsing, fax and file sharing. Solutions Vizada provides services around satellite communications such as data traffic management tools, secure communication tools, emailing services, data compression and encryption services, prepaid cards for satellite communications and fixed to satellite mobile communications. 
Infrastructure Land earth stations Vizada operates five earth stations around the world that connect satellites to terrestrial networks. Vizada also has an earth station in Japan through a partnership with KDDI. The earth station is located in Yamaguchi. Point of presence Vizada's IP points of presence provide local access to the global broadband services. They are located in: Amsterdam, the Netherlands Aussaguel, France Hong Kong, Hong Kong London, United Kingdom New York City, United States Oslo, Norway Partnership Telecoms Sans Frontières In 1998, Vizada entered into partnership with Télécoms Sans Frontières (TSF). Vizada provides TSF with mobile satellite services and training for TSF personnel. TSF used Vizada's services on 14 separate occasions in 2009, and notably for the 2010 Haiti earthquake. In 2011, Vizada provided TSF with hardware, airtime for the voice calls and internet connections on the border between Tunisia and Libya. See also EADS EADS Astrium Iridium Thuraya VSAT References External links Vizada official website Vizada Networks website Marlink website Communications satellite operators
33106960
https://en.wikipedia.org/wiki/Joanna%20Shields%2C%20Baroness%20Shields
Joanna Shields, Baroness Shields
Joanna Shields, Baroness Shields, OBE (born 12 July 1962) is a British-American technology industry veteran and life peer who currently serves as Group CEO for BenevolentAI. Shields previously served as UK Minister for Internet Safety and Security, Under-Secretary of State, and Advisor on the Digital Economy to David Cameron. She was made a Life Peer in the House of Lords in 2014. In 2016, Baroness Shields was appointed the Prime Minister's Special Representative on Internet Safety. Before joining the government, Shields spent over 25 years building some of the world's best-known technology companies, including Electronics for Imaging, RealNetworks, Google, Aol and Facebook, as well as leading several start-ups to successful acquisitions, including Bebo, Decru and Veon. Early life Shields was born in 1962 in St. Marys, Pennsylvania, and was the second of five children. Career history In 1984 as a graduate business student, Shields worked part-time at the National Affairs Office of Deloitte in Washington, D.C. Shields was assigned the task of writing a business plan for a start-up called NDC (National Digital Corporation), an early pioneer in the transmission and archival of digital media that was acquired by Gruner + Jahr. During her time there she became convinced that digital technology was going to change the way we live our lives and interact with each other. While at NDC, Shields met Israeli entrepreneur and founder of Scitex, Efi Arazi, who had formed a new venture called Efi (Electronics for Imaging, Nasdaq:EFII). In 1989 she moved to Silicon Valley and joined the company, where she began working as a product manager and over the course of eight years rose through the ranks to become VP of Production Systems, a division that designed, built and manufactured ASICs, embedded controllers and servers that connected digital printing systems to networks from companies such as HP, Canon, Ricoh, Minolta, Fuji Xerox and Kodak. 
In 1997 Shields became CEO of Veon, an interactive video technology company whose intellectual property included patents for adding interactive links to video streams that became part of the MPEG4 streaming video standard. Philips acquired Veon in 2000. After closing the Veon transaction, Shields was hired by the company that invented streaming audio and video, RealNetworks, to run its businesses outside the United States. Shields briefly joined former Efi CEO and colleague Dan Avida to build the business of a storage encryption company he founded called Decru, where she played an instrumental role in forming a partnership with Network Appliance, the company that eventually acquired Decru for $272m. Shields then became a managing director for Google Europe, Middle East and Africa, where she was responsible for developing the company's advertising syndication business, AdSense, and for the acquisition of content and partnerships for products such as Google Mail, Video (before the YouTube acquisition), Maps, Local, News and Books. In late 2006 Shields was approached by Benchmark Capital to step in as CEO of the social networking startup Bebo. At Bebo, Shields introduced Open Media, opening Bebo's platform for media companies to reach its 50M user base and enabling media owners to monetise their content, and Bebo Originals, a series of original online shows. The first Bebo Original, KateModern, was viewed 85M times, was nominated for two BAFTA awards and won the Broadcasting Press Guild Innovation Award for Outstanding Development in Broadcasting. After engineering Bebo's acquisition for $850m by Aol in May 2008, Shields briefly relocated to New York City to head Aol's newly created People Networks, overseeing the company's social and communications assets including AIM (Aol Instant Messenger) and ICQ. Bebo's development continued under Shields with the release of Timeline in 2009, the first social network to organise and represent life events in a linear way. 
Timeline eventually became standard on social networks when Facebook released a feature of the same name in 2012. In 2009 Shields was recruited by former Google colleague Sheryl Sandberg to run Facebook in Europe, the Middle East and Africa as VP and managing director. In this role she built EMEA into the company's largest region, focusing on making Facebook the world's most valuable marketing, communications and customer-service platform for brands and leveraging Facebook's Open Graph as a growth engine for some of Europe's hottest startups and established businesses. She also presided over the growth of Facebook's international presence to over one billion users. In May 2018, Shields was announced as the Group CEO of BenevolentAI, a London-based medical startup.

Industry recognition

Shields was ranked No. 1 on the Wired 100 in 2011 and No. 6 in the MediaGuardian 100 in 2012. In February 2013 she was named to the list of the 100 most powerful women in the United Kingdom by BBC Radio 4's Woman's Hour. In July 2013 Computer Weekly named Shields the Most Influential Woman in UK IT, and the same month she received the British Interactive Media Association's Lifetime Achievement Award.

Government work

In October 2012 Shields was recruited by Prime Minister Cameron to lead HM Government's Tech City initiative and become the UK's Ambassador for Digital Industries. She was Chair and CEO of Tech City from January 2013 to May 2015, during which time she worked with the London Stock Exchange to launch its new high-growth segment and created Future Fifty, a programme to identify the 50 fastest-growing businesses and support them on the path to an IPO. Future Fifty was launched by the Chancellor, George Osborne, in April 2013. Shields has also been involved in promoting the policies and conditions that foster entrepreneurship across the EU and, along with eight other leading EU entrepreneurs, launched the EU Startup Manifesto, which aims to transform the European Union into a startup-friendly region.
Shields was appointed the Prime Minister's Adviser on the Digital Economy in the summer of 2014. She served in that role until the May 2015 general election, after which David Cameron appointed her Minister for Internet Safety and Security and Parliamentary Under-Secretary of State for both the Home Office and the Department for Culture, Media and Sport in the newly elected majority government. In July 2016 she was reappointed to her post by Prime Minister Theresa May. In December 2016, after 18 months at the Department for Culture, Media and Sport and a year as a joint minister, Shields became a full-time Home Office minister and the Prime Minister's Special Representative on Internet Crime and Harms, in addition to her role as Minister for Internet Safety and Security. Shields served as Minister for Internet Safety and Security until June 2017, when she stepped down from her ministerial post ahead of the general election to focus on her work with the WeProtect Global Alliance and her role as the Prime Minister's Special Representative on Internet Crime and Harms.
She currently serves the government in the following positions:
Founder and Member, WeProtect Global Alliance - End Child Sexual Exploitation Online (formerly the US/UK taskforce to combat child online abuse and exploitation)
Prime Minister's Special Representative on Internet Safety
Member, Child Protection Implementation Taskforce
Member, Tackling Extremism in Communities Implementation Taskforce

Shields has formerly served in the following positions:
Parliamentary Under-Secretary of State, Home Office, and Minister for Internet Safety and Security
Parliamentary Under-Secretary of State, Department for Culture, Media and Sport
Co-Chair, UK Council for Child Internet Safety
Member, Inter-Ministerial Group on Violence Against Women and Girls
Member, Inter-Ministerial Group on Child Sexual Abuse

Profile on parliament.uk

Government focus: to make the internet safer by tackling online child abuse, exploitation and access to harmful content; to help combat online radicalisation, counter extremism and promote informed digital citizenship.

Spoken material to date:
https://publications.parliament.uk/pa/ld201516/ldhansrd/ldallfiles/peers/lord_hansard_7037_od.html
https://www.gov.uk/government/speeches/securing-childrens-safety-in-a-digital-world

Government honours

Shields was appointed OBE in the 2014 New Year Honours List for "services to digital industries and voluntary service to young people". After being nominated as a working peeress in August 2014, Shields was elevated to the peerage on 16 September 2014, taking the title Baroness Shields, of Maida Vale in the City of Westminster.

Founding of WeProtect Global Alliance

On 22 July 2013, Prime Minister David Cameron made a speech regarding the proliferation and accessibility of child abuse images on the Internet and about cracking down on online pornography.
The Prime Minister announced that a new UK-US taskforce would be created to lead a global alliance of the biggest Internet companies in tackling indecent images of children online. Shields would head up this initiative, working with the UK and US governments, law enforcement agencies and industry to maximise the international effort. In April 2014 she founded WePROTECT, an initiative designed to engage Internet companies in the development of technology to combat online child abuse and exploitation. Led by the UK government, it grew into a global alliance supported by over 70 countries, 20 technology companies and NGOs working to stop the global crime of online child sexual abuse and exploitation. In December 2014, David Cameron convened the first WePROTECT global summit in London. This summit and a second, hosted in Abu Dhabi by the Interior Ministry of the United Arab Emirates, unified a global multi-stakeholder coalition committed to Statements of Action I and II. Shields was the opening keynote speaker at the UAE summit in November 2015 and at the United Nations launch of the End Violence Against Children Global Partnership and WePROTECT Initiative Fund in New York in July 2016. The WePROTECT Global Alliance to End Child Sexual Exploitation Online combines two major initiatives: the Global Alliance, led by the U.S. Department of Justice and the EU Commission, and WePROTECT, which was convened by the UK. This merged initiative has unprecedented reach, with 70 countries already members of WePROTECT or the Global Alliance, along with major international organisations, 20 of the biggest names in the global technology industry, and 17 leading civil society organisations.
www.weprotect.org

Board service & affiliations

Shields has also served on several boards as a trustee, including Save the Children, the National Society for the Prevention of Cruelty to Children (NSPCC) There4Me board, and The American School in London, and as a non-executive director on the board of the London Stock Exchange Group and on Mayor Boris Johnson's London Smart Board. She also served on the EU Web Entrepreneurs Leaders' Club established by then EU Commissioner and Vice-President Neelie Kroes, and on the advisory board of the philanthropy platform Elbi on an unpaid basis until early 2016. In 2017, Shields became a life member of the Council on Foreign Relations. In 2018, Shields joined the Alliance of Democracies' Transatlantic Commission on Election Integrity and participated in the Commission's summit in Copenhagen in July 2018. In October 2018, Shields was awarded the World Childhood Foundation ThankYou Award. The following month, she received the International Figures Sheikha Fatima bint Mubarak Award for Motherhood and Childhood.

Personal life

Shields earned a BS from Penn State University, where she was a member of the Chi Omega sorority, and an MBA from George Washington University. She received a Doctorate in Public Service, Honoris Causa, from George Washington University in May 2016.

Honours and awards

Life Baroness
OBE, New Year Honours
People to watch for Tech personality
Alumni Fellows Award - Penn State
Distinguished Alumni Award - George Washington University
AACSB International's "Influential Leaders" Accreditation 2016
Debrett's 500 - Digital & Social Sector
International Figures Sheikha Fatima bint Mubarak Award for Motherhood and Childhood
World Childhood Foundation ThankYou Award - World Childhood Foundation

References

External links

http://www.joannashields.com
http://www.joannashields.co.uk
http://www.parliament.uk/biographies/lords/baroness-shields/4325

1962 births Living people People from St. Marys, Pennsylvania George Washington University School of Business alumni Pennsylvania State University alumni Facebook employees American technology chief executives American women chief executives AOL employees Google employees American emigrants to England British women business executives Naturalised citizens of the United Kingdom British chief executives Officers of the Order of the British Empire Life peers created by Elizabeth II Female life peers Conservative Party (UK) life peers BBC 100 Women National Society for the Prevention of Cruelty to Children people
https://en.wikipedia.org/wiki/IBM%20XIV%20Storage%20System
IBM XIV Storage System
The IBM XIV Storage System was a line of cabinet-size disk storage servers. The system is a collection of modules, each of which is an independent computer with its own memory, interconnections, disk drives, and other subcomponents, laid out in a grid and connected together in parallel using either InfiniBand (third-generation systems) or Ethernet (second-generation systems) connections. Each module has an x86 CPU and runs a software platform consisting largely of a modified Linux kernel and other open-source software.

Description

Traditional storage systems distribute a volume across a subset of disk drives in a clustered fashion. The XIV storage system instead distributes volumes across all modules in 1 MiB chunks (partitions) so that the resources of all modules are used evenly. For robustness, each logical partition is stored in at least two copies on separate modules, so that if a part of a disk drive, an entire disk drive, or an entire module fails, the data is still available. One can increase the system's storage capacity by adding modules. When a module is added, the system automatically redistributes previously stored data to make optimal use of its I/O capacity. Depending on the model and disk type chosen when the machine is ordered, one system can be configured for storage capacities from 27 TB to 324 TB. The XIV software features include remote mirroring, thin provisioning, quality-of-service controls, LDAP authentication support, VMware support, differential writable snapshots, online volume migration between two XIV systems, and encryption protecting data at rest. The IBM XIV management GUI is a software package that can be installed on operating systems including Microsoft Windows, Linux and Mac OS. An XIV Mobile Dashboard is available for Android and iOS.

History

The IBM XIV Storage System was developed in 2002 by an Israeli start-up company funded and headed by engineer and businessman Moshe Yanai.
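The partition-distribution scheme described above can be sketched in a few lines. The hash-based placement function, module count, and partition numbering below are illustrative assumptions rather than IBM's actual algorithm, but they show the key properties: even spread, two copies on distinct modules, and placement that is recomputable when modules are added.

```python
import hashlib

def place_partition(volume_id: str, partition_index: int, num_modules: int):
    """Pick (primary, secondary) modules for one 1 MiB partition.

    The two copies always land on different modules, so the loss of any
    single module leaves one copy of every partition intact.
    """
    h = hashlib.sha256(f"{volume_id}:{partition_index}".encode()).digest()
    primary = int.from_bytes(h[:4], "big") % num_modules
    # Offset in [1, num_modules - 1] guarantees secondary != primary.
    offset = 1 + int.from_bytes(h[4:8], "big") % (num_modules - 1)
    secondary = (primary + offset) % num_modules
    return primary, secondary

# A 1 GiB volume is 1024 one-MiB partitions; spread them over a 15-module rack.
placements = [place_partition("vol7", i, 15) for i in range(1024)]
assert all(p != s for p, s in placements)  # copies never share a module
# Growing the system to 16 modules means re-evaluating the placement function;
# only partitions whose result changed would need to be migrated.
```

Because placement here is a pure function of the volume, partition index, and module count, any module can compute where a partition lives without a central directory, which fits the grid-of-independent-computers architecture described above.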
They delivered their first system to a customer in 2005. Their product was called Nextra. In December 2007, IBM acquired XIV and renamed the product the IBM XIV Storage System. The first IBM version of the product was launched publicly on September 8, 2008; unofficially within IBM this product was called Generation 2 of the XIV. The differences between Gen1 and Gen2 were not architectural but mainly physical: new disks, new controllers, new interconnects, improved management, and additional software functions were introduced. In September 2011, IBM announced larger disk drives and changed the interconnect layer to use InfiniBand rather than Ethernet. In 2012-2013 IBM added support for SSD devices and 10GbE host connectivity.

See also

IBM storage
IBM DS8000 series

References

External links

IBM Redbooks that contain information on IBM XIV
Official IBM Information page on XIV

Data storage IBM storage servers Mergers and acquisitions of Israeli companies Israeli inventions
https://en.wikipedia.org/wiki/Android%20Privacy%20Guard
Android Privacy Guard
Android Privacy Guard (APG) is a free and open-source app for the Android operating system that provides strong, user-based encryption compatible with the Pretty Good Privacy (PGP) and GNU Privacy Guard (GPG) programs. This allows users to encrypt, decrypt, digitally sign, and verify signatures for text, emails, and other files. The application allows the user to store the credentials of other users with whom they interact, and to encrypt files such that only a specified user can decrypt them. In the same manner, if a file is received from another user and their credentials are saved, the receiver can verify the authenticity of that file and decrypt it if necessary. The specific implementation in APG relies on the Spongy Castle APIs. APG has not been updated since March 2014 and is no longer under active development; development has been picked up by OpenKeychain.

Reception

After its initial release in June 2010, APG gained a strong following, with over 2,000 reviews and over 100,000 installs from the Google Play store. Several tutorials have been written which instruct new users in how to set up APG on an Android phone. These tutorials generally reference APG's interaction with the K-9 Mail Android e-mail client.

OpenKeychain

Between December 2010 and October 2013 no new version of APG was released. In light of the global surveillance disclosures, this lack of development was viewed critically by the community. In September 2013 a fork of APG was released as version 2.1 of OpenKeychain. Some of the new features and improvements were subsequently merged back into APG; however, this process stopped in March 2014, while the OpenKeychain project continued to release new versions. As of February 2016 the development of OpenKeychain is more active than that of APG. Notable features of OpenKeychain include a modern user interface and support for NFC and the YubiKey NEO.
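The encrypt/decrypt and sign/verify operations that APG exposes are the public-key primitives OpenPGP is built on. As a minimal conceptual sketch, here is textbook RSA with deliberately tiny, insecure parameters; real OpenPGP implementations such as APG use vetted libraries (Spongy Castle), hybrid encryption with a symmetric session key, and proper padding, none of which appears in this toy.

```python
from math import gcd

def make_keypair(p: int, q: int, e: int = 17):
    """Textbook RSA keypair from two primes (toy sizes, no padding)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1, "public exponent must be coprime to phi(n)"
    d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
    return (n, e), (n, d)        # (public key, private key)

def encrypt(pub, m):             # anyone holding the public key can encrypt
    n, e = pub
    return pow(m, e, n)

def decrypt(priv, c):            # only the private-key holder can decrypt
    n, d = priv
    return pow(c, d, n)

def sign(priv, m):               # signing uses the private key...
    n, d = priv
    return pow(m, d, n)

def verify(pub, m, sig):         # ...and anyone can verify with the public key
    n, e = pub
    return pow(sig, e, n) == m

pub, priv = make_keypair(61, 53)         # n = 3233, far too small for real use
assert decrypt(priv, encrypt(pub, 42)) == 42
sig = sign(priv, 42)
assert verify(pub, 42, sig)
assert not verify(pub, 43, sig)          # a tampered message fails verification
```

This mirrors the workflow in the article: a stored public key lets you encrypt to a specific user or verify their signature, while only that user's private key can decrypt or sign.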
References External links GitHub repository Software reviews and tutorials Guardian Project: Lockdown your mobile email Setting up from scratch Free and open-source Android software OpenPGP Cryptographic software
https://en.wikipedia.org/wiki/ThinkCentre%20M%20series
ThinkCentre M series
The M-series of desktops is part of Lenovo's ThinkCentre product line. Formerly an IBM brand, the ThinkCentre desktop line was acquired by Lenovo through its purchase of IBM's Personal Computing Division (PCD) in 2005. Since the acquisition, Lenovo has released M-series desktops in multiple form factors, ranging from traditional towers to small form factor machines and all-in-ones (AIOs).

2003

In 2003, IBM redesigned and re-launched its ThinkCentre product line. The first desktop released was an M-series desktop, the M50.

M50

The first desktop in IBM's redesigned ThinkCentre line was the M50, announced in 2003. The desktop offered the following specifications:
Processor: Intel Pentium 4 3.0 GHz
RAM: 256MB PC2700 DDR
Storage: 40GB 7200 RPM
Graphics: Intel Extreme 2 (integrated, 64MB of shared video RAM)
Optical drive: 48x CD-ROM
Audio: SoundMAX Cadenza audio without speakers
Operating system: Microsoft Windows XP Professional
USB ports: eight USB 2.0 ports
While the desktop was made available as a consumer PC, its limited storage and graphics capabilities made it better suited to a corporate environment.

2005

The ThinkCentre desktop released by Lenovo in 2005, following its acquisition of IBM's PCD, was the M52.

M52

The ThinkCentre M52 desktop was announced in May 2005 following Lenovo's acquisition of IBM's Personal Computing Division. PC World called the M52 "a corporate machine for the security conscious business user looking for stability and reliability". The M52 was equipped with a 3 GHz Pentium 4 processor, an 80GB hard disk drive, up to 4GB of RAM, eight USB 2.0 ports, two serial ports, a Gigabit Ethernet connection, VGA output, and a chassis that did not require tools to open (a toolless chassis).

2006

The ThinkCentre M55, M55p, and M55e were announced by Lenovo in September 2006.
M55

The ThinkCentre M55 received a positive review from PC World, with the reviewer stating that "The Lenovo ThinkCentre M55 9BM is a compact and quiet business PC that keeps maintenance simple and makes upgrades easy. Its design and functions are well-suited to an office environment and we think it's a good choice for any business searching for a uniformed PC roll-out." The desktop offered the following specifications:
Processor: Intel Core 2 Duo E6300 1.86 GHz
RAM: 1GB of DDR2
Storage: 80GB
Although the desktop was capable of running Windows Vista Business, it was preloaded with Windows XP. While the chassis was similar to previous ThinkCentre desktops, it was made smaller to fit better in office spaces.

M55p

The ThinkCentre M55p desktop offered the following specifications:
Processor: Intel Core 2 Duo E6600
RAM: 1GB PC2-5300 DDR2
Storage: 160GB 7200rpm SATA
Graphics: Intel GMA 3000 integrated graphics (256MB shared video RAM)
Audio: Intel HD Audio
Optical drive: 16x dual-layer DVD reader/writer
USB ports: ten USB 2.0 ports
Operating system: Windows XP Professional
It was described by About.com as "a very solid system for business users" and a "general purpose PC" for consumers. However, both its multimedia performance and storage space were criticized.

M55e

The ThinkCentre M55e desktop was equipped with:
Processor: Intel Core 2 Duo E6300 1.86 GHz
RAM: 2GB
Graphics: Intel GMA 3000 (integrated)
Storage: 250GB hard disk drive
Operating system: Microsoft Windows Vista Business
Display: 22 inch LED widescreen
PCMag listed the pros as the dual-core processor, small form factor, enterprise-class hardware, ThinkVantage Technologies, and the three-year on-site warranty. The price, the 90-day subscription to Symantec client security, and the lack of a DVD writer were listed as the cons.

2007

The ThinkCentre M-series desktops released by Lenovo in 2007 were the M57 and M57p.
M57 and M57p

The ThinkCentre M57 and M57p desktops were announced in September 2007 by Lenovo. These were the first desktops from a manufacturer to receive a GREENGUARD certification. In addition, both desktops were EPEAT Gold and Energy Star 4.0 rated. They were also the first ThinkCentre desktops to incorporate recycled material from consumer plastics. The desktops were equipped with up to Intel Core 2 Duo processors, up to 2GB of DDR2 RAM, integrated graphics, and up to a 160GB hard disk drive.

2008

M57 Eco and M57p Eco

The ThinkCentre M57 Eco and M57p Eco were announced by Lenovo in March 2008. These were eco-friendly versions of the M57 and M57p released in 2007. The desktops were dubbed "eco" and had an ultra-small form factor with low power consumption. According to Desktop Review, the M57 used a fraction of the power needed by standard desktops, and a little more than that of an energy-saving notebook. The M57 Eco desktop offered the following specifications:
Processor: Intel Core 2 Duo E8400 3 GHz
Graphics: Intel X3100 (integrated)
RAM: up to 4GB PC2-5300 DDR2 SDRAM
Storage: 160GB 7200RPM
Optical drive: DVD burner
Operating system: Windows Vista Business (32-bit)
Dimensions (inches): 11.8 x 9.4 x 3.2
Weight: 7 lbs
The M57p Eco desktop had the same specifications as the M57 Eco.

M58

The M58 and M58p were announced by Lenovo in October 2008. The M58 desktop offered the following specifications:
Processor: 2.33 GHz Intel Core 2 Quad
Storage: up to 500GB hard disk drive
RAM: 2GB
Optical drive: DVD writer
Graphics: Intel GMA 4500
Audio: Intel HD audio
USB ports: eight USB ports
Operating system: Microsoft Windows Vista

M58p

The M58p was received by Desktop Review in a manner similar to the M58, with the reviewer stating, "The M58p is designed to meet all the stringent requirements commercial organizations have while still providing that Lenovo touch through OEM software, warranties and support."
The M58p desktop offered the following specifications:
Processor: 3.00 GHz Intel Core 2 Duo E8400
RAM: 4 GB 1066 MHz DDR3 SDRAM
Storage: 250GB 7200RPM SATA
Audio: integrated HD audio
Speakers: integrated speakers
Graphics: integrated graphics
Operating system: Windows Vista Home Premium
Customized with best options (SFF):
Processor: up to Intel Core 2 Quad Q9650 (3.0 GHz clock)
RAM: up to 8 or 16GB 1066 MHz DDR3 memory (with a BIOS update)
Storage: up to 2x1TB 7200RPM SATA
Graphics: up to GT 1030 or GTX 750 Ti graphics card
Operating system: Genuine Windows 10 Pro (64-bit)
The pros of the system were listed by Desktop Review as the high-end configuration, the handle for easy movement, and the capacity for expansion. The cons were listed as the price and the lack of DVI ports.

2009

The ThinkCentre M-series desktop released by Lenovo in 2009 was the M58e. The small form factor version uses a BTX motherboard.

M58e

Lenovo announced the ThinkCentre M58e desktop in March 2009. The desktop offered the following specifications:
Processor: Intel Core 2 Duo E8400 3 GHz
RAM: up to 4GB
Graphics: Intel GMA 4500MHD (integrated)
Storage: 320GB
Operating system: Microsoft Windows Vista Business
The desktop was EPEAT Gold rated and met Energy Star requirements. The M58e was also compatible with solar charger packs. PC Mag summarized its review of the desktop by saying "The Lenovo ThinkCentre M58e is a middle-of-the-road business PC, both in performance and features, though it does have the added benefits of Intel vPro and IT-friendly features. It's certainly worth a look if you need a PC environment that can grow with your business."

2010

The ThinkCentre M-series desktops released by Lenovo in 2010 were the M70e, M70z, M90, and M90z.
M70e

The M70e desktop was released by Lenovo in 2010 with the following specifications:
Processor: 3 GHz Intel Core 2 Duo E8400
Chipset: Intel G41
RAM: up to 4GB 1066 MHz DDR3
Storage: 500GB SATA
Graphics: Intel Graphics Media Accelerator X4500
Audio: integrated HD audio
Operating system: Microsoft Windows 7 Professional (64-bit)

M90

The M90 desktop was released by Lenovo in 2010 with the following specifications:
Processor: 3.33 GHz Intel Core i5
RAM: up to 4GB DDR3
Storage: up to 500GB 7200RPM SATA
Optical drive: DVD reader/writer
Graphics: Intel GMA X4500
Form factor: small form factor (SFF)
Dimensions (inches): 10.78 x 9.37 x 3.07
The M90 desktop received the "PCPro Recommended" award upon release, with an overall rating of five of six stars. The desktop was summarized as "expensive but, thanks to superb design and power, worth the cash for demanding business users".

M90z

Also released in 2010, the M90z was an all-in-one (AIO) desktop. The AIO offered the following specifications:
Processor: 3.2 GHz Intel Core i5-650
RAM: 4 GB
Storage: 500 GB
Graphics: Intel GMA HD
Optical drive: dual-layer DVD reader/writer
Display: 23 inch HD widescreen (maximum resolution of 1920x1080)
USB ports: 6 USB 2.0
PCMag listed the pros of the desktop as the compact design, HD display, support for two monitors, simple multi-touch interface, good component mix, stand options, and easy servicing. The cons were listed as the dull colors on videos because of the matte screen, the lack of an eSATA port, and the need for an adapter when using external DVI. Computer Shopper summarized the capabilities of the M90z with the statement, "In our test configuration, the business-oriented M90z is overkill for most office tasks. Configuration options, however, can bring down the price while still delivering a peppy big-screen office PC."
2011

M71e

The ThinkCentre M71e desktop was described by PC World as "a basic PC designed for small and medium-sized businesses". The desktop was powered by an Intel Core i5-2500 processor and included 4GB of DDR3 RAM, a 500GB 7200RPM hard disk drive, and AMD Radeon HD 5450 discrete graphics. The desktop was indicated to be good for everyday office tasks, offering suitable responsiveness. Despite the presence of three fans, the desktop was not "annoyingly loud". The DVD bay was powered by a strong motor: the drive tray would eject and close almost as soon as the button was pressed, with very little lag. Overall, the desktop was described as a simple machine with a decent configuration, without "fancy features" such as a USB 3.0 port. Some legacy features, such as a PS/2 port, were available. The assembly of the machine was described as "basic", with a messy internal appearance. The one-year on-site warranty was indicated as one of the best features of the desktop. Other features included four USB 2.0 ports, a Gigabit Ethernet port, DVI, DisplayPort, and analogue audio ports.

M71z

The M71z was described by IT Pro as "a rare business all-in-one with a touchscreen." The default configuration offered a non-touch screen, with multi-touch as an optional upgrade. The touchscreen was described as precise and responsive, with Lenovo applications using suitably large icons. However, the applications themselves were indicated to have not been optimized for touch control, and there were none of the touch-specific applications commonly found on consumer touchscreen devices. The 1600 x 900 display was indicated to be "spacious enough" with acceptable color accuracy; however, the brightness level was low, at 210.8 cd/m2. The processing and graphical power was acceptable for everyday office tasks, with the all-in-one powered by an Intel Core i3-2100 processor and Intel HD 2000 integrated graphics.
The use of graphics- and processing-intensive software was indicated to be a challenge because of the lack of discrete graphics. Power consumption, heat levels, and noise levels were low. IT Pro commented that "the fans never became irritatingly loud. In fact, we had to press our ears up against the computer to even hear them." The build of the all-in-one was described as good, with strong, matte black plastic. Both the keyboard and mouse were called reliable, with the keyboard described as responsive. Detailed specifications of the M71z all-in-one are as follows:
Processor: Intel Pentium G260 (2.6 GHz)
Operating system: Microsoft Windows 7 Professional (64-bit)
Screen: 20 inches (non-touch by default; optional multi-touch upgrade)
RAM: up to 8GB
Storage: 1TB 7200RPM SATA II or 160GB SSD
WiFi: 802.11 a/b/g/n

M75e

The ThinkCentre M75e desktop was praised by SlashGear for its processing power and small form factor. In terms of design, the desktop was similar to other ThinkCentre products from Lenovo, with no unnecessary styling and designs. Components could be accessed by removing two screws on the chassis. The desktop contained an AMD Athlon II X4 640 processor, 4GB of DDR3 RAM, a 500GB hard disk drive, and ATI Radeon HD 3000 discrete graphics. The desktop was indicated by Lenovo to be "multi-monitor friendly", with the capacity to power two displays even in the basic configuration. An optional half-height graphics card also allowed two additional monitors to be powered, for a total of four independent displays. The primary points of criticism were a direct result of the desktop's small form factor. Although the space for cooling was reduced, the M75e did not exceed stable temperatures. However, noise was a concern: the PC's fan would only run for a short duration at a high speed, making it louder than some desktops and workstations.
Detailed specifications of the ThinkCentre M75e desktop are as follows:
Processor: up to AMD Phenom II X4 B9x series
Operating system: Microsoft Windows 7 (Professional / Home Premium / Home Basic)
Chipset: AMD 750G + SB710
Storage: up to 500GB 7200RPM hard disk drive
RAM: up to 16GB DDR3
Graphics: ATI Radeon 3000 (integrated); NVIDIA Quadro FX380; NVIDIA GeForce 310

M77

Announced on October 28, 2011, the ThinkCentre M77 could be upgraded to include AMD's FX processors and up to 16GB of RAM. According to Tom Shell, vice-president and general manager of Lenovo's Commercial Desktop Business Unit, this represented a level of processing power previously found only in premium desktops. The desktop was made available in both tower and small form factors. According to Lenovo, the use of Enhanced Experience 3.0 allowed the desktop to boot in less than 30 seconds. The desktop optionally included AMD Radeon discrete graphics, with support for up to four independent displays. Additional features included a hard disk drive of up to 1TB, eight USB 2.0 ports, a 25-in-1 memory card reader, a Trusted Platform Module, and hard disk encryption.

2012

M82

The M82 is available as a tower or SFF.
Specifications (standard, tower):
Processor: 2nd-generation Intel Core i3-2120 (3.3 GHz)
Operating system: Windows 7 Professional (64-bit)
RAM: 4GB DDR3
Storage: 1x500GB 7200RPM SATA
Graphics: integrated graphics
Customized with best options (tower):
Processor: up to 3rd-generation Intel Core i7-3770 (3.4 GHz clock, 3.9 GHz turbo)
Operating system: Windows 7 Professional (64-bit)
RAM: up to 8GB DDR3
Storage: up to 2x1TB 7200RPM SATA
Graphics: up to AMD Radeon HD 7450

2013

M92/M92p

The M92p is a desktop computer designed for business use. Like other computers of the M series, it exists in three form factors: tower, small form factor (SFF) and tiny. The M92p uses Intel Core i3, i5 or i7 processors and makes use of DDR3-1600 RAM.
Graphics processing is done by an integrated Intel HD Graphics 2000 GPU. The M92p is available with both hard drives and solid-state storage. One difference from the M91p is that the M92p comes with four USB 3.0 ports on the rear of the computer, whilst the M91p only offers USB 2.0 ports. In a review for ZDNet, Charles McLellan wrote, "Unless internal expansion is required, we can find little wrong with Lenovo's ThinkCentre M92p as a business-class small-form-factor PC (and there are bigger models in the range if expansion is required). Our review unit was only a moderate performer, but alternative configurations are available to give it more muscle if required."
Specifications (tower):
Processor: Intel Core i7-3770; Intel Core i5-3470 / 3550 / 3570; Intel Core i3-2120 / 2130
Operating system: Windows 7 Professional (64-bit); Windows 8 Pro
RAM: up to 32GB DDR3
Storage: up to 128GB SSD or 2x1TB 7200RPM HDD
Graphics: AMD Radeon HD7350 / HD7450

M93/M93p

The M93/M93p is available in tower, small form factor (SFF), and tiny form factors.
Specifications (tower):
Processor: Intel Core i7-4770; Intel Core i5-4430 / 4440S / 4570 / 4670; Intel Core i3-4130 / 4330; Intel Pentium Dual Core G3220 / G3420 / G3430; Intel Celeron G1820 / G1830
Operating system: Windows 7 (Home Basic / Home Premium / Professional / Ultimate); Windows 8.1 / Windows 8.1 Pro
RAM: up to 32 GB DDR3 (4 x 8 GB)
Storage: up to 180GB SSD or 2TB 7200RPM HDD
Graphics: Intel integrated; ATI Radeon HD8470; NVIDIA GeForce GT620 / GT630

M83

The M83 is available in mini tower, small form factor, or tiny form factors.
Specifications (mini tower):
Processor: Intel Core i7-4770; Intel Core i5-4570 / 4670; Intel Core i3-4130 / 4330
Operating system: Windows 7 Professional 64; Windows 8.1 64 / Windows 8.1 Pro 64
RAM: up to 32GB DDR3
Storage: up to 180GB SSD or 2TB 7200RPM HDD
Graphics: Intel integrated; ATI Radeon HD8470 / HD8570; NVIDIA GeForce GT620

2015

M900/M900x

The M900 series was announced in December 2015.
The M900 series is available in tower, small form factor, or tiny form factors; the M900x was only available in the tiny form factor.
Specifications (tower):
Processor: Intel Core i7-6700; Intel Core i5-6400 / 6500; Intel Core i3-6100
Operating system: Windows 7 Professional; Windows 10 (Home / Pro)
RAM: up to 32GB DDR4
Storage: up to 512GB SSD or 2TB 7200RPM HDD
Graphics: Intel HD Graphics 530; NVIDIA GeForce GT 720 (1GB / 2GB); NVIDIA Quadro K420 2GB

2016

M700

The M700 series was announced in January 2016. It is available in tower, small form factor, and tiny form factors, and as a thin client.
Specifications (tower):
Processor: Intel Core i5-6400
Operating system: Windows 7 Professional; Windows 10 Pro
RAM: 8GB DDR4
Storage: tower: 2 x 3.5" bays for HDD / SSD; SFF: 1 x 3.5" HDD / 2.5" SSD / 2.5" HDD (optional)

2017

M710

The M710 series was announced in February 2017. The M710 series consists of these models:
M710t (tower form factor)
M710s (small form factor)
M710q (tiny form factor)
Specifications (tower):
Processor: 6th- or 7th-generation Intel Pentium, i3, i5 and i7
Operating system: Windows 10 Pro (optional downgrade to Windows 7 Professional available)
RAM: 4GB or 8GB DDR4
Storage: 2 x 3.5" bays for HDD + 1x NVMe M.2 drive slot

M715

The M715 series was announced in June 2017.
The M715 series consists of these models: M715t (tower form factor), M715s (small form factor), and M715q (tiny form factor; a 2nd generation was announced in July 2018).

Specifications (Tiny, 2nd gen):
Processor: AMD A6, A10 or A12 APUs; AMD Ryzen 2nd-generation APUs (Athlon 200GE, Ryzen 3 2200GE, Ryzen 5 2400GE)
Operating System: Windows 10 Pro
RAM: 4GB or 8GB DDR4
Storage: 1 x 2.5" HDD/SSD bay + 1 x NVMe M.2 drive slot (a drive is already installed in one of these)

Product Specifications Reference (historical entries)

2021 Nano Desktops ThinkCentre M75n ThinkCentre M75n IoT ThinkCentre M75n Thin Client ThinkCentre M90n-1 Nano ThinkCentre M90n-1 Nano IoT Tiny Desktops ThinkCentre M60e ThinkCentre M625 Tiny Thin Client ThinkCentre M630e Tiny ThinkCentre M70q ThinkCentre M70q Gen 2 ThinkCentre M720 Tiny ThinkCentre M75q Gen 2 ThinkCentre M75q Tiny ThinkCentre M80q ThinkCentre M90q ThinkCentre M90q Gen 2 ThinkCentre M920 Tiny ThinkSmart Edition Tiny M80q ThinkSmart Edition Tiny M920q for Logitech ThinkSmart Edition Tiny M920q for Poly ThinkSmart Edition Tiny M920q for Zoom Rooms SFF Desktops ThinkCentre M70s ThinkCentre M70c ThinkCentre M720 SFF ThinkCentre M75s Gen 2 ThinkCentre M80s ThinkCentre M90s ThinkCentre M920 SFF Tower Desktops ThinkCentre M70t ThinkCentre M720 Tower ThinkCentre M75t Gen 2 ThinkCentre M80t ThinkCentre M90t ThinkCentre M920 Tower 2020 Nano Desktops ThinkCentre M75n ThinkCentre M75n IoT ThinkCentre M75n Thin Client ThinkCentre M90n-1 Nano ThinkCentre M90n-1 Nano IoT Tiny Desktops ThinkCentre M70q ThinkCentre M80q ThinkCentre M625 Tiny ThinkCentre M625 Tiny Thin Client ThinkCentre M630e Tiny ThinkCentre M715 Tiny (2nd Gen) ThinkCentre M715 Tiny Thin Client (2nd Gen) ThinkCentre M720 Tiny ThinkCentre M75q Gen 2 ThinkCentre M75q Tiny ThinkCentre M90q ThinkCentre M920 Tiny ThinkCentre M920x Tiny ThinkSmart Edition Tiny M920q for Logitech ThinkSmart Edition Tiny M920q for Poly ThinkSmart Edition Tiny M920q for Zoom Rooms SFF Desktops ThinkCentre M70s
ThinkCentre M70c ThinkCentre M720 SFF ThinkCentre M720e SFF ThinkCentre M725 SFF ThinkCentre M75s ThinkCentre M75s Gen 2 ThinkCentre M80s ThinkCentre M90s ThinkCentre M920 SFF Tower Desktops ThinkCentre M70t ThinkCentre M720 Tower ThinkCentre M75t Gen 2 ThinkCentre M80t ThinkCentre M90t ThinkCentre M920 Tower 2019 Nano Desktops ThinkCentre M90n-1 Nano ThinkCentre M90n-1 Nano IoT Tiny Desktops ThinkCentre M600 Tiny ThinkCentre M600 Tiny Thin Client ThinkCentre M625 Tiny ThinkCentre M625 Tiny Thin Client ThinkCentre M630e Tiny ThinkCentre M710 Tiny ThinkCentre M715 Tiny (2nd Gen) ThinkCentre M715 Tiny Thin Client ThinkCentre M715 Tiny Thin Client (2nd Gen) ThinkCentre M720 Tiny ThinkCentre M75q Tiny ThinkCentre M910 Tiny ThinkCentre M920 Tiny ThinkCentre M920x Tiny SFF Desktops ThinkCentre M710 SFF ThinkCentre M710e SFF ThinkCentre M715 SFF ThinkCentre M720 SFF ThinkCentre M720e SFF ThinkCentre M725 SFF ThinkCentre M75s SFF ThinkCentre M910 SFF ThinkCentre M920 SFF Tower Desktops ThinkCentre M710 Tower ThinkCentre M715 Tower ThinkCentre M720 Tower ThinkCentre M910 Tower ThinkCentre M920 Tower 2018 ThinkCentre M600 Tiny ThinkCentre M600 Tiny Thin Client ThinkCentre M625 Tiny ThinkCentre M625 Tiny Thin Client ThinkCentre M710 SFF ThinkCentre M710 Tiny ThinkCentre M710 Tower ThinkCentre M710e SFF ThinkCentre M715 SFF ThinkCentre M715 Tiny ThinkCentre M715 Tiny (2nd Gen) ThinkCentre M715 Tiny Thin Client ThinkCentre M715 Tiny Thin Client (2nd Gen) ThinkCentre M715 Tower ThinkCentre M720 SFF ThinkCentre M720 Tiny ThinkCentre M720 Tower ThinkCentre M725 SFF ThinkCentre M910 SFF ThinkCentre M910 Tiny ThinkCentre M910 Tower ThinkCentre M910x Tiny ThinkCentre M920 SFF ThinkCentre M920 Tiny ThinkCentre M920 Tower ThinkCentre M920x Tiny 2017 ThinkCentre M600 Tiny ThinkCentre M600 Tiny Thin Client ThinkCentre M710 SFF ThinkCentre M710 Tiny ThinkCentre M710 Tower ThinkCentre M715 SFF ThinkCentre M715 Tiny ThinkCentre M715 Tiny Thin Client ThinkCentre M715 Tower ThinkCentre M910 
SFF ThinkCentre M910 Tiny ThinkCentre M910 Tower ThinkCentre M910x Tiny 2016 ThinkCentre M600 Tiny ThinkCentre M700 SFF ThinkCentre M700 Tiny ThinkCentre M700 Tower ThinkCentre M715 Tiny ThinkCentre M79 SFF ThinkCentre M79 Tower ThinkCentre M800 SFF ThinkCentre M800 Tower ThinkCentre M900 SFF ThinkCentre M900 Tiny ThinkCentre M900 Tower ThinkCentre M900x Tiny 2015 ThinkCentre M32 Thin Client ThinkCentre M53 Tiny ThinkCentre M600 Tiny ThinkCentre M73 SFF ThinkCentre M73 Tiny ThinkCentre M73 Tower ThinkCentre M73p Tower ThinkCentre M79 SFF ThinkCentre M79 Tower ThinkCentre M800 SFF Pro ThinkCentre M800 Tower ThinkCentre M83 SFF Pro ThinkCentre M83 Tiny ThinkCentre M83 Tower ThinkCentre M900 SFF Pro ThinkCentre M900 Tiny ThinkCentre M900 Tower ThinkCentre M93 M93p Tiny ThinkCentre M93 M93p Tower ThinkCentre M93p SFF Pro 2014 ThinkCentre M32 Thin Client ThinkCentre M53 Tiny ThinkCentre M73 SFF ThinkCentre M73 Tiny ThinkCentre M73 Tower ThinkCentre M79 SFF ThinkCentre M79 Tower ThinkCentre M83 SFF Pro ThinkCentre M83 Tiny ThinkCentre M83 Tower ThinkCentre M93 M93p Tiny ThinkCentre M93 M93p Tower ThinkCentre M93p SFF Pro References External links Lenovo M series Tiny desktops page Lenovo X86 IBM personal computers
33170045
https://en.wikipedia.org/wiki/Stingray%20phone%20tracker
Stingray phone tracker
The StingRay is an IMSI-catcher, a cellular phone surveillance device manufactured by Harris Corporation. Initially developed for the military and intelligence community, the StingRay and similar Harris devices are in widespread use by local and state law enforcement agencies across Canada, the United States, and the United Kingdom. Stingray has also become a generic name to describe these kinds of devices.

Technology

The StingRay is an IMSI-catcher with both passive (digital analyzer) and active (cell-site simulator) capabilities. When operating in active mode, the device mimics a wireless carrier cell tower in order to force all nearby mobile phones and other cellular data devices to connect to it. The StingRay family of devices can be mounted in vehicles, on airplanes, helicopters, and unmanned aerial vehicles. Hand-carried versions are referred to under the trade name KingFish.

Active mode operations:
Extracting stored data such as International Mobile Subscriber Identity (IMSI) numbers and Electronic Serial Numbers (ESN)
Writing cellular protocol metadata to internal storage
Forcing an increase in signal transmission power
Forcing an abundance of radio signals to be transmitted
Forcing a downgrade to an older and less secure communications protocol, if the older protocol is allowed by the target device, by making the StingRay pretend to be unable to communicate on an up-to-date protocol
Intercepting communications data or metadata
Using received signal strength indicators to spatially locate the cellular device
Conducting a denial-of-service attack
Radio jamming for either general denial-of-service purposes or to aid in active-mode protocol rollback attacks

Passive mode operations:
Conducting base station surveys, which is the process of using over-the-air signals to identify legitimate cell sites and precisely map their coverage areas

Active (cell site simulator) capabilities

In active mode, the StingRay will force each compatible cellular device in a
given area to disconnect from its service provider cell site (e.g., operated by Verizon, AT&T, etc.) and establish a new connection with the StingRay. In most cases, this is accomplished by having the StingRay broadcast a pilot signal that is either stronger than, or made to appear stronger than, the pilot signals being broadcast by legitimate cell sites operating in the area. A common function of all cellular communications protocols is to have the cellular device connect to the cell site offering the strongest signal. StingRays exploit this function as a means to force temporary connections with cellular devices within a limited area. Extracting data from internal storage During the process of forcing connections from all compatible cellular devices in a given area, the StingRay operator needs to determine which device is the desired surveillance target. This is accomplished by downloading the IMSI, ESN, or other identifying data from each of the devices connected to the StingRay. In this context, the IMSI or equivalent identifier is not obtained from the cellular service provider or from any other third-party. The StingRay downloads this data directly from the device using radio waves. In some cases, the IMSI or equivalent identifier of a target device is known to the StingRay operator beforehand. When this is the case, the operator will download the IMSI or equivalent identifier from each device as it connects to the StingRay. When the downloaded IMSI matches the known IMSI of the desired target, the dragnet will end and the operator will proceed to conduct specific surveillance operations on just the target device. In other cases, the IMSI or equivalent identifier of a target is not known to the StingRay operator and the goal of the surveillance operation is to identify one or more cellular devices being used in a known area. 
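The identification step described above, in which the identifier downloaded from each connecting device is compared against a known target IMSI, can be sketched as follows. The record layout, values, and function name are invented for illustration only:

```python
# Hypothetical sketch of matching downloaded identifiers against a known
# target IMSI. The device records here are invented; a real device would
# expose differently shaped data.

def find_target(connected_devices, target_imsi):
    """Return the first device whose downloaded IMSI matches the target."""
    for device in connected_devices:
        if device["imsi"] == target_imsi:
            return device  # dragnet ends; surveil only this device
    return None  # target not present in this area

devices = [
    {"imsi": "310150123456789", "esn": "A1"},
    {"imsi": "310150987654321", "esn": "B2"},
]
match = find_target(devices, "310150987654321")
```

Once a match is found, the broader dragnet ends and surveillance narrows to the matched device, as described above.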
For example, if visual surveillance is being conducted on a group of protestors, a StingRay can be used to download the IMSI or equivalent identifier from each phone within the protest area. After identifying the phones, locating and tracking operations can be conducted, and service providers can be forced to turn over account information identifying the phone users.

Forcing an increase in signal transmission power

Cellular telephones are radio transmitters and receivers, much like a walkie-talkie. However, the cell phone communicates only with a repeater inside a nearby cell tower installation. At that installation, the equipment takes in all cell calls in its geographic area and repeats them out to other cell installations, which repeat the signals onward to their destination telephone (either by radio or landline wires). Radio is also used to transmit a caller's voice/data back to the receiver's cell telephone. The two-way duplex phone conversation then exists via these interconnections. To make all of that work correctly, the system allows automatic increases and decreases in transmitter power (for the individual cell phone and for the tower repeater, too) so that only the minimum transmit power is used to complete and hold the call active, "on", and to allow the users to hear and be heard continuously during the conversation. The goal is to hold the call active while using the least amount of transmitting power, mainly to conserve batteries and be efficient. The tower system will sense when a cell phone is not coming in clearly and will order the cell phone to boost transmit power. The user has no control over this boosting; it may occur for a split second or for the whole conversation. If the user is in a remote location, the power boost may be continuous. In addition to carrying voice or data, the cell phone also transmits data about itself automatically, and that is boosted or not as the system detects need.
Encoding of all transmissions ensures that no crosstalk or interference occurs between two nearby cell users. The boosting of power, however, is limited by the design of the devices to a maximum setting. The standard systems are not "high power" and thus can be overpowered by secret systems using much more boosted power that can then take over a user's cell phone. If overpowered that way, a cell phone will not indicate the change, because the secret radio is programmed to hide from normal detection. The ordinary user cannot know if their cell phone is captured via overpowering boosts or not. (There are other ways of secret capture that need not overpower, too.) Just as a person shouting drowns out someone whispering, the boost in RF watts of power into the cell telephone system can overtake and control that system, in total, or just a few conversations, or even only one. This strategy requires only more RF power, and thus it is simpler than other types of secret control. Power-boosting equipment can be installed anywhere there can be an antenna, including in a vehicle, perhaps even a vehicle on the move. Once a secretly boosted system takes control, any manipulation is possible, from simple recording of the voice or data to total blocking of all cell phones in the geographic area.

Tracking and locating

A StingRay can be used to identify and track a phone or other compatible cellular data device even while the device is not engaged in a call or accessing data services. A StingRay closely resembles a portable cellphone tower. Typically, law enforcement officials place the StingRay in their vehicle with compatible computer software. The StingRay acts as a cellular tower to send out signals to get the specific device to connect to it. Cell phones are programmed to connect with the cellular tower offering the best signal. When the phone and StingRay connect, the computer system determines the strength of the signal and thus the distance to the device.
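The signal-strength-to-distance step mentioned above can be illustrated with a generic log-distance path-loss model. The reference power at one metre and the path-loss exponent used here are arbitrary illustrative values, not parameters of any actual device:

```python
# Generic log-distance path-loss model: estimate distance (in metres)
# from a received signal strength reading. Purely illustrative; real
# equipment would calibrate these parameters to the environment.

def estimate_distance(rssi_dbm, ref_power_dbm=-30.0, path_loss_exp=2.0):
    """ref_power_dbm is the assumed received power at a 1 m reference
    distance; path_loss_exp is 2.0 for free space, higher indoors."""
    return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# A 20 dB drop from the 1 m reference corresponds to a tenfold increase
# in distance under a free-space exponent of 2.
d = estimate_distance(-50.0)
```

Combining such distance estimates taken from several positions is what lets the system close in on the phone, as the surrounding text describes.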
Then, the vehicle moves to another location and sends out signals until it connects with the phone. When the signal strength has been determined from enough locations, the computer system can pinpoint the phone's location. Cell phones are programmed to constantly search for the strongest signal emitted from cell phone towers in the area. Over the course of the day, most cell phones connect and reconnect to multiple towers in an attempt to connect to the strongest, fastest, or closest signal. Because of the way they are designed, the signals that the StingRay emits are far stronger than those coming from surrounding towers. For this reason, all cell phones in the vicinity connect to the StingRay regardless of the cell phone owner's knowledge. From there, the StingRay is capable of locating the device, interfering with the device, and collecting personal data from the device.

Denial of service

The FBI has claimed that when used to identify, locate, or track a cellular device, the StingRay does not collect communications content or forward it to the service provider. Instead, the device causes a disruption in service. Under this scenario, any attempt by the cellular device user to place a call or access data services will fail while the StingRay is conducting its surveillance. On August 21, 2018, Senator Ron Wyden noted that Harris Corporation confirmed that Stingrays disrupt the targeted phone's communications. Additionally, he noted that "while the company claims its cell-site simulators include a feature that detects and permits the delivery of emergency calls to 9-1-1, its officials admitted to my office that this feature has not been independently tested as part of the Federal Communication Commission’s certification process, nor were they able to confirm this feature is capable of detecting and passing-through 9-1-1 emergency communications made by people who are deaf, hard of hearing, or speech disabled using Real-Time Text technology."
Interception of communications content By way of software upgrades, the StingRay and similar Harris products can be used to intercept GSM communications content transmitted over-the-air between a target cellular device and a legitimate service provider cell site. The StingRay does this by way of the following man-in-the-middle attack: (1) simulate a cell site and force a connection from the target device, (2) download the target device's IMSI and other identifying information, (3) conduct "GSM Active Key Extraction" to obtain the target device's stored encryption key, (4) use the downloaded identifying information to simulate the target device over-the-air, (5) while simulating the target device, establish a connection with a legitimate cell site authorized to provide service to the target device, (6) use the encryption key to authenticate the StingRay to the service provider as being the target device, and (7) forward signals between the target device and the legitimate cell site while decrypting and recording communications content. The "GSM Active Key Extraction" performed by the StingRay in step three merits additional explanation. A GSM phone encrypts all communications content using an encryption key stored on its SIM card with a copy stored at the service provider. While simulating the target device during the above explained man-in-the-middle attack, the service provider cell site will ask the StingRay (which it believes to be the target device) to initiate encryption using the key stored on the target device. Therefore, the StingRay needs a method to obtain the target device's stored encryption key else the man-in-the-middle attack will fail. GSM primarily encrypts communications content using the A5/1 call encryption cypher. In 2008 it was reported that a GSM phone's encryption key can be obtained using $1,000 worth of computer hardware and 30 minutes of cryptanalysis performed on signals encrypted using A5/1. 
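The final relay step of the attack described above (forwarding traffic while decrypting and recording it) can be illustrated with a toy sketch. XOR stands in for the real A5 cipher, and every name here is invented; this shows only the decrypt-record-reencrypt pattern, not any actual implementation:

```python
# Toy decrypt-record-reencrypt relay. XOR is a stand-in for the real
# cipher; this is a pattern illustration only.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR is its own inverse, so this both 'encrypts' and 'decrypts'."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def relay(frame_from_phone: bytes, extracted_key: bytes, log: list) -> bytes:
    plaintext = xor_crypt(frame_from_phone, extracted_key)  # decrypt with the extracted key
    log.append(plaintext)                                   # record the content
    return xor_crypt(plaintext, extracted_key)              # re-encrypt and forward onward

log = []
key = b"k3y"
frame = xor_crypt(b"hello", key)   # what the phone would transmit
forwarded = relay(frame, key, log)
```

Because the relay re-encrypts with the same key, the forwarded frame is identical to what the phone sent, so neither endpoint sees anything amiss while the content is recorded in the middle.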
However, GSM also supports an export weakened variant of A5/1 called A5/2. This weaker encryption cypher can be cracked in real-time. While A5/1 and A5/2 use different cypher strengths, they each use the same underlying encryption key stored on the SIM card. Therefore, the StingRay performs "GSM Active Key Extraction" during step three of the man-in-the-middle attack as follows: (1) instruct target device to use the weaker A5/2 encryption cypher, (2) collect A5/2 encrypted signals from target device, and (3) perform cryptanalysis of the A5/2 signals to quickly recover the underlying stored encryption key. Once the encryption key is obtained, the StingRay uses it to comply with the encryption request made to it by the service provider during the man-in-the-middle attack. A rogue base station can force unencrypted links, if supported by the handset software. The rogue base station can send a 'Cipher Mode Settings' element (see GSM 04.08 Chapter 10.5.2.9) to the phone, with this element clearing the one bit that marks if encryption should be used. In such cases the phone display could indicate the use of an unsafe link—but the user interface software in most phones does not interrogate the handset's radio subsystem for use of this insecure mode nor display any warning indication. Passive capabilities In passive mode, the StingRay operates either as a digital analyzer, which receives and analyzes signals being transmitted by cellular devices and/or wireless carrier cell sites or as a radio jamming device, which transmits signals that block communications between cellular devices and wireless carrier cell sites. By "passive mode", it is meant that the StingRay does not mimic a wireless carrier cell site or communicate directly with cellular devices. 
Base station (cell site) surveys A StingRay and a test phone can be used to conduct base station surveys, which is the process of collecting information on cell sites, including identification numbers, signal strength, and signal coverage areas. When conducting base station surveys, the StingRay mimics a cell phone while passively collecting signals being transmitted by cell-sites in the area of the StingRay. Base station survey data can be used to further narrow the past locations of a cellular device if used in conjunction with historical cell site location information ("HCSLI") obtained from a wireless carrier. HCSLI includes a list of all cell sites and sectors accessed by a cellular device, and the date and time each access was made. Law enforcement will often obtain HCSLI from wireless carriers in order to determine where a particular cell phone was located in the past. Once this information is obtained, law enforcement will use a map of cell site locations to determine the past geographical locations of the cellular device. However, the signal coverage area of a given cell site may change according to the time of day, weather, and physical obstructions in relation to where a cellular device attempts to access service. The maps of cell site coverage areas used by law enforcement may also lack precision as a general matter. For these reasons, it is beneficial to use a StingRay and a test phone to map out the precise coverage areas of all cell sites appearing in the HCSLI records. This is typically done at the same time of day and under the same weather conditions that were in effect when the HCSLI was logged. Using a StingRay to conduct base station surveys in this manner allows for mapping out cell site coverage areas that more accurately match the coverage areas that were in effect when the cellular device was used. 
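The narrowing process described above can be sketched as a simple join between HCSLI records and surveyed coverage areas. The data shapes and values are invented; real survey output and carrier records are far richer and vendor-specific:

```python
# Hypothetical sketch: map each HCSLI access record (which cell/sector a
# phone used, and when) to the surveyed coverage area of that sector.

surveyed_coverage = {
    ("cell_17", "sector_2"): "north side of Main St, blocks 400-600",
    ("cell_17", "sector_3"): "river park and adjacent parking lots",
}

hcsli_records = [
    {"cell": "cell_17", "sector": "sector_2", "time": "2014-03-01T14:05"},
    {"cell": "cell_99", "sector": "sector_1", "time": "2014-03-01T14:40"},
]

def narrow_locations(records, coverage):
    """Pair each record's timestamp with the surveyed area, where known."""
    return [
        (r["time"], coverage[(r["cell"], r["sector"])])
        for r in records
        if (r["cell"], r["sector"]) in coverage
    ]

past_locations = narrow_locations(hcsli_records, surveyed_coverage)
```

Accesses to sectors that were never surveyed simply drop out, which is why surveying all cell sites appearing in the HCSLI records matters.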
Usage by law enforcement

In the United States

The use of the devices has frequently been funded by grants from the Department of Homeland Security. The Los Angeles Police Department used a Department of Homeland Security grant in 2006 to buy a StingRay for "regional terrorism investigations". However, according to the Electronic Frontier Foundation, the "LAPD has been using it for just about any investigation imaginable." In addition to federal law enforcement, military and intelligence agencies, StingRays have in recent years been purchased by local and state law enforcement agencies. In 2006, Harris Corporation employees directly conducted wireless surveillance using StingRay units on behalf of the Palm Bay Police Department, where Harris has a campus, in response to a bomb threat against a middle school. The search was conducted without a warrant or judicial oversight. The American Civil Liberties Union (ACLU) confirmed that local police have cell site simulators in Washington, Nevada, Arizona, Alaska, Missouri, New Mexico, Georgia, and Massachusetts. State police have cell site simulators in Oklahoma, Louisiana, Pennsylvania, and Delaware. Local and state police have cell site simulators in California, Texas, Minnesota, Wisconsin, Michigan, Illinois, Indiana, Tennessee, North Carolina, Virginia, Florida, Maryland, and New York [60]. The police use of cell site simulators is unknown in the remaining states. However, many agencies do not disclose their use of StingRay technology, so these statistics are still potentially an under-representation of the actual number of agencies. According to the most recent information published by the American Civil Liberties Union, 72 law enforcement agencies in 24 states owned StingRay technology as of 2017. Since 2014, these numbers have increased from 42 agencies in 17 states [60].
The following are federal agencies in the United States that have validated their use of cell-site simulators: Federal Bureau of Investigation, Drug Enforcement Administration, US Secret Service, Immigration and Customs Enforcement, US Marshals Service, Bureau of Alcohol, Tobacco, Firearms, and Explosives, US Army, US Navy, US Marine Corps, US National Guard, US Special Operations Command, and National Security Agency [60]. In the 2010-14 fiscal years, the Department of Justice confirmed spending "more than $71 million on cell-site simulation technology," while the Department of Homeland Security confirmed spending "more than $24 million on cell-site simulation technology." Several court decisions have been issued on the legality of using a Stingray without a warrant, with some courts ruling that a warrant is required and others not requiring one.

Outside the United States

Police in Vancouver, British Columbia, Canada, admitted after much speculation across the country that they had made use of a Stingray device provided by the RCMP. They also stated that they intended to make use of such devices in the future. Two days later, a statement by Edmonton's police force was taken as confirming their use of the devices, but police later said that they did not mean to create what they called a miscommunication. Privacy International and The Sunday Times reported on the usage of StingRays and IMSI-catchers in Ireland against the Irish Garda Síochána Ombudsman Commission (GSOC), which is an oversight agency of the Irish police force, the Garda Síochána. On June 10, 2015, the BBC reported on an investigation by Sky News about possible false mobile phone towers being used by the London Metropolitan Police. Commissioner Bernard Hogan-Howe refused comment. Between February 2015 and April 2016, over 12 companies in the United Kingdom were authorized to export IMSI-catcher devices to states including Saudi Arabia, UAE, and Turkey.
Critics have expressed concern about the export of surveillance technology to countries with poor human rights records and histories of abusing surveillance technology. Secrecy The increasing use of the devices has largely been kept secret from the court system and the public. In 2014, police in Florida revealed they had used such devices at least 200 additional times since 2010 without disclosing it to the courts or obtaining a warrant. One of the reasons the Tallahassee police provided for not pursuing court approval is that such efforts would allegedly violate the non-disclosure agreements (NDAs) that police sign with the manufacturer. The American Civil Liberties Union has filed multiple requests for the public records of Florida law enforcement agencies about their use of the cell phone tracking devices. Local law enforcement and the federal government have resisted judicial requests for information about the use of stingrays, refusing to turn over information or heavily censoring it. In June 2014, the American Civil Liberties Union published information from court regarding the extensive use of these devices by local Florida police. After this publication, United States Marshals Service then seized the local police's surveillance records in a bid to keep them from coming out in court. In some cases, police have refused to disclose information to the courts citing non-disclosure agreements signed with Harris Corporation. The FBI defended these agreements, saying that information about the technology could allow adversaries to circumvent it. The ACLU has said "potentially unconstitutional government surveillance on this scale should not remain hidden from the public just because a private corporation desires secrecy. And it certainly should not be concealed from judges." 
In 2015 Santa Clara County pulled out of contract negotiations with Harris for StingRay units, citing onerous restrictions imposed by Harris on what could be released under public records requests as the reason for exiting negotiations. Criticism In recent years, legal scholars, public interest advocates, legislators and several members of the judiciary have strongly criticized the use of this technology by law enforcement agencies. Critics have called the use of the devices by government agencies warrantless cell phone tracking, as they have frequently been used without informing the court system or obtaining a warrant. The Electronic Frontier Foundation has called the devices "an unconstitutional, all-you-can-eat data buffet." In June 2015, WNYC Public Radio published a podcast with Daniel Rigmaiden about the StingRay device. In 2016, Professor Laura Moy of the Georgetown University Law Center filed a formal complaint to the FCC regarding the use of the devices by law enforcement agencies, taking the position that because the devices mimic the properties of cell phone towers, the agencies operating them are in violation of FCC regulation, as they lack the appropriate spectrum licenses. On December 4, 2019, the American Civil Liberties Union and the New York Civil Liberties Union (NYCLU) filed a federal lawsuit against the Customs and Border Protection and the Immigrations and Customs Enforcement agencies. According to the ACLU, the union had filed a Freedom of Information Act request in 2017, but were not given access to documents. The NYCLU and ACLU proceeded with the lawsuit under the statement that both CBP and ICE had failed, "to produce a range of records about their use, purchase, and oversight of Stingrays." 
In an official statement expanding on its reasoning for the lawsuit, the ACLU expressed concern over the StingRay's current and future applications, stating that ICE was using the devices for "unlawfully tracking journalists and advocates and subjecting people to invasive searches of their electronic devices at the border."

Countermeasures

A number of countermeasures to the StingRay and other devices have been developed. For example, crypto phones such as GSMK's Cryptophone have firewalls that can identify and thwart the StingRay's actions or alert the user to IMSI capture. EFF developed a system to catch stingrays.

See also

Authentication and Key Agreement (protocol)
Cellphone surveillance
Evil Twin Attack
Mobile phone tracking
Kyllo v. United States (lawsuit re thermal image surveillance)
United States v. Davis (2014) found warrantless data collection violated constitutional rights, but okayed data use for criminal conviction, as data collected in good faith

References

Further reading

IMSI catchers, and specifically the Harris Stingray, are extensively used in the Intelligence Support Activity/Task Force Orange thriller written by J. T. Patten, a former counterterrorism intelligence specialist. Patten, J. T., Buried in Black. A Task Force Orange novel. Lyrical Press/Penguin, 2018.

Telecommunications equipment Mass intelligence-gathering systems Surveillance Mobile security Telephone tapping Telephony equipment Law enforcement equipment
33186216
https://en.wikipedia.org/wiki/Mark%20Rasch
Mark Rasch
Mark D. Rasch is an attorney and author, working in the areas of corporate and government cybersecurity, privacy and incident response. He is the former Chief Security Evangelist for Verizon Communications after having been Vice President, Deputy General Counsel, and Chief Privacy and Data Security Officer for SAIC. From 1983 to 1992, Rasch worked at the U.S. Department of Justice within the Criminal Division's Fraud Section. Rasch earned a J.D. in 1983 from the State University of New York at Buffalo and is a 1976 graduate of the Bronx High School of Science. He prosecuted Robert Tappan Morris in the case of United States v. Morris (1991). He was an amicus curiae related to data encryption in Bernstein v. United States, and prosecuted presidential candidate Lyndon LaRouche and organized crime figures in New York associated with the Gambino crime family. He also helped uncover the individual responsible for the so-called "Craigslist murder" in Boston. Rasch has been a regular contributor to SecurityCurrent, SecurityFocus and Security Boulevard on issues related to law and technology, and is a regular contributor to Wired Magazine. He was also a longtime columnist for StorefrontBacktalk, a now-defunct publication that tracked global retail technology. He has appeared on or been quoted by MSNBC, Fox News, CNN, The New York Times, Forbes, PBS, The Washington Post, NPR and other national and international media.

Books

Notes and references

1958 births American lawyers Living people People associated with computer security
33189313
https://en.wikipedia.org/wiki/Packet%20processing
Packet processing
In digital communications networks, packet processing refers to the wide variety of algorithms that are applied to a packet of data or information as it moves through the various network elements of a communications network. With the increased performance of network interfaces, there is a corresponding need for faster packet processing. There are two broad classes of packet processing algorithms that align with the standardized network subdivision of control plane and data plane. The algorithms are applied to either:
The control information contained in a packet, which is used to transfer the packet safely and efficiently from origin to destination, or
The data content (frequently called the payload) of the packet, which is used to provide some content-specific transformation or take a content-driven action.
Within any network-enabled device (e.g. router, switch, network element or terminal such as a computer or smartphone) it is the packet processing subsystem that manages the traversal of the multi-layered network or protocol stack from the lower, physical and network layers all the way through to the application layer.

History

The history of packet processing is the history of the Internet and packet switching. Packet processing milestones include:
1962–1968: Early research into packet switching
1969: First two nodes of ARPANET connected; 15 sites connected by end of 1971, with email as a new application
1973: Packet-switched voice connections over ARPANET with the Network Voice Protocol; File Transfer Protocol (FTP) specified
1974: Transmission Control Protocol (TCP) specified
1979: VoIP – NVP running on early versions of IP
1981: IP and TCP standardized
1982: TCP/IP standardized
1991: World Wide Web (WWW) released by CERN, authored by Tim Berners-Lee
1998: IPv6 first published
Historical references and timeline can be found in the External Resources section below.
Communications models For networks to succeed it is necessary to have a unifying standard which defines the architecture of networking systems. The fundamental requirement for such a standard is to provide a framework that enables the hardware and software manufacturers around the world to develop networking technologies that will work together and to harness their cumulative investment capabilities to move the state of networking forward. In the 1970s, two organizations, the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT, now called the International Telecommunication Union (ITU-T)), each initiated projects with the goal of developing international networking standards. In 1983, these efforts were merged, and in 1984 the resulting standard, called The Basic Reference Model for Open Systems Interconnection, was published by ISO and as standard X.200 by the ITU-T. The OSI Model is a seven-layer model describing how a network operating system works. A layered model has many benefits, including the ability to change one layer without impacting the others, and it serves as a model for understanding how a network OS works. As long as the interconnection between layers is maintained, vendors can enhance the implementation of an individual layer without impact on other layers. In parallel with the development of the OSI model, a research network was being implemented by the United States Defense Advanced Research Projects Agency (DARPA). The internetworking protocol developed to support this network, called ARPAnet, was TCP, the Transmission Control Program. As research and development progressed and the size of the network grew, it was determined that the internetworking design being used was becoming unwieldy and that it did not exactly follow the layered approach of the OSI Model.
This led to the splitting of the original TCP and the creation of the TCP/IP architecture - TCP now standing for Transmission Control Protocol and IP standing for Internet Protocol. Advent of packet processing Packet networks came about as a result of the need in the early 1960s to make communications networks more reliable. It can be viewed as the implementation of the layered model using a packet structure. Early commercial networks were composed of dedicated, analog circuits used for voice communications. The concept of packet switching was introduced to create a communications network that would continue to function in spite of equipment failures throughout the network. In this paradigm shift, networks are viewed as collections of systems that transmit data in small packets that work their way from origin to destination by any number of routes. Initial packet processing functions supported the routing of packets through the network, transmission error detection and correction and other network management functions. Packet switching with its supporting packet processing functions has several practical benefits over traditional circuit-switched networks: An all-digital environment supporting multiple data types (such as voice, data and video) not only enriched the lives of users, it significantly increased the efficiency of network providers who previously had to implement different networks to support different data types. Greater bandwidth utilization, with multiple ’logical circuits’ using the same physical links Communications survivability due to multiple paths through the network from any origin to any destination Added-value information services can be introduced using packet processing functions to provide the necessary processing Packet structure A network packet is the fundamental building block for packet-switched networks. 
When an item such as a file, e-mail message, voice or video stream is transmitted through the network, it is broken into chunks called packets that can be moved through the network more efficiently than one large block of data. Numerous standards cover the structure of packets, but typically packets are composed of three elements: Header – contains information about the packet, including origin, destination, length and packet number. Payload (or body) – contains the data that comprises the packet. Trailer – indicates the end of the packet and frequently includes error detection and correction information. In a packet-switched network, the sending host computer packetizes the original item and each packet is routed through the network to its destination. Some networks use fixed-length packets, typically 1024 bits, while others use variable-length packets and include the packet length in the header. Individual packets may take different routes to the destination and arrive there out of order. The destination computer verifies the correctness of the data in each packet (using information in the trailer), reassembles the original item using the packet number information in the header, and presents the item to the receiving application or user. This basic example includes the three most fundamental packet processing functions: packetization, routing, and assembly. Packet processing functions range from the simple to the highly complex. As an example, the routing function is actually a multi-step process involving various optimization algorithms and table lookups. A basic routing function on the Internet looks something like:
1. Check to see if the destination is an address 'owned' by this computer. If so, process the packet. If not:
   a. Check to see if IP Forwarding is set to 'Yes'. If no, the packet is destroyed. If yes, then:
      i. Check to see if a network attached to this computer owns the destination address. If yes, route the packet to the appropriate network. If no, then:
         1. Check to see if there is any route to the destination network. If yes, route the packet to the next hop gateway. If no, destroy the packet.
More advanced routing functions include network load balancing and fastest route algorithms. These examples illustrate the range of packet processing algorithms possible and how they can introduce significant delays into the transmission of an item. Network equipment designers frequently use a combination of hardware and software accelerators to minimize latency in the network. Network equipment architecture IP-based equipment can be partitioned into three basic elements: data plane, control plane and management plane. Data plane The data plane is a subsystem of a network node that receives and sends packets from an interface, processes them as required by the applicable protocol, and delivers, drops, or forwards them as appropriate. Control plane The control plane maintains information that can be used to change data used by the data plane. Maintaining this information requires handling complex signaling protocols. Implementing these protocols in the data plane would lead to poor forwarding performance. A common way to manage these protocols is to let the data plane detect incoming signaling packets and locally forward them to the control plane. The control plane signaling protocols can update the data plane information and inject outgoing signaling packets into the data plane. This architecture works because signaling traffic is a very small part of the global traffic. Management plane The management plane provides an administrative interface into the overall system.
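The packetization and reassembly functions described above can be sketched in a few lines. This is an illustrative model only: the three-field header, 128-byte payload chunks and CRC32 trailer are assumptions made for the example, not any particular protocol's format.

```python
import zlib

PAYLOAD_SIZE = 128  # bytes per packet; real networks vary (e.g. ~1500-byte Ethernet MTU)

def packetize(data: bytes) -> list[dict]:
    """Split an item into packets, each with a header (origin/destination
    omitted for brevity), a payload chunk, and a CRC32 trailer used for
    error detection."""
    chunks = [data[i:i + PAYLOAD_SIZE] for i in range(0, len(data), PAYLOAD_SIZE)]
    return [
        {
            "header": {"number": n, "total": len(chunks), "length": len(chunk)},
            "payload": chunk,
            "trailer": zlib.crc32(chunk),
        }
        for n, chunk in enumerate(chunks)
    ]

def reassemble(packets: list[dict]) -> bytes:
    """Verify each packet's trailer, then rebuild the item by header packet
    number, so out-of-order arrival does not matter."""
    for p in packets:
        if zlib.crc32(p["payload"]) != p["trailer"]:
            raise ValueError(f"corrupt packet {p['header']['number']}")
    ordered = sorted(packets, key=lambda p: p["header"]["number"])
    return b"".join(p["payload"] for p in ordered)

message = b"example item " * 50
pkts = packetize(message)
pkts.reverse()  # simulate packets arriving in a different order
assert reassemble(pkts) == message
```

The trailer check here stands in for the error detection performed at the destination; a real network would use a link- or transport-layer checksum rather than a per-packet CRC32 field.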
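The basic routing sequence above translates almost directly into code. The following sketch is hypothetical: the table structures, names and the string prefix-matching shortcut are simplifications for illustration, not a real router's longest-prefix-match lookup.

```python
def route_packet(dst, local_addrs, ip_forwarding, attached_networks, routing_table):
    """Return the action a basic router takes for a packet addressed to dst.
    attached_networks and routing_table map a network prefix to an interface
    or next-hop gateway (simplified here to a string startswith match)."""
    if dst in local_addrs:                    # step 1: address owned by this computer
        return "deliver locally"
    if not ip_forwarding:                     # step 1a: IP Forwarding not set
        return "destroy"
    for prefix, iface in attached_networks.items():   # step 1a.i: directly attached network
        if dst.startswith(prefix):
            return f"route via attached network {iface}"
    for prefix, gateway in routing_table.items():     # step 1a.i.1: route table lookup
        if dst.startswith(prefix):
            return f"forward to next hop {gateway}"
    return "destroy"                          # no route to the destination network
```

For example, `route_packet("192.168.1.9", set(), True, {"192.168.1.": "eth0"}, {})` would report the packet being routed via the attached network, while the same destination with empty tables would be destroyed.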
It contains processes that support operational administration, management or configuration/provisioning actions such as: Facilities for supporting statistics collection and aggregation, Support for the implementation of management protocols, Command line interface, graphical user configuration interfaces through Web pages or traditional SNMP (Simple Network Management Protocol) management. More sophisticated solutions based on XML (eXtensible Markup Language) can also be included. Examples The list of packet processing applications is usually divided into two categories. The following are a few examples selected to illustrate the variety in use today. Control applications Forwarding, the basic operation of a router Encryption/Decryption, the protection of information in the payload using cryptographic algorithms Quality of Service (QOS), treating packets differently, such as providing prioritized or specialized services depending upon the packet’s class Data applications Transcoding, the transformation of a particular video encoding to the particular encoding used by the destination Transrating & Transizing, transforming an image size and density appropriate to the destination device Image or Voice Recognition, the detection of a particular pattern (image or voice) that is matched to those in a database with some attending action taken when a match occurs Advanced applications include areas such as security (call monitoring and data leak prevention), targeted advertising, tiered services, copyright enforcement and network usage statistics. These, and many other content-aware applications, are based on the ability to discern specific intelligence contained within packet payloads using Deep Packet Inspection (DPI) technologies. Packet processing architectures Packet switching also introduces some architectural compromises. 
Performing packet processing functions in the transmission of information introduces delays that may be detrimental to the application being performed. For example, in voice and video applications, the necessary conversion from analog-to-digital and back again at the destination, along with delays introduced by the network, can cause noticeable gaps that are disruptive to the users. Latency is a measure of the time delay experienced by a complex system. Multiple architectural approaches to packet processing have been developed to address the performance and functionality requirements of a specific network and to address the latency issue. Single threaded architecture (standard operating system) A standard networking stack uses services provided by the Operating System (OS) running on a single processor (single threaded). While single threaded architectures are the simplest to implement, they are subject to overheads associated with the performance of OS functions such as preemptions, thread management, timers and locking. These OS processing overheads are imposed on each packet passing through the system, resulting in a throughput penalty. Multi-threaded architecture (multi-processing operating system) Performance improvements can be made to an OS networking stack by adapting the protocol stack processing software to support multiple processors (multi-threaded), either through the use of Symmetrical Multiprocessing (SMP) platforms or multicore processor architecture. Performance increases are realized for a small number of processors, but performance fails to scale linearly over larger numbers of processors (or cores), and a processor with, for example, eight cores may not process packets significantly faster than one with two cores. Fast path architecture (operating system by-pass) In a fast path implementation, the data plane is split into two layers.
The lower layer, typically called the fast path, processes the majority of incoming packets outside the OS environment and without incurring any of the OS overheads that degrade overall performance. Only those packets that require complex processing are forwarded to the OS networking stack (the upper layer of the data plane), which performs the necessary management, signaling and control functions. When complex algorithms such as routing or security are required, the OS networking stack forwards the packet to dedicated software components in the control plane. A multicore processor can provide additional performance improvement to a fast path implementation. In order to maximize the overall system throughput, multiple cores can be dedicated to running the fast path, while only one core is required to run the Operating System, the OS networking stack and the application’s control plane. The only restriction when configuring the platform is that, since the cores running the fast path are running outside the OS, they must be dedicated exclusively to the fast path and not shared with other software. The system can also be reconfigured dynamically as traffic patterns change. Splitting the data plane into two layers also adds complexity as the two layers must have the same information to ensure system consistency. Packet processing technologies In order to create specialized packet processing platforms, a variety of technologies have been developed and deployed. These technologies, which span the breadth of hardware and software, have all been designed with the aim of maximizing speed and throughput while minimizing latency. Network processors A network processor unit (NPU) is similar in many respects to general purpose processors (GPP) that power most computers but with its internal architecture and functions tailored to network-centric operations. 
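The fast path / slow path split described above can be illustrated with a minimal dispatch sketch. The packet fields and the rule for deciding which packets need complex processing are assumptions made for the example, not any vendor's actual classification logic.

```python
def fast_path(packet):
    """Lightweight per-packet work: lookup and forward, no OS involvement."""
    return ("forwarded", packet["dst"])

def slow_path(packet):
    """Full OS networking stack: management, signaling and control functions."""
    return ("sent to OS networking stack", packet["type"])

def dispatch(packet):
    # The fast path handles the bulk of traffic; only packets requiring
    # complex processing are forwarded to the OS networking stack.
    if packet.get("type") in ("signaling", "fragmented", "unknown"):
        return slow_path(packet)
    return fast_path(packet)

traffic = [
    {"dst": "10.0.0.1", "type": "data"},
    {"dst": "10.0.0.2", "type": "signaling"},
]
results = [dispatch(p) for p in traffic]
```

The point of the split is visible even in this toy version: the common case touches only a short, predictable code path, while the rare case pays the cost of the full stack.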
NPUs commonly have network-specific functions such as address lookup, pattern matching and queue management built into their microcode. Higher level packet processing operations such as security or intrusion detection are often built into NPU architectures. Network processor examples would include: Intel - IXP2xxx family Netronome - NFP-6xxx/4xxx/32xx families PMC Sierra – Winpath family EZChip – NP-x family Multicore processors A multicore processor is a single semiconductor package that has 2 or more cores, each representing an individual processing unit, capable of executing code in parallel. General purpose CPUs such as the Intel Xeon now support up to 8 cores. Some multicore processors integrate dedicated packet processing capabilities to provide a complete SoC (System on Chip). They generally integrate Ethernet interfaces, crypto-engines, pattern matching engines, hardware queues for QoS and sometimes more sophisticated functions using micro-cores. All these hardware features are able to offload the software packet processing. Recent examples of these specialized multicore packages, such as the Cavium OCTEON II, can support from 2 up to 32 cores. Tilera - TILE-Gx Processor Family Cavium Networks - OCTEON & OCTEON II multicore Processor Families Freescale – QorIQ Processing Platforms NetLogic Microsystems – XLP, XLR and XLS Processor Families Hardware accelerators For clearly definable and repetitive actions, creating a dedicated accelerator built directly into a semiconductor hardware solution will speed up operations when compared to software running on a general purpose processor. Initial implementations used FPGAs (field-programmable gate array) or ASICs (Application-specific Integrated Circuit), but now specific functions such as encryption and compression are built into both GPPs and NPUs as internal hardware accelerators. 
Current multicore processor examples with network-specific hardware accelerators include the Cavium CN63xx with acceleration for security, TCP/IP, QOS and HFA pattern matching and the Netlogic Microsystems XFS processor family with networking and security acceleration engines. Deep packet inspection Being able to make decisions based on the content of individual packets enables a wide variety of new applications such as Policy Charging and Rules Functions (PCRF) and Quality of Service. Packet processing systems separate out specific traffic types through the use of Deep Packet Inspection (DPI) technologies. DPI technologies utilize pattern matching algorithms to look inside the data payload to identify the contents of each and every packet flowing through a network device. Successful pattern matches are reported to the controlling application for any appropriate further action to be taken. Packet processing software Operating system software will contain certain standard network stacks that will operate in both single and multicore environments. To be able to implement operating system by-pass (fast path) architectures requires the use of specialized packet processing software such as 6WIND's 6WINDGate. This type of software provides a suite of networking protocols that can be distributed across multiple blades, processors or cores and scale appropriately. References External links Living History. “Internet History” Howe, Walt. (2010) “A Brief History of the Internet” Internet Society. “Histories of the Internet” Living History. “Packet Switching History” Roberts, Dr. Lawrence G. (November 1978). “The Evolution of Packet Switching” Marshall, Dave. “History of the Internet – Timeline.” Rami Rosen "Network acceleration with DPDK", article in lwn.net, July 2017. Rami Rosen "Userspace Networking with DPDK", article in Linux Journal, April 2018. Computer networking Packets (information technology) History of the Internet
33195307
https://en.wikipedia.org/wiki/Telecomix
Telecomix
Telecomix is a decentralized cluster of net activists committed to freedom of expression; the name covers two bodies, WeRebuild and Telecomix. WeRebuild is a collaborative project used to propose and discuss laws as well as to collect information about politics and politicians. Telecomix is the operative body that executes schemes and proposals presented by WeRebuild. On September 15, 2011, Telecomix diverted all connections to the Syrian web and redirected internauts to a page with instructions to bypass censorship. Moreover, "Telecomix circulated the ways of using landlines to circumvent state blockages of broadband networks" during the Egyptian Revolution of 2011. Their most recent intervention was a large release of Blue Coat surveillance log files, allegedly revealing vast interception in Syria, which was analyzed and made public from the "telecommunist cluster" of Telecomix. The leak had previously been criticized by security researcher and hacker Jacob Appelbaum for possibly revealing too much sensitive information about Syrian users. History The organization was created on April 18, 2009, as a suggestion following a seminar about surveillance, the legislative changes regulating the National Defence Radio Establishment (FRA), and other laws being processed in the European Parliament at the time. The audience was asked to help stop the surveillance laws that were about to be passed in the European Parliament. The evening after the seminar a spontaneous bifurcation started and someone threw a cipher. It was then that Telecomix was born. During the first months of Telecomix's existence, focus was mostly on the Telecoms Package, the Data Retention Directive and the laws regulating the FRA. Work consisted of gathering information about the laws and the political processes involved, public conversations with legislators and art projects. As the organization grew larger, the definition and boundaries of the cluster became more undefined.
According to one interview, an activist described it as "an ever growing bunch of friends that do things together" consisting of "[...] roughly 20 extremely active members, 50 active and some 300 total including lurkers". Origins Telecomix has its roots in a heterogeneous activist and hacker scene. Many of the founding members had followed and participated in and around Piratbyrån and The Pirate Bay. There is membership overlap with The Julia Group and La Quadrature du Net, as well as with the hackerspaces Forskningsavdelningen in Malmö, Gothenburg Hackerspace and Sparvnästet in Stockholm. As Telecomix has a very vague concept of being a "member" of the group (the only formal ritual is to enter their IRC chat), it is, however, difficult to assess with certainty what their origins are. Marcin de Kaminski, who gave one of the earliest interviews on Telecomix's first project (which used to be under the domain telekompaketet.se, now with a new unrelated site owner), points to a heritage line from Piratbyrån via a sudden side-project called Tapirbyrån ("The Bureau of Tapirs", which is an anagram of the word Pirate in Swedish), which then led to the formation of Telecomix. Apart from this statement by de Kaminski, there are no known written records of this story, and no member of Tapirbyrån has confirmed or denied it. As Telecomix was founded, the initial work consisted of engaging in parliamentary politics, to serve as an interface between concerned communities and politicians. As the work gradually moved over to direct interventions and rescue operations, the cluster adopted a gradually more militaristic rhetoric, with heavy influences from 1990s-style crypto-anarchism. However, during 2011 some Telecomix activists have given interviews and talks at various technology-related conferences under their real names.
This is in stark contrast to the earlier practice of eschewing real and individual names in favour of using Telecomix as a collective pseudonym. Rasmus Fleischer argues that the formation of Telecomix signified the end of a long era of pirate rhetoric, and instead shifted attention to a hacktivist approach to politics. Moreover, Christopher Kullenberg describes how "Telecomix News Agency" was shaped as a consequence of close online and offline friendships in connection with the trial against The Pirate Bay in his manifesto Det Nätpolitiska Manifestet. In the book Svenska Hackare ("Swedish Hackers"), the two technology journalists, Daniel Goldberg and Linus Larsson, describe how WeRebuild (a project name used by Telecomix) appeared at a seminar on net neutrality held by the Swedish Government in 2009, to influence the implementation of the Telecoms Package. Projects and operations Streisand A project created and hosted by Telecomix was the Streisand project, named after the Streisand effect. The aim is to mirror certain types of content that gets blocked or censored. WeRebuild WeRebuild was a decentralized wiki page containing Telecomix information and projects. Egypt Operations During the internet blackout in early 2011, Telecomix released a video stating that they would launch a series of attempts at restoring internet connectivity by means of old modems, faxes and rerouting of traffic. Syria Operations Similar in approach to the Egypt operations, the organization was intervening in Syrian networks. The most notable event was when Telecomix released log files from Blue Coat Systems surveillance equipment. Blue Coat Systems were eventually forced to admit that their devices were used in Syria, although they had not been directly sold to the country. search.telecomix.org Telecomix hosted a search service based on Seeks, an open-source Peer-to-Peer distributed search engine with an emphasis on user privacy.
Seeks implements a decentralized peer-to-peer architecture: users install Seeks on a machine, server or laptop and automatically start sharing results. While users share queries over the peer-to-peer network, Seeks protects their privacy by sending encrypted query fragments to peers, a scheme that makes it difficult for other peers to reconstruct the initial query. Blackthrow Telecomix members often experiment with unorthodox encryption technologies. One such project was the "Blackthrow" concept computer (sharing some etymology with the word Black fax), described as: Due to the questionable legality of Blackthrow computers, Telecomix maintains no records of such devices being live and running. Logo The logo contains a variety of symbols. The origins remain unclear, but one common interpretation is that the pyramid in the middle is a symbol of kopimi philosophy, a project originally started by Piratbyrån. Moreover, the lightning arrows seem to originate from the logo of Televerket (Sweden), the old telecoms monopoly of Sweden. The star (also present on the Televerket symbol) is a symbol of telecommunism, and the Omega sign is a symbol of resistance, as in Ohm's law. This interpretation of the symbols is sometimes referred to as the Gothenburg interpretation, as many of the founding members are from this town. Others have, however, associated the symbols in the logotype with secret societies, due to several of them being associated with the Illuminati and freemasonry. In an article in the French magazine Lesinrocks, Fabrice Epelboin argues that the pyramid in the Telecomix logo symbolizes power, and that "it is bordered by a bunch of elements - the Omega, the lightning, the star - the challenge ahead". In the same article Fabrice d'Almeida, historian of propaganda images, describes the logo as "giving the impression of a large machine capable of unleashing great energy". Jellyfish Telecomix often refer to oceanic concepts when describing their overall structure.
They describe themselves as "a siphonophoric organism transmitting its genome through memes and imitation rather than through rules and regulations". Jellyfish are an important part of the symbolism of Telecomix, circulating both as a meme and as an organizational concept guiding participants in evolving the organization. In June 2009 a blog post on "jellyfish memetics" was posted at a Telecomix-affiliated blog, arguably sparking a great interest within Telecomix for the philosophical implications of decentralized self-organization. Datalove The notion of "datalove" appeared for the first time in one of several manifestos written by Telecomix, as to "inspire the body-politic to incarnate creatively via totemized teleportation-flows of datalove". The concept of datalove has been the leitmotif for several spawned projects. Crypto-anarchism The organization contributes to the general theme of Crypto-anarchism. Their project Cryptoanarchy.org (archived) aims at promoting cryptographic research and security technologies. Cameron The organization maintains a MegaHAL speech bot named Cameron. According to one member it is "a computer generated representation of all of us". Cameron has become a core symbol for Telecomix, and her function in governing the actions of the activists remains obscure. References External links Intellectual property activism Digital rights organizations Cyberwarfare
33220568
https://en.wikipedia.org/wiki/Defences%20in%20Canadian%20copyright%20law
Defences in Canadian copyright law
In Canada, the Copyright Act provides a monopoly right to owners of copyrighted works. This means that no person may use the work without authorization or consent from the copyright owner. However, certain exceptions in the Act govern circumstances where a work will not be held to have been infringed. Principal Defences Defendants can, where applicable, argue that copyright infringement could not have taken place, as: There was no copyright in the work created. There was no copyright in the copied element. No substantial part was taken. The work was in the public domain. The plaintiff is not the true owner of the copyrighted work. Substantial similarity and access to the original work may be shown, but the work was not copied. Other defences may be available to defendants in cases where some features of a copyrighted work exist but do not constitute infringement. These include: Public interest Fair dealing Other statutory exceptions Public Interest At common law, copyright may be overridden for public interest reasons, albeit in very rare circumstances. In Lion Laboratories v Evans, copyrighted information about malfunctioning breathalyser machines was reproduced. Such reproduction was held to be justified despite the material being confidential and protected by copyright. The court accepted the defence of public interest raised by the defendants, on the ground that investigation into the accuracy of the equipment was needed to avoid incorrect readings when the police used it on motorists. As Griffiths LJ noted in his judgment: In Beloff v Pressdram Ltd, the defence of public interest was interwoven with fair dealing. The court treated fair dealing as a statutory defence limited to infringement of copyright. Public interest, on the other hand, acts as a defence outside of, and independent of, statute, based on principles of common law.
The public interest defence is identical to that available in cases concerning breach of confidence, and is available when it is necessary to publish more than just short extracts. It is distinct from the power arising from the inherent jurisdiction of the courts "to refuse to allow their process to be used [to] give effect to contracts which are ... illegal, immoral or prejudicial to family life because they offend against the policy of the law." Fair Dealing The Copyright Act states that fair dealing exists when it is done: for the purpose of research, private study, education, parody or satire; for the purpose of criticism or review, as long as it mentions the source and, if mentioned, the author, performer, maker or broadcaster; for the purpose of news reporting, as long as it mentions the source and, if mentioned, the author, performer, maker or broadcaster. In Hubbard v Vosper, Lord Denning MR observed, "It is impossible to define what is 'fair dealing.' It must be a question of degree," and "after all is said and done, it must be a matter of impression." He gave several guidelines for analyzing what is fair or not: The number and extent of the quotations or extracts must be considered; an excessive number or length may not be fair. Use as a basis for comment, criticism or review may be fair dealing, but being used to convey the same information as the author, for a rival purpose, may be unfair. Taking long extracts and attaching short comments may be unfair, but short extracts and long comments may be fair. There may be other considerations as well. Hubbard was adopted in Canadian jurisprudence in 1997 in Allen v Toronto Star Newspapers Ltd, which ousted the 1943 Exchequer Court of Canada case of Zamacois v Douville and Marchand in the area of what constitutes fair dealing in illustrating a current news story.
In so holding, Sedgwick J observed: CCH Canadian Ltd v Law Society of Upper Canada expanded upon that, with the Supreme Court of Canada holding that fair dealing, as well as related exceptions, is a user's right. In order to maintain the proper balance between the rights of copyright owners and users' interests, it must not be interpreted restrictively. It is also integral to the Act, and the defence is always available. The Court gave a two-stage test for determining whether fair dealing applies: The effect of CCH has been that Canada is less rigid than the UK in interpreting fair dealing, and more flexible than the US approach of fair use in its copyright law. Further expansion of the jurisprudence came in 2012 with SOCAN v Bell Canada and Alberta (Education) v Canadian Copyright Licensing Agency (Access Copyright). Regarding other specific matters concerning fair dealing: With respect to criticism and review, "Criticism of a work need not be limited to criticism of style. It may also extend to the ideas to be found in a work and its social or moral implications." However, it must be done in good faith. As Lord Denning MR noted in Hubbard, "It is not fair dealing for a rival in the trade to take copyright material and use it for his own benefit." With respect to news reporting, timeliness may sometimes require the use of copyrighted material without prior permission while the value, importance and interest in the story are still current. It has also been held that "events, such as tragedies in which people are killed, continue to be current events so long as the events themselves continue to feature in the news" (Hyde Park, par. 28). Other statutory exceptions Sections 29.21–32.3 provide other exceptions from copyright infringement in cases concerning: educational institutions; libraries, archives and museums; single reproduction of computer programs as backup; incidental use; ephemeral recording; pre-recorded works; persons with disabilities; purposes of certain federal Acts, such as the Access to Information Act; the author making certain copies; agricultural fairs; religious purposes; non-commercial user-generated content; certain reproduction for private purposes; fixing signals and recording programs for later listening or viewing; backup copies (of works other than software); interoperability of computer programs; encryption research; computer systems and network security; and temporary reproductions for technological processes. Possible defences Several other arguments have been presented as possible defences for copyright infringement: Section 2(b) of the Canadian Charter of Rights and Freedoms, governing freedom of expression, could be said to hold that limiting the use of copyrighted material is unconstitutional, as opposed to asserting that the copyright scheme as a whole is unconstitutional. Canadian courts have not yet definitively rejected or accepted the proposition. In the case of Queen v Lorimer, the Federal Court of Appeal rejected the Charter defence, but left open the possibility of it succeeding in future. The Federal Court of Canada – Trial Division considered this defence in Michelin v CAW, but held that the Charter did not confer the right to use private property to express oneself. Thus, the defendants' freedom of expression had not been infringed. The duty to act in good faith, as noted in Houle v National Bank and Wallace v United Grain Growers Ltd, could be argued to hold that a party may not exercise a right in an unreasonable manner. The US doctrine of copyright misuse has been argued, but not yet accepted, in Canadian courts. Further reading References Canadian copyright law
33224711
https://en.wikipedia.org/wiki/Electronic%20evidence
Electronic evidence
Electronic evidence consists of two sub-forms: analog (no longer so prevalent, but still present in some sound recordings, for example) and digital evidence (see the longer article). This rather complex relationship can be depicted graphically, as shown in the part of an EU-funded project on the topic embedded here at the right. Chapter 10 of the associated 2018 book goes into more detail, as does the project website, http://www.evidenceproject.eu/categorization. Electronic evidence can be abbreviated as e-evidence, and this shorter term is gaining acceptance in Continental Europe. This page covers mainly activity there and on the international level.

Access to electronic evidence

Access is the area where much of the current activity on the international level is taking place. A network called the Internet & Jurisdiction Policy Network holds global conferences on the topic at various locations. Six key supranational developments, in Geneva, New York, Strasbourg, Paris and Brussels, are described below. In February 2022 an authoritative report was published covering worldwide developments.

Global Privacy Assembly

The GPA of Data Protection and Privacy Commissioners "unsurprisingly places greater emphasis on individuals’ privacy rights than did the OECD draft" of 2021. In 2021 the GPA developed a document summing up its concerns.

International Organization for Standardization (ISO)

There is an international forensic standard, ISO/IEC 27037, issued by ISO together with the International Electrotechnical Commission (IEC).

United Nations

Late in 2019 Russia and China initiated a move to consider drafting a global cybercrime convention. Western democracies are conspicuously absent from the sponsoring parties. Many non-governmental organizations (NGOs) have issued a protest letter claiming the Russian initiative would potentially infringe upon human rights.

Council of Europe

The Convention on Cybercrime (“Budapest Convention”) is "the first international treaty on crimes committed via the Internet".
The CoE is currently drafting an update in the form of a second additional protocol to the Convention. An international group of national data protection authorities with a secretariat in Germany, the International Working Group on Data Protection in Telecommunications, is monitoring the Council of Europe Cybercrime Convention, having held 60-some meetings on the access problem, most recently to address events in Brazil, Belgium and China in addition to the Microsoft Ireland case. The draft protocol has proven quite controversial. Two joint civil society statements have been submitted. "The Cybercrime Convention Committee had extended the negotiations of the protocol to December 2020." Meanwhile, the guidelines from 2019 remain in place.

OECD

In 2021 deliberations began at the OECD to develop common principles among member countries. There are two major methods of access: compelled (or obliged) access and direct (including covert) access. The EU wants to address both, whereas the United States is hesitant to include covert access.

European Union

The European Commission (as the only body holding the right of initiative) has made two legislative proposals: a Directive on establishing a legal representative, and a Regulation on access to evidence for criminal investigations. Taken together, these proposals comprise a "package". The legislators, i.e. the Parliament and the Council, have meanwhile adopted positions on the Commission proposal. The Council calls its position a "general approach". The committees in the Parliament have different competences, which are sometimes not easily distinguished, so competence disputes sometimes arise. LIBE has received "the lead", or lead competence, on the proposals, and has subsequently produced a report. The rapporteur, Birgit Sippel MEP, proposed changes to the versions of the Commission and the Council.
The report has given rise to both a summary and a more detailed commentary analysing its provisions for their efficiency and protection of human rights. Agreement has been reached in the Parliament on how to enter the trilogue negotiations (between the Parliament, the Council and the Commission). The differences in the two versions prepared by the Council and Parliament respectively are shown in a couple of documents. In what can be seen as an accelerated procedure, as opposed to the ordinary first reading/second reading procedure, the report was only voted on in LIBE. The EP Plenary then mandated the committee to take up negotiations with the Council, while the Commission formally played a neutral advisory role. Formally, the first reading will not be closed until the trilogue has reached an agreement. Then the plenary will vote on the trilogue negotiation outcome as its first reading position, effectively also allowing it to become law. There could also be a situation where no agreement can be reached, in which case the Parliament would vote on the unchanged LIBE report to finalise the first reading and make it the Parliament's position, before entering into a second reading. Authoritative texts can be found on the EUR-Lex website.

In February 2019, the European Commission recommended "engaging in two international negotiations on cross-border rules to obtain electronic evidence," one involving the USA and one at the CoE. Indeed, the USA/EU axis and the CoE are the scenes of work on these issues, as described and compared in a 2019 paper (page 17) advocating a revamping of the mutual legal assistance treaty framework. The reason given for the above development was that "[i]n the offline world, authorities can request and obtain documents necessary to investigate a crime within their own country, but electronic evidence is stored online by service providers often based in a different country than [sic] the investigator, even if the crime is only in one country."
The Commission then gave data supporting this decision. Indeed, this is the reason for treating electronic evidence differently from the way other evidence is treated. Moreover, it may expedite convergence, or some form of reconciliation, between the world's two main legal systems, i.e. common law and civil law, at least as regards this use case. Negotiations are set to begin. However, there are questions as to how the two different systems might converge in a common agreement. The core instruments for handling cross-border requests are a European Production Order (EPOC) and a European Preservation Order (EPOC-PR). The framework for those instruments is the European evidence warrant. Separately from the above, a dedicated convention has been drafted by a British barrister.

United Kingdom

The UK government announced that the new "UK-US Bilateral Data Access Agreement will dramatically speed up investigations and prosecutions by enabling law enforcement, with appropriate authorisation, to go directly to the tech companies to access data, rather than through governments, which can take years." "It gives effect to the Crime (Overseas Production Orders) Act 2019, which received Royal Assent in February this year and was facilitated by the CLOUD Act in America, passed last year." "The Agreement does not change anything about the way companies can use encryption and does not stop companies from encrypting data." On encryption, the US, UK and Australia are contacting Facebook directly. The agreement means that UK officials can now apply to the US via the Crime (Overseas Production Orders) Act 2019.

United States

The basis for obtaining cross-border access is the Stored Communications Act as amended by the CLOUD Act. A new agreement with the UK was negotiated, and it "will enter into force following a six-month Congressional review period mandated by the CLOUD Act, and the related review by UK’s Parliament."
Controversy

One of the most controversial cases yet brought to court has been the 2013 Microsoft Corp. v. United States case. Potential conflicts between the EU regime and the US CLOUD Act have led legal scholars Jennifer Daskal and Peter Swire to propose a US/EU agreement. Those authors have also assembled a set of FAQs seeking specifically to address questions that have arisen from the European Union in connection with the CLOUD Act. Highlighting differences from the status quo, the European Parliament's Committee on Civil Liberties, Justice and Home Affairs commissioned a study and held a hearing; the study is available. Europeans discussing ‘Co-operating in the Digital Age’ in the Internet Governance Forum have been critical of the EU's proposals, fearing that "companies and businesses [might] implement stronger filtering and blocking mechanisms in order to avoid sanctions or reputational damages." Later, in November, at the Internet Governance Forum 2019 in Berlin, panelists described new initiatives in Brazil and Russia respectively.

Some problems quite different from those in the Microsoft case alluded to above have been found and described in an article in the German weekly ZEIT dated 19 December 2018, with 167 comments, on the proposed direct access tracks described above under "European Union"; the journalist Martin Klingst entitled it "Nackt per Gesetz" (Naked by Law, meaning exposed to foreign observation by domestic law). Klingst is appalled at the thought that an EU member state like Hungary might demand his data. Apparently Katharina Barley, German Federal Minister of Justice, agrees. Germany has protections against infringements on one's "informational self-determination" that are the strongest of any EU member state. The European Arrest Warrant is another example of the national limits placed on EU rights in some conditions.
In addition, Klingst sees a contradiction between having Internet companies be the guardians of right and wrong, whereas under a new draft German law they might themselves be punished. Would other member states respect Germany's interpretation of who maintains confidentiality? he asks rhetorically. E-evidence could become the first case, Klingst predicts, testing whether Germany's top judges have reserved enough room for the most basic protections.

Much evidence is plain text, but some evidence is encrypted. In 2015 and 2016, another chapter was added to the long-standing encryption controversy with the FBI–Apple encryption dispute. That controversy continued in 2019 with multiple nation-states pressuring Facebook to put a backdoor in its messenger service.

References

Further reading

Journals
Digital Evidence and Electronic Signature Law Review (open access), http://journals.sas.ac.uk/deeslr/
Journal of Digital Forensics, Security and Law (open access), https://commons.erau.edu/jdfsl/

Books
Paul, George L.: Foundations of Digital Evidence (American Bar Association, 2008)
Scanlan, Daniel M.: Digital Evidence in Criminal Law (Thomson Reuters Canada Limited, 2011)
Scheindlin, Shira A. and The Sedona Conference: Electronic Discovery and Digital Evidence in a Nutshell, Second Edition (West Academic Publishing, 2016)

Evidence law
Computer law
Digital forensics
33239815
https://en.wikipedia.org/wiki/SPARC%20T4
SPARC T4
The SPARC T4 is a SPARC multicore microprocessor introduced in 2011 by Oracle Corporation. The processor is designed to offer high multithreaded performance (8 threads per core, with 8 cores per chip), as well as high single-threaded performance from the same chip. The chip is the fourth-generation processor in the T-Series family; Sun Microsystems brought the first T-Series processor (UltraSPARC T1) to market in 2005. The chip is the first Sun/Oracle SPARC chip to use dynamic threading and out-of-order execution. It incorporates one floating-point unit and one dedicated cryptographic unit per core. The cores use the 64-bit SPARC Version 9 architecture, run at frequencies between 2.85 GHz and 3.0 GHz, and are built in a 40 nm process with a die size of .

History and design

An eight-core, eight-thread-per-core chip built in a 40 nm process and running at 2.5 GHz was described in Sun Microsystems' processor roadmap of 2009. It was codenamed "Yosemite Falls" and given an expected release date of late 2011. The processor was expected to introduce a new microarchitecture, codenamed "VT Core". The online technology website The Register speculated that this chip would be named "T4", being the successor to the SPARC T3. The Yosemite Falls CPU product remained on Oracle Corporation's processor roadmap after the company took over Sun in early 2010. In December 2010 the T4 processor was confirmed by Oracle's VP of hardware development to be designed for improved per-thread performance, with eight cores, and with an expected release within one year. The processor design was presented at the 2011 Hot Chips conference. The cores (renamed "S3" from "VT") included a dual-issue, 16-stage integer pipeline and an 11-cycle floating-point pipeline, both giving improvements over the previous ("S2") core used in the SPARC T3 processor. Each core has associated 16 KB data and 16 KB instruction L1 caches, and a unified 128 KB L2 cache.
All eight cores share a 4 MB L3 cache, and the total transistor count is approximately 855 million. The design was the first Sun/Oracle SPARC processor with out-of-order execution and was the first processor in the SPARC T-Series family able to issue more than one instruction per cycle to a core's execution units. The T4 processor was officially introduced as part of Oracle's SPARC T4 servers in September 2011. Initial product releases of the single-processor T4-1 rack server ran at 2.85 GHz. The dual-processor T4-2 ran at the same 2.85 GHz frequency, and the quad-processor T4-4 server ran at 3.0 GHz.

The SPARC S3 core also includes a thread priority mechanism (called "dynamic threading") whereby each thread is allocated resources based on need, giving increased performance. Most S3 core resources are shared among all active threads, up to 8 of them. Shared resources include branch prediction structures, various buffer entries, and out-of-order execution resources. Static resource allocation reserves resources for threads according to a fixed policy, whether or not a thread can use them. Dynamic threading instead allocates these resources to the threads that are ready and will use them, thus improving performance.

Cryptographic performance was also increased over the T3 chip by design improvements including a new set of cryptographic instructions. The UltraSPARC T2 and T3's per-core cryptographic coprocessors were replaced with in-core accelerators and instruction-based cryptography. The implementation is designed to achieve wire-speed encryption and decryption on the SPARC T4's 10 Gbit/s Ethernet ports. The architectural changes are claimed to deliver a 5x improvement in single-thread integer performance and twice the per-thread throughput performance compared to the previous-generation T3. The published SPECjvm2008 result for a 16-core T4-2 is 454 ops/m, versus 321 ops/m for the 32-core T3-2, a ratio of 2.8x in performance per core.
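The contrast between static partitioning and dynamic threading described above can be sketched with a toy model. This is illustrative only: the slot counts and the greedy policy below are assumptions for the sketch, not the actual T4 arbitration logic.

```python
# Toy model contrasting static partitioning with dynamic threading.
# Numbers and policy are illustrative, not taken from the T4 design.

def static_allocation(slots, demands):
    """Each thread gets a fixed share of the resource slots,
    whether or not it can use them."""
    share = slots // len(demands)
    return [min(share, d) for d in demands]

def dynamic_allocation(slots, demands):
    """Slots are handed out only to threads that are ready to use
    them, so idle threads do not strand resources."""
    granted, remaining = [], slots
    for d in demands:
        g = min(d, remaining)
        granted.append(g)
        remaining -= g
    return granted

# Eight slots, one busy thread and three idle ones.
demands = [8, 0, 0, 0]
print(static_allocation(8, demands))   # [2, 0, 0, 0] -> 6 slots stranded
print(dynamic_allocation(8, demands))  # [8, 0, 0, 0] -> full utilisation
```

With a single active thread, dynamic allocation lets it consume all the shared resources, which is the intuition behind the T4's improved single-thread performance.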
References

External links

SPARC microprocessors
Oracle microprocessors
64-bit microprocessors
33242269
https://en.wikipedia.org/wiki/Scriptcase
Scriptcase
Scriptcase is a rapid application development (RAD) platform for PHP applications. It is web-oriented and can be installed on an internet-facing server, where it acts as a platform for developers, allowing them to use a graphical interface directly through a web browser to generate code automatically. It was developed by NetMake in the year 2000 and can be used on the Mac, Windows and Linux operating systems. Using Scriptcase, PHP developers can generate complete online applications. Scriptcase is a rapid web development tool that aims to reduce development time and increase productivity. Developers need an environment (a web server such as Apache with PHP, and a database such as MySQL) on their desktop, or accessible via a network or the internet, to develop applications; for hosting the applications, the server needs a web server (including PHP) and a database. After programs are developed and deployed, Scriptcase is no longer necessary to run the application.

Features

Scriptcase can be used to create CRUD (Create, Read, Update and Delete) applications. It also enables developers to add custom code to manage business rules and validation. Scriptcase lets developers create forms and queries in PHP, ranging from simple forms to forms with highly complex elements, to manipulate data from databases (MySQL, PostgreSQL, SQLite, InterBase, Firebird, Access, Oracle, MS SQL Server, DB2, Sybase, Informix and ODBC connections). It permits development with JavaScript methods that can be used within AJAX events, and the creation of applications with AJAX through a set of features and services with easy and fast hand coding, such as navigation between pages or sections and automatic validation of fields such as date, currency, zip code and social security number, among others. The generated reports can be exported to MS Word, MS Excel or PDF, or printed. Complex SQL statements can be used (sub-selects, joins and even stored procedures). Scriptcase allows users to write PHP to handle exceptions and create more complex validation.
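As an illustration of the pattern such generated applications follow, here is a minimal sketch of the four CRUD statements against an SQLite database. Python is used only for brevity (Scriptcase itself generates PHP), and the table and column names are hypothetical.

```python
# Minimal sketch of the CRUD cycle a generated form application performs.
# Table and column names are illustrative placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# Create
conn.execute("INSERT INTO customers (name) VALUES (?)", ("Alice",))
# Read
name = conn.execute("SELECT name FROM customers WHERE id = 1").fetchone()[0]
# Update
conn.execute("UPDATE customers SET name = ? WHERE id = 1", ("Bob",))
# Delete
conn.execute("DELETE FROM customers WHERE id = 1")
conn.close()
```

A CRUD generator essentially emits these statements, plus the form markup and validation code around them, for each table the developer selects.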
Scriptcase is compatible with RTL (right-to-left) writing, supporting the right-to-left script used in languages such as Arabic. It is also possible to create infrastructure such as menus, login screens and a security system with authentication, and to create tabs to group forms or queries to be executed on the same page. Platform development began in 2000, and since then it has been receiving regular updates. It addresses web application developers (both for desktops and for mobiles) in two ways: it enables starting developers with limited knowledge of programming (PHP, Java) and basic knowledge of databases (SQL) to build applications that read and update data in web databases; but it also aims at the experienced developer, who can with Scriptcase put much more focus on business logic rather than editing forms, building database connections, etc.

The pricing model shifted from "buying the software" (until the end of 2016) to "lease as a service" (per a publication from the CEO to subscribers and current users of Scriptcase dated November 14th, 2016). The prices at the time were $400–$600 for buying the software (depending on how many databases are supported) per developer.

Disadvantages

The official documentation is very basic and does not help solve common problems. The community of developers that can help solve problems is quite small. The maintenance of applications with complex business logic becomes very complicated.

Versions

Platform development started in 2000. Since then, the tool has received periodic updates and improvements.

1.0 – 2000
"Fossil version". Was sold only to some customers in beta mode.

2.0 – 2003
Big changes in the interface.
Theme and CSS editor
Creation of the SQL builder
Export to PDF
Macros
Security module 1.0

3.0 – 2006
Native support for AJAX
Creation of the concept of Events
Stored procedures

4.0 – 2008
Internationalization support
Master-detail functionality
Editable grid
HTML editor
HelpCase (documentation generator)
New filter options
Navigation using tabs on the internal interface of Scriptcase

5.0 – 2009
Graphics in Flash
Creation of the container application
Express applications
Captcha security
Dynamic menus
Menu with "refresh" option
Tree menu
New security module
Logging module

5.1 – 2010
New implementations such as jQuery support
Calendar
Google Maps
Quick search
Bar codes
Flash graphics
Container
YouTube field type
New themes

5.2 – 2010
Focused on editing and field types, creating the "edit fields" option
Form: new field formatting, tabs, validation, among others

6.0 – 2012
The biggest change was in performance, up to 5 times faster.
New graphics module
New security module
New log module
Database manager

7.0 – 2013
Integration with PayPal
Integrated social media buttons
AJAX in grids
Mobile menu
New PDF generator
Toolbar for menus
Graphics in HTML5

7.1 – 2013
Multi-thread processing
Change in interface
Graphics with navigation on the toolbar
Friendly URLs

8.0 – 2014
Scriptcase's consolidation as one of the strongest BI (Business Intelligence) tools, migrating some features that help the end user to make decisions. To that end, major changes were made to the Grid application, such as the option for end users to create their own Group By within the application based on the fields defined by the developer, add totals and create their own graphics. In addition, a new graphics application was added to simplify the creation of such applications.
New dynamic filter and Group By summary for Grid applications
Interface to exchange messages between Scriptcase users
Image manager
Editor for graphics themes
A tool to import Access, CSV and XLS files into MySQL, SQLite, PostgreSQL and MSSQL databases
New interface for settings on applications such as Form and Grid
Responsiveness for forms and for the menus of mobile web applications
Support for the TCPDF class in the PDF Report application
Dynamic search in Form applications
Grouping of option buttons for applications with a toolbar
Past, present and future events in the calendar application

8.1 – 2015
Implemented new filter functionality and dynamic summary in Grid applications.
Created a new graphics application with support for user-defined combined final graphics.
New refined filter restricting values according to the universe that exists in the database.
Group of buttons for the new chart in the toolbar button editor.

New tools
Sending messages between users in Scriptcase
New tool for managing images in Scriptcase
Tool for creating to-do lists among users
New tool for editing graphic themes
Tool for importing Access, CSV and XLS files into MySQL, PostgreSQL, SQLite and MSSQL
New library manager
New HelpCase generator
New security module with listing of logged-in users and blocking of brute-force attempts

Scriptcase interface
Added new configuration interface for the Form and Query applications
Added new parameters in the theme editor for the refined filter
Added new parameters in the theme editor for menu tabs
Added new parameters in the theme editor for the navigation path of the menu
Added option for a background image in advanced settings for the menu theme

New technologies
Implemented responsiveness for mobile form applications
Added TCPDF class support in the PDF Report application
Added new sc_webservice macro that supports SOAP services, curl, file_get_contents and sockets

Application resources
Grouping of option buttons in applications that have a toolbar
Implemented improvements in the timetable application
Added new path (breadcrumb) for menu application browsing
Added macros sc_url_library() and sc_include_library()
Added new aggregation of type "weighted average"
Added new field for accumulation of values of other fields
Added new Ajax-type button

9.0 – 2017
Comes with important implementations in Business Intelligence (with a complete redesign of the charts, grids and summaries – pivot tables and dashboards), significant improvements in security, the PDF Report module and the menu, an interface redesign, and improved performance with PHP 7, among other implementations.

Chart
New aggregate functions for graph metrics
A new dimension of options for date fields
Possibility of an analytic and synthetic combination of different dimensions in the same graph
Inclusion of a new type of filter (user filter)
Inclusion of a LIMIT function that can be used to rank the values within the graphics
New types of graphics: scatter and bubble, Gantt, semicircular and linear, funnel 2D and pyramid 2D
New customization options specific to the bar, column, pie and gauge graphics
Possibility to export the graphs in PNG, JPG, PDF, SVG and XLS

Dashboard
New interface with drag and drop, for setting up widgets dynamically
Responsiveness in the presentation of the widgets in the dashboard
New index widget for the presentation of KPIs (key performance indicators) within the dashboard

Grid application (reports and pivot tables)
New interface using drag and drop for defining the grid breaks and the summary
New aggregate functions for metrics: count, distinct count, variance and standard deviation
New dimension of options for date fields
Inclusion of a new filter for the summary (user filter)
Inclusion of the LIMIT function in the query and in the summary; the function can be used to rank the values
New configuration options within the aggregation of the summary

Layout application
Allows integration of forms with HTML and CSS that is customized or imported in the form of an external library
Handling of variables from the HTML body: the visual presentation can now be changed completely and adapted

PDF Report
New drag-and-drop drawing option with dynamic configuration of fields within the same interface

Menu application
New "menu structure" option for customization of the menu layout

Scriptcase interface
A great part of the Scriptcase interface has been modified, improving the usability of the tool for developers
New interface for creating projects, with a search option
New interface for creating applications, with multiple selection of tables, etc.

Environment and safety
PHP 7.0 support
Updated PDO drivers (SQL Server, MySQL)
Addition of the PDO DBLIB driver for SQL Server
Addition of the new MySQLi driver for MySQL connections
SSL for secure connections with MySQL
New version of Apache, 2.4.25
Option to log in with Google, Facebook and Twitter automatically via the security module
New options for encrypting the login application's password field (MD5, SHA1, SHA256, SHA512)
Security module integration with the new option of free-form control with responsive templates

Key features
AJAX
Editable grid
Master/detail
Forms
Queries
Reports
Menus
Tabs
Customizable layouts
Documentation generator
Data dictionary
Language editor
Import of HTML templates
jQuery
JavaScript

Scriptcase also allows advanced settings so that the generated applications meet the requirements of complex systems. There is also a documentation generator that can integrate the whole team.

References

Scriptcase Official Site
Samples
Features
Scriptcase Download
Scriptcase Host

PHP Web development software
33250748
https://en.wikipedia.org/wiki/Verax%20IPMI
Verax IPMI
Verax IPMI is an open-source Java library implementing the IPMI protocol 2.0 over UDP. The library allows probing devices over IPMI, which has been adopted by many vendors as an SNMP alternative for hardware management. The library is compliant with the IPMI v2.0, revision 1.0 standard. Verax IPMI is a native Java 1.6 implementation; no additional native libraries or drivers are required.

Overview

The library provides UDP session management (connect, disconnect, keep-alives, a sliding window for messages, and message sequence numbers) and supports any number of concurrent sessions. It contains the standard Intelligent Platform Management Interface algorithms for authentication (RAKP-HMAC-SHA1), integrity (HMAC-SHA1-96) and confidentiality (AES-CBC-128); additional algorithms can be provided. The library contains encoders and decoders for the event log, sensor values and hardware information (FRU – Field Replaceable Unit), and can be extended with additional, user-defined encoders. It supports encoders and decoders for IPMI version 1.5 messages; however, session management is provided only for IPMI version 2.0.

License

The Verax IPMI Library for Java has been developed by Verax Systems and released under the GPL v3 license.

See also

Java (programming language)
IPMI protocol
User Datagram Protocol

References

Java platform software Free computer libraries Java development tools Java (programming language) libraries
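As background on the integrity algorithm named above, HMAC-SHA1-96 is simply HMAC-SHA1 with the tag truncated to its first 96 bits. A minimal sketch of the generic algorithm (this illustrates the primitive itself, not the Verax library's API; the key and message bytes are placeholders):

```python
# Generic HMAC-SHA1-96: full HMAC-SHA1 truncated to 96 bits (12 bytes),
# as used for IPMI v2.0 session integrity. Inputs are placeholders.
import hashlib
import hmac

def hmac_sha1_96(key: bytes, message: bytes) -> bytes:
    """Return the first 12 bytes (96 bits) of the HMAC-SHA1 tag."""
    return hmac.new(key, message, hashlib.sha1).digest()[:12]

tag = hmac_sha1_96(b"integrity-key", b"ipmi payload")
assert len(tag) == 12  # 96 bits
```

The truncation keeps the per-packet overhead small while retaining a tag long enough for session integrity checks.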
33265857
https://en.wikipedia.org/wiki/CIPURSE
CIPURSE
CIPURSE is an open security standard for transit fare collection systems. It makes use of smart card technologies and additional security measures.

History

The CIPURSE open security standard was established by the Open Standard for Public Transportation (OSPT) Alliance to address the needs of local and regional transit authorities for automatic fare collection systems based on smart card technologies and advanced security measures. Products developed in conformance with the CIPURSE standard are intended to include advanced security technology, support multiple applications, help enable compatibility with legacy systems, and be available in a variety of form factors. The open CIPURSE standard is intended to promote vendor neutrality, enable cross-vendor system interoperability, reduce the risk of adopting new technology, and improve market responsiveness. All of these factors are intended to reduce operating costs and increase flexibility for transport system operators.

Background

In the past, public transport systems were often implemented using standalone, proprietary fare collection systems. In such cases, each fare collection system employed unique fare media (such as its own style of ticket printed on card) and data management systems. Because fare collection systems did not interoperate with each other, payment schemes and tokens varied widely between local and regional systems, and new systems were often costly to develop and maintain.

Transport systems are migrating to microcontroller-based fare collection systems. These are converging with similar applications and technologies, such as branded credit and debit payment cards, micropayments, multi-application cards, and Near Field Communication (NFC) mobile phones and devices. These schemes will enable passengers to use transit tokens seamlessly across multiple transit systems. These new applications demand higher levels of security than most of the existing schemes they will replace.
The OSPT Alliance defined the CIPURSE standard to provide an open platform for securing both new and legacy transit fare collection applications. Systems using the CIPURSE open security standard address public transport services, collection of transport fares, and transactions related to micropayments. The transition to an open standard platform creates opportunities to adopt open standards for important parts of the fare collection system, including data management, the media interface and security. An open standard for developing secure transit fare collection solutions could make systems more cost-effective, secure, flexible, scalable and extensible.

Specification

In December 2010, the OSPT Alliance introduced the first draft of the CIPURSE standard. It employs existing, proven open standards, including the ISO/IEC 7816 smart card standard, the 128-bit Advanced Encryption Standard (AES-128), and the ISO/IEC 14443 protocol layer. Designed for low-cost silicon implementations, the CIPURSE security concept uses an authentication scheme that is resistant to most of today’s electronic attacks. Its security mechanisms include a unique cryptographic protocol for fast and efficient implementations with robust, inherent protection against differential power analysis (DPA) and differential fault analysis (DFA) attacks. Because the protocol is inherently resistant to these kinds of attacks and does not require dedicated hardware countermeasures, it should be both more secure and less costly to implement. It is intended to guard against counterfeiting, cloning, eavesdropping, man-in-the-middle attacks and other security threats.
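As background on the ISO/IEC 7816 layer that CIPURSE builds on, a short-form command APDU is a four-byte header (CLA, INS, P1, P2), optionally followed by a length-prefixed data field (Lc + data) and an expected-response length (Le). A minimal sketch; the application identifier below is a made-up placeholder, not a real CIPURSE AID:

```python
# Sketch of an ISO/IEC 7816-4 short-form command APDU, the message
# format CIPURSE builds on. The AID used here is a placeholder.

def build_apdu(cla, ins, p1, p2, data=b"", le=None):
    """Assemble header, optional Lc+data field, and optional Le byte."""
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data  # Lc, then the data field
    if le is not None:
        apdu += bytes([le])                # expected response length
    return apdu

# SELECT by name (INS 0xA4, P1 0x04) with a placeholder AID.
select = build_apdu(0x00, 0xA4, 0x04, 0x00, bytes.fromhex("A000000001"), le=0x00)
print(select.hex())  # 00a4040005a00000000100
```

CIPURSE's mandatory command set and secure messaging are layered on top of this APDU structure, with the secure-messaging protocol wrapping the data field and appending integrity information.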
The CIPURSE standard also:

Defines a secure messaging protocol
Identifies four minimum mandatory file types and a minimum mandatory command set to access these files
Specifies encryption keys and access conditions
Is radio frequency (RF) layer agnostic
Includes personalization and life cycle management, as well as system functionality, to provide interoperability and fast adoption
Provides a security concept and guidelines

OSPT Alliance technology providers are allowed to add functionality outside the common core (which is defined in the standard) to differentiate their products, so long as they do not jeopardize interoperability of the core functions.

Introduced in late 2012, Version 2.0 of the CIPURSE specification is the latest version. Designed as a layered, modular architecture with application-specific profiles, the open and secure CIPURSE V2 standard comprises a single, consistent set of specifications for all security, personalization, administration and life-cycle management functions needed to create a broad range of interoperable transit applications – from inexpensive single-ride or daily paper tickets, to rechargeable fixed-count or weekly plastic tickets, to longer-term smart card- or smartphone-based commuter tickets that can also support loyalty and other applications. Three application-specific profiles – subsets of the CIPURSE V2 standard tailored for different use cases – have been defined, with which vendors are required to comply when creating products targeting these applications:

CIPURSE T – Takes advantage of the new transaction mechanisms included in the specification to support the use of high-level, microprocessor-based transactions using smart cards, mobile phones and similar devices for more complex transit fare applications, such as monthly or annual tickets, multi-system tickets and loyalty programs.
CIPURSE S – Supports tickets that can be recharged for a specific number of rides or weekly tickets and is essentially equivalent to and supplants the current CIPURSE 1.1 specification.
CIPURSE L – Supports applications that use very inexpensive, disposable single-ride or daily tickets.

Products based on different profiles can be added to fare collection systems at any time and can be used in parallel to provide transit operators the greatest flexibility in offering riders a range of transit fare options. Because they are derived from the same set of specifications, all the profiles are interoperable, reflect the same design criteria and have the same appearance, enabling developers to create products according to a family concept. With its modular “onion-layered” design, the CIPURSE standard can be easily enhanced in the future with additional functionality and new profiles created to address changes in technology and business. The CIPURSE V2 specification enables technology suppliers to develop and deliver innovative, more secure and interoperable transit fare collection solutions for cards, stickers, fobs, mobile phones and other consumer devices, as well as infrastructure components. In early 2013, the OSPT introduced the CIPURSE V2 Mobile Guidelines, a comprehensive set of requirements and use cases for developing and deploying CIPURSE-secured transit fare mobile apps for near field communication (NFC)-enabled smartphones, tablets and other smart devices. Providing everything developers need to implement and use the CIPURSE V2 open security standard when embedded in an NFC mobile device, the new guidelines enable transit operators to enhance their systems to support mobile ticketing with these new form factors.

Organization

Founded by smart card manufacturers Giesecke & Devrient GmbH (G&D) and Oberthur Technologies and chip suppliers Infineon Technologies AG and INSIDE Secure S.A.
(formerly INSIDE Contactless) in January 2010, the OSPT Alliance collectively defined the CIPURSE standard. The Alliance partners test their products for conformance with CIPURSE to demonstrate interoperability, and have engaged an independent test authority to test compliance with the standard, interoperability, and performance.

The OSPT Alliance

The OSPT Alliance is a nonprofit industry organization open to technology vendors, transit operators, government agencies, systems integrators, mobile device manufacturers, trusted service operators, consultants, industry associations and others wishing to participate in the organization’s education, marketing and technology development activities.

Members

As of February 2019, full members of the alliance are: Americaneagle.com Artesp ATM Barcelona AUSTRIACARD Brush Industries CEITEC S.A. City Group Consorcio Sir Cuenca Cosmo.ID Crane Payment Innovation Dataprom Delerrok Inc. DIMTS Discovery Research and Development Center Enotria ETDA Etertin Corp Facillite FEIG Electronic FIME G+D Mobile Security Gemalto GTech Technologia E Software (Gbits) GuardTek HID Global IDEMIA Identiv Infineon Technologies AG Instituto Modal ITSO Ltd. Keith Smith Consulting Kenetics Innovations KEOLABS Korean Testing Certification Linxens MaskTech Medius Miskimmin Consulting MK Smart Nexus Group NSB phg Planeta Informática Pri-Num Prokart Quanta-IT QuantumAeon Rambus Rede Ponto Certo Rede Protege RioCard RioCard TI San Joaquin Regional Transit District (RTD) São Paulo Transporte SC Soft Secure Technology Alliance Sequent Setransp Silone SIMA Smarting solutionLAB SpringCard Stratos Group Telenor Group Telexis The Open Ticketing Institute (OTI) Tmonet Transdata Smart TU Wien - Vienna University of Technology Tubitak Tue Minh Udobny Marshrut Universitat Politécnica de Catalunya Urbanito UTI Infrastructure Technology And Services Ltd. VISALUX Comércio e Indústria Ltda Washington Metropolitan Area Transit Authority Watchdata Technologies Ltd.
WUXI HUAJIE ZeitControl cardsystems GmbH

The alliance is open to companies on the component supply and system integration side, as well as transport agencies and other standards bodies, to contribute their experience and knowledge to the development of the CIPURSE open standard.

See also
Calypso (electronic ticketing system)

Resources
White Paper: An Open Standard for Next-Generation Transit Fare Collection
Presentation: A Secure and Open Solution for Seamless Transit Systems

References

External links
The OSPT Alliance

Public transport fare collection Electronic trading systems
33301133
https://en.wikipedia.org/wiki/Lightweight%20Portable%20Security
Lightweight Portable Security
Lightweight Portable Security (LPS) was a Linux LiveCD (or LiveUSB) distribution, developed and publicly distributed by the United States Department of Defense’s Air Force Research Laboratory, that was designed to serve as a secure end node. The Air Force Research Laboratory actively maintained LPS and its successor, Trusted End Node Security (TENS), from 2007 to 2021. It could run on almost any x86_64 computer (PC or Mac). LPS booted only in RAM, creating a pristine, non-persistent end node. It supported DoD-approved Common Access Card (CAC) readers, as required for authenticating users into PKI-authenticated gateways to access internal DoD networks. LPS turned an untrusted system (such as a home computer) into a trusted network client. No trace of work activity (or malware) could be written to the local computer's hard drive. As of September 2011 (version 1.2.5), the LPS public distribution included a smart card-enabled Firefox browser supporting DoD's CAC and Personal Identity Verification (PIV) cards, a PDF and text viewer, Java, a file browser, remote desktop software (Citrix, Microsoft or VMware View), an SSH client, the public edition of Encryption Wizard and the ability to use USB flash drives. A Public Deluxe version was also available that added LibreOffice and Adobe Reader software.

History

LPS and Encryption Wizard were initiated by the Air Force Research Laboratory's Anti-Tamper Software Protection Initiative program, started in 2001. In 2016, that program was ending, so LPS and Encryption Wizard were moved to the Trusted End Node Security program office. LPS, as of version 1.7, was rebranded Trusted End Node Security, or TENS. Encryption Wizard retained its name, but received the TENS logo as of version 3.4.11. In 2020, the COVID-19 outbreak caused new interest in telecommuting. The National Security Agency recommended U.S. government employees use government-furnished computers when working from home.
However, when it was necessary for an employee to use their home computer, the National Security Agency recommended TENS as one measure an individual employee could use to make that computer more secure. In 2021, TENS became compatible with UEFI Secure Boot. UEFI Secure Boot is used to protect the operating system installed on the computer's hard drive. As of June 2020, UEFI Secure Boot was available on many newer PCs, and it would prevent older versions of TENS from booting. In August 2021, the TENS web site announced the TENS program office had been decommissioned. The Defense Information Systems Agency was no longer willing to fund the program, and no other agency had agreed to champion it. "Potentially final" editions of TENS and Encryption Wizard had been released in April and May 2021.

Encryption Wizard

LPS came with Encryption Wizard (EW), a simple, strong file and folder encryptor for protection of sensitive but unclassified information (FOUO, Privacy Act, CUI, etc.). Written in Java, EW encrypts all file types for data-at-rest and data-in-transit protection. Without installation or elevated privileges, EW runs on Windows, Mac, Linux, Solaris, and other computers that support the Java software platform. With a simple drag-and-drop interface, EW offers 128-bit and 256-bit AES encryption, SHA-256 hashing, RSA signatures, searchable metadata, archives, compression, secure deleting, and PKI/CAC/PIV support. Encryption can be keyed from a passphrase or a PKI certificate. EW is GOTS—U.S. Government invented, owned, and supported software—and comes in three versions: a public version that uses the standard Java cryptographic library, a unified version that uses a FIPS 140-2 certified cryptographic library licensed from The Legion of the Bouncy Castle, and a government-only version that uses a FIPS 140-2 certified cryptographic stack licensed from RSA Security. The three versions interoperate.
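Encryption Wizard's internal design is not documented here; as a hedged illustration of the paragraph's point that encryption can be keyed from a passphrase, the sketch below derives an AES-sized key (128 or 256 bits, the sizes EW offers) from a passphrase using Python's standard-library PBKDF2. The iteration count and salt handling are assumptions for the example, not EW's actual parameters.

```python
import hashlib
import os

def derive_key(passphrase: str, key_bits: int = 256, salt=None):
    """Derive an AES-sized key (128 or 256 bits) from a passphrase.

    Uses PBKDF2-HMAC-SHA256; the salt would be stored alongside the
    ciphertext so the same key can be re-derived for decryption.
    """
    assert key_bits in (128, 256)
    if salt is None:
        salt = os.urandom(16)              # fresh random salt per file
    key = hashlib.pbkdf2_hmac(
        "sha256",
        passphrase.encode("utf-8"),
        salt,
        600_000,                           # assumed work factor
        dklen=key_bits // 8,
    )
    return salt, key

salt, key = derive_key("correct horse battery staple", key_bits=256)
print(len(key))  # 32 bytes = 256 bits
```

Re-running `derive_key` with the stored salt reproduces the same key, which is what makes passphrase-based decryption of a previously encrypted file possible.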
Public HTTPS access

The general public has had some difficulty accessing the LPS and TENS web sites, because from time to time Department of Defense web sites have used security settings somewhat different from common practice. As a result, users have had to configure their web browsers in a particular way in order to obtain LPS or TENS. Circa 2020, the main difference was that the web sites implemented HTTPS using a Department of Defense certificate authority rather than one of the commonly accepted certificate authorities. Because of these difficulties with the Department of Defense web servers, the LPS and TENS program office established a commercially hosted web site, http://www.gettens.online/, with instructions on how to configure a browser to work with the official TENS web site. This article incorporates text from the US Department of Defense SPI web site.

See also
XFCE
Lightweight Linux distribution

References

References to the Trusted End Node Security Program office refer to the Trusted End Node Security Program Office, Information Directorate, Air Force Research Laboratories, United States Air Force. References to the Software Protection Initiative refer to the DoD Anti-Tamper Program, Sensors Directorate, Air Force Research Laboratories, United States Air Force.

External links
Home page for the TENS Program office.

Operating system security Operating system distributions bootable from read-only media Live USB State-sponsored Linux distributions Linux distributions
33312073
https://en.wikipedia.org/wiki/Weightless%20%28wireless%20communications%29
Weightless (wireless communications)
Weightless was a set of low-power wide-area network (LPWAN) wireless technology specifications for exchanging data between a base station and thousands of machines around it.

History

An event was held at the Moller Centre in Cambridge, UK by Cambridge Wireless on September 30, 2011. Presentations were given by Neul, Landis+Gyr, Cable & Wireless, and ARM Holdings. The technology was promoted by the Weightless Special Interest Group (SIG), announced December 7, 2012. The group was led by William Webb, a professor at Cambridge and a founder of the company Neul. Another event was held in September 2013, around the time a version 1.0 was published. The name Weightless was chosen to reflect the low overhead per transmission for devices that need to communicate just a few bytes of data. The Weightless logo appears as uppercase letters with the 'W' appearing in the top-right corner of a light blue box that has a solid blue line above it. In September 2014, Cambridge-based Neul was acquired by Huawei for an estimated $25 million. By 2015, the company Nwave Technologies announced deployments in Copenhagen, Denmark and Esbjerg, Denmark. However, observers noted no products on the market. A company called Ubiik, based in Taiwan, announced pre-orders in 2017.

Implementation

Weightless-N is designed around a differential binary phase shift keying (DBPSK) digital modulation scheme to transmit within narrow frequency bands, using a frequency hopping algorithm for interference mitigation and enhanced security. It provides for encryption and implicit authentication using a shared secret key regime that encodes transmitted information with a 128-bit AES algorithm. The technology supports mobility, with the network automatically routing terminal messages to the correct destination. Multiple networks, typically operated by different companies, are enabled and can be co-located.
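The frequency hopping described above requires the base station and terminal to visit the same channels in the same order without signalling each hop. A common way to achieve that is for both ends to seed identical pseudo-random generators from shared state; the sketch below illustrates the idea. The channel count, seed, and use of Python's `random` module are illustrative assumptions, not the Weightless-N algorithm.

```python
import random

NUM_CHANNELS = 50  # assumed number of narrow sub-channels in the band

def hop_sequence(shared_seed: int, length: int):
    """Pseudo-random channel hop sequence.

    Base station and terminal seed identical PRNGs from shared state
    (e.g. material derived from the network key), so both sides compute
    the same channel order independently.
    """
    rng = random.Random(shared_seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(length)]

# Both ends compute the same sequence without exchanging it over the air.
base = hop_sequence(0x5EED, 8)
terminal = hop_sequence(0x5EED, 8)
print(base == terminal)  # True
```

Spreading transmissions across channels this way mitigates narrowband interference, and an eavesdropper without the shared state cannot predict where the next transmission will land.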
Each base station queries a central database to determine which network the terminal is registered to, in order to decode and route data accordingly. Weightless-W uses time-division duplex operation with frequency hopping and variable spreading factors in an attempt to increase range and accommodate low-power devices in frequency bands, or channels, within the terrestrial television broadcast band. Channels that are in use by a nearby television transmitter are identified and left unaffected, while channels not being used for broadcasting television can be allocated for use by Weightless devices. A network of base stations communicates with the Internet, or a private network, in order to pass information from devices to a computer system, and to pass information back to the devices. The downlink to devices uses time slots (TDMA) and the uplink to the base station is divided into sub-channels so that several devices can communicate with the base station. Originally, there were three published Weightless connectivity standards: Weightless-P, Weightless-N and Weightless-W. Weightless-N was an uplink-only LPWAN technology. Weightless-W was designed to operate in the TV white space. Weightless-P, a bi-directional, narrowband technology designed to operate in licensed and unlicensed ISM frequencies, was then just called "Weightless".

Communication and connection

A base station transmits a Weightless frame which is received by a few thousand devices. The devices are allocated a specific time and frequency to transmit their data back to the base station. The base station is connected to the Internet or a private network. The base station accesses a database to identify the frequencies, or channels, that it can use without interfering with terrestrial television broadcasts in its local area.
Weightless is a wireless communications protocol designed for what is called machine-to-machine (M2M) communication – part of the Internet of things – over distances ranging from a few metres to about 10 km.

Related technologies

Other technologies which use the channels not used for terrestrial television broadcast in a particular area are also being developed. One is Wi-Fi under the standard IEEE 802.11af. The IEEE 802.22 standard defines a MAC and PHY layer for TV white spaces that complies with the FCC and international standards for broadcasting in this spectrum. It also defines a general protocol model for negotiating and selecting a shared spectrum band for device operation. A Weightless radio implementation would comply with this standard to cooperatively share the available spectrum. Another technology is developed by the company Sigfox.

Specifications and features

The original Weightless specification was developed as a machine-to-machine, low-cost, low-power communication system for use in the white space between TV channels in 2011 by engineers working at Neul in Cambridge, UK. The Weightless-W specification is based on time-division duplex technology with spread spectrum frequency hopping, in an attempt to minimise the impact of interference, and with variable spreading factors in an attempt to increase range (at the expense of lower data rate) and to accommodate low-power devices with low data rates.

Weightless v1.0

The formal Weightless-W Standard was published in February 2013. The Weightless-N Standard was published in May 2015. In networks using Weightless-W technology, a base station queries a database which identifies the channels that are being used for terrestrial television broadcast in its local area. The channels not in use – the so-called white space – can be used by the base station to communicate with terminals using the Weightless-W protocol.
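The white-space database lookup described above reduces, at its core, to set subtraction: the base station asks which TV channels are occupied locally and may use the rest. The sketch below illustrates that selection step; the UHF channel numbering (21–68, as used for UK terrestrial TV) and the example occupancy data are assumptions for illustration, not part of any Weightless specification.

```python
# UK terrestrial TV uses 8 MHz UHF channels; channel numbers 21-68
# are assumed here for illustration.
ALL_CHANNELS = set(range(21, 69))

def free_channels(occupied):
    """Channels the database reports as unused by local TV broadcasts
    (the 'white space') and therefore available to a base station."""
    return sorted(ALL_CHANNELS - set(occupied))

# Example: the database says these channels carry local TV broadcasts.
in_use = {23, 26, 29, 33, 48, 55}
available = free_channels(in_use)
print(len(available))  # 48 channels in the band minus 6 occupied = 42
```

In a real deployment the occupancy set would come from the regulator's geolocation database for the base station's location and would be refreshed periodically as broadcast assignments change.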
Terminal endpoints were designed to be low-cost devices using minimal power, so that they could work autonomously for up to several years.

Air interface

The Weightless-W protocol operates in the TV channels band, which it divides into channels. A database is queried by a base station to determine which channels are in use by terrestrial television broadcast stations in the area, and which ones are free for use by white space devices (such as those using Weightless). A range of modulation and encoding techniques are used to permit each base station to communicate at a variety of speeds with terminals, some of which may be nearby and others several km away. Data rates may vary depending on the distance and the presence of radio interference – the typical range is alleged to be between about 0.1 Mbit/s and 16 Mbit/s. The design of the air interface and protocol minimises the costs of the equipment and its power consumption. A broadband downlink from a base station to a terminal uses a single carrier in an unused 6 MHz (for the USA) or 8 MHz (for the UK) TV channel.

See also
WiMAX
Telemetry
DASH7

References

External links
White Space Networks and Machine-to-Machine (M2M) Services at Oxford University

Mobile computers Networking standards Wireless
33336285
https://en.wikipedia.org/wiki/Adam%20Werritty
Adam Werritty
Adam Werritty (born 18 July 1978) is a Scottish businessman. Werritty is a friend of the former UK Secretary of State for Defence and ex-Secretary of State for International Trade, Liam Fox. He lived for a period in 2002 and 2003 at Fox's London flat and was best man at his wedding in 2005. The two were also business associates who once held joint investments in the healthcare consultancy firm UK Health. Werritty was reportedly an adviser of Fox's and is known to have accompanied him on at least 18 foreign business trips between 2009 and 2011. In 2007, when Fox was shadow Defence Secretary, they both attended a meeting with the Gulf Research Centre. Werritty was also appointed by Fox as the chief executive of the now disbanded conservative Atlanticist think-tank, "The Atlantic Bridge". Werritty made visits to Fox at the Ministry of Defence (MoD) in Whitehall on 22 occasions in 16 months; Werritty was not security-cleared with the MoD. Additionally, over a 17-month period ending October 2011, Werritty was present at 40 of Fox's 70 recorded engagements. The uncertain nature of Werritty's relationship with Fox led to an investigation by senior civil servants, initially the MoD's Permanent Secretary, Ursula Brennan, and latterly the Cabinet Secretary Sir Gus O'Donnell. Fox claimed that Werritty had never worked for him either in an official or unofficial capacity, despite allegations that he was using a source of advice outside the Civil Service, paid for by private funds. Disclosure of increasing amounts of detail of their contact, funding and explanations of their relationship led to Fox's resignation on 14 October 2011, in advance of O'Donnell's report of his investigation.

Personal life

Born in Kirkcaldy, Werritty was raised in St Andrews, Fife, and went to Madras College, where he played rugby in the 1st XV. He was also the 1st year boys school sports champion.
Werritty went to the University of Edinburgh to study public policy, becoming vice-president of the Scottish Conservative and Unionist Students branch. He graduated with a 2:2 in social policy. He left Scotland to work for the healthcare company PPP and lived in several places in London, staying rent-free between 2002 and 2003 in Fox's taxpayer-subsidised flat in Southwark, near Tower Bridge. Werritty was Fox's best man at his wedding in 2005. Werritty lives in Pimlico near Vauxhall Bridge, close to Parliament. He is a member of the Carlton Club and the Conservative Party.

Investigation

Werritty was investigated by senior civil servants led by Cabinet Secretary Sir Gus O'Donnell. The Prime Minister David Cameron first asked for an interim report of the MoD internal inquiry by 10 October 2011. The final report was initially due to be submitted on 21 October 2011, but O'Donnell's findings were released earlier than anticipated, on 18 October. Amongst other findings, the report stated that the former defence secretary had blocked civil servants from attending key meetings where Adam Werritty was in attendance, had failed to tell his permanent secretary that he had solicited funds to bankroll Werritty, and had ignored private office requests to distance himself from the relationship. The investigations into Werritty's close ties also revealed that he had visited Fox in the Ministry of Defence Main Building on 22 occasions during a period of 16 months. In addition, Werritty was present on 18 overseas trips undertaken by Fox in the course of his duties as Secretary of State. On 10 October 2011 the MoD published a full list of Fox's meetings, from the beginning of his term in office (20 May 2010) to 8 October 2011, and it revealed that Werritty was present at 40 of Fox's 70 engagements in that period.
Ties to Liam Fox

His friendship with Liam Fox began in the late 1990s, when Fox was an Opposition Front Bench Spokesman on Scotland and Constitutional Affairs and Werritty was studying public policy at Edinburgh University. They had a shared interest in politics and the United States. On 10 October 2011, in a statement to the House of Commons, Liam Fox said that Werritty had worked as a paid intern in Fox's parliamentary office when the Conservative Party was in opposition (1997–2010) and at this time had a Parliamentary pass. Fox said records showed Werritty received a total payment of £5,800 for research work undertaken during that time. Werritty lived in Fox's apartment in Southwark, London, during 2002 and 2003. The property in which Werritty stayed rent-free was mortgaged at £1,400 per month and covered by Fox's Additional Costs Allowance (ACA), part of his MP's expenses. In 2011, Werritty stayed with Fox at a villa in Spain during an August holiday break at the climax of the 2011 Libyan civil war.

Foreign trips to Dubai, Israel, Washington, and Sri Lanka

Financial backers linked to Israel and a private intelligence firm helped fund Werritty's travels with Fox. In April 2007, Werritty and Fox attended an official meeting with the Gulf Research Centre, an independently run body that conducts research on issues concerning the Middle East. The two also attended an Israeli security conference centred on relations with Palestine, as well as Iranian sanctions, which took place in Herzliya in 2009. Fox is a strong supporter of Israel and is a member of Conservative Friends of Israel. Werritty is listed in conference proceedings as "Dr. Adam Werritty", an adviser to Fox in his role as Shadow Defence Secretary. In September 2010 the pair were in Washington and met at a defence industry dinner attended by some of the US's leading generals, including General James Mattis, commander of US Central Command.
In June 2011 Werritty organised a business meeting at the Shangri-La Hotel in Dubai. The meeting was attended by Werritty, Fox, the British private equity boss and CEO of the Porton Group Harvey Boulter, and two other Dubai-based businessmen. Werritty had earlier been contacted by a lobbying firm known as Tetra Strategy, whom Boulter had hired at a rate of £10,000 per month, in an attempt to have Fox intervene in a Porton Group legal dispute that indirectly involved the MoD. Tetra are believed to have begun working towards arranging a meeting with Werritty or Fox as early as 25 March 2011. In an email from Lee Petar, Tetra's boss, to Boulter, Werritty is described as the "special adviser to the secretary of state for defence Liam Fox." Werritty's initial meeting with Boulter in April 2011 led to discussions with Fox regarding the sale of a product called Cellcrypt. The 45-minute Dubai meeting in June 2011 was primarily about the possible sale of the voice encryption software to the British MoD. Boulter has claimed that the matter of a legal battle between Porton Group and 3M concerning Acolyte, an EU regulatory-approved rapid detection technology for MRSA, and a deal worth £41 million was allocated no more than 5–10 minutes at the end of the meeting. According to The Guardian, details relating to the nature of the visit and the business matters discussed suggest that it was "highly irregular". The MoD has stated that there were no officials present at the meeting, but one of those present claimed to have received the impression that all of those in attendance had been security cleared. Werritty did not have such clearance. On 7 October 2011, The Guardian reported that Werritty had met senior Sri Lankan ministers on an official visit with Fox in summer 2011.
The Sri Lankan trip was originally scheduled for December 2010, but a disagreement with the foreign secretary, William Hague, saw the visit moved to July, despite allegations that the Sri Lankan government had supported paramilitary groups in defeating the Tamil Tigers. As a result of the trip, questions were raised about the appropriateness of Werritty accompanying Fox on government trips abroad. Fox had previously claimed that Werritty had never joined him on such trips, but details relating to how frequently Werritty was in his company while abroad later emerged.

Overview of all foreign trips with Fox

Between February 2009 and 2011 Werritty was in Fox's company on many trips abroad:
Israel, February 2009.
Singapore, 4–6 June 2010.
Dubai, 7–8 June 2010.
Florida, 2–3 July 2010.
Dubai, 6–8 August 2010.
Washington DC, September 2010.
Bahrain, 2–6 December 2010.
Dubai, 17–22 December 2010.
Hong Kong, 16–23 January 2011.
Israel, 6–7 February 2011.
Switzerland, 17–21 February 2011.
Dubai, April 2011.
Abu Dhabi, 14–18 April 2011.
Florida/Washington, 22–25 May 2011.
Hong Kong, 31 May – 1 June 2011.
Singapore, 2–6 June 2011.
Sri Lanka, July 2011.
Dubai, 17 June 2011.
Washington DC, 30 June – 3 July 2011.
Spain, 5–9 August 2011.

Reported advisory role

Wealthy Conservative donors including Michael Hintze indirectly provided financial support for Werritty's role as a political and strategic adviser to Fox by funding organisations such as Atlantic Bridge, a registered charity set up by Fox and run by Werritty that was used to bankroll the adviser's various international travels. Hintze, a major Tory party donor, donated £104,000 to Fox's charity. He also provided Werritty with free office space at the headquarters of his £5bn CQS hedge fund and allowed both Fox and Werritty to use his private jet.
Days before the forced cessation of Atlantic Bridge's operations by the Charity Commission, Werritty founded a company called Pargav Ltd., which went on to receive a further £147,000 in donations from Tory party supporters and businessmen. Pargav's sole director was Oliver Hylton, a close senior aide of Hintze's and the manager of his charitable foundation that paid the donations to Atlantic Bridge. It emerged that Hylton, who was initially suspended by CQS following news of the Werritty affair, later ceased employment with the company. Werritty was involved in a number of secret meetings (the first on 8 September 2009) organised by Denis MacShane, involving himself and Matthew Gould, Britain's Ambassador to Israel, with the intention of enlisting British support for an Israeli attack on Iran. Werritty's close ties to Conservative hardliners, it was argued, enabled him to bypass Whitehall officials and helped Fox promote strongly pro-American policies and Euroscepticism in the UK and abroad. In response to revelations about Werritty's activities becoming publicly known, Fox's political allies launched an effort to distance Werritty from the minister by describing him as an opportunist who had "taken advantage" of his personal relationship with Fox and as a fantasist "masquerading as someone he was not." Werritty distributed business cards that declared he was an "advisor (sic) to the Rt Hon Dr Liam Fox MP", although Fox claims he requested him not to do so. Werritty also made visits to Fox at the MoD's HQ in Whitehall on 22 occasions in 16 months, which led the Labour Party to request an inquiry into a possible national security breach. Werritty was able to arrange access to the minister for private sector companies on matters where they could both see commercial gains, despite denials of any role as an adviser.
Fox has claimed that Werritty was not connected with backers of companies who wanted defence contracts but was instead funded by ideological backers. According to the BBC, this meant that Fox was using sources of advice outside the Civil Service and paid for by private funds. Fox has declared that: "I do accept that given Mr Werritty's defence-related business interests, my frequent contacts with him may have given an impression of wrongdoing, and may also have given third parties the misleading impression that Mr Werritty was an official adviser rather than simply a friend". The former chairman of the Standards and Privileges Committee, Sir Alistair Graham, stated that Fox had shown serious misjudgement, and the BBC's political editor Nick Robinson observed that whatever Fox claimed, "many will judge that Adam Werritty acted as his adviser...his business cards stated he was an adviser, he booked hotels as an adviser, he fixed meetings with people who believed he was an adviser...he raised funds from people who thought that too...[and] the sole director of the not-for-profit company set up to fund Werritty regarded him as 'an adviser of some sort to Dr Fox'."

Role in the Atlantic Bridge

Werritty was also responsible for operating The Atlantic Bridge from Fox's office at taxpayers' expense. The conservative "charity" worked in conjunction with a US lobbying group, the American Legislative Exchange Council (ALEC), which allows cooperation between legislators and corporations such as Philip Morris, Texaco and McDonald's in addressing common interests. According to US charity records, Werritty was listed as the UK executive director, with an address corresponding to Fox's former room at the House of Commons, No. 341 in the MPs' block at Portcullis House, which served as the charity's official headquarters. The Guardian reported that, between 2007 and 2010, Werritty's income as chief executive of The Atlantic Bridge was in excess of £90,000.
The charity was established by Fox to help US/UK relations and serve as a reminder of the Reagan–Thatcher era, and Werritty was given a lead role. The charity also functioned as a counterpart to the ALEC-founded Atlantic Bridge Group, a sister organisation in the United States. Following criticism by regulators that the charity was too politically oriented to be eligible for charitable status, the UK wing disbanded in September 2011.

Resignation of Liam Fox

On 14 October 2011, following the furore over Adam Werritty, Liam Fox resigned from his position of Defence Secretary.

See also
The Atlantic Bridge Research and Education Scheme

References

External links

1978 births Living people Alumni of the University of Edinburgh Scottish businesspeople People educated at Madras College People from Kirkcaldy People from St Andrews Scottish Conservative Party politicians Scottish people of American descent British lobbyists
33356408
https://en.wikipedia.org/wiki/Home%20Video%20Channel
Home Video Channel
Home Video Channel (HVC) was a British cable television service devoted to broadcasting low-budget movies (such as horror, action, adventure, science fiction and erotica) from 8.00pm to midnight; its owners also operated The Adult Channel, which started on 31 January 1992.

History

The service launched in September 1985 and was created by Ealing Cable as one of two channels to help build up content and viewership, the other being Indra Dhnush, an Asian channel. During its early years in operation, HVC purchased many movies as cheaply as possible, making copies via low-band U-matic tapes and distributing the films to other cable operators (along with a paper-based schedule) to play within their own local cable areas using a semi-automated system. In March 1987, HVC was sold to one of its rivals, Premiere; the new owner continued with the channel's existing operational model, including the distribution of tapes, and increased its broadcasting hours from 7.00pm to 7.00am. In 1989, HVC was sold to a private consortium which expanded its operation by switching to direct broadcast to British and European cable operators instead of sending out tapes, transmitting on the Astra satellite system.

The Adult Channel

From 31 January 1992, the company operated a pornographic network called The Adult Channel, a satellite-delivered subscription service that featured cable-edited versions of adult movies with softcore content, as well as erotic programmes including Electric Blue and a selection of short stories from Teresa Orlowski. The Adult Channel broadcast for four hours a day commencing at midnight, and was available to approximately two million cable and four million (direct-to-home) satellite households in the United Kingdom. The Adult Channel was also broadcast throughout continental Europe and had subscribers in over 40 countries.
HVC continued to operate during the evening with its movie service showing science fiction, erotica, action, adventure and horror films (especially the uncut versions where available) during the pre-midnight period. The two services were offered to cable operators as a seamless 8.00pm to 4.00am programming service at a single package price. In January 1995, the station's transmission was moved from the Astra 1B satellite to the new Astra 1D; the channels used frequencies that were not available on the original Sky receivers, as they were outside the original BSS band. Sky issued viewers with frequency shifters (known as "ADX Plus Channel Expanders"): small boxes the size of a cigarette packet, with a single switch and an on/off LED, connected between the dish and the receiver and powered by the receiver. These allowed viewers to switch manually between the Astra 1A and Astra 1D frequency bands, which were offset by 250 MHz. By 1997, The Adult Channel had lost subscribers and much of its market share in the United Kingdom. Several factors were believed to have contributed to this decline, including the launch of two competing pornographic services (Television X and Playboy TV) in 1995, increased piracy, and the channel's use of an Astra 1D satellite transponder. In an effort to address these issues, in 1998 the company restructured HVC's management and changed transponders to allow it to broadcast on the British Sky Broadcasting satellite from August 1998. The new transponder was less expensive, and the changes were successful, with the number of subscribers increasing. The Adult Channel was carried after the Sci-Fi Channel and The History Channel, two widely distributed networks. HVC also switched to the Sky encryption technology in October 1997 to curtail signal piracy. HVC also increased The Adult Channel's programming budget for 1998 with an added emphasis on European programming.
The company also increased HVC's advertising budget and reallocated it to the UK DTH market in an effort to regain lost market share. On 1 May 1999, Home Video Channel ceased transmission after 14 years on air, with only The Adult Channel continuing to broadcast afterwards. Spice Networks In 1994, Home Video Channel Limited was acquired by Spice Networks, which expanded distribution throughout the rest of Europe by increasing the number of authorised agents in Western Europe that distributed direct-to-home subscriptions through sales of smart cards. HVC entered into an agency agreement with Nuevas Estructuras Televisivas, who had secured affiliation agreements in over thirty Spanish cable systems. The Adult Channel was also carried on the Canal Digital platform, which served subscribers in Scandinavia, Benelux and Germany (via Deutsche Telekom, which supplied 16 million homes via cable). In Eastern Europe, the channel was carried by cable systems in Russia, Lithuania, Estonia and Slovenia. One of the more promising programming arrangements was with Metromedia, which operated cable systems in Romania and Russia. However, several of the Romanian cable systems ceased distribution of The Adult Channel as a result of the devaluation of the Romanian currency. Playboy TV UK On 11 May 1995, it was announced that Hugh Hefner's Playboy magazine, which had produced Playboy TV in the United States since its launch on 1 November 1982, would start a new British television station in partnership with Flextech (51%) and British Sky Broadcasting (30%). Playboy Enterprises chairman and chief executive officer Christie Hefner said: David Chance, deputy managing director of British Sky Broadcasting, and Roger Luard, managing director of Flextech plc, jointly said: On 1 November 1995, Playboy TV UK started broadcasting for the first time, airing from 11.30pm to 4.00am Monday to Thursday, and from midnight on Friday to Sunday.
The channel offered a combination of programmes including live shows, comedy series, documentaries and films, featuring several big-name stars including Pamela Anderson, Jenny McCarthy, Kathy Lloyd and Jo Guest. On 1 December 1998, HVC acquired the 81% interest in Playboy TV UK/Benelux from Flextech and British Sky Broadcasting Limited, and HVC and Playboy TV UK were subsequently merged. Playboy said that HVC would pay approximately US$9 million for the 81% interest and that the timing of the payments would be based on the network's future cash flows. Playboy TV and HVC continued to be delivered on Sky's satellite platform as well as via cable. Playboy Entertainment Group president Anthony J. Lynn said: On 11 February 2005, Playboy TV UK was fined by Ofcom for broadcasting Sandy Babe Abroad, a hardcore pornographic film, saying "it includes material which should not be transmitted at any time under any circumstance on British television". On 2 April 2009, the station was again fined by Ofcom for breaches of its licence, by broadcasting "sexually explicit material unencrypted". On 16 January 2013, it was fined once again for failing to ensure that children were protected from potentially harmful pornographic material; Ofcom said "there wasn't a system in place on Playboy's on-demand programmes services and they didn't have acceptable controls in place to check that users were aged 18 or over". Playboy TV UK was also available in Finland and Scandinavia through Canal Digital (in Norway also via Get), in Belgium through Telenet, in Switzerland through Cablecom, in Africa, and also in New Zealand through Sky Network Television. On 30 November 2017, Sky's EPG slot was bought for the Television X Pay-Per-Night service owned by Portland TV; the channel was unavailable on Virgin Media for two weeks but returned in mid-December 2017. The channel finally closed on Virgin Media in July 2018, with its slot being taken over by sister station XXX Brits.
See also List of European television stations Timeline of cable television in the United Kingdom Pornography in the United Kingdom List of adult television channels Video nasty References Television channels in the United Kingdom Television channels and stations established in 1985 Defunct British television channels Television channels and stations disestablished in 1999 1985 establishments in the United Kingdom 1999 disestablishments in the United Kingdom
33364019
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S%20III
Samsung Galaxy S III
The Samsung Galaxy S III (or Galaxy S3) is an Android smartphone designed, developed, and marketed by Samsung Electronics. Launched in 2012, it had sold about 70 million units by 2015 with no recalls ever recorded. It is the third smartphone in the Samsung Galaxy S series. It has additional software features, expanded hardware, and a redesigned physique compared with its predecessor, the Samsung Galaxy S II, released the previous year. The "S III" employs an intelligent personal assistant (S Voice), eye-tracking ability, and increased storage. Although a wireless charging option was announced, it never came to fruition. However, there are third-party kits which add support for Qi wireless charging. Depending on the country, the smartphone comes with different processors, RAM capacities, and 4G LTE support. The device was launched with Android 4.0.4 "Ice Cream Sandwich", was updated to Android 4.3 "Jelly Bean", and can be updated to Android 4.4 "KitKat" on variants with 2 GB of RAM. The phone's successor, the Samsung Galaxy S4, was announced on 14 March 2013 and was released the following month. Following an 18-month development phase, Samsung unveiled the S III on 3 May 2012. The device was released in 28 European and Middle Eastern countries on 29 May 2012, before being progressively released in other major markets in June 2012. Prior to release, 9 million pre-orders were placed by more than 100 carriers globally. The S III had been released by approximately 300 carriers in nearly 150 countries by the end of July 2012. More than 20 million units of the S III were sold within the first 100 days of release, and more than 50 million by April 2013. Because of overwhelming demand and a manufacturing problem with the blue variant of the phone, there was an extensive shortage of the S III, especially in the United States. Nevertheless, the S III was well received commercially and critically, with some technology commentators touting it as the "iPhone killer".
In September 2012, TechRadar ranked it as the No. 1 handset in its constantly updated list of the 20 best mobile phones, while Stuff magazine likewise ranked it at No. 1 in its list of 10 best smartphones in May 2012. The handset also won the "European Mobile Phone of 2012–13" award from the European Imaging and Sound Association, as well as T3 magazine's "Phone of the Year" award for 2012. It played a major role in boosting Samsung's record operating profit during the second quarter of 2012. The S III was also part of a high-profile lawsuit between Samsung and Apple. In November 2012, research firm Strategy Analytics announced that the S III had overtaken Apple's iPhone 4S to become the world's best-selling smartphone model in Q3 2012. In April 2014, following the release of its new flagship, the Galaxy S5, Samsung released a refreshed version called the "Galaxy S3 Neo", which has a quad-core Snapdragon 400 processor clocked at either 1.2 or 1.4 GHz. It has 1.5 GB of RAM and 16 GB of internal storage and ships with Android 4.4.4 "KitKat". The Samsung Galaxy S III was succeeded by the Samsung Galaxy S4 in April 2013. History Design work on the S III started in late 2010 under the supervision of Chang Dong-hoon, Samsung's Vice President and Head of the Design Group of Samsung Electronics. From the start, the design group concentrated on a trend which Samsung dubs "organic", which suggests that a prospective design should reflect natural elements such as the flow of water and wind. Some of the results of this design were the curved outline of the phone and its home screen's "Water Lux" effect, where taps and slides produce water ripples. Throughout the eighteen-month design process, Samsung implemented stringent security measures and procedures to maintain secrecy of the eventual design until its launch. Designers worked on three prototypes concurrently while regarding each of them as the final product.
Doing so required a constant duplication of effort, as they had to repeat the same process for all three prototypes. The prototypes, which were forbidden to be photographed, were locked in a separate laboratory, accessible only by core designers. They were transported by trusted company employees, instead of third-party couriers. "Because we were only permitted to see the products and others weren't," explained Principal Engineer Lee Byung-Joon, "we couldn't send pictures or drawings. We had to explain the Galaxy S III with all sorts of words." Despite such security measures, specifications of one of the three units were leaked by Vietnamese Web site Tinhte, although it was not the selected design. Speculation in the general public and media outlets regarding the handset's specifications began gathering momentum several months before its formal unveiling in May 2012. In February 2012, prior to the Mobile World Congress (MWC) in Barcelona, Spain, there were rumors that the handset would incorporate a 1.5 GHz quad-core processor, a display of 1080p (1080×1920 pixels) resolution, a 12-megapixel rear camera and an HD Super AMOLED Plus touchscreen. More accurate rumored specifications included 2 GB of RAM, 64 GB of internal storage, 4G LTE, a screen, an 8-megapixel rear camera, and a thick chassis. Samsung confirmed the existence of the Galaxy S II's successor on 5 March 2012, but it was not until late April 2012 that Samsung's Senior Vice-President Robert Yi confirmed the phone to be called "Samsung Galaxy S III". After inviting reporters in mid-April, Samsung launched the Galaxy S III during the Samsung Mobile Unpacked 2012 event at Earls Court Exhibition Centre, London, United Kingdom, on 3 May 2012, instead of unveiling it earlier in the year during either the Mobile World Congress or Consumer Electronics Show (CES). One explanation for this decision is that Samsung wanted to minimize the time between its launch and availability.
The keynote address of the hour-long event was delivered by Loesje De Vriese, Marketing Director of Samsung Belgium. Following the launch of the Galaxy S4 in June 2013, Samsung was reportedly retiring the phone earlier than planned because of low sales numbers and to streamline manufacturing operations. Features Hardware Design The S III has a plastic chassis measuring long, wide, and thick, with the device weighing . Samsung abandoned the rectangular design of the Galaxy S and Galaxy S II, and instead incorporated round corners and curved edges, reminiscent of the Galaxy Nexus. The device has been available in several color options: white (marketed as "marble white"), black, grey, brushed dark blue (marketed as "pebble blue"), red (marketed as "garnet red"), and brown. A "Garnet Red" model was made available exclusively to US carrier AT&T on 15 July 2012. In addition to the touchscreen, the S III has several physical user inputs, including a home button located below the screen, an option key to the left side of the home button, a back key on the right side of the home button, a volume key on the left edge and a power/lock key on the right. At the top there is a headphone jack and one of the two microphones on the S III; the other is located below the home button. Chipsets The S III comes in two distinct variations that differ primarily in the internal hardware. The international S III version has Samsung's Exynos 4 Quad system on a chip (SoC) containing a 1.4 GHz quad-core ARM Cortex-A9 central processing unit (CPU) and an ARM Mali-400 MP graphics processing unit (GPU). According to Samsung, the Exynos 4 Quad doubles the performance of the Exynos 4 Dual used on the S II, while using 20 percent less power. Samsung had also released several 4G LTE versions—4G facilitates higher-speed mobile connection compared to 3G—in selected countries to exploit the corresponding communications infrastructures that exist in those markets. 
Most of these versions use Qualcomm's Snapdragon S4 SoC featuring a dual-core 1.5 GHz Krait CPU and an Adreno 225 GPU. The South Korean and Australian versions are a hybrid of the international and 4G-capable versions. Sensors Like its predecessor, the S3 is equipped with an accelerometer, gyroscope, front-facing proximity sensor and a digital compass sensor. However, the Galaxy S3 is the first Samsung flagship phone to be equipped with a barometer sensor. Storage The S III has up to 2 GB of RAM, depending on the model. The phone comes with either 16, 32, or 64 GB of storage; additionally, microSDXC storage offers a further 64 GB for a potential total of 128 GB. Moreover, 50 GB of space is offered for two years on Dropbox—a cloud storage service—for purchasers of the device, doubling rival HTC's 25 GB storage offer for the same duration. Display The S III's HD Super AMOLED display measures on the diagonal. With a 720×1280-pixel (720p) resolution, its 306 pixels per inch (PPI, a measure of pixel density) is relatively high, which is accommodated by the removal of one of the three subpixels—red, green and blue—in each pixel to create a PenTile matrix display; consequently, it does not share the "Plus" suffix found on the S II's Super AMOLED Plus display. The glass used for the display is the damage-resistant Corning Gorilla Glass 2, except on the S3 Neo variant. The device's software includes a feature known as "Smart Stay", which uses the device's front camera to detect whether the user's eyes are looking at the screen, and prevents the screen from automatically turning off while the user is still looking at it. Like its predecessor, the Samsung Galaxy S3 supports Mobile High-Definition Link (MHL) for connection to HDMI displays.
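The quoted 306 PPI figure can be sanity-checked from the panel resolution alone; a minimal sketch in Python, assuming the commonly cited 4.8-inch diagonal (the diagonal measurement is not given in the text above):

```python
import math

# Pixel density (PPI) = diagonal length in pixels / diagonal length in inches.
width_px, height_px = 720, 1280   # 720p HD Super AMOLED resolution
diagonal_in = 4.8                 # assumed diagonal size, commonly cited for the S III

diagonal_px = math.hypot(width_px, height_px)  # ≈ 1468.6 px
ppi = diagonal_px / diagonal_in

print(round(ppi))  # 306
```

The result matches the pixel density stated in the article.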
The S3 is newly equipped with Miracast support (also known as Screen Mirroring; also branded "AllShare Cast" by Samsung), which allows wirelessly transmitting the device's display to a supported television or Blu-ray player with integrated Miracast support. Camera The S III has an 8-megapixel (3264×2448) camera similar to that of the Galaxy S II. It can take 3264×2448-pixel resolution photos and record videos in 1920×1080-pixel (1080p) resolution. The camera software allows digital zooming up to four times, and displays the video's current file size (in kilobytes) as well as remaining storage capacity (in megabytes) in real time during video recording. Samsung improved the camera's software over that of its predecessor to include zero shutter lag, and a Burst Shot mode that allows capturing up to 20 full-resolution photos in quick succession. Another feature, Best Shot, allows selecting the best photo out of eight frames captured in quick succession. The phone can also take pictures while recording videos. Photos can additionally be captured using voice commands such as "cheese", "shoot", "photo", and "picture". The shortcuts on the left pane are customizable. The rear-facing camera is complemented by a 1.9-megapixel front-facing camera that can record 720p videos. The phone has an LED flash and autofocus. The Galaxy S3 records videos with stereo audio and is able to capture 6 MP (3264×1836) photos during 1080p video recording, which is the full 16:9 aspect-ratio section of the 4:3 image sensor. Battery The S III's user-replaceable Li-ion 2,100 mAh battery is said to have a 790-hour standby time or 11 hours of talk time on 3G, compared to 900 hours in standby and 21 hours of talk time on 2G.
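The 6 MP (3264×1836) figure quoted above for stills captured during video recording follows directly from taking a 16:9 crop of the 4:3 sensor: the full sensor width is kept and only the height is trimmed. A quick arithmetic check in Python:

```python
# Full 4:3 sensor resolution: 3264×2448, i.e. roughly 8 MP.
sensor_w, sensor_h = 3264, 2448

# A 16:9 crop keeps the full sensor width and trims the height.
crop_h = sensor_w * 9 // 16              # 1836 rows
crop_mp = sensor_w * crop_h / 1_000_000  # ≈ 5.99, marketed as 6 MP

print(crop_h, round(crop_mp))  # 1836 6
```

This reproduces both the 3264×1836 crop and the rounded 6 MP marketing figure.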
Connectivity Built into the battery is near field communication (NFC) connectivity, which allows users to share files, map directions and YouTube videos quickly using Wi-Fi Direct (through Android Beam), and perform non-touch payments at shops that employ specially equipped NFC cash registers. The battery can be wirelessly charged using a special charging pad (sold separately) that utilizes magnetic resonance to produce a magnetic field through which electricity could be transferred. The S III is advertised as having an MHL port that can be used both as a micro-USB On-The-Go port and for connecting the phone to HDMI devices. However, a retailer later discovered that Samsung had made a modification to the electronics of the port such that only the adapter made specifically for this model by Samsung could be used. CNET TV torture-tested an S III by cooling it to , placing it in a heat-proof box and heating it to , and submerging it in water—the S III survived all three tests. The phone also did not exhibit any scratches when a key was repeatedly scraped against the display. However, Android Authority later carried out a drop test with the purpose of comparing the S III and the iPhone 5. The screen on the S III shattered on the second drop test, while the iPhone received only minor scuffs and scratches on the metal composite frame after three drop tests. Accessories Accessories for the Galaxy S3 include a wireless charging kit, the Pebble MP3 player, a docking station, a C-Pen, a slimline case, and a car mount. Software and services User interface The S III is powered by Android, a Linux-based, open source mobile operating system developed by Google and introduced commercially in 2008. Among other features, the software allows users to maintain customized home screens which can contain shortcuts to applications and widgets for displaying information. 
Four shortcuts to frequently used applications can be stored on a dock at the bottom of the screen; the button in the center of the dock opens the application drawer, which displays a menu containing all of the apps installed on the device. A tray accessed by dragging from the top of the screen allows users to view notifications received from other apps, and contains toggle switches for commonly used functions. Pre-loaded apps also provide access to Google's various services. The keyboard software is equipped with a clipboard manager. The S III uses Samsung's proprietary TouchWiz graphical user interface (GUI). The "Nature" version used by the S III has a more "organic" feel than previous versions, and contains more interactive elements such as a water ripple effect on the included lock screen, to resemble its appearance in nature. To complement the TouchWiz interface, and as a response to Apple's Siri, the phone introduces S Voice, Samsung's intelligent personal assistant. S Voice can recognize eight languages including English, Korean, Italian and French. Based on Vlingo, S Voice enables the user to verbally control 20 functions such as playing a song, setting the alarm, or activating driving mode; it relies on Wolfram Alpha for online searches. With the Wake-up Commands feature, voice commands can be set to launch apps and tasks out of stand-by mode, such as S Voice, camera, music player, voice recorder, missed calls, messages, and schedule. The Auto Haptic feature can complement audio with synchronous haptic feedback. The included telephone application is equipped with additional options for noise cancellation, call holding, volume boosting and the ability to personalize the call sound. Gallery software The new gallery software of the Galaxy S3 allows sorting photos and videos chronologically, by location, or by group. Photos with tagged faces can also be sorted by person.
A new Spiral View feature has been added with the Android Jelly Bean 4.1.2 update, which displays the thumbnails in a 3D spiral. Video player The included video player software is newly equipped with the ability to play videos in a floating pop-up that can be moved freely around the screen. In addition, the video player application is able to show motion thumbnails, which means that the preview thumbnails show a moving portion of the video. Software updates The S III initially shipped with Android version 4.0.4, named "Ice Cream Sandwich", which became commercially available in March 2012 with the Nexus S and Galaxy Nexus. Ice Cream Sandwich has a refined user interface, and expanded camera capabilities, security features and connectivity. In mid-June 2012, Google unveiled Android 4.1 "Jelly Bean", which employs Google Now, a voice assistant similar to S Voice, and incorporates other software changes. Samsung accommodated Jelly Bean in the S III by making last-minute hardware changes to the phone in some markets. Jelly Bean updates began rolling out to S IIIs in selected European countries, and to T-Mobile in the United States, in November 2012. Samsung started pushing Android 4.1.2 Jelly Bean to the international version of the S III in December 2012. This update shipped the so-called Premium Suite upgrade, which brought additional features to the Galaxy S3, such as the split-screen app view known from the Galaxy Note 2.<ref name=Suite>{{Cite web |url=https://www.cnet.com/news/samsungs-galaxy-s3-to-get-premium-suite-upgrade/ |title=CNet.com article about the Galaxy S3 "Premium Suite" upgrade.
|access-date=2 July 2020 |archive-date=2 July 2020 |archive-url=https://web.archive.org/web/20200702135347/https://www.cnet.com/news/samsungs-galaxy-s3-to-get-premium-suite-upgrade/ |url-status=live }}</ref> In December 2013, Samsung began rolling out Android 4.3 for the S III, adding user interface features backported from the Galaxy S4, and support for the Samsung Galaxy Gear smartwatch. In March 2014, Samsung started the rollout of Android 4.4.2 KitKat for the 2 GB variant of the S III. Services The S III comes with a multitude of pre-installed applications, including Google apps such as Google Play, YouTube, Google+, Gmail, Google Maps, Voice Search and Calendar, in addition to Samsung-specific apps such as ChatON, Game Hub, Music Hub, Video Hub, Social Hub and Navigation. To address the fact that iPhone users were reluctant to switch to Android because the OS is not compatible with iTunes, from June 2012 Samsung offered customers of its Galaxy series the Easy Phone Sync app to enable the transfer of music, photos, videos, podcasts, and text messages from an iPhone to a Galaxy device. The user is able to access Google Play, a digital-distribution multimedia-content service exclusive to Android, to download applications, games, music, movies, books, magazines, and TV programs. Interaction Apart from S Voice, Samsung has directed the bulk of the S III's marketing campaign towards the device's "smart" features, which facilitate improved human-device interactivity. These features include: "Direct Call", the handset's ability to recognize when a user wants to talk to somebody instead of messaging them, if they bring the phone to their head; "Social Tag", a function that identifies and tags people in a photo and shares photos with them; "Smart Alert", a haptic feedback (short vibration) when the device detects being picked up after new notifications have arrived; and "Pop Up Play", which allows a video and other applications to occupy the screen at the same time.
In addition, the S III can beam its screen to a monitor or be used as a remote controller (AllShare Cast and Play) and share photos with people who are tagged in them (Buddy Photo Share). Multimedia The S III can access and play traditional media formats such as music, movies, TV programs, audiobooks, and podcasts, and can sort its media library alphabetically by song title, artist, album, playlist, folder, and genre. One notable feature of the S III's music player is Music Square, which analyses a song's intensity and ranks the song by mood so that the user can play songs according to their current emotional state. The device also introduced Music Hub, an online music store powered by 7digital with a catalogue of over 19 million songs. Its "Auto Haptic" feature vibrates synchronously with the audio output for intensification, similar to the audio-coupled haptic effect added to stock Android in 2021. Voice over LTE The S III was the first smartphone to support voice over LTE, with the introduction of the HD Voice service in South Korea. The phone enables video calling with its 1.9 MP front-facing camera, and, with support for the aptX codec, improves Bluetooth-headset connectivity. Texting on the S III does not add any significant new features over the S II. Speech-to-text is aided by Vlingo and Google's voice-recognition software. As on other Android devices, a multitude of third-party keyboard applications is available to complement the S III's stock keyboard. Enterprise On 18 June 2012, Samsung announced that the S III would have a version with enterprise software under the company's Samsung Approved For Enterprise (SAFE) program, an initiative facilitating the use of its devices for "bring your own device" scenarios in workplace environments. The enterprise S III version would support AES-256 bit encryption, VPN and Mobile Device Management functionality, and Microsoft Exchange ActiveSync.
It was scheduled to be released in the United States in July 2012. The enterprise version was expected to penetrate the business market dominated by Research in Motion's BlackBerry, following the release of similar enterprise versions of the Galaxy Note, Galaxy S II and the Galaxy Tab line of tablet computers. Developer edition A separate "Developer Edition" of the S III was made available from Samsung's Developer Portal. It came with an unlockable bootloader to allow the user to modify the phone's software. Model variants Issues On 19 September 2012, security researchers demonstrated during Pwn2Own, a computer hacking contest held in Amsterdam, Netherlands, that the S III could be hacked via NFC, allowing attackers to download all data from the phone. In December 2012, two hardware issues were reported by users of the S III: a vulnerability of the Exynos SoC that allowed malicious apps to gain root privileges even on unrooted devices, and a spontaneous bricking of the unit, called the "sudden death vulnerability", occurring about six months after activation. Samsung has been replacing the mainboards of affected units under warranty. In January 2013, Samsung released a firmware update that corrected both issues. Affecting both the Galaxy S II and S III, some units exhibited high memory use without apparent cause, leaving them unable to store any more data and reporting the memory as 'full' even when not all of the available internal memory was in use. In October 2012, Samsung noted that this was caused by a caching process running in the background of the unit's operational tasks, which copied and saved media, task and app information to a background archive that was not accessible to the user without modification and rewriting of the phone's operational scripts. Once altered, the cache could be accessed and deleted, and no further caching would occur unless requested. This issue was resolved for the Galaxy S III (and later) models.
Two S III explosions were reported. The first involved a man from Ireland, while the more recent incident occurred when a Swiss teenager was left with second- and third-degree burns to her thigh caused by her phone's explosion. In October 2013, Samsung acknowledged swelling and overheating issues with the Li-ion batteries in many S III phones, and offered replacement batteries for affected devices. Reception Commercial reception According to an anonymous Samsung official speaking to the Korea Economic Daily, the S III received more than 9 million pre-orders from 100 carriers during the two weeks following its London unveiling, making it the fastest-selling gadget in history. Within a month of the London unveiling, auction and shopping website eBay noted a 119-percent increase in second-hand Android phone sales. According to an eBay spokesperson, this was "the first time anything other than an Apple product has sparked such a selling frenzy." The S III was released in 28 countries in Europe and the Middle East on 29 May 2012. To showcase its flagship device, Samsung afterwards embarked on a month-long global tour of the S III to nine cities, including Sydney, New Delhi, and cities in China, Japan, South Korea and the United States. The S III helped Samsung consolidate its market share in several countries including India, where Samsung expected to capture 60 percent of the country's smartphone market, improving on its previous 46 percent. Within a month of release, Samsung had a 60-percent market share in France, while the company controlled over 50 percent of the German and Italian smartphone markets. Over a similar period the S III helped increase Samsung's market share in the United Kingdom to over 40 percent, while eroding the iPhone 4S's share in the country from 25 percent to 20 percent.
The S III was scheduled to be released in North America on 20 June 2012, but because of high demand, some US and Canadian carriers delayed the release by several days, while some other carriers limited availability at launch. The S III's US launch event took place in New York City, hosted by Twilight actress Ashley Greene and attended by dubstep artist Skrillex, who performed at Skylight Studios. Samsung estimated that by the end of July 2012, the S III would have been released by 296 carriers in 145 countries, and that more than 10 million handsets would have been sold. Shin Jong-kyun, president of Samsung's mobile communications sector, announced on 22 July that sales had exceeded 10 million. According to an assessment by Swiss financial services company UBS, Samsung had shipped 5–6 million units of the phone in the second quarter of 2012 and would ship 10–12 million handsets per quarter throughout the rest of the year. An even more aggressive prediction by Paris-based banking group BNP Paribas said 15 million units would be shipped in the third quarter of 2012, while Japanese financial consultancy Nomura placed the figure for the quarter as high as 18 million. Sales of the S III were estimated to top 40 million by the end of the year. To meet demand, Samsung had hired 75,000 workers, and its South Korean factory was running at its peak capacity of 5 million smartphone units per month. A manufacturing flaw resulted in a large portion of the new smartphones having irregularities with the "hyper-glazing" process. The mistake caused an undesirable finish on the blue back covers and resulted in the disposal of up to 600,000 plastic casings and a shortage of the blue model. The issue was later resolved; however, Reuters estimated that the shortage had cost Samsung two million S III sales during its first month of release.
On 6 September 2012, Samsung revealed that sales of the S III had reached 20 million in 100 days, making it three and six times faster-selling than the Galaxy S II and the Galaxy S, respectively. Europe accounted for more than 25 percent of this figure with 6 million units, followed by Asia (4.5 million) and the US (4 million); sales in South Korea, the S III's home market, numbered 2.5 million. Around the same time of Samsung's announcement, sales of the S III surpassed that of the iPhone 4S in the US. In the third quarter of 2012, more than 18 million S III units were shipped, making it the most popular smartphone at the time, ahead of the iPhone 4S's 16.2 million units. Analysts deduced that the slump in iPhone sales was due to customers' anticipation of the iPhone 5. By May 2014, the S III had sold approximately 60 million units since its 2012 release. In April 2015, the total sales number was reported as 70 million. On 11 October 2012 Samsung unveiled the Galaxy S III Mini, a smartphone with lower specifications compared to the S III. Critical reception The reception of the S III has been particularly positive. Critics noted the phone's blend of features, such as its S Voice application, display, processing speed, and dimensions as having an edge over its competition, the Apple iPhone 4S and HTC One X. Vlad Savov of The Verge declared it a "technological triumph", while Natasha Lomas of CNET UK lauded the phone's "impossibly slim and light casing and a quad-core engine", calling it the "Ferrari of Android phones", a sentiment affirmed ("a prince among Android phones") by Dave Oliver of Wired UK and ("king of Android") Esat Dedezade of Stuff magazine. Gareth Beavis of TechRadar described the S III as "all about faster, smarter and being more minimal than ever before while keeping the spec list at the bleeding edge of technology." 
Matt Warman of The Daily Telegraph said, "On spending just a short time with the S3, I'm confident in saying that it's a worthy successor to the globally popular S2". Upon release, a number of critics and publications made references to the S III, Samsung's 2012 flagship phone, as an "iPhone killer", responding perhaps to Apple's favourable customer perception. The label owes itself to the S III's use of the Android OS, the chief rival of Apple's iOS, as well as its design and features that rival the iPhone 4S such as Smart Stay, a large display, a quad-core processor, Android customizability, and a multitude of connectivity options. The S III was the first Android phone to have a higher launch price than the iPhone 4S had when the Apple product was released in 2011. With the S III, Tim Weber, business editor of the BBC, observed, "With the new Galaxy S3 they [Samsung] have clearly managed to move to the front of the smartphone field, ahead of mighty Apple itself." Conversely, reviewers criticized the design and feel of the phone, calling its polycarbonate shell "cheap" and noting its "slippery feel". The S Voice was described as "not optimised" and "more rigid than Siri" because of its poor voice-recognition accuracy, with instances when it would not respond at all. Another usage problem was a microphone malfunction that resulted in difficulty communicating during a call. Reviewers have noted the somewhat abrupt auto-adjustment of display brightness, which tends to under-illuminate the screen; however, the phone has twice the battery life of the HTC handset, achieved partly through the dim display. Others say the numerous pre-installed apps make the S III feel "bloated". In late September 2012, TechRadar ranked it as the No. 1 handset in its constantly updated list of the 20 best mobile phones; Stuff magazine also ranked it at No. 1 in its list of 10 best smartphones in May 2012. 
The S III won an award from the European Imaging and Sound Association under the category of "European Mobile Phone" of 2012–2013. In 2012, the S III won T3's "Phone of the Year" award, beating the iPhone 4S, the Nokia Lumia 900, the Sony Xperia S and others, and was voted Phone of the Year by readers of tech website S21. In February 2013, the S III won the "Best Smartphone" award from the GSMA at Mobile World Congress. Litigation On 5 June 2012, Apple filed for preliminary injunctions in the United States District Court for the Northern District of California against Samsung Electronics, claiming the S III had violated at least two of the company's patents. Apple requested that the court include the phone in its existing legal battle against Samsung, and ban sales of the S III prior to its scheduled 21 June 2012 US launch. Apple claimed the alleged infringements would "cause immediate and irreparable harm" to its commercial interest. Samsung responded by declaring it would "vigorously oppose the request and demonstrate to the court that the Galaxy S3 is innovative and distinctive", and reassured the public that the 21 June release would proceed as planned. On 11 June, Judge Lucy Koh said that Apple's claim would overload her work schedule, as she would also be overseeing the trial of Samsung's other devices; consequently, Apple dropped its request to block the 21 June release of the S III. In mid-July 2012, Samsung removed the universal search feature on Sprint and AT&T S III phones with over-the-air (OTA) software updates to disable the local search function as a "precautionary measure" prior to its patent court trial with Apple, which began on 30 July 2012. Although Apple won the trial, the S III experienced a sales spike because of the public's belief that the phone would be banned. On 31 August 2012, Apple asked the same federal court to add the S III into its existing complaint, believing the device had violated its patents. 
Samsung countered with the statement: "Apple continues to resort to litigation over market competition in an effort to limit consumer choice." See also Comparison of Samsung Galaxy S smartphones Comparison of smartphones Samsung Galaxy S series Notes References External links
33367993
https://en.wikipedia.org/wiki/Google%20Drive
Google Drive
Google Drive is a file storage and synchronization service developed by Google. Launched on April 24, 2012, Google Drive allows users to store files in the cloud (on Google's servers), synchronize files across devices, and share files. In addition to a web interface, Google Drive offers apps with offline capabilities for Windows and macOS computers, and Android and iOS smartphones and tablets. Google Drive encompasses Google Docs, Google Sheets, and Google Slides, which are a part of the Google Docs Editors office suite that permits collaborative editing of documents, spreadsheets, presentations, drawings, forms, and more. Files created and edited through the Google Docs suite are saved in Google Drive. Google Drive offers users 15 GB of free storage through Google One. Google One also offers 100 GB, 200 GB, and 2 TB tiers through optional paid plans. Files uploaded can be up to 750 GB in size. Users can change privacy settings for individual files and folders, including enabling sharing with other users or making content public. On the website, users can search for an image by describing its visuals, and use natural language to find specific files, such as "find my budget spreadsheet from last December". The website and Android app offer a Backups section to see what Android devices have data backed up to the service, and a completely overhauled computer app released in July 2017 allows for backing up specific folders on the user's computer. A Quick Access feature can intelligently predict the files users need. Google Drive is a key component of Google Workspace, Google's monthly subscription offering for businesses and organizations that operated as G Suite until October 2020. As part of select Google Workspace plans, Drive offers unlimited storage, advanced file audit reporting, enhanced administration controls, and greater collaboration tools for teams. 
Following the launch of the service, Google Drive's privacy policy was heavily criticized by some members of the media. Google has one set of Terms of Service and Privacy Policy agreements that cover all of its services, meaning that the language in the agreements grants the company broad rights to reproduce, use, and create derivative works from content stored on Google Drive. While the policies also confirm that users retain intellectual property rights, privacy advocates raised concerns that the licenses grant Google the right to use the information and data to customize the advertising and other services Google provides. In contrast, other members of the media noted that the agreements were no worse than those of competing cloud storage services, but that the competition uses "more artful language" in the agreements, and also stated that Google needs the rights in order to "move files around on its servers, cache your data, or make image thumbnails". As of July 2018, Google Drive had over one billion active users, and as of September 2015, it had over one million organizational paying users. , there were over two trillion files stored on the service. Platforms Google Drive was introduced on April 24, 2012 with apps available for Windows, macOS, and Android, as well as a website interface. The iOS app was released later in June 2012. Computer apps Google Drive is available for PCs running Windows 7 or later, and Macs running OS X Lion or later. Google indicated in April 2012 that work on Linux software was underway, but there was no news on this as of November 2013. In April 2012, Google's then-Senior Vice President Sundar Pichai said that Google Drive would be tightly integrated with Chrome OS version 20. In October 2016, Google announced that, going forward, it would drop support for versions of the computer software older than 1 year. 
In June 2017, Google announced that a new app called Backup and Sync would replace the existing separate Google Drive and Google Photos desktop apps, creating one unified app on desktop platforms. Originally intended for release on June 28, its release was delayed until July 12. In September 2017, Google announced that it would discontinue the Google Drive desktop app in March 2018 and end support in December 2017. In July 2021, Google released a new app for Windows and Mac which is meant to replace "Backup and Sync" and "Drive File Stream". Backup and Sync In July 2017, Google announced their new downloadable software, Backup and Sync. It was made mainly to replace the Google Drive desktop app, which was discontinued. Its main function is to let users set certain folders to continuously sync onto their Google Account's Drive. The synced folders and files count against the shared quota allocated between Gmail, Google Photos, and Google Drive. In early 2021, Google announced that it would be combining its Drive File Stream and Backup and Sync products into one product, Google Drive for Desktop, which would support features previously exclusive to each respective client. Mobile apps Google Drive is available for Android smartphones and tablets running Android 4.1 "Jelly Bean" or later, and iPhones and iPads running iOS 8 or later. In August 2016, Google Drive ended support for Android devices running Android 4.0 "Ice Cream Sandwich" or older versions, citing Google's mobile app update policy, which states: "For Android devices, we provide updates for the current and 2 previous Android versions." According to the policy, the app will continue to work for devices running older Android versions, but any app updates are provided on a best-efforts basis. The policy also states a notice will be given for any planned end of service. 
On May 4, 2020, Google rolled out a new feature update in its Google Drive app version 4.2020.18204 for iOS and iPadOS, known as Privacy Screen, which requires Face ID or Touch ID authentication whenever the app is open. Website interface Google Drive has a website that allows users to see their files from any Internet-connected computer, without the need to download an app. The website received a visual overhaul in 2014 that gave it a completely new look and improved performance. It also simplified some of the most common tasks, such as clicking only once on a file to see recent activity or share the file and added drag-and-drop functionality, where users can simply drag selected files to folders, for improved organization. A new update in August 2016 changed several visual elements of the website; the logo was updated, the search box design was refreshed, and the primary color was changed from red to blue. It also improved the functionality to download files locally from the website; users can now compress and download large Drive items into multiple 2 GB .zip files with an improved naming structure, better Google Forms handling, and empty folders are now included in the .zip, thereby preserving the user's folder hierarchy. Storage Individual user account storage Google gives every user 15 GB (1 GB = 1 billion bytes) of free storage through Google One. Google Docs, Google Sheets and Google Slides files do not count towards the storage limit. This cloud storage is also shared with Gmail and Google Photos. Photos at maximum 16 megapixels and videos at maximum 1080p resolutions can be stored using the "High quality" setting in Google Photos. Using the "High quality" or "Original quality" setting uses Google Drive quota. Users can purchase additional space through either a monthly or yearly payment. The option of yearly payments was introduced in December 2016, and is limited to the 100 GB, 200 GB or 2 TB (1 TB = 1000 billion bytes) storage plans. 
Furthermore, the yearly payments offer a discount. In May 2018, Google announced that storage plans (including the free 15 gigabyte plan) would be moved over to Google One. , these are the storage plans offered by Google: Chromebook promotions Chromebook users can obtain 100 GB of Google Drive storage free for 2 years as long as the promotion is activated within 180 days of the Chromebook device's initial purchase. This is available in all countries where Google Drive is available. The offer can only be redeemed once per device. Used, open-box, and refurbished devices are not eligible for the offer. Google Workspace storage Google offers 30 GB of Drive storage for all Google Workspace Starter customers, and unlimited storage for those using Google Workspace for Business. Since July 2022, the G Suite Workspace for Education – valid for educational institutions and universities in particular – provides 100 TB of storage. Universities with more than 20,000 Workspace users (students, staff and related entities) are offered an optionally increased storage limit. Storage scheme revisions Before the introduction of Google Drive, Google Docs initially provided 15 GB of storage free of charge. On April 24, 2012, Google Drive was introduced with free storage of 5 GB. Storage plans were revised, with 25 GB costing $2.49/month, 100 GB costing $4.99/month and 1 TB costing $49.99/month. Originally, Gmail, Google Docs, and Picasa had separate allowances for free storage and a shared allowance for purchased storage. Between April 2012 and May 2013, Google Drive and Google+ Photos had a shared allowance for both free and purchased storage, whereas Gmail had a separate 10 GB storage limit, which increased to 25 GB on the purchase of any storage plan. In September 2012, Google announced that a paid plan would now cover total storage, rather than the paid allocation being added to the free; e.g. a 100 GB plan allowed a total of 100 GB rather than 115 GB as previously. 
In May 2013, Google announced the overall merge of storage across Gmail, Google Drive and Google+ Photos, giving users 15 GB of unified free storage between the services. In March 2014, the storage plans were revised again and prices were reduced by 80% to $1.99/month for 100 GB, $9.99/month for 1 TB, and $99.99/month for 10 TB. This was much cheaper than what competitors Dropbox and OneDrive offered at the time. In 2018, the paid plans were re-branded as "Google One" to emphasize their application beyond Google Drive, along with the addition of a $2.99/month plan for 200 GB, and increasing the $9.99 plan to 2 TB at no additional charge. In most cases during these changes, users could continue with their existing plans as long as they kept their accounts active and did not make any adjustments to the plan. However, if the account lapsed for any reason, users had to choose from current plans. On November 11, 2020, Google announced that it would begin charging against Google Photos' storage once users exceed the 15 GB limit on their account. The change was announced to come into effect from June 1, 2021; photos and documents uploaded to Google's online storage before June 1 would not be counted against the 15 GB cap. In September 2021, Google added a 5 TB storage plan priced at $24.99/month. Features Sharing Google Drive incorporates a system of file sharing in which the creator of a file or folder is, by default, its owner. The owner can regulate the public visibility of the file or folder. Ownership is transferable. Files or folders can be shared privately with particular users having a Google account, using the email address (usually, but not necessarily, ending in @gmail.com) associated with that account. Sharing files with users not having a Google account requires making them accessible to "anybody with the link". This generates a secret URL for the file, which may be shared via email or private messages. 
Files and folders can also be made "public on the web", which means that they can be indexed by search engines and thus can be found and accessed by anyone. The owner may also set an access level for regulating permissions. The three access levels offered are "can edit", "can comment" and "can view". Users with editing access can invite others to edit. On September 13, 2021, the URL to a portion of existing files was changed, ostensibly for security reasons. Third-party apps A number of external web applications that work with Google Drive are available from the Chrome Web Store. To add an app, users are required to sign in to the Chrome Web Store, but the apps are compatible with all supported web browsers. Some of these apps are first-party, such as Google Docs, Sheets, and Slides. Drive apps operate on online files and can be used to view, edit, and create files in various formats, edit images and videos, fax and sign documents, manage projects, create flowcharts, etc. Drive apps can also be made the default for handling file formats supported by them. Some of these apps also work offline on Google Chrome and Chrome OS. All of the third-party apps are free to install. However, some have fees associated with continued usage or access to additional features. Saving data from a third-party app to Google Drive requires authorization the first time. The Google Drive software development kit (SDK) works together with the Google Drive user interface and the Chrome Web Store to create an ecosystem of apps that can be installed into Google Drive. In February 2013, the "Create" menu in Google Drive was revamped to include third-party apps, thus effectively granting them the same status as Google's own apps. In March 2013, Google released an API for Google Drive that enables third-party developers to build collaborative apps that support real-time editing. 
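The Drive API mentioned above is driven by structured requests rather than natural language: a file search, for example, is expressed as a boolean query string passed to the API's file-listing endpoint in its `q` parameter. The sketch below is an illustration only; the helper function and its parameter names are hypothetical, and the query operators (`=`, `contains`, `>`, `and`) assume the documented Drive API v3 search syntax.

```python
# Hypothetical helper (not from the article): assembles a Google Drive
# API v3 search expression of the kind passed to files().list(q=...).
def build_drive_query(mime_type=None, name_contains=None, modified_after=None):
    """Combine optional filters into a single Drive 'q' expression."""
    clauses = []
    if mime_type:
        clauses.append("mimeType = '{}'".format(mime_type))
    if name_contains:
        clauses.append("name contains '{}'".format(name_contains))
    if modified_after:
        # Drive expects an RFC 3339 timestamp, e.g. '2016-12-01T00:00:00'
        clauses.append("modifiedTime > '{}'".format(modified_after))
    return " and ".join(clauses)

# A request like "find my budget spreadsheet from last December"
# might translate to:
q = build_drive_query(
    mime_type="application/vnd.google-apps.spreadsheet",
    name_contains="budget",
    modified_after="2016-12-01T00:00:00",
)
print(q)
```

With an authorized API client, the resulting string would then be supplied to the file-listing call; the natural-language search described elsewhere in the article can be thought of as a friendlier front end over structured queries of this kind.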
File viewing The Google Drive viewer on the web allows the following file formats to be viewed:
Native formats (Docs, Sheets, Slides, Forms, Drawings)
Image files (.JPEG, .PNG, .GIF, .TIFF, .BMP, .WEBP)
Video files (.WEBM, .MPEG4, .3GPP, .MOV, .AVI, .MPEG, .MPEGPS, .WMV, .FLV, .OGG)
Audio formats (.MP3, .M4A, .WAV, .OGG)
Text files (.TXT)
Executable program files (.EXE)
Markup/Code (.CSS, .HTML, .PHP, .C, .CPP, .H, .HPP, .JS)
Microsoft Word (.DOC and .DOCX)
Microsoft Excel (.XLS and .XLSX)
Microsoft PowerPoint (.PPT and .PPTX)
Adobe Portable Document Format (.PDF)
Apple Pages (.PAGES)
Adobe Illustrator (.AI)
Adobe Photoshop (.PSD)
Autodesk AutoCad (.DXF)
Scalable Vector Graphics (.SVG)
PostScript (.EPS, .PS)
Python (.PY)
Fonts (.TTF)
XML Paper Specification (.XPS)
Archive file types (.ZIP, .RAR, tar, gzip)
.MTS files
Raw Image formats
Files in other formats can also be handled through third-party apps that work with Google Drive, available from the Chrome Web Store. File limits Files that are uploaded, but not converted to Google Docs, Sheets, or Slides formats, may be up to 5 TB in size. There are also limits, specific to file type, listed below:
Documents (Google Docs): Up to 1.02 million characters, regardless of the number of pages or font size. Document files converted to .gdoc Docs format cannot be larger than 50 MB (1 MB = 1 million bytes). Images inserted cannot be larger than 50 MB, and must be in either .jpg, .png, or non-animated .gif formats.
Spreadsheets (Google Sheets): Up to 2 million cells.
Presentations (Google Slides): Presentation files converted to .gslides Slides format cannot be larger than 100 MB. Images inserted cannot be larger than 50 MB, and must be in either .jpg, .png, or non-animated .gif formats.
Quick Access Introduced in the Android app in September 2016, Quick Access uses machine learning to "intelligently predict the files you need before you've even typed anything". 
The feature was announced to be expanded to iOS and the web in March 2017, though the website interface received the feature in May. Search Search results can be narrowed by file type, ownership, visibility, and the open-with app. Users can search for images by describing or naming what is in them. For example, a search for "mountain" returns all the photos of mountains, as well as any text documents about mountains. Text in images and PDFs can be extracted using optical character recognition. In September 2016, Google added "natural language processing" for searching on the Google Drive website, enabling specific user search queries like "find my budget spreadsheet from last December". In February 2017, Google integrated Drive and the Google Search app on Android, letting users search for keywords, switch to an "In Apps" tab, and see any relevant Drive files. Backups In December 2016, Google updated the Android app and website with a "Backups" section, listing the Android device and app backups saved to Drive. The section lets users see what backups are stored, the backups' sizes and details, and delete backups. In June 2017, Google announced that a new app, "Backup and Sync", would be able to synchronize any folder on the user's computer to Google. The app was released on July 12, 2017. Metadata A Description field is available for both files and folders that users can use to add relevant metadata. Content within the Description field is also indexed by Google Drive and searchable. Accessibility to the visually impaired In June 2014, Google announced a number of updates to Google Drive, which included making the service more accessible to visually impaired users. This included improved keyboard accessibility, support for zooming and high contrast mode, and better compatibility with screen readers. 
Save to Google Drive browser extension Google offers an extension for Google Chrome, Save to Google Drive, that allows users to save web content to Google Drive through a browser action or through the context menu. While documents and images can be saved directly, webpages can be saved in the form of a screenshot (as an image of the visible part of the page or the entire page), or as a raw HTML, MHTML, or Google Docs file. Users need to be signed into Chrome to use the extension. Mobile apps The main Google Drive mobile app supported editing of documents and spreadsheets until April 2014, when the capability was moved to separate, standalone apps for Google Docs, Google Sheets, and Google Slides. The Google Drive app on Android allows users to take a photo of a document, sign, or other text and use optical character recognition to convert to text that can be edited. In October 2014, the Android app was updated with a Material Design user interface, improved search, the ability to add a custom message while sharing a file, and a new PDF viewer. Encryption Before 2013, Google did not encrypt data stored on its servers. Following information that the United States' National Security Agency had "direct access" to servers owned by multiple technology companies, including Google, the company began testing encrypting data in July and enabled encryption for data in transit between its data centers in November. However, as of 2015, Google Drive does not provide client-side encryption. Professional editions Google Drive Enterprise Google Drive Enterprise (formerly Google Drive for Work) is a business version, as part of Google Workspace (formerly Google Apps for Work or G Suite), announced at the Google I/O conference on June 25, 2014, and made available immediately. The service features unlimited storage, advanced file audit reporting, and eDiscovery services, along with enhanced administration control and new APIs specifically useful to businesses. 
Users can upload files as large as 5 TB. A press release posted on Google's Official Enterprise Blog assured businesses that Google would encrypt data stored on its servers, as well as information being transmitted to or from them. Google delivers 24/7 phone support to business users and has guaranteed 99.9% uptime for its servers. In September 2015, Google announced that Google Drive for Work would be compliant with the new ISO/IEC 27018:2014 security and privacy standard, which confirmed that Google would not use data in Drive for Work accounts for advertising, and brought additional tools for handling and exporting data, more transparency about data storage, and protection from third-party data requests. In July 2018, Google announced a new edition, called Drive Enterprise, for businesses that don't want to buy the full Google Workspace. Drive Enterprise includes Google Docs, Sheets, and Slides, which permit collaborative editing of documents, spreadsheets, presentations, drawings, forms, and other file types. Drive Enterprise also allows users to access and collaborate on Microsoft Office files and 60+ other file types. The pricing of Drive Enterprise is based on usage, at $8 per active user per month, plus $0.04 per GB per month. Google Drive for Education Google Drive for Education was announced on September 30, 2014. It was made available for free to all Google Apps for Education users. It includes unlimited storage and support for individual files up to 5 TB in size in addition to full encryption. Shared Drives In September 2016, Google announced Team Drives, later renamed Shared Drives, as a new way for Google Workspace teams to collaborate on documents and store files. In Shared Drives, file/folder sharing and ownership are assigned to a team rather than to an individual user. Since 2020, Shared Drives have offered the ability to assign different access levels to files and folders for different users and teams, and the ability to share a folder publicly. 
Unlike individual Google Drive, Shared Drives offer unlimited storage. Drive File Stream (soon Google Drive for desktop) In March 2017, Google introduced Drive File Stream, a desktop application for G Suite (now Google Workspace) customers using Windows and macOS computers that maps Google Drive to a drive letter on the operating system, and thus allows easy access to Google Drive files and folders without using a web browser. It also features on-demand file access, in which a file is downloaded from Google Drive only when it is accessed. Additionally, Drive File Stream supports the Shared Drives functionality of Google Workspace. In early 2021, Google announced that it would be combining its Drive File Stream and Backup and Sync products into one product, Google Drive for desktop, which would support features previously exclusive to each respective client. Signing into the client using a Google Workspace-enabled account is expected to enable the same enterprise features that Google is migrating to the new product. Docs, Sheets and Slides Google Docs, Google Sheets, and Google Slides constitute a free, web-based office suite offered by Google and integrated with Google Drive. It allows users to create and edit documents, spreadsheets, and presentations online while collaborating in real-time with other users. The three apps are available as web applications, as Chrome apps that work offline, and as mobile apps for Android and iOS. The apps are also compatible with Microsoft Office file formats. The suite also consists of Google Drawings, Google Forms, Google Sites, and Google Keep. While Forms and Sites are only available as web applications, Drawings is also available as a Chrome app, while a mobile app for Keep is also available. The suite is tightly integrated with Google Drive, and all files created with the apps are by default saved to Google Drive. 
Updates Updates to Docs, Sheets, and Slides have introduced features using machine learning, including "Explore", offering search results based on the contents of a document, answers based on natural language questions in a spreadsheet, and dynamic design suggestions based on contents of a slideshow, and "Action items", allowing users to assign tasks to other users. While Google Docs has been criticized for lacking the functionality of Microsoft Office, it has received praise for its simplicity, ease of collaboration, and frequent product updates. In order to view and edit Docs, Sheets, or Slides documents offline, users need to be using the Google Chrome web browser. A Chrome extension, Google Docs Offline, allows users to enable offline support for Docs, Sheets, and Slides files on the Google Drive website. Google also offers an extension for the Google Chrome web browser called Office editing for Docs, Sheets and Slides that enables users to view and edit Microsoft Office documents on Google Chrome, via Docs, Sheets and Slides apps. The extension can be used for opening Office files stored on the computer using Chrome, as well as for opening Office files encountered on the web (in the form of email attachments, web search results, etc.) without having to download them. The extension is installed on Chrome OS by default. Reception Features In a review of Google Drive after its launch in April 2012, Dan Grabham of TechRadar wrote that the integration of Google Docs into Google Drive was "a bit confusing", mainly due to the differences in the user interfaces between the two, where Drive offers a "My Drive" section with a specific "Shared with me" view for shared documents. He stated that "We think the user interface needs a lot more work. It's like a retread of Google Docs at the moment and Google surely needs to do work here". He considered uploading files "fairly easy", but noted that folder upload was only supported through the Google Chrome web browser. 
The lack of native editing of Microsoft Office documents was "annoying". Regarding Google Drive's computer apps, he stated that the option in Settings to synchronize only specific folders was "powerful". He wrote that Drive was "a great addition to Google's armory of apps and everything does work seamlessly", while again criticizing the interface for being "confusing" and that the file view was "not quite intuitive enough" without file icons. Grabham also reviewed the mobile Android app, writing that "it's a pretty simple app that enables you to access your files on the move and save some for offline access should you wish", and praised Google Docs creation and photo uploading for being "easy". He also praised that "everything is easily searchable". A review by Michael Muchmore of PC Magazine in February 2016 praised the service as "truly impressive" in creating and editing files, describing its features as "leading" in office-suite collaboration. He added that "Compatibility is rarely an issue", with importing and exporting options, and that the free storage of 15 gigabytes was "generous". However, he also criticized the user interface for being confusing to navigate, and wrote that "Offline editing isn't simple". The Android version of Google Drive has been criticized for requiring users to individually toggle each file for use offline instead of allowing entire folders to be stored offline. Ownership and licensing Immediately after its announcement in April 2012, Google faced criticism over Google Drive privacy. In particular, privacy advocates have noted that Google has one unified set of Terms of Service and Privacy Policy agreements for all its products and services. In a CNET report, Zack Whittaker noted that "the terms and service have come under heavy fire by the wider community for how it handles users' copyright and intellectual property rights". 
In a comparison of Terms of Service agreements between Google Drive and competing cloud storage services Dropbox and OneDrive, he cited a paragraph stating that Google has broad rights to reproduce, use, and create derivative works from content stored on Google Drive, via a license from its users. Although the user retains intellectual property rights, the user licenses Google to extract and parse uploaded content to customize the advertising and other services that Google provides to the user and to promote the service. Summarized, he wrote that "According to its terms, Google does not own user-uploaded files to Google Drive, but the company can do whatever it likes with them". In a highly critical editorial of the service, Ed Bott of ZDNet wrote that language in the agreements contained the "exact same words" as Dropbox used in a July 2011 Privacy Policy update that sparked criticism and forced Dropbox to update its policy once again with clarifying language, adding that "It's a perfect example of Google's inability to pay even the slightest bit of attention to anything that happens outside the Googleplex". Matt Peckham of Time criticized the lack of unique service agreements for Drive, writing that "If any Google service warrants privacy firewalling, it's Google Drive. This isn't YouTube or Calendar or even Gmail—the potential for someone's most sensitive data to be snooped, whether to glean info for marketing or otherwise, is too high. [...] Google ought to create a privacy exception that 'narrows the scope' of its service terms for Google Drive, one that minimally states the company will never circulate the information generated from searching your G-Drive data in any way."
In contrast, a report by Nilay Patel of The Verge stated that "all web services should be subject to the harsh scrutiny of their privacy policies—but a close and careful reading reveals that Google's terms are pretty much the same as anyone else's, and slightly better in some cases", pointing to the fact that Google "couldn't move files around on its servers, cache your data, or make image thumbnails" without proper rights. In comparing the policies with competing services, Patel wrote that "it's clear that they need the exact same permissions—they just use slightly more artful language to communicate them". Growth On November 12, 2013, Google announced that Google Drive had 120 million active users, a figure that the company was releasing for the first time. On June 25, 2014 at the Google I/O developer conference, Sundar Pichai announced that Google Drive now had 190 million monthly active users, and that it was being used by 58% of the Fortune 500 companies as well as by 72 of the top universities. On October 1, 2014, at its Atmosphere Live event, it was announced that Google Drive had 240 million monthly active users. The Next Web noted that this meant an increase of 50 million users in just one quarter. On September 21, 2015, it was announced that Google Drive had over one million organizational paying users. In March 2017, Google announced that Google Drive had 800 million active users. In May 2017, a Google executive stated at a company event that there were over two trillion files stored on Google Drive. Downtime issues Although Google has a 99.9% uptime guarantee for Google Drive for Google Workspace customers, Google Drive has suffered downtimes for both consumers and business users. During significant downtimes, Google's App Status Dashboard gets updated with the current status of each service Google offers, along with details on restoration progress. 
Notable downtimes occurred in March 2013, October 2014, January 2016, September 2017, January 2020, and December 2020. When the January 2016 outage was resolved, a Google spokesperson told The Next Web: At Google, we recognize that failures are statistically inevitable, and we strive to insulate our users from the effects of failures. As that did not happen in this instance, we apologize to everyone who was inconvenienced by this event. Our engineers are conducting a post-mortem investigation to determine how to make our services more resilient to unplanned network failures, and we will do our utmost to continue to make Google service outages notable for their rarity. In an outage that affected all of Google's services for five minutes in August 2013, CNET reported that global Internet traffic dropped 40%. Spam issues Google Drive allows users to share drive contents with other Google users without requiring any authorization from the recipient of a sharing invitation. This has resulted in users receiving spam from unsolicited shared drives. Google is reported to be working on a fix. See also Comparison of file hosting services Comparison of file synchronization software Comparison of online backup services References External links Official blog
iMessage
iMessage is an instant messaging service developed by Apple Inc. and launched in 2011. iMessage functions exclusively on Apple platforms: macOS, iOS, iPadOS, and watchOS. Core features of iMessage, available on all supported platforms, include sending text messages, images, videos, and documents; getting delivery and read statuses (read receipts); and end-to-end encryption so only the sender and recipient (no one else, including Apple itself) can read the messages. The service also allows sending location data and stickers. On iOS and iPadOS, third-party developers can extend iMessage capabilities with custom extensions, an example being quick sharing of recently played songs. Launched on iOS in 2011, iMessage arrived on macOS (then called OS X) in 2012. In 2020, Apple announced an entirely redesigned version of the macOS Messages app which added some features previously unavailable on the Mac, including location sharing and message effects. History iMessage was announced by Scott Forstall at the WWDC 2011 keynote on June 6, 2011. A version of the Messages app for iOS with support for iMessage was included in the iOS 5 update on October 12, 2011. On February 16, 2012, Apple announced that a new Messages app replacing iChat would be part of OS X Mountain Lion. Mountain Lion was released on July 25, 2012. On October 23, 2012, Apple CEO Tim Cook announced that Apple device users had sent 300 billion messages using iMessage and that Apple delivered an average of 28,000 messages per second. In February 2016, Eddy Cue announced that the number of iMessages sent per second had grown to 200,000. In May 2014, a lawsuit was filed against Apple over an issue in which, if a user switched from an Apple device to a non-Apple device, messages delivered to them through iMessage would not reach their destination. In November 2014 Apple addressed this problem by providing instructions and an online tool to deregister iMessage.
A federal court dismissed the suit in Apple's favor. On March 21, 2016, a group of researchers from Johns Hopkins University published a report in which they demonstrated that an attacker in possession of iMessage ciphertexts could potentially decrypt photos and videos that had been sent via the service. The researchers published their findings after the vulnerability had been patched by Apple. On June 13, 2016, Apple announced the addition of apps to the iMessage service, accessible via the Messages app. Apps can create and share content, add stickers, make payments, and more within iMessage conversations without having to switch to standalone apps. Developers can create standalone iMessage apps or extensions to existing iOS apps. Publishers can also create standalone sticker apps without writing any code. According to Sensor Tower, as of March 2017 the iMessage App Store featured nearly 5,000 Message-enabled apps. At the WWDC 2020 keynote on June 22, 2020, Apple previewed the next version of its macOS operating system, planned for release in late 2020. Big Sur ships with a redesigned version of Messages with features previously available only on iOS devices, such as message effects, memojis, stickers and location sharing.
Additionally, iPhone owners can register their phone numbers with Apple, provided their carrier is supported. When a message is sent to a mobile number, Messages will check with Apple if the mobile number is set up for iMessage. If it is not, the message will seamlessly transition from iMessage to SMS. In Messages, the user's sent communication is aligned to the right, with replies from other people on the left. A user can see if the other iMessage user is typing a message. A pale gray ellipsis appears in the text bubble of the other user when a reply is started. It is also possible to start a conversation on one iOS device and continue it on another. On iPhones, green buttons and text bubbles indicate SMS-based communication; on all iOS devices, blue buttons and text bubbles indicate iMessage communication. All iMessages are encrypted and can be tracked using delivery receipts. If the recipient enables Read Receipts, the sender will be able to see when the recipient has read the message. iMessage also allows users to set up chats with more than two people—a "group chat". With the launch of iOS 10, users can send messages accompanied by a range of "bubble" or "screen" effects. Holding down the send button with force surfaces the range of effects, from which users can select one to send to the receiver. With the launches of iOS 14 and macOS 11 Big Sur, users gained features such as the ability to pin individual conversations, mention other users, set an image for group conversations, and send inline replies. Additionally, more of the features from the Messages app on iOS and iPadOS were ported over to its macOS counterpart.
The connection is encrypted with TLS using a client-side certificate that the device requests when iMessage is activated. Platforms iMessage is only available on Apple operating systems: iOS, iPadOS, macOS, and watchOS. Unlike some other messaging apps, it is not compatible with Android or Microsoft Windows, and does not have any web access/interface. This means iMessage must be accessed using the app on a device running an Apple operating system. Unofficial platforms iMessage is only officially supported on Apple devices, but many apps exist that forward iMessages to devices that don't run Apple's operating system. The iMessage forwarding apps achieve this by creating an iMessage server on an iOS or macOS device that forwards the messages to a client on any other device, including Android, Windows, and Linux machines. The apps that use an iOS device as a server require the device to be jailbroken. On November 23, 2012, Beast Soft released the first version of its Remote Messages jailbreak tweak for iOS 5. Remote Messages created an iMessage and SMS server on the iOS device that could be accessed by any other internet-enabled device through a web app. Remote Messages had the ability to send any attachments from the client device, as well as sending photos from the iOS server device through the web app. Beast Soft would continue to update Remote Messages through October 2015, supporting all iOS versions from iOS 5 through iOS 9.
AirMessage was similar to Remote Messages in that the client was accessed through a web app; however, it was more limited in features and did not support sending attachments like Remote Messages previously had. AirMessage also did not add support for any of the new iMessage features of iOS 10, such as tapback reactions or screen effects. AirMessage was updated through June 2020, ending with support for iOS 10 through iOS 13. On December 10, 2017, 16-year-old developer Roman Scott released weMessage, the first publicly available Android app that forwarded iMessages from a macOS server device to an Android client. Scott released two substantial updates to weMessage, the first of which added iMessage screen effects and bug fixes and the second of which added SMS and MMS support, as well as fixes for contact syncing and server management. On November 11, 2018, citing his inability to spend more time on the project, Scott open-sourced weMessage. On February 22, 2019, independent developer Cole Feuer released the AirMessage app for Android. Feuer's AirMessage coincidentally shares a name with SparkDev's iOS tweak, but AirMessage for Android is not in any way related to the AirMessage jailbreak tweak. AirMessage for Android includes code for a server running on OS X Yosemite and higher, and an Android client that runs on Android 6 and higher that can send and receive iMessages. Like weMessage, AirMessage has support for displaying, but not sending, screen effects, and AirMessage also has the ability to display tapback messages and send tapback notifications. In January 2020, Feuer released an update that added SMS and MMS capabilities, as well as web link previews, a photo gallery viewer, and the ability to send a location message. On August 15, 2020, Ian Welker released SMServer as a free and open-source iOS jailbreak tweak for iOS 13 that uses a web app client.
Welker maintains an API on his GitHub page with extensive documentation on how to use the IMCore and ChatKit libraries. SMServer was the first app to support iOS 14 and macOS Big Sur features of iMessage, such as group chat photos and displaying pinned conversations. It was also the first app to support remote sending of tapback messages and subject line text. On August 21, 2020, Eric Rabil released a video showcasing his upcoming server and web app, MyMessage. MyMessage was the first app to showcase support for sending tapback messages and receiving digital touch and handwritten messages, which Rabil claimed to have achieved by writing code that directly communicated with the iMessage service rather than using AppleScript and reading the database. MyMessage is the only app to run its server on both macOS and iOS, but as of February 2021, only the server component of MyMessage has been released, with the web app frontend still under development for stability. From August 2020 through October 2020, a free and open-source project called BlueBubbles was publicly released. BlueBubbles was built to address some of the difficulties and limitations of AirMessage for Android, such as the fact that AirMessage was closed source, required port forwarding, and had no native apps for operating systems such as Windows or Linux. BlueBubbles requires a server running macOS High Sierra or higher, and like AirMessage, it has some limitations on macOS Big Sur. In November and December 2020, BlueBubbles added the ability to send and receive typing indicators from the Android app, as well as the ability to send read receipts and tapback messages, both on Android. On January 29, 2021, Aziz Hasanain released a free and open-source jailbreak tweak called WebMessage for iOS 12 through iOS 14.
Hasanain used Welker's documentation of the IMCore and ChatKit libraries to assist his development of WebMessage, which is the first jailbreak tweak to use a downloaded app as the client instead of a web app. Reception On November 12, 2012, Chetan Sharma, a technology and strategy consulting firm, published the US Mobile Data Market Update Q3 2012, noting the decline of text messaging in the United States, and suggested the decline may be attributed to Americans using alternative free messaging services such as iMessage. In 2017, Google announced it would compete with iMessage with its own messaging service, Android Messaging. Security and privacy On November 4, 2014, the Electronic Frontier Foundation (EFF) listed iMessage on its "Secure Messaging Scorecard", giving it a score of 5 out of 7 points. It received points for having communications encrypted in transit, having communications encrypted with keys the provider doesn't have access to (end-to-end encryption), having past communications secure if the keys are stolen (forward secrecy), having its security designs well-documented, and having a recent independent security audit. It missed points because users cannot verify contacts' identities and because the source code is not open to independent review. In September 2015, Matthew Green noted that, because iMessage does not display key fingerprints for out-of-band verification, users are unable to verify that a man-in-the-middle attack has not occurred. The post also noted that iMessage uses RSA key exchange. This means that, as opposed to what EFF's scorecard claims, iMessage does not feature forward secrecy. On August 7, 2019, researchers from Project Zero presented six "interaction-less" exploits in iMessage that could be used to take control of a user's device. These six exploits were fixed in iOS 12.4, released on July 22, 2019; however, some undisclosed exploits remained to be patched in a future update.
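Green's point about static RSA key exchange can be sketched concretely. The following toy Python example uses textbook RSA with tiny illustrative parameters (chosen for readability only; not secure, and not Apple's actual keys or protocol) to show why a static key exchange lacks forward secrecy: an attacker who records traffic and later obtains the long-term private key can recover every past session key.

```python
# Toy textbook RSA with tiny parameters -- for illustration only, NOT secure,
# and not Apple's actual implementation.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # long-term private exponent (Python 3.8+)

# The sender encrypts a session key under the recipient's *static* public key.
session_key = 42
recorded = pow(session_key, e, n)  # a passive attacker records this ciphertext

# Years later the long-term private key leaks. Because the same key protected
# every past session key, all recorded traffic becomes readable -- exactly the
# property forward-secret schemes (e.g. ephemeral Diffie-Hellman) prevent by
# discarding their session secrets after use.
recovered = pow(recorded, d, n)
print(recovered == session_key)    # prints: True
```

Real deployments use far larger keys and padding; the point here is only that a single static decryption key spans all recorded sessions.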
The Project Pegasus revelations in July 2021 found that the spyware used iMessage exploits. See also FaceTime, Apple's videotelephony service which also uses APNs Signal (software), an end-to-end encrypted messenger with forward secrecy, available for the same platforms on which iMessage runs Threema WhatsApp Facebook Messenger WeChat Line Skype Snapchat References Further reading
Canal+ (Spanish TV provider)
Canal+ was a Spanish satellite broadcasting platform. It was known as Digital+ from its launch in 2003 until 2011, when it was renamed Canal+ after its main premium channel. Formed on 23 July 2003 as a result of the merger of equals between Vía Digital (owned by Telefónica) and Canal Satélite Digital (owned by Sociedad de Televisión Canal Plus, S.A.), it was the largest pay-TV broadcaster in Spain. The company was a subdivision of Sogecable (renamed Prisa TV in 2010), with shares held by Mediaset España and Telefónica. In October 2011, Digital+ changed its name to Canal+. History Merger with Vía Digital Before the creation of Digital+, there were two pay-TV companies in Spain: Vía Digital, owned by Telefónica, which operated through Hispasat, and Canal Satélite Digital, owned by Sogecable, which used Astra for its services. Both companies' financial losses in their early years, Telefónica's lack of investment in its division, and pressure from the government motivated the merger; the negotiation process concluded on 8 May 2002. The agreement provided for the acquisition of Vía Digital by Sogecable through a capital increase giving Vía Digital's shareholders a 23% stake; the resulting company would have 2.5 million subscribers. However, the agreement was subject to intervention by the competition authority (CNMC). After several months of consideration, on 28 August 2002 the CNMC published a report in which it recognized that the coexistence of both satellite companies was unviable, but warned of the merger's danger to free competition in markets such as film and sports broadcasting, as well as in pay-TV channel production. On 13 November the CNMC's competition court sent the definitive report to the government, containing 10 conditions, including the cession of pay-per-view rights to cable companies and limits on the renewal of rights contracts with football clubs.
Days later, on 29 November, the Council of Ministers approved the merger, increasing the conditions to 34 (24 more than the competition authority had requested). These requirements satisfied no one: the companies involved in the merger (Sogecable, Vía Digital and Telefónica), as well as businesses affected by it, such as Telecinco, Mediapark (Teuve), the cable companies grouped in ONO, and AOC through Auna, brought administrative appeals before the Supreme Court. Even so, the merger went ahead, and on 21 July 2003 the resulting pay-TV company was launched on Astra and Hispasat: Digital+. The company's creation remained controversial, as cable companies and private TV channels claimed that some of the conditions were not met, such as package prices being raised instead of maintained, or the quota of 20% of content from independent production companies not being fulfilled. Digital+'s broadcasts remained encrypted with both of the encryption systems used by Canal Satélite Digital and Vía Digital: SECA Mediaguard, used on the Astra satellite, currently in its second version, and Nagravisión, in its third version since 4 December 2007. Strategy changes On 1 February 2008 Digital+ started broadcasting in high definition with the launch of Canal+ HD. It initially offered repeats of Canal+ content in HD, but from 9 December 2008 it simulcast the same programming as Canal+, offering in HD all content available in that quality. Canal+ Deportes HD and Canal+ DCine HD were launched the same day. In September 2008, Prisa reported in its newspaper El País that it had received several offers for its pay-TV company, as part of a corporate reorganization Prisa was planning. Businesses interested in buying Digital+ included Telefónica, Vivendi, Telecinco, British Sky Broadcasting and ONO.
On 22 July 2009 the group presented its new channel, Canal+ Liga, a direct competitor to Mediapro's football channel Gol Televisión that would carry the same matches: three from Primera División, always including one featuring Real Madrid or FC Barcelona, two from Segunda División, another two from each Copa del Rey season, and exclusive encrypted UEFA Europa League matches. The channel launched on 29 August 2009 with an HD simulcast. In November 2009, Telefónica acquired 22% of Digital+ for a payment of 240 million euros, plus the repayment of 230 million euros of debt that Prisa owed Telefónica, bringing the total cost of the operation to 470 million euros. Afterwards, in December, Telecinco bought another 22% as part of the merger of that channel with Sogecable's TV channel Cuatro. In June 2010, the telecommunications operator Jazztel stopped marketing its own pay-per-view television services in order to focus its audiovisual offering on partnerships and Internet television services. In early May the company had signed an agreement with Digital+ so that the platform could be marketed together with its telecom offer. New naming On 17 October 2011 Digital+ was rebranded as Canal+. Consequently, the premium channel known as Canal+ became Canal+ 1. This change also affected Festival de Series de Digital+, which became Festival de Series de Canal+ and, later, Canal+ Series. Canal+ sale and transformation to Movistar+ In 2013, negotiations over the sale of Canal+, which had previously taken place in 2008, resumed. These were more serious because of the banks' urgency to resolve the debt Prisa owed them. In June 2014, Prisa accepted Telefónica's offer for Canal+. One month later, Mediaset España sold its share to Telefónica so that it could obtain 100% of the company.
On 8 July 2015, Telefónica launched Movistar+, a new platform resulting from the merger of Canal+ and Movistar TV, which involved changes to the packages and new channels such as Canal+ Estrenos, which broadcast the newest films, and Canal+ Series Xtra, which carried alternative and European shows. Canal+ 1 disappeared and the traditional Canal+ name returned, although the channel no longer carried premium programming such as movie premieres, American cable TV series and the exclusive Sunday football match; these were all transferred to other channels in the Canal+ family. Coverage Canal+ offered its services through the Astra satellites (at 19.2° east) and Hispasat (30° west). HbbTV Canal+ was a supporter of the Hybrid Broadcast Broadband TV (HbbTV) initiative, which promotes an open European standard for hybrid set-top boxes for the reception of broadcast TV and broadband multimedia applications with a single user interface, and ran pilot HbbTV services in Spain. References External links Official website
Xiaomi
Xiaomi Corporation, registered in Asia as Xiaomi Inc., is a Chinese designer and manufacturer of consumer electronics and related software, home appliances, and household items. Behind Samsung, it is the second largest manufacturer of smartphones, most of which run the MIUI operating system. The company is ranked 338th and is the youngest company on the Fortune Global 500. Xiaomi was founded in 2010 in Beijing by now multi-billionaire Lei Jun when he was 40 years old, along with six senior associates. Lei had founded Kingsoft as well as Joyo.com, which he sold to Amazon for $75 million in 2004. In August 2011, Xiaomi released its first smartphone and, by 2014, it had the largest market share of smartphones sold in China. Initially the company only sold its products online; however, it later opened brick and mortar stores. By 2015, it was developing a wide range of consumer electronics. In 2020, the company sold 146.3 million smartphones and its MIUI operating system had over 500 million monthly active users. In the second quarter of 2021, Xiaomi surpassed Apple Inc. to become the second-largest seller of smartphones worldwide, with a 17% market share, according to Canalys. It is also a major manufacturer of appliances including televisions, flashlights, unmanned aerial vehicles, and air purifiers using its Internet of Things and Xiaomi Smart Home product ecosystems. Xiaomi keeps its prices close to its manufacturing costs and bill of materials costs by keeping most of its products in the market for 18 months, longer than most smartphone companies. The company also uses inventory optimization and flash sales to keep its inventory low.
History 2010-2013 On 6 April 2010 Xiaomi was co-founded by Lei Jun and six others: Lin Bin, vice president of the Google China Institute of Engineering; Zhou Guangping, senior director of the Motorola Beijing R&D center; Liu De, department chair of the Department of Industrial Design at the University of Science and Technology Beijing; Li Wanqiang, general manager of Kingsoft Dictionary; Huang Jiangji, principal development manager; and Hong Feng, senior product manager for Google China. Lei had founded Kingsoft as well as Joyo.com, which he sold to Amazon for $75 million in 2004. At the time of the founding of the company, Lei was dissatisfied with the products of other mobile phone manufacturers and thought he could make a better product. On 16 August 2010, Xiaomi launched its first Android-based firmware, MIUI. In 2010, the company raised $41 million in a Series A round. In August 2011, the company launched its first phone, the Xiaomi Mi1. The device ran Xiaomi's MIUI firmware, based on Android. In December 2011, the company raised $90 million in a Series B round. In June 2012, the company raised $216 million of funding in a Series C round at a $4 billion valuation. Institutional investors participating in the first round of funding included Temasek Holdings, IDG Capital, Qiming Venture Partners and Qualcomm. In August 2013, the company hired Hugo Barra from Google, where he served as vice president of product management for the Android platform. He was employed as vice president of Xiaomi to expand the company outside of mainland China, making Xiaomi the first company selling smartphones to poach a senior staffer from Google's Android team. He left the company in February 2017. In September 2013, Xiaomi announced its Xiaomi Mi3 smartphone and an Android-based 47-inch 3D-capable Smart TV assembled by Sony TV manufacturer Wistron Corporation of Taiwan. In October 2013, it became the fifth-most-used smartphone brand in China.
In 2013, Xiaomi sold 18.7 million smartphones. 2014-2017 In February 2014, Xiaomi announced its expansion outside China, with an international headquarters in Singapore. In April 2014, Xiaomi purchased the domain name mi.com for a record price, the most expensive domain name ever bought in China, replacing xiaomi.com as the company's main domain name. In September 2014, Xiaomi acquired a 24.7% stake in Roborock. In December 2014, Xiaomi raised US$1.1 billion at a valuation of over US$45 billion, making it one of the most valuable private technology companies in the world. The financing round was led by Hong Kong-based technology fund All-Stars Investment Limited, a fund run by former Morgan Stanley analyst Richard Ji. In 2014, the company sold over 60 million smartphones. In 2014, 94% of the company's revenue came from mobile phone sales. In April 2015, Ratan Tata acquired a stake in Xiaomi. On 30 June 2015, Xiaomi announced its expansion into Brazil with the launch of the locally manufactured Redmi 2; it was the first time the company assembled a smartphone outside of China. However, the company left Brazil in the second half of 2016. On 26 February 2016, Xiaomi launched the Mi5, powered by the Qualcomm Snapdragon 820 processor. On 3 March 2016, Xiaomi launched the Redmi Note 3 Pro in India, the first smartphone to be powered by a Qualcomm Snapdragon 650 processor. On 10 May 2016, Xiaomi launched the Mi Max, powered by the Qualcomm Snapdragon 650/652 processor. In June 2016, the company acquired patents from Microsoft. In September 2016, Xiaomi launched sales in the European Union through a partnership with ABC Data. Also in September 2016, the Xiaomi Mi Robot vacuum was released by Roborock. On 26 October 2016, Xiaomi launched the Mi Mix, powered by the Qualcomm Snapdragon 821 processor. On 22 March 2017, Xiaomi announced that it planned to set up a second manufacturing unit in India in partnership with contract manufacturer Foxconn.
On 19 April 2017, Xiaomi launched the Mi6, powered by the Qualcomm Snapdragon 835 processor. In July 2017, the company entered into a patent licensing agreement with Nokia. On 5 September 2017, Xiaomi released the Xiaomi Mi A1, the first Android One smartphone, under the slogan "Created by Xiaomi, Powered by Google". Xiaomi stated that it had started working with Google on the Mi A1 Android One smartphone earlier in 2017. An alternate version of the phone was also available with MIUI, the Mi 5X. In 2017, Xiaomi opened Mi Stores in India, Pakistan and Bangladesh. The EU's first Mi Store was opened in Athens, Greece in October 2017. In Q3 2017, Xiaomi overtook Samsung to become the largest smartphone brand in India. Xiaomi sold 9.2 million units during the quarter. On 7 November 2017, Xiaomi commenced sales in Spain and Western Europe. 2018-present In April 2018, Xiaomi announced a smartphone gaming brand called Black Shark. It had 6 GB of RAM coupled with a Snapdragon 845 SoC, and was priced at $508, which was cheaper than its competitors. On 2 May 2018, Xiaomi announced the launch of Mi Music and Mi Video to offer "value-added internet services" in India. On 3 May 2018, Xiaomi announced a partnership with 3 to sell smartphones in the United Kingdom, Ireland, Austria, Denmark, and Sweden. In May 2018, Xiaomi began selling smart home products in the United States through Amazon. In June 2018, Xiaomi became a public company via an initial public offering on the Hong Kong Stock Exchange, raising $4.72 billion. On 7 August 2018, Xiaomi announced that Holitech Technology Co. Ltd., Xiaomi's top supplier, would invest up to $200 million over the next three years to set up a major new plant in India. In August 2018, the company announced POCO as a mid-range smartphone line, first launching in India. In Q4 of 2018, the Xiaomi Poco F1 became the best-selling smartphone sold online in India.
The Pocophone was sometimes referred to as the "flagship killer" for offering high-end specifications at an affordable price. In October 2019, the company announced that it would launch more than 10 5G phones in 2020, including the Mi 10/10 Pro with 5G functionality. On 17 January 2020, Poco became a separate sub-brand of Xiaomi with entry-level and mid-range devices. In March 2020, Xiaomi showcased its new 40W wireless charging solution, which was able to fully charge a smartphone with a 4,000 mAh battery from flat in 40 minutes. In October 2020, Xiaomi became the third largest smartphone maker in the world by shipment volume, shipping 46.2 million handsets in Q3 2020. On 30 March 2021, Xiaomi announced that it would invest US$10 billion in electric vehicles over the following ten years. On 31 March 2021, Xiaomi announced a new logo for the company, designed by Kenya Hara. In July 2021, Xiaomi became the second largest smartphone maker in the world, according to Canalys. It also surpassed Apple for the first time in Europe, making it the second largest in Europe according to Counterpoint. In August 2021, the company acquired autonomous driving company Deepmotion for $77 million.

Innovation and development

In the 2021 review of WIPO's annual World Intellectual Property Indicators, Xiaomi was ranked 2nd in the world, with 216 industrial design registrations published under the Hague System during 2020. This position was up on its previous 3rd place ranking in 2019, for 111 industrial design registrations. On 8 February 2022, Lei released a statement on Weibo to announce plans for Xiaomi to enter the high-end smartphone market and surpass Apple as the top seller of premium smartphones in China within three years. To achieve that goal, Xiaomi will invest US$15.7 billion in R&D over the next five years, and the company will benchmark its products and user experience against Apple's product lines.
Lei described the new strategy as a "life-or-death battle for our development" in his Weibo post, after Xiaomi's market share in China contracted over consecutive quarters, from 17% to 14% between Q2 and Q3 2021, dipping further to 13.2% as of Q4 2021.

Corporate identity

Name etymology

Xiaomi () is the Chinese word for "millet". In 2011 its CEO Lei Jun suggested there are more meanings than just the "millet and rice". He linked the "Xiao" () part to the Buddhist concept that "a single grain of rice of a Buddhist is as great as a mountain", suggesting that Xiaomi wants to work from the little things, instead of starting by striving for perfection, while "mi" () is an acronym for Mobile Internet and also "mission impossible", referring to the obstacles encountered in starting the company. He also stated that he thinks the name is cute. In 2012 Lei Jun said that the name is about revolution and being able to bring innovation into a new area. Xiaomi's new "Rifle" processor has given weight to several sources linking the latter meaning to the Communist Party of China's "millet and rifle" (小米加步枪) revolutionary idiom during the Second Sino-Japanese War.

Logo and mascot

Xiaomi's first logo consisted of a single orange square with the letters "MI" in white located in the center of the square. This logo was in use until 31 March 2021, when a new logo, designed by well-known Japanese designer Kenya Hara, replaced the old one. It retains the same basic structure as the previous logo, but the square was replaced with a rounded-corner "squircle", with the letters "MI" remaining identical, along with a slightly darker hue. Xiaomi's mascot, Mitu, is a white rabbit wearing an ushanka (known locally as a "Lei Feng hat" in China) with a red star and a red scarf around its neck.

Controversy, criticism and regulatory actions

Imitation of Apple Inc.

Xiaomi has been accused of imitating Apple Inc.
The hunger marketing strategy of Xiaomi was described as riding on the back of the "cult of Apple". After reading a book about Steve Jobs in college, Xiaomi's chairman and CEO, Lei Jun, carefully cultivated a Steve Jobs image, including jeans, dark shirts, and Jobs' announcement style at Xiaomi's earlier product announcements. He was characterized as a "counterfeit Jobs." In 2012, the company was said to be counterfeiting Apple's philosophy and mindset. In 2013, critics debated how much of Xiaomi's products were innovative, and how much of their innovation was just really good public relations. Others pointed out that while there are similarities to Apple, the ability to customize the software based upon user preferences through the use of Google's Android operating system sets Xiaomi apart. Xiaomi has also developed a much wider range of consumer products than Apple.

Violation of GNU General Public License

In January 2018, Xiaomi was criticized for its non-compliance with the terms of the GNU General Public License. The Android project's Linux kernel is licensed under the copyleft terms of the GPL, which requires Xiaomi to distribute the complete source code of the Android kernel and device trees for every Android device it distributes. By refusing to do so, or by unreasonably delaying these releases, Xiaomi was operating in violation of intellectual property law in China, a WIPO member state. Prominent Android developer Francisco Franco publicly criticized Xiaomi's behaviour after repeated delays in the release of kernel source code. Xiaomi had said in 2013 that it would release the kernel code; the kernel source code is now available on GitHub.

Privacy concerns and data collection

As a company based in China, Xiaomi is obligated to share data with the Chinese government under the China Internet Security Law and National Intelligence Law.
There were reports that Xiaomi's cloud messaging service sends some private data, including call logs and contact information, to Xiaomi servers. Xiaomi later released an MIUI update that made cloud messaging optional, and said that no private data was sent to Xiaomi servers if the cloud messaging service was turned off. On 23 October 2014, Xiaomi announced that it was setting up servers outside of China for international users, citing improved services and compliance with regulations in several countries. On 19 October 2014, the Indian Air Force issued a warning against Xiaomi phones, stating that they were a national threat as they sent user data to an agency of the Chinese government. In April 2019, researchers at Check Point found a security breach in Xiaomi phone apps; the security flaw was reported to be preinstalled. On 30 April 2020, Forbes reported that Xiaomi extensively tracks use of its browsers, including private browser activity, phone metadata and device navigation, and, more alarmingly, without secure encryption or anonymization, more invasively and to a greater extent than mainstream browsers. Xiaomi disputed the claims, while confirming that it did extensively collect browsing data, saying that the data was not linked to any individuals and that users had consented to being tracked. Xiaomi posted a response stating that the collection of aggregated usage statistics is used for internal analysis, and that it would not link any personally identifiable information to this data. However, after a follow-up by Gabriel Cirlig, the writer of the report, Xiaomi added an option to completely stop the information leak when using its browser in incognito mode.

State Administration of Radio, Film, and Television issue

In November 2012, Xiaomi's smart set-top box stopped working one week after launch due to the company having run afoul of China's State Administration of Radio, Film, and Television. The regulatory issues were overcome in January 2013.
Misleading sales figures

The Taiwanese Fair Trade Commission investigated the flash sales and found that Xiaomi had sold fewer smartphones than advertised. Xiaomi claimed that the number of smartphones sold was 10,000 units each for the first two flash sales, and 8,000 units for the third one. However, the FTC investigated the claims and found that Xiaomi sold 9,339 devices in the first flash sale, 9,492 units in the second one, and 7,389 in the third. It was found that during the first flash sale, Xiaomi had given 1,750 priority "F-codes" to people who could place their orders without having to go through the flash sale, thus diminishing the stock that was publicly available. The FTC fined Xiaomi.

Shut down of Australia store

In March 2014, Xiaomi Store Australia (an unrelated business) began selling Xiaomi mobile phones online in Australia through its website, XiaomiStore.com.au. However, Xiaomi soon "requested" that the store be shut down by 25 July 2014. On 7 August 2014, shortly after sales were halted, the website was taken down. An industry commentator described the action by Xiaomi to get the Australian website closed down as unprecedented, saying, "I've never come across this [before]. It would have to be a strategic move." At the time this left only one online vendor selling Xiaomi mobile phones into Australia, namely Yatango (formerly MobiCity), which was based in Hong Kong. This business closed in late 2015.

Temporary ban in India due to patent infringement

On 9 December 2014, the High Court of Delhi granted an ex parte injunction that banned the import and sale of Xiaomi products in India. The injunction was issued in response to a complaint filed by Ericsson in connection with the infringement of its patent licensed under fair, reasonable, and non-discriminatory licensing. The injunction was applicable until 5 February 2015, the date on which the High Court was scheduled to summon both parties for a formal hearing of the case.
On 16 December, the High Court granted permission to Xiaomi to sell its devices running on a Qualcomm-based processor until 8 January 2015. Xiaomi then held various sales on Flipkart, including one on 30 December 2014. Its flagship Xiaomi Redmi Note 4G phone sold out in six seconds. A judge extended the division bench's interim order, allowing Xiaomi to continue the sale of Qualcomm chipset-based handsets until March 2018.

U.S. sanctions due to ties with People's Liberation Army

In January 2021, the United States government named Xiaomi as a company "owned or controlled" by the People's Liberation Army and thereby prohibited any American company or individual from investing in it. However, the investment ban was blocked by a US court ruling after Xiaomi filed a lawsuit in the United States District Court for the District of Columbia, with the court expressing skepticism regarding the government's national security concerns. Xiaomi denied the allegations of military ties and stated that its products and services were of civilian and commercial use. In May 2021, Xiaomi reached an agreement with the Defense Department to remove the designation of the company as military-linked.

Lawsuit by KPN alleging patent infringement

On 19 January 2021, KPN, a Dutch landline and mobile telecommunications company, sued Xiaomi and others for patent infringement. KPN had filed similar lawsuits against Samsung in 2014 and 2015 in a court in the US.

Lawsuit by Wyze alleging invalid patent

In July 2021, Xiaomi submitted a report to Amazon alleging that Wyze Labs had infringed upon its 2019 "Autonomous Cleaning Device and Wind Path Structure of Same" robot vacuum patent. On 15 July 2021, Wyze filed a lawsuit against Xiaomi in the U.S. District Court for the Western District of Washington, arguing that prior art exists and asking the court for a declaratory judgment that Xiaomi's 2019 robot vacuum patent is invalid.
Censorship

In September 2021, the Lithuanian Ministry of National Defence urged people to dispose of Chinese-made mobile phones and avoid buying new ones, after the National Cyber Security Centre of Lithuania found that Xiaomi devices have built-in censorship capabilities that can be turned on remotely. Xiaomi phones sold in Europe had a built-in ability to detect and censor terms such as "Free Tibet", "Long live Taiwan independence" or "democracy movement". This capability was discovered in Xiaomi's flagship phone Mi 10T 5G. The list of terms which could be censored by the Xiaomi phone's system apps, including the default internet browser, included 449 terms in Chinese as of September 2021, and the list was continuously updated.
https://en.wikipedia.org/wiki/Jihadist%20extremism%20in%20the%20United%20States
Jihadist extremism in the United States
Jihadist extremism in the United States (or Islamist extremism in the United States) refers to Islamic extremism occurring within the United States. Islamic extremism is adherence to a fundamentalist interpretation of Islam (see Islamic fundamentalism), potentially including the promotion of violence to achieve political goals (see Jihadism). In the aftermath of the September 11, 2001 terror attacks, Islamic extremism became a prioritized national security concern of the United States government and a focus of many subsidiary security and law enforcement entities. Initially, the focus of concern was on foreign terrorist groups, particularly al-Qaeda, but in the years since 9/11 the focus has shifted towards Islamic extremism within the United States. The number of American citizens or long-term residents involved in extremist activity is small, but it is nevertheless a national security concern. Zeyno Baran, senior fellow and director of the Center for Eurasian Policy at the Hudson Institute, argues that a more appropriate term is Islamist extremism, to distinguish the religion from the political ideology that leads to extremism: Islam, the religion, deals with piety, ethics, and beliefs, and can be compatible with secular liberal democracy and basic civil liberties. Islamists, however, believe Islam is the only basis for the legal and political system that governs the world's economic, social, and judicial mechanisms. Islamic law, or sharia, must shape all aspects of human society, from politics and education to history, science, the arts, and more. It is diametrically opposed to liberal democracy. With the value placed on freedom of religion in the United States, religious extremism is a difficult and divisive topic. Dr. M.
Zuhdi Jasser, president and founder of the American Islamic Forum for Democracy, testified before Congress that the United States is "polarized on its perceptions of Muslims and the radicalization that occurs within our communities... One camp refuses to believe any Muslim could be radicalized living in blind multiculturalism, apologetics, and denial, and the other camp believes all devout Muslims and the faith of Islam are radicalized..." In between the two polarities is a respect for the religion of Islam coupled with an awareness of the danger "of a dangerous internal theo-political domestic and global ideology that must be confronted – Islamism."

Violent Islamic extremism

"The single biggest change in terrorism over the past several years has been the wave of Americans joining the fight – not just as foot soldiers but as key members of Islamist groups and as operatives inside terrorist organizations, including al-Qaeda." American citizens or longtime residents are "masterminds, propagandists, enablers, and media strategists" in foreign terror groups and work to spread extremist ideology in the West. This trend is worrisome because these American extremists "understand the United States better than the United States understands them." There is a lack of understanding of how American radical jihadists are propagated. There is "no typical profile" of an American extremist, and the "experiences and motivating factors vary widely." Janet Napolitano, former Secretary of the Department of Homeland Security, stated that it is unclear if there has been an "increase in violent radicalization" or "a rise in the mobilization of previously radicalized individuals". Terrorist organizations seek Americans to radicalize and recruit because of their familiarity with the United States and the West. The evolving extremist threat makes it "more difficult for law enforcement or the intelligence community to detect and disrupt plots."
Some American extremists are actively recruited and trained by foreign terrorist organizations, and others, known as "lone wolves", radicalize on their own. The Fort Hood shooter, Major Nidal Hasan, is an American of Palestinian descent. He communicated via email with Anwar al-Awlaki, but had no direct ties to al-Qaeda. Al-Qaeda propaganda uses Hasan to promote the idea of "be al-Qaeda by not being al-Qaeda". Abdulhakim Muhammad, an American citizen, shot a military recruiter in Little Rock, Arkansas in June 2009 after spending time in Yemen; he was born Carlos Bledsoe and converted to Islam as a young adult. Faisal Shahzad is a naturalized American citizen from Pakistan who received bomb training from the Tehrik-i-Taliban Pakistan; his plot to detonate a bomb in New York's Times Square was discovered only after the bomb failed. Zachary Chesser converted to Islam after high school and began to spread extremism over the internet. He was arrested attempting to board a flight to Somalia to join the terrorist group al-Shabaab.

Americans in Radical Islamic terrorist organizations

Since 2007, over 50 American citizens and permanent residents have been arrested or charged in connection with attempts to join terrorist groups abroad, including al-Qaeda in the Arabian Peninsula (AQAP) and al-Shabaab. In 2013 alone, 9 Americans are known to have joined or attempted to join foreign terrorist organizations. Americans inside al-Qaeda provide insider knowledge of the United States. Adam Gadahn was an American convert who joined al-Qaeda in the late 1990s. He released English-language propaganda videos, but Gadahn lacked charisma, and his role as al-Qaeda's English-language voice was taken over by Anwar al-Awlaki. Awlaki was an American of Yemeni descent, killed on September 30, 2011 by a U.S. missile strike in Yemen. Awlaki had religious credentials Gadahn lacked and a "gently persuasive" style; "tens of thousands, maybe millions, have watched [Awlaki's] lectures on the Internet."
His perfect English and style broadened al-Qaeda's reach. Another key American in al-Qaeda's power structure was Adnan Shukrijumah, believed to be the highest ranking American in al-Qaeda. He was born in Saudi Arabia, grew up in Trinidad, and moved to Florida as a teenager; he was a naturalized American citizen and left the United States in the spring of 2001. Shukrijumah was a mystery to authorities until he was identified by Najibullah Zazi after Zazi was arrested for a failed plot to bomb transportation targets around New York City. Zazi had traveled to Afghanistan to fight U.S. forces, but Shukrijumah convinced Zazi to return to the United States and plan an attack there. In May 2014, Florida-born convert Mohammed Abusalha conducted a deadly suicide bombing while fighting for Islamist extremists in Syria. In 2014, Troy Kastigar and Douglas McAuthur McCain, two Americans who converted to Islam, traveled to the Islamic State for jihad and were killed in battle. In 2015, Zulfi Hoxha traveled to Syria, where he became a significant figure in ISIS.

Places for radicalization

Prison

The United States has the world's largest prison population, and "prisons have long been places where extremist ideology and calls to violence could find a willing ear, and conditions are often conducive to radicalization." Muslim prisoners have been characterized in the media as a danger or threat for radicalization post-9/11. There is a "significant lack of social science research" on the issue of Islamic extremism in U.S. prisons, and there is disagreement on the danger Islamic extremism in prisons poses to U.S. national security. Some suggest that the gravity of so-called prison radicalization should be questioned, given that the data presents only one terrorism-related case among millions of individuals.
Reports have cautioned about the potential for radicalization of vulnerable inmates who have little exposure to mainstream Islam, and have called for monitoring of religious services in case inmates are exposed to extremist versions of the religion through inadequate religious service providers or other inmates via anti-US sermons and extremist media; such versions may draw on the Salafi form of Sunni Islam (including revisionist versions commonly known as "prison Islam") or an extremist view of Shia Islam. The term "Prison Islam" or "Jailhouse Islam" is unique to prison and incorporates values of gang loyalty and violence into the religion. Despite there being over 350,000 Muslim inmates in the United States, little evidence indicates widespread radicalization or foreign recruitment. Some argue that empirical studies have not supported the claims that prisons are fertile grounds for terrorism. Only one Black American prison convert, among the millions of adult males under supervision in the United States, was convicted of involvement in terrorism. This individual founded a prison extremist group, called Jam'iyyat Ul-Islam Is-Saheeh (JIS), from New Folsom State Prison in California and hatched a plot to attack numerous local government and Jewish targets. In July 2005, members of JIS "were involved in almost a dozen armed gas station robberies in Los Angeles with the goal of financing terrorist operations." Statistics are not kept on the religious orientation of inmates in the U.S. prison system, limiting the ability to adequately judge the potential for Islamic extremism. A report published by the Department of Justice's Office of the Inspector General in 2004, on the Federal Bureau of Prisons' selection of Muslim chaplains, estimated that 6% of the federal inmate population seek Islamic services. Through prisoner self-reporting, the majority of Muslims in federal prison are Sunni or Nation of Islam followers.
The federal prison population is only a small percentage of the total U.S. prison population, however, and cannot provide an overall representation of Muslim inmates in the United States.

Mosques

Some mosques in the United States transmit extremist ideas. The North American Islamic Trust (NAIT), a group with ties to the Muslim Brotherhood, "holds titles of approximately 300 properties [mosques and Islamic schools]". The organization's website states: "NAIT does not administer these institutions or interfere in their daily management, but is available to support and advise them regarding their operation in conformity with the Shari'ah." Other research on the Muslim Brotherhood in the United States claims NAIT influences a far larger number of Islamic institutions in the U.S. There is no government policy on the establishment of mosques in the United States and no way to monitor activity. The value placed on religious freedom in the U.S. complicates the situation, as mosques are places of worship that may be used to spread extremist ideology.

Internet

The internet can be used as a "facilitator—even an accelerant—for terrorist and criminal activity." The growing body of online English-language extremist material of recent years is readily available, with guidance for planning violent activity. "English-language web forums […] foster a sense of community and further indoctrinate new recruits". The Internet has "become a tool for spreading extremist propaganda, and for terrorist recruiting, training, and planning. It is a means of social networking for like-minded extremists...including those who are not yet radicalized, but who may become so through the anonymity of cyberspace." Al-Qaeda in the Arabian Peninsula (AQAP) published an English-language online magazine called Inspire. The magazine is designed to appeal to Westerners.
It is "[w]ritten in colloquial English, [with] jazzy headlines and articles that made it seem almost mainstream—except that they were all about terrorism." Inspire "included tips for aspiring extremists on bomb-making, traveling overseas, email encryption, and a list of individuals to assassinate." The editor was believed to be Samir Khan, an American citizen, based on work he did before leaving the United States. The magazine appeared six months after Khan arrived in Yemen. There have been seven issues of Inspire. Khan died in the same missile attack that killed Anwar al-Awlaki, and the future of the magazine is unknown. Yousef al-Khattab and Younes Abdullah Mohammed, both converts to Islam, started a group called Revolution Muslim. The group was meant "to be both a radical Islamic organization and a movement" with goals that include "establishing Islamic law in the United States, destroying Israel and taking al-Qaeda's messages to the masses." A list of its members "reads like a who's who of American homegrown terrorism suspects"; Samir Khan and Jihad Jane were regulars in the Revolution Muslim chat rooms. Revolution Muslim had a website and a YouTube account before it was shut down after a posting glorifying the stabbing of a British member of Parliament. The revolutionmuslim.com domain now redirects to a website called Islam Policy, run by Younes Abdullah Mohammed. The danger of the website, and others offering similar content, is that they offer the chance to become further involved in violent extremism and to connect with like-minded people in the U.S. and abroad.

U.S.-specific extremist narrative

Key to the trend of increasing Islamic extremism in the United States "has been the development of a US-specific narrative that motivates individuals to violence."
"This narrative—a blend of al-Qa‘ida inspiration, perceived victimization, and glorification of past plotting—has become increasingly accessible through the Internet, and English-language websites are tailored to address the unique concerns of US-based extremists." "To disaffected, aggrieved, or troubled individuals, this narrative explains in a simple framework the ills around them and the geopolitical discord they see on their television sets and on the Internet." The narrative is easy to understand and grants "meaning and heroic outlet" for the discontented and alienated.

U.S. government response

The President, Federal Bureau of Investigation (FBI), Department of Homeland Security (DHS), and the National Counterterrorism Center (NCTC) are the elements of the U.S. government most relevant to the threat of American Islamic extremism, and each has taken steps to address and counter the issue. Since 9/11 the government has worked to improve information sharing "within the government, and between federal, state, local, and tribal law enforcement, as well as with the public." The "If You See Something, Say Something" campaign, instituted by DHS and local law enforcement, was created to raise public awareness of the potential dangers. In August 2011, the Office of the President released a strategy to counter violent extremism called Empowering Local Partners to Prevent Violent Extremism in the United States. The strategy takes a three-pronged approach of community engagement, better training, and counternarratives. The plan states: "We must actively and aggressively counter the range of ideologies violent extremists employ to radicalize and recruit individuals by challenging justifications for violence and by actively promoting the unifying and inclusive visions of our American ideals," challenging extremist propaganda through words and deeds.
The goal is to "prevent violent extremists and their supporters from inspiring, radicalizing, financing, or recruiting individuals or groups in the United States to commit acts of violence."

American Muslim community response

There are Muslim Americans speaking out against Islamic extremist activities, although the most common complaint is that Muslim leaders do not speak out forcefully enough against radical jihad. An important voice is Dr. Zuhdi Jasser, the president of the American Islamic Forum for Democracy. Dr. Jasser testified before a House hearing on Muslim radicalization in the U.S. in early 2010: For me it is a very personal mission to leave my American Muslim children a legacy that their faith is based in the unalienable right to liberty and to teach them that the principles that founded America do not contradict their faith but strengthen it. Our founding principle is that I as a Muslim am able to best practice my faith in a society like the United States that guarantees the rights of every individual blind to faith with no governmental intermediary stepping between the individual and the creator to interpret the will of God. Because of this, our mission is to advocate for the principles of the Constitution of the United States of America, liberty and freedom and the separation of mosque and state. We believe that this mission from within the "House of Islam" is the only way to inoculate Muslim youth and young adults against radicalization. The "Liberty narrative" is the only effective counter to the "Islamist narrative." Another voice that warned of Islamic extremism before the September 11 attacks is Shaykh Muhammad Hisham Kabbani, chairman of the Islamic Supreme Council of America.
Attacks or failed attacks by date

1993 World Trade Center bombing
1995 Bojinka plot
1997 Brooklyn bombing plot
2000 millennium attack plots
2001 September 11 attacks
2001 shoe bomb attempt
2002 Los Angeles Airport shooting
2002 José Padilla (Abdullah al-Muhajir) plot
2002 Buffalo Six
2004 financial buildings plot
2005 Los Angeles bomb plot
2006 Hudson River bomb plot
2006 Sears Tower plot
2006 Seattle Jewish Federation shooting
2006 Toledo terror plot
2006 transatlantic aircraft plot
2006 UNC SUV attack
2007 Fort Dix attack plot
2007 John F. Kennedy International Airport attack plot
2009 Failed underwear bomb on Northwest Airlines Flight 253
2009 Little Rock recruiting office shooting
2009 Bronx terrorism plot
2009 Dallas car bomb plot by Hosam Maher Husein Smadi
2009 New York City Subway and United Kingdom plot
2009 Fort Hood shooting
2009 Colleen LaRose arrested (not made public until March 2010)
2010 Transatlantic aircraft bomb plot
2010 King Salmon, Alaska local meteorologist and wife assassination plots
2010 Alleged Washington Metro bomb plot
2011 Alleged Saudi Arabian student bomb plots
2011 Manhattan terrorism plot
2011 Lone wolf New York City, Bayonne, NJ pipe bombs plot
2012 Car bomb plot in Florida
2013 Boston Marathon bombing
2013 Wichita bombing attempt
2014 beheading by Alton Nolen
2014 Seattle, Washington and West Orange, New Jersey killing spree by Ali Muhammad Brown
2015 Boston beheading plot
2015 Curtis Culwell Center attack
2015 Chattanooga shootings
2015 San Bernardino attack
2016 Orlando nightclub shooting
2016 New York and New Jersey bombings
2016 St. Cloud, Minnesota mall stabbing
2016 Ohio State University attack
2017 New York City attack
2019 Naval Air Station Pensacola shooting

Media attention in the US on Islamic Terrorism

The Universities of Georgia and Alabama in the United States conducted a study comparing media coverage of "terrorist attacks" committed by Islamist militants with those of non-Muslims in the United States.
Researchers found that "terrorist attacks" by Islamist militants receive 357% more media attention than attacks committed by non-Muslims or whites. Terrorist attacks committed by non-Muslims (or where the religion was unknown) received an average of 15 headlines, while those committed by Muslim extremists received 105 headlines. The study was based on an analysis of news reports covering terrorist attacks in the United States between 2005 and 2015.

See also

Islam and violence
Islam and domestic violence
List of Islamist terrorist attacks, worldwide (since the 1970s)
Homegrown terrorism
Terrorism in the United States
Islamism
Sharia
Empowering local partners to prevent violent extremism in the United States
33496160
https://en.wikipedia.org/wiki/Mobile%20app
Mobile app
A mobile application or app is a computer program or software application designed to run on a mobile device such as a phone, tablet, or watch. Mobile applications often stand in contrast to desktop applications, which are designed to run on desktop computers, and web applications, which run in mobile web browsers rather than directly on the mobile device. Apps were originally intended for productivity assistance such as email, calendar, and contact databases, but public demand for apps caused rapid expansion into other areas such as mobile games, factory automation, GPS and location-based services, order tracking, and ticket purchases, so that there are now millions of apps available. Many apps require Internet access. Apps are generally downloaded from app stores, which are a type of digital distribution platform. The term "app", short for "software application", has since become very popular; in 2010, it was listed as "Word of the Year" by the American Dialect Society. Apps are broadly classified into three types: native, hybrid, and web apps. Native applications are designed specifically for a mobile operating system, typically iOS or Android. Web apps are written in HTML5, CSS, and JavaScript and typically run through a browser. Hybrid apps are built using web technologies such as JavaScript, CSS, and HTML5 and function like web apps wrapped in a native container. Overview Most mobile devices are sold with several apps bundled as pre-installed software, such as a web browser, email client, calendar, mapping program, and an app for buying music, other media, or more apps. Some pre-installed apps can be removed by an ordinary uninstall process, thus leaving more storage space for desired ones. Where the software does not allow this, some devices can be rooted to eliminate the undesired apps. Apps that are not preinstalled are usually available through distribution platforms called app stores.
These may be operated by the owner of the device's mobile operating system, such as the App Store (iOS) or Google Play Store; by the device manufacturers, such as the Galaxy Store and Huawei AppGallery; or by third parties, such as the Amazon Appstore and F-Droid. Usually, they are downloaded from the platform to a target device, but sometimes they can be downloaded to laptops or desktop computers. Apps can also be installed manually, for example by running an Android application package on Android devices. Some apps are freeware, while others have a price, which can be upfront or a subscription. Some apps also include microtransactions and/or advertising. In any case, the revenue is usually split between the application's creator and the app store. The same app can, therefore, cost a different price depending on the mobile platform. Mobile apps were originally offered for general productivity and information retrieval, including email, calendar, contacts, the stock market and weather information. However, public demand and the availability of developer tools drove rapid expansion into other categories, such as those handled by desktop application software packages. As with other software, the explosion in the number and variety of apps made discovery a challenge, which in turn led to the creation of a wide range of review, recommendation, and curation sources, including blogs, magazines, and dedicated online app-discovery services. In 2014, government regulatory agencies began trying to regulate and curate apps, particularly medical apps. Some companies offer apps as an alternative method to deliver content, with certain advantages over an official website. With a growing number of mobile applications available at app stores and the improved capabilities of smartphones, people are downloading more applications to their devices. Usage of mobile apps has become increasingly prevalent across mobile phone users.
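The manual installation route mentioned above works because an Android application package (APK) is an ordinary ZIP archive with a conventional internal layout. The following is a minimal sketch; the archive built here uses toy placeholder contents for illustration only, not an installable app:

```python
# An APK is a ZIP archive; real packages contain a binary AndroidManifest.xml,
# compiled Dalvik bytecode (classes.dex), and signature data under META-INF/.
# The member contents below are placeholders.
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as apk:
    apk.writestr("AndroidManifest.xml", "<manifest/>")           # binary XML in real APKs
    apk.writestr("classes.dex", b"dex\n035\x00")                 # Dalvik bytecode
    apk.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")

# Installers and app stores read the archive members to verify the package.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as apk:
    names = apk.namelist()
print(names)
```

On a real device, such a package would be installed with a tool like adb (`adb install app.apk`) or by opening the file directly.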
A May 2012 comScore study reported that during the previous quarter, more mobile subscribers used apps than browsed the web on their devices: 51.1% vs. 49.8% respectively. Researchers found that usage of mobile apps strongly correlates with user context and depends on the user's location and the time of day. Mobile apps are playing an ever-increasing role within healthcare and, when designed and integrated correctly, can yield many benefits. Market research firm Gartner predicted that 102 billion apps would be downloaded in 2013 (91% of them free), which would generate $26 billion in the US, up 44.4% on 2012's US$18 billion. By Q2 2015, the Google Play and Apple stores alone generated $5 billion. An analyst report estimates that the app economy creates revenues of more than €10 billion per year within the European Union, while over 529,000 jobs have been created in 28 EU states due to the growth of the app market. Types Mobile applications may be classified by numerous methods. A common scheme is to distinguish native, web-based, and hybrid apps. Native app All apps targeted toward a particular mobile platform are known as native apps. Therefore, an app intended for an Apple device does not run on Android devices. As a result, most businesses develop apps for multiple platforms. While developing native apps, professionals incorporate best-in-class user interface modules. This accounts for better performance, consistency, and a good user experience. Users also benefit from wider access to application programming interfaces and can make full use of the capabilities of the particular device. They can also switch from one app to another effortlessly. The main purpose of creating such apps is to ensure the best performance for a specific mobile operating system. Web-based app A web-based app is implemented with the standard web technologies of HTML, CSS, and JavaScript.
Internet access is typically required for proper behavior or to use all features, compared to offline usage. Most, if not all, user data is stored in the cloud. The performance of these apps is similar to a web application running in a browser, which can be noticeably slower than the equivalent native app. It also may not have the same level of features as the native app. Hybrid app The concept of the hybrid app is a mix of native and web-based apps. Apps developed using Apache Cordova, Flutter, Xamarin, React Native, Sencha Touch, and other frameworks fall into this category. These are made to support web and native technologies across multiple platforms. Moreover, these apps are easier and faster to develop, involving the use of a single codebase that works in multiple mobile operating systems. Despite such advantages, hybrid apps exhibit lower performance, and often fail to bear the same look-and-feel in different mobile operating systems. Development Developing apps for mobile devices requires considering the constraints and features of these devices. Mobile devices run on battery power, have less powerful processors than personal computers, and also have more features such as location detection and cameras. Developers also have to consider a wide array of screen sizes, hardware specifications and configurations because of intense competition in mobile software and changes within each of the platforms (although these issues can be overcome with mobile device detection). Mobile application development requires the use of specialized integrated development environments. Mobile apps are first tested within the development environment using emulators and later subjected to field testing. Emulators provide an inexpensive way to test applications on mobile phones to which developers may not have physical access. Mobile user interface (UI) design is also essential. Mobile UI considers constraints and contexts, screen, input and mobility as outlines for design.
The user is often the focus of interaction with their device, and the interface entails components of both hardware and software. User input allows for the users to manipulate a system, and device's output allows the system to indicate the effects of the users' manipulation. Mobile UI design constraints include limited attention and form factors, such as a mobile device's screen size for a user's hand. Mobile UI contexts signal cues from user activity, such as location and scheduling that can be shown from user interactions within a mobile application. Overall, mobile UI design's goal is primarily for an understandable, user-friendly interface. Mobile UIs, or front-ends, rely on mobile back-ends to support access to enterprise systems. The mobile back-end facilitates data routing, security, authentication, authorization, working off-line, and service orchestration. This functionality is supported by a mix of middleware components including mobile app servers, Mobile Backend as a service (MBaaS), and SOA infrastructure. Conversational interfaces display the computer interface and present interactions through text instead of graphic elements. They emulate conversations with real humans. There are two main types of conversational interfaces: voice assistants (like the Amazon Echo) and chatbots. Conversational interfaces are growing particularly practical as users are starting to feel overwhelmed with mobile apps (a term known as "app fatigue"). David Limp, Amazon's senior vice president of devices, says in an interview with Bloomberg, "We believe the next big platform is voice." Distribution The three biggest app stores are Google Play for Android, App Store for iOS, and Microsoft Store for Windows 10, Windows 10 Mobile, and Xbox One. Google Play Google Play (formerly known as the Android Market) is an international online software store developed by Google for Android devices. It opened in October 2008. 
In July 2013, the number of apps downloaded via the Google Play Store surpassed 50 billion, of the over 1 million apps available. As of September 2016, according to Statista the number of apps available exceeded 2.4 million. Over 80% of apps in the Google Play Store are free to download. The store generated a revenue of 6 billion U.S. dollars in 2015. App Store Apple's App Store for iOS and iPadOS was not the first app distribution service, but it ignited the mobile revolution and was opened on July 10, 2008; as of September 2016, it reported over 140 billion downloads. The original AppStore was first demonstrated to Steve Jobs in 1993 by Jesse Tayler at NeXTWorld Expo. As of June 6, 2011, there were 425,000 apps available, which had been downloaded by 200 million iOS users. During Apple's 2012 Worldwide Developers Conference, CEO Tim Cook announced that the App Store had 650,000 available apps to download, as well as 30 billion apps downloaded from the app store until that date. From an alternative perspective, figures seen in July 2013 by the BBC from tracking service Adeven indicate over two-thirds of apps in the store are "zombies", barely ever installed by consumers. Microsoft Store Microsoft Store (formerly known as the Windows Store) was introduced by Microsoft in 2012 for its Windows 8 and Windows RT platforms. While it can also carry listings for traditional desktop programs certified for compatibility with Windows 8, it is primarily used to distribute "Windows Store apps"—which are primarily built for use on tablets and other touch-based devices (but can still be used with a keyboard and mouse, and on desktop computers and laptops). Others Amazon Appstore is an alternative application store for the Android operating system. It was opened in March 2011 and as of June 2015, the app store has nearly 334,000 apps. The Amazon Appstore's Android Apps can also be installed and run on BlackBerry 10 devices.
BlackBerry World is the application store for BlackBerry 10 and BlackBerry OS devices. It opened in April 2009 as BlackBerry App World. Ovi (Nokia) for Nokia phones was launched internationally in May 2009. In May 2011, Nokia announced plans to rebrand its Ovi product line under the Nokia brand, and Ovi Store was renamed Nokia Store in October 2011. From January 2014, the Nokia Store no longer allowed developers to publish new apps or app updates for its legacy Symbian and MeeGo operating systems. Windows Phone Store was introduced by Microsoft for its Windows Phone platform, which was launched in October 2010. It has over 120,000 apps available. Samsung Apps was introduced in September 2009. As of October 2011, Samsung Apps reached 10 million downloads. The store is available in 125 countries and it offers apps for Windows Mobile, Android and Bada platforms. The Electronic AppWrapper was the first electronic distribution service to collectively provide encryption and purchasing electronically. F-Droid is a free and open-source Android app repository. Opera Mobile Store is a platform-independent app store for iOS, Java, BlackBerry OS, Symbian, Windows Mobile, and Android-based mobile phones. It was launched internationally in March 2011. There are numerous other independent app stores for Android devices. Enterprise management Mobile application management (MAM) describes software and services responsible for provisioning and controlling access to internally developed and commercially available mobile apps used in business settings. The strategy is meant to offset the security risk of a Bring Your Own Device (BYOD) work strategy. When an employee brings a personal device into an enterprise setting, mobile application management enables the corporate IT staff to transfer required applications, control access to business data, and remove locally cached business data from the device if it is lost, or when its owner no longer works with the company.
Containerization is an alternate approach to security. Rather than controlling an employee's entire device, containerization apps create isolated pockets separate from personal data. Company control of the device only extends to that separate container. App wrapping vs. native app management Especially when employees bring their own devices (BYOD), mobile apps can be a significant security risk for businesses, because they transfer unprotected sensitive data to the Internet without the knowledge and consent of the users. Reports of stolen corporate data show how quickly corporate and personal data can fall into the wrong hands. Data theft is not just the loss of confidential information, but makes companies vulnerable to attack and blackmail. Professional mobile application management helps companies protect their data. One option for securing corporate data is app wrapping. But there are also some disadvantages, like copyright infringement or the loss of warranty rights. Functionality, productivity and user experience are particularly limited under app wrapping. The policies of a wrapped app can't be changed. If required, it must be recreated from scratch, adding cost. An app wrapper is a mobile app made wholly from an existing website or platform, with few or no changes made to the underlying application. The "wrapper" is essentially a new management layer that allows developers to set up usage policies appropriate for app use. Examples of these policies include whether or not authentication is required, allowing data to be stored on the device, and enabling/disabling file sharing between users. Because most app wrappers are websites first, they often do not align with iOS or Android developer guidelines. Alternatively, it is possible to offer native apps securely through enterprise mobility management. This enables more flexible IT management, as apps can be easily implemented and policies adjusted at any time.
See also Appbox Pro (2009) App store optimization Enterprise mobile application Mobile commerce Super-app References External links User interface techniques
33594304
https://en.wikipedia.org/wiki/OwnCloud
OwnCloud
ownCloud is a suite of client–server software for creating and using file hosting services. ownCloud is functionally similar to the widely used Dropbox. The primary functional difference between ownCloud and Dropbox is that ownCloud is primarily server software. (The company's ownCloud.online is a hosted service.) The Server Edition of ownCloud is free and open-source, thereby allowing anyone to install and operate it without charge on their own private server. ownCloud supports extensions that allow it to work like Google Drive, with online office suite document editing, calendar and contact synchronization, and more. Its openness avoids enforced quotas on storage space or the number of connected clients; instead of hard limits (for example on storage space or number of users), limits are determined by the physical capabilities of the server. History The development of ownCloud was announced in January 2010, in order to provide a free software replacement to proprietary storage service providers. The company was founded in 2011 and forked the code away from KDE, moving it to GitHub. ownCloud Inc., the company founded by Markus Rex, Holger Dyroff and Frank Karlitschek, has attracted funding from investors, including an injection of 6.3 million US$ in 2014. In April 2016, Karlitschek left ownCloud Inc. and founded a new company and project called Nextcloud in June 2016, resulting in the closure of ownCloud's U.S. operations. Some former ownCloud Inc. developers left ownCloud to form the fork with Karlitschek. In July 2016, ownCloud GmbH, based in Nuremberg, Germany, secured additional financing and expanded its management team. In 2018, ownCloud launched its own SaaS offer for small businesses and NGOs. The service aimed to provide a secure and GDPR-compatible solution for organizations without their own IT department.
In March 2019, ownCloud launched the BayernBox, an ownCloud-based collaboration solution for the Bavarian municipalities, in cooperation with the Bavarian State Office for Survey and Geoinformation. The deployment involves one ownCloud instance for each of the over 2000 municipalities. Design Desktop clients for ownCloud are available for Windows, macOS, FreeBSD and Linux, and mobile clients for iOS and Android devices. Files and other data (such as calendars, contacts or bookmarks) can also be accessed, managed, and uploaded using a web browser. Updates are pushed to all computers and mobile devices connected to a user's account. Encryption of files may be enforced by the server administrator. The ownCloud server is written in the PHP and JavaScript scripting languages. In September 2020, ownCloud announced a switch to Go. The Go-based "ownCloud Infinite Scale" first became available to the public in early 2021, and in late 2021, the beta was announced for the first quarter of 2022. ownCloud is designed to work with several database management systems, including SQLite, MariaDB, MySQL, Oracle Database, and PostgreSQL. Features ownCloud is a software-only product and does not offer off-premises storage. This is in contrast to Dropbox, for example, which offers off-premises storage. The storage capacity for ownCloud has to be provided on user-owned devices. ownCloud files are stored in conventional directory structures and can be accessed via WebDAV if necessary. User files are encrypted both at rest and during transit. ownCloud can synchronise with local clients running Windows, macOS and various Linux distributions. ownCloud users can manage calendars (CalDAV), contacts (CardDAV), scheduled tasks and streaming media (Ampache) from within the platform. ownCloud permits user and group administration (via OpenID or LDAP). Content can be shared with granular read/write permissions between users or groups.
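The WebDAV access mentioned above can be sketched in a few lines of Python. The host name and credentials below are placeholders; the `remote.php/dav/files/<username>/` path is the endpoint ownCloud exposes for a user's files. No request is actually sent here.

```python
# Build (but do not send) a WebDAV PROPFIND request that would list the
# top-level files of a hypothetical ownCloud account.
import base64
import urllib.request

base = "https://cloud.example.com"  # placeholder server
user = "alice"                      # placeholder account

# ownCloud serves each user's files under remote.php/dav/files/<username>/
dav_url = f"{base}/remote.php/dav/files/{user}/"

req = urllib.request.Request(dav_url, method="PROPFIND")
req.add_header("Depth", "1")  # list only the directory's immediate children
token = base64.b64encode(f"{user}:app-password".encode()).decode()
req.add_header("Authorization", f"Basic {token}")

# urllib.request.urlopen(req) would return a 207 Multi-Status XML document
# enumerating files and folders; any standard WebDAV client speaks the same
# protocol, which is why ownCloud shares can be mounted by ordinary file
# managers.
print(req.get_method(), req.full_url)
```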
Alternatively, ownCloud users can create public URLs for sharing files. Furthermore, users can interact with the browser-based ODF-format word processor, bookmarking service, URL-shortening suite, gallery, RSS feed reader and document viewer tools from within ownCloud. ownCloud can be augmented with "one-click" applications and connections to Dropbox, Google Drive and Amazon S3. Enterprise features Enterprise customers have access to apps with additional functionality. They are mainly useful for large organizations with more than 500 users. An Enterprise subscription includes support services. Commercial features include end-to-end encryption, ransomware and antivirus protection, branding, document classification, and single sign-on via Shibboleth/SAML. Distribution ownCloud server and clients may be downloaded from the website, from mobile app stores such as Google Play and Apple iTunes, and from repositories of Linux distributions. There exist projects to use ownCloud on a Raspberry Pi to create a small-scale cloud storage system. ownCloud.online is a SaaS offering that provides a secure and GDPR-compatible solution for small businesses, NGOs and others without their own IT department. See also Comparison of file hosting services Comparison of file synchronization software Comparison of online backup services References External links Forum for open source community and project Cloud storage Free software for cloud computing Free software programmed in JavaScript Free software programmed in PHP Software using the GNU AGPL license Internet software for Linux
33619051
https://en.wikipedia.org/wiki/Online%20console%20gaming
Online console gaming
Online console gaming involves connecting a console to a network over the Internet for services. Through this connection, it provides users the ability to play games with other users online, in addition to other online services. The three most common networks now are Microsoft's Xbox Live, Sony's PlayStation Network, and Nintendo's Nintendo Switch Online and Nintendo Network. These networks feature cross-platform capabilities, which allow users to use a single account. However, the services provided are still limited to the console connected (e.g. an Xbox One cannot download an Xbox 360 game, unless the game is part of the Xbox 360 to Xbox One backwards compatibility program). Additional services provided by these networks include the capability of buying additional games, online chatting, downloadable content, and game demos. Early attempts The earliest experiments relating to online connectivity on game consoles were done as far back as the early 1980s. For some consoles, dial-up connectivity was made available through the use of special cartridges, along with an adapter. The GameLine for the Atari 2600 and the PlayCable for the Intellivision are two notable examples of this. Services like these did not have multiplayer online gaming capability, but did allow users to download games from a central server and play them, usually requiring a fee for continued access. However, neither the GameLine nor the PlayCable attained mainstream popularity, and both services were shut down during the 1983 video game crash. During the 1990s a number of online gaming networks were introduced for home consoles, but due to a multitude of problems they failed to make a significant impact on the console gaming industry. For a number of years such networks were limited to the Japanese market.
In a November 1996 interview, Shigeru Miyamoto remarked that online multiplayer gaming had not achieved mainstream success, and would not for a long while yet, because the technology of the time could not provide the quick-and-easy startup that general consumers would want from a "plug and play" console. The first online initiative by Nintendo was the Family Computer Network System for the Famicom, released only in Japan in 1988. This device allowed users to access things such as game cheats, stock trades, weather reports, and some downloadable content for their games. It failed to catch on. The Sega Net Work System (Sega Meganet) was a network service in Japan for people using the Sega Mega Drive. Debuting in 1990, this service worked with the Game Toshokan (literally meaning "Game Library") cartridge to download games to the console (meaning that the game would have to be re-downloaded each time). Players attached a Mega Modem (a modem with a speed of 1,600 to 2,400 bit/s) to the "EXT" DE-9 port on the back of the Mega Drive, and used it to dial up other players to play games. There was a monthly fee of ¥800. The service was also shown in North America under the name "Tele-Genesis" at the Winter Consumer Electronics Show (Winter CES) in January 1990, but it was never released for the region. Sega then brought a similar online service to North America, the Sega Channel, which debuted in December 1994. Sega Channel provided users the opportunity to download new games straight to their consoles with the purchase of a special cartridge sold through General Instrument. The service cost $15 (USD) per month and at one point had over 250,000 American subscribers, while also building a following overseas, but Sega decided to halt the project and instead provide an online portal in their new console, the Sega Saturn. AT&T unveiled the Edge-16, an online gaming peripheral which featured simultaneous voice and data transmission, at the 1993 Consumer Electronics Show.
However, AT&T cancelled it in 1994, having decided that its $150 (USD) price tag and lack of a match-up service (meaning players would have to find someone to play with on the network themselves) would prevent it from achieving any popularity. In 1994 an American company, Catapult Entertainment, developed the XBAND, a third-party peripheral which provided customers the ability to connect with other users and play games through network connections. The peripheral cost $19.99 (USD) and required a monthly fee of $4.95 (USD) for 50 sessions/month or $9.95 (USD) for unlimited use. The XBAND supported the Super NES and Sega Genesis consoles and built a mushrooming installed base (the number of users quadrupled over the second half of 1995), but once the Super NES and Sega Genesis's popularity faded the peripheral was discontinued. The Satellaview was launched in mid-1995 for the Super Famicom in Japan. The service provided downloadable versions of hit games free to the user, but required the user to download the games only at certain times through a satellite broadcast, in a fashion similar to recording a TV show. NET Link for the Sega Saturn provided users the ability to surf the web, check email, and play multiplayer games online. Released in 1996, the modem peripheral cost $199 (USD) and came with a web browser program and a free month of access. Despite the device's low price, strong functionality, and prominent marketing, less than 1% of Saturn owners purchased the NetLink in 1996, an outcome cited as evidence that the idea of online console gaming had not yet achieved widespread interest. Phil Harrison of Sony Computer Entertainment also commented on the issue of online console gaming during a 1997 round table discussion. The first home console with a built-in internet connection, the Apple Pippin, was launched in 1996.
However, its $599 price tag kept it from effectively competing with other internet gaming options (by comparison, the Sega Saturn and its separately sold NetLink device combined cost less than $400). The Philips CD-i and its CD-Online service (released in 1996) also rang up at less than the Pippin, but suffered from mediocre functionality. In 1999 Nintendo decided to take another shot at online gaming with the Nintendo 64DD. The new peripheral was delayed often and released only in Japan; after purchasing the peripheral for 30,000 yen, users could connect with each other, share in-game art and designs, and even play games online. The 64DD failed to make an impact, as it was released shortly before Nintendo announced the release of its new console, the GameCube, and only nine games were released supporting the peripheral. Dreamcast SegaNet was a short-lived internet service operated by Sega, geared for dial-up based online gaming on their Dreamcast game console. A replacement for Sega's original, PC-only online gaming service, Heat.net, it was initially quite popular when launched on September 10, 2000. Unlike a standard ISP, game servers would be connected directly into SegaNet's internal network, providing very low connection latency between the consoles and servers along with standard Internet access. ChuChu Rocket! was the first online multiplayer game for the Dreamcast. Modern networks Xbox network (formerly Xbox Live) Xbox network, originally branded as Xbox Live, is an online multiplayer gaming and digital media delivery service created and operated by Microsoft Corporation. It was first made available to the Xbox system in 2002. An updated version of the service became available for the Xbox 360 console at that system's launch in 2005. The service was extended in 2007 on the Windows platform, named Games for Windows – Live, which makes most aspects of the system available on Windows computers.
Microsoft has announced plans to extend Live to other platforms such as handhelds and mobile phones as part of the Live Anywhere initiative. With Microsoft's Windows Phone 7, full Xbox Live functionality was integrated into new Windows Phones that launched in late 2010. The Xbox Live service is available as both a free and subscription-based service, known as Xbox Live Free and Xbox Live Gold respectively, with several features such as online gaming restricted to the Gold service. Prior to October 2010, the free service was known as Xbox Live Silver. It was announced on June 10, 2011 that the service would be fully integrated into Microsoft's Windows 8. Xbox Live continued to be offered as part of Microsoft's future consoles, the Xbox One and the Xbox Series X and Series S, as well as with integration with Windows 10 and Windows 11. The Xbox Live Gold subscription was also bundled as part of the Xbox Game Pass Ultimate service, providing both the Xbox Live services and the library of games offered from Xbox Game Pass. Microsoft rebranded the service as simply Xbox network in March 2021. By January 2021, Microsoft reported that there were more than 100 million Xbox network subscribers (including those through the Xbox Game Pass subscription). PlayStation Network PlayStation Network, often abbreviated as PSN, is an online multiplayer gaming and digital media delivery service provided/run by Sony Computer Entertainment for use with the PlayStation 3, PlayStation 4, PlayStation 5, PlayStation Portable and PlayStation Vita video game consoles. The PlayStation Network is free to use, giving the user an identity for their online presence and to earn trophies in games. PlayStation Plus A paid subscription atop the PlayStation Network, PlayStation Plus, provides additional features such as the ability to play online games (otherwise not offered as free-to-play) and cloud saving for supported games. 
In addition, Plus subscribers gain access to free games and special deals on the PlayStation Store on a monthly basis. By October 2021, Sony reported that there were over 47.2 million PlayStation Plus subscribers. Nintendo Network The Nintendo Network is Nintendo's second online service after Nintendo Wi-Fi Connection to provide online play for Nintendo 3DS and Wii U compatible games. It was announced on January 26, 2012, at an investor's conference. Nintendo's president Satoru Iwata said, "Unlike Nintendo Wi-Fi Connection, which has been focused upon specific functionalities and concepts, we are aiming to establish a platform where various services available through the network for our consumers shall be connected via Nintendo Network service so that the company can make comprehensive proposals to consumers." Nintendo's plans include personal accounts for Wii U, digitally distributed packaged software, and paid downloadable content. Wii (Online) The Wii console is able to connect to the Internet through its built-in 802.11b/g Wi-Fi or through a USB-to-Ethernet adapter, with both methods allowing players to access the established Nintendo Wi-Fi Connection service. Wireless encryption by WEP, WPA (TKIP/RC4) and WPA2 (CCMP/AES) is supported. AOSS support was discreetly added in System Menu version 3.0. Just as for the Nintendo DS, Nintendo does not charge fees for playing via the service, and the 12-digit Friend Code system controls how players connect to one another. Each Wii also has its own unique 16-digit Wii Code for use with the Wii's non-game features. This system also implements console-based software including the Wii Message Board. One can also connect to the internet with third-party devices. Nintendo Switch Online Like the Wii online service, the Nintendo Switch Online service allows users of the Nintendo Switch to play various multiplayer games online (outside of those offered as free-to-play titles).
The service also offers access to a selection of emulated Nintendo Entertainment System and Super Nintendo Entertainment System games for free, as well as other free games such as Tetris 99. An Expansion Pack tier, added in October 2021, expanded this emulation service to include select Nintendo 64 and Sega Genesis games. Subscribers also had access to unique offers for certain products, such as special Joy-Con controllers for the emulated games. Nintendo reported by September 2021 that Nintendo Switch Online had reached 32 million subscribers.
https://en.wikipedia.org/wiki/List%20of%20software%20forks
List of software forks
This is a list of notable software forks. Undated The many varieties of proprietary Unix in the 1980s and 1990s — almost all derived from AT&T Unix under licence and all called "Unix", but increasingly mutually incompatible. See UNIX wars. Most Linux distributions are descended from other distributions, most being traceable back to Debian, Red Hat or Softlanding Linux System (see image right). Since most of the content of a distribution is free and open source software, ideas and software interchange freely as is useful to the individual distribution. Merges (e.g., United Linux or Mandriva) are rare. Pretty Good Privacy, forked outside of the United States to free it from restrictive US laws on the exportation of cryptographic software. The game NetHack has spawned a number of variants using the original code, notably Slash'EM (1997), and was itself a fork (1987) of Hack. Openswan and strongSwan, from the discontinued FreeS/WAN. 1981 Symbolics Lisp Machine operating system, later called Symbolics Genera. Forked from the MIT Lisp Machine operating system, which was licensed by MIT to Symbolics in 1980. This fork later motivated Richard Stallman to start the GNU Project. 1985 POSTGRES (later PostgreSQL), after Ingres branched off as a proprietary project. 1990 Microsoft SQL Server, from Sybase SQL Server, via a technology-sharing agreement concerning the Tabular Data Stream protocol. SWLPC, from LPMud. 1991 XEmacs, from GNU Emacs, originally for Lucid Corporation internal needs. 1993 FreeBSD, started as a patchkit to 386BSD. NetBSD, started as a patchkit to 386BSD. 1995 Apache HTTP Server, from the moribund NCSA HTTPd. OpenBSD, a fork of NetBSD 1.0 by Theo de Raadt due to internal developer personality clashes. 1997 EGCS, a fork of GCC, later named the official version. 1998 Grace, from Xmgr, after that project ceased development. 1999 FilmGIMP, later called CinePaint, from GIMP, to handle 48-bit colour. OSSH, from SSH, when that project was proprietised.
OpenSSH, from OSSH. Sodipodi, from Gill. Steel Bank Common Lisp, from CMU Common Lisp. 2000 TrueCrypt, from E4M when the latter was discontinued. Tux Racer went proprietary in 2000, leading to several forks including OpenRacer, PlanetPenguin Racer and Extreme Tux Racer. OpenOffice.org, from StarOffice after Sun Microsystems made the source code publicly available. OpenOffice.org was eventually forked into LibreOffice. 2001 ELinks began as an experimental fork of Links. Fluxbox, from Blackbox. GNU Radio, from pSpectra. Xvid, a fork of OpenDivX. WebKit, started within Apple by Don Melton on 25 June 2001 as a fork of KHTML. 2002 GForge, from SourceForge. GraphicsMagick, from ImageMagick due to concerns over the openness of development. The Matroska container format, from the Multimedia Container Format, due to differences in direction. MirOS BSD, from OpenBSD. Syllable Desktop, from the stagnant AtheOS. 2003 aMule, from xMule, which itself forked from lMule shortly before, over developer disagreements. b2evolution, from b2/CafeLog. DragonFly BSD, from FreeBSD 4.8 by long-time FreeBSD developer Matt Dillon, due to disagreement over FreeBSD 5's technical direction. Epiphany, from Galeon, after developer disagreements about Galeon's growing complexity. Inkscape (vector-graphics program), from Sodipodi. NeoOffice, a fork of OpenOffice.org, with an incompatible license (GPL rather than LGPL), due to disagreements about licensing and about the best method to port OpenOffice.org to Mac OS X. The Safari renderer that became WebKit, from KHTML. sK1, from Skencil when the latter moved from Tk to GTK+. WordPress, from b2/CafeLog. Zen Cart, from osCommerce. 2004 Baz, the previous version of Bazaar, from GNU arch. FrostWire, from LimeWire after LimeWire's developers considered adding RIAA-sponsored blocking code. MediaPortal, from XBMC. WineX (later Cedega), a proprietary fork of Wine.
X.Org, from XFree86, in order to adopt a more open development model and due to concerns over the latter's change to a license many distributors found unacceptable. 2005 Audacious, from Beep Media Player to continue work on the old version of that project. Joomla, from Mambo due to concerns over project structure. Claws Mail, from Sylpheed, due to perceived slowness in accepting enhancements. 2006 Adempiere, a community-maintained fork of Compiere 2.5.3b, due to disagreement with the commercial and technical direction of Compiere Inc. Cdrkit, from Cdrtools due to perceived licensing issues. LedgerSMB, from SQL-Ledger, due to disagreements over handling of security issues. MindTouch, a fork of MediaWiki. Mulgara, from Kowari after trademark threats from Northrop Grumman. MPC-HC, a fork of Media Player Classic. 2007 Batavi, from osCommerce, due to that project's slow release schedule. Go-oo, from OpenOffice.org, due to that project's contributor licensing agreement. 2008 Boxee, a proprietary fork of XBMC. Dreamwidth, from LiveJournal by ex-LiveJournal developers. Drizzle, intended as a slimmed-down and faster fork of MySQL. MiaCMS, from Mambo. Plex, a proprietary fork of XBMC. 2009 dbndns, from djbdns after the latter was released into the public domain and abandoned. Freeplane, from FreeMind. FusionForge, from GForge when GForge shifted focus to its proprietary version. Icinga, from Nagios, due to perceived slow development and problems dealing with Nagios LLC. KompoZer, from Nvu after that project went dormant. MariaDB, from MySQL, over concern as to Sun Microsystems' plans for the latter. Pale Moon, from Firefox. Qt Extended Improved, from Qtopia after the latter was discontinued by Qt Software. Voddler, a proprietary fork of XBMC and FFmpeg. 2010 Peppermint Linux OS, from Lubuntu, due to a perceived need for a cloud-centric derivative of the Ubuntu OS. Chamilo, from Dokeos, due to community management concerns with that project.
LibreOffice, from OpenOffice.org (and merging Go-oo), due to Oracle Corporation's perceived neglect of the software. OpenIndiana, from OpenSolaris after Oracle Corporation discontinued the latter. Illumos, from the OpenSolaris kernel OS/Net, after Oracle closed down public access to the source code. webtrees, from PhpGedView, due to SourceForge's policy on exporting encryption. Xonotic, from Nexuiz, after that project was taken proprietary. Mageia, from Mandriva Linux, due to financial uncertainty and the layoff by Edge-IT, a Mandriva subsidiary employing many of the corporate staff working on the Mandriva distribution. OpenAM, from OpenSSO after Oracle Corporation discontinued the latter. Calligra, from KOffice after developer disagreements. 2011 Fire OS, a fork of Android for the Kindle Fire. Jenkins, from Hudson, due to Oracle Corporation's perceived neglect of the project's infrastructure and disagreements over use of the name on non-Oracle-maintained infrastructure. Univa Grid Engine, from Oracle Grid Engine, after Oracle Corporation stopped releasing project source. Mer, started as a fork of MeeGo. Libav, a fork of FFmpeg. 2012 MPC-BE, a fork of Media Player Classic. 2013 Blink, a fork of WebKit. SuiteCRM, from the last open source version of SugarCRM. 2014 LibreSSL, from OpenSSL. Nokia X software platform, a fork of the Android Open Source Project developed by Nokia exclusively for its X family of Android smartphones. io.js, from Node.js; in 2015 it was merged back into Node.js. 2015 EdgeHTML, from Trident. Open Live Writer, from Windows Live Writer 2012. 2016 Collabora Online, from LibreOffice; a web-based, enterprise-ready edition of LibreOffice. Goanna, from Gecko. Nextcloud, from ownCloud. 2017 Basilisk, from Firefox. Bitcoin Cash, from Bitcoin Core, supported by the forked implementations Bitcoin ABC, Bitcoin Unlimited and Bitcoin XT. Unified XUL Platform, from XUL.
https://en.wikipedia.org/wiki/Stop%20Online%20Piracy%20Act
Stop Online Piracy Act
The Stop Online Piracy Act (SOPA) was a controversial United States bill introduced on October 26, 2011, by U.S. Representative Lamar S. Smith (R-TX) to expand the ability of U.S. law enforcement to combat online copyright infringement and online trafficking in counterfeit goods. Provisions included court orders barring advertising networks and payment facilities from conducting business with infringing websites, barring web search engines from linking to the websites, and requiring Internet service providers to block access to the websites. The proposed law would have expanded existing criminal laws to include unauthorized streaming of copyrighted content, imposing a maximum penalty of five years in prison. Proponents of the legislation said it would protect the intellectual-property market and corresponding industry, jobs and revenue, and was necessary to bolster enforcement of copyright laws, especially against foreign-owned and operated websites. Claiming flaws in existing laws that do not cover foreign-owned and operated websites, and citing examples of active promotion of rogue websites by U.S. search engines, proponents asserted that stronger enforcement tools were needed. The bill received strong, bipartisan support in the House of Representatives and the Senate. It also received support from the Fraternal Order of Police, the National Governors Association, the National Conference of State Legislatures, the U.S. Conference of Mayors, the National Association of Attorneys General, the Chamber of Commerce, the Better Business Bureau, the AFL–CIO and 22 trade unions, the National Consumers League, and over a hundred associations representing industries throughout the economy which claimed that they were being harmed by online piracy.
Opponents argued that the proposed legislation threatened free speech and innovation, and enabled law enforcement to block access to entire Internet domains due to infringing content posted on a single blog or webpage. They also stated that SOPA would bypass the "safe harbor" protections from liability presently afforded to websites by the Digital Millennium Copyright Act. Some library associations also claimed that the legislation's emphasis on stronger copyright enforcement would expose libraries to prosecution. Other opponents claimed that requiring search engines to delete domain names violated the First Amendment and could begin a worldwide arms race of unprecedented Internet censorship. The move to protest against SOPA and PIPA was initiated when Fight for the Future organized thousands of the most popular websites in the world, including Reddit, Craigslist, and the English Wikipedia, to consider temporarily blacking out their content and redirecting users to a message opposing the proposed legislation. On January 18, 2012, the English Wikipedia, Google, and an estimated 7,000 other smaller websites ceased standard operation as part of a coordinated service blackout as an attempt to spread awareness and objection to the bill. In many cases, websites replaced the entirety of their main content with facts regarding SOPA and the entity's case against its passing. Boycotts of companies and organizations that support the legislation were organized, along with an opposition rally held in New York City. Google announced the company had collected over 4.5 million signatures opposing the bill in their January petition.
In response to the protest actions, the Recording Industry Association of America (RIAA) stated, "It's a dangerous and troubling development when the platforms that serve as gateways to information intentionally skew the facts to incite their users and arm them with misinformation", and "it's very difficult to counter the misinformation when the disseminators also own the platform." Access to websites of several pro-SOPA organizations and companies such as RIAA, CBS.com, and others was impeded or blocked with denial-of-service attacks which started on January 19, 2012. Self-proclaimed members of the "hacktivist" group Anonymous claimed responsibility and stated the attacks were a protest of both SOPA and the United States Department of Justice's shutdown of Megaupload on that same day. Some opponents of the bill support the Online Protection and Enforcement of Digital Trade Act (OPEN) as an alternative. On January 20, 2012, House Judiciary Committee Chairman Smith postponed plans to draft the bill: "The committee remains committed to finding a solution to the problem of online piracy that protects American intellectual property and innovation ... The House Judiciary Committee will postpone consideration of the legislation until there is wider agreement on a solution." The bill was effectively dead at that point. History Bill 3261 or , was a proposed law that was introduced in the United States House of Representatives on October 26, 2011, by House Judiciary Committee Chair Representative Lamar S. Smith (R-TX) and a bipartisan group of 12 initial co-sponsors. Presented to the House Judiciary Committee, it builds on the similar PRO-IP Act of 2008 and the corresponding Senate bill, the PROTECT IP Act (PIPA). The originally proposed bill would allow the United States Department of Justice, as well as copyright holders, to seek court orders against websites outside U.S. jurisdiction accused of enabling or facilitating copyright infringement. 
A court order requested by the DOJ could include barring online advertising networks and payment facilitators from conducting business with websites found to infringe on federal criminal intellectual-property laws, barring search engines from linking to such sites, and requiring Internet service providers to block access to such sites. The bill establishes a two-step process for intellectual property-rights holders to seek relief if they have been harmed by a site dedicated to infringement. The rights holder must first notify, in writing, related payment facilitators and ad networks of the identity of the website, who, in turn, must then forward that notification and suspend services to that identified website, unless that site provides a counter notification explaining how it is not in violation. The rights holder can then sue for limited injunctive relief against the site operator, if such a counter notification is provided, or if the payment or advertising services fail to suspend service in the absence of a counter notification. The second section covers penalties for streaming video and for selling counterfeit drugs, military materials, or consumer goods. The bill would increase penalties and expand copyright offenses to include unauthorized streaming of copyrighted content and other intellectual property offenses. The bill would criminalize unauthorized streaming of copyrighted content if site operators knowingly misrepresent the site's activity, with a maximum penalty of five years in prison for ten such infringements within six months. The copyrighted content can be removed, and infringements can lead to the site being shut down. In July 2013, the Department of Commerce's Internet Policy Task Force issued a report endorsing "[a]dopting the same range of penalties for criminal streaming of copyrighted works to the public as now exists for criminal reproduction and distribution."
The bill provides immunity from liability to the ad and payment networks that comply with this Act or that take voluntary action to sever ties to such sites. Any copyright holder who knowingly misrepresents that a website is involved in copyright infringement would be liable for damages. Supporters include the Motion Picture Association of America, pharmaceuticals makers, media businesses, and the United States Chamber of Commerce. They state it protects the intellectual-property market and corresponding industry, jobs and revenue, and is necessary to bolster enforcement of copyright laws, especially against foreign websites. They cite examples such as Google's $500 million settlement with the Department of Justice for its role in a scheme to target U.S. consumers with ads to illegally import prescription drugs from Canadian pharmacies. Opponents stated that it violated the First Amendment, was Internet censorship, would cripple the Internet, and would threaten whistle-blowing and other free speech actions. In October 2011, co-sponsor Representative Bob Goodlatte (R-VA), chairman of the House Judiciary Committee's Intellectual Property sub-panel, told The Hill that SOPA is a rewrite of the Senate's bill that addresses some tech-industry concerns, noting that under the House version of the legislation copyright holders won't be able to directly sue intermediaries such as search engines to block infringing websites and would instead need a court's approval before taking action against third parties. On December 12, 2011, a revised version of the bill was tabled. Titled the "Manager's Amendment", it contained a number of changes in response to criticism of the original. As part of the revisions, the definition of sites that might be subject to enforcement was narrowed: the amendment limited such actions to sites that are designed or operated with the intent to promote copyright infringement, and it now only applies to non-US sites.
Goals Protecting intellectual property of content creators According to Rep. Goodlatte, "Intellectual property is one of America's chief job creators and competitive advantages in the global marketplace, yet American inventors, authors, and entrepreneurs have been forced to stand by and watch as their works are stolen by foreign infringers beyond the reach of current U.S. laws. This legislation will update the laws to ensure that the economic incentives our Framers enshrined in the Constitution over 220 years ago—to encourage new writings, research, products, and services— remain effective in the 21st century's global marketplace, which will create more American jobs." Rights holders see intermediaries—the companies who host, link to, and provide e-commerce around the content—as the only accessible defendants. Sponsor Rep. John Conyers (D-MI) said, "Millions of American jobs hang in the balance, and our efforts to protect America's intellectual property are critical to our economy's long-term success." Smith added, "The Stop Online Piracy Act helps stop the flow of revenue to rogue websites and ensures that the profits from American innovations go to American innovators." The Motion Picture Association of America (MPAA) representative who testified before the committee said that the motion picture and film industry supported two million jobs and 95,000 small businesses. Protection against counterfeit drugs Pfizer spokesman John Clark testified that patients could not always detect cleverly-forged websites selling drugs that were either misbranded or simply counterfeit. RxRights, a consumer-advocacy group, issued a statement saying that Clark failed "to acknowledge that there are Canadian and other international pharmacies that do disclose where they are located, require a valid doctor's prescription and sell safe, brand-name medications produced by the same leading manufacturers as prescription medications sold in the U.S." 
They had earlier said that SOPA "fails to distinguish between counterfeit and genuine pharmacies" and would prevent American patients from ordering their medications from Canadian pharmacies online. Bill sponsor Smith accused Google of obstructing the bill, citing its $500 million settlement with the DOJ on charges that it allowed ads from Canadian pharmacies, leading to illegal imports of prescription drugs. Shipment of prescription drugs from foreign pharmacies to customers in the US typically violates the Federal Food, Drug and Cosmetic Act and the Controlled Substances Act. Impact on online freedom of speech According to the Texas Insider, White House press secretary Jay Carney said that President Obama "will not support legislation that reduces freedom of expression". On TIME Techland blog, Jerry Brito wrote, "Imagine if the U.K. created a blacklist of American newspapers that its courts found violated celebrities' privacy? Or what if France blocked American sites it believed contained hate speech?" Similarly, the Center for Democracy and Technology warned, "If SOPA and PIPA are enacted, the US government must be prepared for other governments to follow suit, in service to whatever social policies they believe are important—whether restricting hate speech, insults to public officials, or political dissent." Laurence H. Tribe, a Harvard University professor of constitutional law, released an open letter on the web stating that SOPA would "undermine the openness and free exchange of information at the heart of the Internet. And it would violate the First Amendment". The AFL–CIO's Paul Almeida, arguing in favor of SOPA, has stated that free speech was not a relevant consideration, because "Freedom of speech is not the same as lawlessness on the Internet. There is no inconsistency between protecting an open Internet and safeguarding intellectual property. Protecting intellectual property is not the same as censorship; the First Amendment does not protect stealing goods off trucks."
Autocratic countries According to the Electronic Frontier Foundation, proxy servers, such as those used during the Arab Spring, can also be used to thwart copyright enforcement and therefore may be regulated by the act. John Palfrey, co-director of the Berkman Center for Internet & Society, expressed disagreement with the use of his research findings to support SOPA. He wrote that "SOPA would make many DNS circumvention tools illegal," which could put "dissident communities" in autocratic countries "at much greater risk than they already are." He added, "The single biggest funder of circumvention tools has been and remains the U.S. government, precisely because of the role the tools play in online activism. It would be highly counter-productive for the U.S. government to both fund and outlaw the same set of tools." Marvin Ammori has stated the bill might make The Tor Project illegal. Initially sponsored by the U.S. Naval Research Laboratory, the Tor Project creates encryption technology used by dissidents in repressive regimes (that consequently outlaw it). Ammori says that the U.S. Supreme Court case of Lamont v. Postmaster General 381 U.S. 301 (1965) makes it clear that Americans have the First Amendment right to read and listen to such foreign dissident free speech, even if those foreigners themselves lack an equivalent free speech right (for example, under their constitution or through Optional Protocols under the United Nations International Covenant on Civil and Political Rights). Impact on websites Websites that host user content Opponents have warned that SOPA could have a negative impact on online communities. Journalist Rebecca MacKinnon argued in an op-ed that making companies liable for users' actions could have a chilling effect on user-generated sites such as YouTube. "The intention is not the same as China's Great Firewall, a nationwide system of Web censorship, but the practical effect could be similar," Mackinnon stated. 
The Electronic Frontier Foundation (EFF) warned that websites Etsy, Flickr and Vimeo all seemed likely to shut down if the bill became law. Policy analysts for New America Foundation say this legislation would enable law enforcement to take down an entire domain due to something posted on a single blog, arguing, "an entire largely innocent online community could be punished for the actions of a tiny minority". Additional concerns include the possible impact on common Internet functions such as links from one site to another or accessing data from the cloud. EFF claimed the bill would ban linking to sites deemed offending, even in search results and on services such as Twitter. Christian Dawson, Chief Operating Officer (COO) of Virginia-based hosting company ServInt, predicted that the legislation would lead to many cloud computing and Web hosting services moving out of the US to avoid lawsuits. Even without SOPA, the U.S. Immigration and Customs Enforcement agency (ICE) has already launched extradition proceedings against Richard O'Dwyer in the UK. O'Dwyer hosted the TVShack.net website, which had links to material elsewhere and did not host any files. ICE has stated that it intends to pursue websites even if their only connection to the USA is a .com or .net web domain. The Electronic Frontier Foundation stated that the requirement that any site must self-police user-generated content would impose significant liability costs and explains "why venture capitalists have said en masse they won't invest in online startups if PIPA and SOPA pass". Proponents of the bill argue that filtering is already common. Michael O'Leary of the MPAA testified on November 16 that the act's effect on business would be minimal, noting that at least 16 countries already block websites and that the Internet still functions in those countries. MPAA Chairman Chris Dodd noted that Google figured out how to block sites when China requested it.
Some ISPs in Denmark, Finland, Ireland and Italy blocked The Pirate Bay after courts ruled in favor of music and film industry litigation, and a coalition of film and record companies has threatened to sue British Telecom if it does not follow suit. Maria Pallante of the United States Copyright Office said that Congress had updated the Copyright Act before and should again, or "the U.S. copyright system will ultimately fail." Asked for clarification, she said that the US currently lacks jurisdiction over websites in other countries. Weakening of "safe harbor" protections The 1998 Digital Millennium Copyright Act (DMCA) includes the Online Copyright Infringement Liability Limitation Act, which provides a "safe harbor" for websites that host content. Under that provision, copyright owners who believe that a site is hosting infringing content are required to request that the site remove the infringing material within a certain amount of time. SOPA would bypass this "safe harbor" provision by placing the responsibility for detecting and policing infringement onto the site itself, and allowing judges to block access to websites "dedicated to theft of U.S. property". According to critics of the bill such as the Center for Democracy and Technology and the Electronic Frontier Foundation, the bill's wording is vague enough that a single complaint about a site could be enough to block it, with the burden of proof resting on the site. A provision in the bill states that any site would be blocked that "is taking, or has taken deliberate actions to avoid confirming a high probability of the use of the U.S.-directed site to carry out acts that constitute a violation." Critics have read this to mean that a site must actively monitor its content and identify violations to avoid blocking, rather than relying on others to notify it of such violations.
Law professor Jason Mazzone wrote, "Damages are also not available to the site owner unless a claimant 'knowingly materially' misrepresented that the law covers the targeted site, a difficult legal test to meet. The owner of the site can issue a counter-notice to restore payment processing and advertising, but services need not comply with the counter-notice." Goodlatte stated, "We're open to working with them on language to narrow [the bill's provisions], but I think it is unrealistic to think we're going to continue to rely on the DMCA notice-and-takedown provision. Anybody who is involved in providing services on the Internet would be expected to do some things. But we are very open to tweaking the language to ensure we don't impose extraordinary burdens on legitimate companies as long as they aren't the primary purveyors [of pirated content]." O'Leary submitted written testimony in favor of the bill that expressed guarded support of current DMCA provisions. "Where these sites are legitimate and make good faith efforts to respond to our requests, this model works with varying degrees of effectiveness," O'Leary wrote. "It does not, however, always work quickly, and it is not perfect, but it works." Web-related businesses An analysis in the information technology magazine eWeek stated, "The language of SOPA is so broad, the rules so unconnected to the reality of Internet technology and the penalties so disconnected from the alleged crimes that this bill could effectively kill e-commerce or even normal Internet use. The bill also has grave implications for existing U.S., foreign and international laws and is sure to spend decades in court challenges." Art Brodsky of advocacy group Public Knowledge similarly stated, "The definitions written in the bill are so broad that any US consumer who uses a website overseas immediately gives the US jurisdiction the power to take action against it potentially."
On October 28, 2011, the EFF called the bill a "massive piece of job-killing Internet regulation," and said, "This bill cannot be fixed; it must be killed." Gary Shapiro, CEO of the Consumer Electronics Association, spoke out strongly against the bill, stating, "The bill attempts a radical restructuring of the laws governing the Internet", and that "It would undo the legal safe harbors that have allowed a world-leading Internet industry to flourish over the last decade. It would expose legitimate American businesses and innovators to broad and open-ended liability. The result will be more lawsuits, decreased venture capital investment, and fewer new jobs." Lukas Biewald, founder of CrowdFlower, stated, "It'll have a stifling effect on venture capital... No one would invest because of the legal liability." Booz & Company on November 16 published a Google-funded study finding that almost all of the 200 venture capitalists and angel investors interviewed would stop funding digital media intermediaries if the bill became law. More than 80 percent said they would rather invest in a risky, weak economy with the current laws than a strong economy with the proposed law in effect. If legal ambiguities were removed and good-faith provisions were in place, investing would increase by nearly 115 percent. As reported by David Carr of The New York Times in an article critical of SOPA and PIPA, Google, Facebook, Twitter, and other companies sent a joint letter to Congress, stating "We support the bills' stated goals – providing additional enforcement tools to combat foreign 'rogue' Web sites that are dedicated to copyright infringement or counterfeiting. However, the bills as drafted would expose law-abiding U.S. Internet and technology companies to new uncertain liabilities, private rights of action and technology mandates that would require monitoring of Web sites.
We are concerned that these measures pose a serious risk to our industry's continued track record of innovation and job creation, as well as to our nation's cybersecurity." Smith responded, saying the article "unfairly criticizes the Stop Online Piracy Act", and "does not point to any language in the bill to back up the claims. SOPA targets only foreign Web sites that are primarily dedicated to illegal and infringing activity. Domestic Web sites, like blogs, are not covered by this legislation." Smith also said that Carr incorrectly framed the debate as between the entertainment industry and high-tech companies, noting support by more than "120 groups and associations across diverse industries, including the United States Chamber of Commerce". Users uploading illegal content Lateef Mtima, director of the Institute for Intellectual Property and Social Justice at Howard University School of Law, expressed concern that users who upload copyrighted content to sites could potentially be held criminally liable themselves, saying, "Perhaps the most dangerous aspect of the bill is that the conduct it would criminalize is so poorly defined. While on its face the bill seems to attempt to distinguish between commercial and non-commercial conduct, purportedly criminalizing the former and permitting the latter, in actuality the bill not only fails to accomplish this but, because of its lack of concrete definitions, it potentially criminalizes conduct that is currently permitted under the law." An aide to Rep. Smith said, "This bill does not make it a felony for a person to post a video on YouTube of their children singing to a copyrighted song. The bill specifically targets websites dedicated to illegal or infringing activity. Sites that host user content—like YouTube, Facebook, and Twitter—have nothing to be concerned about under this legislation."
Internal networks

A paper by the Center for Democracy and Technology claimed that the bill "targets an entire website even if only a small portion hosts or links to some infringing content." According to A. M. Reilly of Industry Leaders Magazine, under SOPA, culpability for distributing copyrighted material is extended to those who aid the initial poster of the material. For companies that use virtual private networks (VPN) to create a network that appears to be internal but is spread across various offices and employees' homes, any of these offsite locations that initiate sharing of copyrighted material could put the entire VPN and hosting company at risk of violation. Answering similar criticism in a CNET editorial, Recording Industry Association of America (RIAA) head Cary Sherman wrote, "Actually, it's quite the opposite. By focusing on specific sites rather than entire domains, action can be targeted against only the illegal subdomain or Internet protocol address rather than taking action against the entire domain."

Impact on web-browsing software

The Electronic Frontier Foundation expressed concern that free and open source software (FOSS) projects found to be aiding online piracy could experience serious problems under SOPA. Of special concern was the web browser Firefox, which has an optional extension, MAFIAAFire Redirector, that redirects users to a new location for domains that were seized by the U.S. government. In May 2011, Mozilla refused a request by the United States Department of Homeland Security to remove MAFIAAFire from its website, questioning whether the software had ever been declared illegal.

Potential effectiveness

Edward J. Black, president and CEO of the Computer & Communications Industry Association, wrote in the Huffington Post that "Ironically, it would do little to stop actual pirate websites, which could simply reappear hours later under a different name, if their numeric web addresses aren't public even sooner.
Anyone who knows or has that web address would still be able to reach the offending website." An editorial in the San Jose Mercury News stated, "Imagine the resources required to parse through the millions of Google and Facebook offerings every day looking for pirates who, if found, can just toss up another site in no time." John Palfrey of the Berkman Center for Internet & Society commented, "DNS filtering is by necessity either overbroad or underbroad; it either blocks too much or too little. Content on the Internet changes its place and nature rapidly, and DNS filtering is ineffective when it comes to keeping up with it."

Technical issues

Deep-packet inspection and privacy

According to Markham Erickson, head of NetCoalition, which opposes SOPA, the section of the bill that would allow judges to order internet service providers to block access to infringing websites to customers located in the United States would also allow the checking of those customers' IP address, a method known as IP address blocking. Erickson has expressed concerns that such an order might require those providers to engage in "deep packet inspection," which involves analyzing all of the content being transmitted to and from the user, raising new privacy concerns. Policy analysts for New America Foundation say this legislation would "instigate a data obfuscation arms race" whereby increasingly invasive practices would be required to monitor users' web traffic, resulting in a "counterproductive cat-and-mouse game of censorship and circumvention [that] would drive savvy scofflaws to darknets while increasing surveillance of less technically proficient Internet users".

Domain Name System

Domain Name System (DNS) servers, sometimes likened to a telephone directory, translate browser requests for domain names into the IP address assigned to that computer or network. The original bill requires these servers to stop referring requests for infringing domains to their assigned IP addresses.
DNS is robust by design against failure and requires that a lack of response is met by inquiries to other DNS servers. Andrew Lee, CEO of ESET North America, objected that since the bill would require internet service providers to filter DNS queries for the sites, this would undermine the integrity of the Domain Name System. According to David Ulevitch, the San Francisco-based head of OpenDNS, the passage of SOPA could cause Americans to switch to DNS providers located in other countries that offer encrypted links, and may cause U.S. providers, such as OpenDNS itself, to move to other countries, such as the Cayman Islands. In November 2011, an anonymous top-level domain, .bit, was launched outside of ICANN control, as a response to the perceived threat from SOPA, although its effectiveness (as well as the effectiveness of other alternative DNS roots) remains unknown. On January 12, 2012, House sponsor Lamar Smith announced that provisions related to DNS redirection would be pulled from the bill.

Internet security

A white paper by several internet security experts, including Steve Crocker and Dan Kaminsky, stated, "From an operational standpoint, a resolution failure from a nameserver subject to a court order and from a hacked nameserver would be indistinguishable. Users running secure applications need to distinguish between policy-based failures and failures caused, for example, by the presence of an attack or a hostile network, or else downgrade attacks would likely be prolific."
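The mechanics behind these objections can be illustrated with a small sketch. The following Python snippet is a toy model only: the domains, IP addresses, blocklist, and "signing key" are all hypothetical, and it is not a real DNS or DNSSEC implementation. It shows the two points the experts raise: a filtered resolver's non-answer is indistinguishable from an outage, so clients simply fall back to other (possibly offshore) servers; and a validating client rejects a redirected answer as tampered and keeps searching for an untampered one.

```python
import hashlib
import hmac

# Toy model of SOPA-style DNS filtering -- hypothetical domains, keys,
# and resolvers; not a real DNS/DNSSEC implementation.

ZONE = {"infringing-site.example": "203.0.113.7",
        "lawful-site.example": "198.51.100.1"}
BLOCKLIST = {"infringing-site.example"}  # hypothetical court-ordered list
KEY = b"zone-signing-key"                # stands in for the zone's DNSSEC key

def sign(domain, ip):
    """Authenticate a record -- loosely analogous to a DNSSEC signature."""
    return hmac.new(KEY, f"{domain}={ip}".encode(), hashlib.sha256).hexdigest()

def filtered_resolver(domain):
    """A provider's resolver under a blocking order: it simply fails to
    answer, which a client cannot tell apart from an outage or attack."""
    if domain in BLOCKLIST:
        return None
    ip = ZONE.get(domain)
    return (ip, sign(domain, ip)) if ip else None

def offshore_resolver(domain):
    """A resolver outside U.S. jurisdiction: answers everything."""
    ip = ZONE.get(domain)
    return (ip, sign(domain, ip)) if ip else None

def redirecting_resolver(domain):
    """A resolver that redirects blocked names to a notice page: it can
    rewrite the IP, but it cannot forge the zone's signature."""
    if domain in BLOCKLIST:
        return ("192.0.2.1", sign(domain, ZONE[domain]))  # stale signature
    return offshore_resolver(domain)

def validating_lookup(domain, resolvers):
    """A validating client: skip unanswered or tampered responses and
    keep trying other servers until an untampered answer is found."""
    for resolve in resolvers:
        answer = resolve(domain)
        if answer is None:
            continue  # looks like an ordinary failure; retry elsewhere
        ip, sig = answer
        if hmac.compare_digest(sig, sign(domain, ip)):
            return ip  # untampered answer accepted
        # signature mismatch: treated like an attack, try the next server
    return None

# Both the silent block and the redirect are bypassed by falling back to
# an offshore resolver -- the circumvention the critics describe.
print(validating_lookup("infringing-site.example",
                        [filtered_resolver, redirecting_resolver,
                         offshore_resolver]))  # prints 203.0.113.7
```

In this sketch the blocking order changes nothing for a determined client, while a well-behaved validating client sees the redirect as an attack, which is the substance of both the white paper's "indistinguishable failures" argument and the DNSSEC objections discussed below.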
Domain Name System Security Extensions

Stewart Baker, former first Assistant Secretary for Policy at the Department of Homeland Security and former General Counsel of the National Security Agency, stated that SOPA would do "great damage to Internet security" by undermining Domain Name System Security Extensions (DNSSEC), a proposed security upgrade for DNS, since a browser must treat all redirects the same, and must continue to search until it finds a DNS server (possibly overseas) providing untampered results. On December 14, 2011, he wrote that SOPA was "badly in need of a knockout punch" due to its impact on security and DNS. DNSSEC is a set of protocols developed by the Internet Engineering Task Force (IETF) for ensuring internet security. A white paper by the Brookings Institution noted, "The DNS system is based on trust", adding that DNSSEC was developed to prevent malicious redirection of DNS traffic, and that "other forms of redirection will break the assurances from this security tool". On November 17, Sandia National Laboratories, a research agency of the U.S. Department of Energy, released a technical assessment of the DNS filtering provisions in the House and Senate bills, in response to Representative Zoe Lofgren's (D-CA) request. The assessment stated that the proposed DNS filtering would be unlikely to be effective, would negatively impact internet security, and would delay full implementation of DNSSEC. On November 18, House Cybersecurity Subcommittee chair Dan Lungren stated that he had "very serious concerns" about SOPA's impact on DNSSEC, adding, "we don't have enough information, and if this is a serious problem as was suggested by some of the technical experts that got in touch with me, we have to address it".

Transparency in enforcement

Brooklyn Law School professor Jason Mazzone warned, "Much of what will happen under SOPA will occur out of the public eye and without the possibility of holding anyone accountable.
For when copyright law is made and enforced privately, it is hard for the public to know the shape that the law takes and harder still to complain about its operation."

Supporters

Legislators

The Stop Online Piracy Act was introduced by Representative Lamar Smith (R-TX) and was initially co-sponsored by Howard Berman (D-CA), Marsha Blackburn (R-TN), Mary Bono Mack (R-CA), Steve Chabot (R-OH), John Conyers (D-MI), Ted Deutch (D-FL), Elton Gallegly (R-CA), Bob Goodlatte (R-VA), Timothy Griffin (R-AR), Dennis A. Ross (R-FL), Adam Schiff (D-CA) and Lee Terry (R-NE). As of January 16, 2012, there were 31 sponsors.

Companies and organizations

The legislation has broad support from organizations that rely on copyright, including the Motion Picture Association of America, the Recording Industry Association of America, Entertainment Software Association, Macmillan US, Viacom, and various other companies and unions in the cable, movie, and music industries. Supporters also include trademark-dependent companies such as Nike, L'Oréal, and Acushnet Company. Both the AFL–CIO and the U.S. Chamber of Commerce support H.R. 3261, and many trade unions and industry groups, large and small, have also publicly praised the legislation. In a joint statement, the American Federation of Musicians (AFM), American Federation of Television and Radio Artists (AFTRA), Directors Guild of America (DGA), International Alliance of Theatrical Stage Employees, Moving Picture Technicians, Artists and Allied Crafts of the United States, Its Territories and Canada (IATSE), International Brotherhood of Teamsters (IBT), and Screen Actors Guild (SAG) all showed support for SOPA. Smaller trade organizations, such as A2IM, which represents independent musicians, have also backed the bill. In June 2011, former Bill Clinton press secretary Mike McCurry and former George W.
Bush advisor Mark McKinnon, business partners in Public Strategies, Inc., started a campaign which echoed McCurry's earlier work in the network neutrality legislative fight. McCurry represented SOPA/PIPA in Politico as a way to combat theft online, drawing a favorable comment from the MPAA. On the 15th, McCurry and Arts + Labs co-chair McKinnon sponsored the "CREATE – A Forum on Creativity, Commerce, Copyright, Counterfeiting and Policy" conference with members of Congress, artists and information-business executives. On September 22, 2011, a letter signed by over 350 businesses and organizations—including NBCUniversal, Pfizer, Ford Motor Company, Revlon, NBA, and Macmillan US—was sent to Congress encouraging the passage of the legislation. Fightonlinetheft.com, a website of The Coalition Against Counterfeiting and Piracy (a project of the United States Chamber of Commerce Global Intellectual Property Center), cites a long list of supporters including these and the Fraternal Order of Police, the National Governors Association, the U.S. Conference of Mayors, the National Association of Attorneys General, the Better Business Bureau, and the National Consumers League. On November 22, the CEO of the Business Software Alliance (BSA) said, "valid and important questions have been raised about the bill." He said that definitions and remedies needed to be tightened and narrowed, but "BSA stands ready to work with Chairman Smith and his colleagues on the Judiciary Committee to resolve these issues". On December 5, the Information Technology and Innovation Foundation, a non-partisan non-profit, published an article that blasted critics of SOPA and defended the bill. The report called opponents' claims about DNS filtering "inaccurate" and their warnings against censorship "unfounded", and recommended that the legislation be revised and passed into law. On December 22, Go Daddy, one of the world's largest domain name registrars, stated that it supported SOPA.
Go Daddy then rescinded its support, its CEO saying, "Fighting online piracy is of the utmost importance, which is why Go Daddy has been working to help craft revisions to this legislation—but we can clearly do better. It's very important that all Internet stakeholders work together on this. Getting it right is worth the wait. Go Daddy will support it when and if the Internet community supports it." In January 2012, the Entertainment Software Association announced support for SOPA, although some association members expressed opposition. Creative America, a group representing television networks, movie studios, and entertainment unions, produced a "fact vs. fiction" flyer that aimed to correct misperceptions about rogue sites legislation.

Others

Professor and intellectual property rights lawyer Hillel I. Parness, a partner at Robins, Kaplan, Miller & Ciresi, reviewed the bill, stating in a legal analysis that "There's a court involved here." In regard to "safe harbors," he stated the safe harbor provisions created by the DMCA in 1998 would still apply. "I think the proponents of the bill would say, what we're looking at today is a very different kind of Internet. The fact that the courts have said that entities like YouTube can be passive when it comes to copyright infringement, and just wait for notices rather than having to take any affirmative action, is also frustrating to them", he said. Regarding censorship concerns, he explained that none of the criminal copyright statutes in the bill were new, and therefore, "if there was a risk of abuse, that risk has always been there. And I have confidence in the structure of our court system, that the prosecutors and the courts are held to certain standards that should not allow a statute such as this to be manipulated in that way."
Constitutional law expert Floyd Abrams, on behalf of the American Federation of Television and Radio Artists (AFTRA), the Directors Guild of America (DGA), the International Alliance of Theatrical and Stage Employees (IATSE), the Screen Actors Guild (SAG), the Motion Picture Association of America (MPAA) and others, reviewed the proposed legislation and concluded, "The notion that adopting legislation to combat the theft of intellectual property on the Internet threatens freedom of expression and would facilitate, as one member of the House of Representatives recently put it, 'the end of the Internet as we know it,' is thus insupportable. Copyright violations have never been protected by the First Amendment and have been routinely punished wherever they occur, including the Internet. This proposed legislation is not inconsistent with the First Amendment; it would protect creators of speech, as Congress has done since this Nation was founded, by combating its theft."

White House position

On January 14, 2012, the Obama administration responded to a petition against the bill, stating that while it would not support legislation with provisions that could lead to Internet censorship, squelching of innovation, or reduced Internet security, it encouraged "all sides to work together to pass sound legislation this year that provides prosecutors and rights holders new legal tools to combat online piracy originating beyond U.S. borders while staying true to the principles outlined above in this response." More than 100,000 people petitioned the White House in protest. Three officials from the Obama administration articulated the White House's position on proposed anti-piracy legislation, balancing the need for strong antipiracy measures while respecting both freedom of expression and the way information and ideas are shared on the Internet.
"While we believe that online piracy by foreign websites is a serious problem that requires a serious legislative response, we will not support legislation that reduces freedom of expression, increases cybersecurity risk, or undermines the dynamic, innovative global Internet."

Opposition

Legislators

House Minority Leader Nancy Pelosi (D-CA) expressed opposition to the bill, as did Representatives Darrell Issa (R-CA) and presidential candidate Ron Paul (R-TX), who joined nine Democrats to sign a letter to other House members warning that the bill would cause "an explosion of innovation-killing lawsuits and litigation". "Issa said the legislation is beyond repair and must be rewritten from scratch", reported The Hill. Issa and Lofgren announced plans for legislation offering "a copyright enforcement process modeled after the U.S. International Trade Commission's (ITC) patent infringement investigations". Politico referred to support as an "election liability" for legislators. Subsequently, proponents began hinting that key provisions might be deferred, with opponents stating this was inadequate. Representative Jared Polis (D-CO) lobbied against SOPA in the game League of Legends and also posted on the official game message boards.

Companies and organizations

Opponents include protest organizer Fight for the Future, Google, Yahoo!, YouTube, Facebook, Twitter, AOL, LinkedIn, eBay, Mozilla Corporation, Mojang, Riot Games, Epic Games, Reddit, Wikipedia and the Wikimedia Foundation, in addition to human rights organizations such as Reporters Without Borders, the Electronic Frontier Foundation (EFF), the ACLU, and Human Rights Watch. Kaspersky Lab, a major computer security company, demonstrated its opposition to SOPA and "decided to discontinue its membership in the BSA".
On December 13, 2011, Julian Sanchez of the libertarian think tank Cato Institute came out in strong opposition to the bill, saying that while the amended version "trims or softens a few of the most egregious provisions of the original proposal... the fundamental problem with SOPA has never been these details; it's the core idea. The core idea is still to create an Internet blacklist..." The Library Copyright Alliance (including the American Library Association) objected to the broadened definition of "willful infringement" and the introduction of felony penalties for noncommercial streaming infringement, stating that these changes could encourage criminal prosecution of libraries. A Harvard law professor's analysis said that this provision was written so broadly that it could make mainstream musicians felons for uploading covers of other people's music to sites like YouTube. On November 22, Mike Masnick of Techdirt called SOPA "toxic" and published a detailed criticism of the ideas underlying the bill, writing that "one could argue that the entire Internet enables or facilitates infringement", and saying that a list of sites compiled by the entertainment industry included the personal site of one of their own artists, 50 Cent, and legitimate internet companies. The article questioned the effect of the bill on $2 trillion in GDP and 3.1 million jobs, with a host of consequential problems on investment, liability and innovation. Paul Graham, the founder of venture capital firm Y Combinator, opposed the bill and banned all SOPA-supporting companies from its "demo day" events. "If these companies are so clueless about technology that they think SOPA is a good idea", he asked, "how could they be good investors?" The prominent pro-democracy organization Avaaz.org started a petition in protest of SOPA that has so far gathered over 3.4 million signatures worldwide.
The Center for Democracy and Technology maintains a list of SOPA and PIPA opponents consisting of the editorial boards of The New York Times, the Los Angeles Times, 34 other organizations and hundreds of prominent individuals. Zynga Game Network, creator of Facebook games Texas HoldEm Poker and FarmVille, wrote to the sponsors of both bills highlighting concerns over the effect on "the DMCA's safe harbor provisions ... [which] ... have been a cornerstone of the U.S. technology industry's growth and success", and opposing the bill due to its impact on "innovation and dynamism".

Others

Computer scientist Vint Cerf, one of the founders of the Internet, now Google vice president, wrote to Smith, saying "Requiring search engines to delete a domain name begins a worldwide arms race of unprecedented 'censorship' of the Web", in a letter published on CNet. On December 15, 2011, a second hearing was scheduled to amend and vote on SOPA. Many opponents remained firm even after Smith proposed a 71-page amendment to the bill to address concerns. NetCoalition, which works with Google, Twitter, eBay, and Facebook, appreciated that Smith was listening, but said it nonetheless could not support the amendment. Issa stated that Smith's amendment "retains the fundamental flaws of its predecessor by blocking Americans' ability to access websites, imposing costly regulation on Web companies and giving Attorney General Eric Holder's Department of Justice broad new powers to police the Internet". In December 2011, screenwriter and comics writer Steve Niles spoke out against SOPA, commenting, "I know folks are scared to speak out because a lot of us work for these companies, but we have to fight. Too much is at stake." In January 2012, novelist, screenwriter and comics writer Peter David directed his ire at the intellectual property pirates whose activities he felt provoked the creation of SOPA.
While expressing opposition to SOPA because of his view that the then-current language of the bill would go too far in its restriction of free expression, and would probably be scaled down, David argued that content pirates, such as the websites that had posted his novels online in their entirety for free downloads, as well as users who supported or took advantage of these activities, could have prevented SOPA by respecting copyright laws. Twenty-one artists signed an open letter to Congress urging them to exercise extreme caution, including comedian Aziz Ansari, The Lonely Island music parody band, MGMT, OK Go, Jason Mraz and Trent Reznor of Nine Inch Nails. The letter reads, "As creative professionals, we experience copyright infringement on a very personal level. Commercial piracy is deeply unfair and pervasive leaks of unreleased films and music regularly interfere with the integrity of our creations. We are grateful for the measures policymakers have enacted to protect our works. [...] We fear that the broad new enforcement powers provided under SOPA and PIPA could be easily abused against legitimate services like those upon which we depend. These bills would allow entire websites to be blocked without due process, causing collateral damage to the legitimate users of the same services - artists and creators like us who would be censored as a result." Filmmaker Michael Moore also shut down his websites during the week of protest, while other celebrities, including Ashton Kutcher, Alec Baldwin, and rapper B.o.B expressed their opposition via Twitter. The Daily Show's Jon Stewart stated that SOPA would "break the Internet". According to a New York Times report (February 8, 2012), Art Brodsky of Public Knowledge said, "The movie business is fond of throwing out numbers about how many millions of dollars are at risk and how many thousands of jobs are lost ... We don't think it correlates to the state of the industry."
The report also noted that "some in the internet world, including Tim O'Reilly, ... go so far as to question whether illegitimate downloading and sharing is such a bad thing. In fact, some say that it could even be a boon to artists and other creators." Tim O'Reilly is quoted as saying, "The losses due to piracy are far outweighed by the benefits of the free flow of information, which makes the world richer, and develops new markets for legitimate content ... Most of the people who are downloading unauthorized copies of O'Reilly books would never have paid us for them anyway."

International response

Organizations in the international civil and human rights community expressed concerns that SOPA would cause the United States to lose its position as a global leader in supporting a free and open Internet for public good. On November 18, 2011, the European Parliament adopted by a large majority a resolution that "stresses the need to protect the integrity of the global Internet and freedom of communication by refraining from unilateral measures to revoke IP addresses or domain names". Private individuals petitioned the Foreign and Commonwealth Office, asking the British government to condemn the bill. Vice-President of the European Commission and European Commissioner for Digital Agenda Neelie Kroes said she was "Glad [the] tide is turning on SOPA," explaining that rather than having "bad legislation" there "should be safeguarding benefits of open net". "Speeding is illegal too but you don't put speed bumps on the motorway", she said.

Protest actions

On November 16, 2011, Tumblr, Mozilla, Techdirt, and the Center for Democracy and Technology were among many Internet companies that protested by participating in American Censorship Day. They displayed black banners over their site logos with the words "STOP CENSORSHIP." Google linked an online petition to its site, and said it collected more than 7 million signatures from the United States.
Markham Erickson, executive director of NetCoalition, told Fox News that "a number of companies have had discussions about [blacking out services]" and discussion of the option spread to other media outlets. In January 2012, Reddit announced plans to black out its site for twelve hours on January 18, as company co-founder Alexis Ohanian announced he was going to testify to Congress. "He's of the firm position that SOPA could potentially 'obliterate' the entire tech industry", Paul Tassi wrote in Forbes. Tassi also opined that Google and Facebook would have to join the blackout to reach a sufficiently broad audience. Other prominent sites that planned to participate in the January 18 blackout were Cheezburger Sites, Mojang, Major League Gaming, Boing Boing, BoardGameGeek, xkcd, SMBC and The Oatmeal. Wider protests were considered, and in some cases committed to, by major internet sites, with high-profile bodies such as Google, Facebook, Twitter, Yahoo, Amazon, AOL, Reddit, Mozilla, LinkedIn, IAC, eBay, PayPal, WordPress and Wikimedia being widely named as "considering" or committed to an "unprecedented" internet blackout on January 18, 2012. On January 17, a Republican aide on Capitol Hill said that the protests were making their mark, with SOPA having already become "a dirty word beyond anything you can imagine". A series of pickets against the bill were held at the U.S. Embassy in Moscow. Two picketers were arrested. On January 21, 2012, RT news reported, "Bill Killed: SOPA death celebrated as Congress recalls anti-piracy acts". The Electronic Frontier Foundation, a rights advocacy non-profit group opposing the bill, said the protests were the biggest in Internet history, with over 115,000 sites altering their webpages. SOPA supporters complained that the bill was being misrepresented amidst the protests.
RIAA spokesman Jonathan Lamy said, "It's a dangerous and troubling development when the platforms that serve as gateways to information intentionally skew the facts to incite their users and arm them with misinformation", a sentiment echoed by RIAA CEO Cary Sherman, who said "it's very difficult to counter the misinformation when the disseminators also own the platform". At the American Constitution Society's 2012 National Convention, the Democratic Party's chief counsel to the United States House Judiciary Subcommittee on Courts, Intellectual Property and the Internet said that the protests were "orchestrated by misinformation by a few actors," adding that "activism is welcome on the Hill, but... There's this thing called 'mob rule', and it's not always right."

Wikipedia blackout

The English Wikipedia blackout occurred for 24 hours on January 18–19, 2012. In place of articles (with the exception of those for SOPA and PIPA themselves), the site showed only a message in protest of SOPA and PIPA asking visitors to "Imagine a world without free knowledge." An estimated 160 million people saw the banner. A month earlier, Wikipedia co-founder Jimmy Wales initiated discussion with editors regarding a potential knowledge blackout, a protest inspired by a successful campaign by the Italian-language Wikipedia to block the Italian DDL intercettazioni bill, terms of which could have infringed the encyclopedia's editorial independence. Editors and others mulled interrupting service for one or more days as in the Italian protest, or presenting site visitors with a blanked page directing them to further information before permitting them to complete searches. On January 16, the Wikimedia Foundation announced that the English-language Wikipedia would be blacked out for 24 hours on January 18.
SOPA's sponsor in the House, Chairman Smith, called Wikipedia's blackout a "publicity stunt", saying: "It is ironic that a website dedicated to providing information is spreading misinformation about the Stop Online Piracy Act." Smith went on to insist that SOPA "will not harm Wikipedia, domestic blogs or social networking sites".

Megaupload shutdown and protest

On January 19, 2012, Megaupload, a Hong Kong–based company providing file sharing services, was shut down by the US Department of Justice and the Federal Bureau of Investigation. Some commentators and observers have asserted that the FBI shutdown of Megaupload proves that SOPA and PIPA are unnecessary.

Legislative history

The House Judiciary Committee held hearings on November 16 and December 15, 2011. The Committee was scheduled to continue debate in January 2012, but on January 17 Chairman Smith said that "Due to the Republican and Democratic retreats taking place over the next two weeks, markup of the Stop Online Piracy Act is expected to resume in February." However, in the wake of online protests held on January 18, 2012, Rep. Lamar Smith stated, "The House Judiciary Committee will postpone consideration of the legislation until there is wider agreement on a solution", and Sen. Reid announced that the PIPA test vote scheduled for January 24 would also be postponed.

November 16 House Judiciary Committee hearing

At the House Judiciary Committee hearing, there was concern among some observers that the set of speakers who testified lacked technical expertise. Technology news site CNET reported "One by one, each witness—including a lobbyist for the Motion Picture Association of America—said they weren't qualified to discuss... DNSSEC." Adam Thierer, a senior research fellow at the Mercatus Center, similarly said, "The techno-ignorance of Congress was on full display.
Member after member admitted that they really didn't have any idea what impact SOPA's regulatory provisions would have on the DNS, online security, or much of anything else." Lofgren stated, "We have no technical expertise on this panel today." She also criticized the tone of the hearing, saying, "It hasn't generally been the policy of this committee to dismiss the views of those we are going to regulate. Impugning the motives of the critics instead of the substance is a mistake." Lungren told Politico's Morning Tech that he had "very serious concerns" about SOPA's impact on DNSSEC, adding "we don't have enough information, and if this is a serious problem as was suggested by some of the technical experts that got in touch with me, we have to address it. I can't afford to let that go by without dealing with it." Gary Shapiro, CEO of the Consumer Electronics Association, stated, "The significant potential harms of this bill are reflected by the extraordinary coalition arrayed against it. Concerns about SOPA have been raised by Tea Partiers, progressives, computer scientists, human rights advocates, venture capitalists, law professors, independent musicians, and many more. Unfortunately, these voices were not heard at today's hearing." An editorial in Fortune argued, "This is just another case of Congress doing the bidding of powerful lobbyists—in this case, Hollywood and the music industry, among others. It would be downright mundane if the legislation weren't so draconian and the rhetoric surrounding it weren't so transparently pandering."

December 15 markup of the bill

Since its introduction, a number of opponents of the bill have expressed concerns. The bill was presented for markup by the House Judiciary Committee on December 15. An aide to Smith stated that "He is open to changes but only legitimate changes. Some site[s] are totally capable of filtering illegal content, but they won't and are instead profiting from the traffic of illegal content."
Markup outcome After the first day of the hearing, more than 20 amendments had been rejected, including one by Darrell Issa which would have stripped provisions targeting search engines and Internet providers. PC World reported that the 22–12 vote on the amendment could foreshadow strong support for the bill by the committee. The Committee adjourned on the second day agreeing to continue debate early in 2012. Smith announced a plan to remove the provision that required Internet service providers to block access to certain foreign websites. On January 15, 2012, Issa said he had received assurances from Rep. Eric Cantor that the bill would not come up for a vote until a consensus could be reached. MPAA's continued efforts to enact SOPA principles The 2014 Sony Pictures hack revealed that the MPAA had continued its efforts to enact SOPA-like blocking principles since the bill died in Congress. The emails indicated that the MPAA was actively exploring new strategies to implement SOPA-like regulations, such as using the All Writs Act to "allow [the MPAA] to obtain court orders requiring site blocking without first having to sue and prove the target ISPs are liable for copyright infringement." The MPAA had also allied itself with National Association of Attorneys General president Jim Hood, who supports SOPA principles and has stated that "Google's not a government… they don't owe anyone a First Amendment right… [i]f you're an illegal site, you ought to clean up your act, instead of Google making money off it." On November 27, 2013, Hood sent a letter to Google outlining his grievances. It was later revealed that much of the letter was drafted by the law firm representing the MPAA. On October 21, 2014, Hood issued a subpoena to Google for information about, among other items, its advertising partnerships and practices concerning illegal and sexual content.
Google requested an injunction to quash the subpoena from the United States District Court of the Southern District of Mississippi, Northern Division. Google was granted such an injunction on March 2, 2015. The injunction also prevented Hood from bringing a charge against Google for making third-party content available to internet users. Effectively, the injunction protected Google from having Hood's claims enforced until after the conclusion of the case. An MPAA spokesperson criticized Google's use of the First Amendment, accusing the company of using freedom of speech "as a shield for unlawful activities." Leaders in the technology industry commended the federal court for issuing the injunction. In addition, one of Google's head lawyers noted that "[w]e're pleased with the court's ruling, which recognizes that the MPAA's long-running campaign to censor the web — which started with SOPA — is contrary to federal law."
See also
Anti-Counterfeiting Trade Agreement (ACTA)
Combating Online Infringement and Counterfeits Act (COICA)
Commercial Felony Streaming Act
Children's Online Privacy Protection Act (COPPA)
Copyright bills in the 112th United States Congress
Copyright Term Extension Act (CTEA)
Cybercrime Prevention Act of 2012 (Philippines)
Cyber Intelligence Sharing and Protection Act
Digital Economy Act 2010 (in the UK)
Ley Sinde
Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act (PROTECT IP Act, or PIPA), the corresponding Senate bill
Protecting Children from Internet Pornographers Act of 2011
Protests against SOPA and PIPA
Russian State Duma Bill 89417-6
Splinternet
Trans-Pacific Partnership
Trans-Pacific Strategic Economic Partnership (TPP)
European Directive on Copyright in the Digital Single Market, which passed the European Parliament in March 2019 despite similar protests to the ones against SOPA/PIPA, expands copyright liability to websites. Also known as the "meme ban" among critics.
References
External links
H.R. 3261 on Thomas – Library of Congress (archive)
YouTube video: Internet's Own Boy: The story of Aaron Swartz
H.R. 3261 on GovTrack
Individual congressmen and senators' positions on SOPA
Copyright Policy, Creativity, and Innovation in the Digital Economy; The Department of Commerce Internet Policy Task Force
What DNS Is Not – Brookings Institution white paper
Statement on SOPA and PIPA – ACM position statement
What Wikipedia Won't Tell You – Cary H. Sherman (CEO, RIAA), NYT, Op-Ed (02/08/2012)
It's Evolution, Stupid – Peter Sunde (Co-Founder, The Pirate Bay), Wired, Column (02/10/2012)

Proposed legislation of the 112th United States Congress
United States federal computing legislation
Copyright enforcement
Domain Name System
Internet access
Internet law in the United States
United States proposed federal intellectual property legislation
Mass media-related controversies in the United States
Cryptanalysis of the Lorenz cipher
Cryptanalysis of the Lorenz cipher was the process that enabled the British to read high-level German army messages during World War II. The British Government Code and Cypher School (GC&CS) at Bletchley Park decrypted many communications between the Oberkommando der Wehrmacht (OKW, German High Command) in Berlin and their army commands throughout occupied Europe, some of which were signed "Adolf Hitler, Führer". These were intercepted non-Morse radio transmissions that had been enciphered by the Lorenz SZ teleprinter rotor stream cipher attachments. Decrypts of this traffic became an important source of "Ultra" intelligence, which contributed significantly to Allied victory. For its high-level secret messages, the German armed services enciphered each character using various online Geheimschreiber (secret writer) stream cipher machines at both ends of a telegraph link using the 5-bit International Telegraphy Alphabet No. 2 (ITA2). These machines were subsequently discovered to be the Lorenz SZ (SZ for Schlüssel-Zusatz, meaning "cipher attachment") for the army, the Siemens and Halske T52 for the air force and the Siemens T43, which was little used and never broken by the Allies. Bletchley Park decrypts of messages enciphered with the Enigma machines revealed that the Germans called one of their wireless teleprinter transmission systems "Sägefisch" (sawfish), which led British cryptographers to refer to encrypted German radiotelegraphic traffic as "Fish". "Tunny" (tunafish) was the name given to the first non-Morse link, and it was subsequently used for the cipher machines and their traffic. As with the entirely separate cryptanalysis of the Enigma, it was German operational shortcomings that allowed the initial diagnosis of the system, and a way into decryption. Unlike Enigma, no physical machine reached allied hands until the very end of the war in Europe, long after wholesale decryption had been established. 
The problems of decrypting Tunny messages led to the development of "Colossus", the world's first electronic, programmable digital computer, ten of which were in use by the end of the war, by which time some 90% of selected Tunny messages were being decrypted at Bletchley Park. Albert W. Small, a cryptanalyst from the US Army Signal Corps who was seconded to Bletchley Park and worked on Tunny, said in his December 1944 report back to Arlington Hall that: "Daily solutions of Fish messages at GC&CS reflect a background of British mathematical genius, superb engineering ability, and solid common sense. Each of these has been a necessary factor." German Tunny machines The Lorenz SZ cipher attachments implemented a Vernam stream cipher, using a complex array of twelve wheels that delivered what should have been a cryptographically secure pseudorandom number as a key stream. The key stream was combined with the plaintext to produce the ciphertext at the transmitting end using the exclusive or (XOR) function. At the receiving end, an identically configured machine produced the same key stream which was combined with the ciphertext to produce the plaintext, i.e. the system implemented a symmetric-key algorithm. The key stream was generated by ten of the twelve wheels. It was the result of XOR-ing the 5-bit character generated by the right-hand five wheels, the chi (χ) wheels, and the left-hand five, the psi (ψ) wheels. The chi wheels always moved on one position for every incoming ciphertext character, but the psi wheels did not. The central two mu (μ) or "motor" wheels determined whether or not the psi wheels rotated with a new character. After each letter was enciphered either all five psi wheels moved on, or they remained still and the same letter of psi-key was used again. Like the chi wheels, the μ61 wheel moved on after each character. When μ61 had the cam in the active position and so generated x (before moving), μ37 moved on once; when the cam was in the inactive position (before moving), μ37 and the psi wheels stayed still. On all but the earliest machines, there was an additional factor that played into the moving on or not of the psi wheels.
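The wheel-stepping logic described above can be sketched as a toy model. Everything here is illustrative: the cam patterns are random, no "limitations" are modelled, and the exact stepping rule (μ61 steps every character, μ37 steps when μ61's cam is active, the psi wheels step when μ37's cam is active) is one common description of the machine, assumed for this sketch.

```python
import random

def make_wheel(size, rng):
    """A wheel is a list of cam bits (x = 1, dot = 0), roughly half raised."""
    return [rng.randint(0, 1) for _ in range(size)]

class ToyTunny:
    """Toy model of the SZ40 key-stream logic, without 'limitations':
    chi wheels step every character; mu61 steps every character;
    mu37 steps when mu61's cam is active; the five psi wheels step
    together when mu37's cam is active."""
    CHI_SIZES = [41, 31, 29, 26, 23]
    PSI_SIZES = [43, 47, 51, 53, 59]

    def __init__(self, seed=0):
        rng = random.Random(seed)
        self.chi = [make_wheel(n, rng) for n in self.CHI_SIZES]
        self.psi = [make_wheel(n, rng) for n in self.PSI_SIZES]
        self.mu61 = make_wheel(61, rng)
        self.mu37 = make_wheel(37, rng)
        self.reset()

    def reset(self):
        """Return all wheels to their start positions."""
        self.chi_pos = [0] * 5
        self.psi_pos = [0] * 5
        self.p61 = self.p37 = 0

    def key_char(self):
        """Produce the next 5-bit key character: chi XOR (extended) psi."""
        k = 0
        for i in range(5):
            bit = self.chi[i][self.chi_pos[i]] ^ self.psi[i][self.psi_pos[i]]
            k = (k << 1) | bit
        # Step the wheels for the next character.
        for i in range(5):
            self.chi_pos[i] = (self.chi_pos[i] + 1) % self.CHI_SIZES[i]
        psi_moves = self.mu37[self.p37] == 1
        if self.mu61[self.p61] == 1:
            self.p37 = (self.p37 + 1) % 37
        self.p61 = (self.p61 + 1) % 61
        if psi_moves:
            for i in range(5):
                self.psi_pos[i] = (self.psi_pos[i] + 1) % self.PSI_SIZES[i]
        return k

def crypt(machine, chars):
    """XOR each 5-bit character with the key stream (encipher = decipher)."""
    return [c ^ machine.key_char() for c in chars]
```

Because the cipher is a pure XOR with the key stream, running an identically reset machine over the ciphertext recovers the plaintext, mirroring the symmetric operation of the real attachments.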
These were of four different types and were called "Limitations" at Bletchley Park. All involved some aspect of the previous positions of the machine's wheels. The numbers of cams on the set of twelve wheels of the SZ42 machines totalled 501 and were co-prime with each other, giving an extremely long period before the key sequence repeated. Each cam could either be in a raised position, in which case it contributed x to the logic of the system, reversing the value of a bit, or in the lowered position, in which case it generated •. The total possible number of patterns of raised cams was 2^501, which is an astronomically large number. In practice, however, about half of the cams on each wheel were in the raised position. Later, the Germans realized that if the number of raised cams was not very close to 50% there would be runs of xs and •s, a cryptographic weakness. The process of working out which of the 501 cams were in the raised position was called "wheel breaking" at Bletchley Park. Deriving the start positions of the wheels for a particular transmission was termed "wheel setting" or simply "setting". The fact that the psi wheels all moved together, but not with every input character, was a major weakness of the machines that contributed to British cryptanalytical success. Secure telegraphy Electro-mechanical telegraphy was developed in the 1830s and 1840s, well before telephony, and operated worldwide by the time of the Second World War. An extensive system of cables linked sites within and between countries, with a standard voltage of −80 V indicating a "mark" and +80 V indicating a "space". Where cable transmission became impracticable or inconvenient, such as for mobile German Army Units, radio transmission was used. Teleprinters at each end of the circuit consisted of a keyboard and a printing mechanism, and very often a five-hole perforated paper-tape reading and punching mechanism.
When used online, pressing an alphabet key at the transmitting end caused the relevant character to print at the receiving end. Commonly, however, the communication system involved the transmitting operator preparing a set of messages offline by punching them onto paper tape, and then going online only for the transmission of the messages recorded on the tape. The system would typically send some ten characters per second, and so occupy the line or the radio channel for a shorter period of time than for online typing. The characters of the message were represented by the codes of the International Telegraphy Alphabet No. 2 (ITA2). The transmission medium, either wire or radio, used asynchronous serial communication with each character signaled by a start (space) impulse, 5 data impulses and 1½ stop (mark) impulses. At Bletchley Park mark impulses were signified by x ("cross") and space impulses by • ("dot"). For example, the letter "H" would be coded as ••x•x. The figure shift (FIGS) and letter shift (LETRS) characters determined how the receiving end interpreted the string of characters up to the next shift character. Because of the danger of a shift character being corrupted, some operators would type a pair of shift characters when changing from letters to numbers or vice versa. So they would type 55M88 to represent a full stop. Such doubling of characters was very helpful for the statistical cryptanalysis used at Bletchley Park. After encipherment, shift characters had no special meaning. The speed of transmission of a radio-telegraph message was three or four times that of Morse code and a human listener could not interpret it. A standard teleprinter, however, would produce the text of the message. The Lorenz cipher attachment changed the plaintext of the message into ciphertext that was uninterpretable to those without an identical machine identically set up. This was the challenge faced by the Bletchley Park codebreakers.
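The dot-and-cross notation can be sketched with a small helper. The handful of ITA2 codes below are taken from the standard letter table; rendering them most-significant impulse first is an assumption made here for display.

```python
# A few entries from the standard ITA2 letter table, written as 5-bit
# values with the first impulse as the most significant bit.
ITA2 = {"E": 0b10000, "H": 0b00101, "T": 0b00001, "A": 0b11000}

def to_bletchley(code):
    """Render a 5-bit ITA2 code in Bletchley Park notation:
    mark impulse -> 'x' (cross), space impulse -> '•' (dot)."""
    return "".join("x" if (code >> i) & 1 else "•" for i in range(4, -1, -1))

print(to_bletchley(ITA2["H"]))   # ••x•x
```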
Interception Intercepting Tunny transmissions presented substantial problems. As the transmitters were directional, most of the signals were quite weak at receivers in Britain. Furthermore, there were some 25 different frequencies used for these transmissions, and the frequency would sometimes be changed part way through. After the initial discovery of the non-Morse signals in 1940, a radio intercept station called the Foreign Office Research and Development Establishment was set up on a hill at Ivy Farm at Knockholt in Kent, specifically to intercept this traffic. The centre was headed by Harold Kenworthy, had 30 receiving sets and employed some 600 staff. It became fully operational early in 1943. Because a single missed or corrupted character could make decryption impossible, the greatest accuracy was required. The undulator technology used to record the impulses had originally been developed for high-speed Morse. It produced a visible record of the impulses on narrow paper tape. This was then read by people employed as "slip readers" who interpreted the peaks and troughs as the marks and spaces of ITA2 characters. Perforated paper tape was then produced for telegraphic transmission to Bletchley Park where it was punched out. The Vernam cipher The Vernam cipher implemented by the Lorenz SZ machines utilizes the Boolean "exclusive or" (XOR) function, symbolised by ⊕ and verbalised as "A or B but not both". This is represented by the following truth table, where x represents "true" and • represents "false":
A B A ⊕ B
• • •
• x x
x • x
x x •
Other names for this function are: exclusive disjunction, not equal (NEQ), and modulo 2 addition (without "carry") and subtraction (without "borrow"). Modulo 2 addition and subtraction are identical. Some descriptions of Tunny decryption refer to addition and some to differencing, i.e. subtraction, but they mean the same thing. The XOR operator is both associative and commutative.
Reciprocity is a desirable feature of a machine cipher so that the same machine with the same settings can be used either for enciphering or for deciphering. The Vernam cipher achieves this, as combining the stream of plaintext characters with the key stream produces the ciphertext, and combining the same key with the ciphertext regenerates the plaintext. Symbolically: Plaintext ⊕ Key = Ciphertext and Ciphertext ⊕ Key = Plaintext Vernam's original idea was to use conventional telegraphy practice, with a paper tape of the plaintext combined with a paper tape of the key at the transmitting end, and an identical key tape combined with the ciphertext signal at the receiving end. Each pair of key tapes would have been unique (a one-time tape), but generating and distributing such tapes presented considerable practical difficulties. In the 1920s four men in different countries invented rotor Vernam cipher machines to produce a key stream to act instead of a key tape. The Lorenz SZ40/42 was one of these. Security features A monoalphabetic substitution cipher such as the Caesar cipher can easily be broken, given a reasonable amount of ciphertext. This is achieved by frequency analysis of the different letters of the ciphertext, and comparing the result with the known letter frequency distribution of the plaintext. With a polyalphabetic cipher, there is a different substitution alphabet for each successive character. So a frequency analysis shows an approximately uniform distribution, such as that obtained from a (pseudo) random number generator. However, because one set of Lorenz wheels turned with every character while the other did not, the machine did not disguise the pattern in the use of adjacent characters in the German plaintext. Alan Turing discovered this weakness and invented the differencing technique described below to exploit it. 
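The reciprocity relations above follow from XOR being its own inverse, and can be sketched in a few lines. The ITA2 codes for H, E and L and the key characters are illustrative values.

```python
def vernam(stream, key):
    """Combine a character stream with a key stream by XOR.
    Because (a ^ k) ^ k == a, the same function both enciphers
    and deciphers."""
    return [s ^ k for s, k in zip(stream, key)]

plain = [0b00101, 0b10000, 0b01001]   # H, E, L from the standard ITA2 table
key   = [0b10101, 0b00110, 0b11010]   # illustrative key characters
cipher = vernam(plain, key)           # Plaintext ⊕ Key = Ciphertext
assert vernam(cipher, key) == plain   # Ciphertext ⊕ Key = Plaintext
```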
The pattern of which of the cams were in the raised position, and which in the lowered position was changed daily on the motor wheels (μ37 and μ61). The chi wheel cam patterns were initially changed monthly. The psi wheel patterns were changed quarterly until October 1942 when the frequency was increased to monthly, and then to daily on 1 August 1944, when the frequency of changing the chi wheel patterns was also changed to daily. The number of start positions of the wheels was 43×47×51×53×59×37×61×41×31×29×26×23 which is approximately 1.6×10^19 (16 billion billion), far too large a number for cryptanalysts to try an exhaustive "brute-force attack". Sometimes the Lorenz operators disobeyed instructions and two messages were transmitted with the same start positions, a phenomenon termed a "depth". The method by which the transmitting operator told the receiving operator the wheel settings that he had chosen for the message which he was about to transmit was termed the "indicator" at Bletchley Park. In August 1942, the formulaic starts to the messages, which were useful to cryptanalysts, were replaced by some irrelevant text, which made identifying the true message somewhat harder. This new material was dubbed quatsch (German for "nonsense") at Bletchley Park. During the phase of the experimental transmissions, the indicator consisted of twelve German forenames, the initial letters of which indicated the position to which the operators turned the twelve wheels. As well as showing when two transmissions were fully in depth, it also allowed the identification of partial depths where two indicators differed only in one or two wheel positions. From October 1942 the indicator system changed to the sending operator transmitting the unenciphered letters QEP followed by a two digit number. This number was taken serially from a code book that had been issued to both operators and gave, for each QEP number, the settings of the twelve wheels.
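The quoted size of the setting space is easy to check from the wheel sizes given in the text:

```python
import math

wheel_sizes = [43, 47, 51, 53, 59, 37, 61, 41, 31, 29, 26, 23]
positions = math.prod(wheel_sizes)
print(f"{positions:.2e}")   # about 1.60e+19
```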
The books were replaced when they had been used up, but between replacements, complete depths could be identified by the re-use of a QEP number on a particular Tunny link. Diagnosis Notation: letters can represent character streams, individual 5-bit characters or, if subscripted, individual bits of characters.
P: plaintext
K: key, the sequence of characters XOR'ed (added) to the plaintext to give the ciphertext
χ: chi component of key
ψ: psi component of key
ψ′: extended psi, the actual sequence of characters added by the psi wheels, including those when they do not advance
Z: ciphertext
D: de-chi, the ciphertext with the chi component of the key removed
Δ: prefix denoting any of the above XOR'ed with its successor character or bit
⊕: the XOR operation
The first step in breaking a new cipher is to diagnose the logic of the processes of encryption and decryption. In the case of a machine cipher such as Tunny, this entailed establishing the logical structure and hence functioning of the machine. This was achieved without the benefit of seeing a machine—which only happened in 1945, shortly before the allied victory in Europe. The enciphering system was very good at ensuring that the ciphertext contained no statistical, periodic or linguistic characteristics to distinguish it from random. However this did not apply to K, χ, ψ′ and D, which was the weakness that meant that Tunny keys could be solved. During the experimental period of Tunny transmissions when the twelve-letter indicator system was in use, John Tiltman, Bletchley Park's veteran and remarkably gifted cryptanalyst, studied the Tunny ciphertexts and identified that they used a Vernam cipher. When two transmissions (a and b) use the same key, i.e. they are in depth, combining them eliminates the effect of the key.
Let us call the two ciphertexts Za and Zb, the key K and the two plaintexts Pa and Pb. We then have: Za ⊕ Zb = (Pa ⊕ K) ⊕ (Pb ⊕ K) = Pa ⊕ Pb. If the two plaintexts can be worked out, the key can be recovered from either ciphertext-plaintext pair, e.g.: Za ⊕ Pa = K or Zb ⊕ Pb = K. On 31 August 1941, two long messages were received that had the same indicator HQIBPEXEZMUG. The first seven characters of these two ciphertexts were the same, but the second message was shorter. The first 15 characters of the two messages were as follows (in Bletchley Park interpretation): John Tiltman tried various likely pieces of plaintext, i.e. "cribs", against the Za ⊕ Zb string and found that the first plaintext message started with the German word SPRUCHNUMMER (message number). In the second plaintext, the operator had used the common abbreviation NR for NUMMER. There were more abbreviations in the second message, and the punctuation sometimes differed. This allowed Tiltman to work out, over ten days, the plaintext of both messages, as a sequence of plaintext characters discovered in Pa could then be tried against Pb and vice versa. In turn, this yielded almost 4000 characters of key. Members of the Research Section worked on this key to try to derive a mathematical description of the key generating process, but without success. Bill Tutte joined the section in October 1941 and was given the task. He had read chemistry and mathematics at Trinity College, Cambridge before being recruited to Bletchley Park. At his training course, he had been taught the Kasiski examination technique of writing out a key on squared paper with a new row after a defined number of characters that was suspected of being the frequency of repetition of the key. If this number was correct, the columns of the matrix would show more repetitions of sequences of characters than chance alone.
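Tiltman's depth reading can be sketched with invented messages and an invented key stream. The point is purely structural: the key cancels out of Za ⊕ Zb, and a crib guessed for one message immediately reveals the corresponding stretch of the other.

```python
def xor(a, b):
    """XOR two byte strings, truncating to the shorter one."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two invented plaintexts enciphered with the SAME key stream: a depth.
key = bytes([7, 19, 2, 30, 11, 25, 4, 17, 28, 9, 14, 3, 21, 6, 10, 8, 1, 13])
pa  = b"SPRUCHNUMMER 1234"
pb  = b"SPRUCHNR 56 ABCDE"
za, zb = xor(pa, key), xor(pb, key)

# The key cancels: Za ⊕ Zb = Pa ⊕ Pb.
assert xor(za, zb) == xor(pa, pb)

# Dragging a crib for message a through Za ⊕ Zb reveals message b:
crib = b"SPRUCHNUMMER"
revealed = xor(xor(za, zb)[:len(crib)], crib)
print(revealed)   # the start of Pb
```

Once both plaintexts are known, either ciphertext-plaintext pair gives back the key stream, which is how the almost 4000 characters of key were obtained.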
Tutte thought that it was possible that, rather than using this technique on the whole letters of the key, which were likely to have a long frequency of repetition, it might be worth trying it on the sequence formed by taking only one impulse (bit) from each letter, on the grounds that "the part might be cryptographically simpler than the whole". Given that the Tunny indicators used 25 letters (excluding J) for 11 of the positions, but only 23 letters for the twelfth, he tried Kasiski's technique on the first impulse of the key characters using a repetition of 25 × 23 = 575. This did not produce a large number of repetitions in the columns, but Tutte did observe the phenomenon on a diagonal. He therefore tried again with 574, which showed up repeats in the columns. Recognising that the prime factors of this number are 2, 7 and 41, he tried again with a period of 41 and "got a rectangle of dots and crosses that was replete with repetitions". It was clear, however, that the sequence of first impulses was more complicated than that produced by a single wheel of 41 positions. Tutte called this component of the key χ1 (chi). He figured that there was another component, which was XOR-ed with this, that did not always change with each new character, and that this was the product of a wheel that he called ψ1 (psi). The same applied for each of the five impulses—indicated here by subscripts. So for a single character, the key K consisted of two components: K = χ ⊕ ψ′. The actual sequence of characters added by the psi wheels, including those when they do not advance, was referred to as the extended psi, and symbolised by ψ′. Tutte's derivation of the ψ component was made possible by the fact that dots were more likely than not to be followed by dots, and crosses more likely than not to be followed by crosses. This was a product of a weakness in the German key setting, which they later stopped.
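The column-repeat idea behind Tutte's rectangle can be sketched on a single key impulse. Everything here is invented: the 41-cam pattern is random, and the psi component is modelled crudely as occasional bit flips (since Δψ is often null, at a lag of 41 it behaves roughly like sparse noise). Laying the stream out in rows of a trial width and scoring vertical agreement makes the true period stand out.

```python
import random

def column_agreement(bits, width):
    """Lay the stream out in rows of `width` and measure how often
    vertically adjacent bits agree; the wheel period scores highest."""
    pairs = [(bits[i], bits[i + width]) for i in range(len(bits) - width)]
    return sum(a == b for a, b in pairs) / len(pairs)

rng = random.Random(7)
wheel41 = [rng.randint(0, 1) for _ in range(41)]     # chi1-like cam pattern
# First-impulse key: the periodic chi component, plus a crude stand-in
# for the psi component as occasional flips.
stream = [wheel41[t % 41] ^ (1 if rng.random() < 0.15 else 0)
          for t in range(4000)]

scores = {w: column_agreement(stream, w) for w in range(20, 60)}
print(max(scores, key=scores.get))   # 41
```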
Once Tutte had made this breakthrough, the rest of the Research Section joined in to study the other impulses, and it was established that the five ψ wheels all moved together under the control of two μ (mu or "motor") wheels. Diagnosing the functioning of the Tunny machine in this way was a truly remarkable cryptanalytical achievement, and was described when Tutte was inducted as Officer of the Order of Canada in October 2001, as "one of the greatest intellectual feats of World War II". Turingery In July 1942 Alan Turing spent a few weeks in the Research Section. He had become interested in the problem of breaking Tunny from the keys that had been obtained from depths. In July, he developed a method of deriving the cam settings ("wheel breaking") from a length of key. It became known as "Turingery" (playfully dubbed "Turingismus" by Peter Ericsson, Peter Hilton and Donald Michie) and introduced the important method of "differencing" on which much of the rest of solving Tunny keys in the absence of depths, was based. Differencing The search was on for a process that would manipulate the ciphertext or key to produce a frequency distribution of characters that departed from the uniformity that the enciphering process aimed to achieve. Turing worked out that the XOR combination of the values of successive (adjacent) characters in a stream of ciphertext or key, emphasised any departures from a uniform distribution. The resultant stream was called the difference (symbolised by the Greek letter "delta" Δ) because XOR is the same as modulo 2 subtraction. So, for a stream of characters S, the difference ΔS was obtained as follows, where underline indicates the succeeding character: ΔS = S ⊕ S̲. The stream S may be ciphertext Z, plaintext P, key K or either of its two components χ and ψ. The relationship amongst these elements still applies when they are differenced.
For example, as well as: K = χ ⊕ ψ, it is the case that: ΔK = Δχ ⊕ Δψ. Similarly for the ciphertext, plaintext and key components: Z = P ⊕ χ ⊕ ψ, so: ΔZ = ΔP ⊕ Δχ ⊕ Δψ. The reason that differencing provided a way into Tunny, was that although the frequency distribution of characters in the ciphertext could not be distinguished from a random stream, the same was not true for a version of the ciphertext from which the chi element of the key had been removed. This is because, where the plaintext contained a repeated character and the psi wheels did not move on, the differenced psi character (Δψ) would be the null character ('/' at Bletchley Park). When XOR-ed with any character, this character has no effect, so in these circumstances, ΔD = ΔP. The ciphertext modified by the removal of the chi component of the key was called the de-chi D at Bletchley Park, and the process of removing it as "de-chi-ing". Similarly for the removal of the psi component which was known as "de-psi-ing" (or "deep sighing" when it was particularly difficult). So the delta de-chi ΔD was: ΔD = ΔZ ⊕ Δχ = ΔP ⊕ Δψ. Repeated characters in the plaintext were more frequent both because of the characteristics of German (EE, TT, LL and SS are relatively common), and because telegraphists frequently repeated the figures-shift and letters-shift characters as their loss in an ordinary telegraph transmission could lead to gibberish. To quote the General Report on Tunny: "Turingery introduced the principle that the key differenced at one, now called ΔΚ, could yield information unobtainable from ordinary key. This Δ principle was to be the fundamental basis of nearly all statistical methods of wheel-breaking and setting." Differencing was applied to each of the impulses of the ITA2 coded characters. So, for the first impulse, that was enciphered by wheels χ1 and ψ1, differenced at one: ΔK1 = Δχ1 ⊕ Δψ1. And for the second impulse: ΔK2 = Δχ2 ⊕ Δψ2. And so on. The periodicity of the chi and psi wheels for each impulse (41 and 43 respectively for the first impulse) is also reflected in the pattern of ΔK.
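The differencing identities above can be checked on a small worked example. The bit values are invented; the only structural assumptions are the relations already stated in the text (D = Z ⊕ χ, and a psi contribution that stalls over a repeated plaintext character).

```python
def delta(stream):
    """Difference a stream: each element XOR'ed with its successor."""
    return [a ^ b for a, b in zip(stream, stream[1:])]

# A stretch of plaintext with a repeated character at positions 1 and 2
# (telegraphese doubled characters), all values invented.
P   = [9, 17, 17, 4, 22]
chi = [3, 11, 20, 8, 30]        # chi advances every character
psi = [6, 13, 13, 13, 27]       # extended psi: stalls over the repeat

Z = [p ^ c ^ s for p, c, s in zip(P, chi, psi)]   # Z = P ⊕ χ ⊕ ψ
D = [z ^ c for z, c in zip(Z, chi)]               # de-chi: D = Z ⊕ χ

# ΔD = ΔP ⊕ Δψ everywhere...
assert delta(D) == [a ^ b for a, b in zip(delta(P), delta(psi))]
# ...and where P repeats AND psi stalls, Δψ is the null character, so ΔD = ΔP.
assert delta(psi)[1] == 0 and delta(D)[1] == delta(P)[1]
```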
However, given that the psi wheels did not advance for every input character, as did the chi wheels, it was not simply a repetition of the pattern every 41 × 43 = 1763 characters for ΔK1, but a more complex sequence. Turing's method Turing's method of deriving the cam settings of the wheels from a length of key obtained from a depth, involved an iterative process. Given that the delta psi character was the null character '/' half of the time on average, an assumption that ΔK = Δχ had a 50% chance of being correct. The process started by treating a particular ΔK character as being the Δχ for that position. The resulting putative bit pattern of x and • for each chi wheel, was recorded on a sheet of paper that contained as many columns as there were characters in the key, and five rows representing the five impulses of the Δχ. Given the knowledge from Tutte's work, of the periodicity of each of the wheels, this allowed the propagation of these values at the appropriate positions in the rest of the key. A set of five sheets, one for each of the chi wheels, was also prepared. These contained a set of columns corresponding in number to the cams for the appropriate chi wheel, and were referred to as a 'cage'. So the χ3 cage had 29 such columns. Successive 'guesses' of Δχ values then produced further putative cam state values. These might either agree or disagree with previous assumptions, and a count of agreements and disagreements was made on these sheets. Where disagreements substantially outweighed agreements, the assumption was made that the Δψ character was not the null character '/', so the relevant assumption was discounted. Progressively, all the cam settings of the chi wheels were deduced, and from them, the psi and motor wheel cam settings. As experience of the method developed, improvements were made that allowed it to be used with much shorter lengths of key than the original 500 or so characters.
Testery The Testery was the section at Bletchley Park that performed the bulk of the work involved in decrypting Tunny messages. By July 1942, the volume of traffic was building up considerably. A new section was therefore set up, led by Ralph Tester—hence the name. The staff consisted mainly of ex-members of the Research Section, and included Peter Ericsson, Peter Hilton, Denis Oswald and Jerry Roberts. The Testery's methods were almost entirely manual, both before and after the introduction of automated methods in the Newmanry to supplement and speed up their work. The first phase of the work of the Testery ran from July to October, with the predominant method of decryption being based on depths and partial depths. After ten days, however, the formulaic start of the messages was replaced by nonsensical quatsch, making decryption more difficult. This period was productive nonetheless, even though each decryption took considerable time. Finally, in September, a depth was received that allowed Turing's method of wheel breaking, "Turingery", to be used, leading to the ability to start reading current traffic. Extensive data about the statistical characteristics of the language of the messages was compiled, and the collection of cribs extended. In late October 1942 the original, experimental Tunny link was closed and two new links (Codfish and Octopus) were opened. With these and subsequent links, the 12-letter indicator system of specifying the message key was replaced by the QEP system. This meant that only full depths could be recognised—from identical QEP numbers—which led to a considerable reduction in traffic decrypted. Once the Newmanry became operational in June 1943, the nature of the work performed in the Testery changed, with decrypts, and wheel breaking no longer relying on depths. British Tunny The so-called "British Tunny Machine" was a device that exactly replicated the functions of the SZ40/42 machines. 
It was used to produce the German cleartext from a ciphertext tape, after the cam settings had been determined. The functional design was produced at Bletchley Park where ten Testery Tunnies were in use by the end of the war. It was designed and built in Tommy Flowers' laboratory at the General Post Office Research Station at Dollis Hill by Gil Hayward, "Doc" Coombs, Bill Chandler and Sid Broadhurst. It was mainly built from standard British telephone exchange electro-mechanical equipment such as relays and uniselectors. Input and output was by means of a teleprinter with paper tape reading and punching. These machines were used in both the Testery and later the Newmanry. Dorothy Du Boisson who was a machine operator and a member of the Women's Royal Naval Service (Wren), described plugging up the settings as being like operating an old fashioned telephone exchange and that she received electric shocks in the process. When Flowers was invited by Hayward to try the first British Tunny machine at Dollis Hill by typing in the standard test phrase: "Now is the time for all good men to come to the aid of the party", he much appreciated that the rotor functions had been set up to provide the following Wordsworthian output: "I WANDERED LONELY AS A CLOUD". Additional features were added to the British Tunnies to simplify their operation. Further refinements were made for the versions used in the Newmanry, the third Tunny being equipped to produce de-chi tapes. Newmanry The Newmanry was a section set up under Max Newman in December 1942 to look into the possibility of assisting the work of the Testery by automating parts of the processes of decrypting Tunny messages. Newman had been working with Gerry Morgan, head of the Research Section on ways of breaking Tunny when Bill Tutte approached them in November 1942 with the idea of what became known as the "1+2 break in". This was recognised as being feasible, but only if automated.
Newman produced a functional specification of what was to become the "Heath Robinson" machine. He recruited the Post Office Research Station at Dollis Hill, and Dr C.E. Wynn-Williams at the Telecommunications Research Establishment (TRE) at Malvern to implement his idea. Work on the engineering design started in January 1943 and the first machine was delivered in June. The staff at that time consisted of Newman, Donald Michie, Jack Good, two engineers and 16 Wrens. By the end of the war the Newmanry contained three Robinson machines, ten Colossus Computers and a number of British Tunnies. The staff were 26 cryptographers, 28 engineers and 275 Wrens. The automation of these processes required the processing of large quantities of punched paper tape such as those on which the enciphered messages were received. Absolute accuracy of these tapes and their transcription was essential, as a single character in error could invalidate or corrupt a huge amount of work. Jack Good introduced the maxim "If it's not checked it's wrong". The "1+2 break in" W. T. Tutte developed a way of exploiting the non-uniformity of bigrams (adjacent letters) in the German plaintext using the differenced cyphertext and key components. His method was called the "1+2 break in," or "double-delta attack". The essence of this method was to find the initial settings of the chi component of the key by exhaustively trying all positions of its combination with the ciphertext, and looking for evidence of the non-uniformity that reflected the characteristics of the original plaintext. The wheel breaking process had to have successfully produced the current cam settings to allow the relevant sequence of characters of the chi wheels to be generated. It was totally impracticable to generate the 22 million characters from all five of the chi wheels, so it was initially limited to 41 × 31 = 1271 from the first two. 
Given that for each of the five impulses i: Zi = χi ⊕ ψi ⊕ Pi and hence Pi = Zi ⊕ χi ⊕ ψi, for the first two impulses: P1 ⊕ P2 = Z1 ⊕ Z2 ⊕ χ1 ⊕ χ2 ⊕ ψ1 ⊕ ψ2. Calculating a putative P1 ⊕ P2 in this way for each starting point of the sequence would yield xs and •s with, in the long run, a greater proportion of •s when the correct starting point had been used. Tutte knew, however, that using the differenced (∆) values amplified this effect because any repeated characters in the plaintext would always generate •, and similarly ∆ψ1 ⊕ ∆ψ2 would generate • whenever the psi wheels did not move on, and about half of the time when they did - some 70% overall. Tutte analyzed a decrypted ciphertext with the differenced version of the above function: ∆Z1 ⊕ ∆Z2 ⊕ ∆χ1 ⊕ ∆χ2 (= ∆P1 ⊕ ∆P2 ⊕ ∆ψ1 ⊕ ∆ψ2) and found that it generated • some 55% of the time. Given the nature of the contribution of the psi wheels, the alignment of the chi-stream with the ciphertext that gave the highest count of •s from ∆Z1 ⊕ ∆Z2 ⊕ ∆χ1 ⊕ ∆χ2 was the one that was most likely to be correct. This technique could be applied to any pair of impulses and so provided the basis of an automated approach to obtaining the de-chi (D) of a ciphertext, from which the psi component could be removed by manual methods. Robinsons Heath Robinson was the first machine produced to automate Tutte's 1+2 method. It was given the name by the Wrens who operated it, after cartoonist William Heath Robinson, who drew immensely complicated mechanical devices for simple tasks, similar to the American cartoonist Rube Goldberg. The functional specification of the machine was produced by Max Newman. The main engineering design was the work of Frank Morrell at the Post Office Research Station at Dollis Hill in North London, with his colleague Tommy Flowers designing the "Combining Unit". Dr C. E. Wynn-Williams from the Telecommunications Research Establishment at Malvern produced the high-speed electronic valve and relay counters. Construction started in January 1943, and the prototype machine was in use at Bletchley Park in June. 
The main parts of the machine were: a tape transport and reading mechanism (dubbed the "bedstead" because of its resemblance to an upended metal bed frame) that ran the looped key and message tapes at between 1000 and 2000 characters per second; a combining unit that implemented the logic of Tutte's method; a counting unit that counted the number of •s, and if it exceeded a pre-set total, displayed or printed it. The prototype machine was effective despite a number of serious shortcomings. Most of these were progressively overcome in the development of what became known as "Old Robinson". Colossus Tommy Flowers' experience with Heath Robinson, and his previous, unique experience of thermionic valves (vacuum tubes) led him to realize that a better machine could be produced using electronics. Instead of the key stream being read from a punched paper tape, an electronically generated key stream could allow much faster and more flexible processing. Flowers' suggestion that this could be achieved with a machine that was entirely electronic and would contain between one and two thousand valves, was treated with incredulity at both the Telecommunications Research Establishment and at Bletchley Park, as it was thought that it would be "too unreliable to do useful work". He did, however, have the support of the Controller of Research at Dollis Hill, W Gordon Radley, and he implemented these ideas producing Colossus, the world's first electronic, digital, computing machine that was at all programmable, in the remarkably short time of ten months. In this he was assisted by his colleagues at the Post Office Research Station Dollis Hill: Sidney Broadhurst, William Chandler, Allen Coombs and Harry Fensom. The prototype Mark 1 Colossus (Colossus I), with its 1500 valves, became operational at Dollis Hill in December 1943 and was in use at Bletchley Park by February 1944. 
This processed the message at 5000 characters per second using the impulse from reading the tape's sprocket holes to act as the clock signal. It quickly became evident that this was a huge leap forward in cryptanalysis of Tunny. Further Colossus machines were ordered and the orders for more Robinsons cancelled. An improved Mark 2 Colossus (Colossus II) contained 2400 valves and first worked at Bletchley Park on 1 June 1944, just in time for the D-day Normandy landings. The main parts of this machine were: a tape transport and reading mechanism (the "bedstead") that ran the message tape in a loop at 5000 characters per second; a unit that generated the key stream electronically; five parallel processing units that could be programmed to perform a large range of Boolean operations; five counting units that each counted the number of •s or xs, and if it exceeded a pre-set total, printed it out. The five parallel processing units allowed Tutte's "1+2 break in" and other functions to be run at an effective speed of 25,000 characters per second by the use of circuitry invented by Flowers that would now be called a shift register. Donald Michie worked out a method of using Colossus to assist in wheel breaking as well as for wheel setting. This was then implemented in special hardware on later Colossi. A total of ten Colossus computers were in use and an eleventh was being commissioned at the end of the war in Europe (VE-Day). Special machines As well as the commercially produced teleprinters and re-perforators, a number of other machines were built to assist in the preparation and checking of tapes in the Newmanry and Testery. The approximate complement as of May 1945 was as follows. Steps in wheel setting Working out the start position of the chi (χ) wheels required first that their cam settings had been determined by "wheel breaking". Initially, this was achieved by two messages having been sent in depth. 
The number of start positions for the first two wheels, χ1 and χ2 was 41×31 = 1271. The first step was to try all of these start positions against the message tape. This was Tutte's "1+2 break in" which involved computing ∆Z1 ⊕ ∆Z2 ⊕ ∆χ1 ⊕ ∆χ2—which gives a putative (∆D1 ⊕ ∆D2)—and counting the number of times this gave •. Incorrect starting positions would, on average, give a dot count of 50% of the message length. On average, the dot count for a correct starting point would be 54%, but there was inevitably a considerable spread of values around these averages. Both Heath Robinson, which was developed into what became known as "Old Robinson", and Colossus were designed to automate this process. Statistical theory allowed the derivation of measures of how far any count was from the 50% expected with an incorrect starting point for the chi wheels. This measure of deviation from randomness was called sigma. Starting points that gave a count of less than 2.5 × sigma, named the "set total", were not printed out. The ideal for a run to set χ1 and χ2 was that a single pair of trial values produced one outstanding value for sigma thus identifying the start positions of the first two chi wheels. An example of the output from such a run on a Mark 2 Colossus with its five counters: a, b, c, d and e, is given below. With an average-sized message, this would take about eight minutes. However, by utilising the parallelism of the Mark 2 Colossus, the number of times the message had to be read could be reduced by a factor of five, from 1271 to 255. Having identified possible χ1, χ2 start positions, the next step was to try to find the start positions for the other chi wheels. In the example given above, there is a single setting of χ1 = 36 and χ2 = 21 whose sigma value makes it stand out from the rest. This was not always the case, and Small enumerates 36 different further runs that might be tried according to the result of the χ1, χ2 run. 
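The counting run described above can be sketched in a few lines of code. This is a hedged illustration only, not a reconstruction of Colossus: the bit convention (0 for •, 1 for x), the synthetic streams, and the function names are assumptions, and the statistical model simply follows the text (an incorrect setting gives a binomial dot count with mean N/2 and sigma = sqrt(N)/2).

```python
import math

# 0 = dot (•), 1 = cross (x); each stream is a list of bits for one impulse.

def delta(bits):
    """Differenced (∆) stream: each element XORed with its successor."""
    return [a ^ b for a, b in zip(bits, bits[1:])]

def dot_count(z1, z2, chi1, chi2, start1, start2):
    """Count dots in ∆Z1 ⊕ ∆Z2 ⊕ ∆χ1 ⊕ ∆χ2 for trial chi start positions.

    chi1 and chi2 are the full cam-setting cycles (41 and 31 cams),
    rotated to the trial start positions against the ciphertext impulses.
    """
    dz1, dz2 = delta(z1), delta(z2)
    n = len(dz1)
    dc1 = delta([chi1[(start1 + i) % len(chi1)] for i in range(n + 1)])
    dc2 = delta([chi2[(start2 + i) % len(chi2)] for i in range(n + 1)])
    return sum(1 for i in range(n)
               if dz1[i] ^ dz2[i] ^ dc1[i] ^ dc2[i] == 0)

def set_total(n_chars, threshold_sigmas=2.5):
    """Incorrect settings give binomial(N, 1/2) counts, so sigma = sqrt(N)/2;
    only counts above mean + 2.5 sigma were printed out."""
    return n_chars / 2 + threshold_sigmas * math.sqrt(n_chars) / 2
```

For a 10,000-character message the set total works out to 5125 dots, so a correct setting averaging 54% (about 5400 dots) clears the threshold while incorrect settings cluster near 5000.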
At first the choices in this iterative process were made by the cryptanalyst sitting at the typewriter output, and calling out instructions to the Wren operators. Max Newman devised a decision tree and then set Jack Good and Donald Michie the task of devising others. These were used by the Wrens without recourse to the cryptanalysts if certain criteria were met. In one of Small's examples above, the next run was with the first two chi wheels set to the start positions found and three separate parallel explorations of the remaining three chi wheels. Such a run was called a "short run" and took about two minutes. So the probable start positions for the chi wheels are: χ1 = 36, χ2 = 21, χ3 = 01, χ4 = 19, χ5 = 04. These had to be verified before the de-chi (D) message was passed to the Testery. This involved Colossus performing a count of the frequency of the 32 characters in ΔD. Small describes the check of the frequency count of the ΔD characters as being the "acid test", and that practically every cryptanalyst and Wren in the Newmanry and Testery knew the contents of the following table by heart. If the derived start points of the chi wheels passed this test, the de-chi-ed message was passed to the Testery where manual methods were used to derive the psi and motor settings. As Small remarked, the work in the Newmanry took a great amount of statistical science, whereas that in the Testery took much knowledge of language and was of great interest as an art. Cryptanalyst Jerry Roberts made the point that this Testery work was a greater load on staff than the automated processes in the Newmanry. 
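The "acid test" frequency count lends itself to a short sketch, again as an illustration rather than the Newmanry's actual tabulation: characters are held as 5-bit integers (an assumption about representation), and differencing is a character-wise XOR with the successor.

```python
from collections import Counter

def delta_freq(de_chi):
    """Tally the 32 possible ΔD characters of a de-chi stream.

    de_chi is a sequence of 5-bit integers (0-31). A correct chi setting
    gives the skewed distribution the codebreakers knew by heart; an
    incorrect one looks close to uniform.
    """
    return Counter(a ^ b for a, b in zip(de_chi, de_chi[1:]))
```

Repeated plaintext characters push ΔD towards the all-dot character (value 0), so under a correct setting its count is expected to stand out from the other 31.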
See also The National Museum of Computing Bibliography Updated and extended version of Action This Day: From Breaking of the Enigma Code to the Birth of the Modern Computer, Bantam Press, 2001 Transcript of a lecture given by Prof. Tutte at the University of Waterloo
33658868
https://en.wikipedia.org/wiki/Point%20of%20care
Point of care
Clinical point of care (POC) is the point in time when clinicians deliver healthcare products and services to patients. Clinical documentation Clinical documentation is a record of the critical thinking and judgment of a health care professional, facilitating consistency and effective communication among clinicians. Documentation performed at the clinical point of care can be conducted using paper or electronic formats. This process aims to capture medical information pertaining to the patient's healthcare needs. The patient's health record is a legal document that contains details regarding the patient's care and progress. The types of information captured during clinical point of care documentation include the actions taken by clinical staff, including physicians and nurses, and the patient's healthcare needs, goals, diagnosis and the type of care they have received from healthcare providers. Such documentation provides evidence of safe, effective and ethical care and establishes accountability for healthcare institutions and professionals. Furthermore, accurate documents provide a rigorous foundation for quality-of-care analysis that can facilitate better health outcomes for patients. Thus, regardless of the format used to capture clinical point of care information, these documents are imperative in providing safe healthcare. It is also important to note that electronic formats of clinical point of care documentation are not intended to replace existing clinical processes but to enhance the current documentation process. Traditional approach One of the major responsibilities of nurses in healthcare settings is to forward information about the patient's needs and treatment to other healthcare professionals. Traditionally, this has been done verbally. 
However, information technology has now made its entrance into the healthcare system, and verbal transfer of information is becoming obsolete. In the past few decades, nurses have witnessed a change toward a more independent practice with explicit knowledge of nursing care. The obligation of point of care documentation applies not only to the interventions performed, medical and nursing, but also to the decision-making process, explaining why a specific action was prompted by the nurse. The main benefit of point of care documentation is advancing structured communication between healthcare professionals to ensure the continuity of patient care. Without a structured care plan that is closely followed, care tends to become fragmented. Electronic documentation Point of care (POC) documentation is the ability for clinicians to document clinical information while interacting with and delivering care to patients. The increased adoption of electronic health records (EHR) in healthcare institutions and practices creates the need for electronic POC documentation through the use of various medical devices. POC documentation is meant to assist clinicians by minimizing time spent on documentation and maximizing time for patient care. The type of medical device used is important in ensuring that documentation can be effectively integrated into the clinical workflow of a particular clinical environment. Devices Mobile technologies such as personal digital assistants (PDAs), laptop computers and tablets enable documentation at the point of care. The selection of a mobile computing platform is contingent upon the amount and complexity of data. To ensure successful implementation, it is important to examine the strengths and limitations of each device. Tablets are more functional for high-volume and complex data entry, and are favoured for their screen size and capacity to run more complex functions. 
PDAs are more functional for low-volume and simple data entry, and are preferred for their light weight, portability and long battery life. Electronic medical record An electronic medical record (EMR) contains a patient's current and past medical history. The types of information captured within this document include the patient's medical history, medication allergies, immunization status, laboratory and diagnostic test images, vital signs and patient demographics. This type of electronic documentation enables healthcare providers to use evidence-based decision support tools and share the document via the Internet. Moreover, there are two types of software included within an EMR: practice management and EMR clinical software. Consequently, the EMR is able to capture both administrative and clinical data. Computerized physician order entries A computerized physician order entry (CPOE) system allows medical practitioners to input medical instructions and treatment plans for patients at the point of care. CPOE systems also enable healthcare practitioners to use decision support tools to detect medication prescription errors and override non-standard medication regimes that may cause fatalities. Furthermore, embedded algorithms may be chosen for people of a certain age and weight to further support the clinical point of care interaction. Overall, such systems reduce errors due to illegible handwriting on paper and transcribing errors. Mobile EMRs Mobile devices and tablets provide access to the electronic medical record during the clinical point of care documentation process. Mobile technologies such as Android phones, iPhones, BlackBerrys, and tablets feature touchscreens to further support ease of use for physicians. Furthermore, mobile EMR applications support workflow portability needs, allowing clinicians to document patient information at the patient's bedside. 
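As one illustration of the age- and weight-based decision support described above, a CPOE-style dose check might look like the following sketch. The function name, thresholds and warning texts are invented for illustration and do not come from any real CPOE product.

```python
# Hypothetical sketch of a CPOE decision-support rule: flag an order whose
# dose exceeds a weight-based maximum, and warn on paediatric orders so the
# prescriber must confirm against a dosing reference. All names and
# thresholds here are illustrative assumptions.

def check_dose(age_years, weight_kg, dose_mg, max_mg_per_kg):
    """Return a list of warnings for an order; empty means no flags."""
    warnings = []
    if dose_mg > weight_kg * max_mg_per_kg:
        warnings.append("dose exceeds weight-based maximum")
    if age_years < 12:
        warnings.append("paediatric patient: confirm dosing reference")
    return warnings
```

A real system would draw its thresholds from a maintained drug database rather than hard-coded parameters; the point here is only the shape of the check applied at order entry.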
Advantages Workflow The use of POC documentation devices changes clinical practice by affecting workflow processes and communication. With the availability of POC documentation devices, for example, nurses can avoid having to go to their desk space and wait for a desktop computer to become available. They are able to move from patient to patient, eliminating steps in the work process altogether. Furthermore, redundant tasks are avoided, as data is captured directly from the particular encounter without the need for transcription. Safety A delay between face-to-face patient care and clinical documentation can cause corruption of data, leading to errors in treatment. Giving clinicians the ability to document clinical information when and where care is being delivered allows for accuracy and timeliness, contributing to increased patient safety in a dynamic and highly interruptive environment. Point of care documentation can reduce errors in a variety of clinical tasks including diagnostics, medication prescribing and medication administration. Collaboration and communication Ineffective communication among patient care team members is a root cause of medical errors and other adverse events. Point of care documentation facilitates the continuity of high-quality care and improves communication between nurses and other healthcare providers. Proper documentation at the point of care can optimize the flow of information among clinicians and enhance communication. Clinicians can access patient information at the bedside instead of going to a workstation. It also enables timely documentation, which is important to prevent adverse events. Nurse-patient time Literature from various studies shows that approximately 25-50% of a nurse's shift is spent on documentation. As most documentation is done in the traditional manner, that is, using paper and pen, enabling a POC documentation device could potentially allow 25-50% more time at the bedside. 
Using speech recognition and information extraction has been studied as a way to support nurses in POC documentation, with encouraging results: 5276 of 7277 test words were recognised correctly, and information extraction achieved an F1 score of 0.86 for the irrelevant-text category and a macro-averaged F1 of 0.70 over the remaining 35 non-empty categories of the nursing handover form, using 101 test documents. Disadvantages Complexities Numerous point of care documentation systems produce data redundancies, inconsistencies and irregularities in charting. Some electronic formats are repetitious and time-consuming. Moreover, some point of care documentation varies from one setting to another without a standardized pattern, and there are no guidelines or standards for documenting. Inaccessibility also causes time to be lost in searching for charts. These issues all lead to wasted time, increased costs and uncomfortable charting. A study adopting both qualitative and quantitative methods has confirmed complexities in point of care documentation. The study categorized these complexities into three themes: disruption of documentation, incompleteness in charting, and inappropriate charting. As a result, these barriers limit nurses' competence, motivation and confidence; lead to ineffective nursing procedures; and result in inadequate nursing auditing, supervision and staff development. Privacy and security When examining the use of any type of technology in healthcare, it is important to remember that the technology holds private personal health information. As such, security measures need to be in place to minimize the risk of breaches of privacy and patient confidentiality. Depending on the country you live in, it is important to ensure that legislative standards are met. According to Collier in 2012, privacy and confidentiality breaches are rising, largely attributed to the lack of appropriate encryption technology. 
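The macro-averaged F1 quoted earlier for the speech-recognition study is a standard metric, computed as the unweighted mean of per-category F1 scores. The following is a generic sketch of that computation, not code from the study.

```python
# Per-category F1 from true-positive / false-positive / false-negative
# counts, and the macro-average over categories (each category weighted
# equally regardless of size).

def f1(tp, fp, fn):
    """F1 = 2*tp / (2*tp + fp + fn); defined as 0.0 when tp is zero."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_f1(per_category):
    """per_category: list of (tp, fp, fn) tuples, one per category."""
    scores = [f1(*c) for c in per_category]
    return sum(scores) / len(scores)
```

Because every category counts equally, a macro-average of 0.70 over 35 categories can coexist with much higher scores on the frequent categories, which is why the study reports the irrelevant-text category (0.86) separately.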
For the successful implementation of any health technology, it is vital to ensure that adequate security measures, such as strong encryption technology, are used. Countries Canada Ontario The adoption of electronic formats of clinical point of care documentation is particularly low in Ontario. Consequently, provincial leaders such as eHealth Ontario and Ontario MD provide financial and technical assistance in supporting electronic documentation of the clinical point of care through EMRs. Currently, more than six million Ontarians have an EMR; by 2012 this number is expected to increase to 10 million. Continued efforts are being made to adopt charting of patient information in electronic format to improve the quality of clinical point of care services. See also Adoption of Electronic Medical Records in U.S. Hospitals Personal health record Point-of-care testing