Stata (STAY-ta, occasionally stylized as STATA) is a general-purpose statistical software package developed by StataCorp for data manipulation, visualization, statistics, and automated reporting. It is used by researchers in many fields, including biomedicine, economics, epidemiology, and sociology.
Stata was initially developed by the Computing Resource Center in California, and the first version was released in 1985. In 1993, the company moved to College Station, Texas, and was renamed Stata Corporation, now known as StataCorp. A major release in 2003 included a new graphics system and dialog boxes for all commands. Since then, a new version has been released once every two years. The current version is Stata 18, released in April 2023.
Technical overview and terminology
User interface
Stata has employed an integrated command-line interface since its creation. Starting with version 8.0, Stata has also included a graphical user interface, based on the Qt framework, which uses menus and dialog boxes to give access to many built-in commands. The dataset can be viewed or edited in spreadsheet format. From version 11 on, other commands can be executed while the data browser or editor is open.
Data structure and storage
Until the release of version 16, Stata could only hold a single dataset in memory at a time. Stata gives users flexibility in assigning storage types to variables: integers can be stored in one or two bytes rather than four, and single precision (4 bytes) rather than double precision (8 bytes) is the default for floating-point numbers. The compress command automatically reassigns variables to the storage types that take up the least memory without loss of information.
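As a minimal illustration of how compress behaves in practice (this sketch uses Stata's bundled auto example dataset, an illustrative choice rather than anything referenced above):

```stata
* Load Stata's built-in example dataset
sysuse auto, clear

* Show the current storage type of each variable
describe

* Reassign each variable to the smallest storage type
* that can hold its values without loss of information
compress

* Confirm the new, smaller storage types
describe
```

Running compress on a freshly imported dataset is a common first step, since imported variables often default to larger storage types than they need.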
Stata's data format is always tabular; Stata refers to the columns of tabular data as variables.
Data format compatibility
Stata can import data in a variety of formats. This includes ASCII data formats (such as CSV or databank formats) and spreadsheet formats (including various Excel formats).
Stata's proprietary file formats have changed over time, although not every Stata release includes a new dataset format. Every version of Stata can read all older dataset formats, and can write both the current format and, using the saveold command, the most recent previous format. Thus, the current Stata release can always open datasets that were created with older versions, but older versions cannot read newer-format datasets.
Stata can read and write SAS XPORT format datasets natively, using the fdause and fdasave commands.
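A brief illustrative sketch of these commands follows; the file names are hypothetical placeholders, import delimited is only one of several import commands Stata provides, and the SAS XPORT commands have been renamed in more recent releases, so the exact syntax may vary by version:

```stata
* Import a comma-separated text file (hypothetical file name)
import delimited "survey.csv", clear

* Save in the current dataset format, then again in the
* most recent previous format using saveold
save "survey.dta", replace
saveold "survey_old.dta", replace

* Read and write SAS XPORT transport files
fdause "survey.xpt", clear
fdasave "survey_out.xpt", replace
```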
Some other econometric applications, including gretl, can directly import Stata file formats.
History
Origins
The development of Stata began in 1984, initially by William (Bill) Gould and later by Sean Becketti. The software was originally intended to compete with statistical programs for personal computers such as SYSTAT and MicroTSP. Stata was written, then as now, in the C programming language, initially for PCs running the DOS operating system. The first version was released in 1985 with 44 commands.
Development
There have been 17 major releases of Stata between 1985 and 2021, and additional code and documentation updates between major releases. In its early years, extra sets of Stata programs were sometimes sold as "kits" or distributed as Support Disks. With the release of Stata 6 in 1999, updates began to be delivered to users via the web. The initial release of Stata was for the DOS operating system. Since then, versions of Stata have been released for Unix variants (including Linux), Windows, and macOS. All Stata files are platform-independent.
Hundreds of commands have been added to Stata in its 37-year history. Certain developments have proved to be particularly important and continue to shape the user experience today, including extensibility, platform independence, and the active user community.
Extensibility
The program command was implemented in Stata 1.2, giving users the ability to add their own commands. Ado-files followed in Stata 2.1, allowing a user-written program to be loaded into memory automatically. Many user-written ado-files are submitted to the Statistical Software Components (SSC) archive hosted by Boston College. StataCorp added the ssc command to allow community-contributed programs to be installed directly from within Stata. More recent editions of Stata allow users to call Python and R scripts from Stata commands, and allow Python environments such as Jupyter Notebook to run Stata commands.
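As a rough sketch of both mechanisms, the snippet below installs a community-contributed package from the SSC archive and runs an inline Python block (the python block requires Stata 16 or later; estout is simply one widely used SSC package chosen for illustration):

```stata
* Install a community-contributed package from the SSC archive
ssc install estout

* Run Python code from within Stata (Stata 16 or later)
python:
print("Hello from Python inside Stata")
end
```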
User community
A number of important developments were initiated by Stata's active user community. The Stata Technical Bulletin, which often contained user-created commands, was introduced in 1991 and issued six times a year. It was relaunched in 2001 as the peer-reviewed Stata Journal, a quarterly publication containing descriptions of community-contributed commands and tips for the effective use of Stata. In 1994, a listserv began as a hub for users to collaboratively solve coding and technical issues; in 2014, it was converted into a web forum. In 1995, StataCorp began organizing user and developer conferences that meet annually. Of these, only the annual Stata Conference held in the United States is hosted by StataCorp; user group meetings are held annually in the UK, Germany, and Italy, less frequently in several other countries, and are hosted by local Stata distributors in their own countries.
Software products
There are four builds of Stata: Stata/MP, Stata/SE, Stata/BE, and Numerics by Stata. Whereas Stata/MP provides built-in parallel processing of certain commands, Stata/SE and Stata/BE are limited to a single core. Running on four CPU cores, Stata/MP executes such commands about 2.4 times faster than the SE or BE versions, roughly 60% of the theoretical maximum speedup. Numerics by Stata allows Stata commands to be embedded in web applications.
The builds differ in the size of dataset they can handle. Stata/MP can store 10 to 20 billion observations and up to 120,000 variables, while Stata/SE and Stata/BE store up to 2.14 billion observations and handle 32,767 and 2,048 variables respectively. The maximum number of independent variables in a model is 65,532 in Stata/MP, 10,998 in Stata/SE, and 798 in Stata/BE.
The pricing and licensing of Stata depends on its intended use: business, government/nonprofit, education, or student. Single-user licenses are either renewable annually or perpetual. Other license types include a license for use by concurrent users, a site license, volume single-user licensing for bulk pricing, and a student lab license.
Example code
The following set of commands revolves around simple data management.
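The specific commands from the original article are not reproduced here; the following is an illustrative sketch using Stata's bundled auto example dataset:

```stata
* Load Stata's built-in example dataset
sysuse auto, clear

* Inspect the structure and the first few observations
describe
list make price mpg in 1/5

* Create and label a new variable
generate price_per_mpg = price / mpg
label variable price_per_mpg "Price divided by mileage"

* Keep only foreign cars and sort by price
keep if foreign == 1
sort price
```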
The next set of commands moves on to descriptive statistics.
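Again as an illustrative sketch on the auto dataset:

```stata
sysuse auto, clear

* Summary statistics for several continuous variables
summarize price mpg weight

* Detailed summary (percentiles, skewness, kurtosis) for one variable
summarize price, detail

* One-way and two-way frequency tables
tabulate foreign
tabulate foreign rep78
```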
A simple hypothesis test:
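For example, a two-sample t test on the auto dataset (illustrative only):

```stata
sysuse auto, clear

* Test whether mean mileage differs between domestic and foreign cars
ttest mpg, by(foreign)
```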
Graphing data:
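A minimal illustrative sketch, again using the auto dataset:

```stata
sysuse auto, clear

* Histogram of a single variable
histogram mpg

* Scatter plot of two variables
scatter mpg weight
```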
Linear regression:
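A minimal illustrative sketch, again using the auto dataset:

```stata
sysuse auto, clear

* Regress price on mileage and weight, with an indicator for foreign cars
regress price mpg weight i.foreign
```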
See also
List of statistical packages
Comparison of statistical packages
Data analysis
Further reading
Bittmann, Felix (2019). Stata - A Really Short Introduction. Boston: DeGruyter Oldenbourg. ISBN 978-3-11061-729-0.
Pinzon, Enrique, ed. (2015). Thirty Years with Stata: A Retrospective. College Station, Texas: Stata Press. ISBN 978-1-59718-172-3.
Hamilton, Lawrence C. (2013). Statistics with STATA. Boston: Cengage. ISBN 978-0-84006-463-9.
Official website
Stata Journal
Stata Press
Stata Technical Bulletin
Statistical Software Components Archive
Teacher forcing is an algorithm for training the weights of recurrent neural networks (RNNs). It involves feeding observed sequence values (i.e. ground-truth samples) back into the RNN after each step, thus forcing the RNN to stay close to the ground-truth sequence.
The term "teacher forcing" can be motivated by comparing the RNN to a human student taking a multi-part exam where the answer to each part (for example a mathematical calculation) depends on the answer to the preceding part. In this analogy, rather than grading every answer at the end, with the risk that the student fails every single part even though they only made a mistake in the first one, a teacher records the score for each individual part and then tells the student the correct answer, to be used in the next part.
The use of an external teacher signal stands in contrast to real-time recurrent learning (RTRL). Teacher signals are known from oscillator networks. The promise is that teacher forcing reduces training time.
The term "teacher forcing" was introduced in 1989 by Ronald J. Williams and David Zipser, who reported that the technique was already being "frequently used in dynamical supervised learning tasks" around that time. A NeurIPS 2016 paper introduced the related method of "professor forcing".
Master Class is a 1995 play by American playwright Terrence McNally, presented as a fictional master class by opera singer Maria Callas near the end of her life, in the 1970s. The play features incidental vocal music by Giuseppe Verdi, Giacomo Puccini, and Vincenzo Bellini. The play opened on Broadway in 1995, with stars Zoe Caldwell and Audra McDonald winning Tony Awards.
Plot
The opera diva Maria Callas, a glamorous, commanding, larger-than-life, caustic, and surprisingly funny pedagogue is holding a singing master class. Alternately dismayed and impressed by the students who parade before her, she retreats into recollections about the glories of her own life and career. Included in her musings are her younger years as an ugly duckling, her fierce hatred of her rivals, the unforgiving press that savaged her early performances, her triumphs at La Scala, and her relationship with Aristotle Onassis. It culminates in a monologue about sacrifice taken in the name of art.
Production history
The play was originally staged by the Philadelphia Theatre Company in March 1995, then at the Mark Taper Forum and the Kennedy Center.
The play premiered on Broadway at the John Golden Theatre on November 15, 1995, and closed on June 29, 1997, after 598 performances and twelve previews. Directed by Leonard Foglia, the original cast featured Zoe Caldwell (as Callas), Audra McDonald (as Sharon), Karen Kay Cody, David Loud, Jay Hunter Morris, and Michael Friel. Patti LuPone (from July 1996) and Dixie Carter (from January 1997) subsequently replaced Caldwell as Callas, Matthew Walley replaced Morris, and Alaine Rodin replaced McDonald later in the run. LuPone played the role in the West End production at the Queens Theatre, opening in April 1997 (previews), and Faye Dunaway played the role in the U.S. national tour in 1996.
Master Class ran at the Kennedy Center from March 25, 2010, to April 18, 2010, directed by Stephen Wadsworth and starring Tyne Daly as Callas. The play was then revived on Broadway in a Manhattan Theatre Club production at the Samuel J. Friedman Theatre, running from June 14, 2011 (previews) to September 4, 2011, for 70 regular performances and 26 previews. Directed by Stephen Wadsworth, the cast featured Tyne Daly as Callas, with Sierra Boggess as Sharon and Alexandra Silber as Sophie. This production transferred to the West End at the Vaudeville Theatre from January to April 2012, with Daly as Callas and Naomi O'Connell as Sharon.
A 2010/11 UK touring production of the play starred Stephanie Beacham as Callas. A production in Paris, Master Class – La leçon de chant (the singing lesson), in 1997 starred Fanny Ardant as Callas and was directed by Roman Polanski. In 1997, Norma Aleandro played the role of Maria Callas at the Teatro Maipo in Buenos Aires, directed by Agustín Alezzo. In 2012, Aleandro and Alezzo did a new version of the play.
An Australian production in 1997 starred Robyn Nevin as Callas. Nevin played the role in Brisbane and Sydney. Amanda Muggleton then played Callas in Adelaide in 1998 and Melbourne in 1999. Muggleton reprised the role in the 2001/02 Australian touring production and won the 2002 Helpmann Award for Best Actress in a Play.
Jelisaveta Seka Sablić played Callas in the 1997 production at the Bitef theater, before touring other Belgrade and Serbian theaters, and Switzerland in 2005. Soprano Radmila Smiljanić was the music supervisor. Sablić was awarded the Miloš Žutić Award for the role.
In 2014, Maria Mercedes brought the work to life again in Australia to critical acclaim: "It's an awe-inspiring performance by any measure." She was nominated for a number of awards, winning the Green Room Award for Female Performer for Independent Theatre. Her portrayal marked the first time in professional theatre that a woman of Greek heritage had played Maria Callas. The production moved to Sydney in August 2015, before returning to Melbourne in September.
In 2018 and 2019, a production of Master Class took place in Athens, Greece, at the Dimitris Horn Theatre, with Greek actress Maria Nafpliotou in the starring role. The production also received critical acclaim and by February 2019 had counted 125 consecutive sold-out performances.
Critical reception
Ben Brantley, in his review of the 2011 Broadway revival for The New York Times wrote that, although Master Class is not "a very good play", he felt that Tyne Daly "transforms that script into one of the most haunting portraits I've seen of life after stardom."
Awards and nominations
Master Class won both the 1996 Drama Desk Award for Outstanding New Play and the 1996 Tony Award for Best Play. Zoe Caldwell won the 1996 Tony Award for Best Actress in a Play, and Audra McDonald won the 1996 Tony Award for Best Featured Actress in a Play. The 2011 revival received a 2012 Tony Award nomination for Best Revival of a Play.
Master Class at the Internet Broadway Database
Master Class at Dramatists Play Service
MM7 may refer to:
MM7 (MMS), a type of Multimedia Messaging Service interface
MM7 register, a CPU register used by the MMX extension
Mega Man 7, a 1995 SNES game in the Mega Man video game series
Might and Magic VII: For Blood and Honor, a 1999 PC role-playing video game
mM7, m/M7 or m(M7), chord symbols for a minor major seventh chord
"MM7", a 2022 song by Jer Lau |
Alive may refer to:
Life
Books, comics and periodicals
Alive (novel), a 2015 novel by Scott Sigler
Alive: The Final Evolution, a 2003 shonen manga by Tadashi Kawashima and Adachitoka
Alive: The Story of the Andes Survivors, a 1974 book by Piers Paul Read
Alive (magazine), a monthly Canadian natural health magazine
Alive! (newspaper), an Irish Catholic newspaper
Film
Alive (1993 film), a film by Frank Marshall based on the book Alive: The Story of the Andes Survivors
Alive: 20 Years Later, a 1993 documentary about the book Alive: The Story of the Andes Survivors and the Frank Marshall film
Alive (2002 film), a Japanese horror film by Ryuhei Kitamura based on the manga ALIVE
Alive, a 2003 DVD by Audio Adrenaline
Alive (2006 film), a Russian film by Aleksandr Veledinsky
Alive (Meshuggah video), a 2010 concert film
Alive (2014 film), a South Korean film by Park Jung-bum
Alive, a 2016 Overwatch animated short film
Alive (2020 film), a South Korean film by Cho Il-hyung
Alive Films, an American production company co-founded by Carolyn Pfeiffer
Music
Albums
Alive (10cc album), 1993
Alive (3rd Party album), 1997
Alive!! (Becca album), 2008
Alive (Big Bang album) or the title song, 2012
Alive (Bruce Dickinson album), 2005
Alive (Chick Corea album), 1991
Alive (Do As Infinity album) or the title song, 2018
Alive (Dr. Sin album), 1999
Alive (Ed Kowalczyk album), 2010
Alive! (Grant Green album), 1970
Alive (Gryffin album) or the title song, 2022
Alive (Hiromi album) or the title song, 2014
Alive (Jacky Terrasson album), 1998
Alive (Jessie J album) or the title song, 2013
Alive (Kate Ryan album) or the title song (see below), 2006
Alive (Kim Kyung-ho album), 2009
Alive! (Kiss album), 1975
Alive (Nitty Gritty Dirt Band album), 1969
Alive (Rising Appalachia album), 2017
Alive (Sa Dingding album) or the title song, 2007
Alive (Shawn Desman album) or the title song, 2013
Alive! (Snot album), 2002
Alive! (Turbo album), 1986
Alive (in concert), by Axelle Red, 2000
Alive! The Millennium Concert, by Kiss, 2006
Kenny Loggins Alive, 1980
Alive or the title song, by Ben Haenow, 2018
Alive, by Cimorelli, 2016
Alive, by Crystal Bowersox, 2017
Alive, by Julie Roberts, 2011
Alive 1997, by Daft Punk
Alive 2007, by Daft Punk
Mixtapes
Alivë by Yeat, 2021
EPs
Alive (Adler's Appetite EP) or the title song, 2012
Alive (Big Bang EP) or the title song, 2012
Alive, by Target, 2018
Songs
"Alive" (Beastie Boys song), 1999
"Alive" (Bee Gees song), 1972
"Alive" (Black Eyed Peas song), 2009
"Alive" (Breed 77 song), 2006
"Alive" (Chase & Status song), 2013
"Alive" (Dami Im song), 2013
"Alive" (Empire of the Sun song), 2013
"Alive" (Goldfrapp song), 2010
"Alive" (Jennifer Lopez song), 2002
"Alive" (Kate Ryan song), 2006
"Alive" (Krewella song), 2013
"Alive" (Lo-Pro song), 2010
"Alive" (Melissa O'Neil song), 2005
"Alive!" (Mondotek song), 2007
"Alive" (Natalie Bassingthwaighte song), 2008
"Alive" (Pearl Jam song), 1991
"Alive" (P.O.D. song), 2001
"Alive" (Rebecca St. James song), 2005
"Alive" (Rüfüs Du Sol song), 2021
"Alive" (S Club song), 2002
"Alive" (Sia song), 2015
"Alive" (Sonique song), 2003
"Alive" (Vincent Bueno song), 2020
"Alive"/"Physical Thing", by Koda Kumi, 2009
"Alive", by Adelitas Way from Home School Valedictorian, 2011
"Alive", by Breed 77 from In My Blood (En Mi Sangre), 2006
"Alive", by Cheap Trick from The Latest, 2009
"Alive", by Daft Punk from The New Wave, 1994
"Alive", by Dala from Everyone Is Someone, 2009
"Alive", by Drugstore, 1993
"Alive", by Edwin from Another Spin Around the Sun, 1999
"Alive", by Good Charlotte from Cardiology, 2010
"Alive", by Gravity Kills from Perversion, 1998
"Alive", by Hawk Nelson from Live Life Loud, 2009
"Alive", by Jennifer Brown, 1998
"Alive", by Khalid from Free Spirit, 2019
"Alive", by Kid Cudi from Man on the Moon: The End of Day, 2009
"Alive", by Korn from Neidermayer's Mind, 1993
"Alive", by Leona Lewis from Echo, 2009
"Alive", by Lightsum, 2022
"Alive", by Meat Loaf from Bat Out of Hell III: The Monster Is Loose, 2006
"Alive", by Milky Chance from Blossom, 2017
"Alive", by Monni, 2019
"Alive", by Mýa from K.I.S.S. (Keep It Sexy & Simple), 2011
"Alive", by Oasis from Shakermaker, 1994
"Alive", by One Direction from Midnight Memories, 2013
"Alive", by Ozzy Osbourne from Down to Earth, 2001
"Alive", by Shihad from Love Is the New Hate, 2005
"Alive", by Sick Individuals, 2016
"Alive", by SR-71 from Now You See Inside, 2000
"Alive", by Steve Aoki, 2017
"Alive", by Wage War from Blueprints, 2015
"Alive", by X Japan from Vanishing Vision, 1988
"Alive!", from the musical Jekyll & Hyde, 1997
"Alive", from the video game The Idolmaster Dearly Stars, 2009
"Alive (N' Out Of Control)", by Papa Roach from The Paramour Sessions, 2006
Events
Alive 2006/2007, a concert tour by Daft Punk
Alive! Tour, a 1975–1976 concert tour by Kiss
Alive Festival, a Christian music festival in Mineral City, Ohio, US
Other music
Alive Naturalsound Records, a record label
"Alive", a television news music package by 615 Music
Other uses
"Alive!" (Cow and Chicken), a television episode
"Alive!" (Justice League Unlimited), a television episode
Alive 90.5, a radio station based in Sydney, Australia
Alive, a 1998 PlayStation video game developed by General Entertainment
Alive 1997, 2001 Daft Punk live album
Alive 2007, 2008 Daft Punk live album
All pages with titles beginning with Alive
All pages with titles containing Alive
Life (disambiguation)
Live (disambiguation)
Living (disambiguation)
Chick-fil-A, Inc. (CHIK-fil-AY, a play on the American English pronunciation of "filet") is an American fast food restaurant chain and the largest chain specializing in chicken sandwiches. Headquartered in College Park, Georgia, Chick-fil-A operates 2,973 restaurants across 48 states, as well as in the District of Columbia and Puerto Rico. The company also has operations in Canada, and previously had restaurants in the United Kingdom and South Africa. The restaurant has a breakfast menu, and a lunch and dinner menu. The chain also provides catering services.
Many of the company's values are influenced by the Christian religious beliefs of its late founder, S. Truett Cathy (1921–2014), a devout Southern Baptist. Reflecting a commitment to Sunday Sabbatarianism, all Chick-fil-A restaurants are closed for business on Sundays, Thanksgiving, and Christmas Day. During the Western Christian liturgical season of Lent, Chick-fil-A promotes fish sandwiches, following the Christian tradition of abstinence from meat during Lent. The company's conservative opposition to same-sex marriage has caused controversy, though the company began to loosen its stance on this issue from 2019. Despite numerous controversies and boycott attempts, the 2022 American Customer Satisfaction Index found that Chick-fil-A remained the country's favorite fast food chain for the eighth consecutive year.
History
The chain's origin can be traced to the Dwarf Grill (now the Dwarf House), a restaurant opened by S. Truett Cathy, the chain's former chairman and CEO, in Hapeville, Georgia, a suburb of Atlanta, in 1946. The restaurant is near the former site of the Ford Motor Company Atlanta Assembly Plant, for many years a source of many of its patrons before the plant was demolished.
In 1961, after 15 years in the fast-food business, Cathy found a pressure-fryer that could cook the chicken sandwich in the same time it took to cook a hamburger. Following this discovery, he registered the name Chick-fil-A, Inc. The company's trademarked slogan, "We Didn't Invent the Chicken, Just the Chicken Sandwich," refers to its flagship menu item, the Chick-fil-A chicken sandwich. Though Chick-fil-A was the first national chain to make a fast, fried chicken sandwich its flagship item, it has been shown that Cathy's claim to have "invented the chicken sandwich" is false.
From 1964 to 1967, the sandwich was licensed to over fifty eateries, including Waffle House and the concession stands of the new Houston Astrodome. The Chick-fil-A sandwich was withdrawn from sale at other restaurants when the first standalone location opened in 1967, in the food court of the Greenbriar Mall in Atlanta.
During the 1970s and early 1980s, the chain expanded by opening new locations in suburban malls' food courts. The first freestanding location was opened on April 16, 1986, on North Druid Hills Road in Atlanta, Georgia, and the company began to focus more on stand-alone units rather than food courts. Although it has expanded outward from its original geographic base, most new restaurants are located in Southern US suburban areas.
Since 1997, the Atlanta-based company has sponsored the Peach Bowl, an annual college football bowl game played in Atlanta on New Year's Eve. Chick-fil-A also sponsors the Southeastern Conference and the Atlantic Coast Conference of college athletics.
In 2008, Chick-fil-A was among the first fast-food restaurants to become completely free of trans fats. In October 2015, the company opened its largest restaurant, a three-story 5,000 square feet (460 m2) restaurant in Manhattan.
On December 17, 2017, Chick-fil-A broke its tradition and opened on a Sunday to prepare meals for passengers left stranded during a power outage at Atlanta Hartsfield-Jackson International Airport, and on January 13, 2019, a Chick-fil-A franchise in Mobile, Alabama, opened on a Sunday to honor a birthday wish of a 14-year-old boy with cerebral palsy and autism.
On February 13, 2023, the chain began offering its first non-meat sandwich, a breaded cauliflower sandwich. In May 2023, Chick-fil-A closed its first stand-alone restaurant, in Greenbriar Mall, Atlanta, without stating a reason.
Business model
Chick-fil-A's business strategy involves a focus on a small menu and on customer service. While many fast food chains offer a wide range of dishes, Chick-fil-A's menu is focused on chicken sandwiches. The capital A in the name is intended to indicate that its chicken is "grade A top quality". The company's emphasis on customer service is reported to have contributed to its success and growth in the United States.
Chick-fil-A builds and owns its restaurants. Chick-fil-A franchisees need a US$10,000 initial investment. Franchisees are selected and trained, a process that can take months.
Chick-fil-A grossed an average of $4.8 million per restaurant in 2016, the highest of all fast-food restaurants in the United States, despite opening only six days a week (Whataburger was second, with an average of $2.7 million per restaurant).
In 2019, Chick-fil-A reported $11.3 billion in sales in the United States, behind only McDonald's, which had $40.4 billion in sales that year. To compete with Chick-fil-A, the fried chicken chain Popeyes, followed by others, introduced a fried chicken sandwich in 2019, starting the Chicken Sandwich Wars.
Retail sale of sauces
In the spring of 2020, Chick-fil-A test-trialed the sale of two of its dipping sauces at some supermarkets in Florida, with all profits earmarked for a scholarship fund for the company's store-level employees. The trial was considered successful, and distribution was expanded nationwide by 2021. Two more sauces were added in 2023.
In October 2022, the company trialed an expansion of the program to include its salad dressings in the Cincinnati metropolitan area and in parts of Tennessee, hoping to expand nationwide in spring 2023.
Corporate culture
S. Truett Cathy was a devout Southern Baptist; his religious beliefs had a major impact on the company. The company's official statement of corporate purpose says that the business exists "To glorify God by being a faithful steward of all that is entrusted to us. To have a positive influence on all who come in contact with Chick-fil-A." Cathy opposed taking the company public for religious and personal reasons.
A company spokesperson said in 2012, "The Chick-fil-A culture and service tradition in our Restaurants is to treat every person with honor, dignity and respect – regardless of their belief, race, creed, sexual orientation or gender."
Sunday closing
In accordance with the founder's belief in the Christian doctrine of first-day Sabbatarianism, all Chick-fil-A locations are closed on Sundays, Thanksgiving, and Christmas. Cathy said, "Our decision to close on Sunday was our way of honoring God and of directing our attention to things that mattered more than our business."
In an interview with ABC News's Nightline, Truett's son Dan T. Cathy told reporter Vicki Mabrey that the company is also closed on Sundays because "by the time Sunday came, he was just worn out. And Sunday was not a big trading day, anyway, at the time. So he was closed that first Sunday and we've been closed ever since. He figured if he didn't like working on Sundays, that other people didn't either." Even Chick-fil-A locations at sports stadiums close on Sundays, although many games are played on Sundays.
Lenten observance
During the Western Christian liturgical season of Lent, Chick-fil-A promotes fish sandwiches, following the Christian tradition of abstinence from meat during Lent.
Serving chicken without antibiotics
In February 2014, Chick-fil-A announced plans to serve chicken raised without antibiotics in its restaurants nationwide within five years, the first fast-food restaurant to make this commitment. This was implemented by May 2019.
Recipe changes
In 2011, food blogger and activist Vani Hari asserted that Chick-fil-A sandwiches contained nearly 100 ingredients, including peanut oil with tert-butylhydroquinone (TBHQ), a preservative for unsaturated vegetable oils. In October 2012, Chick-fil-A invited Hari to meet with company executives at its headquarters. In December 2013, Chick-fil-A notified Hari that it had eliminated the dye Yellow No. 5 and had reduced the sodium content in its chicken soup, and said that it was testing peanut oil without TBHQ, and would test sauces and dressings without high-fructose corn syrup.
International locations
Canada
In September 1994, Chick-fil-A opened its first location outside of the United States inside a student center food court at the University of Alberta in Edmonton, Alberta, Canada. This location did not perform very well and was closed within two or three years. The company opened an outlet at the Calgary International Airport in Alberta in May 2014, and closed it in 2019.
Chick-fil-A opened a restaurant in Toronto, Ontario, on September 6, 2019, in the Yonge and Bloor Street area. There were protests criticizing the company's violation of animal rights and "history of supporting anti-LGBTQ causes". Chick-fil-A announced that it would open two other locations in Toronto during 2019, and 12 additional stores in the Greater Toronto Area over the subsequent five years. The chain's second Toronto location opened at the Yorkdale Shopping Centre in January 2020.
The company expanded to other areas of Ontario in 2021, opening standalone locations with drive-through restaurants in Kitchener in August and Windsor in October.
South Africa
In August 1996, Chick-fil-A opened its first location outside of North America by building a restaurant in Durban, South Africa. A second location was opened in Johannesburg in November 1997. Neither was profitable, and both were closed in 2001.
United Kingdom
A Chick-fil-A operated in Edinburgh during spring 2018. On October 10, 2019, Chick-fil-A returned to Europe with the opening of a store at The Oracle shopping centre in Reading, UK. The store closed in March 2020 after The Oracle opted not to continue the lease beyond the six-month pilot period in the face of continued protests over the chain's anti-LGBTQ stance.
In February 2019, Chick-fil-A opened a store on a 12-month pilot scheme in Aviemore, Scotland. The store was closed in January 2020 amidst protest and controversy from locals and customers regarding the chain's former donations to charities supporting causes opposed to LGBT rights. Chick-fil-A said that it had always planned a short-term stay at the location.
The company later changed some policies, appointing its first head of diversity in 2020 and focusing its charitable activities on education and hunger alleviation rather than on opponents of same-sex marriage. In September 2023, the company said that it planned to open five restaurants in the UK from early 2025, investing over $100m in the UK over the following ten years. The chain said that it would also apply its charitable policies to its UK branches, including a $25,000 donation to a local organisation on opening each Chick-fil-A restaurant and the donation of surplus food to local charitable causes.
Expansion to Asia and Europe
Chick-fil-A CEO Andrew Cathy announced in March 2023 that it planned to open restaurants in both Asia and Europe by 2026, and was set to expand to a total of five overseas markets by 2030. The Wall Street Journal reported that it was seeking countries with "stable economies, dense populations, and a demand for chicken".
Planned locations
In July 2018, Chick-fil-A announced it would be opening its first location in Hawaii in Kahului in early 2022, with additional locations in Honolulu and Kapolei to follow. The first Puerto Rican location opened on March 3, 2022, in Bayamón.
Advertising
"Eat Mor Chikin" is the chain's most prominent advertising slogan, created by The Richards Group in 1995. The slogan is often seen in advertisements, featuring Holstein dairy cows that are often seen wearing (or holding) signs that (usually) read "Eat Mor Chikin" in capital letters. The ad campaign was temporarily halted on January 1, 2004, during a mad cow disease scare, so as not to make the chain seem insensitive or appear to be taking advantage of the scare to increase its sales. Two months later, the cows were put up again. The cows replaced the chain's old mascot, Doodles, an anthropomorphized chicken that still appears as the C on the logo.Chick-fil-A vigorously protects its intellectual property, sending cease and desist letters to those they think have infringed on their trademarks. The corporation has successfully protested at least 30 instances of the use of an "eat more" phrase, saying that the use would cause confusion of the public, dilute the distinctiveness of their intellectual property, and diminish its value.A 2011 letter to Vermont artist Bo Muller-Moore who screen prints T-shirts reading: "Eat More Kale" demanded that he cease printing the shirts and turn over his website. The incident drew criticism from Vermont governor Peter Shumlin, and created backlash against what he termed Chick-fil-A's "corporate bullying." On December 11, 2014, Bo Muller-Moore announced that the U.S. Patent Office granted his application to trademark his "Eat More Kale" phrase. A formal announcement of his victory took place on December 12, 2014, with Shumlin and other supporters on the Statehouse steps. His public fight drew regional and national attention, the support of Shumlin, and a team of pro-bono law students from the University of New Hampshire legal clinic.After 22 years with The Richards Group, Chick-fil-A switched to McCann New York in 2016. Along with the cows, ads included famous people in history in a campaign called "Chicken for Breakfast. It's not as crazy as you think."
Sponsored events
Chick-fil-A Classic
The Chick-fil-A Classic is a high school basketball tournament held in Columbia, South Carolina, featuring nationally ranked players and teams. The tournament is co-sponsored by the Greater Columbia Educational Advancement Foundation (GCEAF), which provides scholarships to high school seniors in the greater Columbia area.
Chick-fil-A Peach Bowl
The Chick-fil-A Peach Bowl, first known as the Peach Bowl until 2006 and renamed the Chick-fil-A Peach Bowl in 2014, is a college football bowl game played each year in Atlanta, Georgia.
Chick-fil-A Kickoff Game
The Chick-fil-A Kickoff Game is an annual early-season college football game played at Mercedes-Benz Stadium in Atlanta, Georgia; before 2017, it was played at the Georgia Dome. It features two highly ranked teams, one of which has always been from the Southeastern Conference. In the 2012 season and again in the 2014 season, the event was expanded to two games; it was also two games in 2017. On July 12, 2023, the Georgia-based insurance company Aflac became the new sponsor of the game.
Controversies
Same-sex marriage
Chick-fil-A has donated over $5 million, via the WinShape Foundation, to groups that oppose same-sex marriage. In response, students at several colleges and universities worked to ban or remove the company's restaurants from their campuses.
In June and July 2012, Chick-fil-A's chief operating officer Dan T. Cathy made several public statements about same-sex marriage, saying that those who "have the audacity to define what marriage is about" were "inviting God's judgment on our nation". Several prominent politicians expressed disapproval. Boston Mayor Thomas Menino and Chicago Alderman Proco "Joe" Moreno said they hoped to block franchise expansion into their areas.
The proposed bans drew criticism from liberal pundits, legal experts, and the American Civil Liberties Union. The Jim Henson Company, which had a Pajanimals kids' meal toy licensing arrangement with Chick-fil-A, said it would cease its business relationship and donate the payment to the Gay & Lesbian Alliance Against Defamation. Chick-fil-A stopped distributing the toys, claiming that unrelated safety concerns had arisen prior to the controversy. Chick-fil-A released a statement on July 31, 2012, saying, "We are a restaurant company focused on food, service, and hospitality; our intent is to leave the policy debate over same-sex marriage to the government and political arena."
Less than a year after Moreno's comments, Chick-fil-A opened its first restaurant in downtown Chicago.
In 2022, ten years after Menino's comments, Chick-fil-A opened its first store in Boston, a 5,280-square-foot restaurant on Boylston Street. At the time of the opening, there were 15 other Chick-fil-A restaurants operating in the greater Boston area.
In response to the controversy, former Arkansas Governor Mike Huckabee initiated a Chick-fil-A Appreciation Day movement to counter a boycott of Chick-fil-A launched by same-sex marriage activists. Many of the chain's stores reported record levels of customers that day. The United States Federal Aviation Administration also responded to two cities that were preventing Chick-fil-A from opening in their international airports, citing: "Federal requirements prohibit airport operators from excluding persons on the basis of religious creed from participating in airport activities that receive or benefit from FAA grant funding."
In April 2018, Chick-fil-A reportedly continued to donate to the Fellowship of Christian Athletes, which opposes gay marriage. In a November 18, 2019, interview, Chick-fil-A president Tim Tassopoulos said the company would stop donating to The Salvation Army and the Fellowship of Christian Athletes.
Drive-through traffic
The popularity of Chick-fil-A's drive-throughs in the United States has led to traffic problems, police interventions, and complaints by neighboring businesses in more than 20 states. The long drive-through lines have been reported to cause traffic backups, blocking emergency vehicles and city buses and increasing the risk of collisions and pedestrian injuries.
Related restaurants
Hapeville Dwarf House
Truett Cathy opened his first restaurant, the Dwarf Grill – later renamed the Dwarf House – in Hapeville, Georgia, in 1946, and developed the pressure-cooked chicken breast sandwich there. At the original Chick-fil-A Dwarf Grill, in addition to the full-size entrances, there is also an extra small-sized front door.
The original Dwarf House in Hapeville, Georgia, is open 24 hours a day, six days a week, closing on Sundays, Thanksgiving, and Christmas. The store closes at 10:00 p.m. on Saturday nights and on the day before Thanksgiving and Christmas, and reopens at 6 a.m. on Monday mornings and on the day after Thanksgiving and Christmas. It has a larger dine-in menu than the other Dwarf House locations, as well as an animated seven dwarfs display in the back of the restaurant. It was across the street from the former Ford Motor Company factory called Atlanta Assembly.
Dwarf House
Opened in 1986, Truett's original, full-service restaurants offer a substantial menu and provide customers a choice of table service, walk-up counter service or a drive-thru window. There are currently five of the original eleven Chick-fil-A Dwarf House restaurants still operating in the metro Atlanta area, including Duluth, Riverdale, Jonesboro, Forest Park and Fayetteville.
Truett's Grill
In 1996, the first Truett's Grill was opened in Morrow, Georgia. The second location opened in 2003 in McDonough, Georgia, and a third location opened in 2006 in Griffin, Georgia. Similar to the Chick-fil-A Dwarf Houses, these independently owned restaurants offer traditional, sit-down dining and expanded menu selections in a diner-themed restaurant. In 2017, Chick-fil-A demolished several Dwarf House locations to replace them with Truett's Grill locations.
Truett's Chick-fil-A
Truett's Chick-fil-A is designed in honor of founder S. Truett Cathy. The restaurant is decorated with family photos and favorite quotes of the restaurant founder. The restaurant offers drive-thru, counter, and sit-down service for breakfast, lunch and dinner. There are four locations including Newnan, Rome, Stockbridge, and Woodstock, Georgia.
Truett's Luau
Truett Cathy, who had visited Hawaii, opened Truett's Luau in Fayetteville, Georgia, in 2013 at the age of 92. The menu includes island favorites with a southern US spin. The restaurant offers sit-down, counter and drive-thru service.
Senior leadership
Chick-fil-A has been run by the Cathy family since the restaurant chain's founding in 1946; as of 2021 it was being led by the third generation of the Cathy family.
List of chairmen
S. Truett Cathy (1946–2013)
Dan Cathy (2013– )
List of chief executives
S. Truett Cathy (1946–2013)
Dan Cathy (2013–2021)
Andrew T. Cathy (2021– )
Menu
Based on data from 2018, the most-ordered item was the waffle fries, followed by soft drinks, chicken nuggets, and the original chicken sandwich. A core menu is available at all franchises, with some locations offering the full menu. Chick-fil-A's website lists all menu items and nutrition information.
See also
List of chicken restaurants
Official website
Unique primarily refers to:
Uniqueness, a state or condition wherein something is unlike anything else
In mathematics and logic, a unique object is the only object with a certain property; see Uniqueness quantification
Unique may also refer to:
Companies
Unique Art, an American toy company
Unique Broadcasting Company, a former name of UBC Media Group, based in London
Unique Business News, a television news channel in Taiwan
Unique Mobility, a former name of UQM Technologies, a manufacturing company based in the United States
Unique Pub Company, a pub company based in the United Kingdom, acquired by Enterprise Inns
Unique Theater, a theater in Minneapolis, Minnesota, United States
Unique Group, a conglomerate in Bangladesh
Music
Unique (DJ Encore album)
Unique (Juliette Schoppmann album)
Unique (band), a musical group from New York City
Unique (also known as Darren Styles), British musician
Unique Records, a former name of RKO/Unique Records
Unique Recording Studios, a recording studio in New York City
Unique (musician), a musician and singer-songwriter based in the Philippines
Other
UNIQUe Certification, a label awarded to higher education institutions
HMS Unique, name of three ships of the Royal Navy
Operation Unique, code-name of criminal proceedings linked to the Pitcairn sexual assault trial of 2004
Team Unique, a synchronized skating team from Finland
Unique, Iowa, a hamlet in Iowa, United States
Wade "Unique" Adams, a character in the television series Glee
See also
All pages with titles beginning with Unique
Essentially unique
Sui generis
The Uniques (disambiguation)
Uniq (disambiguation)
Pac-Man, originally called Puck Man in Japan, is a 1980 maze action video game developed and released by Namco for arcades. In North America, the game was released by Midway Manufacturing as part of its licensing agreement with Namco America. The player controls Pac-Man, who must eat all the dots inside an enclosed maze while avoiding four colored ghosts. Eating large flashing dots called "Power Pellets" causes the ghosts to temporarily turn blue, allowing Pac-Man to eat them for bonus points.
Game development began in early 1979, directed by Toru Iwatani with a nine-man team. Iwatani wanted to create a game that could appeal to women as well as men, because most video games of the time had themes of war or sports. Although the inspiration for the Pac-Man character was the image of a pizza with a slice removed, Iwatani has said he also rounded out the Japanese character for mouth, kuchi (Japanese: 口). The in-game characters were made to be cute and colorful to appeal to younger players. The original Japanese title of Puck Man was derived from the Japanese phrase "Paku paku taberu" which refers to gobbling something up; the title was changed to Pac-Man for the North American release.
Pac-Man was a widespread critical and commercial success, leading to several sequels, merchandise, and two television series, as well as a hit single by Buckner & Garcia. The character of Pac-Man has become the official mascot of Bandai Namco Entertainment. The game remains one of the highest-grossing and best-selling games, generating more than $14 billion in revenue (as of 2016) and 43 million units in sales combined, and has an enduring commercial and cultural legacy, commonly listed as one of the greatest video games of all time.
Gameplay
Pac-Man is an action maze chase video game; the player controls the eponymous character through an enclosed maze. The objective of the game is to eat all of the dots placed in the maze while avoiding four colored ghosts — Blinky (red), Pinky (pink), Inky (cyan), and Clyde (orange) — that pursue Pac-Man. When Pac-Man eats all of the dots, the player advances to the next level. Levels are indicated by fruit icons at the bottom of the screen. In between levels are short cutscenes featuring Pac-Man and Blinky in humorous, comical situations.
If Pac-Man is caught by a ghost, he loses a life; the game ends when all lives are lost. Each of the four ghosts has its own unique artificial intelligence (A.I.), or "personality": Blinky gives direct chase to Pac-Man; Pinky and Inky try to position themselves in front of Pac-Man, usually by cornering him; and Clyde switches between chasing Pac-Man and fleeing from him.
Placed near the four corners of the maze are large flashing "energizers" or "power pellets". Eating these causes the ghosts to turn blue with a dizzied expression and to reverse direction. Pac-Man can eat blue ghosts for bonus points; when a ghost is eaten, its eyes make their way back to the center box in the maze, where the ghost "regenerates" and resumes its normal activity. Eating multiple blue ghosts in succession increases their point value. After a certain amount of time, blue-colored ghosts flash white before turning back into their normal form. Eating a certain number of dots in a level causes a bonus item, usually in the form of a fruit, to appear underneath the center box; the item can be eaten for bonus points. To the sides of the maze are two "warp tunnels", which allow Pac-Man and the ghosts to travel to the opposite side of the screen. Ghosts become slower when entering and exiting these tunnels.
The game increases in difficulty as the player progresses: the ghosts become faster, and the energizers' effect decreases in duration, eventually disappearing entirely. Due to an integer overflow, the 256th level loads improperly, rendering it impossible to complete.
Development
After acquiring the struggling Japanese division of Atari in 1974, video game developer Namco began producing its own video games in-house, as opposed to simply licensing them from other developers and distributing them in Japan. Company president Masaya Nakamura created a small video game development group within the company and ordered it to study several NEC-produced microcomputers as potential hardware for new games. One of the first people assigned to this division was a 24-year-old employee named Toru Iwatani. He created Namco's first video game, Gee Bee, in 1978, which, while unsuccessful, helped the company gain a stronger foothold in the quickly growing video game industry. He also assisted in the production of two sequels, Bomb Bee and Cutie Q, both released in 1979.
The Japanese video game industry had surged in popularity with games such as Space Invaders and Breakout, which led to the market being flooded with similar titles from other manufacturers attempting to cash in on their success. Iwatani felt that arcade games only appealed to men because of their crude graphics and violence, and that arcades in general were seen as seedy environments. For his next project, Iwatani chose to create a non-violent, cheerful video game that appealed mostly to women, as he believed that attracting women and couples into arcades would make them appear much more family friendly in tone. Iwatani began thinking of things that women liked to do in their free time; he decided to center his game around eating, basing this on women liking to eat desserts and other sweets. His game was initially called Pakkuman, based on the Japanese onomatopoeic term "paku paku taberu", referencing the mouth movement of opening and closing in succession.
The game that later became Pac-Man began development in early 1979 and took a year and five months to complete, the longest development period for a video game up to that point. Iwatani enlisted the help of nine other Namco employees to assist in production, including composer Toshio Kai, programmer Shigeo Funaki, and hardware engineer Shigeichi Ishimura. Care was taken to make the game appeal to a "non-violent" audience, particularly women, with its simple gameplay and cute, attractive character designs. When the game was being developed, Namco was underway with designing Galaxian, which used a then-revolutionary RGB color display, allowing sprites to use several colors at once instead of the colored strips of cellophane that were commonplace at the time; this technological accomplishment allowed Iwatani to greatly enhance his game with bright pastel colors, which he felt would help attract players. The idea for energizers was a concept Iwatani borrowed from Popeye the Sailor, a cartoon character who temporarily acquires superhuman strength after eating a can of spinach; it is also believed that Iwatani was partly inspired by a Japanese children's story about a creature that protected children from monsters by devouring them. Frank Fogleman, the co-founder of Gremlin Industries, believes that the maze-chase gameplay of Pac-Man was inspired by Sega's Head On (1979), a similar arcade game that was popular in Japan.
Iwatani has often claimed that the character of Pac-Man himself was designed after the shape of a pizza with a missing slice; in a 1986 interview he said that this was only a half-truth, and that the Pac-Man character was also based on him rounding out and simplifying the Japanese character "kuchi" (口), meaning "mouth". The four ghosts were made to be cute, colorful and appealing, using bright, pastel colors and expressive blue eyes. Iwatani had used this idea before in Cutie Q, which features similar ghost-like characters, and decided to incorporate it into Pac-Man. He was also inspired by the television series Casper the Friendly Ghost and the manga Obake no Q-Taro. Ghosts were chosen as the game's main antagonists because of their use as villainous characters in animation. The idea for the fruit bonuses was based on graphics displayed on slot machines, which often use symbols such as cherries and bells.
Originally, Namco president Masaya Nakamura had requested that all of the ghosts be red and thus indistinguishable from one another.
Iwatani believed that the ghosts should be different colors, and he received unanimous support from his colleagues for this idea. Each of the ghosts was programmed to have its own distinct personality, so as to keep the game from becoming too boring or impossibly difficult to play. Each ghost's name gives a hint to its strategy for tracking down Pac-Man: Shadow ("Blinky") always chases Pac-Man, Speedy ("Pinky") tries to get ahead of him, Bashful ("Inky") uses a more complicated strategy to zero in on him, and Pokey ("Clyde") alternates between chasing him and running away. (The ghosts' Japanese names are おいかけ, chase; まちぶせ, ambush; きまぐれ, fickle; and おとぼけ, playing dumb, respectively.) To break up the tension of constantly being pursued, humorous intermissions between Pac-Man and Blinky were added. The sound effects were among the last things added to the game, created by Toshio Kai. In a design session, Iwatani noisily ate fruit and made gurgling noises to describe to Kai how he wanted the eating effect to sound. Upon completion, the game was titled Puck Man, based on the working title and the titular character's distinct hockey puck-like shape.
Release
Location testing for Puck Man began on May 22, 1980, in Shibuya, Tokyo, to a relatively positive reception from players. A private showing for the game was held in June, followed by a nationwide release in July. Eyeing the game's success in Japan, Namco initiated plans to bring the game to the international market, particularly the United States. Before showing the game to distributors, Namco America made a number of changes, such as altering the names of the ghosts. The biggest of these was the game's title; executives at Namco were worried that vandals would change the "P" in Puck Man to an "F", forming an obscene name. Masaya Nakamura chose to rename it Pac-Man, as he felt it was closer to the game's original Japanese title of Pakkuman. In Europe, the game was released under both titles, Pac-Man and Puck Man.
When Namco presented Pac-Man and Rally-X to potential distributors at the 1980 AMOA trade show in November, executives believed that Rally-X would be the best-selling game of that year. According to Play Meter magazine, both Pac-Man and Rally-X received mild attention at the show. Namco had initially approached Atari to distribute Pac-Man, but Atari refused the offer. Midway Manufacturing subsequently agreed to distribute both Pac-Man and Rally-X in North America, announcing its acquisition of the manufacturing rights on November 22 and releasing them in December.
Ports
Pac-Man was ported to a plethora of home video game systems and personal computers; the most infamous of these is the 1982 Atari 2600 conversion, designed by Tod Frye and published by Atari. This version was widely criticized for its inaccurate portrayal of the arcade game and for its peculiar design choices, most notably the flickering effect of the ghosts. However, it was a commercial success, selling over seven million copies. Atari also released versions for the Intellivision, VIC-20, Commodore 64, Apple II, IBM PC, TI-99/4A, ZX Spectrum, and the Atari 8-bit family of computers. A port for the Atari 5200 was released in 1983, a version that many have seen as a significant improvement over the Atari 2600 version.
Namco released a version for the Family Computer in 1984 as one of the console's first third-party titles, as well as a port for the MSX computer. The Famicom version was later released in North America for the Nintendo Entertainment System by Tengen, a subsidiary of Atari Games. Tengen also produced an unlicensed version of the game in a black cartridge shell, released during a time when Tengen and Nintendo were in bitter disagreement over the latter's stance on quality control for its consoles; this version was later re-released by Namco as an official title in 1993, featuring a new cartridge label and box. The Famicom version was released for the Famicom Disk System in 1990 as a budget title for the Disk Writer kiosks in retail stores. The same year, Namco released a port of Pac-Man for the Game Boy, which allowed for two-player co-operative play via the Game Link Cable peripheral. A version for the Game Gear was released a year later, which also supported multiplayer. In celebration of the game's 20th anniversary in 1999, Namco re-released the Game Boy version for the Game Boy Color, bundled with Pac-Attack and titled Pac-Man: Special Color Edition. The same year, Namco and SNK co-published a port for the Neo Geo Pocket Color, which came with a circular "Cross Ring" that attached to the d-pad to restrict it to four-directional movement.
In 2001, Namco released a port of Pac-Man for various Japanese mobile phones, one of the company's first mobile game releases. The Famicom version of the game was re-released for the Game Boy Advance in 2004 as part of the Famicom Mini series, released to commemorate the 25th anniversary of the Famicom; this version was also released in North America and Europe under the Classic NES Series label. Namco Networks released Pac-Man for BREW mobile devices in 2005. The arcade original was released for the Xbox Live Arcade service in 2006, featuring achievements and online leaderboards. In 2009, a version for iOS devices was published; this release was later rebranded as Pac-Man + Tournaments in 2013, featuring new mazes and leaderboards. The NES version was released for the Wii Virtual Console in 2007. A Roku version was released in 2011, alongside a port of the Game Boy release for the 3DS Virtual Console. Pac-Man was one of four titles released under the Arcade Game Series brand, which was published for the Xbox One, PlayStation 4, and PC in 2016. In 2021, a Nintendo Direct announced that Hamster Corporation would release Pac-Man, along with Xevious, for the Nintendo Switch and PlayStation 4 as part of its Arcade Archives series, making them the first two Namco games included in the series.
Pac-Man is included in many Namco compilations, including Namco Museum Vol. 1 (1995), Namco Museum 64 (1999), Namco Museum Battle Collection (2005), Namco Museum DS (2007), Namco Museum Essentials (2009), and Namco Museum Megamix (2010). In 1996, it was re-released for arcades as part of Namco Classic Collection Vol. 2, alongside Dig Dug, Rally-X and special "Arrangement" remakes of all three titles. Microsoft included Pac-Man in Microsoft Return of Arcade (1995) as a way to help attract video game companies to their Windows 95 operating system. Namco released the game in the third volume of Namco History in Japan in 1998. The 2001 Game Boy Advance compilation Pac-Man Collection compiles Pac-Man, Pac-Mania, Pac-Attack and Pac-Man Arrangement onto one cartridge. Pac-Man is also a hidden extra in the arcade game Ms. Pac-Man/Galaga - Class of 1981 (2001). A similar cabinet was released in 2005 that featured Pac-Man as the centerpiece. Pac-Man 2: The New Adventures (1993) and Pac-Man World 2 (2002) have Pac-Man as an unlockable extra. Alongside the Xbox 360 remake Pac-Man Championship Edition, it was ported to the Nintendo 3DS in 2012 as part of Pac-Man & Galaga Dimensions. The 2010 Wii game Pac-Man Party and its 2011 3DS remake also include Pac-Man as a bonus game, alongside the arcade versions of Dig Dug and Galaga. In 2014, Pac-Man was included in the compilation title Pac-Man Museum for the Xbox 360, PlayStation 3 and PC, alongside several other Pac-Man games. The NES version is one of 30 games included in the NES Classic Edition.
Reception
Upon its North American debut at AMOA 1980, the game initially received a mild response. Play Meter magazine previewed the game and called it "a cute game which appears to grow on players, something which cute games are not prone to do." The magazine said there was "more to the game than at first appears" but criticized the sound as a drawback, saying it was "good for awhile, then becomes annoying." Upon release, the game exceeded expectations, achieving wide critical and commercial success.
Commercial performance
When it was first released in Japan, Pac-Man was initially only a modest success; Namco's own Galaxian (1979) had quickly outdone the game in popularity, due to the predominately male player base being familiar with its shooting gameplay as opposed to Pac-Man's cute characters and maze-chase theme. Pac-Man eventually became very successful in Japan, where it went on to be Japan's highest-grossing arcade game of 1980 according to the annual Game Machine charts, dethroning Space Invaders (1978) which had topped the annual charts for two years in a row and leading to a shift in the Japanese market away from space shooters towards action games featuring comical characters. Pac-Man was also Japan's fourth highest-grossing arcade game of 1981.In North America, Midway had limited expectations prior to release, initially manufacturing 5,000 units for the US, before it caught on immediately upon release there. Some arcades purchased entire rows of Pac-Man cabinets. It soon became a nationwide success. Upon release in 1980, it was earning about $8.1 million per week in the United States. Within one year, more than 100,000 arcade units had been sold which grossed more than $1 billion in quarters. It overtook Atari's Asteroids (1979) as the best-selling arcade game in the country, and surpassed the film Star Wars: A New Hope (1977) with more than $1 billion in revenue. Pac-Man was America's highest-grossing arcade game of 1981, and second highest game of 1982. By 1982, it was estimated to have had 30 million active players across the United States. The game's success was partly driven by its popularity among female audiences, becoming "the first commercial videogame to involve large numbers of women as players" according to Midway's Stan Jarocki, with Pac-Man being the favorite coin-op game among female gamers through 1982. Among the nine arcade games covered by How to Win Video Games (1982), Pac-Man was the only one with females accounting for a majority of players.
The number of arcade units sold had tripled to 400,000 by 1982, and the machines had taken in an estimated total of seven billion coins and $6 billion in revenue. In a 1983 interview, Nakamura said that though he did expect Pac-Man to be successful, "I never thought it would be this big." Pac-Man is the best-selling arcade game of all time (surpassing Space Invaders), with an estimated 10 billion coins inserted and total arcade earnings estimated at between $3.5 billion ($7.7 billion adjusted for inflation) and $6 billion ($18 billion adjusted for inflation). Pac-Man and Ms. Pac-Man also topped the US RePlay cocktail arcade cabinet charts for 23 months between February 1982 and February 1984. The Atari 2600 version of the game sold over 8 million copies, making it the console's best-selling title. In addition, Coleco's tabletop mini-arcade unit sold over 1.5 million units in 1982; the Pac-Man Nelsonic Game Watch sold more than 500,000 units the same year; the Family Computer (Famicom) version and its 2004 Game Boy Advance re-release sold a combined 598,000 copies in Japan; the Atari 5200 version sold 35,011 cartridges between 1986 and 1988; the Atari XE computer version sold 42,359 copies between 1986 and 1990; Thunder Mountain's 1986 budget release for home computers received a Diamond certification from the Software Publishers Association in 1989 for selling over 500,000 copies; and mobile phone ports have sold over 30 million paid downloads as of 2010. II Computing also listed the Atarisoft port tenth on the magazine's list of top Apple II games as of late 1985, based on sales and market-share data. As of 2016, all versions of Pac-Man are estimated to have grossed a total of more than $12 billion in revenue.
Accolades
Pac-Man was awarded "Best Commercial Arcade Game" at the 1982 Arcade Awards. Pac-Man also won the Video Software Dealers Association's VSDA Award for Best Videogame. In 2001, Pac-Man was voted the greatest video game of all time by a Dixons poll in the UK. The Killer List of Videogames listed Pac-Man as the most popular game of all time. The list aggregator site Playthatgame ranks Pac-Man as the 53rd-best game of all time and a game of the year.
Impact
Pac-Man is considered by many to be one of the most influential video games of all time. The game established the maze chase genre, was the first video game to make use of power-ups, and gave its individual ghosts deterministic artificial intelligence (AI) that reacts to player actions. Pac-Man is considered one of the first video games to have demonstrated the potential of characters in the medium; its title character was the first original gaming mascot, it increased the appeal of video games with female audiences, and it was gaming's first broad licensing success. It is often cited as the first game with cutscenes (in the form of brief comical interludes about Pac-Man and Blinky chasing each other), though Space Invaders Part II had employed a similar style of between-level intermissions in 1979. Pac-Man was a turning point for the arcade video game industry, which had previously been dominated by space shoot 'em ups since Space Invaders (1978). Pac-Man popularized a genre of "character-led" action games, leading to a wave of character action games in 1981, such as Nintendo's prototypical platform game Donkey Kong, Konami's Frogger and Universal Entertainment's Lady Bug. Pac-Man was one of the first popular non-shooting action games, defining key elements of the genre such as "parallel visual processing", which requires simultaneously keeping track of multiple entities, including the player's location, the enemies, and the energizers. "Maze chase" games exploded on home computers after the release of Pac-Man. Some of them appeared before official ports and garnered more attention from consumers, and sometimes lawyers, as a result. These include Taxman (1981) and Snack Attack (1982) for the Apple II, Jawbreaker (1981) for the Atari 8-bit family, Scarfman (1981) for the TRS-80, and K.C. Munchkin! (1981) for the Odyssey². Namco itself produced several other maze chase games, including Rally-X (1980), Dig Dug (1982), Exvania (1992), and Tinkle Pit (1994). Atari sued Philips for creating K.C. Munchkin in the case Atari, Inc. v. North American Philips Consumer Electronics Corp., leading to Munchkin being pulled from store shelves under court order. No major competitors emerged to challenge Pac-Man in the maze-chase subgenre. Pac-Man also inspired 3D variants of the concept, such as Monster Maze (1982), Spectre (1982), and early first-person shooters such as MIDI Maze (1987), which also had similar character designs. John Romero credited Pac-Man as the game that had the biggest influence on his career; Wolfenstein 3D includes a Pac-Man level from a first-person perspective. Many post-Pac-Man titles include power-ups that briefly turn the tables on the enemy. The game's artificial intelligence inspired programmers who later worked for companies like Bethesda.
Reviews
Reviewing home console versions in 1982, Games magazine called the Atari 5200 implementation a "splendidly reproduced" version of the arcade game, noting a difference in maze layouts for the television screen. It considered the Atari 2600 version to have "much weaker graphics", but to still be one of the best games for that console. In both cases the reviewer felt that the joystick controls were harder to use than those of the arcade machine, and that "attempts to make quick turns are often frustrated".
Legacy
Guinness World Records has awarded the Pac-Man series eight records in Guinness World Records: Gamer's Edition 2008, including "Most Successful Coin-Operated Game". On June 3, 2010, at the NLGD Festival of Games, the game's creator, Toru Iwatani, officially received the certificate from Guinness World Records for Pac-Man having had the most "coin-operated arcade machines" installed worldwide: 293,822. The record was set and recognized in 2005 and mentioned in the Guinness World Records: Gamer's Edition 2008, but the certificate was not formally presented until 2010. In 2009, Guinness World Records listed Pac-Man as the most recognizable video game character in the United States, recognized by 94% of the population, ahead of Mario, who was recognized by 93%. In 2015, The Strong National Museum of Play inducted Pac-Man into its World Video Game Hall of Fame. The Pac-Man character and game series became an icon of video game culture during the 1980s.
The game has inspired various real-life recreations, involving real people or robots. One event called Pac-Manhattan set a Guinness World Record for "Largest Pac-Man Game" in 2004.The business term "Pac-Man defense" in mergers and acquisitions refers to a hostile takeover target that attempts to reverse the situation and instead acquire its attempted acquirer, a reference to Pac-Man's energizers. The "Pac-Man renormalization" is named for a cosmetic resemblance to the character, in the mathematical study of the Mandelbrot set. The game's popularity has also led to "Pac-Man" being adopted as a nickname, such as by boxer Manny Pacquiao and the American football player Adam Jones.
On August 21, 2016, in the 2016 Summer Olympics closing ceremony, during a video which showcases Tokyo as the host of the 2020 Summer Olympics, a small segment shows Pac-Man and the ghosts racing and eating dots on a running track.
Merchandise
A wide variety of Pac-Man merchandise has been marketed with the character's image. By 1982, Midway had about 95–105 licensees selling Pac-Man merchandise, including major companies such as AT&T, which sold a Pac-Man telephone. There were more than 500 Pac-Man-related products. 7-Eleven has sold Pac-Man-themed merchandise at its stores since the game's initial popularity in the 1980s, including, among other things, collectible Slurpee & Big Gulp cups. In 2023, 7-Eleven included Pac-Man in its Spring 2023 marketing material, including at Speedway and Stripes banner locations, and sold more merchandise around the game, as well as rebranding some of its products after the ghosts. This included its house blend coffee (Clyde's Coffee Blend), two Slurpee flavors (Blinky's Cherry & Inky's Blueberry Raz), and a special limited-time cappuccino flavor (Pinky's Strawberry White Chocolate Cappuccino), the latter of which came out pink to match the ghost. Pac-Man-themed merchandise sales had exceeded $1 billion in the US by 1982. Pac-Man-related merchandise included bumper stickers, jewelry, accessories (such as a $20,000 Ms. Pac-Man choker with 14 karat gold), bicycles, breakfast cereals, popsicles, t-shirts, toys, handheld electronic game imitations, and pasta.
Television
The Pac-Man animated television series produced by Hanna–Barbera aired on ABC from 1982 to 1983. It was the highest-rated Saturday morning cartoon show in the US during late 1982.A computer-generated animated series titled Pac-Man and the Ghostly Adventures aired on Disney XD in June 2013, and also on Discovery Family in November 2019.
Literature
The original Pac-Man game plays a key role in the plot of Ernest Cline's video game-themed science fiction novel Ready Player One.
Music
The Buckner & Garcia song "Pac-Man Fever" (1981) reached No. 9 on the Billboard Hot 100, received a Gold certification for more than 1 million records sold by 1982, and had sold a total of 2.5 million copies as of 2008. More than one million copies of the group's Pac-Man Fever album (1982) were sold. In 1982, "Weird Al" Yankovic recorded a parody of "Taxman" by the Beatles as "Pac-Man". It was eventually released in 2017 as part of Squeeze Box: The Complete Works of "Weird Al" Yankovic. In 1992, Aphex Twin (under the name Power-Pill) released Pac-Man, a techno album consisting mostly of samples from the game.
The character appears in the music video for Bloodhound Gang's "Mope", released in 2000. Here, the character is portrayed as a cocaine addict.
On July 20, 2020, Gorillaz and ScHoolboy Q released a track entitled "PAC-MAN" as part of Gorillaz' Song Machine series to commemorate the game's 40th anniversary, with the music video depicting the band's frontman, 2-D, playing a Gorillaz-themed Pac-Man arcade game.
Film
The Pac-Man character appears in the film Pixels (2015), with Denis Akiyama playing series creator Toru Iwatani. Iwatani makes a cameo at the beginning of the film as an arcade technician. Pac-Man is referenced and makes an appearance in the 2017 film Guardians of the Galaxy Vol. 2 and in the video game Marvel's Guardians of the Galaxy. The game, the character, and the ghosts also appear in the film Wreck-It Ralph, as well as its sequel Ralph Breaks the Internet.
In Sword Art Online The Movie: Ordinal Scale, Kirito and his friends beat a virtual reality game called PAC-Man 2026, which is loosely based on Pac-Man 256. In the Japanese tokusatsu film Kamen Rider Heisei Generations: Dr. Pac-Man vs. Ex-Aid & Ghost with Legend Riders, a Pac-Man-like character is the main villain. The 2018 film Relaxer uses Pac-Man as a strong plot element in the story of a 1999 couch-bound man who attempts to beat the game (and encounters the famous Level 256 glitch) before the year 2000 problem occurs. Various attempts at a feature film based on Pac-Man have been made since the peak of the original game's popularity. Following the release of Ms. Pac-Man, a feature film was in development, but an agreement was never reached. In 2008, a live-action film based on the series was in development at Crystal Sky. In 2022, plans for a live-action Pac-Man film were revived at Wayfarer Studios, based on an idea by Chuck Williams.
Other gaming media
In 1982, Milton Bradley Company released a board game based on Pac-Man. Players move up to four Pac-Man characters (traditional yellow plus red, green, and blue) plus two ghosts as per the throws of a pair of dice. The two ghost pieces were randomly packed with one of four colors.Sticker manufacturer Fleer included rub-off game cards with its Pac-Man stickers. The card packages contain a Pac-Man style maze with all points along the path hidden with opaque coverings. From the starting position, the player moves around the maze while scratching off the coverings to score points.A Pac-Man-themed downloadable content package for Minecraft was released in 2020 in commemoration of the game's 40th anniversary. This pack introduced a new ghost called 'Creepy', based on the Creeper.
Perfect scores and other records
A perfect score on the original Pac-Man arcade game is 3,333,360 points, achieved when the player obtains the maximum score on the first 255 levels by eating every dot, energizer, fruit and blue ghost without losing a man, then uses all six men to obtain the maximum possible number of points on level 256.The first person to achieve a publicly witnessed and verified perfect score without manipulating the game's hardware to freeze play was Billy Mitchell, who performed the feat on July 3, 1999. Some recordkeeping organizations removed Mitchell's score after a 2018 investigation by Twin Galaxies concluded that two unrelated Donkey Kong score performances submitted by Mitchell had not used an unmodified original circuit board. As of July 2020, seven other gamers had achieved perfect Pac-Man scores on original arcade hardware. The world record for the fastest completion of a perfect score, according to Twin Galaxies, is currently held by David Race with a time of 3 hours, 28 minutes, 49 seconds.In December 1982, eight-year-old boy Jeffrey R. Yee received a letter from United States president Ronald Reagan congratulating him on a world record score of 6,131,940 points, possible only if he had passed level 256. In September 1983, Walter Day, chief scorekeeper at Twin Galaxies at the time, took the U.S. National Video Game Team on a tour of the East Coast to visit gamers who claimed the ability to pass that level. None demonstrated such an ability. In 1999, Billy Mitchell offered $100,000 to anyone who could pass level 256 before January 1, 2000. The offer expired with the prize unclaimed.After announcing in 2018 that it would no longer recognize the first perfect score on Pac-Man, Guinness World Records reversed that decision and reinstated Billy Mitchell's 1999 performance on June 18, 2020.
Remakes and sequels
Pac-Man inspired a long series of sequels, remakes, and re-imaginings, and is one of the longest-running video game franchises in history. The first of these was Ms. Pac-Man, developed by the American-based General Computer Corporation and published by Midway in 1982. The character's gender was changed to female in response to Pac-Man's popularity with women, and new mazes, moving bonus items, and faster gameplay were implemented to increase its appeal. Ms. Pac-Man is one of the best-selling arcade games in North America, where Pac-Man and Ms. Pac-Man had become the most successful machines in the history of the amusement arcade industry. Legal concerns over who owned the game led to Ms. Pac-Man becoming the property of Namco, which assisted in its production. Ms. Pac-Man inspired its own line of remakes, including Ms. Pac-Man Maze Madness (2000) and Ms. Pac-Man: Quest for the Golden Maze, and is also included in many Namco and Pac-Man collections for consoles.
Namco's own follow-up to the original was Super Pac-Man, released in 1982. This was followed by the Japan-exclusive Pac & Pal in 1983. Midway produced many other Pac-Man sequels during the early 1980s, including Pac-Man Plus (1982), Jr. Pac-Man (1983), Baby Pac-Man (1983), and Professor Pac-Man (1984). Other games include the isometric Pac-Mania (1987), the side-scrollers Pac-Land (1984), Hello! Pac-Man (1994), and Pac-In-Time (1995), the 3D platformer Pac-Man World (1999), and the puzzle games Pac-Attack (1991) and Pac-Pix (2005). Iwatani designed Pac-Land and Pac-Mania, both of which remain his favorite games in the series. Pac-Man Championship Edition, published for the Xbox 360 in 2007, was Iwatani's final game before leaving the company. Its neon visuals and fast-paced gameplay were met with acclaim, leading to the creation of Pac-Man Championship Edition DX (2010) and Pac-Man Championship Edition 2 (2016). Coleco's tabletop Mini-Arcade versions of the game sold 1.5 million units in 1982, and Nelsonic Industries produced a Pac-Man LCD wristwatch game with a simplified maze the same year. Namco Networks sold a downloadable Windows PC version of Pac-Man in 2009, which also includes an enhanced mode that replaces all of the original sprites with those from Pac-Man Championship Edition. Namco Networks also made a downloadable bundle, Namco All-Stars: Pac-Man and Dig Dug, which includes its PC version of Pac-Man and its port of Dig Dug. In 2010, Namco Bandai announced the release of the game on Windows Phone 7 as an Xbox Live game. For the weekend of May 21–23, 2010, Google changed the logo on its homepage to a playable version of the game in recognition of the 30th anniversary of the game's release. The Google Doodle version of Pac-Man was estimated to have been played by more than 1 billion people worldwide in 2010, so Google later gave the game its own page. In April 2011, Soap Creative published World's Biggest Pac-Man, working together with Microsoft and Namco-Bandai to celebrate Pac-Man's 30th anniversary. It is a multiplayer browser-based game with user-created, interlocking mazes. For April Fools' Day in 2017, Google created a playable version of the game on Google Maps, allowing users to play using the map onscreen.
Technology
The original arcade system board had one Z80A processor running at 3.072 MHz, 16 kbyte of ROM, and 3 kbyte of static RAM; of the RAM, 1 kbyte each was used for video RAM, color RAM, and general program RAM. There were two custom chips on the board, the 285 sync bus controller and the 284 video RAM addresser, but daughterboards made only from standard parts were also widely used instead. Video output was (analog) component video with composite sync. A further 8 kbyte of character ROM was used for characters, background tiles and sprites, and an additional 1 kbit of static RAM held 4 bpp sprite data for one scanline; it was written to during the horizontal blanking period preceding each line. Sprites were always 16×16 pixels, with one of the four colors per pixel reserved for transparency (showing the background).
The monitor was installed rotated 90 degrees clockwise; the first visible scanline started in the top-right corner and ended in the bottom-right corner. The horizontal blanking period, which starts after the level indicator at the bottom is drawn, had a duration of 96 pixel clock ticks, enough time to fetch 4 bytes of sprite data every 16 clock ticks, i.e. for 6 sprites. Although attribute memory exists for them, sprites 0 and 7 are unusable: the pixel-fetch timing window of sprite 0 is occupied by the bottom level indicator (which just precedes the hblank), and that of sprite 7 by the two rows of characters at the top of the screen, which just follow the hblank.
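As a quick sanity check of the timing figures above, the following short Python sketch reproduces the sprite arithmetic using only numbers stated in this section:

    # 96 pixel clock ticks of horizontal blanking, 16 ticks to fetch the
    # 4 bytes of data for one sprite.
    hblank_ticks = 96
    ticks_per_sprite = 16
    print(hblank_ticks // ticks_per_sprite)   # 6 sprites fetched per line

    # Attribute memory exists for 8 sprites, but sprites 0 and 7 lose their
    # fetch windows to the level indicator and the top rows of characters.
    print(8 - 2)                              # 6 usable sprites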
Further reading
Trueman, Doug (November 10, 1999). "The History of Pac-Man". GameSpot. Archived from the original on June 26, 2009. Comprehensive coverage on the history of the entire series up through 1999.
Morris, Chris (May 10, 2005). "Pac Man Turns 25". CNN Money.
Vargas, Jose Antonio (June 22, 2005). "Still Love at First Bite: At 25, Pac-Man Remains a Hot Pursuit". The Washington Post.
Hirschfeld, Tom. How to Master the Video Games, Bantam Books, 1981. ISBN 0-553-20164-6 Strategy guide for a variety of arcade games including Pac-Man. Includes drawings of some of the common patterns.
Official website
Pac-Man highscores on Twin Galaxies
Pac-Man on Arcade History
Pac-Man at the Killer List of Videogames |
Systemd is a software suite that provides an array of system components for Linux operating systems. The main aim is to unify service configuration and behavior across Linux distributions. Its primary component is a "system and service manager" – an init system used to bootstrap user space and manage user processes. It also provides replacements for various daemons and utilities, including device management, login management, network connection management, and event logging. The name systemd adheres to the Unix convention of naming daemons by appending the letter d. It also plays on the term "System D", which refers to a person's ability to adapt quickly and improvise to solve problems.Since 2015, the majority of Linux distributions have adopted systemd, having replaced other init systems such as SysV init. It has been praised by developers and users of distributions that adopted it for providing a stable, fast out-of-the-box solution for issues that had existed in the Linux space for years. At the time of adoption of systemd on most Linux distributions, it was the only software suite that offered reliable parallelism during boot as well as centralized management of processes, daemons, services and mount points.
Critics of systemd contend that it suffers from mission creep and bloat; the latter affecting other software (such as the GNOME desktop), adding dependencies on systemd, reducing its compatibility with other Unix-like operating systems and making it difficult for sysadmins to integrate alternative solutions. Concerns have also been raised about Red Hat and its parent company IBM controlling the scene of init systems on Linux. Critics also contend that the complexity of systemd results in a larger attack surface, reducing the overall security of the platform.
History
Lennart Poettering and Kay Sievers, the software engineers working for Red Hat who initially developed systemd, started a project to replace Linux's conventional System V init in 2010. An April 2010 blog post from Poettering, titled "Rethinking PID 1", introduced an experimental version of what would later become systemd. They sought to surpass the efficiency of the init daemon in several ways. They wanted to improve the software framework for expressing dependencies, to allow more processing to be done concurrently or in parallel during system booting, and to reduce the computational overhead of the shell.
In May 2011 Fedora Linux became the first major Linux distribution to enable systemd by default, replacing Upstart. The reasoning at the time was that systemd provided extensive parallelization during startup, better management of processes, and overall a saner, dependency-based approach to controlling the system. In October 2012, Arch Linux made systemd the default, switching from SysVinit. Developers had debated since August 2012 and came to the conclusion that it was faster and had more features than SysVinit, and that maintaining the latter was not worth the effort in patches. Some of them thought that the criticism of systemd's implementation was based not on actual shortcomings of the software, but rather on dislike of Lennart Poettering among part of the Linux community and a general hesitation toward change. Specifically, several of the points critics raised (that systemd was not programmed in Bash, that it was bigger and more extensive than SysVinit, its use of D-Bus, and the optional on-disk format of the journal) were regarded as advantages by programmers. Between October 2013 and February 2014, a long debate among the Debian Technical Committee occurred on the Debian mailing list, discussing which init system to use as the default in Debian 8 "jessie", and culminating in a decision in favor of systemd. The debate was widely publicized, and in the wake of the decision it continued on the Debian mailing list. In February 2014, after Debian's decision was made, Mark Shuttleworth announced on his blog that Ubuntu would follow in implementing systemd, discarding its own Upstart. In November 2014 Debian Developer Joey Hess, Debian Technical Committee members Russ Allbery and Ian Jackson, and systemd package-maintainer Tollef Fog Heen resigned from their positions. All four justified their decision on the public Debian mailing list and in personal blogs with their exposure to extraordinary stress levels related to ongoing disputes over systemd integration within the Debian and FOSS communities, which rendered regular maintenance virtually impossible.
In August 2015 systemd started providing a login shell, callable via machinectl shell.In September 2016, a security bug was discovered that allowed any unprivileged user to perform a denial-of-service attack against systemd. Rich Felker, developer of musl, stated that this bug reveals a major "system development design flaw". In 2017 another security bug was discovered in systemd, CVE-2017-9445, which "allows disruption of service" by a "malicious DNS server". Later in 2017, the Pwnie Awards gave author Lennart Poettering a "lamest vendor response" award due to his handling of the vulnerabilities.
Design
Poettering describes systemd development as "never finished, never complete, but tracking progress of technology". In May 2014, Poettering further described systemd as unifying "pointless differences between distributions", by providing the following three general functions:
A system and service manager (manages both the system, by applying various configurations, and its services)
A software platform (serves as a basis for developing other software)
The glue between applications and the kernel (provides various interfaces that expose functionalities provided by the kernel)
Systemd includes features like on-demand starting of daemons, snapshot support, process tracking, and Inhibitor Locks. It is not just the name of the init daemon but also refers to the entire software bundle around it, which, in addition to the systemd init daemon, includes the daemons journald, logind and networkd, and many other low-level components. In January 2013, Poettering described systemd not as one program, but rather as a large software suite that includes 69 individual binaries. As an integrated software suite, systemd replaces the startup sequences and runlevels controlled by the traditional init daemon, along with the shell scripts executed under its control. systemd also integrates many other services that are common on Linux systems by handling user logins, the system console, device hotplugging (see udev), scheduled execution (replacing cron), logging, hostnames and locales.
Like the init daemon, systemd is a daemon that manages other daemons, which, including systemd itself, are background processes. systemd is the first daemon to start during booting and the last daemon to terminate during shutdown. The systemd daemon serves as the root of the user space's process tree; the first process (PID 1) has a special role on Unix systems, as it replaces the parent of a process when the original parent terminates. Therefore, the first process is particularly well suited for the purpose of monitoring daemons.
systemd executes elements of its startup sequence in parallel, which is theoretically faster than the traditional startup sequence approach. For inter-process communication (IPC), systemd makes Unix domain sockets and D-Bus available to the running daemons. The state of systemd itself can also be preserved in a snapshot for future recall.
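As an illustration of the D-Bus interface mentioned above, the following Python sketch lists the units known to the service manager. It assumes the dbus-python bindings are installed and that it runs on a systemd machine; the bus name, object path and interface are those of systemd's public org.freedesktop.systemd1 API.

    import dbus

    # Connect to the system bus, where PID 1 exposes its manager object.
    bus = dbus.SystemBus()
    systemd_object = bus.get_object("org.freedesktop.systemd1", "/org/freedesktop/systemd1")
    manager = dbus.Interface(systemd_object, dbus_interface="org.freedesktop.systemd1.Manager")

    # ListUnits() returns one record per loaded unit; the first four fields are
    # the unit name, description, load state and active state.
    for unit in manager.ListUnits():
        name, description, load_state, active_state = unit[0], unit[1], unit[2], unit[3]
        print(f"{name}: {active_state} ({description})")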
Core components and libraries
Following its integrated approach, systemd also provides replacements for various daemons and utilities, including the startup shell scripts, pm-utils, inetd, acpid, syslog, watchdog, cron and atd. systemd's core components include the following:
systemd is a system and service manager for Linux operating systems.
systemctl is a command to introspect and control the state of the systemd system and service manager. Not to be confused with sysctl.
systemd-analyze may be used to determine system boot-up performance statistics and retrieve other state and tracing information from the system and service manager.
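Both tools can also be driven from scripts; the following hedged Python sketch simply shells out to them. The unit name cron.service is only an example and may not exist on a given system.

    import subprocess

    # Query the active state of a single unit; "systemctl is-active" prints a
    # state such as "active" or "inactive" and sets its exit code accordingly.
    state = subprocess.run(["systemctl", "is-active", "cron.service"],
                           capture_output=True, text=True)
    print("cron.service is", state.stdout.strip())

    # Print the boot-time summary gathered by systemd-analyze.
    timing = subprocess.run(["systemd-analyze", "time"],
                            capture_output=True, text=True)
    print(timing.stdout.strip())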
systemd tracks processes using the Linux kernel's cgroups subsystem instead of using process identifiers (PIDs); thus, daemons cannot "escape" systemd, not even by double-forking. systemd not only uses cgroups, but also augments them with systemd-nspawn and machinectl, two utility programs that facilitate the creation and management of Linux containers. Since version 205, systemd also offers ControlGroupInterface, which is an API to the Linux kernel cgroups. The Linux kernel cgroups are adapted to support kernfs, and are being modified to support a unified hierarchy.
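The cgroup-based tracking can be observed directly from any process. The snippet below is a minimal sketch: it reads /proc/self/cgroup, which on a systemd machine with the unified cgroup hierarchy names the slice, scope or service the current process belongs to.

    from pathlib import Path

    # Typical output on a systemd system with cgroup v2, e.g.
    # "0::/user.slice/user-1000.slice/session-2.scope"
    print(Path("/proc/self/cgroup").read_text().strip())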
Ancillary components
Beside its primary purpose of providing a Linux init system, the systemd suite can provide additional functionality, including the following components:
journald
systemd-journald is a daemon responsible for event logging, with append-only binary files serving as its logfiles. The system administrator may choose whether to log system events with systemd-journald, syslog-ng or rsyslog. The potential for corruption of the binary format has led to much heated debate.
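Reading the journal does not require parsing the binary files by hand; systemd's journalctl tool can export entries as JSON. A small sketch, assuming a unit named cron.service exists and that journalctl is on the PATH:

    import json
    import subprocess

    # Ask journalctl for the last five entries of one unit, one JSON object per line.
    result = subprocess.run(
        ["journalctl", "-u", "cron.service", "-n", "5", "--output=json", "--no-pager"],
        capture_output=True, text=True)

    for line in result.stdout.splitlines():
        if not line.startswith("{"):
            continue  # skip informational output such as "-- No entries --"
        entry = json.loads(line)
        # MESSAGE and __REALTIME_TIMESTAMP (microseconds since the epoch) are
        # standard journal fields.
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))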
libudev
libudev is the standard library for utilizing udev, which allows third-party applications to query udev resources.
localed
systemd-localed is a daemon that can be used to query and change the system locale and keyboard-layout settings; it is accessible through D-Bus.
logind
systemd-logind is a daemon that manages user logins and seats in various ways. It is an integrated login manager that offers multiseat improvements and replaces ConsoleKit, which is no longer maintained. For X11 display managers the switch to logind requires a minimal amount of porting. It was integrated in systemd version 30.
homed
homed is a daemon that provides portable human-user accounts that are independent of current system configuration. homed moves various pieces of data such as UID/GID from various places across the filesystem into one file, ~/.identity. homed manages the user's home directory in various ways such as a plain directory, a btrfs subvolume, a Linux Unified Key Setup volume, an fscrypt directory, or mounted from an SMB server.
networkd
networkd is a daemon that handles the configuration of network interfaces; in version 209, when it was first integrated, support was limited to statically assigned addresses and basic bridging configuration. In July 2014, systemd version 215 was released, adding new features such as a DHCP server for IPv4 hosts and VXLAN support. networkctl may be used to review the state of the network links as seen by systemd-networkd. Configuration for new interfaces is added under /lib/systemd/network/ as a new file ending with the .network extension.
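A minimal sketch of such a .network file is shown below, generated from Python for illustration; the interface name enp3s0 is an assumption, and an administrator would normally install the file under /etc/systemd/network/ rather than the packaged /lib/systemd/network/ directory.

    # Build the contents of a hypothetical 20-wired.network file that requests
    # DHCP on one interface matched by name.
    network_file = """\
    [Match]
    Name=enp3s0

    [Network]
    DHCP=yes
    """

    print(network_file)  # write to /etc/systemd/network/20-wired.network as root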
resolved
systemd-resolved provides network name resolution to local applications.
systemd-boot
systemd-boot is a boot manager, formerly known as gummiboot. Kay Sievers merged it into systemd with rev 220.
timedated
systemd-timedated is a daemon that can be used to control time-related settings, such as the system time, system time zone, or selection between UTC and local time-zone system clock. It is accessible through D-Bus. It was integrated in systemd version 30.
timesyncd
systemd-timesyncd is a daemon that has been added for synchronizing the system clock across the network.
tmpfiles
systemd-tmpfiles is a utility that takes care of creation and clean-up of temporary files and directories. It is normally run once at startup and then at specified intervals.
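Its behaviour is driven by one-line rules in tmpfiles.d configuration files. The sketch below prints one hypothetical rule; the fields are type, path, mode, user, group and age, and a file containing it would be applied with "systemd-tmpfiles --create".

    # "d" creates a directory if it is missing; the age field ("10d") lets the
    # periodic clean-up remove contents older than ten days.
    rule = "d /run/example-app 0755 root root 10d\n"
    print(rule)  # e.g. save as /etc/tmpfiles.d/example-app.conf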
udevd
udev is a device manager for the Linux kernel, which handles the /dev directory and all user space actions when adding/removing devices, including firmware loading. In April 2012, the source tree for udev was merged into the systemd source tree. In order to match the version number of udev, systemd maintainers bumped the version number directly from 44 to 183.
On 29 May 2014, support for firmware loading through udev was dropped from systemd, as it was decided that the kernel should be responsible for loading firmware.
Configuration of systemd
systemd is configured exclusively via plain-text files.
systemd records initialization instructions for each daemon in a configuration file (referred to as a "unit file") that uses a declarative language, replacing the traditionally used per-daemon startup shell scripts. The syntax of the language is inspired by .ini files. Unit-file types include the following (a minimal unit-file sketch is shown after the list):
.service
.socket
.device (automatically initiated by systemd)
.mount
.automount
.swap
.target
.path
.timer (which can be used as a cron-like job scheduler)
.snapshot
.slice (used to group and manage processes and resources)
.scope (used to group worker processes, not intended to be configured via unit files)
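To make the declarative syntax concrete, here is a minimal, hypothetical .service unit generated from Python; the service name, paths and options are illustrative only and follow the common [Unit]/[Service]/[Install] layout:

    unit_text = """\
    [Unit]
    Description=Example application (hypothetical)
    After=network.target

    [Service]
    ExecStart=/usr/bin/python3 /opt/example/app.py
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
    """

    # Saved as /etc/systemd/system/example.service, the unit would be picked up
    # after "systemctl daemon-reload" and started with
    # "systemctl enable --now example.service".
    print(unit_text)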
Adoption
While many distributions boot systemd by default, some allow other init systems to be used; in this case switching the init system is possible by installing the appropriate packages. A fork of Debian called Devuan was developed to avoid systemd and has reached version 4.0 for stable usage. In December 2019, the Debian project voted in favour of retaining systemd as the default init system for the distribution, but with support for "exploring alternatives".
Integration with other software
In the interest of enhancing the interoperability between systemd and the GNOME desktop environment, systemd coauthor Lennart Poettering asked the GNOME Project to consider making systemd an external dependency of GNOME 3.2.In November 2012, the GNOME Project concluded that basic GNOME functionality should not rely on systemd. However, GNOME 3.8 introduced a compile-time choice between the logind and ConsoleKit API, the former being provided at the time only by systemd. Ubuntu provided a separate logind binary but systemd became a de facto dependency of GNOME for most Linux distributions, in particular since ConsoleKit is no longer actively maintained and upstream recommends the use of systemd-logind instead. The developers of Gentoo Linux also attempted to adapt these changes in OpenRC, but the implementation contained too many bugs, causing the distribution to mark systemd as a dependency of GNOME.GNOME has further integrated logind. As of Mutter version 3.13.2, logind is a dependency for Wayland sessions.
Reception
The design of systemd has ignited controversy within the free-software community. Critics regard systemd as overly complex and suffering from continued feature creep, arguing that its architecture violates the Unix philosophy. There is also concern that it forms a system of interlocked dependencies, thereby giving distribution maintainers little choice but to adopt systemd as more user-space software comes to depend on its components, which is similar to the problems created by PulseAudio, another project which was also developed by Lennart Poettering.In a 2012 interview, Slackware's lead Patrick Volkerding expressed reservations about the systemd architecture, stating his belief that its design was contrary to the Unix philosophy of interconnected utilities with narrowly defined functionalities. As of August 2018, Slackware does not support or use systemd, but Volkerding has not ruled out the possibility of switching to it.In January 2013, Lennart Poettering attempted to address concerns about systemd in a blog post called The Biggest Myths.In February 2014, musl's Rich Felker opined that PID 1 is too special to be saddled with additional responsibilities, believing that PID 1 should only be responsible for starting the rest of the init system and reaping zombie processes, and that the additional functionality added by systemd can be provided elsewhere and unnecessarily increases the complexity and attack surface of PID 1.In March 2014 Eric S. Raymond commented that systemd's design goals were prone to mission creep and software bloat. In April 2014, Linus Torvalds expressed reservations about the attitude of Kay Sievers, a key systemd developer, toward users and bug reports in regard to modifications to the Linux kernel submitted by Sievers. In late April 2014 a campaign to boycott systemd was launched, with a website listing various reasons against its adoption.In an August 2014 article published in InfoWorld, Paul Venezia wrote about the systemd controversy and attributed the controversy to violation of the Unix philosophy, and to "enormous egos who firmly believe they can do no wrong". The article also characterizes the architecture of systemd as similar to that of svchost.exe, a critical system component in Microsoft Windows with a broad functional scope.In a September 2014 ZDNet interview, prominent Linux kernel developer Theodore Ts'o expressed his opinion that the dispute over systemd's centralized design philosophy, more than technical concerns, indicates a dangerous general trend toward uniformizing the Linux ecosystem, alienating and marginalizing parts of the open-source community, and leaving little room for alternative projects. He cited similarities with the attitude he found in the GNOME project toward non-standard configurations. On social media, Ts'o also later compared the attitudes of Sievers and his co-developer, Lennart Poettering, to that of GNOME's developers.
Forks and alternative implementations
Forks of systemd are closely tied to critiques of it outlined in the above section. Forks generally try to improve on at least one of portability (to other libcs and Unix-like systems), modularity, or size. A few forks have collaborated under the FreeInit banner.
Fork of components
eudev
In 2012, the Gentoo Linux project created a fork of udev in order to avoid dependency on the systemd architecture. The resulting fork is called eudev and it makes udev functionality available without systemd. A stated goal of the project is to keep eudev independent of any Linux distribution or init system. In 2021, Gentoo announced that support of eudev would cease at the beginning of 2022. An independent group of maintainers have since taken up eudev.
elogind
Elogind is the systemd project's "logind", extracted to be a standalone daemon. It integrates with PAM to know the set of users that are logged into a system and whether they are logged in graphically, on the console, or remotely. Elogind exposes this information via the standard org.freedesktop.login1 D-Bus interface, as well as through the file system using systemd's standard /run/systemd layout. Elogind also provides "libelogind", which is a subset of the facilities offered by "libsystemd". There is a "libelogind.pc" pkg-config file as well.
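Because elogind keeps systemd's logind bus API, a client cannot easily tell the two apart. The following sketch, again assuming the dbus-python bindings, queries the org.freedesktop.login1 manager for the current sessions and should behave the same against systemd-logind or elogind:

    import dbus

    bus = dbus.SystemBus()
    login1 = bus.get_object("org.freedesktop.login1", "/org/freedesktop/login1")
    manager = dbus.Interface(login1, dbus_interface="org.freedesktop.login1.Manager")

    # Each record holds the session id, user id, user name, seat and object path.
    for session_id, uid, user, seat, path in manager.ListSessions():
        print(f"session {session_id}: user={user} uid={uid} seat={seat or '-'}")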
consolekit2
ConsoleKit was forked in October 2014 by Xfce developers wanting its features to still be maintained and available on operating systems other than Linux. While not ruling out the possibility of reviving the original repository in the long term, the main developer considers ConsoleKit2 a temporary necessity until systembsd matures. Development ceased in December 2017 and the project may be defunct.
LoginKit
LoginKit was an attempt to implement a logind (systemd-logind) shim, which would allow packages that depend on systemd-logind to work without dependency on a specific init system. The project has been defunct since February 2015.
systembsd
In 2014, a Google Summer of Code project named "systembsd" was started in order to provide alternative implementations of these APIs for OpenBSD. The original project developer began it in order to ease his transition from Linux to OpenBSD. Project development finished in July 2016.The systembsd project did not provide an init replacement, but aimed to provide OpenBSD with compatible daemons for hostnamed, timedated, localed, and logind. The project did not create new systemd-like functionality, and was only meant to act as a wrapper over the native OpenBSD system. The developer aimed for systembsd to be installable as part of the ports collection, not as part of a base system, stating that "systemd and *BSD differ fundamentally in terms of philosophy and development practices."
notsystemd
Notsystemd intends to implement all of systemd's features in a way that works with any init system. It was forked by the Parabola GNU/Linux-libre developers so that packages could be built with their development tools without needing systemd installed to run systemd-nspawn.
Fork including init system
uselessd
In 2014, uselessd was created as a lightweight fork of systemd. The project sought to remove features and programs deemed unnecessary for an init system, as well as address other perceived faults. Project development halted in January 2015.uselessd supported the musl and µClibc libraries, so it may have been used on embedded systems, whereas systemd only supports glibc. The uselessd project had planned further improvements on cross-platform compatibility, as well as architectural overhauls and refactoring for the Linux build in the future.
InitWare
InitWare is a modular refactor of systemd, porting the system to BSD platforms without glibc or Linux-specific system calls. It is known to work on DragonFly BSD, FreeBSD, NetBSD, and GNU/Linux. Components considered unnecessary are dropped.
See also
BusyBox
launchd
Linux distributions without systemd
Operating system service management
readahead
runit
Service Management Facility
GNU Daemon Shepherd
Upstart
svchost.exe
Official website
Systemd on GitHub
Rethinking PID 1 |
Gentoo may refer to:
Gentoo penguin, a species of bird.
Gentoo Linux, a computer operating system distribution named after the penguin.
Gentoo (file manager), a free file manager for Linux and other Unix-like systems.
Gentoo (term), an alternative, archaic name of the Telugu language, or a historical, archaic term for Hindus.
Gentoo Code, a document translated from Sanskrit regarding inheritance laws in Hinduism. |
Red Hat Linux was a widely used commercial open-source Linux distribution created by Red Hat until its discontinuation in 2004.Early releases of Red Hat Linux were called Red Hat Commercial Linux. Red Hat published the first non-beta release in May 1995. It was the first Linux distribution to use the RPM Package Manager as its packaging format, and over time has served as the starting point for several other distributions, such as Mandriva Linux and Yellow Dog Linux.
In 2003, Red Hat discontinued the Red Hat Linux line in favor of Red Hat Enterprise Linux (RHEL) for enterprise environments. Fedora Linux, developed by the community-supported Fedora Project and sponsored by Red Hat, is a free-of-cost alternative intended for home use. Red Hat Linux 9, the final release, hit its official end-of-life on April 30, 2004, although updates were published for it through 2006 by the Fedora Legacy project until the updates were discontinued in early 2007.
Features
Version 3.0.3 was one of the first Linux distributions to support ELF (Executable and Linkable Format) binaries instead of the older a.out format.Red Hat Linux introduced a graphical installer called Anaconda developed by Ketan Bagal, intended to be easy to use for novices, and which has since been adopted by some other Linux distributions. It also introduced a built-in tool called Lokkit for configuring the firewall capabilities.
In version 6 Red Hat moved to glibc 2.1, egcs-1.2, and the 2.2 kernel. It was the first version to use GNOME as its default graphical environment. It also introduced Kudzu, a software library for automatic discovery and configuration of hardware. Version 7 was released in preparation for the 2.4 kernel, although the first release still used the stable 2.2 kernel. Glibc was updated to version 2.1.92, a beta of the upcoming version 2.2, and Red Hat used a patched version of GCC from CVS that it called "2.96". The decision to ship an unstable GCC version was due to GCC 2.95's poor performance on non-i386 platforms, especially DEC Alpha. Newer GCC versions had also improved support for the C++ standard, which caused much of the existing code not to compile.
In particular, the use of a non-released version of GCC caused some criticism, e.g. from Linus Torvalds and the GCC Steering Committee; Red Hat was forced to defend this decision.
GCC 2.96 failed to compile the Linux kernel, and some other software used in Red Hat, due to stricter checks. It also had an incompatible C++ ABI with other compilers. The distribution included a previous version of GCC for compiling the kernel, called "kgcc".
As of Red Hat Linux 7.0, UTF-8 was enabled as the default character encoding for the system. This had little effect on English-speaking users, but enabled much easier internationalisation and seamless support for multiple languages, including ideographic, bi-directional and complex script languages along with European languages. However, this did cause some negative reactions among existing Western European users, whose legacy ISO-8859–based setups were broken by the change. Version 8.0 was also the first to include the Bluecurve desktop theme. It used a common theme for GNOME-2 and KDE 3.0.2 desktops, as well as OpenOffice-1.0. KDE members did not appreciate the change, claiming that it was not in the best interests of KDE. Version 9 supported the Native POSIX Thread Library, which was ported to the 2.4 series kernels by Red Hat. Red Hat Linux lacked many features due to possible copyright and patent problems. For example, MP3 support was disabled in both Rhythmbox and XMMS; instead, Red Hat recommended using Ogg Vorbis, which has no patents. MP3 support, however, could be installed afterwards through the use of packages. Support for Microsoft's NTFS file system was also missing, but could be freely installed as well.
Fedora Linux
Red Hat Linux was originally developed exclusively inside Red Hat, with the only feedback from users coming through bug reports and contributions to the included software packages – not contributions to the distribution as such. This was changed in late 2003 when Red Hat Linux merged with the community-based Fedora Project. The new plan was to draw most of the codebase from Fedora Linux when creating new Red Hat Enterprise Linux distributions. Fedora Linux replaced the original Red Hat Linux download and retail version. The model is similar to the relationship between Netscape Communicator and Mozilla, or StarOffice and OpenOffice.org, although in this case the resulting commercial product is also fully free software.
Version history
Release dates were drawn from announcements on comp.os.linux.announce. Version names were chosen so as to be cognitively related to the prior release, yet not related in the same way as the release before that. The Fedora and Red Hat Projects were merged on September 22, 2003.
See also
Fedora Linux release history
List of Linux distributions
Think Blue Linux
Fedora Linux – Free, community-supported, home version of Red Hat Linux
Fedora Project – History of Red Hat Linux
Red Hat, Inc. – Linux documentation
Linux Kernel Organization – Red Hat Archive
Red Hat Linux at DistroWatch
Mapping of RedHat Versions and Code Names to LINUX Kernel Versions |
Manjaro ( man-JAAR-oh) is a free and open-source Linux distribution based on the Arch Linux operating system that has a focus on user-friendliness and accessibility. It uses a rolling release update model and Pacman as its package manager. It is developed mainly in Austria, France and Germany.
History
Manjaro was first released on July 10, 2011. By mid-2013, it was in the beta stage, though key elements of the final system had all been implemented, including a GUI installer (then a fork of the Antergos installer); a package manager (Pacman) with a choice of frontends, Pamac (GTK) for the Xfce desktop and Octopi (Qt) for its Openbox edition; MHWD (Manjaro Hardware Detection, for detection of free and proprietary video drivers); and Manjaro Settings Manager (for system-wide settings, user management, and graphics driver installation and management). GNOME Shell support was dropped with the release of version 0.8.3 in 2012. However, efforts within Arch Linux made it possible to restart the Cinnamon/GNOME edition as a community edition. An official release offering the GNOME desktop environment was reinstated in March 2017. During the development of Manjaro 0.9.0 at the end of August 2015, the team decided to switch to year-and-month designations for Manjaro's version scheme instead of numbers. This applied to both the 0.8.x series and the new 0.9.x series, renaming 0.8.13, released in June 2015, as 15.06, and so on. Manjaro 15.09, codenamed Bellatrix and formerly known as 0.9.0, was released on 27 September 2015 with the new Calamares installer and updated packages. In September 2017, Manjaro announced that support for the i686 architecture would be dropped because "popularity of this architecture is decreasing". However, in November 2017 a semi-official community project, "manjaro32", based on archlinux32, continued i686 support. In September 2019, the company Manjaro GmbH & Co. KG was founded. The website It's FOSS stated the company was formed '... to effectively engage in commercial agreements, form partnerships, and offer professional services'.
Official editions
Manjaro Xfce, featuring Manjaro's own dark theme and the Xfce desktop.
Manjaro KDE, featuring Manjaro's own dark Plasma theme and the latest KDE Plasma 5, apps and frameworks.
Manjaro GNOME became the third official version with the Gellivara release; it offers the GNOME desktop with a version of the Manjaro theme.
While not official releases, Manjaro Community Editions are maintained by members of the Manjaro community. They offer additional user interfaces over the official releases, including Budgie, Cinnamon, Deepin, i3, MATE, and Sway.
Manjaro also has editions for devices with ARM processors, such as single-board computers or Pinebook notebooks.
Features
Manjaro comes with both a CLI and a graphical installer. The rolling release model means that users do not need to upgrade or reinstall the whole system to keep it up to date in line with the latest release. Package management is handled by Pacman via the command line (terminal) and via front-end GUI package manager tools like the pre-installed Pamac. The system can be configured as either stable (the default) or bleeding edge, in line with Arch. The repositories are managed with Manjaro's own tool, BoxIt, which is designed like Git. Manjaro includes its own GUI settings manager where options like language, drivers, and kernel version can be configured. Certain commonly used Arch utilities, such as the Arch Build System (ABS), are available but have alternate implementations in Manjaro. Manjaro Architect is a CLI net installer that allows users to choose their own kernel versions, drivers, and desktop environments during the install process. Both the official and the community editions' desktop environments are available for selection. For GUI-based installations, Manjaro uses the GUI installer Calamares.
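As a small illustration of how package management can be scripted around Pacman, the following Python sketch counts the installed packages by calling pacman -Q; it assumes pacman is present, as on any Manjaro or Arch system.

    import subprocess

    # "pacman -Q" prints one "name version" pair per installed package.
    result = subprocess.run(["pacman", "-Q"], capture_output=True, text=True, check=True)
    packages = [line.split()[0] for line in result.stdout.splitlines() if line.strip()]
    print(f"{len(packages)} packages installed")

    # "pacman -Qu" would instead list packages with upgrades pending against the
    # most recently synced package databases.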
Release history
The 0.8.x series releases were the last versions of Manjaro to use a version number. The desktop environments offered, as well as the number of programs bundled into each separate release, have varied in different releases.
Manjaro typically includes the latest versions of supported desktop environments.
Relation to Arch Linux
The main difference compared to Arch Linux is the repositories.
Manjaro uses three sets of repositories:
Unstable: contains the most up to date Arch Linux packages. Unstable is synced several times a day with Arch package releases.
Testing: contains packages from the unstable repositories after they have been tested by users.
Stable: contains only packages that are deemed stable by the development team, which can mean a delay of a few weeks before getting major upgrades.
As of January 2019, package updates derived from the Arch Linux stable branch to the Manjaro stable branch typically have a lag of a few weeks.
Derivatives
Netrunner Rolling is a Manjaro-based counterpart to Blue Systems' Debian-based Netrunner. The first version of Netrunner Rolling, 2014.04, was based on Manjaro 0.8.9 KDE and was released in 2014. The last released version was Netrunner Rolling 2019.04. The Sonar GNU/Linux project aimed to provide a barrier-free Linux to people who required assistive technology for computer use, with support for the GNOME and MATE desktops. The first version was released in February 2015, and the latest release was in 2016. As of 2017, the Sonar project was discontinued.
Hardware
Although Manjaro can be installed on most systems, some vendors sell computers with Manjaro pre-installed on them. Suppliers of computers pre-installed with Manjaro include StarLabs Systems, Tuxedo Computers, manjarocomputer.eu and Pine64.
Manjaro with Plasma Mobile desktop environment is the default operating system on PinePhone, an ARM-based smartphone released by Pine64.
Reception
Over the years, Manjaro Linux has been recognized as a desktop operating system that is easy to set up and use, suitable for both beginners and experienced users. It is recommended as an easy and friendly way to install and maintain a cutting-edge Arch-derived distribution. Some reviewers find appeal in the large range of contributed software available in the AUR, which has a reputation for being kept up to date from upstream resources. Others highlight the wide selection of official and community editions with different desktop environments. Very early versions of Manjaro had a reputation for crashing and for installation difficulties, but this was reported to have improved with later versions; by 2014 the distribution was, according to Jesse Smith of DistroWatch, "proving to be probably the most polished child of Arch Linux I have used to date. The distribution is not only easy to set up, but it has a friendly feel, complete with a nice graphical package manager, quality system installer and helpful welcome screen. Manjaro comes with lots of useful software and multimedia support."
Smith did a review of Manjaro 17.0.2 Xfce in July 2017, and observed that it did "a lot of things well". He went on to extol some of the notable features as part of his conclusion: "I found Manjaro's Xfce edition to be very fast and unusually light on memory. The distribution worked smoothly and worked well with both my physical hardware and my virtual environment. I also enjoyed Manjaro's habit of telling me when new software (particularly new versions of the Linux kernel) was available.
I fumbled a little with Manjaro's settings panel and finding some settings, but in the end I was pleased with the range of configuration I could achieve with the distribution. I especially like that Manjaro makes it easy to block notifications and keep windows from stealing focus. The distribution can be made to stay pleasantly out of the way."
Official website
Manjaro at DistroWatch
Manjaro on SourceForge |
A+ may refer to:
A+ (blood type)
A+ (grade), the highest grade achievable in a grading system based on letters of the alphabet
A+ (magazine), an Apple II periodical published by Ziff Davis, from 1983 to 1989
A+ (programming language), a dialect of APL with aggressive extensions
A+ (rapper) (born 1982), Andre Levins, an American rapper
A+ (EP), 2015 EP by Hyuna
A-Plus (rapper) (born 1974), Adam Carter, an American rapper
A+ certification, the professional computer technician certification by CompTIA
a+ (Mexican TV network), a Mexican TV channel
Animax (Eastern European TV channel), an anime television channel formerly known as A+
A-Plus (store), an American convenience store chain owned & operated by Sunoco
A-Plus TV, a television station in Pakistan
A Plus (website), a social news company based in New York City
Missouri A+ schools program, a student incentives program in Missouri
"A+", the highest average grade that can be given to a film by audiences polled by CinemaScore
"A-Plus", a song by Hieroglyphics on the album 3rd Eye Vision |
ABAP (Advanced Business Application Programming, originally Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report preparation processor") is a high-level programming language created by the German software company SAP SE. It is currently positioned, alongside Java, as the language for programming the SAP NetWeaver Application Server, which is part of the SAP NetWeaver platform for building business applications.
Introduction
ABAP is one of the many application-specific fourth-generation languages (4GLs) first developed in the 1980s. It was originally the report language for SAP R/2, a platform that enabled large corporations to build mainframe business applications for materials management and financial and management accounting.
ABAP used to be an abbreviation of Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "generic report preparation processor", but was later renamed to the English Advanced Business Application Programming. ABAP was one of the first languages to include the concept of Logical Databases (LDBs), which provide a high level of abstraction from the underlying database level(s).
The ABAP language was originally used by developers to develop the SAP R/3 platform. It was also intended to be used by SAP customers to enhance SAP applications – customers can develop custom reports and interfaces with ABAP programming. The language was geared towards more technical customers with programming experience.
ABAP remains the language for creating programs for the client–server R/3 system, which SAP first released in 1992. As computer hardware evolved through the 1990s, more and more of SAP's applications and systems were written in ABAP. By 2001, all but the most basic functions were written in ABAP. In 1999, SAP released an object-oriented extension to ABAP called ABAP Objects, along with R/3 release 4.6.
SAP's current development platform NetWeaver supports both ABAP and Java.
ABAP provides a layer of abstraction between the business applications and the underlying operating system and database. This ensures that applications do not depend directly upon a specific server or database platform and can easily be ported from one platform to another.
SAP NetWeaver currently runs on UNIX (AIX, HP-UX, Solaris, Linux), Microsoft Windows, i5/OS on IBM System i (formerly iSeries, AS/400), and z/OS on IBM System z (formerly zSeries, S/390). Supported databases are HANA, SAP ASE (formerly Sybase), IBM Db2, Informix, MaxDB, Oracle, and Microsoft SQL Server (support for Informix was discontinued in SAP Basis release 7.00).
ABAP runtime environment
All ABAP programs reside inside the SAP database. They are not stored in separate external files like Java or C++ programs. In the database all ABAP code exists in two forms: source code, which can be viewed and edited with the ABAP Workbench tools; and generated code, a binary representation somewhat comparable with Java bytecode. ABAP programs execute under the control of the runtime system, which is part of the SAP kernel. The runtime system is responsible for processing ABAP statements, controlling the flow logic of screens and responding to events (such as a user clicking on a screen button); in this respect it can be seen as a Virtual Machine comparable with the Java VM. A key component of the ABAP runtime system is the Database Interface, which turns database-independent ABAP statements ("Open SQL") into statements understood by the underlying DBMS ("Native SQL"). The database interface handles all the communication with the relational database on behalf of ABAP programs; it also contains extra features such as buffering of tables and frequently accessed data in the local memory of the application server.
SAP systems and landscapes
All SAP data exists and all SAP software runs in the context of a SAP system. A system consists of a central relational database and one or more application servers ("instances") accessing the data and programs in this database. A SAP system contains at least one instance but may contain more, mostly for reasons of sizing and performance. In a system with multiple instances, load balancing mechanisms ensure that the load is spread evenly over the available application servers.
Installations of the Web Application Server (landscapes) typically consist of three systems: one for development; one for testing and quality assurance; and one for production. The landscape may contain more systems (e.g., separate systems for unit testing and pre-production testing) or it may contain fewer (e.g., only development and production, without separate QA); nevertheless three is the most common configuration. ABAP programs are created and undergo first testing in the development system. Afterwards they are distributed to the other systems in the landscape. These actions take place under control of the Change and Transport System (CTS), which is responsible for concurrency control (e.g., preventing two developers from changing the same code at the same time), version management, and deployment of programs on the QA and production systems.
The Web Application Server consists of three layers: the database layer; the application layer; and the presentation layer. These layers may run on the same or on different physical machines. The database layer contains the relational database and the database software. The application layer contains the instance or instances of the system. All application processes, including the business transactions and the ABAP development, run on the application layer. The presentation layer handles the interaction with users of the system. Online access to ABAP application servers can go via a proprietary graphical interface, which is called "SAP GUI", or via a Web browser.
Software layers
ABAP software is deployed in software components.
Examples for these are:
SAP_BASIS is the technical base layer which is required in every ABAP system.
SAP_ABA contains functionality which is required for all kinds of business applications, such as business partner and address management.
SAP_UI provides the functionality to create SAP UI5 applications.
BBPCRM is an example of a business application, in this case the CRM application.
Transactions
A transaction in SAP terminology is the execution of a program. The normal way of executing ABAP code in the SAP system is by entering a transaction code (for instance, VA01 is the transaction code for "Create Sales Order"). Common transaction codes used by ABAP developers are SE38, SE09, SE10, SE24, SE11, SE16N, SE80, SE37 and ST22. Transactions can be called via system-defined or user-specific, role-based menus. They can also be started by entering the transaction code directly into a command field, which is present in every SAP screen. Transactions can also be invoked programmatically by means of the ABAP statements CALL TRANSACTION and LEAVE TO TRANSACTION.
The general notion of a transaction is called a Logical Unit of Work (LUW) in SAP terminology; the short form of transaction code is T-code.
Types of ABAP programs
As in other programming languages, an ABAP program is either an executable unit or a library, which provides reusable code to other programs and is not independently executable.
ABAP distinguishes two types of executable programs:
Reports
Module pools
Reports follow a relatively simple programming model whereby a user optionally enters a set of parameters (e.g., a selection over a subset of data) and the program then uses the input parameters to produce a report in the form of an interactive list. The term "report" can be somewhat misleading in that reports can also be designed to modify data; the reason why these programs are called reports is the "list-oriented" nature of the output they produce.
Module pools define more complex patterns of user interaction using a collection of screens. The term “screen” refers to the actual, physical image that the user sees. Each screen also has a "flow logic", which refers to the ABAP code implicitly invoked by the screens, which is divided into a "PBO" (Process Before Output) and "PAI" (Process After Input) section. In SAP documentation the term “dynpro” (dynamic program) refers to the combination of the screen and its flow logic.
The non-executable program types are:
INCLUDE modules - These get included at generation time into the calling unit; they are often used to subdivide large programs.
Subroutine pools - These contain ABAP subroutines (blocks of code enclosed by FORM/ENDFORM statements and invoked with PERFORM).
Function groups - These are libraries of self-contained function modules (enclosed by FUNCTION/ENDFUNCTION and invoked with CALL FUNCTION).
Object classes - These are similar to Java classes; they define a set of methods and attributes.
Interfaces - These are similar to Java interfaces; they contain "empty" method definitions, for which any class implementing the interface must provide explicit code.
Type pools - These define collections of data types and constants.
ABAP programs are composed of individual sentences (statements). The first word in a statement is called an ABAP keyword. Each statement ends with a period. Words must always be separated by at least one space. Statements can be indented freely. With keywords, additions and operands, the ABAP runtime system does not differentiate between uppercase and lowercase.
Statements can extend beyond one line. You can have several statements in a single line (though this is not recommended). Lines that begin with an asterisk (*) in the first column are recognized as comment lines by the ABAP runtime system and are ignored. Double quotation marks (") indicate that the remainder of a line is a comment.
Development environment
There are two possible ways to develop in ABAP. The availability depends on the release of the ABAP system.
ABAP Workbench
The ABAP Workbench is part of the ABAP system and is accessed via SAP GUI. It contains different tools for editing programs. The most important of these are (transaction codes are shown in parentheses):
ABAP Editor for writing and editing reports, module pools, includes and subroutine pools (SE38)
ABAP Dictionary for processing database table definitions and retrieving global types (SE11)
Menu Painter for designing the user interface (menu bar, standard toolbar, application toolbar, function key assignment) (SE41)
Screen Painter for designing screens and flow logic (SE51)
Function Builder for function modules (SE37)
Class Builder for ABAP Objects classes and interfaces (SE24)
The Object Navigator (transaction SE80) provides a single integrated interface into these various tools.
ABAP Development Tools
The ABAP Development Tools (ADT), formerly known as "ABAP in Eclipse", is a set of plugins for the Eclipse IDE used to develop ABAP objects. In this scenario, the ABAP developer installs the required tools on a local computer and works locally, while continuous synchronization with the backend is performed.
ABAP Dictionary
The ABAP Dictionary contains all metadata about the data in the SAP system. It is closely linked with the ABAP Workbench in that any reference to data (e.g., a table, a view, or a data type) will be obtained from the dictionary. Developers use the ABAP Dictionary transactions (directly or through the SE80 Object Navigator inside the ABAP Workbench) to display and maintain this metadata.
When a dictionary object is changed, a program that references the changed object will automatically reference the new version the next time the program runs. Because ABAP is interpreted, it is not necessary to recompile programs that reference changed dictionary objects.
A brief description of the most important types of dictionary objects follows:
Tables are data containers that exist in the underlying relational database. In the majority of cases there is a 1-to-1 relationship between the definition of a table in the ABAP Dictionary and the definition of that same table in the database (same name, same columns). These tables are known as "transparent". There are two types of non-transparent tables: "pooled" tables exist as independent entities in the ABAP Dictionary but they are grouped together in large physical tables ("pools") at the database level. Pooled tables are often small tables holding for example configuration data. "Clustered" tables are physically grouped in "clusters" based on their primary keys; for instance, assume that a clustered table H contains "header" data about sales invoices, whereas another clustered table D holds the invoice line items. Each row of H would then be physically grouped with the related rows from D inside a "cluster table" in the database. This type of clustering, which is designed to improve performance, also exists as native functionality in some, though not all, relational database systems.
Indexes provide accelerated access to table data for often used selection conditions. Every SAP table has a "primary index", which is created implicitly along with the table and is used to enforce primary key uniqueness. Additional indexes (unique or non-unique) may be defined; these are called "secondary indexes".
Views have the same purpose as in the underlying database: they define subsets of columns (and/or rows) from one or - using a join condition - several tables. Since views are virtual tables (they refer to data in other tables) they do not take a substantial amount of space.
Structures are complex data types consisting of multiple fields (comparable to struct in C/C++).
Data elements provide the semantic content for a table or structure field. For example, dozens of tables and structures might contain a field giving the price (of a finished product, raw material, resource, ...). All these fields could have the same data element "PRICE".
Domains define the structural characteristics of a data element. For example, the data element PRICE could have an assigned domain that defines the price as a numeric field with two decimals. Domains can also carry semantic content in providing a list of possible values. For example, a domain "BOOLEAN" could define a field of type "character" with length 1 and case-insensitive, but would also restrict the possible values to "T" (true) or "F" (false).
Search helps (successors to the now obsolete "matchcodes") provide advanced search strategies when a user wants to see the possible values for a data field. The ABAP runtime provides implicit assistance (by listing all values for the field, e.g. all existing customer numbers) but search helps can be used to refine this functionality, e.g. by providing customer searches by geographical location, credit rating, etc.
Lock objects implement application-level locking when changing data.
ABAP syntax
This brief description of the ABAP syntax begins with the ubiquitous "Hello world" program.
Hello world
This example contains two statements: REPORT and WRITE. The program displays a list on the screen. In this case, the list consists of the single line "Hello World". The REPORT statement indicates that this program is a report. This program could be a module pool after replacing the REPORT statement with PROGRAM.
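The program itself is not reproduced above; a minimal sketch of what it looks like follows (the report name zhelloworld is an arbitrary, assumed name):
REPORT zhelloworld.
WRITE 'Hello World'.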
Chained statements
Consecutive statements with an identical first (leftmost) part can be combined into a "chained" statement using the chain operator :. The common part of the statements is written to the left of the colon, the differing parts are written to the right of the colon and separated by commas. The colon operator is attached directly to the preceding token, without a space (the same applies to the commas in the token list on the right, as can be seen in the examples below).
Chaining is often used in WRITE statements. WRITE accepts just one argument, so if for instance you wanted to display three fields from a structure called FLIGHTINFO, you would have to code:
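A sketch of the unchained form; the field names CITYFROM, CITYTO and AIRPTO are assumptions, since the structure's actual fields are not given above:
WRITE FLIGHTINFO-CITYFROM.
WRITE FLIGHTINFO-CITYTO.
WRITE FLIGHTINFO-AIRPTO.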
Chaining the statements results in a more readable and more intuitive form:
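Using the same assumed field names, the three statements collapse into one chained statement:
WRITE: FLIGHTINFO-CITYFROM, FLIGHTINFO-CITYTO, FLIGHTINFO-AIRPTO.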
In a chain statement, the first part (before the colon) is not limited to the statement name alone. The entire common part of the consecutive statements can be placed before the colon. Example:
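(A sketch; SUM is an assumed numeric variable.)
SUM = SUM + 1.
SUM = SUM + 2.
SUM = SUM + 3.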
could be rewritten in chained form as:
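SUM = SUM + : 1, 2, 3.
Here the common part SUM = SUM + stands before the chain colon, and only the differing operands are listed after it.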
Comments
ABAP has 2 ways of defining text as a comment:
An asterisk (*) in the leftmost column of a line makes the entire line a comment
A double quotation mark (") anywhere on a line makes the rest of that line a comment. Example:
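A brief sketch showing both comment styles (the WRITE statement itself is arbitrary):
* This entire line is a comment
WRITE 'Comments example'. " The rest of this line is a comment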
Spaces
Code in ABAP is whitespace-sensitive. Depending on whether spaces are present, the same sequence of tokens can be interpreted either as a substring access (assigning to a variable x the substring of a variable a, starting at offset b with the length defined by a variable c) or as an arithmetic expression (assigning to x the sum of the variable a and the result of the call to a method b with the parameter c).
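A minimal sketch of the two contrasting statements described above, using the variable names from the text (x, a, b, c); the trailing double-quote comments are explanatory:
x = a+b(c).       " without spaces: substring of a, offset b, length c
x = a + b( c ).   " with spaces: a plus the result of the method call b( c )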
ABAP statements
In contrast with languages like C/C++ or Java, which define a limited set of language-specific statements and provide most functionality via libraries, ABAP contains an extensive amount of built-in statements. These statements traditionally used sentence-like structures and avoided symbols, making ABAP programs relatively verbose. However, in more recent versions of the ABAP language, a terser style is possible.An example of statement based syntax (whose syntax originates in COBOL) versus expression-based syntax (as in C/Java):
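A sketch of the two styles (price and tax are assumed numeric variables):
* Statement-based syntax (COBOL-like):
ADD tax TO price.
* Expression-based syntax (C- or Java-like):
price = price + tax.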
Data types and variables
ABAP provides a set of built-in data types. In addition, every structure, table, view or data element defined in the ABAP Dictionary can be used to type a variable. Also, object classes and interfaces can be used as types.
The built-in data types include C (character), N (numeric character), D (date), T (time), I (integer), P (packed decimal), F (floating point), X (hexadecimal byte), and the variable-length types STRING and XSTRING.
Date variables or constants (type D) contain the number of days since January 1, 1 AD. Time variables or constants (type T) contain the number of seconds since midnight. A special characteristic of both types is that they can be accessed both as integers and as character strings (with internal format "YYYYMMDD" for dates and "hhmmss" for times), which can be used for date and time handling. For example, the code snippet below calculates the last day of the previous month (note: SY-DATUM is a system-defined variable containing the current date):
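A sketch of such a snippet (the variable name last_eom is an assumption):
DATA last_eom TYPE d.              " will hold the last day of the previous month

last_eom = sy-datum.               " start from the current date
last_eom+6(2) = '01'.              " overwrite the day part: first day of the current month
last_eom = last_eom - 1.           " subtract one day: last day of the previous month

WRITE: 'Last day of previous month was', last_eom.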
All ABAP variables have to be explicitly declared in order to be used. They can be declared either with individual statements and explicit typing or, since ABAP 7.40, inline with inferred typing.
Explicitly typed declaration
Normally all declarations are placed at the top of the code module (program, subroutine, function) before the first executable statement; this placement is a convention and not an enforced syntax rule. The declaration consists of the name, type, length (where applicable), additional modifiers (e.g. the number of implied decimals for a packed decimal field) and optionally an initial value:
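A sketch with assumed variable names, showing types, lengths, decimals and initial values:
DATA: counter     TYPE i,
      validity    TYPE i VALUE 60,
      id          TYPE n LENGTH 10,
      amount      TYPE p LENGTH 8 DECIMALS 2,
      delivery    TYPE d,
      origin      TYPE c LENGTH 3 VALUE 'USA',
      description TYPE string.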
Notice the use of the colon to chain together consecutive DATA statements.
Inline declaration
Since ABAP 7.40, variables can be declared inline with the following syntax:
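A sketch; the variable name is an assumption, and its type (a character type) is inferred from the literal on the right-hand side:
DATA(greeting) = 'Hello World'.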
For this type of declaration it must be possible to infer the type statically, e.g. by method signature or database table structure.
This syntax is also possible in OpenSQL statements:
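A sketch; EKKO is the standard purchasing document header table, and lv_ebeln is an assumed host variable holding a document number (note the @ escapes required by the newer Open SQL syntax):
SELECT * FROM ekko
  INTO TABLE @DATA(lt_ekko)
  WHERE ebeln = @lv_ebeln.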
ABAP Objects
The ABAP language supports object-oriented programming, through a feature known as "ABAP Objects". This helps to simplify applications and make them more controllable.
ABAP Objects is fully compatible with the existing language, so one can use existing statements and modularization units in programs that use ABAP Objects, and can also use ABAP Objects in existing ABAP programs. Syntax checking is stronger in ABAP Objects programs, and some syntactical forms (usually older ones) of certain statements are not permitted.
Objects form a capsule which combines data (attributes) with the respective behavior (methods). Objects should enable programmers to map a real problem and its proposed software solution on a one-to-one basis. Typical objects in a business environment are, for example, ‘Customer’, ‘Order’, or ‘Invoice’. From Release 3.1 onwards, the Business Object Repository (BOR) of SAP Web Application Server ABAP has contained examples of such objects. The BOR object model will be integrated into ABAP Objects in the next Release by migrating the BOR object types to the ABAP class library.
A comprehensive introduction to object orientation as a whole would go far beyond the limits of this introduction to ABAP Objects. This documentation introduces a selection of terms that are used universally in object orientation and also occur in ABAP Objects. In subsequent sections, it goes on to discuss in more detail how these terms are used in ABAP Objects. The end of this section contains a list of further reading, with a selection of titles about object orientation.
Objects are instances of classes. They contain data and provide services. The data forms the attributes of the object. The services are known as methods (also known as operations or functions). Typically, methods operate on private data (the attributes, or state of the object), which is only visible to the methods of the object. Thus the attributes of an object cannot be changed directly by the user, but only by the methods of the object. This guarantees the internal consistency of the object.
Classes describe objects. From a technical point of view, objects are runtime instances of a class. In theory, any number of objects based on a single class may be created. Each instance (object) of a class has a unique identity and its own set of values for its attributes.
Object Encapsulation - Objects restrict the visibility of their resources (attributes and methods) to other users. Every object has an interface, which determines how other objects can interact with it. The implementation of the object is encapsulated, that is, invisible outside the object itself.
Inheritance - An existing class may be used to derive a new class. Derived classes inherit the data and methods of the superclass. However, they can overwrite existing methods, and also add new ones.
Polymorphism - Identical (identically-named) methods behave differently in different classes. In ABAP Objects, polymorphism is implemented by redefining methods during inheritance and by using constructs called interfaces.
CDS Views
The ABAP Core Data Services (ABAP CDS) are the implementation of the general CDS concept for AS ABAP. ABAP CDS makes it possible to define semantic data models on the central database of the application server. On AS ABAP, these models can be defined independently of the database system. The entities of these models provide enhanced access functions when compared with existing database tables and views defined in ABAP Dictionary, making it possible to optimize Open SQL-based applications. This is particularly clear when an AS ABAP uses a SAP HANA database, since its in-memory characteristics can be implemented in an optimum manner.
The data models are defined using the data definition language (DDL) and data control language (DCL) provided in the ABAP CDS in the ABAP CDS syntax. The objects defined using these languages are integrated into ABAP Dictionary and managed here too.
CDS source code can only be programmed in the Eclipse-based ABAP Development Tools (ADT). The Data Definition Language (DDL) and the Data Control Language (DCL) use different editors.
Features
Internal tables in ABAP
Internal tables are an important feature of the ABAP language. An internal table is defined similarly to a vector of structs in C++ or a vector of objects in Java. The main difference from these languages is that ABAP provides a collection of statements to easily access and manipulate the contents of internal tables. Note that ABAP does not support arrays; the only way to define a multi-element data object is to use an internal table.
Internal tables are a way to store variable data sets of a fixed structure in the working memory of ABAP, and they provide the functionality of dynamic arrays. The data is stored on a row-by-row basis, where each row has the same structure.
Internal tables are preferably used to store and format the content of database tables from within a program. Furthermore, internal tables in connection with structures are an important means of defining complex data structures in an ABAP program.
The following example defines an internal table with two fields with the format of database table VBRK.
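A sketch; the type and variable names are assumptions, and VBELN and ZUONR are two of VBRK's fields:
* Define a structured row type with two fields typed after table VBRK
TYPES: BEGIN OF t_vbrk,
         vbeln TYPE vbrk-vbeln,
         zuonr TYPE vbrk-zuonr,
       END OF t_vbrk.

* Define an internal table of that row type, and a second one with the same structure
DATA: gt_vbrk   TYPE STANDARD TABLE OF t_vbrk,
      gt_vbrk_2 LIKE gt_vbrk.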
History
The following list only gives a rough overview about some important milestones in the history of the language ABAP. For more details, see ABAP - Release-Specific Changes.
See also
ERP software
Secure Network Communications
SAP Logon Ticket
Single sign-on
ABAP — Keyword Documentation
SAP Help Portal
ABAP Development discussions, blogs, documents and videos on the SAP Community Network (SCN) |
ACL2 ("A Computational Logic for Applicative Common Lisp") is a software system consisting of a programming language, an extensible theory in a first-order logic, and an automated theorem prover. ACL2 is designed to support automated reasoning in inductive logical theories, mostly for software and hardware verification. The input language and implementation of ACL2 are written in Common Lisp. ACL2 is free and open-source software.
Overview
The ACL2 programming language is an applicative (side-effect free) variant of Common Lisp. ACL2 is untyped. All ACL2 functions are total — that is, every function maps each object in the ACL2 universe to another object in its universe.
ACL2's base theory axiomatizes the semantics of its programming language and its built-in functions. User definitions in the programming language that satisfy a definitional principle extend the theory in a way that maintains the theory's logical consistency.
The core of ACL2's theorem prover is based on term rewriting, and this core is extensible in that user-discovered theorems can be used as ad hoc proof techniques for subsequent conjectures.
ACL2 is intended to be an "industrial strength" version of the Boyer–Moore theorem prover, NQTHM. Toward this goal, ACL2 has many features to support clean engineering of interesting mathematical and computational theories. ACL2 also derives efficiency from being built on Common Lisp; for example, the same specification that is the basis for inductive verification can be compiled and run natively.
In 2005, the authors of the Boyer-Moore family of provers, which includes ACL2, received the ACM Software System Award "for pioneering and engineering a most effective theorem prover (...) as a formal methods tool for verifying safety-critical hardware and software."
Proofs
ACL2 has had numerous industrial applications. In 1995, J Strother Moore, Matt Kaufmann and Tom Lynch used ACL2 to prove the correctness of the floating point division operation of the AMD K5 microprocessor in the wake of the Pentium FDIV bug. The interesting applications page of the ACL2 documentation has a summary of some uses of the system.
Industrial users of ACL2 include AMD, Arm, Centaur Technology, IBM, Intel, Oracle, and Collins Aerospace.
See also
List of proof assistants
ACL2 website
ACL2s - ACL2 Sedan - An Eclipse-based interface developed by Peter Dillinger and Pete Manolios that includes powerful features to provide users with more automation and support for specifying conjectures and proving theorems with ACL2. |
Agda may refer to:
Agda (programming language), the programming language and theorem prover
Agda (Golgafrinchan), the character in The Hitchhiker's Guide to the Galaxy by Douglas Adams
Liten Agda, the heroine of a Swedish legend
Agda Montelius, a Swedish feminist
Agda Persdotter, a Swedish royal mistress of the 16th century
Agda Rössel, a Swedish politician
Agda Östlund, a Swedish politician
Dayan Agda, a Filipino politician |
Keysight VEE is a graphical dataflow programming software development environment from Keysight Technologies for automated test, measurement, data analysis and reporting. VEE originally stood for Visual Engineering Environment; the product was developed by HP and designated HP VEE, and it has since been officially renamed Keysight VEE. Keysight VEE has been widely used in various industries, serving the entire product lifecycle, from design and validation to manufacturing. It is optimized for instrument control and automation with test and measurement devices such as data acquisition instruments like digital voltmeters and oscilloscopes, and source devices like signal generators and programmable power supplies.
Release history
A detailed list of features for each version is available from Keysight.
Keysight VEE objects and pins
A VEE program consists of multiple connected VEE objects (sometimes called devices). Each VEE object consists of different types of pins, namely data pins, sequence pins, execute pins (XEQ), control pins and error pins. Data pins govern the data flow propagation while sequence pins determine object execution order.
The pins on the left side of an object are called input pins, whereas the pins on the right are output pins. Two objects, A and B, are connected if the output pin of object A is connected to object B's input pin. Several connection lines can emanate from a single output pin, but at most one connection line can be attached to an input pin. All data input pins and execute pins must be connected, whereas control pins and output pins can be left unconnected.
Data flow and data propagation
Keysight VEE is a dataflow programming language. Within a VEE program, there are multiple connections between objects and data flows through objects from left to right while sequence flows from top to bottom.
When an object executes, it uses the input pin's value to perform an operation. When it finishes, the result is placed on the output pin. The output pin value placed is then propagated to any input pins that are connected to it.
A sequence pin is used to specify some object execution order. In most cases, sequence pins are left unconnected to allow data propagation to determine the execution order. If an object's sequence input pin is connected, the object will execute only if all data input pins and the sequence input pin have data.
When data is present on execute pins, it will force the object to operate and place results on its output pins, regardless of whether the data inputs have values.
A control pin is used to control the internal state of an object. It does not have an effect on data propagation.
An error pin is used to trap errors when an object executes. If it is present, no error dialog will be shown. When an error occurs, the error pin propagates instead of the data output pins, followed by the sequence output pin (if connected).
An object's execution order is determined by object connections and the data dependency rule. In general, an object with unconnected data input and sequence input pins will operate first.
If an object's sequence input pin is not connected, it will execute as soon as data is present on all data inputs. On the other hand, if a sequence input pin is connected, although data is present on all data input pins, the object will hold its execution until the sequence input pin is pinged. This may not be applicable to some non-primitive objects like the Junction and Collector objects.
For example, if object A's sequence output pin is connected, it will fire only after object A has executed and no further execution is possible in the objects descended from the data output pins and error pin of object A.
Some worked examples are available in the references and can be consulted for further explanation.
Instrument connectivity
Keysight VEE can connect and control a variety of Keysight and non-Keysight instrumentation via multiple interfaces. Keysight VEE supports the following interfaces:
GPIB, LAN, USB and RS-232
VXI and LXI plug and play drivers
IVI-COM drivers
PXI via NI-DAQmx
SCPI via the DirectIO object
Panel drivers
Extensive interoperability
Keysight VEE can interact with other programming languages using the built-in ActiveX Automation Server. Other software development programs such as Visual Basic, C/C++, Visual C# and all .NET compliant languages can call Keysight VEE UserFunctions. Keysight VEE is also integrated with Microsoft .NET Framework (Common Language Runtime and Framework Class Libraries) that provides a multitude of functions and controls that can be used to enhance a program such as adding email capability and accessing databases.
Access to over 2500 MATLAB analysis and visualization functions is made possible with the built-in MATLAB Signal Processing Toolbox. The built-in Microsoft Excel library provides direct access to save, retrieve and generate reports in spreadsheets.
Keysight VEE GUI panels and runtime deployment
Keysight VEE is notable for its capability to deploy an unlimited number of runtime programs, with no time limitations, at no extra cost. These runtime programs can contain a GUI panel that allows users, typically operators, to execute and control the program and the test execution.
See also
Dataflow programming
Graphical programming
Virtual instrumentation
LabVIEW
MATLAB
Keysight Technologies, Keysight VEE
Keysight Technologies, E-Learning Portal
Keysight Technologies, VEE Software Forums
Keysight Technologies, VEE Pro 30-day free trial download
Usage of VXIplug&play instrument driver in Agilent VEE |
ALGOL 58, originally named IAL, is one of the family of ALGOL computer programming languages. It was an early compromise design soon superseded by ALGOL 60. According to John Backus
The Zurich ACM-GAMM Conference had two principal motives in proposing the IAL: (a) To provide a means of communicating numerical methods and other procedures between people, and (b) To provide a means of realizing a stated process on a variety of machines...
ALGOL 58 introduced the fundamental notion of the compound statement, but it was restricted to control flow only, and it was not tied to identifier scope in the way that Algol 60's blocks were.
Name
Bauer attributes the name to Hermann Bottenbruch, who coined the term algorithmic language (algorithmische Sprache) in 1957, "at least in Germany".
History
There were proposals for a universal language by the Association for Computing Machinery (ACM) and also by the German Gesellschaft für Angewandte Mathematik und Mechanik ("Society of Applied Mathematics and Mechanics") (GAMM). It was decided to organize a joint meeting to combine them. The meeting took place from May 27 to June 2, 1958, at ETH Zurich and was attended by the following people:
Friedrich L. Bauer, Hermann Bottenbruch, Heinz Rutishauser, and Klaus Samelson (from the GAMM)
John Backus, Charles Katz, Alan Perlis, and Joseph Henry Wegstein (from the ACM).The language was originally proposed to be called IAL (International Algebraic Language) but according to Perlis,
this was rejected as an "'unspeakable' and pompous acronym". ALGOL was suggested instead, though not officially adopted until a year later. The publication following the meeting still used the name IAL.
By the end of 1958 the ZMMD-group had built a working ALGOL 58 compiler for the Z22 computer. ZMMD was an abbreviation for Zürich (where Rutishauser worked), München (workplace of Bauer and Samelson), Mainz (location of the Z22 computer), Darmstadt (workplace of Bottenbruch).
ALGOL 58 saw some implementation effort at IBM, but the effort was in competition with FORTRAN, and soon abandoned. It was also implemented at Dartmouth College on an LGP-30, but that implementation soon evolved into ALGOL 60. An implementation for the Burroughs 220 called BALGOL evolved along its own lines as well, but retained much of ALGOL 58's original character.
ALGOL 58's primary contribution was to later languages; it was used as a basis for JOVIAL, MAD, NELIAC and ALGO. It was also used during 1959 to publish algorithms in CACM, beginning a trend of using ALGOL notation in publication that continued for many years.
Time line of implementations of ALGOL 58 variants
ALGOL 58's influence on ALGOL 60
IAL introduced the three-level concept of reference, publication and hardware language, and the concept of "word delimiters" having a separate representation from freely chosen identifiers (hence, no reserved words). ALGOL 60 kept this three-level concept.
The distinction between assignment (:= representing a left-facing arrow) and the equality relation = was introduced in IAL and kept in ALGOL 60.
Both IAL and ALGOL 60 allow arrays with arbitrary lower and upper subscript bounds, and allow subscript bounds to be defined by integer expressions.
Both IAL and ALGOL 60 allow nesting of procedure declarations and the corresponding identifier scopes.
The IAL report described parameter substitution in much the same terms as the ALGOL 60 report, leaving open the possibility of call by name. It is unclear if this was realized at the time.
IAL allows numeric statement labels, that ALGOL 60 kept.
The possibility of including non-ALGOL code within a program was already hinted at, in the context of parameters to procedures.
Both IAL and ALGOL 60 have a switch designator, unrelated, however, to the switch statement in C and other languages.
In-line functions of the form f(x) := x / 2; were proposed in IAL but dropped in ALGOL 60.
IAL procedure declarations provide separate declaration lists for input and output parameters, a procedure can return multiple values; this mechanism was replaced in ALGOL 60 with the value declaration.
Variable declarations in IAL can be placed anywhere in the program and not necessarily at the beginning of a procedure. In contrast, the declarations within an ALGOL 60 block should occur before all execution statements.
The for-statement has the form for i:=base(increment)limit, directly resembling the loop of Rutishauser's programming language Superplan, replacing = with :=, and replacing its German keyword Für with the direct English translation for; ALGOL 60 replaced the parentheses with the word delimiters step and until, such that the previous statement instead would be i:=base step increment until limit.
The IAL if-statement does not have a then-clause or else-clause; it rather guards the succeeding statement. IAL provides an if either-statement that cleanly allows testing of multiple conditions. Both were replaced by ALGOL's if-then construct, with the introduction of the "dangling-else" ambiguity.
IAL provides macro-substitution with the do-statement; this was dropped in ALGOL 60.
IAL allows one or more array subscripts to be omitted when passing arrays to procedures, and to provide any or all arguments to a procedure passed to another procedure.
IAL's infix boolean operators are all of the same precedence level. Exponents are indicated with paired up and down arrows, which removed any confusion about the correct interpretation of nested exponents; ALGOL 60 replaced the paired arrows with a single up-arrow whose function is equivalent to FORTRAN's **.
The IAL report does not explicitly specify which standard functions were to be provided, making a vague reference to the "standard functions of analysis." The ALGOL 60 report has a more explicit list of standard functions.
Algol 58 at the Software Preservation Group (cf. Computer History Museum)
Algol 58 report from CACM at the Software Preservation Group |
ALGOL 60 (short for Algorithmic Language 1960) is a member of the ALGOL family of computer programming languages. It followed on from ALGOL 58, which had introduced code blocks and the begin and end pairs for delimiting them, representing a key advance in the rise of structured programming. ALGOL 60 was one of the first languages implementing function definitions that could be invoked recursively. It was also the first programming language in which function definitions could be nested within one another, with lexical scope. It gave rise to many other languages, including CPL, PL/I, Simula, BCPL, B, Pascal, and C. Practically every computer of the era had a systems programming language based on ALGOL 60 concepts.
Niklaus Wirth based his own ALGOL W on ALGOL 60 before moving to develop Pascal. ALGOL W was intended to be the next-generation ALGOL, but the ALGOL 68 committee decided on a design that was more complex and advanced rather than a cleaned-up, simplified ALGOL 60. The official ALGOL versions are named after the year they were first published. ALGOL 68 is substantially different from ALGOL 60 and was criticised partially for being so, so that in general "ALGOL" refers to dialects of ALGOL 60.
Standardization
ALGOL 60 and COBOL were the first languages to seek standardization.
ISO 1538:1984 Programming languages – ALGOL 60 (stabilized)
ISO/TR 1672:1977 Hardware representation of ALGOL basic symbols ... (now withdrawn)
History
ALGOL 60 was used mostly by research computer scientists in the United States and in Europe. Its use in commercial applications was hindered by the absence of standard input/output facilities in its description and the lack of interest in the language by large computer vendors. ALGOL 60 did however become the standard for the publication of algorithms and had a profound effect on future language development.
John Backus developed the Backus normal form method of describing programming languages specifically for ALGOL 58. It was revised and expanded by Peter Naur for ALGOL 60, and at Donald Knuth's suggestion renamed Backus–Naur form.
Peter Naur: "As editor of the ALGOL Bulletin I was drawn into the international discussions of the language and was selected to be member of the European language design group in November 1959. In this capacity I was the editor of the ALGOL 60 report, produced as the result of the ALGOL 60 meeting in Paris in January 1960."
The following people attended the meeting in Paris (from January 11 to 16):
Friedrich L. Bauer, Peter Naur, Heinz Rutishauser, Klaus Samelson, Bernard Vauquois, Adriaan van Wijngaarden, and Michael Woodger (from Europe)
John W. Backus, Julien Green, Charles Katz, John McCarthy, Alan J. Perlis, and Joseph Henry Wegstein (from the US).
Alan Perlis gave a vivid description of the meeting: "The meetings were exhausting, interminable, and exhilarating. One became aggravated when one's good ideas were discarded along with the bad ones of others. Nevertheless, diligence persisted during the entire period. The chemistry of the 13 was excellent."
The language originally did not include recursion. It was inserted into the specification at the last minute, against the wishes of some of the committee.
ALGOL 60 inspired many languages that followed it. Tony Hoare remarked: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors."
ALGOL 60 implementations timeline
To date there have been at least 70 augmentations, extensions, derivations and sublanguages of ALGOL 60.
The Burroughs dialects included special system programming dialects such as ESPOL and NEWP.
Properties
ALGOL 60 as officially defined had no I/O facilities; implementations defined their own in ways that were rarely compatible with each other. In contrast, ALGOL 68 offered an extensive library of transput (ALGOL 68 parlance for input/output) facilities.
ALGOL 60 provided two evaluation strategies for parameter passing: the common call-by-value, and call-by-name. The procedure declaration specified, for each formal parameter, which was to be used: value specified for call-by-value, and omitted for call-by-name. Call-by-name has certain effects in contrast to call-by-reference. For example, without specifying the parameters as value or reference, it is impossible to develop a procedure that will swap the values of two parameters if the actual parameters that are passed in are an integer variable and an array that is indexed by that same integer variable. Consider a call such as swap(i, A[i]): because the arguments are passed by name, every reference to a formal parameter inside the procedure re-evaluates the corresponding actual argument, so once the procedure has assigned a new value to i, the expression A[i] designates a different array element than it did at the call site, and the exchange goes wrong. A similar situation occurs with a random function passed as actual argument.
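The sketch below, written in the ALGOL 60 style used elsewhere in this article, shows such a swap procedure; since no value part is given, both parameters are passed by name (the procedure and variable names are illustrative):
procedure swap(a, b);
   integer a, b;
begin
   integer temp;
   temp := a;
   a := b;
   b := temp
end swap
With i = 1 and A[1] = 2, the call swap(i, A[i]) first sets temp to 1, then sets i to A[1], i.e. 2, and finally assigns temp to A[i], which by now is A[2]; the element A[1] is never updated, so no swap takes place.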
Call-by-name is known by many compiler designers for the interesting "thunks" that are used to implement it. Donald Knuth devised the "man or boy test" to separate compilers that correctly implemented "recursion and non-local references." This test contains an example of call-by-name.
ALGOL 60 Reserved words and restricted identifiers
There are 35 such reserved words in the standard Burroughs Large Systems sub-language:
There are 71 such restricted identifiers in the standard Burroughs Large Systems sub-language:
and also the names of all the intrinsic functions.
Standard operators
Examples and portability issues
Code sample comparisons
ALGOL 60
procedure Absmax(a) Size:(n, m) Result:(y) Subscripts:(i, k);
value n, m; array a; integer n, m, i, k; real y;
comment The absolute greatest element of the matrix a, of size n by m,
is copied to y, and the subscripts of this element to i and k;
begin
integer p, q;
y := 0; i := k := 1;
for p := 1 step 1 until n do
for q := 1 step 1 until m do
if abs(a[p, q]) > y then
begin y := abs(a[p, q]);
i := p; k := q
end
end Absmax
Implementations differ in how the basic symbols (written above as begin, end, integer, and so on) must be represented. For example, the word 'INTEGER', including the quotation marks, must be used in some implementations in place of integer, above, thereby designating it as a special keyword.
Following is an example of how to produce a table using Elliott 803 ALGOL:
FLOATING POINT ALGOL TEST'
BEGIN REAL A,B,C,D'
READ D'
FOR A:= 0.0 STEP D UNTIL 6.3 DO
BEGIN
PRINT PUNCH(3),££L??'
B := SIN(A)'
C := COS(A)'
PRINT PUNCH(3),SAMELINE,ALIGNED(1,6),A,B,C'
END'
END'
ALGOL 60 family
Since ALGOL 60 had no I/O facilities, there is no portable hello world program in ALGOL. The following program could (and still will) compile and run on an ALGOL implementation for a Unisys A-Series mainframe, and is a straightforward simplification of code taken from The Language Guide at the University of Michigan-Dearborn Computer and Information Science Department "Hello world! ALGOL Example Program" page.
BEGIN
FILE F(KIND=REMOTE);
EBCDIC ARRAY E[0:11];
REPLACE E BY "HELLO WORLD!";
WRITE(F, *, E);
END.
Where * etc. represented a format specification as used in FORTRAN. A simpler program using an inline format:
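The program below is a sketch in the Burroughs Extended ALGOL style of the code above, with the format given inline rather than by a separate specification (the exact form is assumed):
BEGIN
  FILE F(KIND=REMOTE);
  WRITE(F, <"HELLO WORLD!">);
END.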
An even simpler program using the Display statement:
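A sketch, again following Burroughs conventions:
BEGIN DISPLAY("HELLO WORLD!") END.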
An alternative approach used Elliott Algol I/O. Elliott Algol used different characters for "open-string-quote" and "close-string-quote", represented here by ‘ and ’.
Here is a version for the Elliott 803 Algol (A104). The standard Elliott 803 used 5-hole paper tape and thus only had upper case. The code lacked any quote characters, so £ (pound sign) was used for open quote and ? (question mark) for close quote. Special sequences were placed in double quotes (e.g., £L?? produced a new line on the teleprinter).
HIFOLKS'
BEGIN
PRINT £HELLO WORLD£L??'
END'
The ICT 1900 series Algol I/O version allowed input from paper tape or punched card. Paper tape 'full' mode allowed lower case. Output was to a line printer. Note use of '(', ')', and %.
'PROGRAM' (HELLO)
'BEGIN'
'COMMENT' OPEN QUOTE IS '(', CLOSE IS ')', PRINTABLE SPACE HAS TO
BE WRITTEN AS % BECAUSE SPACES ARE IGNORED;
WRITE TEXT('('HELLO%WORLD')');
'END'
'FINISH'
See also
Further reading
Dijkstra, Edsger W. (1961). "ALGOL 60 Translation: An ALGOL 60 Translator for the X1 and Making a Translator for ALGOL 60" (PDF) (Technical report). Amsterdam: Mathematisch Centrum. 35.
Randell, Brian; Russell, Lawford John (1964). ALGOL 60 Implementation: The Translation and Use of ALGOL 60 Programs on a Computer. Academic Press. OCLC 526731. The design of the Whetstone Compiler. One of the early published descriptions of implementing a compiler. See the related papers: Whetstone Algol Revisited, and The Whetstone KDF9 ALGOL Translator by Brian Randell
Revised Report on the Algorithmic Language ALGOL 60 by Peter Naur, et al. ALGOL definition
A BNF syntax summary of ALGOL 60
"The Emperor's Old Clothes" – Hoare's 1980 ACM Turing Award speech, which discusses ALGOL history and his involvement
MARST, a free ALGOL-to-C translator
An Implementation of ALGOL 60 for the FP6000 Archived 2020-07-25 at the Wayback Machine Discussion of some implementation issues.
Naur, Peter (August 1978). "The European Side of the Last Phase of the Development of ALGOL 60". ACM SIGPLAN Notices. 13 (8): 15–44. doi:10.1145/960118.808370. S2CID 15552479.
Edinburgh University wrote compilers for Algol60 (later updated for Algol60M) based on their Atlas Autocode compilers initially bootstrapped from the Atlas to the KDF-9. The Edinburgh compilers generated code for the ICL1900, the ICL4/75 (an IBM360 clone), and the ICL2900. Here is the BNF for Algol60 Archived 2020-05-15 at the Wayback Machine and the ICL2900 compiler source Archived 2020-05-15 at the Wayback Machine, library documentation Archived 2020-05-15 at the Wayback Machine, and a considerable test suite Archived 2020-05-15 at the Wayback Machine including Brian Wichmann's tests. Archived 2020-05-15 at the Wayback Machine Also there is a rather superficial Algol60 to Atlas Autocode source-level translator Archived 2020-05-15 at the Wayback Machine.
Eric S. Raymond's Retrocomputing Museum, among others a link to the NASE ALGOL 60 interpreter written in C.
The NASE interpreter
Stories of the B5000 and People Who Were There: a dedicated ALGOL computer [1], [2]
Bottenbruch, Hermann (1961). Structure and Use of ALGOL 60 (Report). doi:10.2172/4020495. OSTI 4020495.
NUMAL A Library of Numerical Procedures in ALGOL 60 developed at The Stichting Centrum Wiskunde & Informatica (legal successor of Stichting Mathematisch Centrum) legal owner.
ALGOL 60 resources: translators, documentation, programs
ALGOL 60 included in Racket |
ALGOL 68 (short for Algorithmic Language 1968) is an imperative programming language that was conceived as a successor to the ALGOL 60 programming language, designed with the goal of a much wider scope of application and more rigorously defined syntax and semantics.
The complexity of the language's definition, which runs to several hundred pages filled with non-standard terminology, made compiler implementation difficult and it was said it had "no implementations and no users". This was only partly true; ALGOL 68 did find use in several niche markets, notably in the United Kingdom where it was popular on International Computers Limited (ICL) machines, and in teaching roles. Outside these fields, use was relatively limited.
Nevertheless, the contributions of ALGOL 68 to the field of computer science have been deep, wide-ranging and enduring, although many of these contributions were only publicly identified when they had reappeared in subsequently developed programming languages. Many languages were developed specifically as a response to the perceived complexity of the language, the most notable being Pascal, or were reimplementations for specific roles, like Ada.
Many languages of the 1970s trace their design specifically to ALGOL 68, selecting some features while abandoning others that were considered too complex or out-of-scope for given roles. Among these is the language C, which was directly influenced by ALGOL 68, especially by its strong typing and structures. Most modern languages trace at least some of their syntax to either C or Pascal, and thus directly or indirectly to ALGOL 68.
Overview
ALGOL 68 features include expression-based syntax, user-declared types and structures/tagged-unions, a reference model of variables and reference parameters, string, array and matrix slicing, and concurrency.
ALGOL 68 was designed by the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi. On December 20, 1968, the language was formally adopted by the group, and then approved for publication by the General Assembly of IFIP.
ALGOL 68 was defined using a formalism, a two-level formal grammar, invented by Adriaan van Wijngaarden. Van Wijngaarden grammars use a context-free grammar to generate an infinite set of productions that will recognize a particular ALGOL 68 program; notably, they are able to express the kind of requirements that in many other programming language technical standards are labelled semantics, and must be expressed in ambiguity-prone natural language prose, and then implemented in compilers as ad hoc code attached to the formal language parser.
The main aims and principles of design of ALGOL 68:
Completeness and clarity of description
Orthogonality of design
Security
Efficiency:
Static mode checking
Mode-independent parsing
Independent compiling
Loop optimizing
Representations – in minimal & larger character sets
ALGOL 68 has been criticized, most prominently by some members of its design committee such as C. A. R. Hoare and Edsger Dijkstra, for abandoning the simplicity of ALGOL 60, becoming a vehicle for complex or overly general ideas, and doing little to make the compiler writer's task easier, in contrast to deliberately simple contemporaries (and competitors) such as C, S-algol and Pascal.
In 1970, ALGOL 68-R became the first working compiler for ALGOL 68.
In the 1973 revision, certain features — such as proceduring, gommas and formal bounds — were omitted. C.f. The language of the unrevised report.r0
Though European defence agencies (in Britain Royal Signals and Radar Establishment (RSRE)) promoted the use of ALGOL 68 for its expected security advantages, the American side of the NATO alliance decided to develop a different project, the language Ada, making its use obligatory for US defense contracts.
ALGOL 68 also had a notable influence in the Soviet Union, details of which can be found in the 2014 papers "ALGOL 68 and Its Impact on the USSR and Russian Programming" and "Алгол 68 и его влияние на программирование в СССР и России" ("ALGOL 68 and its influence on programming in the USSR and Russia").
Steve Bourne, who was on the ALGOL 68 revision committee, took some of its ideas to his Bourne shell (and thereby, to descendant Unix shells such as Bash) and to C (and thereby to descendants such as C++).
The complete history of the project can be found in C. H. Lindsey's A History of ALGOL 68.
For a full-length treatment of the language, see "Programming ALGOL 68 Made Easy" by Dr. Sian Mountbatten, or "Learning ALGOL 68 Genie" by Marcel van der Veer, which includes the Revised Report.
History
Origins
ALGOL 68, as the name implies, is a follow-on to the ALGOL language that was first formalized in 1960. That same year the International Federation for Information Processing (IFIP) formed and started the Working Group on ALGOL, or WG2.1. This group released an updated ALGOL 60 specification in Rome in April 1962. At a follow-up meeting in March 1964, it was agreed that the group should begin work on two follow-on standards, ALGOL X which would be a redefinition of the language with some additions, and an ALGOL Y, which would have the ability to modify its own programs in the style of the language LISP.
Definition process
The first meeting of the ALGOL X group was held at Princeton University in May 1965. A report of the meeting noted two broadly supported themes, the introduction of strong typing and interest in Euler's concepts of 'trees' or 'lists' for handling collections.
At the second meeting in October in France, three formal proposals were presented, Niklaus Wirth's ALGOL W along with comments about record structures by C.A.R. (Tony) Hoare, a similar language by Gerhard Seegmüller, and a paper by Adriaan van Wijngaarden on "Orthogonal design and description of a formal language". The latter, written in almost indecipherable "W-Grammar", proved to be a decisive shift in the evolution of the language. The meeting closed with an agreement that van Wijngaarden would re-write the Wirth/Hoare submission using his W-Grammar.
This seemingly simple task ultimately proved more difficult than expected, and the follow-up meeting had to be delayed six months. When it met in April 1966 in Kootwijk, van Wijngaarden's draft remained incomplete and Wirth and Hoare presented a version using more traditional descriptions. It was generally agreed that their paper was "the right language in the wrong formalism". As these approaches were explored, it became clear there was a difference in the way parameters were described that would have real-world effects, and while Wirth and Hoare protested that further delays might become endless, the committee decided to wait for van Wijngaarden's version. Wirth then implemented their current definition as ALGOL W.
At the next meeting in Warsaw in October 1966, there was an initial report from the I/O Subcommittee who had met at the Oak Ridge National Laboratory and the University of Illinois but had not yet made much progress. The two proposals from the previous meeting were again explored, and this time a new debate emerged about the use of pointers; ALGOL W used them only to refer to records, while van Wijngaarden's version could point to any object. To add confusion, John McCarthy presented a new proposal for operator overloading and the ability to string together and or constructs, and Klaus Samelson wanted to allow anonymous functions. In the resulting confusion, there was some discussion of abandoning the entire effort. The confusion continued through what was supposed to be the ALGOL Y meeting in Zandvoort in May 1967.
Publication
A draft report was finally published in February 1968. This was met by "shock, horror and dissent", mostly due to the hundreds of pages of unreadable grammar and odd terminology. Charles H. Lindsey attempted to figure out what "language was hidden inside of it", a process that took six man-weeks of effort. The resulting paper, "ALGOL 68 with fewer tears", was widely circulated. At a wider information processing meeting in Zurich in May 1968, attendees complained that the language was being forced upon them and that IFIP was "the true villain of this unreasonable situation" as the meetings were mostly closed and there was no formal feedback mechanism. Wirth and Peter Naur formally resigned their authorship positions in WG2.1 at that time. The next WG2.1 meeting took place in Tirrenia in June 1968. It was supposed to discuss the release of compilers and other issues, but instead devolved into a discussion on the language. Van Wijngaarden responded by saying (or threatening) that he would release only one more version of the report. By this point Naur, Hoare, and Wirth had left the effort, and several more were threatening to do so. Several more meetings followed: North Berwick in August 1968, and Munich in December, which produced the release of the official Report in January 1969 but also resulted in a contentious Minority Report being written. Finally, at Banff, Alberta in September 1969, the project was generally considered complete and the discussion was primarily on errata and a greatly expanded Introduction to the Report. The effort took five years, burned out many of the greatest names in computer science, and on several occasions became deadlocked over issues both in the definition and the group as a whole. Hoare released a "Critique of ALGOL 68" almost immediately, which has been widely referenced in many works. Wirth went on to further develop the ALGOL W concept and released this as Pascal in 1970.
Implementations
ALGOL 68-R
The first implementation of the standard, based on the late-1968 draft Report, was introduced by the Royal Radar Establishment in the UK as ALGOL 68-R in July 1970. This was, however, a subset of the full language, and Barry Mailloux, the final editor of the Report, joked that "It is a question of morality. We have a Bible and you are sinning!" This version nevertheless became very popular on the ICL machines, and became a widely-used language in military coding, especially in the UK. Among the changes in 68-R was the requirement for all variables to be declared before their first use. This had the significant advantage of allowing the compiler to be one-pass, as space for the variables in the activation record was set aside before it was used. However, this change also had the side-effect of requiring PROCs to be declared twice, once as a declaration of the types, and then again as the body of code. Another change was to eliminate the assumed VOID mode (an expression that returns no value, called a statement in other languages), requiring the word VOID to be added where it would have been assumed. Further, 68-R eliminated the explicit parallel processing commands based on PAR.
Others
The first full implementation of the language was introduced in 1974 by CDC Netherlands for the Control Data mainframe series. This saw limited use, mostly for teaching in Germany and the Netherlands. A version similar to 68-R was introduced by Carnegie Mellon University in 1976 as 68S, and was again a one-pass compiler based on various simplifications of the original and intended for use on smaller machines like the DEC PDP-11. It too was used mostly for teaching purposes. A version for IBM mainframes did not become available until 1978, when one was released by Cambridge University. This was "nearly complete". Lindsey released a version for small machines including the IBM PC in 1984. Three open source ALGOL 68 implementations are known:
a68g, GPLv3, written by Marcel van der Veer.
algol68toc, an open-source software port of ALGOL 68RS.
experimental Algol68 frontend for GCC, written by Jose E. Marchesi.
Timeline
"A Shorter History of Algol 68"
ALGOL 68 – 3rd generation ALGOL
The Algorithmic Language ALGOL 68 Reports and Working Group members
March 1968: Draft Report on the Algorithmic Language ALGOL 68 – Edited by: Adriaan van Wijngaarden, Barry J. Mailloux, John Peck and Cornelis H. A. Koster."Van Wijngaarden once characterized the four authors, somewhat tongue-in-cheek, as: Koster: transputter, Peck: syntaxer, Mailloux: implementer, Van Wijngaarden: party ideologist." – Koster.
October 1968: Penultimate Draft Report on the Algorithmic Language ALGOL 68 — Chapters 1-9 Chapters 10-12 — Edited by: A. van Wijngaarden, B.J. Mailloux, J. E. L. Peck and C. H. A. Koster.
December 1968: Report on the Algorithmic Language ALGOL 68 — Offprint from Numerische Mathematik, 14, 79-218 (1969); Springer-Verlag. — Edited by: A. van Wijngaarden, B. J. Mailloux, J. E. L. Peck and C. H. A. Koster.
March 1970: Minority report, ALGOL Bulletin AB31.1.1 — signed by Edsger Dijkstra, Fraser Duncan, Jan Garwick, Tony Hoare, Brian Randell, Gerhard Seegmüller, Wlad Turski, and Mike Woodger.
September 1973: Revised Report on the Algorithmic Language Algol 68 — Springer-Verlag 1976 — Edited by: A. van Wijngaarden, B. Mailloux, J. Peck, K. Koster, Michel Sintzoff, Charles H. Lindsey, Lambert Meertens and Richard G. Fisker.
other WG 2.1 members active in ALGOL 68 design: Friedrich L. Bauer • Hans Bekic • Gerhard Goos • Peter Zilahy Ingerman • Peter Landin • John McCarthy • Jack Merner • Peter Naur • Manfred Paul • Willem van der Poel • Doug Ross • Klaus Samelson • Niklaus Wirth • Nobuo Yoneda.
Timeline of standardization
1968: On 20 December 1968, the "Final Report" (MR 101) was adopted by the Working Group, then subsequently approved by the General Assembly of UNESCO's IFIP for publication. Translations of the standard were made for Russian, German, French and Bulgarian, and then later Japanese and Chinese. The standard was also made available in Braille.
1984: TC 97 considered ALGOL 68 for standardisation as "New Work Item" TC97/N1642 [2][3]. West Germany, Belgium, the Netherlands, the USSR and Czechoslovakia were willing to participate in preparing the standard, but the USSR and Czechoslovakia "were not the right kinds of member of the right ISO committees"[4] and ALGOL 68's ISO standardisation stalled.[5]
1988: Subsequently ALGOL 68 became one of the GOST standards in Russia.
GOST 27974-88 Programming language ALGOL 68 — Язык программирования АЛГОЛ 68
GOST 27975-88 Programming language ALGOL 68 extended — Язык программирования АЛГОЛ 68 расширенный
Notable language elements
Bold symbols and reserved words
The standard language contains about sixty reserved words, typically bolded in print, and some with "brief symbol" equivalents:
MODE, OP, PRIO, PROC,
FLEX, HEAP, LOC, LONG, REF, SHORT,
BITS, BOOL, BYTES, CHAR, COMPL, INT, REAL, SEMA, STRING, VOID,
CHANNEL, FILE, FORMAT, STRUCT, UNION,
AT "@", EITHERr0, IS ":=:", ISNT IS NOTr0 ":/=:" ":≠:", OF "→"r0, TRUE, FALSE, EMPTY, NIL "○", SKIP "~",
CO "¢", COMMENT "¢", PR, PRAGMAT,
CASE ~ IN ~ OUSE ~ IN ~ OUT ~ ESAC "( ~ | ~ |: ~ | ~ | ~ )",
FOR ~ FROM ~ TO ~ BY ~ WHILE ~ DO ~ OD,
IF ~ THEN ~ ELIF ~ THEN ~ ELSE ~ FI "( ~ | ~ |: ~ | ~ | ~ )",
PAR BEGIN ~ END "( ~ )", GO TO, GOTO, EXIT "□"(r0).
Units: Expressions
The basic language construct is the unit. A unit may be a formula, an enclosed clause, a routine text or one of several technically needed constructs (assignation, jump, skip, nihil). The technical term enclosed clause unifies some of the inherently bracketing constructs known as block, do statement, switch statement in other contemporary languages. When keywords are used, generally the reversed character sequence of the introducing keyword is used for terminating the enclosure, e.g. ( IF ~ THEN ~ ELSE ~ FI, CASE ~ IN ~ OUT ~ ESAC, FOR ~ WHILE ~ DO ~ OD ). This Guarded Command syntax was reused by Stephen Bourne in the common Unix Bourne shell. An expression may also yield a multiple value, which is constructed from other values by a collateral clause. This construct just looks like the parameter pack of a procedure call.
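For instance, the following sketch (the identifiers are illustrative only) shows a conditional enclosed clause in both bold and brief form, and a collateral clause yielding a multiple value:
INT x = IF 2 < 3 THEN 1 ELSE 0 FI;   ¢ an enclosed clause used as a unit ¢
INT y = ( 2 < 3 | 1 | 0 );           ¢ the same conditional in brief form ¢
[] INT primes = (2, 3, 5, 7)         ¢ a collateral clause yielding a multiple value ¢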
mode: Declarations
The basic data types (called modes in Algol 68 parlance) are real, int, compl (complex number), bool, char, bits and bytes. For example:
INT n = 2;
CO n is fixed as a constant of 2. CO
INT m := 3;
CO m is a newly created local variable whose value is initially set to 3. CO
CO This is short for ref int m = loc int := 3; CO
REAL avogadro = 6.0221415⏨23; CO Avogadro number CO
LONG LONG REAL long long pi = 3.14159 26535 89793 23846 26433 83279 50288 41971 69399 37510;
COMPL square root of minus one = 0 ⊥ 1;
However, the declaration REAL x; is just syntactic sugar for REF REAL x = LOC REAL;. That is, x is really the constant identifier for a reference to a newly generated local REAL variable.
Furthermore, instead of defining both float and double, or int and long and short, etc., ALGOL 68 provides modifiers, so that the presently common double would be written as LONG REAL or LONG LONG REAL instead, for example. The prelude constants max real and min long int are provided to adapt programs to different implementations.
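For instance (a small sketch, not taken from the Report), a LONG value can be declared using LONG denotations and printed:
LONG REAL third = LONG 1 / LONG 3;   ¢ LONG denotations and LONG arithmetic ¢
print((third, new line))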
All variables need to be declared, but declaration does not have to precede the first use.
primitive-declarer: INT, REAL, COMPL, COMPLEX(G), BOOL, CHAR, STRING, BITS, BYTES, FORMAT, FILE, PIPE(G), CHANNEL, SEMA
BITS – a "packed vector" of BOOL.
BYTES – a "packed vector" of CHAR.
STRING – a FLEXible array of CHAR.
SEMA – a SEMAphore which can be initialised with the OPerator LEVEL.
Complex types can be created from simpler ones using various type constructors:
REF mode – a reference to a value of type mode, similar to & in C/C++ and REF in Pascal
STRUCT – used to build structures, like STRUCT in C/C++ and RECORD in Pascal
UNION – used to build unions, like in C/C++ and Pascal
PROC – used to specify procedures, like functions in C/C++ and procedures/functions in Pascal.
For some examples, see Comparison of ALGOL 68 and C++.
Other declaration symbols include: FLEX, HEAP, LOC, REF, LONG, SHORT, EVENT(S)
FLEX – declare the array to be flexible, i.e. it can grow in length on demand.
HEAP – allocate variable some free space from the global heap.
LOC – allocate variable some free space of the local stack.
LONG – declare an INT, REAL or COMPL to be of a LONGer size.
SHORT – declare an INT, REAL or COMPL to be of a SHORTer size.
A name for a mode (type) can be declared using a MODE declaration,
which is similar to TYPEDEF in C/C++ and TYPE in Pascal:
INT max=99;
MODE newmode = [0:9][0:max]STRUCT (
LONG REAL a, b, c, SHORT INT i, j, k, REF REAL r
);
This is similar to the following C code:
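One possible rendering is sketched below; it assumes LONG REAL maps to C's double, SHORT INT to short, and REF REAL to a pointer, with an enum constant standing in for the ALGOL 68 declaration of max:
enum { max = 99 };               /* stands in for INT max = 99 above */
typedef struct {
    double a, b, c;              /* LONG REAL fields, assuming a double mapping */
    short  i, j, k;              /* SHORT INT fields */
    double *r;                   /* REF REAL as a pointer */
} newmode[9 + 1][max + 1];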
For ALGOL 68, only the NEWMODE mode-indication appears to the left of the equals symbol, and most notably the construction is made, and can be read, from left to right without regard to priorities. Also, the lower bound of Algol 68 arrays is one by default, but can be any integer from -max int to max int.
Mode declarations allow types to be recursive: defined directly or indirectly in terms of themselves.
This is subject to some restrictions – for instance, these declarations are illegal:
MODE A = REF A
MODE A = STRUCT (A a, B b)
MODE A = PROC (A a) A
while these are valid:
MODE A = STRUCT (REF A a, B b)
MODE A = PROC (REF A a) REF A
Coercions: casting
The coercions produce a coercee from a coercend according to three criteria: the a priori mode of the coercend before the application of any coercion, the a posteriori mode of the coercee required after those coercions, and the syntactic position or "sort" of the coercee. Coercions may be cascaded.
The six possible coercions are termed deproceduring, dereferencing, uniting, widening, rowing, and voiding. Each coercion, except for uniting, prescribes a corresponding dynamic effect on the associated values. Hence, many primitive actions can be programmed implicitly by coercions.
Context strength – allowed coercions:
soft – deproceduring
weak – dereferencing or deproceduring, yielding a name
meek – dereferencing or deproceduring
firm – meek, followed by uniting
strong – firm, followed by widening, rowing or voiding
Coercion hierarchy with examples
ALGOL 68 has a hierarchy of contexts which determine the kind of coercions available at a particular point in the program. These contexts are:
For more details about Primaries, Secondaries, Tertiary & Quaternaries refer to Operator precedence.
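As a small illustrative sketch (the identifiers are not from the Report), the following declarations rely on several of these coercions:
PROC three = INT: 3;   ¢ a parameterless procedure yielding INT ¢
INT i := 7;
REAL x := i;           ¢ strong position: i is dereferenced, then the INT is widened to REAL ¢
INT j = three;         ¢ strong position: three is deprocedured, i.e. called ¢
[] INT row = 9         ¢ strong position: the INT 9 is rowed to a one-element row ¢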
pr & co: Pragmats and Comments
Pragmats are directives in the program, typically hints to the compiler; in newer languages these are called "pragmas" (no 't'). e.g.
PRAGMAT heap=32 PRAGMAT
PR heap=32 PR
Comments can be inserted in a variety of ways:
¢ The original way of adding your 2 cents worth to a program ¢
COMMENT "bold" comment COMMENT
CO Style i comment CO
Style ii comment #
£ This is a hash/pound comment for a UK keyboard £
Normally, comments cannot be nested in ALGOL 68. This restriction can be circumvented by using different comment delimiters (e.g. use hash only for temporary code deletions).
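For instance, a # comment can temporarily disable a fragment whose own comments use the ¢ delimiter (a small sketch):
# temporarily deleted:
  REAL x := 1;   ¢ an ordinary comment inside the disabled code ¢
  print((x))
#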
Expressions and compound statements
ALGOL 68 being an expression-oriented programming language, the value returned by an assignment statement is a reference to the destination. Thus, the following is valid ALGOL 68 code:
REAL half pi, one pi; one pi := 2 * ( half pi := 2 * arc tan(1) )
This notion is present in C and Perl, among others. Note that as in earlier languages such as Algol 60 and FORTRAN, spaces are allowed in identifiers, so that half pi is a single identifier (thus avoiding the underscores versus camel case versus all lower-case issues).
As another example, to express the mathematical idea of a sum of f(i) from i=1 to n, the following ALGOL 68 integer expression suffices:
(INT sum := 0; FOR i TO n DO sum +:= f(i) OD; sum)
Note that, being an integer expression, the former block of code can be used in any context where an integer value can be used. A block of code returns the value of the last expression it evaluated; this idea is present in Lisp, among other languages.
Compound statements are all terminated by distinctive closing brackets:
IF choice clauses: IF condition THEN statements [ ELSE statements ] FI
"brief" form: ( condition | statements | statements )
IF condition1 THEN statements ELIF condition2 THEN statements [ ELSE statements ] FI
"brief" form: ( condition1 | statements |: condition2 | statements | statements )
This scheme not only avoids the dangling else problem but also avoids having to use BEGIN and END in embedded statement sequences.
CASE choice clauses: CASE switch IN statements, statements,... [ OUT statements ] ESAC
"brief" form: ( switch | statements,statements,... | statements )
CASE switch1 IN statements, statements,... OUSE switch2 IN statements, statements,... [ OUT statements ] ESAC
"brief" form of CASE statement: ( switch1 | statements,statements,... |: switch2 | statements,statements,... | statements )
Choice clause example with Brief symbols:
PROC days in month = (INT year, month)INT:
(month|
31,
(year÷×4=0 ∧ year÷×100≠0 ∨ year÷×400=0 | 29 | 28 ),
31, 30, 31, 30, 31, 31, 30, 31, 30, 31
);
Choice clause example with Bold symbols:
PROC days in month = (INT year, month)INT:
CASE month IN
31,
IF year MOD 4 EQ 0 AND year MOD 100 NE 0 OR year MOD 400 EQ 0 THEN 29 ELSE 28 FI,
31, 30, 31, 30, 31, 31, 30, 31, 30, 31
ESAC;
Choice clause example mixing Bold and Brief symbols:
PROC days in month = (INT year, month)INT:
CASE month IN
¢Jan¢ 31,
¢Feb¢ ( year MOD 4 = 0 AND year MOD 100 ≠ 0 OR year MOD 400 = 0 | 29 | 28 ),
¢Mar¢ 31, 30, 31, 30, 31, 31, 30, 31, 30, 31 ¢ to Dec. ¢
ESAC;
ALGOL 68 allowed the switch to be of either type INT or (uniquely) UNION. The latter allows enforcing strong typing on UNION variables; cf. the union example below.
do loop clause: [ FOR index ] [ FROM first ] [ BY increment ] [ TO last ] [ WHILE condition ] DO statements OD
The minimum form of a "loop clause" is thus: DO statements OD
This was considered the "universal" loop; the full syntax is:
FOR i FROM 1 BY -22 TO -333 WHILE i×i≠4444 DO ~ OD
The construct has several unusual aspects:
only the DO ~ OD portion was compulsory, in which case the loop iterates indefinitely.
thus the clause TO 100 DO ~ OD will iterate only 100 times.
the WHILE "syntactic element" allowed a programmer to break from a FOR loop early. e.g.INT sum sq:=0;
FOR i
WHILE
print(("So far:",i,newline));
sum sq≠70↑2
DO
sum sq+:=i↑2
OD
Subsequent "extensions" to the standard Algol68 allowed the TO syntactic element to be replaced with UPTO and DOWNTO to achieve a small optimisation. The same compilers also incorporated:
UNTIL(C) – for late loop termination.
FOREACH(S) – for working on arrays in parallel.
Further examples can be found in the code examples below.
struct, union & [:]: Structures, unions and arrays
ALGOL 68 supports arrays with any number of dimensions, and it allows for the slicing of whole or partial rows or columns.
MODE VECTOR = [1:3] REAL; # vector MODE declaration (typedef) #
MODE MATRIX = [1:3,1:3]REAL; # matrix MODE declaration (typedef) #
VECTOR v1 := (1,2,3); # array variable initially (1,2,3) #
[]REAL v2 = (4,5,6); # constant array, type equivalent to VECTOR, bounds are implied #
OP + = (VECTOR a,b) VECTOR: # binary OPerator definition #
(VECTOR out; FOR i FROM ⌊a TO ⌈a DO out[i] := a[i]+b[i] OD; out);
MATRIX m := (v1, v2, v1+v2);
print ((m[,2:])); # a slice of the 2nd and 3rd columns #
Matrices can be sliced either way, e.g.:
REF VECTOR row = m[2,]; # define a REF (pointer) to the 2nd row #
REF VECTOR col = m[,2]; # define a REF (pointer) to the 2nd column #
ALGOL 68 supports multiple field structures (STRUCT) and united modes. Reference variables may point to any MODE including array slices and structure fields.
For an example of all this, here is the traditional linked list declaration:
MODE NODE = UNION (VOID, REAL, INT, COMPL, STRING),
LIST = STRUCT (NODE val, REF LIST next);
Usage example for UNION CASE of NODE:
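A minimal sketch of such a conformity clause follows, assuming the NODE declaration above (the identifiers are illustrative only); each alternative may name the united value it selects:
NODE n := 3.14;   ¢ a REAL value united into the NODE union ¢
CASE n IN
  (INT i):    print(("int: ",    i, new line)),
  (REAL r):   print(("real: ",   r, new line)),
  (COMPL z):  print(("compl: ",  z, new line)),
  (STRING s): print(("string: ", s, new line))
  OUT print(("empty or something else", new line))
ESAC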
proc: Procedures
Procedure (PROC) declarations require type specifications for both the parameters and the result (VOID if none):
PROC max of real = (REAL a, b) REAL:
IF a > b THEN a ELSE b FI;
or, using the "brief" form of the conditional statement:
PROC max of real = (REAL a, b) REAL: (a>b | a | b);
The return value of a PROC is the value of the last expression evaluated in the procedure.
PROC apply = (REF [] REAL a, PROC (REAL) REAL f) VOID:
FOR i FROM LWB a TO UPB a DO a[i] := f(a[i]) OD
This simplicity of code was unachievable in ALGOL 68's predecessor ALGOL 60.
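A hypothetical use of apply, squaring each element of a row in place (the identifier data is illustrative only):
[1:3] REAL data := (1.0, 2.0, 3.0);  ¢ an illustrative row variable ¢
apply(data, (REAL x) REAL: x * x);   ¢ data becomes (1.0, 4.0, 9.0) ¢
print((data, new line))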
op: Operators
The programmer may define new operators; both these and the pre-defined ones may be overloaded, and their priorities may be changed by the coder. The following example defines operator MAX with both dyadic and monadic versions (scanning across the elements of an array).
PRIO MAX = 9;
OP MAX = (INT a,b) INT: ( a>b | a | b );
OP MAX = (REAL a,b) REAL: ( a>b | a | b );
OP MAX = (COMPL a,b) COMPL: ( ABS a > ABS b | a | b );
OP MAX = ([]REAL a) REAL:
(REAL out := a[LWB a];
FOR i FROM LWB a + 1 TO UPB a DO ( a[i]>out | out:=a[i] ) OD;
out)
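A hypothetical use of these definitions (the identifier a is illustrative only):
[]REAL a = (1.2, 5.6, 3.4);
print((3 MAX 7, MAX a, new line))   ¢ the dyadic and monadic MAX yield 7 and 5.6 ¢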
Array, Procedure, Dereference and coercion operations
These are technically not operators; rather, they are considered "units associated with names".
Monadic operators
Dyadic operators with associated priorities
Specific details:
Tertiaries include names NIL and ○.
LWS: In ALGOL 68(r0) the operators LWS and ⎩ ... both return TRUE if the lower state of the dimension of an array is fixed.
The UPS and ⎧ operators are similar on the upper state.
The LWB and UPB operators are automatically available on UNIONs of different orders (and MODEs) of arrays, e.g. UPB of UNION([]INT, [,]REAL, FLEX[,,,]CHAR).
Assignation and identity relations etc
These are technically not operators; rather, they are considered "units associated with names".
Note: Quaternaries include names SKIP and ~.
":=:" (alternatively "IS") tests if two pointers are equal; ":/=:" (alternatively "ISNT") tests if they are unequal.
Why :=: and :/=: are needed: Consider trying to compare two pointer values, such as the following variables, declared as pointers-to-integer:
REF INT ip, jp
Now consider how to decide whether these two are pointing to the same location, or whether one of them is pointing to NIL. The following expression
ip = jp
will dereference both pointers down to values of type INT, and compare those, since the "=" operator is defined for INT, but not REF INT. It is not legal to define "=" for operands of type REF INT and INT at the same time, because then calls become ambiguous, due to the implicit coercions that can be applied: should the operands be left as REF INT and that version of the operator called? Or should they be dereferenced further to INT and that version used instead? Therefore the following expression can never be made legal:
ip = NIL
Hence the need for separate constructs not subject to the normal coercion rules for operands to operators. But there is a gotcha. The following expressions:
ip :=: jp
ip :=: NIL
While legal, these will probably not do what might be expected. They will always return FALSE, because they are comparing the actual addresses of the variables ip and jp, rather than what they point to. To achieve the right effect, one would have to write
ip :=: REF INT(jp)
ip :=: REF INT(NIL)
Special characters
Most of Algol's "special" characters (⊂, ≡, ␣, ×, ÷, ≤, ≥, ≠, ¬, ⊃, ≡, ∨, ∧, →, ↓, ↑, ⌊, ⌈, ⎩, ⎧, ⊥, ⏨, ¢, ○ and □) can be found on the IBM 2741 keyboard with the APL "golf-ball" print head inserted; these became available in the mid-1960s while ALGOL 68 was being drafted. These characters are also part of the Unicode standard and most of them are available in several popular fonts.
transput: Input and output
Transput is the term used to refer to ALGOL 68's input and output facilities. It includes pre-defined procedures for unformatted, formatted and binary transput. Files and other transput devices are handled in a consistent and machine-independent manner. The following example prints out some unformatted output to the standard output device:
print ((newpage, "Title", newline, "Value of i is ",
i, "and x[i] is ", x[i], newline))
Note the predefined procedures newpage and newline passed as arguments.
Books, channels and files
The TRANSPUT is considered to be of BOOKS, CHANNELS and FILES:
Books are made up of pages, lines and characters, and may be backed up by files.
A specific book can be located by name with a call to match.
CHANNELs correspond to physical devices. e.g. card punches and printers.
Three standard channels are distinguished: stand in channel, stand out channel, stand back channel.
A FILE is a means of communicating between a program and a book that has been opened via some channel.
The MOOD of a file may be read, write, char, bin, and opened.
transput procedures include: establish, create, open, associate, lock, close, scratch.
position enquires: char number, line number, page number.
layout routines include:
space, backspace, newline, newpage.
get good line, get good page, get good book, and PROC set = (REF FILE f, INT page, line, char) VOID.
A file has event routines. e.g. on logical file end, on physical file end, on page end, on line end, on format end, on value error, on char error.
formatted transput
"Formatted transput" in ALGOL 68's transput has its own syntax and patterns (functions), with FORMATs embedded between two $ characters.Examples:
printf (($2l"The sum is:"x, g(0)$, m + n)); ¢ prints the same as: ¢
print ((new line, new line, "The sum is:", space, whole (m + n, 0)))
par: Parallel processing
ALGOL 68 supports programming of parallel processing. Using the keyword PAR, a collateral clause is converted to a parallel clause, where the synchronisation of actions is controlled using semaphores. In A68G the parallel actions are mapped to threads when available on the hosting operating system. In A68S a different paradigm of parallel processing was implemented (see below).
PROC
eat = VOID: ( muffins-:=1; print(("Yum!",new line))),
speak = VOID: ( words-:=1; print(("Yak...",new line)));
INT muffins := 4, words := 8;
SEMA mouth = LEVEL 1;
PAR BEGIN
WHILE muffins > 0 DO
DOWN mouth;
eat;
UP mouth
OD,
WHILE words > 0 DO
DOWN mouth;
speak;
UP mouth
OD
END
Examples of use
Code sample
This sample program implements the Sieve of Eratosthenes to find all the prime numbers that are less than 100. NIL is the ALGOL 68 analogue of the null pointer in other languages. The notation x OF y accesses a member x of a STRUCT y.
BEGIN # Algol-68 prime number sieve, functional style #
PROC error = (STRING s) VOID:
(print(( newline, " error: ", s, newline)); GOTO stop);
PROC one to = (INT n) LIST:
(PROC f = (INT m,n) LIST: (m>n | NIL | cons(m, f(m+1,n))); f(1,n));
MODE LIST = REF NODE;
MODE NODE = STRUCT (INT h, LIST t);
PROC cons = (INT n, LIST l) LIST: HEAP NODE := (n,l);
PROC hd = (LIST l) INT: ( l IS NIL | error("hd NIL"); SKIP | h OF l );
PROC tl = (LIST l) LIST: ( l IS NIL | error("tl NIL"); SKIP | t OF l );
PROC show = (LIST l) VOID: ( l ISNT NIL | print((" ",whole(hd(l),0))); show(tl(l)));
PROC filter = (PROC (INT) BOOL p, LIST l) LIST:
IF l IS NIL THEN NIL
ELIF p(hd(l)) THEN cons(hd(l), filter(p,tl(l)))
ELSE filter(p, tl(l))
FI;
PROC sieve = (LIST l) LIST:
IF l IS NIL THEN NIL
ELSE
PROC not multiple = (INT n) BOOL: n MOD hd(l) ~= 0;
cons(hd(l), sieve( filter( not multiple, tl(l) )))
FI;
PROC primes = (INT n) LIST: sieve( tl( one to(n) ));
show( primes(100) )
END
Operating systems written in ALGOL 68
Cambridge CAP computer – All procedures constituting the operating system were written in ALGOL 68C, although several other closely associated protected procedures, such as a paginator, are written in BCPL.
Eldon 3 – Developed at Leeds University for the ICL 1900, it was written in ALGOL 68-R.
Flex machine – The hardware was custom and microprogrammable, with an operating system, (modular) compiler, editor, garbage collector and filing system all written in ALGOL 68RS. The command shell Curt was designed to access typed data similar to Algol-68 modes.
VME – S3 was the implementation language of the operating system VME. S3 was based on ALGOL 68 but with data types and operators aligned to those offered by the ICL 2900 Series. Note: The Soviet-era computers Эльбрус-1 (Elbrus-1) and Эльбрус-2 were programmed in the high-level language Эль-76 (AL-76) rather than in traditional assembly. Эль-76 resembles ALGOL 68; the main difference is that Эль-76 supports dynamic binding of types at the hardware level. Эль-76 is used for application programming, job control, and system programming.
Applications
Both ALGOL 68C and ALGOL 68-R are written in ALGOL 68, effectively making ALGOL 68 an application of itself. Other applications include:
ELLA – a hardware description language and support toolset. Developed by the Royal Signals and Radar Establishment during the 1980s and 1990s.
RAF Strike Command System – "... 400K of error-free ALGOL 68-RT code was produced with three man-years of work. ..."
Libraries and APIs
NAG Numerical Libraries – a software library of numerical analysis routines. Supplied in ALGOL 68 during the 1980s.
TORRIX – a programming system for operations on vectors and matrices over arbitrary fields and of variable size by S. G. van der Meulen and M. Veldhorst.
Program representation
A feature of ALGOL 68, inherited from the ALGOL tradition, is its different representations. There is a representation language used to describe algorithms in printed work, a strict language (rigorously defined in the Report), and an official reference language intended to be used in compiler input. The examples contain BOLD typeface words; this is the STRICT language. ALGOL 68's reserved words are effectively in a different namespace from identifiers, and spaces are allowed in identifiers, so this next fragment is legal:
INT a real int = 3 ;
The programmer who writes executable code does not always have an option of BOLD typeface or underlining in the code as this may depend on hardware and cultural issues. Different methods to denote these identifiers have been devised. This is called a stropping regime. For example, all or some of the following may be available programming representations:
INT a real int = 3; # the STRICT language #
'INT'A REAL INT = 3; # QUOTE stropping style #
.INT A REAL INT = 3; # POINT stropping style #
INT a real int = 3; # UPPER stropping style #
int a_real_int = 3; # RES stropping style, there are 61 accepted reserved words #
All implementations must recognize at least POINT, UPPER and RES inside PRAGMAT sections. Of these, POINT and UPPER stropping are quite common, while RES stropping is a contradiction to the specification (as there are no reserved words). QUOTE (single apostrophe quoting) was the original recommendation, while matched apostrophe quoting, common in ALGOL 60, is not used much in ALGOL 68. The following characters were recommended for portability, and termed "worthy characters" in the Report on the Standard Hardware Representation of Algol 68 Archived 2014-01-02 at the Wayback Machine:
Worthy Characters: ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 "#$%'()*+,-./:;<=>@[ ]_|
This reflected a problem in the 1960s where some hardware didn't support lower-case, nor some other non-ASCII characters; indeed, in the 1973 report it was written: "Four worthy characters — "|", "_", "[", and "]" — are often coded differently, even at installations which nominally use the same character set."
Base characters: "Worthy characters" are a subset of "base characters".
Example of different program representations
ALGOL 68 allows every natural language to define its own set of keywords. As a result, programmers are able to write programs using keywords from their native language. Below is an example of a simple procedure that calculates "the day following"; the code is given in two languages, English and German.
# Next day date - English variant #
MODE DATE = STRUCT(INT day, STRING month, INT year);
PROC the day following = (DATE x) DATE:
IF day OF x < length of month (month OF x, year OF x)
THEN (day OF x + 1, month OF x, year OF x)
ELIF month OF x = "December"
THEN (1, "January", year OF x + 1)
ELSE (1, successor of month (month OF x), year OF x)
FI;
# Nachfolgetag - Deutsche Variante #
MENGE DATUM = TUPEL(GANZ tag, WORT monat, GANZ jahr);
FUNKTION naechster tag nach = (DATUM x) DATUM:
WENN tag VON x < monatslaenge(monat VON x, jahr VON x)
DANN (tag VON x + 1, monat VON x, jahr VON x)
WENNABER monat VON x = "Dezember"
DANN (1, "Januar", jahr VON x + 1)
ANSONSTEN (1, nachfolgemonat(monat VON x), jahr VON x)
ENDEWENN;
Russian/Soviet example:
In English, ALGOL 68's case statement reads CASE ~ IN ~ OUT ~ ESAC; in Cyrillic it reads выб ~ в ~ либо ~ быв.
Some Vanitas
For its technical intricacies, ALGOL 68 needs a cornucopia of methods to deny the existence of something:
SKIP, "~" or "?"C – an undefined value always syntactically valid,
EMPTY – the only value admissible to VOID, needed for selecting VOID in a UNION,
VOID – syntactically like a MODE, but not one,
NIL or "○" – a name not denoting anything, of an unspecified reference mode,or specifically [1:0]INT – a vacuum is an empty array (here specifically of MODE []INT).
undefined – a standards report procedure raising an exception in the runtime system.
ℵ – Used in the standards report to inhibit introspection of certain types. e.g. SEMA
c.f. below for other examples of ℵ.
The term NIL ISNT var always evaluates to TRUE for any variable (but see above for the correct use of :=: and :/=:), whereas it is not known to which value a comparison x < SKIP evaluates for any integer x.
ALGOL 68 leaves intentionally undefined what happens in case of integer overflow, the integer bit representation, and the degree of numerical accuracy for floating point. In contrast, the language Java has been criticized for over-specifying the latter.
Both official reports included some advanced features that were not part of the standard language. These were indicated with an ℵ and considered effectively private. Examples include "≮" and "≯" for templates, the OUTTYPE/INTYPE for crude duck typing, and the STRAIGHTOUT and STRAIGHTIN operators for "straightening" nested arrays and structures.
Extract from the 1973 report:
§10.3.2.2. Transput modes
a) MODE ℵ SIMPLOUT = UNION (≮ℒ INT≯, ≮ℒ REAL≯, ≮ℒ COMPL≯, BOOL, ≮ℒ bits≯,
CHAR, [ ] CHAR);
b) MODE ℵ OUTTYPE = ¢ an actual – declarer specifying a mode united
from a sufficient set of modes none of which is 'void' or contains 'flexible',
'reference to', 'procedure' or 'union of' ¢;
c) MODE ℵ SIMPLIN = UNION (≮REF ℒ INT≯, ≮REF ℒ REAL≯, ≮REF ℒ COMPL≯, REF BOOL,
≮REF ℒ BITS≯, REF CHAR, REF [ ] CHAR, REF STRING);
d) MODE ℵ INTYPE = ¢ ... ¢;
§10.3.2.3. Straightening
a) OP ℵ STRAIGHTOUT = (OUTTYPE x) [ ] SIMPLOUT: ¢ the result of "straightening" 'x' ¢;
b) OP ℵ STRAIGHTIN = (INTYPE x) [ ] SIMPLIN: ¢ the result of straightening 'x' ¢;
Comparisons with other languages
1973 – Comparative on Algol 68 and PL/I – S. H. Valentine – February 1973
1973 – B. R. Alexander and G. E. Hedrick. A Comparison of PL/1 and ALGOL 68. International Symposium on Computers and Chinese Input/Output Systems. pp. 359–368.
1976 – A Language Comparison – A Comparison of the Properties of the Programming Languages ALGOL 68, CAMAC-IML, Coral 66, PAS 1, PEARL, PL/1, PROCOL, RTL/2 in Relation to Real Time Programming – R. Roessler; K. Schenk – October 1976 [7]
1976 – Evaluation of ALGOL 68, JOVIAL J3B, PASCAL, SIMULA 67, and TACPOL Versus TINMAN Requirements for a Common High Order Programming Language. October 1976 [8]
1977 – Report to the High Order-Language Working Group (HOLWG) – Executive Summary – Language Evaluation Coordinating Committee – Evaluation of PL/I, Pascal, ALGOL 68, HAL/S, PEARL, SPL/I, PDL/2, LTR, CS-4, LIS, Euclid, ECL, Moral, RTL/2, Fortran, COBOL, ALGOL 60, TACPOL, CMS-2, Simula 67, JOVIAL J3B, JOVIAL J73 & Coral 66.
1977 – A comparison of PASCAL and ALGOL 68 – Andrew S. Tanenbaum – June 1977.
1980 – A Critical Comparison of Several Programming Language Implementations – Algol 60, FORTRAN, Pascal and Algol 68.
1993 – Five Little Languages and How They Grew – BLISS, Pascal, Algol 68, BCPL & C – Dennis M. Ritchie – April 1993.
1999 – On Orthogonality: Algol68, Pascal and C
2000 – A Comparison of Arrays in ALGOL 68 and BLISS – University of Virginia – Michael Walker – Spring 2000
2009 – On Go – oh, go on – How well will Google's Go stand up against Brand X programming language? – David Given – November 2009
2010 – Algol and Pascal from "Concepts in Programming Languages – Block-structured procedural languages" – by Marcelo Fiore
Comparison of ALGOL 68 and C++
Revisions
Except where noted (with a superscript), the language described above is that of the "Revised Report(r1)".
The language of the unrevised report
The original language (as per the "Final Report"(r0)) differs in the syntax of the mode cast, and it had the feature of proceduring, i.e. coercing the value of a term into a procedure which evaluates the term. Proceduring was intended to make evaluation lazy. The most useful application could have been the short-circuited evaluation of Boolean operators. In:
OP ANDF = (BOOL a,PROC BOOL b)BOOL:(a | b | FALSE);
OP ORF = (BOOL a,PROC BOOL b)BOOL:(a | TRUE | b);
b is only evaluated if a is true.
As defined in ALGOL 68, it did not work as expected, for example in the code:
IF FALSE ANDF CO proc bool: CO ( print ("Should not be executed"); TRUE)
THEN ...
against the programmer's naïve expectations the print would be executed, as it is only the value of the elaborated enclosed clause after ANDF that was procedured. Textual insertion of the commented-out PROC BOOL: makes it work.
Some implementations emulate the expected behaviour for this special case by extension of the language.
Before revision, the programmer could decide to have the arguments of a procedure evaluated serially instead of collaterally by using semicolons instead of commas (gommas).
For example in:
PROC test = (REAL a; REAL b) :...
...
test (x PLUS 1, x);
The first argument to test is guaranteed to be evaluated before the second, but in the usual:
PROC test = (REAL a, b) :...
...
test (x PLUS 1, x);
then the compiler could evaluate the arguments in whatever order it felt like.
Extension proposals from IFIP WG 2.1
After the revision of the report, some extensions to the language have been proposed to widen the applicability:
partial parametrisation (aka Currying): creation of functions (with fewer parameters) by specification of some, but not all parameters for a call, e.g. a function logarithm of two parameters, base and argument, could be specialised to natural, binary or decadic log,
module extension: for support of external linkage, two mechanisms were proposed, bottom-up definition modules, a more powerful version of the facilities from ALGOL 68-R and top-down holes, similar to the ENVIRON and USING clauses from ALGOL 68C
mode parameters: for implementation of limited parametrical polymorphism (most operations on data structures like lists, trees or other data containers can be specified without touching the payload).
So far, only partial parametrisation has been implemented, in Algol 68 Genie.
True ALGOL 68s specification and implementation timeline
The S3 language that was used to write the ICL VME operating system and much other system software on the ICL 2900 Series was a direct derivative of Algol 68. However, it omitted many of the more complex features, and replaced the basic modes with a set of data types that mapped directly to the 2900 Series hardware architecture.
Implementation specific extensions
ALGOL 68R(R) from RRE was the first ALGOL 68 subset implementation, running on the ICL 1900. Based on the original language, the main subset restrictions were definition before use and no parallel processing. This compiler was popular in UK universities in the 1970s, where many computer science students learnt ALGOL 68 as their first programming language; the compiler was renowned for good error messages.
ALGOL 68RS(RS) from RSRE was a portable compiler system written in ALGOL 68RS (bootstrapped from ALGOL 68R), and implemented on a variety of systems including the ICL 2900/Series 39, Multics and DEC VAX/VMS. The language was based on the Revised Report, but with similar subset restrictions to ALGOL 68R. This compiler survives in the form of an Algol68-to-C compiler.
In ALGOL 68S(S) from Carnegie Mellon University the power of parallel processing was improved by adding an orthogonal extension, eventing. Any variable declaration containing keyword EVENT made assignments to this variable eligible for parallel evaluation, i.e. the right hand side was made into a procedure which was moved to one of the processors of the C.mmp multiprocessor system. Accesses to such variables were delayed after termination of the assignment.
Cambridge ALGOL 68C(C) was a portable compiler that implemented a subset of ALGOL 68, restricting operator definitions and omitting garbage collection, flexible rows and formatted transput.
Algol 68 Genie(G) by M. van der Veer is an ALGOL 68 implementation for today's computers and operating systems.
"Despite good intentions, a programmer may violate portability by inadvertently employing a local extension. To guard against this, each implementation should provide a PORTCHECK pragmat option. While this option is in force, the compiler prints a message for each construct that it recognizes as violating some portability constraint."
Quotes
... The scheme of type composition adopted by C owes considerable debt to Algol 68, although it did not, perhaps, emerge in a form that Algol's adherents would approve of. The central notion I captured from Algol was a type structure based on atomic types (including structures), composed into arrays, pointers (references), and functions (procedures). Algol 68's concept of unions and casts also had an influence that appeared later. Dennis Ritchie Apr 1993.
... C does not descend from Algol 68 is true, yet there was influence, much of it so subtle that it is hard to recover even when I think hard. In particular, the union type (a late addition to C) does owe to A68, not in any details, but in the idea of having such a type at all. More deeply, the type structure in general and even, in some strange way, the declaration syntax (the type-constructor part) was inspired by A68. And yes, of course, "long". Dennis Ritchie, 18 June 1988
"Congratulations, your Master has done it" – Niklaus Wirth
The more I see of it, the more unhappy I become – E. W. Dijkstra, 1968
[...] it was said that A68's popularity was inversely proportional to [...] the distance from Amsterdam – Guido van Rossum
[...] The best we could do was to send with it a minority report, stating our considered view that, "... as a tool for the reliable creation of sophisticated programs, the language was a failure." [...] – C. A. R. Hoare in his Oct 1980 Turing Award Lecture
"[...] More than ever it will be required from an adequate programming tool that it assists, by structure, the programmer in the most difficult aspects of his job, viz. in the reliable creation of sophisticated programs. In this respect we fail to see how the language proposed here is a significant step forward: on the contrary, we feel that its implicit view of the programmer's task is very much the same as, say, ten years ago. This forces upon us the conclusion that, regarded as a programming tool, the language must be regarded as obsolete. [...]" 1968 Working Group minority report on 23 December 1968.
See also
Works cited
Revised Report on the Algorithmic Language ALGOL 68 The official reference for users and implementors of the language (large pdf file, scanned from Algol Bulletin)
Revised Report on the Algorithmic Language ALGOL 68 Hyperlinked HTML version of the Revised Report
A Tutorial on Algol 68, by Andrew S. Tanenbaum, in Computing Surveys, Vol. 8, No. 2, June 1976, with Corrigenda (Vol. 9, No. 3, September 1977)
Algol 68 Genie – a GNU GPL Algol 68 compiler-interpreter
Open source ALGOL 68 implementations, on SourceForge
Algol68 Standard Hardware representation (.pdf) Archived 2014-01-02 at the Wayback Machine
Из истории создания компилятора с Алгол 68
Algol 68 – 25 Years in the USSR
Система программ динамической поддержки для транслятора с Алгол 68
C history with Algol68 heritage
McJones, Paul, "Algol 68 implementations and dialects", Software Preservation Group, Computer History Museum, 2011-07-05
Web enabled ALGOL 68 compiler for small experiments |
AMPL (A Mathematical Programming Language) is an algebraic modeling language to describe and solve high-complexity problems for large-scale mathematical computing (i.e., large-scale optimization and scheduling-type problems).
It was developed by Robert Fourer, David Gay, and Brian Kernighan at Bell Laboratories.
AMPL supports dozens of solvers, both open source and commercial software, including CBC, CPLEX, FortMP, MINOS, IPOPT, SNOPT, KNITRO, and LGO. Problems are passed to solvers as nl files.
AMPL is used by more than 100 corporate clients, and by government agencies and academic institutions. One advantage of AMPL is the similarity of its syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization. Many modern solvers available on the NEOS Server (formerly hosted at the Argonne National Laboratory, currently hosted at the University of Wisconsin, Madison) accept AMPL input. According to the NEOS statistics AMPL is the most popular format for representing mathematical programming problems.
Features
AMPL features a mix of declarative and imperative programming styles. Formulating optimization models occurs via declarative language elements such as sets, scalar and multidimensional parameters, decision variables, objectives and constraints, which allow for concise description of most problems in the domain of mathematical optimization.
Procedures and control flow statements are available in AMPL for
the exchange of data with external data sources such as spreadsheets, databases, XML and text files
data pre- and post-processing tasks around optimization models
the construction of hybrid algorithms for problem types for which no direct efficient solvers are available.
To support re-use and simplify construction of large-scale optimization problems, AMPL allows separation of model and data.
AMPL supports a wide range of problem types, among them:
Linear programming
Quadratic programming
Nonlinear programming
Mixed-integer programming
Mixed-integer quadratic programming with or without convex quadratic constraints
Mixed-integer nonlinear programming
Second-order cone programming
Global optimization
Semidefinite programming problems with bilinear matrix inequalities
Complementarity theory problems (MPECs) in discrete or continuous variables
Constraint programming
AMPL invokes a solver in a separate process which has these advantages:
User can interrupt the solution process at any time
Solver errors do not affect the interpreter
32-bit version of AMPL can be used with a 64-bit solver and vice versa
Interaction with the solver is done through a well-defined nl interface.
Availability
AMPL is available for many popular 32- and 64-bit operating systems including Linux, macOS, Solaris, AIX, and Windows.
The translator is proprietary software maintained by AMPL Optimization LLC. However, several online services exist, providing free modeling and solving facilities using AMPL. A free student version with limited functionality and a free full-featured version for academic courses are also available. AMPL can be used from within Microsoft Excel via the SolverStudio Excel add-in.
The AMPL Solver Library (ASL), which allows reading nl files and provides the automatic differentiation, is open-source. It is used in many solvers to implement AMPL connection.
Status history
This table presents significant steps in AMPL history.
A sample model
A transportation problem from George Dantzig is used to provide a sample AMPL model. This problem finds the least cost shipping schedule that meets requirements at markets and supplies at factories.
Solvers
Here is a partial list of solvers supported by AMPL:
See also
sol (format)
GNU MathProg (previously known as GMPL) is a subset of AMPL supported by the GNU Linear Programming Kit
Official website
Prof. Fourer's home page at Northwestern University |
AppleScript is a scripting language created by Apple Inc. that facilitates automated control over scriptable Mac applications. First introduced in System 7, it is currently included in all versions of macOS as part of a package of system automation tools. The term "AppleScript" may refer to the language itself, to an individual script written in the language, or, informally, to the macOS Open Scripting Architecture that underlies the language.
Overview
AppleScript is primarily a scripting language developed by Apple to do inter-application communication (IAC) using Apple events. AppleScript is related to, but different from, Apple events. Apple events are designed to exchange data between and control other applications in order to automate repetitive tasks.
AppleScript has some processing abilities of its own, in addition to sending and receiving Apple events to applications. AppleScript can do basic calculations and text processing, and is extensible, allowing the use of scripting additions that add new functions to the language. Mainly, however, AppleScript relies on the functionality of applications and processes to handle complex tasks. As a structured command language, AppleScript can be compared to Unix shells, the Microsoft Windows Script Host, or IBM REXX but it is distinct from all three. Essential to its functionality is the fact that Macintosh applications publish "dictionaries" of addressable objects and operations.
AppleScript has some elements of procedural programming, object-oriented programming (particularly in the construction of script objects), and natural language programming tendencies in its syntax, but does not strictly conform to any of these programming paradigms.: xxvi
History
In the late 1980s Apple considered using HyperCard's HyperTalk scripting language as the standard language for end-user development across the company and within its classic Mac OS operating system, and for interprocess communication between Apple and non-Apple products. HyperTalk could be used by novices to program a HyperCard stack. Apple engineers recognized that a similar, but more object-oriented scripting language could be designed to be used with any application, and the AppleScript project was born as a spin-off of a research effort to modernize the Macintosh as a whole and finally became part of System 7. AppleScript was released in October 1993 as part of System 7.1.1 (System 7 Pro, the first major upgrade to System 7). QuarkXPress (ver. 3.2) was one of the first major software applications that supported AppleScript. This in turn led to AppleScript being widely adopted within the publishing and prepress world, often tying together complex workflows. This was a key factor in retaining the Macintosh's dominant position in publishing and prepress, even after QuarkXPress and other publishing applications were ported to Microsoft Windows.
After some uncertainty about the future of AppleScript on Apple's next generation OS, the move to Mac OS X (around 2002) and its Cocoa frameworks greatly increased the usefulness and flexibility of AppleScript. Cocoa applications allow application developers to implement basic scriptability for their apps with minimal effort, broadening the number of applications that are directly scriptable. At the same time, the shift to the Unix underpinnings and AppleScript's ability to run Unix commands directly, with the do shell script command, allowed AppleScripts much greater control over the operating system itself.: 863 AppleScript Studio, released with Mac OS X 10.2 as part of Xcode, and later AppleScriptObjC framework, released in Mac OS X 10.6, allowed users to build Cocoa applications using AppleScript.: 969 In a 2006 article, Macworld included AppleScript among its rankings of Apple's 30 most significant products to date, placing it at #17. In a 2013 article for Macworld, veteran Mac software developer and commentator John Gruber concluded his reflection on "the unlikely persistence of AppleScript" by noting: "In theory, AppleScript could be much better; in practice, though, it's the best thing we have that works. It exemplifies the Mac's advantages over iOS for tinkerers and advanced users." In October 2016, longtime AppleScript product manager and automation evangelist Sal Soghoian left Apple when his position was eliminated "for business reasons". Veterans in the Mac community such as John Gruber and Andy Ihnatko generally responded with concern, questioning Apple's commitment to the developer community and pro users. Apple senior vice president of software engineering Craig Federighi responded in an email saying that "We have every intent to continue our support for the great automation technologies in macOS!", though Jeff Gamet at The Mac Observer opined that it did little to assuage his doubt about the future of Apple automation in general and AppleScript in particular. For the time being, AppleScript remains one component of macOS automation technologies, along with Automator, Shortcuts, Services, and shell scripting.
Basic concepts
AppleScript was designed to be used as an accessible end-user scripting language, offering users an intelligent mechanism to control applications, and to access and modify data and documents. AppleScript uses Apple events, a set of standardized data formats that the Macintosh operating system uses to send information to applications, roughly analogous to sending XPath queries over XML-RPC in the world of web services.: xxvi Apple events allow a script to work with multiple applications simultaneously, passing data between them so that complex tasks can be accomplished without human interaction. For example, an AppleScript to create a simple web gallery might do the following:
Open a photo in a photo-editing application (by sending that application an Open File Apple event).
Tell the photo-editing application to manipulate the image (e.g. reduce its resolution, add a border, add a photo credit)
Tell the photo-editing application to save the changed image in a file in some different folder (by sending that application a Save and/or Close Apple event).
Send the new file path (via another Apple event) to a text editor or web editor application.
Tell that editor application to write a link for the photo into an HTML file.
Repeat the above steps for an entire folder of images (hundreds or even thousands of photos).
Upload the HTML file and folder of revised photos to a website, by sending Apple events to a graphical FTP client, by using built-in AppleScript commands, or by sending Apple events to Unix FTP utilities.
For the user, hundreds or thousands of steps in multiple applications have been reduced to the single act of running the script, and the task is accomplished in much less time and with no possibility of random human error. A large complex script could be developed to run only once, while other scripts are used again and again.
An application's scriptable elements are visible in the application's Scripting Dictionary (distributed as part of the application), which can be viewed in any script editor. Elements are generally grouped into suites, according to loose functional relationships between them. There are two basic kinds of elements present in any suite: classes and commands.
Classes are scriptable objects—for example, a text editing application will almost certainly have classes for windows, documents, and texts—and these classes will have properties that can be changed (window size, document background color, text font size, etc.), and may contain other classes (a window will contain one or more documents, a document will contain text, a text object will contain paragraphs and words and characters).
Commands, by contrast, are instructions that can be given to scriptable objects. The general format for a block of AppleScript is to tell a scriptable object to run a command.
All scriptable applications share a few basic commands and objects, usually called the Standard Suite—commands to open, close or save a file, to print something, to quit, to set data to variables—as well as a basic application object that gives the scriptable properties of the application itself. Many applications have numerous suites capable of performing any task the application itself can perform. In exceptional cases, applications may support plugins which include their own scripting dictionaries.
AppleScript was designed with the ability to build scripts intuitively by recording user actions. Such AppleScript recordability has to be engineered into the app—the app must support Apple events and AppleScript recording; as Finder supports AppleScript recording, it can be useful for reference. When AppleScript Editor (Script Editor) is open and the Record button clicked, user actions for recordable apps are converted to their equivalent AppleScript commands and output to the Script Editor window. The resulting script can be saved and re-run to duplicate the original actions, or modified to be more generally useful.
Comments
Comments can be made multiple ways. A one-line comment can begin with 2 hyphens (--). In AppleScript 2.0, first released in Mac OS X Leopard, it may also begin with a number sign (#). This permits a self-contained AppleScript script to be stored as an executable text file beginning with the shebang line #!/usr/bin/osascript
Example:
For comments that take up multiple lines, AppleScript uses parentheses with asterisks inside.
Example:
Hello, world!
In AppleScript, the traditional "Hello, World!" program could be written in many different forms, including:
AppleScript has several user interface options, including dialogs, alerts, and lists of choices. (The character ¬, produced by typing ⌥ Option+return in the Script Editor, denotes continuation of a single statement across multiple lines.)
Each user interaction method can return the values of buttons clicked, items chosen or text entered for further processing. For example:
Natural language metaphor
Whereas Apple events are a way to send messages into applications, AppleScript is a particular language designed to send Apple events. In keeping with the objective of ease-of-use for beginners, the AppleScript language is designed on the natural language metaphor, just as the graphical user interface is designed on the desktop metaphor. A well-written AppleScript should be clear enough to be read and understood by anyone, and easily edited. The language is based largely on HyperCard's HyperTalk language, extended to refer not only to the HyperCard world of cards and stacks, but also theoretically to any document. To this end, the AppleScript team introduced the AppleEvent Object Model (AEOM), which specifies the objects any particular application "knows".
The heart of the AppleScript language is the use of terms that act as nouns and verbs that can be combined. For example, rather than a different verb to print a page, document or range of pages (such as printPage, printDocument, printRange), AppleScript uses a single "print" verb which can be combined with an object, such as a page, a document or a range of pages.
Generally, AEOM defines a number of objects—like "document" or "paragraph"—and corresponding actions—like "cut" and "close". The system also defines ways to refer to properties of objects, so one can refer to the "third paragraph of the document 'Good Day'", or the "color of the last word of the front window". AEOM uses an application dictionary to associate the Apple events with human-readable terms, allowing the translation back and forth between human-readable AppleScript and bytecode Apple events. To discover what elements of a program are scriptable, dictionaries for supported applications may be viewed. (In the Xcode and Script Editor applications, this is under File → Open Dictionary.)
To designate which application is meant to be the target of such a message, AppleScript uses a "tell" construct:
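For example (any scriptable application could stand in for Microsoft Word here):
tell application "Microsoft Word"
    quit
end tell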
Alternatively, the tell may be expressed in one line by using an infinitive:
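tell application "Microsoft Word" to quit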
For events in the "Core Suite" (activate, open, reopen, close, print, and quit), the application may be supplied as the direct object to transitive commands:
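quit application "Microsoft Word"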
The concept of an object hierarchy can be expressed using nested blocks:
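A sketch of such nesting, assuming a page-layout application whose dictionary exposes documents, pages, and text boxes:
tell application "QuarkXPress"
    tell document 1
        tell page 2
            tell text box 1
                set word 5 to "Apple"
            end tell
        end tell
    end tell
end tell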
The concept of an object hierarchy can also be expressed using either nested prepositional phrases or a series of possessives:
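For instance, assuming an application whose dictionary defines TIFF images, rows, and pixels, the same object could be referenced either way:
pixel 7 of row 3 of TIFF image "my bitmap"
TIFF image "my bitmap"'s 3rd row's 7th pixel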
which in another programming language might be expressed as sequential method calls, like in this pseudocode:
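getTIFF("my bitmap").getRow(3).getPixel(7);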
AppleScript includes syntax for ordinal counting, "the first paragraph", as well as cardinal, "paragraph one". Likewise, numbers can be referred to as text or numerically: "five", "fifth", and "5" are all supported and are synonyms in AppleScript. Also, the word "the" can legally be used anywhere in the script in order to enhance readability: it has no effect on the functionality of the script.
Examples of scripts
A failsafe calculator:
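One possible sketch of such a script (not necessarily the original listing) wraps the arithmetic in a try block so that invalid input produces an alert instead of an error:
try
    set x to text returned of (display dialog "First number:" default answer "0") as number
    set y to text returned of (display dialog "Second number:" default answer "0") as number
    set op to button returned of (display dialog "Operation:" buttons {"Add", "Subtract", "Multiply"})
    if op is "Add" then
        set theResult to x + y
    else if op is "Subtract" then
        set theResult to x - y
    else
        set theResult to x * y
    end if
    display dialog "Result: " & theResult
on error
    display alert "Invalid input."
end try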
A simple username and password dialog box sequence. Here, the username is John and password is app123:
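A sketch of such a sequence (the expected values are taken from the description above; the exact wording of the prompts is illustrative):
set userName to text returned of (display dialog "Username:" default answer "")
set userPass to text returned of (display dialog "Password:" default answer "" with hidden answer)
if userName is "John" and userPass is "app123" then
    display dialog "Access granted."
else
    display dialog "Access denied."
end if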
Development tools
Script editors
Script editors provide a unified programming environment for AppleScripts, including tools for composing, validating, compiling, running, and debugging scripts. They also provide mechanisms for opening and viewing AppleScript dictionaries from scriptable applications, saving scripts in a number of formats (compiled script files, application packages, script bundles, and plain text files), and usually provide features such as syntax highlighting and prewritten code snippets.
From Apple
AppleScript Editor (Script Editor)
The editor for AppleScript packaged with macOS, called AppleScript Editor in Mac OS X Snow Leopard (10.6) through OS X Mavericks (10.9) and Script Editor in all earlier and later versions of macOS. Scripts are written in document editing windows where they can be compiled and run, and these windows contain various panes in which logged information, execution results, and other information is available for debugging purposes. Access to scripting dictionaries and prewritten code snippets is available through the application menus. Since OS X Yosemite (10.10), Script Editor includes the ability to write in both AppleScript and JavaScript.
Xcode
A suite of tools for developing applications with features for editing AppleScripts or creating full-fledged applications written with AppleScript.
From third parties
Script Debugger, from Late Night Software
A third-party commercial IDE for AppleScript. Script Debugger is a more advanced AppleScript environment that allows the script writer to debug AppleScripts via single stepping, breakpoints, stepping in and out of functions/subroutines, variable tracking, etc. Script Debugger also contains an advanced dictionary browser that allows the user to see the dictionary in action in real world situations. That is, rather than just a listing of what the dictionary covers, one can open a document in Pages, for example, and see how the dictionary's terms apply to that document, making it easier to determine which parts of the dictionary to use. Script Debugger is not designed to create scripts with a GUI, other than basic alerts and dialogs, but is focused more on the coding and debugging of scripts.
Smile and SmileLab
A third-party freeware/commercial IDE for AppleScript, itself written entirely in AppleScript. Smile is free, and primarily designed for AppleScript development. SmileLab is commercial software with extensive additions for numerical analysis, graphing, machine automation and web production. Smile and SmileLab use an assortment of different windows—AppleScript windows for running and saving full scripts, AppleScript terminals for testing code line-by-line, unicode windows for working with text and XML. Users can create complex interfaces—called dialogs—for situations where the built-in dialogs in AppleScript are insufficient.
ASObjC Explorer 4, from Shane Stanley
A discontinued third-party commercial IDE for AppleScript, especially for AppleScriptObjC. The main feature is Cocoa-object/event logging, debugging and code-completion. Users can read Cocoa events and objects like other scriptable applications. This tool was originally built for AppleScript Libraries (available in OS X Mavericks). AppleScript Libraries aims for re-usable AppleScript components and supports built-in AppleScript dictionary (sdef). ASObjC Explorer 4 can be an external Xcode script editor, too.
FaceSpan, from Late Night Software
A discontinued third-party commercial IDE for creating AppleScript applications with graphic user interfaces.
Script launchers
AppleScripts can be run from a script editor, but it is usually more convenient to run scripts directly, without opening a script editor application. There are a number of options for doing so:
Applets
AppleScripts can be saved from a script editor as applications (called applets, or droplets when they accept input via drag and drop). Applets can be run from the Dock, from the toolbar of Finder windows, from Spotlight, from third-party application launchers, or from any other place where applications can be run.
Folder actions
Using AppleScript folder actions, scripts can be launched when specific changes occur in folders (such as adding or removing files). Folder actions can be assigned by clicking on a folder and choosing Folder Actions Setup... from the contextual menu; the location of this command differs slightly in Mac OS X 10.6.x from earlier versions. This same action can be achieved with third-party utilities such as Hazel.
Hotkey launchers
Keyboard shortcuts can be assigned to AppleScripts in the script menu using the Keyboard & Mouse Settings Preference Pane in System Preferences. In addition, various third-party utilities are available—Alfred, FastScripts, Keyboard Maestro, QuicKeys, Quicksilver, TextExpander—which can run AppleScripts on demand using key combinations.
Script menu
This system-wide menu provides access to AppleScripts from the macOS menu bar, visible no matter what application is running. (In addition, many Apple applications, some third-party applications, and some add-ons provide their own script menus. These may be activated in different ways, but all function in essentially the same manner.) Selecting a script in the script menu launches it. Since Mac OS X 10.6.x, the system-wide script menu can be enabled from the preferences of Script Editor; in prior versions of Mac OS X, it could be enabled from the AppleScript Utility application. When first enabled, the script menu displays a default library of fairly generic, functional AppleScripts, which can also be opened in Script Editor and used as examples for learning AppleScript. Scripts can be organized so that they only appear in the menu when particular applications are in the foreground.
Unix command line and launchd
AppleScripts can be run from the Unix command line, or from launchd for scheduled tasks, by using the osascript command line tool. The osascript tool can run compiled scripts (.scpt files) and plain text files (.applescript files—these are compiled by the tool at runtime). Script applications can be run using the Unix open command.
AppleScript resources
AppleScript Libraries
Re-usable AppleScript modules (available since OS X Mavericks), written in AppleScript or AppleScriptObjC and saved as script files or bundles in certain locations, that can be called from other scripts. When saved as a bundle, a library can include an AppleScript dictionary (sdef) file, thus functioning like a scripting addition but written in AppleScript or AppleScriptObjC.
AppleScript Studio
A framework for attaching Cocoa interfaces to AppleScript applications, part of the Xcode package in Mac OS X 10.4 and 10.5, now deprecated in favor of AppleScriptObjC.
AppleScriptObjC
A Cocoa development software framework, also called AppleScript/Objective-C or ASOC, part of the Xcode package since Mac OS X Snow Leopard. AppleScriptObjC allows AppleScripts to use Cocoa classes and methods directly, and it is available in all versions of macOS since Snow Leopard.
Automator
A graphical, modular editing environment in which workflows are built up from actions. It is intended to duplicate many of the functions of AppleScript without the necessity for programming knowledge. Automator has an action specifically designed to contain and run AppleScripts, for tasks that are too complex for Automator's simplified framework.
Scriptable core system applications
These background-only applications, packaged with macOS, are used to allow AppleScript to access features that would not normally be scriptable. As of Mac OS X 10.6.3 they include the scriptable applications for:
VoiceOver (scriptable auditory and braille screen reader package)
System Events (control of non-scriptable applications and access to certain system functions and basic file operations)
Printer Setup Utility (scriptable utility for handling print jobs)
Image Events (core image manipulation)
HelpViewer (scriptable utility for showing help displays)
Database Events (minimal SQLite3 database interface)
AppleScript Utility (for scripting a few AppleScript related preferences)
Scripting Additions (OSAX)
Plug-ins for AppleScript developed by Apple or third parties. They are designed to extend the built-in command set, expanding AppleScript's features and making it somewhat less dependent on functionality provided by applications. macOS includes a collection of scripting additions referred to as Standard Additions (StandardAdditions.osax) that adds a set of commands and classes that are not part of AppleScript's core features, including user interaction dialogs, reading and writing files, file system commands, date functions, and text and mathematical operations; without this OSAX, AppleScript would have no capacity to perform many basic actions not directly provided by an application.
Language essentials
Classes (data types)
While applications can define specialized classes (or data types), AppleScript also has a number of built-in classes. These basic data classes are directly supported by the language and tend to be universally recognized by scriptable applications. The most common ones are as follows:
Basic objects
application: an application object, used mostly as a specifier for tell statements (tell application "Finder" …).
script: a script object. Script objects are containers for scripts. Every AppleScript creates a script object when run, and script objects may be created within AppleScripts.
class: a meta-object that specifies the type of other objects.
reference: an object that encapsulates an unevaluated object specifier that may or may not point to a valid object. Can be evaluated on-demand by accessing its contents property.
Standard data objects
constant: a constant value. There are a number of language-defined constants, such as pi, tab, and linefeed.
boolean: a Boolean true/false value. Actually a subclass of constant.
number: a rarely used abstract superclass of integer and real.
integer: an integer. Can be manipulated with built-in mathematical operators.
real: a floating-point (real) number. Can be manipulated with built-in mathematical operators.
date: a date and time.
text: text. In versions of AppleScript before 2.0 (Mac OS X 10.4 and below) the text class was distinct from string and Unicode text, and the three behaved somewhat differently; in 2.0 (10.5) and later, they are all synonyms and all text is handled as being UTF-16 (“Unicode”)-encoded.
Containers
list: an ordered list of objects. Can contain any class, including other lists and classes defined by applications.
record: a keyed list of objects. Like a list, except structured as key–value pairs. Runtime keyed access is unsupported; all keys must be compile-time constant identifiers.
File system
alias: a reference to a file system object (file or folder). The alias will maintain its link to the object if the object is moved or renamed.
file: a reference to a file system object (file or folder). This is a static reference, and can point to an object that does not currently exist.
POSIX file: a reference to a file system object (file or folder), in plain text, using Unix (POSIX)-style slash (/) notation. Not a true data type, as AppleScript automatically converts POSIX files to ordinary files whenever they are used.
Miscellaneous
RGB color: specifies an RGB triplet (in 16-bit high color format), for use in commands and objects that work with colors.
unit types: class that converts between standard units. For instance, a value can be defined as square yards, then converted to square feet by casting between unit types (using the as operator).
Language structures
Many AppleScript processes are managed by blocks of code, where a block begins with a given command (such as tell or repeat) and ends with a matching end statement (end tell, end repeat). The most important structures are described below.
Conditionals
AppleScript offers two kinds of conditionals.
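For example (illustrative only): the block form spans several lines, while a simple conditional can be written on a single line.
set x to 5

if x is greater than 3 then
    set x to x + 1
end if

if x is less than 10 then set x to x + 1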
Loops
The repeat loop of AppleScript comes in several slightly different flavors. They all execute the block between the repeat and end repeat lines a number of times. The looping can be stopped prematurely with the exit repeat command.
Repeat forever.
Repeat a given number of times.
Conditional loops. The block inside repeat while loop executes as long as the condition evaluates to true. The condition is re-evaluated after each execution of the block. The repeat until loop is otherwise identical, but the block is executed as long as the condition evaluates to false.
Loop with a variable. When starting the loop, the variable is assigned to the start value. After each execution of the block, the optional step value is added to the variable. Step value defaults to 1.
Enumerate a list. On each iteration set the loopVariable to a new item in the given list
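Illustrative sketches of these loop forms (reconstructed for illustration, not the original listings):
set n to 0

-- Repeat forever, left with exit repeat
repeat
    set n to n + 1
    if n is 3 then exit repeat
end repeat

-- Repeat a given number of times
repeat 10 times
    set n to n + 1
end repeat

-- Conditional loops
repeat while n > 0
    set n to n - 1
end repeat
repeat until n is 5
    set n to n + 1
end repeat

-- Loop with a variable
repeat with i from 1 to 10 by 2
    set n to n + i
end repeat

-- Enumerate a list
repeat with anItem in {1, 2, 3}
    set n to n + anItem
end repeat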
One important variation on this block structure is in the form of on … end blocks that are used to define handlers (function-like subroutines). Handlers begin with on functionName() and end with end functionName, and are not executed as part of the normal script flow unless called from somewhere in the script.
Handlers can also be defined using "to" in place of "on", and can be written to accept labeled parameters that are not enclosed in parentheses.
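For example (a sketch; the handler names are invented):
-- A handler defined with "on" and a positional parameter
on greet(someone)
    display dialog "Hello, " & someone
end greet
greet("World")

-- A handler defined with "to" and a labeled parameter
to rock around the clock
    display dialog "Rocking at " & (clock as text)
end rock
rock around the current date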
There are four types of predefined handlers in AppleScript—run, open, idle, and quit—each of which is defined using the same on … end syntax as ordinary handlers.
Run handler
Defines the main code of the script, which is called when the script is run. Run handler blocks are optional, unless arguments are being passed to the script. If an explicit run handler block is omitted, then all code that is not contained inside handler blocks is executed as though it were in an implicit run handler.
Open handler
Defined using "on open theItems".
When a script containing an "open handler" is saved as an applet, the applet becomes a droplet. A droplet can be identified in the Finder by its icon, which includes an arrow, indicating that items can be dropped onto the icon. The droplet's open handler is executed when files or folders are dropped onto the droplet's icon.
Idle handler
A subroutine that is run periodically by the system when the application is idle.
An idle handler can be used in applets or droplets saved as stay-open applets, and is useful for scripts that watch for particular data or events. The length of the idle time is 30 seconds by default, but can be changed by including a 'return x' statement at the end of the subroutine, where x is the number of seconds the system should wait before running the handler again.
Quit handler
A handler that is run when the applet receives a Quit request. This can be used to save data or do other ending tasks before quitting.
Script objects
Script objects may be defined explicitly using the syntax:
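script scriptName
    -- properties and handlers go here
end script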
Script objects can use the same 'tell' structures that are used for application objects, and can be loaded from and saved to files. Runtime execution time can be reduced in some cases by using script objects.
Miscellaneous information
Variables are not strictly typed, and do not need to be declared. Variables can take any data type (including scripts and functions). The following commands are examples of the creation of variables:
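set variable1 to 1 -- an integer
set variable2 to "Hello" -- a text value
copy {17, "doubleday"} to variable3 -- a list
set {variable4, variable5} to variable3 -- copy the list items into two variables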
Script objects are full objects—they can encapsulate methods and data and inherit data and behavior from a parent script.
Subroutines cannot be called directly from application tell blocks. Use the 'my' or 'of me' keywords to do so.
Using the same technique for scripting addition commands can reduce errors and improve performance.
Open Scripting Architecture
An important aspect of the AppleScript implementation is the Open Scripting Architecture (OSA). Apple provides OSA for other scripting languages and third-party scripting/automation products such as QuicKeys and UserLand Frontier, to function on an equal status with AppleScript. AppleScript was implemented as a scripting component, and the basic specs for interfacing such components to the OSA were public, allowing other developers to add their own scripting components to the system. Public client APIs for loading, saving and compiling scripts would work the same for all such components, which also meant that applets and droplets could hold scripts in any of those scripting languages.
One feature of the OSA is scripting additions, or OSAX for Open Scripting Architecture eXtension, which were inspired by HyperCard's External Commands. Scripting additions are libraries that allow programmers to extend the function of AppleScript. Commands included as scripting additions are available system-wide, and are not dependent on an application (see also § AppleScript Libraries). The AppleScript Editor is also able to directly edit and run some of the OSA languages.
JavaScript for Automation
Under OS X Yosemite and later versions of macOS, the JavaScript for Automation (JXA) component remains the only serious OSA language alternative to AppleScript, though the Macintosh versions of Perl, Python, Ruby, and Tcl all support native means of working with Apple events without being OSA components. JXA also provides an Objective-C (and C language) foreign language interface. Being an environment based on WebKit's JavaScriptCore engine, the JavaScript feature set is in sync with the system Safari browser engine. JXA provides a JavaScript module system and it is also possible to use CommonJS modules via browserify.
See also
ARexx – competitive technology of 1987
Further reading
"AppleScript Language Guide". developer.apple.com. 2016. Retrieved May 9, 2017.
Munro, Mark Conway (2010). AppleScript. Developer Reference. Indianapolis: Wiley. ISBN 978-0-470-56229-1. OCLC 468969567.
Rosenthal, Hanaan; Sanderson, Hamish (2010). Learn AppleScript: The Comprehensive Guide to Scripting and Automation on Mac OS X (3rd ed.). Berkeley: Apress. doi:10.1007/978-1-4302-2362-7. ISBN 978-1-4302-2361-0. OCLC 308193726.
Soghoian, Sal; Cheeseman, Bill (2009). Apple Training Series: AppleScript 1-2-3. Apple Pro training series. Berkeley: Peachpit Press. ISBN 978-0-321-14931-2. OCLC 298560807.
Cook, William (2007). "AppleScript" (PDF). Proceedings of the third ACM SIGPLAN conference on History of programming languages. ACM. pp. 1–21. CiteSeerX 10.1.1.86.2218. doi:10.1145/1238844.1238845. ISBN 9781595937667. S2CID 220938191.
Ford Jr., Jerry Lee (2007). AppleScript Programming for the Absolute Beginner. Boston: Thomson Course Technology. ISBN 978-1-59863-384-9. OCLC 76910522.
Neuburg, Matt (2006). AppleScript: The Definitive Guide (2nd ed.). Beijing; Farnham: O'Reilly Media. ISBN 0-596-10211-9. OCLC 68694976.
Goldstein, Adam (2005). AppleScript: The Missing Manual. Missing Manual series. Sebastopol, CA; Farnham: O'Reilly Media. ISBN 0-596-00850-3. OCLC 56912218.
Trinko, Tom (2004). AppleScript for Dummies. For Dummies series (2nd ed.). Hoboken, NJ: Wiley. ISBN 978-0-7645-7494-8. OCLC 56500506.
"AppleScript Overview". developer.apple.com. 2007. Retrieved November 7, 2020.
"AppleScript for Python Programmers (Comparison Chart)". aurelio.net. 2005. Retrieved May 9, 2017.
"Doug's AppleScripts for iTunes". dougscripts.com. Retrieved May 9, 2017.
"MacScripter AppleScript community". macscripter.net. Retrieved May 9, 2017. |
AutoHotkey is a free and open-source custom scripting language for Microsoft Windows, initially aimed at providing easy keyboard shortcuts or hotkeys, fast macro-creation and software automation that allows users of most levels of computer skill to automate repetitive tasks in any Windows application. User interfaces can easily be extended or modified by AutoHotkey (for example, overriding the default Windows control key commands with their Emacs equivalents). The AutoHotkey installation includes its own extensive help file, and web-based documentation is also available.
Features
AutoHotkey scripts can be used to launch programs, open documents, and emulate keystrokes or mouse clicks and movements. AutoHotkey scripts can also assign, retrieve, and manipulate variables, run loops and manipulate windows, files, and folders. These commands can be triggered by a hotkey, such as a script that would open an internet browser whenever the user presses Ctrl+Alt+I on the keyboard. Keyboard keys can also be remapped or disabled, such that pressing Ctrl+M, for example, might result in the active window receiving an em dash — or nothing at all. AutoHotkey also allows for "hotstrings" that will automatically replace certain text as it is typed, such as assigning the string "btw" to produce the text "by the way" when typed, or the text "%o" to produce "percentage of". Further, scripts can be initiated automatically at computer startup and need not interact with the keyboard at all, perhaps performing file manipulation at a set interval.
More complex tasks can be achieved with custom data entry forms (GUI windows), working with the system registry, or using the Windows API by calling functions from DLLs. The scripts can be compiled into an executable file that can be run on other computers that do not have AutoHotkey installed. The source code is in C++ and can be compiled with Visual Studio Express.
Memory access through pointers is allowed just as in C.
Some uses for AutoHotkey:
Remapping the keyboard, such as from QWERTY to Dvorak or other alternative keyboard layouts.
Using shortcuts to fill in frequently-used file names or other phrases.
Typing punctuation not provided on the keyboard, such as curved quotes (“…”).
Typing other non-keyboard characters such as the sign × used, e.g., in describing a room as 10′×12′.
Controlling the mouse cursor with a keyboard or joystick.
Opening programs, documents, and websites with simple keystrokes.
Adding a signature to e-mail, message boards, etc.
Monitoring a system and automatically closing unwanted programs.
Scheduling an automatic reminder, system scan, or backup.
Automating repetitive tasks.
Filling out forms automatically.
Prototyping before implementing in another, more time-consuming, programming language.
History
The first public beta of AutoHotkey was released on November 10, 2003, after author Chris Mallett's proposal to integrate hotkey support into AutoIt v2 failed to generate a response from the AutoIt community. Mallett built a new program from scratch, basing the syntax on AutoIt v2 and using AutoIt v3 for some commands and the compiler. Later, AutoIt v3 switched from GPL to closed source because of "other projects repeatedly taking AutoIt code" and "setting themselves up as competitors".
In 2010, AutoHotkey v1.1 (originally called AutoHotkey_L) became the platform for ongoing development of AutoHotkey. In late 2012, it became the official branch. Another port of the program is AutoHotkey.dll. A well-known fork of the program is AutoHotkey_H, which has its own subforum on the main site.
Version 2
In July 2021, the first AutoHotkey v2 beta was released. The first release candidate followed on November 20, 2022, with the full release of v2.0.0 planned for later that year.
On December 20, 2022, version 2.0.0 was officially released. On January 22, 2023, AutoHotkey v2 became the official primary version. AutoHotkey v1.1 became a legacy version: no new features will be implemented, but it is still supported on the site and maintenance releases remain possible.
Examples
The following script will allow a user to search for a particular word or phrase using Google. After copying text from any application to the clipboard, pressing the configurable hotkey ⊞ Win+G will open the user's default web browser and perform the search.
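A minimal sketch of such a script in AutoHotkey v1 syntax (this version does not URL-encode the clipboard contents):
; Win+G: search Google for the current clipboard contents
#g::
Run, https://www.google.com/search?q=%Clipboard%
return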
The following script defines a hotstring that enables the user to type "afaik" in any program and have it automatically replaced with "as far as I know":
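::afaik::as far as I know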
User-contributed features
There are extensions/interops/inline script libraries available for usage with/from other programming languages:
Other major plugins enable support for:
Malware
When AutoHotkey is used to make self-contained software for distribution, that software must include the part of AutoHotkey itself that understands and executes AutoHotkey scripts, as it is an interpreted language. Inevitably, some malware has been written using AutoHotkey. When anti-malware products attempt to earmark items of malware that have been programmed using AutoHotkey, they sometimes falsely identify AutoHotkey as the culprit rather than the actual malware.
See also
AutoIt (for Windows)
AutoKey (for Linux)
Automator (for Macintosh)
Bookmarklet (for web browsers)
iMacros (for Firefox, Chrome, and Internet Explorer)
Keyboard Maestro (for Macintosh)
KiXtart (for Windows)
Macro Express (for Windows)
Winbatch (for Windows)
Official website
AutoHotkey Foundation LLC
The Automator Community and Resources |
AutoIt is a freeware programming language for Microsoft Windows. In its earliest release, it was primarily intended to create automation scripts (sometimes called macros) for Microsoft Windows programs but has since grown to include enhancements in both programming language design and overall functionality.
The scripting language in AutoIt 1 and 2 was statement-driven and designed primarily for simulating user interaction. From version 3 onward, the AutoIt syntax is similar to that found in the BASIC family of languages. In this form, AutoIt is a general-purpose, third-generation programming language with a classical data model and a variant data type that can store several types of data, including arrays.
An AutoIt automation script can be converted into a compressed, stand-alone executable which can be run on computers even if they do not have the AutoIt interpreter installed. A wide range of function libraries (known as UDFs, or "User Defined Functions") are also included as standard or are available from the website to add specialized functionality. AutoIt is also distributed with an IDE based on the free SciTE editor. The compiler and help text are fully integrated and provide a de facto standard environment for developers using AutoIt.
History
AutoIt1 and AutoIt2 were closed-source projects, and had a very different syntax than AutoIt3, whose syntax is more like VBScript and BASIC.
AutoIt3 was initially free and open-source, licensed under the terms of the GNU General Public License, with its initial public release 3.0.100 in February 2004, and had open-source releases in March 2004 and August 2004. Version 3.0.102, released in August 2004, was initially open-source, but by January 2005 was distributed as closed-source. Subsequent releases, starting from the February 2005 release of version 3.1.0, were all closed-source. Version 3.1.0 was also the first release with support for GUI scripts.
Related projects
The free and open-source AutoHotkey project derived 29 of its functions from the AutoIt 3.1 source code. The AutoHotkey syntax is quite different from AutoIt3 syntax, and rather resembles AutoIt2 syntax.
Features
AutoIt is typically used to produce utility software for Microsoft Windows and to automate routine tasks, such as systems management, monitoring, maintenance, or software installation. It is also used to simulate user interaction, whereby an application is "driven" (via automated form entry, keypresses, mouse clicks, and so on) to do things by an AutoIt script.
AutoIt can also be used in low-cost laboratory automation. Applications include instrument synchronization, alarm monitoring and results gathering. Devices such as CNC routers and 3D-printers can also be controlled.
64-bit code support from version 3.2.10.0
Add-on libraries and modules for specific apps
Automate sending user input and keystrokes to apps, as well as to individual controls within an app
Call functions in DLL files
Compatible with User Account Control
Compiling into standalone executables
Create graphical user interfaces, including message and input boxes
Include data files in the compiled file to be extracted when running
Manipulate windows and processes
Object-oriented design through a library
Play sounds, pause, resume, stop, seek, get the current position of the sound and get the length of the sound
Run console apps and access the standard streams
Scripting language with BASIC-like structure for Windows
Simulate mouse movements
Supports component object model (COM)
Supports regular expressions
Supports TCP and UDP protocols
Unicode support from version 3.2.4.0
Examples
Hello world
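A minimal example:
MsgBox(0, "AutoIt", "Hello, world!")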
Automating the Windows Calculator
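A sketch of such a script (the window title "Calculator" is an assumption and varies with the Windows version and language):
Run("calc.exe")              ; start the Windows calculator
WinWaitActive("Calculator")  ; wait until its window is active
Send("2{+}3=")               ; type 2 + 3 = into it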
Find average
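A sketch of such a program (the function and variable names are invented for illustration):
; Compute the average of the values in an array and display it
Func FindAverage($aValues)
    Local $sum = 0
    For $i = 0 To UBound($aValues) - 1
        $sum += $aValues[$i]
    Next
    Return $sum / UBound($aValues)
EndFunc

Local $aNumbers[5] = [2, 4, 6, 8, 10]
MsgBox(0, "Average", FindAverage($aNumbers))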
See also
AutoHotkey
Automator (for Macintosh)
Expect
iMacros
Keyboard Maestro (for Macintosh)
KiXtart
Macro Express
thinBasic
Winbatch
Official website |
AutoLISP is a dialect of the programming language Lisp built specifically for use with the full version of AutoCAD and its derivatives, which include AutoCAD Map 3D, AutoCAD Architecture and AutoCAD Mechanical. Neither the application programming interface (API) nor the interpreter to execute AutoLISP code is included in the AutoCAD LT product line up to Release 2023 (AutoCAD LT 2024 does include AutoLISP). A subset of AutoLISP functions is included in the browser-based AutoCAD web app.
Features
AutoLISP is a small, dynamically scoped, dynamically typed Lisp language dialect with garbage collection, immutable list structure, and settable symbols, lacking such regular Lisp features as a macro system, record definition facilities, arrays, functions with variable numbers of arguments, and let bindings. Aside from the core language, most of the primitive functions are for geometry, accessing AutoCAD's internal DWG database, or manipulation of graphical entities in AutoCAD. The properties of these graphical entities are revealed to AutoLISP as association lists in which values are paired with AutoCAD group codes that indicate properties such as definitional points, radii, colors, layers, linetypes, etc. AutoCAD loads AutoLISP code from .LSP files.
AutoLISP code can interact with the user through AutoCAD's graphical editor by use of primitive functions that allow the user to pick points, choose objects on screen, and input numbers and other data. AutoLISP also has a built-in graphical user interface (GUI) mini- or domain-specific language (DSL), the Dialog Control Language, for creating modal dialog boxes with automated layout, within AutoCAD.
History
AutoLISP was derived from an early version of XLISP, which was created by David Betz. The language was introduced in AutoCAD Version 2.18 in January 1986, and continued to be enhanced in successive releases up to release 13 in February 1995. After that, its development was neglected by Autodesk in favor of more fashionable development environments like Visual Basic for Applications (VBA), .NET Framework, and ObjectARX. However, it has remained AutoCAD's main user customizing language.
Vital-LISP, a considerably enhanced version of AutoLISP including an integrated development environment (IDE), debugger, compiler, and ActiveX support, was developed and sold by third-party developer Basis Software. Vital LISP was a superset of the existing AutoLISP language that added VBA-like access to the AutoCAD object model, reactors (event handling for AutoCAD objects), general ActiveX support, and some other general Lisp functions. Autodesk purchased this, renamed it Visual LISP, and briefly sold it as an add-on to AutoCAD release 14 released in May 1997. It was incorporated into AutoCAD 2000 released in March 1999, as a replacement for AutoLISP. Since then, Autodesk has ceased major enhancements to Visual LISP and focused more effort on VBA, .NET, and C++. As of January 31, 2014, Autodesk ended support for VBA versions before 7.1, as part of a long-term process of changing from VBA to .NET for user customizing.
AutoLISP has such a strong following that other computer-aided design (CAD) application vendors add it to their products. Bricscad, IntelliCAD, DraftSight and others have AutoLISP functionality, so that AutoLISP users can consider using them as an alternative to AutoCAD. Most development involving AutoLISP since AutoCAD 2000 is performed within Visual LISP since the original AutoLISP engine was replaced with the Visual LISP engine. There are thousands of utilities and applications that have been developed using AutoLISP or Visual LISP (distributed as LSP, FAS and VLX files).
Examples
A simple Hello world program in AutoLISP would be:
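(defun hello ()
  (princ "\nHello World!")
  (princ)
)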
Note the final line inside the function definition: when evaluated with no arguments, the princ function returns a null symbol, which is not displayed by the AutoCAD command-line interface. As the AutoCAD command line functions as a read–eval–print loop (REPL), this would normally print "Hello World!" to the command line, followed immediately by the return value of the call to princ. Therefore, without the final call to the princ function, the result of this would be:
Hello World!"\nHello World!"The prin1 function may also be used to achieve the same result.
A more complex example is:
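A sketch consistent with the description that follows (the specific group codes and prompt text are illustrative, not the original listing):
(defun c:pointlabel (/ pnt)
  (if (setq pnt (getpoint "\nPick a point: "))
    (progn
      ;; place a point object at the picked location
      (entmake (list '(0 . "POINT") (cons 10 pnt)))
      ;; place a one-line text object beside it showing the X and Y coordinates
      (entmake (list '(0 . "TEXT")
                     (cons 10 pnt)
                     (cons 40 (getvar "TEXTSIZE"))
                     (cons 1 (strcat (rtos (car pnt)) ", " (rtos (cadr pnt))))))))
  (princ)
)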
The above code defines a new function which generates an AutoCAD point object at a given point, with a one-line text object displaying the X and Y coordinates beside it. The name of the function includes a special prefix 'c:', which causes AutoCAD to recognize the function as a regular command. The user, upon typing 'pointlabel' at the AutoCAD command line, would be prompted to pick a point, either by typing the X and Y coordinates, or clicking a location in the drawing. The function would then place a marker at that point, and create a one-line text object next to it, containing the X and Y coordinates of the point expressed relative to the active User Coordinate System (UCS). The function requires no parameters, and contains one local variable ('pnt').
The above example could also be written using built-in AutoCAD commands to achieve the same result, however this approach is susceptible to changes to the command prompts between AutoCAD releases.
AutoLISP FAQ |
AWK (awk) is a domain-specific language designed for text processing and typically used as a data extraction and reporting tool. Like sed and grep, it is a filter, and is a standard feature of most Unix-like operating systems.
The AWK language is a data-driven scripting language consisting of a set of actions to be taken against streams of textual data – either run directly on files or used as part of a pipeline – for purposes of extracting or transforming text, such as producing formatted reports. The language extensively uses the string datatype, associative arrays (that is, arrays indexed by key strings), and regular expressions. While AWK has a limited intended application domain and was especially designed to support one-liner programs, the language is Turing-complete, and even the early Bell Labs users of AWK often wrote well-structured large AWK programs.
AWK was created at Bell Labs in the 1970s, and its name is derived from the surnames of its authors: Alfred Aho, Peter Weinberger, and Brian Kernighan. The acronym is pronounced the same as the name of the bird species auk, which is illustrated on the cover of The AWK Programming Language. When written in all lowercase letters, as awk, it refers to the Unix or Plan 9 program that runs scripts written in the AWK programming language.
History
AWK was initially developed in 1977 by Alfred Aho (author of egrep), Peter J. Weinberger (who worked on tiny relational databases), and Brian Kernighan. AWK takes its name from their respective initials. According to Kernighan, one of the goals of AWK was to have a tool that would easily manipulate both numbers and strings.
AWK was also inspired by Marc Rochkind's programming language that was used to search for patterns in input data, and was implemented using yacc.
As one of the early tools to appear in Version 7 Unix, AWK added computational features to a Unix pipeline besides the Bourne shell, the only scripting language available in a standard Unix environment. It is one of the mandatory utilities of the Single UNIX Specification, and is required by the Linux Standard Base specification.
AWK was significantly revised and expanded in 1985–88, resulting in the GNU AWK implementation written by Paul Rubin, Jay Fenlason, and Richard Stallman, released in 1988. GNU AWK may be the most widely deployed version because it is included with GNU-based Linux packages. GNU AWK has been maintained solely by Arnold Robbins since 1994. Brian Kernighan's nawk (New AWK) source was first released in 1993 unpublicized, and publicly since the late 1990s; many BSD systems use it to avoid the GPL license.
AWK was preceded by sed (1974). Both were designed for text processing. They share the line-oriented, data-driven paradigm, and are particularly suited to writing one-liner programs, due to the implicit main loop and current line variables. The power and terseness of early AWK programs – notably the powerful regular expression handling and conciseness due to implicit variables, which facilitate one-liners – together with the limitations of AWK at the time, were important inspirations for the Perl language (1987). In the 1990s, Perl became very popular, competing with AWK in the niche of Unix text-processing languages.
Structure of AWK programs
AWK reads the input a line at a time. A line is scanned for each pattern in the program, and for each pattern that matches, the associated action is executed.
An AWK program is a series of pattern action pairs, written as:
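condition { action }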
where condition is typically an expression and action is a series of commands. The input is split into records, where by default records are separated by newline characters so that the input is split into lines. The program tests each record against each of the conditions in turn, and executes the action for each expression that is true. Either the condition or the action may be omitted. The condition defaults to matching every record. The default action is to print the record. This is the same pattern-action structure as sed.
In addition to a simple AWK expression, such as foo == 1 or /^foo/, the condition can be BEGIN or END causing the action to be executed before or after all records have been read, or pattern1, pattern2 which matches the range of records starting with a record that matches pattern1 up to and including the record that matches pattern2 before again trying to match against pattern1 on subsequent lines.
In addition to normal arithmetic and logical operators, AWK expressions include the tilde operator, ~, which matches a regular expression against a string. As handy syntactic sugar, /regexp/ without using the tilde operator matches against the current record; this syntax derives from sed, which in turn inherited it from the ed editor, where / is used for searching. This syntax of using slashes as delimiters for regular expressions was subsequently adopted by Perl and ECMAScript, and is now common. The tilde operator was also adopted by Perl.
Commands
AWK commands are the statements that are substituted for action in the examples above. AWK commands can include function calls, variable assignments, calculations, or any combination thereof. AWK contains built-in support for many functions; many more are provided by the various flavors of AWK. Also, some flavors support the inclusion of dynamically linked libraries, which can also provide more functions.
The print command
The print command is used to output text. The output text is always terminated with a predefined string called the output record separator (ORS) whose default value is a newline. The simplest form of this command is:
print
This displays the contents of the current record. In AWK, records are broken down into fields, and these can be displayed separately:
print $1
Displays the first field of the current record
print $1, $3
Displays the first and third fields of the current record, separated by a predefined string called the output field separator (OFS) whose default value is a single space character.
Although these fields ($X) may bear resemblance to variables (the $ symbol indicates variables in Perl), they actually refer to the fields of the current record. A special case, $0, refers to the entire record. In fact, the commands "print" and "print $0" are identical in functionality.
The print command can also display the results of calculations and/or function calls:
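print 3+2
print length($0)
print sin(0.5)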
Output may be sent to a file:
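print "expression" > "file name"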
or through a pipe:
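print "expression" | "command"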
Built-in variables
Awk's built-in variables include the field variables: $1, $2, $3, and so on ($0 represents the entire record). They hold the text or values in the individual text-fields in a record.
Other variables include:
NR: Number of Records. Keeps a current count of the number of input records read so far from all data files. It starts at zero, but is never automatically reset to zero.
FNR: File Number of Records. Keeps a current count of the number of input records read so far in the current file. This variable is automatically reset to zero each time a new file is started.
NF: Number of Fields. Contains the number of fields in the current input record. The last field in the input record can be designated by $NF, the 2nd-to-last field by $(NF-1), the 3rd-to-last field by $(NF-2), etc.
FILENAME: Contains the name of the current input-file.
FS: Field Separator. Contains the "field separator" used to divide fields in the input record. The default, "white space", allows any sequence of space and tab characters. FS can be reassigned with another character or character sequence to change the field separator.
RS: Record Separator. Stores the current "record separator" character. Since, by default, an input line is the input record, the default record separator character is a "newline".
OFS: Output Field Separator. Stores the "output field separator", which separates the fields when Awk prints them. The default is a "space" character.
ORS: Output Record Separator. Stores the "output record separator", which separates the output records when Awk prints them. The default is a "newline" character.
OFMT: Output Format. Stores the format for numeric output. The default format is "%.6g".
Variables and syntax
Variable names can use any of the characters [A-Za-z0-9_], with the exception of language keywords. The operators + - * / represent addition, subtraction, multiplication, and division, respectively. For string concatenation, simply place two variables (or string constants) next to each other. It is optional to use a space in between if string constants are involved, but two variable names placed adjacent to each other require a space in between. Double quotes delimit string constants. Statements need not end with semicolons. Finally, comments can be added to programs by using # as the first character on a line.
User-defined functions
In a format similar to C, function definitions consist of the keyword function, the function name, argument names and the function body. Here is an example of a function.
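function add_three(number) {
    return number + 3
}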
This statement can be invoked as follows:
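{ print add_three(36) }     # outputs 39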
Functions can have variables that are in the local scope. The names of these are added to the end of the argument list, though values for these should be omitted when calling the function. It is convention to add some whitespace in the argument list before the local variables, to indicate where the parameters end and the local variables begin.
Examples
Hello World
Here is the customary "Hello, world" program written in AWK:
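BEGIN { print "Hello, world!" }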
Print lines longer than 80 characters
Print all lines longer than 80 characters. Note that the default action is to print the current line.
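One way to write this:
length($0) > 80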
Count words
Count words in the input and print the number of lines, words, and characters (like wc):
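{
    words += NF
    chars += length($0) + 1    # add one to account for the newline at the end of each record
}
END { print NR, words, chars }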
As there is no pattern for the first line of the program, every line of input matches by default, so the increment actions are executed for every line. Note that words += NF is shorthand for words = words + NF.
Sum last word
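{ s += $NF }
END { print s + 0 }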
s is incremented by the numeric value of $NF, which is the last word on the line as defined by AWK's field separator (by default, white-space). NF is the number of fields in the current line, e.g. 4. Since $4 is the value of the fourth field, $NF is the value of the last field in the line regardless of how many fields this line has, or whether it has more or fewer fields than surrounding lines. $ is actually a unary operator with the highest operator precedence. (If the line has no fields, then NF is 0, $0 is the whole line, which in this case is empty apart from possible white-space, and so has the numeric value 0.)
At the end of the input the END pattern matches, so s is printed. However, since there may have been no lines of input at all, in which case no value has ever been assigned to s, it will by default be an empty string. Adding zero to a variable is an AWK idiom for coercing it from a string to a numeric value. (Concatenating an empty string coerces from a number to a string, e.g. s "". Note that there is no operator to concatenate strings; they are just placed adjacently.) With the coercion the program prints "0" on an empty input; without it, an empty line is printed.
Match a range of input lines
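NR % 4 == 1, NR % 4 == 3 { printf "%6d  %s\n", NR, $0 }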
The action statement prints each line numbered. The printf function emulates the standard C printf and works similarly to the print command described above. The pattern to match, however, works as follows: NR is the number of records, typically lines of input, AWK has so far read, i.e. the current line number, starting at 1 for the first line of input. % is the modulo operator. NR % 4 == 3 is true for the 3rd, 7th, 11th, etc., lines of input. The range pattern is false until the first part matches, on line 1, and then remains true up to and including when the second part matches, on line 3. It then stays false until the first part matches again on line 5.
Thus, the program prints lines 1,2,3, skips line 4, and then 5,6,7, and so on. For each line, it prints the line number (on a 6 character-wide field) and then the line contents. For example, when executed on this input:
Rome
Florence
Milan
Naples
Turin
Venice
The previous program prints:
1 Rome
2 Florence
3 Milan
5 Turin
6 Venice
Printing the initial or the final part of a file
As a special case, when the first part of a range pattern is constantly true, e.g. 1, the range will start at the beginning of the input. Similarly, if the second part is constantly false, e.g. 0, the range will continue until the end of input. For example,
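/^--cut here--$/, 0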
prints lines of input from the first line matching the regular expression ^--cut here--$, that is, a line containing only the phrase "--cut here--", to the end.
Calculate word frequencies
Word frequency using associative arrays:
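BEGIN {
    FS = "[^a-zA-Z]+"
}
{
    for (i = 1; i <= NF; i++)
        words[tolower($i)]++
}
END {
    for (i in words)
        print i, words[i]
}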
The BEGIN block sets the field separator to any sequence of non-alphabetic characters. Note that separators can be regular expressions. After that, we get to a bare action, which performs the action on every input line. In this case, for every field on the line, we add one to the number of times that word, first converted to lowercase, appears. Finally, in the END block, we print the words with their frequencies. The line
for (i in words)
creates a loop that goes through the array words, setting i to each subscript of the array. This is different from most languages, where such a loop goes through each value in the array. The loop thus prints out each word followed by its frequency count. tolower was an addition to the One True awk (see below) made after the book was published.
Match pattern from command line
This program can be represented in several ways. The first one uses the Bourne shell to make a shell script that does everything. It is the shortest of these methods:
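#!/bin/sh
# usage: scriptname pattern file ...
pattern="$1"
shift
awk '/'"$pattern"'/ { print FILENAME ":" $0 }' "$@"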
The $pattern in the awk command is not protected by single quotes so that the shell does expand the variable, but it needs to be put in double quotes to properly handle patterns containing spaces. A pattern by itself in the usual way checks to see if the whole line ($0) matches. FILENAME contains the current filename. awk has no explicit concatenation operator; two adjacent strings are concatenated. $0 expands to the original unchanged input line.
There are alternate ways of writing this. This shell script accesses the environment directly from within awk:
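#!/bin/sh
export pattern="$1"
shift
awk '$0 ~ ENVIRON["pattern"] { print FILENAME ":" $0 }' "$@"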
This is a shell script that uses ENVIRON, an array introduced in a newer version of the One True awk after the book was published. The subscript of ENVIRON is the name of an environment variable; its result is the variable's value. This is like the getenv function in various standard libraries and POSIX. The shell script makes an environment variable pattern containing the first argument, then drops that argument and has awk look for the pattern in each file.
~ checks to see if its left operand matches its right operand; !~ is its inverse. Note that a regular expression is just a string and can be stored in variables.
The next way uses command-line variable assignment, in which an argument to awk can be seen as an assignment to a variable:
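#!/bin/sh
pattern="$1"
shift
awk '$0 ~ pattern { print FILENAME ":" $0 }' "pattern=$pattern" "$@"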
Alternatively, the -v var=value command-line option can be used (e.g. awk -v pattern="$pattern" ...).
Finally, this is written in pure awk, without help from a shell or without the need to know too much about the implementation of the awk script (as the variable assignment on command line one does), but is a bit lengthy:
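#!/usr/bin/awk -f
BEGIN {
    pattern = ARGV[1]
    for (i = 1; i < ARGC; i++)    # remove the first argument
        ARGV[i] = ARGV[i + 1]
    ARGC--
    if (ARGC == 1) {    # the pattern was the only argument, so force reading from standard input
        ARGC = 2
        ARGV[1] = "-"
    }
}
$0 ~ pattern { print FILENAME ":" $0 }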
The BEGIN is necessary not only to extract the first argument, but also to prevent it from being interpreted as a filename after the BEGIN block ends. ARGC, the number of arguments, is always guaranteed to be ≥1, as ARGV[0] is the name of the command that executed the script, most often the string "awk". Also note that ARGV[ARGC] is the empty string, "". # initiates a comment that expands to the end of the line.
Note the if block. awk only checks to see if it should read from standard input before it runs the command. This means that
awk 'prog'
only works because the fact that there are no filenames is only checked before prog is run! If ARGC is explicitly set to 1 so that there are no arguments, awk will simply quit because it concludes that there are no more input files. Therefore, the script needs to explicitly state that it should read from standard input by using the special filename -.
Self-contained AWK scripts
On Unix-like operating systems self-contained AWK scripts can be constructed using the shebang syntax.
For example, a script that prints the content of a given file may be built by creating a file named print.awk with the following content:
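#!/usr/bin/awk -f
{ print $0 }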
It can be invoked with: ./print.awk <filename>
The -f tells AWK that the argument that follows is the file to read the AWK program from, which is the same flag that is used in sed. Since they are often used for one-liners, both these programs default to executing a program given as a command-line argument, rather than a separate file.
Versions and implementations
AWK was originally written in 1977 and distributed with Version 7 Unix.
In 1985 its authors started expanding the language, most significantly by adding user-defined functions. The language is described in the book The AWK Programming Language, published 1988, and its implementation was made available in releases of UNIX System V. To avoid confusion with the incompatible older version, this version was sometimes called "new awk" or nawk. This implementation was released under a free software license in 1996 and is still maintained by Brian Kernighan (see external links below).
Old versions of Unix, such as UNIX/32V, included awkcc, which converted AWK to C. Kernighan wrote a program to turn awk into C++; its state is not known.
BWK awk, also known as nawk, refers to the version by Brian Kernighan. It has been dubbed the "One True AWK" because of the use of the term in association with the book that originally described the language and the fact that Kernighan was one of the original authors of AWK. FreeBSD refers to this version as one-true-awk. This version also has features not in the book, such as tolower and ENVIRON that are explained above; see the FIXES file in the source archive for details. This version is used by, for example, Android, FreeBSD, NetBSD, OpenBSD, macOS, and illumos. Brian Kernighan and Arnold Robbins are the main contributors to a source repository for nawk: github.com/onetrueawk/awk.
gawk (GNU awk) is another free-software implementation and the only implementation that makes serious progress implementing internationalization and localization and TCP/IP networking. It was written before the original implementation became freely available. It includes its own debugger, and its profiler enables the user to make measured performance enhancements to a script. It also enables the user to extend functionality with shared libraries. Some Linux distributions include gawk as their default AWK implementation. As of version 5.2 (September 2022) gawk includes a persistent memory feature that can remember script-defined variables and functions from one invocation of a script to the next and pass data between unrelated scripts, as described in the Persistent-Memory gawk User Manual: www.gnu.org/software/gawk/manual/pm-gawk/.
gawk-csv. The CSV extension of gawk provides facilities for handling input and output CSV formatted data.
mawk is a very fast AWK implementation by Mike Brennan based on a bytecode interpreter.
libmawk is a fork of mawk, allowing applications to embed multiple parallel instances of awk interpreters.
awka (whose front end is written atop the mawk program) is another translator of AWK scripts into C code. When compiled, statically including the author's libawka.a, the resulting executables are considerably sped up and, according to the author's tests, compare very well with other versions of AWK, Perl, or Tcl. Small scripts will turn into programs of 160–170 kB.
tawk (Thompson AWK) is an AWK compiler for Solaris, DOS, OS/2, and Windows, previously sold by Thompson Automation Software (which has ceased its activities).
Jawk is a project to implement AWK in Java, hosted on SourceForge. Extensions to the language are added to provide access to Java features within AWK scripts (i.e., Java threads, sockets, collections, etc.).
xgawk is a fork of gawk that extends gawk with dynamically loadable libraries. The XMLgawk extension was integrated into the official GNU Awk release 4.1.0.
QSEAWK is an embedded AWK interpreter implementation included in the QSE library that provides embedding application programming interface (API) for C and C++.
libfawk is a very small, function-only, reentrant, embeddable interpreter written in C
BusyBox includes an AWK implementation written by Dmitry Zakharov. This is a very small implementation suitable for embedded systems.
CLAWK by Michael Parker provides an AWK implementation in Common Lisp, based upon the regular expression library of the same author.
Books
Aho, Alfred V.; Kernighan, Brian W.; Weinberger, Peter J. (1988-01-01). The AWK Programming Language. New York, NY: Addison-Wesley. ISBN 0-201-07981-X. Retrieved 2017-01-22.
Robbins, Arnold (2001-05-15). Effective awk Programming (3rd ed.). Sebastopol, CA: O'Reilly Media. ISBN 0-596-00070-7. Retrieved 2009-04-16.
Dougherty, Dale; Robbins, Arnold (1997-03-01). sed & awk (2nd ed.). Sebastopol, CA: O'Reilly Media. ISBN 1-56592-225-5. Retrieved 2009-04-16.
Robbins, Arnold (2000). Effective Awk Programming: A User's Guide for Gnu Awk (1.0.3 ed.). Bloomington, IN: iUniverse. ISBN 0-595-10034-1. Archived from the original on 12 April 2009. Retrieved 2009-04-16.
See also
Data transformation
Event-driven programming
List of Unix commands
sed
Further reading
Andy Oram (May 19, 2021). "Awk: The Power and Promise of a 40-Year-Old Language". Fosslife. Retrieved June 9, 2021.
Hamilton, Naomi (May 30, 2008). "The A-Z of Programming Languages: AWK". Computerworld. Retrieved 2008-12-12. – Interview with Alfred V. Aho on AWK
Robbins, Daniel (2000-12-01). "Awk by example, Part 1: An intro to the great language with the strange name". Common threads. IBM DeveloperWorks. Retrieved 2009-04-16.
Robbins, Daniel (2001-01-01). "Awk by example, Part 2: Records, loops, and arrays". Common threads. IBM DeveloperWorks. Retrieved 2009-04-16.
Robbins, Daniel (2001-04-01). "Awk by example, Part 3: String functions and ... checkbooks?". Common threads. IBM DeveloperWorks. Archived from the original on 19 May 2009. Retrieved 2009-04-16.
AWK – Become an expert in 60 minutes
awk: pattern scanning and processing language – Shell and Utilities Reference, The Single UNIX Specification, Version 4 from The Open Group
gawk(1) – Linux User Manual – User Commands
The Amazing Awk Assembler by Henry Spencer.
AWK at Curlie
awklang.org The site for things related to the awk language
BCPL ("Basic Combined Programming Language") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967.
Design
BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Furthermore, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only 1⁄5 of the compiler's code needed to be rewritten to support a new machine, a task that usually took between 2 and 5 person-months. This approach became common practice later (e.g. Pascal, Java).
The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the architecture's machine word and of adequate capacity to represent any valid storage address. For many machines of the time, this data type was a 16-bit word. This choice later proved to be a significant problem when BCPL was used on machines in which the smallest addressable item was not a word but a byte or on machines with larger word sizes such as 32-bit or 64-bit.
The interpretation of any value was determined by the operators used to process the values. (For example, + added two values together, treating them as integers; ! indirected through a value, effectively treating it as a pointer.) In order for this to work, the implementation provided no type checking.
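The short sketch below (not taken from the original BCPL literature; the vector V and the printed value are purely illustrative) shows how the same untyped word values are interpreted differently depending on the operator applied to them:

GET "LIBHDR"

LET START() BE $(
    LET V = VEC 2            // allocate a small vector of words
    V!0 := 41                // ! treats V as a pointer and stores into its first cell
    V!1 := V!0 + 1           // + treats the same word values as integers
    WRITEF("%N*N", V!1)      // prints 42
$)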
The mismatch between BCPL's word orientation and byte-oriented hardware was addressed in several ways. One was by providing standard library routines for packing and unpacking words into byte strings. Later, two language features were added: the bit-field selection operator and the infix byte indirection operator (denoted by %).
BCPL handles bindings spanning separate compilation units in a unique way. There are no user-declarable global variables; instead, there is a global vector, similar to "blank common" in Fortran. All data shared between different compilation units comprises scalars and pointers to vectors stored in a pre-arranged place in the global vector. Thus, the header files (files included during compilation using the "GET" directive) become the primary means of synchronizing global data between compilation units, containing "GLOBAL" directives that present lists of symbolic names, each paired with a number that associates the name with the corresponding numerically addressed word in the global vector. As well as variables, the global vector contains bindings for external procedures. This makes dynamic loading of compilation units very simple to achieve. Instead of relying on the link loader of the underlying implementation, effectively, BCPL gives the programmer control of the linking process.
The global vector also made it very simple to replace or augment standard library routines. A program could save the pointer from the global vector to the original routine and replace it with a pointer to an alternative version. The alternative might call the original as part of its processing. This could be used as a quick ad hoc debugging aid.
BCPL was the first brace programming language and the braces survived the syntactical changes and have become a common means of denoting program source code statements. In practice, on limited keyboards of the day, source programs often used the sequences $( and $) in place of the symbols { and }. The single-line // comments of BCPL, which were not adopted by C, reappeared in C++ and later in C99.
The book BCPL: The language and its compiler describes the philosophy of BCPL as follows:
The philosophy of BCPL is not one of the tyrant who thinks he knows best and lays down the law on what is and what is not allowed; rather, BCPL acts more as a servant offering his services to the best of his ability without complaint, even when confronted with apparent nonsense. The programmer is always assumed to know what he is doing and is not hemmed in by petty restrictions.
History
BCPL was first implemented by Martin Richards of the University of Cambridge in 1967. BCPL was a response to difficulties with its predecessor, Cambridge Programming Language, later renamed Combined Programming Language (CPL), which was designed during the early 1960s. Richards created BCPL by "removing those features of the full language which make compilation difficult". The first compiler implementation, for the IBM 7094 under Compatible Time-Sharing System, was written while Richards was visiting Project MAC at the Massachusetts Institute of Technology in the spring of 1967. The language was first described in a paper presented to the 1969 Spring Joint Computer Conference.
BCPL has been rumored to have originally stood for "Bootstrap Cambridge Programming Language", but CPL was never created since development stopped at BCPL, and the acronym was later reinterpreted for the BCPL book.
BCPL is the language in which the original "Hello, World!" program was written. The first MUD was also written in BCPL (MUD1).
Several operating systems were written partially or wholly in BCPL (for example, TRIPOS and the earliest versions of AmigaDOS). BCPL was also the initial language used in the Xerox PARC Alto project, the first modern personal computer; among other projects, the Bravo document preparation system was written in BCPL.
An early compiler, bootstrapped in 1969, by starting with a paper tape of the O-code of Richards's Atlas 2 compiler, targeted the ICT 1900 series. The two machines had different word-lengths (48 vs 24 bits), different character encodings, and different packed string representations—and the successful bootstrapping increased confidence in the practicality of the method.
By late 1970, implementations existed for the Honeywell 635 and Honeywell 645, IBM 360, PDP-10, TX-2, CDC 6400, UNIVAC 1108, PDP-9, KDF 9 and Atlas 2. In 1974 a dialect of BCPL was implemented at BBN without using the intermediate O-code. The initial implementation was a cross-compiler hosted on BBN's TENEX PDP-10s, and directly targeted the PDP-11s used in BBN's implementation of the second generation IMPs used in the ARPANET.
There was also a version produced for the BBC Micro in the mid-1980s, by Richards Computer Products, a company started by John Richards, the brother of Martin Richards. The BBC Domesday Project made use of the language. Versions of BCPL for the Amstrad CPC and Amstrad PCW computers were also released in 1986 by UK software house Arnor Ltd. MacBCPL was released for the Apple Macintosh in 1985 by Topexpress Ltd, of Kensington, England.
Both the design and philosophy of BCPL strongly influenced B, which in turn influenced C. Programmers at the time debated whether an eventual successor to C would be called "D", the next letter in the alphabet, or "P", the next letter in the parent language name. The language most accepted as being C's successor is C++ (with ++ being C's increment operator), although meanwhile, a D programming language also exists.
In 1979, implementations of BCPL existed for at least 25 architectures; the language gradually fell out of favour as C became popular on non-Unix systems.
Martin Richards maintains a modern version of BCPL on his website, last updated in 2018. This can be set up to run on various systems including Linux, FreeBSD, and Mac OS X. The latest distribution includes graphics and sound libraries, and there is a comprehensive manual. He continues to program in it, including for his research on musical automated score following.
A common informal MIME type for BCPL is text/x-bcpl.
Examples
Hello world
Richards and Whitby-Strevens provide an example of the "Hello, World!" program for BCPL using a standard system header, 'LIBHDR':
GET "LIBHDR"
LET START() BE WRITES("Hello, World")
Further examples
If these programs are run using Richards' current version of Cintsys (December 2018), LIBHDR, START and WRITEF must be changed to lower case to avoid errors.
Print factorials:
GET "LIBHDR"
LET START() = VALOF $(
    FOR I = 1 TO 5 DO
        WRITEF("%N! = %I4*N", I, FACT(I))
    RESULTIS 0
$)
AND FACT(N) = N = 0 -> 1, N * FACT(N - 1)
Count solutions to the N queens problem:
GET "LIBHDR"
GLOBAL $(
    COUNT: 200
    ALL: 201
$)

LET TRY(LD, ROW, RD) BE
    TEST ROW = ALL THEN
        COUNT := COUNT + 1
    ELSE $(
        LET POSS = ALL & ~(LD | ROW | RD)
        UNTIL POSS = 0 DO $(
            LET P = POSS & -POSS
            POSS := POSS - P
            TRY(LD + P << 1, ROW + P, RD + P >> 1)
        $)
    $)

LET START() = VALOF $(
    ALL := 1
    FOR I = 1 TO 12 DO $(
        COUNT := 0
        TRY(0, 0, 0)
        WRITEF("%I2-QUEENS PROBLEM HAS %I5 SOLUTIONS*N", I, COUNT)
        ALL := 2 * ALL + 1
    $)
    RESULTIS 0
$)
Further reading
Martin Richards, The BCPL Reference Manual (Memorandum M-352, Project MAC, Cambridge, MA, USA, July, 1967)
Martin Richards, BCPL - a tool for compiler writing and systems programming (Proceedings of the Spring Joint Computer Conference, Vol 34, pp 557–566, 1969)
Martin Richards, Arthur Evans, Robert F. Mabee, The BCPL Reference Manual (MAC TR-141, Project MAC, Cambridge, MA, USA, 1974)
Martin Richards, Colin Whitby-Strevens, BCPL, the language and its compiler (Cambridge University Press, 1980) ISBN 0-521-28681-6
Martin Richards' BCPL distribution
Martin Richards' BCPL Reference Manual, 1967 by Dennis M. Ritchie
BCPL entry in the Jargon File
Nordier & Associates' x86 port
ArnorBCPL manual (1986, Amstrad PCW/CPC)
How BCPL evolved from CPL, Martin Richards
Ritchie's The Development of the C Language has commentary about BCPL's influence on C
The BCPL Cintsys and Cintpos User Guide
Blockly is a client-side library for the programming language JavaScript for creating block-based visual programming languages (VPLs) and editors. A project of Google, it is free and open-source software released under the Apache License 2.0. It typically runs in a web browser, and visually resembles the language Scratch.
Blockly uses visual blocks that link together to make writing code easier, and can generate code in JavaScript, Lua, Dart, Python, or PHP. It can also be customized to generate code in any textual programming language.
History
Blockly development began in summer 2011. The first public release was in May 2012 at Maker Faire. Blockly was originally designed as a replacement for OpenBlocks in App Inventor. Neil Fraser began the project with Quynh Neutron, Ellen Spertus, and Mark Friedman as contributors.
User interface
The default graphical user interface (GUI) of the Blockly editor consists of a toolbox, which holds the available blocks and from which a user can select them, and a workspace, where a user can drag, drop, and rearrange blocks. The workspace also includes, by default, zoom icons and a trashcan for deleting blocks. The editor can be modified easily to customize and limit the available editing features and blocks.
Customization
Blockly includes a set of visual blocks for common operations, and can be customized by adding more blocks. New blocks require a block definition and a generator. The definition describes the block's appearance (user interface) and the generator describes the block's translation to executable code. Definitions and generators can be written in JavaScript, or using a visual set of blocks, the Block Factory, which allows new blocks to be described using extant visual blocks; the intent is to make creating new blocks easier.
Applications
Blockly is used in several notable projects, including:
MIT's Scratch, visual programming environment for education
MIT's App Inventor, to create applications for Android.
MIT's CoCo, visual collaborative programming website for education.
Code.org, to teach introductory programming to millions of students in their Hour of Code program
Microsoft's MakeCode, "a free online learn-to-code platform where anyone can build games, code devices, and mod Minecraft"
RoboBlockly, a web-based robot simulation environment for learning coding and math
PICAXE, to control their educational microchips
SAM Labs, in STEAM learn-to-code "education solutions"
Features
Web-based using Scalable Vector Graphics (SVG)
Completely client-side JavaScript
Support of major web browsers including: Chrome, Firefox, Safari, Opera, Edge
Support for many programmatic constructs including variables, functions, arrays
Minimal type checking supported, designed for dynamically typed languages
Easy to extend with custom blocks
Clean code generation
Step-by-step code execution for tracing and debugging code
Localised into 100+ languages
Support for left-to-right and right-to-left languages
Official website
Boo is an onomatopoeic word for a loud, startling sound, as an exclamation intended to scare, or as a call of derision (see booing).
Boo or BOO may also refer to:
Places
Boo (Aller), parish in Asturias, Spain
Boo, standard abbreviation for the constellation Boötes
Boo, Ghana, a town in Lawra District in the Upper West Region
Boo, Guinea, in Nzérékoré Prefecture; see List of schools in Ghana
Boo, Sweden, locality in Stockholm County
Bodø Airport in Norway, IATA airport code BOO
Boo Islands, West Papua, Indonesia
Station
Code for Bogor railway station
People
Boo (name), a list of people with the given name, nickname or surname
Betty Boo (born 1970), English singer, songwriter and pop rapper Alison Moira Clarkson
Gangsta Boo (1979–2023), American rapper
Sabrian "Boo" Sledge, half of the American hip hop duo Boo & Gotti
Ben Okello Oluoch, Kenyan politician and host of the radio program Kogwen gi BOO
Arts, entertainment, and media
Fictional characters
Boo (character), a ghost character in the Mario series
Boo (Sonic the Hedgehog character), a ghost character in Sonic the Hedgehog series
Boo, a hamster belonging to Minsc in Baldur's Gate, and a character in Megatokyo
Boo, a character in the Malaysian animated television series Boo & Me
Boo, a character in the manga and anime Crayon Shin-chan
Boo, a human baby girl in the animated film Monsters, Inc.
Boo, Carrie Black's nickname in Orange Is the New Black
Boo! (comic strip), a character in the British comic The Dandy created by Andy Fanton
Majin Boo, an anime and manga character in Dragon Ball
Boo Radley, a character in the novel To Kill a Mockingbird and its adaptations
Boo, the title character in Boo! (TV series)
Films
Boo (2005 film), a horror film
B.O.O.: Bureau of Otherworldly Operations, an animated film
Boo! (1932 film), a 1932 comedy film
Boo! (2018 film), a 2018 horror film
Boo (2023 film), an Indian Telugu language horror film
Boo! A Madea Halloween, a 2016 horror comedy film
Boo 2! A Madea Halloween, a 2017 horror comedy film
Music
Boo! (album), by Was (Not Was)
Boo! (band), a South African band
Born of Osiris, an American heavy metal band
Television
Boo! (TV series), a 2003–2006 British children's series
"Boo" (CSI: NY), a 2007 episode
"Boo" (Dark Angel), a 2001 episode
"Boo!" (Frasier), a 2004 episode
"Boo!" (Roseanne), a 1989 episode
"Boo!" (Space Ghost Coast to Coast), an episode of Space Ghost Coast to Coast
Literature
The Boo (book), by Pat Conroy
Computing and technology
Boo (programming language)
.boo, a binary-to-text encoding system
Languages
Boo dialect, of the Teke-Ebo or Central Teke language, spoken in Congo and the Democratic Republic of the Congo
Boko language (Benin), also called Boo language
Bomu language, also called Boo, or Western Bobo Wule language
Bozo language, ISO 639 code boo, spoken in Mali
Other uses
Better Off Out, a political campaign
Black Oxygen Organics, a defunct multi-level marketing company
Bladder outlet obstruction
Boo (dog) (2006–2019)
Boô, a Saxon cattle shed
"Boo", a term of endearment
Boo FF, a Swedish football club in Boo, Stockholm
Boo.com, a clothing company
Build Own Operate, a form of infrastructure project operating concession
See also
Big Boo (disambiguation)
BO2 (disambiguation)
Boo Boo (disambiguation)
Boo language (disambiguation)
Boos (disambiguation)
Buu (disambiguation)
The Web Services Business Process Execution Language (WS-BPEL), commonly known as BPEL (Business Process Execution Language), is an OASIS standard executable language for specifying actions within business processes with web services. Processes in BPEL export and import information by using web service interfaces exclusively.
Overview
One can describe Web-service interactions in two ways: as executable business processes and as abstract business processes.
An executable business process: models an actual behavior of a participant in a business interaction.
An abstract business process: is a partially specified process that is not intended to be executed. Contrary to Executable Processes, an Abstract Process may hide some of the required concrete operational details. Abstract Processes serve a descriptive role, with more than one possible use case, including observable behavior and/or process template.
WS-BPEL aims to model the behavior of processes, via a language for the specification of both Executable and Abstract Business Processes. By doing so, it extends the Web Services interaction model and enables it to support business transactions. It also defines an interoperable integration model that should facilitate the expansion of automated process integration both within and between businesses. Its development came out of the notion that programming in the large and programming in the small required different types of languages.
As such, it is serialized in XML and aims to enable programming in the large.
Programming in the large/small
The concepts of programming in the large and programming in the small distinguish between two aspects of writing the type of long-running asynchronous processes that one typically sees in business processes:
Programming in the large generally refers to the high-level state transition interactions of a process. BPEL refers to this concept as an Abstract Process. A BPEL Abstract Process represents a set of publicly observable behaviors in a standardized fashion. An Abstract Process includes information such as when to wait for messages, when to send messages, when to compensate for failed transactions, etc.
Programming in the small, in contrast, deals with short-lived programmatic behavior, often executed as a single transaction and involving access to local logic and resources such as files, databases, et cetera.
History
The origins of WS-BPEL go back to Web Services Flow Language (WSFL) and Xlang.
In 2001, IBM and Microsoft had each defined their own fairly similar, "programming in the large" languages: WSFL (Web Services Flow Language) and Xlang, respectively. Microsoft even went ahead and created a scripting variant called XLANG/s which would later serve as the basis for their Orchestrations services inside their BizTalk Server. They specifically documented that this language "is proprietary and is not fully documented."
With the advent and popularity of BPML, and the growing success of BPMI.org and the open BPMS movement led by JBoss and Intalio Inc., IBM and Microsoft decided to combine these languages into a new language, BPEL4WS. In April 2003, BEA Systems, IBM, Microsoft, SAP, and Siebel Systems submitted BPEL4WS 1.1 to OASIS for standardization via the Web Services BPEL Technical Committee. Although BPEL4WS appeared as both a 1.0 and 1.1 version, the OASIS WS-BPEL technical committee voted on 14 September 2004 to name their spec "WS-BPEL 2.0". (This change in name aligned BPEL with other web service standard naming conventions which start with "WS-" (similar to WS-Security) and took account of the significant enhancements made between BPEL4WS 1.1 and WS-BPEL 2.0.) If not discussing a specific version, the moniker BPEL is commonly used.
In June 2007, Active Endpoints, Adobe Systems, BEA, IBM, Oracle, and SAP published the BPEL4People and WS-HumanTask specifications, which describe how human interaction in BPEL processes can be implemented.
Topics
Design goals
There were ten original design goals associated with BPEL:
Define business processes that interact with external entities through web service operations defined using Web Services Description Language (WSDL) 1.1, and that manifest themselves as Web services defined using WSDL 1.1. The interactions are "abstract" in the sense that the dependence is on portType definitions, not on port definitions.
Define business processes using an XML-based language. Do not define a graphical representation of processes or provide any particular design methodology for processes.
Define a set of Web service orchestration concepts that are meant to be used by both the external (abstract) and internal (executable) views of a business process. Such a business process defines the behavior of a single autonomous entity, typically operating in interaction with other similar peer entities. It is recognized that each usage pattern (i.e., abstract view and executable view) will require a few specialized extensions, but these extensions are to be kept to a minimum and tested against requirements such as import/export and conformance checking that link the two usage patterns.
Provide both hierarchical and graph-like control regimes, and allow their use to be blended as seamlessly as possible. This should reduce the fragmentation of the process modeling space.
Provide data manipulation functions for the simple manipulation of data needed to define process data and control flow.
Support an identification mechanism for process instances that allows the definition of instance identifiers at the application message level. Instance identifiers should be defined by partners and may change.
Support the implicit creation and termination of process instances as the basic lifecycle mechanism. Advanced lifecycle operations such as "suspend" and "resume" may be added in future releases for enhanced lifecycle management.
Define a long-running transaction model that is based on proven techniques like compensation actions and scoping to support failure recovery for parts of long-running business processes.
Use Web Services as the model for process decomposition and assembly.
Build on Web services standards (approved and proposed) as much as possible in a composable and modular manner.
The BPEL language
BPEL is an orchestration language, and not a choreography language. The primary difference between orchestration and choreography is executability and control. An orchestration specifies an executable process that involves message exchanges with other systems, such that the message exchange sequences are controlled by the orchestration designer. A choreography specifies a protocol for peer-to-peer interactions, defining, e.g., the legal sequences of messages exchanged with the purpose of guaranteeing interoperability. Such a protocol is not directly executable, as it allows many different realizations (processes that comply with it). A choreography can be realized by writing an orchestration (e.g., in the form of a BPEL process) for each peer involved in it. The orchestration and the choreography distinctions are based on analogies: orchestration refers to the central control (by the conductor) of the behavior of a distributed system (the orchestra consisting of many players), while choreography refers to a distributed system (the dancing team) which operates according to rules (the choreography) but without centralized control.
BPEL's focus on modern business processes, plus the histories of WSFL and XLANG, led BPEL to adopt web services as its external communication mechanism. Thus BPEL's messaging facilities depend on the use of the Web Services Description Language (WSDL) 1.1 to describe outgoing and incoming messages.
In addition to providing facilities to enable sending and receiving messages, the BPEL programming language also supports:
A property-based message correlation mechanism
XML and WSDL typed variables
An extensible language plug-in model to allow writing expressions and queries in multiple languages: BPEL supports XPath 1.0 by default
Structured-programming constructs including if-then-elseif-else, while, sequence (to enable executing commands in order) and flow (to enable executing commands in parallel)
A scoping system to allow the encapsulation of logic with local variables, fault-handlers, compensation-handlers and event-handlers
Serialized scopes to control concurrent access to variables.
Relationship of BPEL to BPMN
There is no standard graphical notation for WS-BPEL, as the OASIS technical committee decided this was out of scope. Some vendors have invented their own notations. These notations take advantage of the fact that most constructs in BPEL are block-structured (e.g., sequence, while, pick, scope, etcetera.) This feature enables a direct visual representation of BPEL process descriptions in the form of structograms, in a style reminiscent of a Nassi–Shneiderman diagram.
Others have proposed to use a substantially different business process modeling language, namely Business Process Model and Notation (BPMN), as a graphical front-end to capture BPEL process descriptions. As an illustration of the feasibility of this approach, the BPMN specification includes an informal and partial mapping from BPMN to BPEL 1.1. A more detailed mapping of BPMN to BPEL has been implemented in a number of tools, including an open-source tool known as BPMN2BPEL. However, the development of these tools has exposed fundamental differences between BPMN and BPEL, which make it very difficult, and in some cases impossible, to generate human-readable BPEL code from BPMN models. Even more difficult is the problem of BPMN-to-BPEL round-trip engineering: generating BPEL code from BPMN diagrams and maintaining the original BPMN model and the generated BPEL code synchronized, in the sense that any modification to one is propagated to the other.
Adding 'programming in the small' support to BPEL
BPEL's control structures such as 'if-then-elseif-else' and 'while' as well as its variable manipulation facilities depend on the use of 'programming in the small' languages to provide logic. All BPEL implementations must support XPath 1.0 as a default language. But the design of BPEL envisages extensibility so that systems builders can use other languages as well. BPELJ is an effort related to JSR 207 that may enable Java to function as a 'programming in the small' language within BPEL.
BPEL4People
Despite wide acceptance of Web services in distributed business applications, the absence of human interactions was a significant gap for many real-world business processes.
To fill this gap, BPEL4People extended BPEL from orchestration of Web services alone to orchestration of role-based human activities as well.
Objectives
Within the context of a business process BPEL4People
supports role-based interaction of people
provides means of assigning users to generic human roles
takes care to delegate ownership of a task to a person only
supports scenarios such as
four-eyes scenario
nomination
escalation
chained execution
by extending BPEL with additional independent syntax and semantics.
The WS-HumanTask specification introduces the definition of human tasks and notifications, including their properties, behavior and a set of operations used to manipulate human tasks. A coordination protocol is introduced in order to control autonomy and life cycle of service-enabled human tasks in an interoperable manner.
The BPEL4People specification introduces a WS-BPEL extension to address human interactions in WS-BPEL as a first-class citizen. It defines a new type of basic activity which uses human tasks as an implementation, and allows specifying tasks local to a process or use tasks defined outside of the process definition. This extension is based on the WS-HumanTask specification.
WS-BPEL 2.0
Version 2.0 introduced some changes and new features:
New activity types: repeatUntil, validate, forEach (parallel and sequential), rethrow, extensionActivity, compensateScope
Renamed activities: switch/case renamed to if/else, terminate renamed to exit
Termination Handler added to scope activities to provide explicit behavior for termination
Variable initialization
XSLT for variable transformations (New XPath extension function bpws:doXslTransform)
XPath access to variable data (XPath variable syntax $variable[.part]/location)
XML schema variables in Web service activities (for WS-I doc/lit style service interactions)
Locally declared messageExchange (internal correlation of receive and reply activities)
Clarification of Abstract Processes (syntax and semantics)
Enable expression language overrides at each activity
See also
BPEL4People
BPELscript
Business Process Model and Notation
Business Process Modeling
List of BPEL engines
Web Services Conversation Language
Workflow
WS-CDL
XML Process Definition Language
Yet Another Workflow Language
Further reading
Books on BPEL 2.0
SOA for the Business Developer: Concepts, BPEL, and SCA. ISBN 978-1-58347-065-7
The C shell (csh or the improved version, tcsh) is a Unix shell created by Bill Joy while he was a graduate student at University of California, Berkeley in the late 1970s. It has been widely distributed, beginning with the 2BSD release of the Berkeley Software Distribution (BSD) which Joy first distributed in 1978. Other early contributors to the ideas or the code were Michael Ubell, Eric Allman, Mike O'Brien and Jim Kulp.
The C shell is a command processor which is typically run in a text window, allowing the user to type and execute commands. The C shell can also read commands from a file, called a script. Like all Unix shells, it supports filename wildcarding, piping, here documents, command substitution, variables and control structures for condition-testing and iteration. What differentiated the C shell from others, especially in the 1980s, were its interactive features and overall style. Its new features made it easier and faster to use. The overall style of the language looked more like C and was seen as more readable.
On many systems, such as macOS and Red Hat Linux, csh is actually tcsh, an improved version of csh. Often one of the two files is either a hard link or a symbolic link to the other, so that either name refers to the same improved version of the C shell. The original csh source code and binary are part of NetBSD.
On Debian and some derivatives (including Ubuntu), there are two different packages: csh and tcsh. The former is based on the original BSD version of csh and the latter is the improved tcsh.
tcsh added filename and command completion and command line editing concepts borrowed from the Tenex system, which is the source of the "t". Because it only added functionality and did not change what already existed, tcsh remained backward compatible with the original C shell. Though it started as a side branch from the original source tree Joy had created, tcsh is now the main branch for ongoing development. tcsh is very stable but new releases continue to appear roughly once a year, consisting mostly of minor bug fixes.
Design objectives and features
The main design objectives for the C shell were that it should look more like the C programming language and that it should be better for interactive use.
More like C
The Unix system had been written almost exclusively in C, so the C shell's first objective was a command language that was more stylistically consistent with the rest of the system. The keywords, the use of parentheses, and the C shell's built-in expression grammar and support for arrays were all strongly influenced by C.
By today's standards, C shell may not seem particularly more C-like than many other popular scripting languages. But through the 80s and 90s, the difference was seen as striking, particularly when compared to Bourne shell (also known as sh), the then-dominant shell written by Stephen Bourne at Bell Labs. This example illustrates the C shell's more conventional expression operators and syntax.
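A minimal sketch of that comparison follows (the variable days and the threshold are illustrative rather than taken from any particular source):

# Bourne shell
if [ $days -gt 365 ]
then
    echo This is over a year.
fi

# C shell
if ( $days > 365 ) then
    echo This is over a year.
endif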
The Bourne sh lacked an expression grammar. The square bracketed condition had to be evaluated by the slower means of running the external test program. sh's if command took its argument words as a new command to be run as a child process. If the child exited with a zero return code, sh would look for a then clause (a separate statement, but often written joined on the same line with a semicolon) and run that nested block. Otherwise, it would run the else. Hard-linking the test program as both "test" and "[" gave the notational advantage of the square brackets and the appearance that the functionality of test was part of the sh language. sh's use of a reversed keyword to mark the end of a control block was a style borrowed from ALGOL 68.
By contrast, csh could evaluate the expression directly, which made it faster. It also claimed better readability: Its expressions used a grammar and a set of operators mostly copied from C, none of its keywords were reversed and the overall style was also more like C.
Here is a second example, comparing scripts that calculate the first 10 powers of 2.
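A sketch of the two scripts (variable names chosen only for illustration):

# Bourne shell
i=2
j=1
while [ $j -le 10 ]
do
    echo $i
    i=`expr $i \* 2`
    j=`expr $j + 1`
done

# C shell
set i = 2
set j = 1
while ( $j <= 10 )
    echo $i
    @ i *= 2
    @ j++
end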
Again because of the lack of an expression grammar, the sh script uses command substitution and the expr command. (Modern POSIX shell does have such a grammar: the statement could be written i=$((i * 2)) or : "$((i *= 2))".)
Finally, here is a third example, showing the differing styles for a switch statement.
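A sketch of the comparison (the variable food and its values are hypothetical):

# Bourne shell
case "$food" in
    apple|banana)
        echo Fruit
        ;;
    *)
        echo Something else
        ;;
esac

# C shell
switch ( $food )
    case apple:
    case banana:
        echo Fruit
        breaksw
    default:
        echo Something else
        breaksw
endsw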
In the sh script, ";;" marks the end of each case because sh disallows null statements otherwise.
Improvements for interactive use
The second objective was that the C shell should be better for interactive use. It introduced numerous new features that made it easier, faster and more friendly to use by typing commands at a terminal. Users could get things done with a lot fewer keystrokes and it ran faster. The most significant of these new features were the history and editing mechanisms, aliases, directory stacks, tilde notation, cdpath, job control, and path hashing. These new features proved very popular, and many of them have since been copied by other Unix shells.
History
History allows users to recall previous commands and rerun them by typing only a few quick keystrokes. For example, typing two exclamation marks ("!!") as a command causes the immediately preceding command to be run. Other short keystroke combinations, e.g., "!$" (meaning "the final argument of the previous command"), allow bits and pieces of previous commands to be pasted together and edited to form a new command.
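A short interactive sketch (% represents the shell prompt; the file name build.log is hypothetical):

% grep -i error build.log
% !!
% wc -l !$

Here !! reruns the grep command, and !$ then expands to build.log, the final argument of the previous command.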
Editing operators
Editing can be done not only on the text of a previous command, but also on variable substitutions. Operators range from simple string search/replace to parsing a pathname to extract a specific segment.
Aliases
Aliases allow the user to type the name of an alias and have the C shell expand it internally into whatever set of words the user has defined. For many simple situations, aliases run faster and are more convenient than scripts.
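For example (a sketch; the alias names are arbitrary):

% alias ll 'ls -l'
% alias la 'ls -a'
% ll /usr/local

The last line runs ls -l /usr/local.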
Directory stack
The directory stack allows the user to push or pop the current working directory, making it easier to jump back and forth between different places in the filesystem.
Tilde notation
Tilde notation offers a shorthand way of specifying pathnames relative to the home directory using the "~" character.
Filename completion
The escape key can be used interactively to show possible completions of a filename at the end of the current command line.
Cdpath
Cdpath extends the notion of a search path to the cd (change directory) command: If the specified directory is not in the current directory, csh will try to find it in the cdpath directories.
Job control
Well into the 1980s, most users only had simple character-mode terminals that precluded multiple windows, so they could only work on one task at a time. The C shell's job control allowed the user to suspend the current activity and create a new instance of the C shell, called a job, by typing ^Z. The user could then switch back and forth between jobs using the fg command. The active job was said to be in the foreground. Other jobs were said to be either suspended (stopped) or running in the background.
Path hashing
Path hashing speeds up the C shell's search for executable files. Rather than performing a filesystem call in each path directory, one at a time, until it either finds the file or runs out of possibilities, the C shell consults an internal hash table built by scanning the path directories. That table can usually tell the C shell where to find the file (if it exists) without having to search and can be refreshed with the rehash command.
Overview of the language
The C shell operates one line at a time. Each line is tokenized into a set of words separated by spaces or other characters with special meaning, including parentheses, piping and input/output redirection operators, semicolons, and ampersands.
Basic statements
A basic statement is one that simply runs a command. The first word is taken as name of the command to be run and may be either an internal command, e.g., echo, or an external command. The rest of the words are passed as arguments to the command.
At the basic statement level, here are some of the features of the grammar:
Wildcarding
The C shell, like all Unix shells, treats any command-line argument that contains wildcard characters as a pattern and replaces it with the list of all the filenames that match (see globbing).
* matches any number of characters.
? matches any single character.
[...] matches any of the characters inside the square brackets. Ranges are allowed, using the hyphen.
[^...] matches any character not in the set.
The C shell also introduced several notational conveniences (sometimes known as extended globbing), since copied by other Unix shells.
abc{def,ghi} is alternation (aka brace expansion) and expands to abcdef abcghi.
~ means the current user's home directory.
~user means user's home directory.
Multiple directory-level wildcards, e.g., "*/*.c", are supported.
Since version 6.17.01, recursive wildcarding à la zsh (e.g. "**/*.c" or "***/*.html") is also supported with the globstar option.
Giving the shell the responsibility for interpreting wildcards was an important decision on Unix. It meant that wildcards would work with every command, and always in the same way. However, the decision relied on Unix's ability to pass long argument lists efficiently through the exec system call that csh uses to execute commands. By contrast, on Windows, wildcard interpretation is conventionally performed by each application. This is a legacy of MS-DOS, which only allowed a 128-byte command line to be passed to an application, making wildcarding by the DOS command prompt impractical. Although modern Windows can pass command lines of up to roughly 32K Unicode characters, the burden for wildcard interpretation remains with the application.
I/O redirection
By default, when csh runs a command, the command inherits the csh's stdio file handles for stdin, stdout and stderr, which normally all point to the console window where the C shell is running. The i/o redirection operators allow the command to use a file instead for input or output.
> file means stdout will be written to file, overwriting it if it exists, and creating it if it doesn't. Errors still come to the shell window.
>& file means both stdout and stderr will be written to file, overwriting it if it exists, and creating it if it doesn't.
>> file means stdout will be appended at the end of file.
>>& file means both stdout and stderr will be appended at the end of file.
< file means stdin will be read from file.
<< string is a here document. Stdin will read the following lines up to the one that matches string.
Redirecting stderr alone isn't possible without the aid of a sub-shell.
Joining
Commands can be joined on the same line.
; means run the first command and then the next.
&& means run the first command and, if it succeeds with a 0 return code, run the next.
|| means run the first command and, if it fails with a non-zero return code, run the next.
Piping
Commands can be connected using a pipe, which causes the output of one command to be fed into the input of the next. Both commands run concurrently.
| means connect stdout to stdin of the next command. Errors still come to the shell window.
|& means connect both stdout and stderr to stdin of the next command.
Running concurrently means "in parallel". In a multi-core (multiple processor) system, the piped commands may literally be executing at the same time, otherwise the scheduler in the operating system time-slices between them.
Given a command, e.g., "a | b", the shell creates a pipe, then starts both a and b with stdio for the two commands redirected so that a writes its stdout into the input of the pipe while b reads stdin from the output of the pipe. Pipes are implemented by the operating system with a certain amount of buffering so that a can write for a while before the pipe fills but once the pipe fills any new write will block inside the OS until b reads enough to unblock new writes. If b tries to read more data than is available, it will block until a has written more data or until the pipe closes, e.g., if a exits.
Variable substitution
If a word contains a dollar sign, "$", the following characters are taken as the name of a variable and the reference is replaced by the value of that variable. Various editing operators, typed as suffixes to the reference, allow pathname editing (e.g., ":e" to extract just the extension) and other operations.
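For example (a sketch; % represents the prompt and the pathname is hypothetical):

% set doc = /home/alice/report.txt
% echo $doc:e
txt
% echo $doc:r
/home/alice/report
% echo $doc:h
/home/alice
% echo $doc:t
report.txt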
Quoting and escaping
Quoting mechanisms allow otherwise special characters, such as whitespace, wildcards, parentheses, and dollar signs, to be taken as literal text.
\ means take the next character as an ordinary literal character.
"string" is a weak quote. Enclosed whitespace and wildcards are taken as literals, but variable and command substitutions are still performed.
'string' is a strong quote. The entire enclosed string is taken as a literal.
Double quotes inside double quotes should be escaped with "\"". The same applies to the dollar symbol, to prevent variable expansion "\$". For backticks, to prevent command substitution nesting, single quotes are required "'\`'".
Command substitution
Command substitution allows the output of one command to be used as arguments to another.
`command` means take the output of command, parse it into words and paste them back into the command line.
The following is an example of nested command substitutions.
The following works too.
Background execution
Normally, when the C shell starts a command, it waits for the command to finish before giving the user another prompt signaling that a new command can be typed.
command & means start command in the background and prompt immediately for a new command.
Subshells
A subshell is a separate child copy of the shell that inherits the current state but can then make changes, e.g., to the current directory, without affecting the parent.
( commands ) means run commands in a subshell.
Control structures
The C shell provides control structures for both condition-testing and iteration. The condition-testing control structures are the if and switch statements. The iteration control structures are the while, foreach and repeat statements.
if statement
There are two forms of the if statement. The short form is typed on a single line but can specify only a single command if the expression is true.
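For example, the following sketch prints a message when the script receives no arguments:

if ( $#argv == 0 ) echo No arguments were given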
The long form uses then, else and endif keywords to allow for blocks of commands to be nested inside the condition.
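For example (the variable count is hypothetical):

if ( $count > 100 ) then
    echo Too many entries
else
    echo Count is $count
endif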
If the else and if keywords appear on the same line, csh chains, rather than nests them; the block is terminated with a single endif.
switch statement
The switch statement compares a string against a list of patterns, which may contain wildcard characters. If nothing matches, the default action, if there is one, is taken.
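A sketch (the variable answer and the patterns are illustrative):

switch ( $answer )
    case [Yy]*:
        echo Confirmed
        breaksw
    case [Nn]*:
        echo Cancelled
        breaksw
    default:
        echo Please answer yes or no
endsw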
while statement
The while statement evaluates an expression. If it is true, the shell runs the nested commands and then repeats for as long as the expression remains true.
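For example, this sketch prints the numbers 1 through 5:

set i = 1
while ( $i <= 5 )
    echo $i
    @ i++
end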
foreach statement
The foreach statement takes a list of values, usually a list of filenames produced by wildcarding, and then for each, sets the loop variable to that value and runs the nested commands.
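For example, this sketch names every .txt file in the current directory:

foreach f ( *.txt )
    echo Found $f
end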
repeat statement
The repeat statement repeats a single command an integral number of times.
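For example:

repeat 3 echo hello

prints hello three times.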
Variables
The C shell implements both shell and environment variables. Environment variables, created using the setenv statement, are always simple strings, passed to any child processes, which retrieve these variables via the envp[] argument to main().
Shell variables, created using the set or @ statements, are internal to C shell. They are not passed to child processes. Shell variables can be either simple strings or arrays of strings. Some of the shell variables are predefined and used to control various internal C shell options, e.g., what should happen if a wildcard fails to match anything.
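A short sketch of the two kinds of variables (the names and values are arbitrary):

setenv PAGER less          # environment variable, passed to child processes
set name = "Alice"         # simple shell variable, internal to the shell
set dirs = (src lib doc)   # shell variable holding an array of strings
echo $dirs[2]              # prints: lib   (arrays are indexed from 1)
@ total = 4 * 5            # the @ statement assigns the result of an expression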
In current versions of csh, strings can be of arbitrary length, well into millions of characters.
Expressions
The C shell implements a 32-bit integer expression grammar with operators borrowed from C but with a few additional operators for string comparisons and filesystem tests, e.g., testing for the existence of a file. Operators must be separated by whitespace from their operands. Variables are referenced as $name.
Operator precedence is also borrowed from C, but with different operator associativity rules to resolve the ambiguity of what comes first in a sequence of equal precedence operators. In C, the associativity is left-to-right for most operators; in C shell, it is right-to-left. For example,
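the following sketch contrasts how a sequence of equal-precedence shift operators groups in C and in the C shell (the value 10 and the shift counts are chosen only for illustration):

/* C: equal-precedence operators group left to right */
int i = 10 >> 2 << 2;      /* (10 >> 2) << 2 == 8 */

# C shell: equal-precedence operators group right to left
@ i = ( 10 >> 2 << 2 )     # 10 >> (2 << 2) == 0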
The parentheses in the C shell example are to avoid having the bit-shifting operators confused as I/O redirection operators. In either language, parentheses can always be used to explicitly specify the desired order of evaluation, even if only for clarity.
Reception
Although Stephen Bourne himself acknowledged that csh was superior to his shell for interactive use, it has never been as popular for scripting. Initially, and through the 1980s, csh could not be guaranteed to be present on all Unix systems, but sh could, which made it a better choice for any scripts that might have to run on other machines. By the mid-1990s, csh was widely available, but the use of csh for scripting faced new criticism by the POSIX committee, which specified that there should only be one preferred shell, the KornShell, for both interactive and scripting purposes. The C shell also faced criticism from others over the C shell's alleged defects in syntax, missing features, and poor implementation.
Syntax defects: were generally simple but unnecessary inconsistencies in the definition of the language. For example, the set, setenv and alias commands all did basically the same thing, namely, associate a name with a string or set of words. But all three had slight but unnecessary differences. An equal sign was required for a set but not for setenv or alias; parentheses were required around a word list for a set but not for setenv or alias, etc. Similarly, the if, switch and looping constructs use needlessly different keywords (endif, endsw and end) to terminate the nested blocks.
Missing features: the most commonly cited are the lack of the ability to manipulate the stdio file handles independently and the lack of support for functions. Aliases serve as a partial workaround for the missing functions: an alias spanning multiple lines of code must be enclosed in single quotes, with each line break inside it escaped by a backslash at the end of the line and the closing single quote delimiting the end of the alias. In scripts, having the script call itself recursively is preferable to aliases as a substitute for functions (an example is given below).
The implementation: the C shell's ad hoc parser has drawn the most serious criticism. By the early 1970s, compiler technology was sufficiently mature that most new language implementations used either a top-down or bottom-up parser capable of recognizing a fully recursive grammar. It is not known why an ad hoc design was chosen instead for the C shell. It may be simply that, as Joy put it in an interview in 2009, "When I started doing this stuff with Unix, I wasn’t a very good programmer." The ad hoc design meant that the C shell language was not fully recursive: there was a limit to how complex a command it could handle.
It worked for most interactively typed commands, but for the more complex commands a user might write in a script, it could easily fail, producing only a cryptic error message or an unwelcome result. For example, the C shell could not support piping between control structures. Attempting to pipe the output of a foreach command into grep simply didn't work. (The work-around, which works for many of the complaints related to the parser, is to break the code up into separate scripts. If the foreach is moved to a separate script, piping works because scripts are run by forking a new copy of csh that does inherit the correct stdio handles. It is also possible to split the code up within a single file; an example is given below.)
Another example is the unwelcome behavior in the following fragments. Both of these appear to mean, "If 'myfile' does not exist, create it by writing 'mytext' into it." But the version on the right always creates an empty file because the C shell's order of evaluation is to look for and evaluate I/O redirection operators on each command line as it reads it, before examining the rest of the line to see whether it contains a control structure.
The implementation is also criticized for its notoriously poor error messages, e.g., "0: Event not found.", which yields no useful information about the problem.
With practice, however, it is possible to work around these deficiencies by adopting better and safer approaches to writing scripts.
The "0: Event not found." error implies there aren't saved commands in the history. The history may not work properly in scripts, but having a pre-set of commands in a variable serves as workaround.
Splitting code up by having the script invoke itself recursively is the preferred workaround for the lack of functions, as in the sketch below.
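A sketch of this recursion workaround (the script name greet.csh, its argument handling, and its output are purely illustrative):

#!/bin/csh
# greet.csh - uses self-invocation in place of a function
if ( $#argv > 0 ) then
    # "function" body: runs when the script is called with arguments
    echo Hello, $1
    exit
endif
# main part of the script: "call" the function by re-invoking the script itself
$0 World
$0 csh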
Influence
The C shell was extremely successful in introducing a large number of innovations including the history mechanism, aliases, tilde notation, interactive filename completion, an expression grammar built into the shell, and more, that have since been copied by other Unix shells. But in contrast to sh, which has spawned a large number of independently developed clones, including ksh and bash, only two csh clones are known. (Since tcsh was based on the csh code originally written by Bill Joy, it is not considered a clone.)
In 1986, Allen Holub wrote On Command: Writing a Unix-Like Shell for MS-DOS, a book describing a program he had written called "SH" but which in fact copied the language design and features of csh, not sh. Companion diskettes containing full source for SH and for a basic set of Unix-like utilities (cat, cp, grep, etc.) were available for $25 and $30, respectively, from the publisher. The control structures, expression grammar, history mechanism and other features in Holub's SH were identical to those of the C shell.
In 1988, Hamilton Laboratories began shipping Hamilton C shell for OS/2. It included both a csh clone and a set of Unix-like utilities. In 1992, Hamilton C shell was released for Windows NT. The Windows version continues to be actively supported but the OS/2 version was discontinued in 2003. An early 1990 quick reference described the intent as "full compliance with the entire C shell language (except job control)" but with improvements to the language design and adaptation to the differences between Unix and a PC. The most important improvement was a top-down parser that allowed control structures to be nested or piped, something the original C shell could not support, given its ad hoc parser. Hamilton also added new language features including built-in and user-defined procedures, block-structured local variables and floating point arithmetic. Adaptation to a PC included support for the filename and other conventions on a PC and the use of threads instead of forks (which were not available under either OS/2 or Windows) to achieve parallelism, e.g., in setting up a pipeline.
See also
Command-line interpreter
Comparison of command shells
Further reading
Anderson, Gail; Paul Anderson (1986). The UNIX C Shell Field Guide. Prentice-Hall. ISBN 0-13-937468-X.
Wang, Paul (1988). An Introduction to Berkeley UNIX. Wadsworth Pub. Co. ISBN 0-534-08862-7.
DuBois, Paul (1995). Using csh & tcsh. O'Reilly & Associates. ISBN 1-56592-132-1.
Arick, Martin R. (1993). UNIX C Shell Desk Reference. John Wiley & Sons. ISBN 0-471-55680-7.
"Introduction to C Shell Programming". Canisius College Computer Science Department. Retrieved 23 June 2010.
An Introduction to the C shell by William Joy.
Linux in a Nutshell: Chapter 8. csh and tcsh.
tcsh home page.
tcsh(1) man page.
most recent available tcsh source code.
historical 2BSD csh source code dated 2 February 1980.
The Unix Tree, complete historical Unix distributions.
Csh programming considered harmful.
Top Ten Reasons not to use the C shell.
Caml (originally an acronym for Categorical Abstract Machine Language) is a multi-paradigm, general-purpose programming language which is a dialect of the ML programming language family. Caml was developed in France at INRIA and ENS.
Caml is statically typed, strictly evaluated, and uses automatic memory management. OCaml, the main descendant of Caml, adds many features to the language, including an object layer.
Examples
In the following, # represents the Caml prompt.
Hello World
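A minimal version, as it might be entered at the toplevel (the exact greeting text is illustrative):

# print_string "Hello, World!\n";;
Hello, World!
- : unit = ()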
Factorial function (recursion and purely functional programming)
Many mathematical functions, such as factorial, are most naturally represented in a purely functional form. The following recursive, purely functional Caml function implements factorial:
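One such definition, as it might appear at the prompt:

# let rec fact n =
    if n = 0 then 1
    else n * fact (n - 1);;
val fact : int -> int = <fun>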
The function can be written equivalently using pattern matching:
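One possible pattern-matching formulation:

# let rec fact = function
  | 0 -> 1
  | n -> n * fact (n - 1);;
val fact : int -> int = <fun>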
This latter form is the mathematical definition of factorial as a recurrence relation.
Note that the compiler inferred the type of this function to be int -> int, meaning that this function maps ints onto ints. For example, 12! is:
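Evaluated at the toplevel with the definition above:

# fact 12;;
- : int = 479001600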
Numerical derivative (higher-order functions)
Since Caml is a functional programming language, it is easy to create and pass around functions in Caml programs. This capability has an enormous number of applications. Calculating the numerical derivative of a function is one such application. The following Caml function d computes the numerical derivative of a given function f at a given point x:
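One way to write it, using a symmetric difference quotient (this formulation is a sketch rather than a canonical library function):

# let d delta f x =
    (f (x +. delta) -. f (x -. delta)) /. (2. *. delta);;
val d : float -> (float -> float) -> float -> float = <fun>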
This function requires a small value delta. A good choice for delta is the cube root of the machine epsilon.
The type of the function d indicates that it maps a float onto another function with the type (float -> float) -> float -> float. This allows us to partially apply arguments. This functional style is known as currying. In this case, it is useful to partially apply the first argument delta to d, to obtain a more specialised function:
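For example, using the cube root of the machine epsilon mentioned above (epsilon_float is the machine epsilon in OCaml's standard library):

# let d = d (epsilon_float ** (1. /. 3.));;
val d : (float -> float) -> float -> float = <fun>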
Note that the inferred type indicates that the replacement d is expecting a function with the type float -> float as its first argument. We can compute a numerical approximation to the derivative of f(x) = x^3 − x − 1 at x = 3 with:
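For example (the printed value is approximately 26; the exact digits depend on floating-point rounding):

# d (fun x -> x ** 3. -. x -. 1.) 3.;;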
The correct answer is f'(x) = 3x^2 − 1, so f'(3) = 27 − 1 = 26.
The function d is called a "higher-order function" because it accepts another function (f) as an argument.
We can go further and create the (approximate) derivative of f, by applying d while omitting the x argument:
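For instance:

# let f' = d (fun x -> x ** 3. -. x -. 1.);;
val f' : float -> float = <fun>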
The concepts of curried and higher-order functions are clearly useful in mathematical programs. In fact, these concepts are equally applicable to most other forms of programming and can be used to factor code much more aggressively, resulting in shorter programs and fewer bugs.
Discrete wavelet transform (pattern matching)
The 1D Haar wavelet transform of an integer-power-of-two-length list of numbers can be implemented very succinctly in Caml and is an excellent example of the use of pattern matching over lists, taking pairs of elements (h1 and h2) off the front and storing their sums and differences on the lists s and d, respectively:
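One such implementation (a sketch that assumes a non-empty input list whose length is a power of two; haar, aux, s and d are illustrative names):

# let haar l =
    let rec aux l s d =
      match l, s, d with
        [s], [], d -> s :: d
      | [], s, d -> aux s [] d
      | h1 :: h2 :: t, s, d -> aux t (h1 + h2 :: s) (h1 - h2 :: d)
      | _ -> invalid_arg "haar"
    in aux l [] [];;
val haar : int list -> int list = <fun>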
For example:
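With the sketch above, transforming a four-element list gives:

# haar [1; 2; 3; 4];;
- : int list = [10; 4; -1; -1]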
Pattern matching allows complicated transformations to be represented clearly and succinctly. Moreover, the Caml compiler turns pattern matches into very efficient code, at times resulting in programs that are shorter and faster than equivalent code written with a case statement (Cardelli 1984, p. 210.).
History
The first Caml implementation was written in Lisp by Ascánder Suárez in 1987 at the French Institute for Research in Computer Science and Automation (INRIA). Its successor, Caml Light, was implemented in C by Xavier Leroy and Damien Doligez, and the original was nicknamed "Heavy Caml" because of its higher memory and CPU requirements. Caml Special Light was a further complete rewrite that added a powerful module system to the core language. It was augmented with an object layer to become Objective Caml, eventually renamed OCaml.
See also
Categorical abstract machine
OCaml
Bibliography
The Functional Approach to Programming with Caml by Guy Cousineau and Michel Mauny.
Cardelli, Luca (1984). Compiling a Functional Language. ACM Symposium on LISP and Functional Programming. Association for Computing Machinery.
Official website – Caml language family |
ChucK is a concurrent, strongly timed audio programming language for real-time synthesis, composition, and performance,
which runs on Linux, Mac OS X, Microsoft Windows, and iOS. It is designed to favor readability and flexibility for the programmer over other considerations such as raw performance. It natively supports deterministic concurrency and multiple, simultaneous, dynamic control rates. Another key feature is the ability to live code: adding, removing, and modifying code on the fly, while the program is running, without stopping or restarting. It has a highly precise timing/concurrency model, allowing for arbitrarily fine granularity. It offers composers and researchers a powerful and flexible programming tool for building and experimenting with complex audio synthesis programs, and real-time interactive control.
ChucK was created and chiefly designed by Ge Wang as a graduate student working with Perry R. Cook. ChucK is distributed freely under the terms of the GNU General Public License on Mac OS X, Linux and Microsoft Windows. On iPhone and iPad, ChiP (ChucK for iPhone) is distributed under a limited, closed source license, and is not currently licensed to the public. However, the core team has stated that it would like to explore "ways to open ChiP by creating a beneficial environment for everyone".
Language features
The ChucK programming language is a loosely C-like object-oriented language, with strong static typing.
ChucK is distinguished by the following characteristics:
Direct support for real-time audio synthesis
A powerful and simple concurrent programming model
A unified timing mechanism for multi-rate event and control processing.
A language syntax that encourages left-to-right syntax and semantics within program statements.
Precision timing: a strongly timed sample-synchronous timing model.
Programs are dynamically compiled to ChucK virtual machine bytecode.
A runtime environment that supports on-the-fly programming.
The ChucK Operator (=>) that can be used in several ways to "chuck" any ordered flow of data from left to right.
ChucK standard libraries provide:
MIDI input and output.
Open Sound Control support.
HID connectivity.
Unit generators (UGens) - e.g., oscillators, envelopes, Synthesis ToolKit UGens, filters, etc.
Unit analyzers (UAnae) - blocks that perform analysis functions on audio signals and/or metadata input, and produce metadata analysis results as output - e.g., FFT/IFFT, spectral flux/centroid, RMS, etc.
Serial I/O capabilities - e.g., for communicating with Arduino boards.
File I/O capabilities.
Code example
The following is a simple ChucK program that generates sound and music:
// our signal graph (patch)
SinOsc f => dac;
// set gain
.3 => f.gain;
// an array of pitch classes (in half steps)
[ 0, 2, 4, 6, 9, 10 ] @=> int hi[];
// infinite loop
while( true )
{
// choose a note, shift registers, convert to frequency
Std.mtof( 65 + Std.rand2(0,1) * 43 +
hi[Std.rand2(0,hi.cap()-1)] ) => f.freq;
// advance time by 120 ms
120::ms => now;
}
Uses
ChucK has been used in performances by the Princeton Laptop Orchestra (PLOrk) and for developing Smule applications, including their ocarina emulator. PLOrk organizers attribute some of the uniqueness of their performances to the live coding they can perform with ChucK.
See also
Comparison of audio synthesis environments
Sonic Pi
Pure Data
Further reading
ChucK homepage at Princeton University
ChucK mirror at Stanford University
ChucK FLOSS manual |
IBM CICS (Customer Information Control System) is a family of mixed-language application servers that provide online transaction management and connectivity for applications on IBM mainframe systems under z/OS and z/VSE.
CICS family products are designed as middleware and support rapid, high-volume online transaction processing. A CICS transaction is a unit of processing initiated by a single request that may affect one or more objects. This processing is usually interactive (screen-oriented), but background transactions are possible.
CICS Transaction Server (CICS TS) sits at the head of the CICS family and provides services that extend or replace the functions of the operating system. These services can be more efficient than the generalized operating system services and also simpler for programmers to use, particularly with respect to communication with diverse terminal devices.
Applications developed for CICS may be written in a variety of programming languages and use CICS-supplied language extensions to interact with resources such as files, database connections, terminals, or to invoke functions such as web services. CICS manages the entire transaction such that if for any reason a part of the transaction fails all recoverable changes can be backed out.
While CICS TS has its highest profile among large financial institutions, such as banks and insurance companies, many Fortune 500 companies and government entities are reported to run CICS. Other, smaller enterprises can also run CICS TS and other CICS family products. CICS can regularly be found behind the scenes in, for example, bank-teller applications, ATM systems, industrial production control systems, insurance applications, and many other types of interactive applications.
Recent CICS TS enhancements include new capabilities to improve the developer experience, including the choice of APIs, frameworks, editors, and build tools, while at the same time providing updates in the key areas of security, resilience, and management. Earlier CICS TS releases added support for Web services and Java, event processing, Atom feeds, and RESTful interfaces.
History
CICS was preceded by an earlier, single-threaded transaction processing system, IBM MTCS. An 'MTCS-CICS bridge' was later developed to allow these transactions to execute under CICS with no change to the original application programs.
IBM's Customer Information Control System (CICS) was first developed in conjunction with Michigan Bell in 1966. Ben Riggins, then an IBM systems engineer at Virginia Electric Power Co., came up with the idea for the online system. CICS was originally developed in the United States out of the IBM Development Center in Des Plaines, Illinois, beginning in 1966 to address requirements from the public utility industry. The first CICS product was announced in 1968, named Public Utility Customer Information Control System, or PU-CICS. It became clear immediately that it had applicability to many other industries, so the Public Utility prefix was dropped with the introduction of the first release of the CICS Program Product on July 8, 1969, not long after the IMS database management system.
For the next few years, CICS was developed in Palo Alto and was considered a less important "smaller" product than IMS which IBM then considered more strategic. Customer pressure kept it alive, however. When IBM decided to end development of CICS in 1974 to concentrate on IMS, the CICS development responsibility was picked up by the IBM Hursley site in the United Kingdom, which had just ceased work on the PL/I compiler and so knew many of the same customers as CICS. The core of the development work continues in Hursley today alongside contributions from labs in India, China, Russia, Australia, and the United States.
Early evolution
CICS originally only supported a few IBM-brand devices like the 1965 IBM 2741 Selectric (golf ball) typewriter-based terminal. The 1964 IBM 2260 and 1972 IBM 3270 video display terminals were widely used later.
In the early days of IBM mainframes, computer software was free – bundled at no extra charge with computer hardware. The OS/360 operating system and application support software like CICS were "open" to IBM customers long before the open-source software initiative. Corporations like Standard Oil of Indiana (Amoco) made major contributions to CICS.
The IBM Des Plaines team tried to add support for popular non-IBM terminals like the ASCII Teletype Model 33 ASR, but the small low-budget software development team could not afford the $100-per-month hardware to test it. IBM executives incorrectly felt that the future would be like the past, with batch processing using traditional punch cards.
IBM reluctantly provided only minimal funding when public utility companies, banks and credit-card companies demanded a cost-effective interactive system (similar to the 1965 IBM Airline Control Program used by the American Airlines Sabre computer reservation system) for high-speed data access-and-update to customer information for their telephone operators (without waiting for overnight batch processing punch card systems).
When CICS was delivered to Amoco with Teletype Model 33 ASR support, it caused the entire OS/360 operating system to crash (including non-CICS application programs). The majority of the CICS Terminal Control Program (TCP – the heart of CICS) and part of OS/360 had to be laboriously redesigned and rewritten by Amoco Production Company in Tulsa, Oklahoma. It was then given back to IBM for free distribution to others.
In a few years, CICS generated over $60 billion in new hardware revenue for IBM, and became their most-successful mainframe software product.
In 1972, CICS was available in three versions – DOS-ENTRY (program number 5736-XX6) for DOS/360 machines with very limited memory, DOS-STANDARD (program number 5736-XX7), for DOS/360 machines with more memory, and OS-STANDARD V2 (program number 5734-XX7) for the larger machines which ran OS/360.
In early 1970, a number of the original developers, including Ben Riggins (the principal architect of the early releases) relocated to California and continued CICS development at IBM's Palo Alto Development Center. IBM executives did not recognize value in software as a revenue-generating product until after federal law required software unbundling. In 1980, IBM executives failed to heed Ben Riggins' strong suggestions that IBM should provide their own EBCDIC-based operating system and integrated-circuit microprocessor chip for use in the IBM Personal Computer as a CICS intelligent terminal (instead of the incompatible Intel chip, and immature ASCII-based Microsoft 1980 DOS).
Because of the limited capacity of even large processors of that era every CICS installation was required to assemble the source code for all of the CICS system modules after completing a process similar to system generation (sysgen), called CICSGEN, to establish values for conditional assembly-language statements. This process allowed each customer to exclude support from CICS itself for any feature they did not intend to use, such as device support for terminal types not in use.
CICS owes its early popularity to its relatively efficient implementation when hardware was very expensive, its multi-threaded processing architecture, its relative simplicity for developing terminal-based real-time transaction applications, and many open-source customer contributions, including both debugging and feature enhancement.
Z notation
Part of CICS was formalized using the Z notation in the 1980s and 1990s in collaboration with the Oxford University Computing Laboratory, under the leadership of Tony Hoare. This work won a Queen's Award for Technological Achievement.
CICS as a distributed file server
In 1986, IBM announced CICS support for the record-oriented file services defined by Distributed Data Management Architecture (DDM). This enabled programs on remote, network-connected computers to create, manage, and access files that had previously been available only within the CICS/MVS and CICS/VSE transaction processing environments.
In newer versions of CICS, support for DDM has been removed. Support for the DDM component of CICS z/OS was discontinued at the end of 2003, and was removed from CICS for z/OS in version 5.2 onward. In CICS TS for z/VSE, support for DDM was stabilised at V1.1.1 level, with an announced intention to discontinue it in a future release. In CICS for z/VSE 2.1 onward, CICS/DDM is not supported.
CICS and the World Wide Web
CICS Transaction Server first introduced a native HTTP interface in version 1.2, together with a Web Bridge technology for wrapping green-screen terminal-based programs with an HTML facade. CICS Web and Document APIs were enhanced in CICS TS V1.3 to enable web-aware applications to be written to interact more effectively with web browsers.
CICS TS versions 2.1 through 2.3 focused on introducing CORBA and EJB technologies to CICS, offering new ways to integrate CICS assets into distributed application component models. These technologies relied on hosting Java applications in CICS. The Java hosting environment saw numerous improvements over many releases, ultimately resulting in the embedding of the WebSphere Liberty Profile into CICS Transaction Server V5.1. Numerous web facing technologies could be hosted in CICS using Java, this ultimately resulted in the removal of the native CORBA and EJB technologies.
CICS TS V3.1 added a native implementation of the SOAP and WSDL technologies for CICS, together with client side HTTP APIs for outbound communication. These twin technologies enabled easier integration of CICS components with other Enterprise applications, and saw widespread adoption. Tools were included for taking traditional CICS programs written in languages such as COBOL, and converting them into WSDL defined Web Services, with little or no program changes. This technology saw regular enhancements over successive releases of CICS.
CICS TS V4.1 and V4.2 saw further enhancements to web connectivity, including a native implementation of the Atom publishing protocol.
Many of the newer web facing technologies were made available for earlier releases of CICS using delivery models other than a traditional product release. This allowed early adopters to provide constructive feedback that could influence the final design of the integrated technology. Examples include the Soap for CICS technology preview SupportPac for TS V2.2, or the ATOM SupportPac for TS V3.1. This approach was used to introduce JSON support for CICS TS V4.2, a technology that went on to be integrated into CICS TS V5.2.
The JSON technology in CICS is similar to earlier SOAP technology, both of which allowed programs hosted in CICS to be wrapped with a modern facade. The JSON technology was in turn enhanced in z/OS Connect Enterprise Edition, an IBM product for composing JSON APIs that can leverage assets from several mainframe subsystems.
Many partner products have also been used to interact with CICS. Popular examples include using the CICS Transaction Gateway for connecting to CICS from JCA compliant Java application servers, and IBM DataPower appliances for filtering web traffic before it reaches CICS.
Modern versions of CICS provide many ways for both existing and new software assets to be integrated into distributed application flows. CICS assets can be accessed from remote systems, and can access remote systems; user identity and transactional context can be propagated; RESTful APIs can be composed and managed; devices, users and servers can interact with CICS using standards-based technologies; and the IBM WebSphere Liberty environment in CICS promotes the rapid adoption of new technologies.
MicroCICS
By January 1985, a consulting company founded in 1969, which had built "massive on-line systems" for Hilton Hotels, FTD Florists, Amtrak, and Budget Rent-a-Car, announced what became MicroCICS. The initial focus was the IBM XT/370 and IBM AT/370.
CICS Family
Although when CICS is mentioned, people usually mean CICS Transaction Server, the CICS Family refers to a portfolio of transaction servers, connectors (called CICS Transaction Gateway) and CICS Tools.
CICS on distributed platforms—not mainframes—is called IBM TXSeries. TXSeries is distributed transaction processing middleware. It supports C, C++, COBOL, Java™ and PL/I applications in cloud environments and traditional data centers. TXSeries is available on AIX, Linux x86, Windows, Solaris, and HP-UX platforms. CICS is also available on other operating systems, notably IBM i and OS/2. The z/OS implementation (i.e., CICS Transaction Server for z/OS) is by far the most popular and significant.
Two versions of CICS were previously available for VM/CMS, but both have since been discontinued. In 1986, IBM released CICS/CMS, which was a single-user version of CICS designed for development use, the applications later being transferred to an MVS or DOS/VS system for production execution. Later, in 1988, IBM released CICS/VM. CICS/VM was intended for use on the IBM 9370, a low-end mainframe targeted at departmental use; IBM positioned CICS/VM running on departmental or branch office mainframes for use in conjunction with a central mainframe running CICS for MVS.
CICS Tools
Provisioning, management and analysis of CICS systems and applications is provided by CICS Tools. This includes performance management as well as deployment and management of CICS resources. In 2015, the four core foundational CICS tools (and the CICS Optimization Solution Pack for z/OS) were updated with the release of CICS Transaction Server for z/OS 5.3. The four core CICS Tools are: CICS Interdependency Analyzer for z/OS, CICS Deployment Assistant for z/OS, CICS Performance Analyzer for z/OS and CICS Configuration Manager for z/OS.
Releases and versions
CICS Transaction Server for z/OS has used the following release numbers:
Programming
Programming considerations
Multiple-user interactive-transaction application programs were required to be quasi-reentrant in order to support multiple concurrent transaction threads. A software coding error in one application could block all users from the system. The modular design of CICS reentrant / reusable control programs meant that, with judicious "pruning," multiple users with multiple applications could be executed on a computer with just 32K of expensive magnetic core physical memory (including the operating system).
Considerable effort was required by CICS application programmers to make their transactions as efficient as possible. A common technique was to limit the size of individual programs to no more than 4,096 bytes, or 4K, so that CICS could easily reuse the memory occupied by any program not currently in use for another program or other application storage needs. When virtual memory was added to versions of OS/360 in 1972, the 4K strategy became even more important to reduce paging and thrashing and the unproductive resource-contention overhead they caused.
The efficiency of compiled high-level COBOL and PL/I language programs left much to be desired. Many CICS application programs continued to be written in assembler language, even after COBOL and PL/I support became available.
With 1960s-and-1970s hardware resources expensive and scarce, a competitive "game" developed among system optimization analysts. When critical path code was identified, a code snippet was passed around from one analyst to another. Each person had to either (a) reduce the number of bytes of code required, or (b) reduce the number of CPU cycles required. Younger analysts learned from what more-experienced mentors did. Eventually, when no one could do (a) or (b), the code was considered optimized, and they moved on to other snippets. Small shops with only one analyst learned CICS optimization very slowly (or not at all).
Because application programs could be shared by many concurrent threads, the use of static variables embedded within a program (or use of operating system memory) was restricted (by convention only).
Unfortunately, many of the "rules" were frequently broken, especially by COBOL programmers who might not understand the internals of their programs or fail to use the necessary restrictive compile time options. This resulted in "non-re-entrant" code that was often unreliable, leading to spurious storage violations and entire CICS system crashes.
Originally, the entire partition, or Multiple Virtual Storage (MVS) region, operated with the same memory protection key including the CICS kernel code. Program corruption and CICS control block corruption was a frequent cause of system downtime. A software error in one application program could overwrite the memory (code or data) of one or all currently running application transactions. Locating the offending application code for complex transient timing errors could be a very-difficult operating-system analyst problem.
These shortcomings persisted for multiple new releases of CICS over a period of more than 20 years, in spite of their severity and the fact that top-quality CICS skills were in high demand and short supply. They were addressed in TS V3.3, V4.1 and V5.2 with the Storage Protection, Transaction Isolation and Subspace features respectively, which utilize operating system hardware features to protect the application code and the data within the same address space even though the applications were not written to be separated. CICS application transactions remain mission-critical for many public utility companies, large banks and other multibillion-dollar financial institutions.
Additionally, it is possible to provide a measure of advance application protection by performing test under control of a monitoring program that also serves to provide Test and Debug features.
Macro-level programming
When CICS was first released, it only supported application transaction programs written in IBM 360 Assembler. COBOL and PL/I support were added years later. Because of the initial assembler orientation, requests for CICS services were made using assembler-language macros. For example, a request to read a record from a file, made by a macro call to the CICS "File Control Program", might look like this:
DFHFC TYPE=READ,DATASET=myfile,TYPOPER=UPDATE,....etc.
This gave rise to the later terminology "Macro-level CICS."
When high-level language support was added, the macros were retained and the code was converted by a pre-compiler that expanded the macros to their COBOL or PL/I CALL statement equivalents. Thus preparing an HLL application was effectively a "two-stage" compile – output from the preprocessor fed into the HLL compiler as input.
COBOL considerations: unlike PL/I, IBM COBOL does not normally provide for the manipulation of pointers (addresses). In order to allow COBOL programmers to access CICS control blocks and dynamic storage the designers resorted to what was essentially a hack. The COBOL Linkage Section was normally used for inter-program communication, such as parameter passing. The compiler generates a list of addresses, each called a Base Locator for Linkage (BLL) which were set on entry to the called program. The first BLL corresponds to the first item in the Linkage Section and so on. CICS allows the programmer to access and manipulate these by passing the address of the list as the first argument to the program. The BLLs can then be dynamically set, either by CICS or by the application to allow access to the corresponding structure in the Linkage Section.
Command-level programming
During the 1980s, IBM at Hursley Park produced a version of CICS that supported what became known as "Command-level CICS" which still supported the older programs but introduced a new API style to application programs.
A typical Command-level call might look like the following:
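For example, sending a BMS map from a COBOL program might be coded as follows (MYMSET and MYMAP are illustrative names, reused in the sample map definition later in this article):

       EXEC CICS
           SEND MAPSET('MYMSET') MAP('MYMAP')
       END-EXEC.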
The values given in the SEND MAPSET command correspond to the names used on the first DFHMSD macro in the map definition given below for the MAPSET argument, and the DFHMDI macro for the MAP argument. This is pre-processed by a pre-compile batch translation stage, which converts the embedded commands (EXECs) into call statements to a stub subroutine. So, preparing application programs for later execution still required two stages. It was possible to write "Mixed mode" applications using both Macro-level and Command-level statements.
Initially, at execution time, the command-level commands were converted using a run-time translator, "The EXEC Interface Program", to the old Macro-level call, which was then executed by the mostly unchanged CICS nucleus programs. But when the CICS Kernel was re-written for TS V3, EXEC CICS became the only way to program CICS applications, as many of the underlying interfaces had changed.
Run-time conversion
The Command-level-only CICS introduced in the early 1990s offered some advantages over earlier versions of CICS. However, IBM also dropped support for Macro-level application programs written for earlier versions. This meant that many application programs had to be converted or completely rewritten to use Command-level EXEC commands only.
By this time, there were perhaps millions of programs worldwide that had been in production for decades in many cases. Rewriting them often introduced new bugs without necessarily adding new features. There were a significant number of users who ran CICS V2 application-owning regions (AORs) to continue to run macro code for many years after the change to V3.
It was also possible to execute old Macro-level programs using conversion software such as APT International's Command CICS.
New programming styles
Recent CICS Transaction Server enhancements include support for a number of modern programming styles.
CICS Transaction Server Version 5.6 introduced enhanced support for Java to deliver a cloud-native experience for Java developers. For example, the new CICS Java API (JCICSX) allows easier unit testing using mocking and stubbing approaches, and can be run remotely on the developer’s local workstation. A set of CICS artifacts on Maven Central enable developers to resolve Java dependencies using popular dependency management tools such as Apache Maven and Gradle. Plug-ins for Maven (cics-bundle-maven) and Gradle (cics-bundle-gradle) are also provided to simplify automated building of CICS bundles, using familiar IDEs like Eclipse, IntelliJ IDEA, and Visual Studio Code. In addition, Node.js z/OS support is enhanced for version 12, providing faster startup, better default heap limits, updates to the V8 JavaScript engine, etc. Support for Jakarta EE 8 is also included.
CICS TS 5.5 introduced support for IBM SDK for Node.js, providing a full JavaScript runtime, server-side APIs, and libraries to efficiently build high-performance, highly scalable network applications for IBM Z.
CICS Transaction Server Version 2.1 introduced support for Java. CICS Transaction Server Version 2.2 supported the Software Developers Toolkit. CICS provides the same run-time container as IBM's WebSphere product family, so Java EE applications are portable between CICS and WebSphere, and there is common tooling for the development and deployment of Java EE applications.
In addition, CICS placed an emphasis on "wrapping" existing application programs inside modern interfaces so that long-established business functions can be incorporated into more modern services. These include WSDL, SOAP and JSON interfaces that wrap legacy code so that a web or mobile application can obtain and update the core business objects without requiring a major rewrite of the back-end functions.
Transactions
A CICS transaction is a set of operations that perform a task together. Usually, the majority of transactions are relatively simple tasks such as requesting an inventory list or entering a debit or credit to an account. A primary characteristic of a transaction is that it should be atomic. On IBM Z servers, CICS easily supports thousands of transactions per second, making it a mainstay of enterprise computing.
CICS applications comprise transactions, which can be written in numerous programming languages, including COBOL, PL/I, C, C++, IBM Basic Assembly Language, Rexx, and Java.
Each CICS program is initiated using a transaction identifier. CICS screens are usually sent as a construct called a map, a module created with Basic Mapping Support (BMS) assembler macros or third-party tools. CICS screens may contain text that is highlighted, has different colors, and/or blinks depending on the terminal type used. An example of how a map can be sent through COBOL is given below. The end user inputs data, which is made accessible to the program by receiving a map from CICS.
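A sketch of such a send/receive pair in COBOL, again using the illustrative MYMSET/MYMAP names (MYMAPI follows the BMS convention of suffixing the map name with I for the generated symbolic input area):

           EXEC CICS
               SEND MAPSET('MYMSET') MAP('MYMAP') ERASE
           END-EXEC.

           EXEC CICS
               RECEIVE MAPSET('MYMSET') MAP('MYMAP') INTO(MYMAPI)
           END-EXEC.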
For technical reasons, the arguments to some command parameters must be quoted and some must not be quoted, depending on what is being referenced. Most programmers will code out of a reference book until they get the "hang" or concept of which arguments are quoted, or they'll typically use a "canned template" where they have example code that they just copy and paste, then edit to change the values.
Example of BMS Map Code
Basic Mapping Support defines the screen format through assembler macros such as the following. This was assembled to generate both the physical map set – a load module in a CICS load library – and a symbolic map set – a structure definition or DSECT in PL/I, COBOL, assembler, etc. which was copied into the source program.
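A minimal, illustrative mapset (the field names, positions and attributes are hypothetical, not taken from any real application):

MYMSET   DFHMSD TYPE=&SYSPARM,MODE=INOUT,LANG=COBOL,TIOAPFX=YES
MYMAP    DFHMDI SIZE=(24,80),LINE=1,COLUMN=1
         DFHMDF POS=(3,1),LENGTH=11,ATTRB=PROT,INITIAL='ACCOUNT NO:'
ACCTNO   DFHMDF POS=(3,14),LENGTH=10,ATTRB=(UNPROT,IC)
         DFHMDF POS=(3,25),LENGTH=1,ATTRB=ASKIP
         DFHMSD TYPE=FINAL
         END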
Structure
In the z/OS environment, a CICS installation comprises one or more "regions" (generally referred to as a "CICS Region"), spread across one or more z/OS system images. Although it processes interactive transactions, each CICS region is usually started as a batch address space with standard JCL statements: it's a job that runs indefinitely until shutdown. Alternatively, each CICS region may be started as a started task. Whether a batch job or a started task, CICS regions may run for days, weeks, or even months before shutting down for maintenance (MVS or CICS). Upon restart a parameter determines if the start should be "Cold" (no recovery) or "Warm"/"Emergency" (using a warm shutdown or restarting from the log after a crash). Cold starts of large CICS regions with many resources can take a long time as all the definitions are re-processed.
Installations are divided into multiple address spaces for a wide variety of reasons, such as:
application separation,
function separation,
avoiding the workload capacity limitations of a single region, or address space, or mainframe instance in the case of a z/OS SysPlex.
A typical installation consists of a number of distinct applications that make up a service. Each service usually has a number of "Terminal-Owning Regions" (TORs) that route transactions to multiple "Application-Owning Regions" (AORs), though other topologies are possible. For example, the AORs might not perform File I/O. Instead there would be a "File-Owning Region" (FOR) that performed the File I/O on behalf of transactions in the AOR – given that, at the time, a VSAM file could only support recoverable write access from one address space at a time.
But not all CICS applications use VSAM as the primary data source (or historically other single-address-space-at-a-time datastores such as CA Datacom) – many use either IMS/DB or Db2 as the database, and/or MQ as a queue manager. For all these cases, TORs can load-balance transactions to sets of AORs which then directly use the shared databases/queues. CICS supports XA two-phase commit between data stores and so transactions that spanned MQ, VSAM/RLS and Db2, for example, are possible with ACID properties.
CICS supports distributed transactions using SNA LU6.2 protocol between the address spaces which can be running on the same or different clusters. This allows ACID updates of multiple datastores by cooperating distributed applications. In practice there are issues with this if a system or communications failure occurs because the transaction disposition (backout or commit) may be in-doubt if one of the communicating nodes has not recovered. Thus the use of these facilities has never been very widespread.
Sysplex exploitation
At the time of CICS ESA V3.2, in the early 1990s, IBM faced the challenge of how to get CICS to exploit the new Sysplex mainframe line.
The Sysplex was to be based on CMOS (complementary metal-oxide-semiconductor) rather than the existing ECL (emitter-coupled logic) hardware. The cost of scaling the mainframe-unique ECL was much higher than CMOS, which was being developed by a keiretsu with high-volume use cases such as the Sony PlayStation to reduce the unit cost of each generation's CPUs. ECL was also expensive for users to run because the gate drain current produced so much heat that the CPU had to be packaged into a special module called a Thermal Conduction Module (TCM) that had inert gas pistons and needed to be plumbed with high-volume chilled water to be cooled. However, the air-cooled CMOS technology's CPU speed initially was much slower than the ECL (notably the boxes available from the mainframe-clone makers Amdahl and Hitachi). This was especially concerning to IBM in the CICS context, as almost all the largest mainframe customers were running CICS and for many of them it was the primary mainframe workload.
To achieve the same total transaction throughput on a Sysplex, multiple boxes would need to be used in parallel for each workload. However, a CICS address space, due to its quasi-reentrant application programming model, could not exploit more than about 1.5 processors on one box at the time – even with use of MVS sub-tasks. Without enhanced parallelism, customers would tend to move to IBM's competitors rather than use Sysplex as they scaled up the CICS workloads. There was considerable debate inside IBM as to whether the right approach would be to break upward compatibility for applications and move to a model like IMS/DC which was fully reentrant, or to extend the approach customers had adopted to more fully exploit a single mainframe's power – using multi-region operation (MRO).
Eventually the second path was adopted after the CICS user community was consulted. The community vehemently opposed breaking upward compatibility given that they had the prospect of Y2K to contend with at that time and did not see the value in re-writing and testing millions of lines of mainly COBOL, PL/I, or assembler code.
The IBM-recommended structure for CICS on Sysplex was that at least one CICS Terminal Owning Region was placed on each Sysplex node which dispatched transactions to many Application Owning Regions (AORs) spread across the entire Sysplex. If these applications needed to access shared resources, they either used a Sysplex-exploiting datastore (such as IBM Db2 or IMS/DB) or concentrated, by function-shipping, the resource requests into singular-per-resource Resource Owning Regions (RORs) including File Owning Regions (FORs) for VSAM and CICS Data Tables, Queue Owning Regions (QORs) for MQ, CICS Transient Data (TD) and CICS Temporary Storage (TS). This preserved compatibility for legacy applications at the expense of the operational complexity of configuring and managing many CICS regions.
In subsequent releases and versions, CICS was able to exploit new Sysplex-exploiting facilities in VSAM/RLS and MQ for z/OS, and placed its own Data Tables, TD, and TS resources into the architected shared resource manager for the Sysplex, the Coupling Facility (CF), dispensing with the need for most RORs. The CF provides a mapped view of resources including a shared timebase, buffer pools, locks and counters with hardware messaging assists that made sharing resources across the Sysplex both more efficient than polling and reliable (utilizing a semi-synchronized backup CF for use in case of failure).
By this time, the CMOS line had individual boxes that exceeded the power available by the fastest ECL box with more processors per CPU. When these were coupled together, 32 or more nodes would be able to scale two orders of magnitude greater in total power for a single workload. For example, by 2002, Charles Schwab was running a "MetroPlex" consisting of a redundant pair of mainframe Sysplexes in two locations in Phoenix, AZ, each with 32 nodes driven by one shared CICS/DB/2 workload to support the vast volume of pre-dotcom-bubble web client inquiry requests.
This cheaper, much more scalable CMOS technology base, and the huge investment costs of having to both get to 64-bit addressing and independently produce cloned CF functionality drove the IBM-mainframe clone makers out of the business one by one.
CICS Recovery/Restart
The objective of recovery/restart in CICS is to minimize and, if possible, eliminate damage done to the online system when a failure occurs, so that system and data integrity is maintained. If the CICS region was shut down rather than failing, it will perform a "Warm" start, exploiting the checkpoint written at shutdown. The CICS region can also be forced to "Cold" start, which reloads all definitions and wipes out the log, leaving the resources in whatever state they are in.
Under CICS, the following are some of the resources that are considered recoverable. If one wishes these resources to be recoverable, special options must be specified in the relevant CICS definitions:
VSAM files
CICS-maintained data tables (CMT)
Intrapartition TDQ
Temporary Storage Queue in auxiliary storage
I/O messages from/to transactions in a VTAM network
Other database/queuing resources connected to CICS that support XA two-phase commit protocol (like IMS/DB, Db2, VSAM/RLS)
CICS also offers extensive recovery/restart facilities for users to establish their own recovery/restart capability in their CICS system. Commonly used recovery/restart facilities include:
Dynamic Transaction Backout (DTB)
Automatic Transaction Restart
Resource Recovery using System Log
Resource Recovery using Journal
System Restart
Extended Recovery Facility
Components
Each CICS region comprises one major task on which every transaction runs, although certain services such as access to IBM Db2 data use other tasks (TCBs). Within a region, transactions are cooperatively multitasked – they are expected to be well-behaved and yield the CPU rather than wait. CICS services handle this automatically.
Each unique CICS "Task" or transaction is allocated its own dynamic memory at start-up, and subsequent requests for additional memory are handled by a call to the "Storage Control program" (part of the CICS nucleus or "kernel"), which is analogous to an operating system.
A CICS system consists of the online nucleus, batch support programs, and applications services.
Nucleus
The original CICS nucleus consisted of a number of functional modules written in 370 assembler until V3:
Task Control Program (KCP)
Storage Control Program (SCP)
Program Control Program (PCP)
Program Interrupt Control Program (PIP)
Interval Control Program (ICP)
Dump Control Program (DCP)
Terminal Control Program (TCP)
File Control Program (FCP)
Transient Data Control Program (TDP)
Temporary Storage Control Program (TSP)
Starting in V3, the CICS nucleus was rewritten into a kernel-and-domain structure using IBM's PL/AS language – which is compiled into assembler.
The prior structure did not enforce separation of concerns and so had many inter-program dependencies which led to bugs unless exhaustive code analysis was done. The new structure was more modular and so resilient because it was easier to change without impact. The first domains were often built with the name of the prior program but without the trailing "P". For example, Program Control Domain (DFHPC) or Transient Data Domain (DFHTD). The kernel operated as a switcher for inter-domain requests – initially this proved expensive for frequently called domains (such as Trace) but by utilizing PL/AS macros these calls were in-lined without compromising on the separate domain design.
In later versions, completely redesigned domains were added like the Logging Domain DFHLG and Transaction Domain DFHTM that replaced the Journal Control Program (JCP).
Support programs
In addition to the online functions CICS has several support programs that run as batch jobs. : pp.34–35
High-level language (macro) preprocessor
Command language translator
Dump utility – prints formatted dumps generated by CICS Dump Management
Trace utility – formats and prints CICS trace output
Journal formatting utility – formats and prints the contents of CICS journals
Applications services
The following components of CICS support application development.: pp.35–37
Basic Mapping Support (BMS) provides device-independent terminal input and output
APPC Support that provides LU6.1 and LU6.2 API support for collaborating distributed applications that support two-phase commit
Data Interchange Program (DIP) provides support for IBM 3770 and IBM 3790 programmable devices
2260 Compatibility allows programs written for IBM 2260 display devices to run on 3270 displays
EXEC Interface Program – the stub program that converts calls generated by EXEC CICS commands to calls to CICS functions
Built-in Functions – table search, phonetic conversion, field verify, field edit, bit checking, input formatting, weighted retrieval
Pronunciation
Different countries have differing pronunciations:
Within IBM (specifically Tivoli) it is referred to as .
In the US, it is more usually pronounced by reciting each letter .
In Australia, Belgium, Canada, Hong Kong, the UK and some other countries, it is pronounced .
In Denmark, it is pronounced kicks.
In Finland, it is pronounced [kiks].
In France, it is pronounced [se.i.se.ɛs].
In Germany, Austria and Hungary, it is pronounced [ˈtsɪks] and, less often, [ˈkɪks].
In Greece, it is pronounced kiks.
In India, it is pronounced kicks.
In Iran, it is pronounced kicks.
In Italy, it is pronounced [ˈtʃiks].
In Poland, it is pronounced [ˈkʲiks].
In Portugal and Brazil, it is pronounced [ˈsiks].
In Russia, it is pronounced kiks.
In Slovenia, it is pronounced kiks.
In Spain, it is pronounced [ˈθiks].
In Sweden, it is pronounced kicks.
In Uganda, it is pronounced kicks.
In Turkey, it is pronounced kiks.
See also
IBM TXSeries (CICS on distributed platforms)
IBM WebSphere
IBM 2741
IBM 2260
IBM 3270
OS/360 and successors
Transaction Processing Facility
Virtual Storage Access Method (VSAM)
Official website
Why to choose CICS Transaction Server for new IT projects – IBM CICS whitepaper
IBM Software - CICS - 35 year Anniversary (2004) at the Wayback Machine (archived February 4, 2009)
Support Forum for CICS Programming
CICS User Community website for CICS related news, announcements and discussions Archived 5 August 2008 at the Wayback Machine
Bob Yelavich's CICS focused website. (This site uses frames, but on high-resolution screens the left-hand frame, which contains the site index, may be hidden. Scroll right within the frame to see its content.) at the Wayback Machine (archived February 5, 2005) |
Cilk, Cilk++, Cilk Plus and OpenCilk are general-purpose programming languages designed for multithreaded parallel computing. They are based on the C and C++ programming languages, which they extend with constructs to express parallel loops and the fork–join idiom.
Originally developed in the 1990s at the Massachusetts Institute of Technology (MIT) in the group of Charles E. Leiserson, Cilk was later commercialized as Cilk++ by a spinoff company, Cilk Arts. That company was subsequently acquired by Intel, which increased compatibility with existing C and C++ code, calling the result Cilk Plus. After Intel stopped supporting Cilk Plus in 2017, MIT is again developing Cilk in the form of OpenCilk.
History
MIT Cilk
The Cilk programming language grew out of three separate projects at the MIT Laboratory for Computer Science:
Theoretical work on scheduling multi-threaded applications.
StarTech – a parallel chess program built to run on the Thinking Machines Corporation's Connection Machine model CM-5.
PCM/Threaded-C – a C-based package for scheduling continuation-passing-style threads on the CM-5
In April 1994 the three projects were combined and christened "Cilk". The name Cilk is not an acronym, but an allusion to "nice threads" (silk) and the C programming language. The Cilk-1 compiler was released in September 1994.
The original Cilk language was based on ANSI C, with the addition of Cilk-specific keywords to signal parallelism. When the Cilk keywords are removed from Cilk source code, the result should always be a valid C program, called the serial elision (or C elision) of the full Cilk program, with the same semantics as the Cilk program running on a single processor. Despite several similarities, Cilk is not directly related to AT&T Bell Labs' Concurrent C.
Cilk was implemented as a translator to C, targeting the GNU C Compiler (GCC). The last version, Cilk 5.4.6, is available from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), but is no longer supported.
A showcase for Cilk's capabilities was the Cilkchess parallel chess-playing program, which won several computer chess prizes in the 1990s, including the 1996 Open Dutch Computer Chess Championship.
Cilk Arts and Cilk++
Prior to c. 2006, the market for Cilk was restricted to high-performance computing. The emergence of multicore processors in mainstream computing meant that hundreds of millions of new parallel computers were being shipped every year. Cilk Arts was formed to capitalize on that opportunity: in 2006, Leiserson launched Cilk Arts to create and bring to market a modern version of Cilk that supports the commercial needs of an upcoming generation of programmers. The company closed a Series A venture financing round in October 2007, and its product, Cilk++ 1.0, shipped in December, 2008.
Cilk++ differs from Cilk in several ways: support for C++, support for loops, and hyperobjects – a new construct designed to solve data race problems created by parallel accesses to global variables. Cilk++ was proprietary software. Like its predecessor, it was implemented as a Cilk-to-C++ compiler. It supported the Microsoft and GNU compilers.
Intel Cilk Plus
On July 31, 2009, Cilk Arts announced on its web site that its products and engineering team were now part of Intel Corp. In early 2010, the Cilk website at www.cilk.com began redirecting to the Intel website (as of early 2017, the original Cilk website no longer resolves to a host). Intel and Cilk Arts integrated and advanced the technology further, resulting in a September 2010 release of Intel Cilk Plus. Cilk Plus adopts simplifications, proposed by Cilk Arts in Cilk++, to eliminate the need for several of the original Cilk keywords while adding the ability to spawn functions and to deal with variables involved in reduction operations. Cilk Plus differs from Cilk and Cilk++ by adding array extensions, being incorporated in a commercial compiler (from Intel), and compatibility with existing debuggers.
Cilk Plus was first implemented in the Intel C++ Compiler with the release of the Intel compiler in Intel Composer XE 2010. An open source (BSD-licensed) implementation was contributed by Intel to the GNU Compiler Collection (GCC), which shipped Cilk Plus support in version 4.9, except for the _Cilk_for keyword, which was added in GCC 5.0. In February 2013, Intel announced a Clang fork with Cilk Plus support. The Intel Compiler, but not the open source implementations, comes with a race detector and a performance analyzer.
Intel later discontinued it, recommending its users switch to instead using either OpenMP or Intel's own TBB library for their parallel programming needs.
Differences between versions
In the original MIT Cilk implementation, the first Cilk keyword is in fact cilk, which identifies a function which is written in Cilk. Since Cilk procedures can call C procedures directly, but C procedures cannot directly call or spawn Cilk procedures, this keyword is needed to distinguish Cilk code from C code. Cilk Plus removes this restriction, as well as the cilk keyword, so C and C++ functions can call into Cilk Plus code and vice versa.
Deprecation of Cilk Plus
In May, 2017, GCC 7.1 was released and marked Cilk Plus support as deprecated. Intel itself announced in September 2017 that they would deprecate Cilk Plus with the 2018 release of the Intel Software Development Tools. In May 2018, GCC 8.1 was released with Cilk Plus support removed.
OpenCilk
After Cilk Plus support was deprecated by Intel, MIT has taken on the development of Cilk in the OpenCilk implementation, focusing on the LLVM/Clang fork now termed "Tapir". OpenCilk remains largely compatible with Intel Cilk Plus. Its first stable version was released in March 2021.
Language features
The principle behind the design of the Cilk language is that the programmer should be responsible for exposing the parallelism, identifying elements that can safely be executed in parallel; it should then be left to the run-time environment, particularly the scheduler, to decide during execution how to actually divide the work between processors. It is because these responsibilities are separated that a Cilk program can run without rewriting on any number of processors, including one.
Task parallelism: spawn and sync
Cilk's main addition to C are two keywords that together allow writing task-parallel programs.
The spawn keyword, when preceding a function call (spawn f(x)), indicates that the function call (f(x)) can safely run in parallel with the statements following it in the calling function. Note that the scheduler is not obligated to run this procedure in parallel; the keyword merely alerts the scheduler that it can do so.
A sync statement indicates that execution of the current function cannot proceed until all previously spawned function calls have completed. This is an example of a barrier method.
(In Cilk Plus, the keywords are spelled _Cilk_spawn and _Cilk_sync, or cilk_spawn and cilk_sync if the Cilk Plus headers are included.)
Below is a recursive implementation of the Fibonacci function in Cilk, with parallel recursive calls, which demonstrates the spawn, and sync keywords. The original Cilk required any function using these to be annotated with the cilk keyword, which is gone as of Cilk Plus. (Cilk program code is not numbered; the numbers have been added only to make the discussion easier to follow.)
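A sketch of such a function, with line numbers chosen to match the discussion that follows:

 1  cilk int fib(int n)
 2  {
 3      if (n < 2)
 4          return n;
 5      else {
 6          int x, y;
 7
 8          x = spawn fib(n - 1);
 9          y = spawn fib(n - 2);
10
11          sync;
12
13          return x + y;
14      }
15  }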
If this code was executed by a single processor to determine the value of fib(2), that processor would create a frame for fib(2), and execute lines 1 through 5. On line 6, it would create spaces in the frame to hold the values of x and y. On line 8, the processor would have to suspend the current frame, create a new frame to execute the procedure fib(1), execute the code of that frame until reaching a return statement, and then resume the fib(2) frame with the value of fib(1) placed into fib(2)'s x variable. On the next line, it would need to suspend again to execute fib(0) and place the result in fib(2)'s y variable.
When the code is executed on a multiprocessor machine, however, execution proceeds differently. One processor starts the execution of fib(2); when it reaches line 8, however, the spawn keyword modifying the call to fib(n-1) tells the processor that it can safely give the job to a second processor: this second processor can create a frame for fib(1), execute its code, and store its result in fib(2)'s frame when it finishes; the first processor continues executing the code of fib(2) at the same time. A processor is not obligated to assign a spawned procedure elsewhere; if the machine only has two processors and the second is still busy on fib(1) when the processor executing fib(2) gets to the procedure call, the first processor will suspend fib(2) and execute fib(0) itself, as it would if it were the only processor. Of course, if another processor is available, then it will be called into service, and all three processors would be executing separate frames simultaneously.
(The preceding description is not entirely accurate. Even though the common terminology for discussing Cilk refers to processors making the decision to spawn off work to other processors, it is actually the scheduler which assigns procedures to processors for execution, using a policy called work-stealing, described later.)
If the processor executing fib(2) were to execute line 13 before both of the other processors had completed their frames, it would generate an incorrect result or an error; fib(2) would be trying to add the values stored in x and y, but one or both of those values would be missing. This is the purpose of the sync keyword, which we see in line 11: it tells the processor executing a frame that it must suspend its own execution until all the procedure calls it has spawned off have returned. When fib(2) is allowed to proceed past the sync statement in line 11, it can only be because fib(1) and fib(0) have completed and placed their results in x and y, making it safe to perform calculations on those results.
The code example above uses the syntax of Cilk-5. The original Cilk (Cilk-1) used a rather different syntax that required programming in an explicit continuation-passing style, and the Fibonacci example looks as follows:
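A sketch of how it might have looked (the exact Cilk-1 syntax is paraphrased from published descriptions; ?x and ?y mark the continuation slots to be filled in by the spawned calls):

thread fib(cont int k, int n)
{
    if (n < 2)
        send_argument(k, n);
    else {
        cont int x, y;
        spawn_next sum(k, ?x, ?y);
        spawn fib(x, n - 1);
        spawn fib(y, n - 2);
    }
}

thread sum(cont int k, int x, int y)
{
    send_argument(k, x + y);
}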
Inside fib's recursive case, the spawn_next keyword indicates the creation of a successor thread (as opposed to the child threads created by spawn), which executes the sum subroutine after waiting for the continuation variables x and y to be filled in by the recursive calls. The base case and sum use a send_argument(k, n) operation to set their continuation variable k to the value of n, effectively "returning" the value to the successor thread.
Inlets
The two remaining Cilk keywords are slightly more advanced, and concern the use of inlets. Ordinarily, when a Cilk procedure is spawned, it can return its results to the parent procedure only by putting those results in a variable in the parent's frame, as we assigned the results of our spawned procedure calls in the example to x and y.
The alternative is to use an inlet. An inlet is a function internal to a Cilk procedure which handles the results of a spawned procedure call as they return. One major reason to use inlets is that all the inlets of a procedure are guaranteed to operate atomically with regards to each other and to the parent procedure, thus avoiding the bugs that could occur if the multiple returning procedures tried to update the same variables in the parent frame at the same time.
The inlet keyword identifies a function defined within the procedure as an inlet.
The abort keyword can only be used inside an inlet; it tells the scheduler that any other procedures that have been spawned off by the parent procedure can safely be aborted.
Inlets were removed when Cilk became Cilk++, and are not present in Cilk Plus.
Parallel loops
Cilk++ added a further construct, the parallel loop, denoted cilk_for in Cilk Plus. These loops look like:
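The sketch below is illustrative; the function f, the array a, and the grain size of 100 are the placeholders referred to in the description that follows.
void loop(int *a, int n)
{
    #pragma cilk grainsize = 100   // optional coarsening hint
    cilk_for (int i = 0; i < n; i++) {
        a[i] = f(a[i]);
    }
}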
This implements the parallel map idiom: the body of the loop, here a call to f followed by an assignment to the array a, is executed for each value of i from zero to n in an indeterminate order. The optional "grain size" pragma determines the coarsening: any sub-array of one hundred or fewer elements is processed sequentially. Although the Cilk specification does not specify the exact behavior of the construct, the typical implementation is a divide-and-conquer recursion, as if the programmer had written
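A hedged sketch of that divide-and-conquer expansion, again with the placeholder f and a grain size of 100:
static void recursion(int *a, int start, int end)
{
    if (end - start <= 100) {              // at or below the grain size: run sequentially
        for (int i = start; i < end; i++)
            a[i] = f(a[i]);
    } else {                               // otherwise split the range and recurse in parallel
        int midpoint = start + (end - start) / 2;
        cilk_spawn recursion(a, start, midpoint);
        recursion(a, midpoint, end);
        cilk_sync;
    }
}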
The reasons for generating a divide-and-conquer program rather than the obvious alternative, a loop that spawn-calls the loop body as a function, lie in both the grain-size handling and in efficiency: doing all the spawning in a single task makes load balancing a bottleneck.
A review of various parallel loop constructs on HPCwire found the cilk_for construct to be quite general, but noted that the Cilk Plus specification did not stipulate that its iterations need to be data-independent, so a compiler cannot automatically vectorize a cilk_for loop. The review also noted that reductions (e.g., sums over arrays) need additional code.
Reducers and hyperobjects
Cilk++ added a kind of object called a hyperobject, which allows multiple strands to share state without race conditions and without using explicit locks. Each strand has a view on the hyperobject that it can use and update; when the strands synchronize, the views are combined in a way specified by the programmer.
The most common type of hyperobject is a reducer, which corresponds to the reduction clause in OpenMP or to the algebraic notion of a monoid. Each reducer has an identity element and an associative operation that combines two values. The archetypal reducer is summation of numbers: the identity element is zero, and the associative reduce operation computes a sum. This reducer is built into Cilk++ and Cilk Plus:
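A minimal sketch of a summation reducer using the Cilk Plus C++ interface (the function and array names are illustrative):
#include <cilk/cilk.h>
#include <cilk/reducer_opadd.h>

// Sums an array in parallel; each strand updates its own view,
// and the views are combined with + when the strands synchronize.
long parallel_sum(const int *a, int n)
{
    cilk::reducer_opadd<long> total(0);
    cilk_for (int i = 0; i < n; i++)
        total += a[i];
    return total.get_value();
}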
Other reducers can be used to construct linked lists or strings, and programmers can define custom reducers.
A limitation of hyperobjects is that they provide only limited determinacy. Burckhardt et al. point out that even the sum reducer can result in non-deterministic behavior, showing a program that may produce either 1 or 2 depending on the scheduling order:
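The sketch below is in the spirit of their example rather than their exact code: the value printed depends on whether the spawned continuation was stolen and therefore given a fresh identity view of the reducer.
#include <cilk/cilk.h>
#include <cilk/reducer_opadd.h>
#include <cstdio>

static void bump(cilk::reducer_opadd<int> &r) { r += 1; }

int main()
{
    cilk::reducer_opadd<int> r(0);
    cilk_spawn bump(r);
    if (r.get_value() == 0)    // sees 0 only if the continuation was stolen
        r += 1;
    cilk_sync;
    std::printf("%d\n", r.get_value());   // prints 1 or 2
    return 0;
}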
Array notation
Intel Cilk Plus adds notation to express high-level operations on entire arrays or sections of arrays. For example, an axpy-style function is ordinarily written as an explicit loop, sketched here:
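// Illustrative sketch: y <- alpha*x + y, one element at a time.
void axpy(int n, float alpha, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] += alpha * x[i];
}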
In Cilk Plus, the same operation can be expressed as
y[0:n] += alpha * x[0:n];
This notation helps the compiler to effectively vectorize the application. Intel Cilk Plus allows C/C++ operations to be applied to multiple array elements in parallel, and also provides a set of built-in functions that can be used to perform vectorized shifts, rotates, and reductions. Similar functionality exists in Fortran 90; Cilk Plus differs in that it never allocates temporary arrays, so memory usage is easier to predict.
Elemental functions
In Cilk Plus, an elemental function is a regular function which can be invoked either on scalar arguments or on array elements in parallel. They are similar to the kernel functions of OpenCL.
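A hedged sketch; the exact spelling of the annotation varies by compiler, e.g. __declspec(vector) with the Intel compiler or __attribute__((vector)) with GCC's Cilk Plus branch:
__declspec(vector)
float scaled_sum(float a, float b)
{
    return 2.0f * a + b;     // scalar body, but invokable across whole array sections
}

// Invoked element-wise over array sections, e.g.:
//   c[0:n] = scaled_sum(a[0:n], b[0:n]);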
pragma simd
This pragma gives the compiler permission to vectorize a loop even in cases where auto-vectorization might fail. It is the simplest way to manually apply vectorization.
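For example, a minimal sketch:
#pragma simd
for (int i = 0; i < n; i++)
    a[i] += b[i] * c[i];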
Work-stealing
The Cilk scheduler uses a policy called "work-stealing" to divide procedure execution efficiently among multiple processors. Again, it is easiest to understand if we look first at how Cilk code is executed on a single-processor machine.
The processor maintains a stack on which it places each frame that it has to suspend in order to handle a procedure call. If it is executing fib(2), and encounters a recursive call to fib(1), it will save fib(2)'s state, including its variables and where the code suspended execution, and put that state on the stack. It will not take a suspended state off the stack and resume execution until the procedure call that caused the suspension, and any procedures called in turn by that procedure, have all been fully executed.
With multiple processors, things of course change. Each processor still has a stack for storing frames whose execution has been suspended; however, these stacks are more like deques, in that suspended states can be removed from either end. A processor can still only remove states from its own stack from the same end that it puts them on; however, any processor which is not currently working (having finished its own work, or not yet having been assigned any) will pick another processor at random, through the scheduler, and try to "steal" work from the opposite end of their stack – suspended states, which the stealing processor can then begin to execute. The states which get stolen are the states that the processor stolen from would get around to executing last.
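The deque discipline described above can be sketched as follows. This is a conceptual illustration only; it uses a mutex for brevity, whereas the real Cilk scheduler relies on a carefully engineered lock-free protocol.
#include <deque>
#include <mutex>
#include <optional>

// One worker's deque of suspended frames (conceptual sketch).
template <typename Frame>
class WorkDeque {
    std::deque<Frame> frames;
    std::mutex m;
public:
    // The owning worker pushes and pops at one end...
    void push(Frame f) {
        std::lock_guard<std::mutex> g(m);
        frames.push_back(std::move(f));
    }
    std::optional<Frame> pop() {
        std::lock_guard<std::mutex> g(m);
        if (frames.empty()) return std::nullopt;
        Frame f = std::move(frames.back());
        frames.pop_back();
        return f;
    }
    // ...while an idle worker steals the oldest suspended frame from the other end.
    std::optional<Frame> steal() {
        std::lock_guard<std::mutex> g(m);
        if (frames.empty()) return std::nullopt;
        Frame f = std::move(frames.front());
        frames.pop_front();
        return f;
    }
};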
See also
Grand Central Dispatch
Intel Concurrent Collections (CnC)
Intel Parallel Building Blocks (PBB)
Intel Array Building Blocks (ArBB)
Intel Parallel Studio
NESL
OpenMP
Parallel computing
Sieve C++ Parallel Programming System
Threading Building Blocks (TBB)
Unified Parallel C
Official website for OpenCilk
Intel's Cilk Plus website
Cilk Project website at MIT
Arch D. Robison, "Cilk Plus: Language Support for Thread and Vector Parallelism" and "Parallel Programming with Cilk Plus", July 16, 2012.
A clipper was a type of mid-19th-century merchant sailing vessel, designed for speed. Clippers were generally narrow for their length, small by later 19th-century standards, could carry limited bulk freight, and had a large total sail area. "Clipper" does not refer to a specific sailplan; clippers may be schooners, brigs, brigantines, etc., as well as full-rigged ships. Clippers were mostly constructed in British and American shipyards, although France, Brazil, the Netherlands, and other nations also produced some. Clippers sailed all over the world, primarily on the trade routes between the United Kingdom and China, in transatlantic trade, and on the New York-to-San Francisco route around Cape Horn during the California Gold Rush. Dutch clippers were built beginning in the 1850s for the tea trade and passenger service to Java.
The boom years of the clipper era began in 1843 in response to a growing demand for faster delivery of tea from China and continued with the demand for swift passage to gold fields in California and Australia beginning in 1848 and 1851, respectively. The era ended with the opening of the Suez Canal in 1869.
Origin and usage of "clipper"
The term "clipper" most likely derives from the verb "clip", which in former times meant, among other things, to run or fly swiftly. Dryden, the English poet, used the word "clip" to describe the swift flight of a falcon in the 17th century when he said, "And, with her eagerness the quarry missed, Straight flies at check, and clips it down the wind." The ships appeared to clip along the ocean water. The term "clip" became synonymous with "speed" and was also applied to fast horses and sailing ships. "To clip it", and "going at a good clip", are remaining expressions.
The first application of the term "clipper", in a nautical sense, is uncertain. At first, fast sailing vessels were referred to as "Virginia-built" or "pilot-boat model", with the name "Baltimore-built" appearing during the War of 1812. The term "Baltimore clipper" only became common in the final days of the slave trade (circa 1835–1850), just as the type was dying out, even though the type itself had been current from the last quarter of the 18th century through the first half of the 19th century. The retrospective application of the word "clipper" to these vessels has caused confusion.
The Oxford English Dictionary's earliest quote (referring to the Baltimore clipper) is from 1824. The dictionary cites Royal Navy officer and novelist Frederick Marryat as using the term in 1830. British newspaper usage of the term can be found as early as 1832 and in shipping advertisements from 1835. Evidence in a US court case of 1834 discusses a clipper being faster than a brig.
Definitions
A clipper is a sailing vessel designed for speed, a priority that takes precedence over cargo-carrying capacity or building or operating costs. It is not restricted to any one rig (while many were fully rigged ships, others were barques, brigs, or schooners), nor was the term restricted to any one hull type. Howard Chapelle lists three basic hull types for clippers. The first was characterised by the sharp deadrise and ends found in the Baltimore clipper. The second was a hull with a full midsection and modest deadrise, but sharp ends – this was a development of the hull form of transatlantic packets. The third was more experimental, with deadrise and sharpness being balanced against the need to carry a profitable quantity of cargo. A clipper carried a large sail area and a fast hull; by the standards of any other type of sailing ship, a clipper was greatly over-canvassed. The last defining feature of a clipper, in the view of maritime historian David MacGregor, was a captain who had the courage, skill, and determination to get the fastest speed possible out of her.: 16–21 : 321–322
In assessing the hull of a clipper, different maritime historians use different criteria to measure "sharpness", "fine lines" or "fineness", a concept which is explained by comparing a rectangular cuboid with the underwater shape of a vessel's hull. The more material one has to carve off the cuboid to achieve the hull shape, the sharper the hull. Ideally, a maritime historian would be able to look at either the block coefficient of fineness or the prismatic coefficient of various clippers, but measured drawings or accurate half models may not exist to calculate either of these figures.: 43–45
An alternative measure of sharpness for hulls of a broadly similar shape is the coefficient of underdeck tonnage, as used by David MacGregor in comparing tea clippers. This could be calculated from the measurements taken to determine the registered tonnage, so can be applied to more vessels.: 87–88
An extreme clipper has a hull of great fineness, as judged either by the prismatic coefficient, the coefficient of underdeck tonnage, or some other technical assessment of hull shape. This term has been misapplied in the past, without reference to hull shape. As commercial vessels, these are totally reliant on speed to generate a profit for their owners, as their sharpness limits their cargo-carrying capacity.
A medium clipper has a cargo-carrying hull that has some sharpness. In the right conditions and with a capable captain, some of these achieved notable quick passages. They were also able to pay their way when the high freight rates often paid to a fast sailing ship were not available (in a fluctuating market).
The term "clipper" applied to vessels between these two categories. They often made passages as fast as extreme clippers, but had less difficulty in making a living when freight rates were lower.: 16
History
The first ships to which the term "clipper" seems to have been applied were the Baltimore clippers, which were developed in the Chesapeake Bay before the American Revolution and reached their zenith between 1795 and 1815. They were small, rarely exceeding 200 tons OM. Their hulls were sharp-ended and displayed much deadrise. They were rigged as schooners, brigs, or brigantines.
In the War of 1812, some were lightly armed, sailing under letters of marque and reprisal, when the type – exemplified by Chasseur, launched at Fells Point, Baltimore in 1814 – became known for its incredible speed; the deep draft enabled the Baltimore clipper to sail close to the wind. Clippers, running the British blockade of Baltimore, came to be recognized for speed rather than cargo space.
The type existed as early as 1780. A 1789 drawing of HMS Berbice (1780) – purchased by the Royal Navy in 1780 in the West Indies – represents the earliest draught of what became known as the Baltimore clipper.
Vessels of the Baltimore clipper type continued to be built for the slave trade, being useful for escaping enforcement of the British and American legislation prohibiting the trans-Atlantic slave trade.: 308 Some of these Baltimore clippers were captured when working as slavers, condemned by the appropriate court, and sold to owners who then used them as opium clippers – moving from one illegal international trade to another.: 91
Ann McKim, built in Baltimore in 1833 by the Kennard & Williamson shipyard, is considered by some to be the original clipper ship. (Maritime historians Howard I. Chapelle and David MacGregor decry the concept of the "first" clipper, preferring a more evolutionary, multiple-step development of the type.: 72 ) She measured 494 tons OM, and was built on the enlarged lines of a Baltimore clipper, with sharply raked stem, counter stern, and square rig. Although Ann McKim was the first large clipper ship ever constructed, she cannot be said to have founded the clipper ship era, or even to have directly influenced shipbuilders, since no other ship was built like her, but she may have suggested the clipper design in vessels of ship rig. She did, however, influence the building of Rainbow in 1845, the first extreme clipper ship.
In Aberdeen, Scotland, shipbuilders Alexander Hall and Sons developed the "Aberdeen" clipper bow in the late 1830s; the first was Scottish Maid, launched in 1839. Scottish Maid, 150 tons OM, was the first British clipper ship. "Scottish Maid was intended for the Aberdeen-London trade, where speed was crucial to compete with steamships. The Hall brothers tested various hulls in a water tank and found the clipper design most effective. The design was influenced by tonnage regulations. Tonnage measured a ship's cargo capacity and was used to calculate tax and harbour dues. The new 1836 regulations measured depth and breadth with length measured at half midship depth. Extra length above this level was tax-free and became a feature of clippers. Scottish Maid proved swift and reliable and the design was widely copied." The earliest British clipper ships were built for trade within the British Isles (Scottish Maid was built for the Aberdeen to London trade). Then followed the vast clipper trade of tea, opium, spices, and other goods from the Far East to Europe, and the ships became known as "tea clippers".
From 1839, larger American clipper ships started to be built beginning with Akbar, 650 tons OM, in 1839, and including the 1844-built Houqua, 581 tons OM. These larger vessels were built predominantly for use in the China tea trade and known as "tea clippers".
Then in 1845 Rainbow, 757 tons OM, the first extreme clipper, was launched in New York. These American clippers were larger vessels designed to sacrifice cargo capacity for speed. They had a bow lengthened above the water, a drawing out and sharpening of the forward body, and the greatest breadth further aft. Extreme clippers were built in the period 1845 to 1855.
In 1851, shipbuilders in Medford, Massachusetts, built what is sometimes called one of the first medium clippers, the Antelope, often called the Antelope of Boston to distinguish her from other ships of the same name. A contemporary ship-design journalist noted that "the design of her model was to combine large stowage capacity with good sailing qualities." Antelope was relatively flat-floored and had only an 8-inch deadrise at half-floor.
The medium clipper, though still very fast, could carry more cargo. After 1854, extreme clippers were replaced in American shipbuilding yards by medium clippers.
The Flying Cloud was a clipper ship built in 1851 that established the fastest passage between New York and San Francisco within weeks of her launching, then broke her own record three years later; that record of 89 days 8 hours stood until 1989. (The other contender for this "blue ribbon" title was the medium clipper Andrew Jackson – an unresolvable argument exists over timing these voyages "from pilot to pilot").: 60–61 Flying Cloud was the most famous of the clippers built by Donald McKay. She was known for her extremely close race with the Hornet in 1853; for having a woman navigator, Eleanor Creesy, wife of Josiah Perkins Creesy, who skippered the Flying Cloud on two record-setting voyages from New York to San Francisco; and for sailing in the Australia and timber trades.
Clipper ships largely ceased being built in American shipyards in 1859 when, unlike the earlier boom years, only four clipper ships were built; a few were built in the 1860s. The last American clipper ship was the Pilgrim, launched in 1873 from the shipyards of Medford, Massachusetts, built by Joshua T. Foster. Among shipowners of the day, "Medford-built" came to mean the best.
British clipper ships continued to be built after 1859. From 1859, a new design was developed for British clipper ships that was nothing like the American clippers; these ships continued to be called extreme clippers. The new design had a sleek, graceful appearance, less sheer, less freeboard, lower bulwarks, and smaller breadth. They were built for the China tea trade, starting with Falcon in 1859, and continuing until 1870. The earlier ships were made from wood, though some were made from iron, just as some British clippers had been made from iron prior to 1859. In 1863, the first tea clippers of composite construction were brought out, combining the best of both worlds. Composite clippers had the strength of an iron hull framework but with wooden planking that, with properly insulated fastenings, could use copper sheathing without the problem of galvanic corrosion. Copper sheathing prevented fouling and teredo worm, but could not be used on iron hulls. The iron framework of composite clippers was less bulky and lighter, so allowing more cargo in a hull of the same external shape.: 84–88 After 1869, with the opening of the Suez Canal that greatly advantaged steam vessels (see Decline below), the tea trade collapsed for clippers. From the late 1860s until the early 1870s, the clipper trade increasingly focused on the Britain to Australia and New Zealand route, carrying goods and immigrants, services that had begun earlier with the Australian Gold Rush of the 1850s. British-built clipper ships and many American-built, British-owned ships were used. Even in the 1880s, sailing ships were still the main carriers of cargo between Britain, and Australia and New Zealand. This trade eventually became unprofitable, and the ageing clipper fleet became unseaworthy.
Opium clippers
Before the early 18th century, the East India Company paid for its tea mainly in silver. When the Chinese emperor chose to embargo European-manufactured commodities and demand payment for all Chinese goods in silver, the price rose, restricting trade. The East India Company began to produce opium in India, something desired by the Chinese as much as tea was by the British. This had to be smuggled into China on smaller, fast-sailing ships, called "opium clippers".: 9, 34 Some of these were built specifically for the purpose – mostly in India and Britain, such as the 1842-built Ariel, 100 tons OM. Some fruit schooners were bought for this trade, as were some Baltimore clippers.: 90–97
China clippers and the apogee of sail
Among the most notable clippers were the China clippers, also called tea clippers, designed to ply the trade routes between Europe and the East Indies. The last example of these still in reasonable condition is Cutty Sark, preserved in dry dock at Greenwich, United Kingdom. Damaged by fire on 21 May 2007 while undergoing conservation, the ship was permanently elevated 3.0 m above the dry dock floor in 2010 as part of a plan for long-term preservation.
Clippers were built for seasonal trades such as tea, where an early cargo was more valuable, or for passenger routes. One passenger ship survives, the City of Adelaide designed by William Pile of Sunderland. The fast ships were ideally suited to low-volume, high-profit goods, such as tea, opium, spices, people, and mail. The return could be spectacular. The Challenger returned from Shanghai with "the most valuable cargo of tea and silk ever to be laden in one bottom". Competition among the clippers was public and fierce, with their times recorded in the newspapers.
The last China clippers had peak speeds over 16 knots (30 km/h), but their average speeds over a whole voyage were substantially less. The joint winner of the Great Tea Race of 1866 logged about 15,800 nautical miles on a 99-day trip. This gives an average speed slightly over 6.6 knots (12.2 km/h).: 269–285 The key to a fast passage for a tea clipper was getting across the China Sea against the monsoon winds that prevailed when the first tea crop of the season was ready.: 31, 20 These difficult sailing conditions (light and/or contrary winds) dictated the design of tea clippers. The US clippers were designed for the strong winds encountered on their route around Cape Horn.
Donald McKay's Sovereign of the Seas reported the highest speed ever achieved by a sailing ship, 22 knots (41 km/h), made while running her easting down to Australia in 1854. (John Griffiths' first clipper, the Rainbow, had a top speed of 14 knots.) Eleven other instances are reported of a ship's logging 18 knots (33 km/h) or over. Ten of these were recorded by American clippers.
Besides the breath-taking 465-nautical-mile (861 km) day's run of the Champion of the Seas, 13 other cases are known of a ship's sailing over 400 nautical miles (740 km) in 24 hours.
With few exceptions, though, all the port-to-port sailing records are held by the American clippers.
The 24-hour record of the Champion of the Seas, set in 1854, was not broken until 1984 (by a multihull), or 2001 (by another monohull).
Decline
The American clippers sailing from the East Coast to the California goldfields were working in a booming market. Freight rates were high everywhere in the first years of the 1850s. This started to fade in late 1853. The ports of California and Australia reported that they were overstocked with goods that had been shipped earlier in the year. This gave an accelerating fall in freight rates that was halted, however, by the start of the Crimean War in March 1854, as many ships were now being chartered by the French and British governments. The end of the Crimean War in April 1856 released all this capacity back on the world shipping markets – the result being a severe slump. The next year had the Panic of 1857, with effects on both sides of the Atlantic. The United States was just starting to recover from this in 1861 when the American Civil War started, causing significant disruption to trade in both Union and Confederate states.: 14–15 As the economic situation deteriorated in 1853, American shipowners either did not order new vessels, or specified an ordinary clipper or a medium clipper instead of an extreme clipper. No extreme clipper was launched in an American shipyard after the end of 1854 and only a few medium clippers after 1860.
By contrast, British trade recovered well at the end of the 1850s. Tea clippers had continued to be launched during the depressed years, apparently little affected by the economic downturn.: 15 The long-distance route to China was not realistically challenged by steamships in the early part of the 1860s. No true steamer (as opposed to an auxiliary steamship) had the fuel efficiency to carry sufficient cargo to make a profitable voyage. The auxiliary steamships struggled to make any profit.
The situation changed in 1866 when the Alfred Holt-designed and owned SS Agamemnon made her first voyage to China. Holt had persuaded the Board of Trade to allow higher steam pressures in British merchant vessels. Running at 60 psi instead of the previously permitted 25 psi, and using an efficient compound engine, Agamemnon had the fuel efficiency to steam at 10 knots to China and back, with coaling stops at Mauritius on the outward and return legs – crucially carrying sufficient cargo to make a profit.
In 1869, the Suez Canal opened, giving steamships a route about 3,000 nautical miles (5,600 km; 3,500 mi) shorter than that taken by sailing ships round the Cape of Good Hope. Despite initial conservatism by tea merchants, by 1871, tea clippers found strong competition from steamers in the tea ports of China. A typical passage time back to London for a steamer was 58 days, while the fastest clippers could occasionally make the trip in less than 100 days; the average was 123 days in the 1867–68 tea season.: 225–243 The freight rate for a steamer in 1871 was roughly double that paid to a sailing vessel. Some clipper owners were severely caught out by this; several extreme clippers had been launched in 1869, including Cutty Sark, Norman Court and Caliph.
Surviving ships
Of the many clipper ships built during the mid-19th century, only two are known to survive. The only intact survivor is Cutty Sark, which was preserved as a museum ship in 1954 at Greenwich for public display. The other known survivor is City of Adelaide; unlike Cutty Sark, she was reduced to a hulk over the years. She eventually sank at her moorings in 1991, but was raised the following year, and remained on dry land for years. Adelaide (or S.V. Carrick) is the older of the two survivors, and was transported to Australia for conservation.
In popular culture
The clipper legacy appears in collectible sailing cards, in the name of a basketball team, and in airline branding.
Sailing cards
Departures of clipper ships, mostly from New York and Boston to San Francisco, were advertised by clipper-ship sailing cards. These cards, slightly larger than today's postcards, were produced by letterpress and wood engraving on coated card stock. Most clipper cards were printed in the 1850s and 1860s, and represented the first pronounced use of color in American advertising art. Perhaps 3,500 cards survive. With their rarity and importance as artifacts of nautical, Western, and printing history, clipper cards are valued by both private collectors and institutions.
Basketball team
The Los Angeles Clippers of the National Basketball Association take their name from the type of ship. After the Buffalo Braves moved to San Diego, California in 1978, a contest was held to choose a new name. The winning name highlighted the city's connection with the clippers that frequented San Diego Bay. The team retained the name in its 1984 move to Los Angeles.
Airliners
The airline Pan Am gave its aircraft names beginning with the word 'Clipper' and used Clipper as its callsign. This was intended to evoke an image of speed and glamour.
See also
List of clipper ships
Clipper route
Packet boat
Sail plan
Windjammer
People associated with clipper ships
List of people who sailed on clipper ships
Joseph Warren Holmes
Samuel Hartt Pook
William Jardine
Donald McKay
John (or "Jock") "White Hat" Willis
Further reading
Carl C. Cutler, Greyhounds of the Sea (1930, 3rd ed. Naval Institute Press 1984)
Alexander Laing, Clipper Ship Men (1944)
David R. MacGregor, Fast Sailing Ships: Their Design and Construction, 1775–1875 Naval Institute Press, 1988 ISBN 0-87021-895-6
Oxford English Dictionary (1987) ISBN 0-19-861212-5.
Bruce D. Roberts, Clipper Ship Sailing Cards, 2007, Lulu.com. ISBN 978-0-9794697-0-1.
Bruce D. Roberts, Clipper Ship Cards: The High-Water Mark in Early Trade Cards, The Advertising Trade Card Quarterly 1, no. 1 (Spring 1994): 20–22.
Bruce D. Roberts, Clipper Ship Cards: Graphic Themes and Images, The Advertising Trade Card Quarterly 1, no. 2 (Summer 1994): 22–24.
Bruce D. Roberts, Museum Collections of Clipper Ship Cards, The Advertising Trade Card Quarterly 2, no. 1 (Spring 1995): 22–24.
Bruce D. Roberts, Selling Sail with Clipper Ship Cards, Ephemera News 19, no. 2 (Winter 2001): 1, 11–14.
Chris and Lesley Holden (2009). Life and Death on the Royal Charter. Calgo Publications. ISBN 978-0-9545066-2-9.
Overview and introduction
Knoblock, Glenn A. (2014). The American Clipper Ship, 1845–1920: A Comprehensive History, with a Listing of Builders and Their Ships. Jefferson: McFarland. ISBN 978-0-7864-7112-6.
Ross, Donald Gunn III. "Era of the Clipper Ships Web Site". Archived from the original on 30 March 2010. Retrieved 3 September 2011. – Beautifully illustrated introduction, by a member of Donald McKay's family
Clark, Arthur H. (1910). The Clipper Ship Era, An Epitome of Famous American and British Clipper Ships, Their Owners, Builders, Commanders, and Crews, 1843–1869. Camden, ME: G.P. Putnam's Sons. – Basic reading, a favorite of Franklin Delano Roosevelt
Westward by Sea: A Maritime Perspective on American Expansion, 1820–1890, digitized source materials from Mystic Seaport, via Library of Congress American Memory
Currier & Ives (1959). American clipper ship prints by the Curriers. American Neptune. Salem, MA: The American Neptune.
American clipper ships
Cutler, Carl C (1984). Greyhounds of the sea: The story of the American clipper ship (3rd ed.). Annapolis, MD: Naval Institute Press. ISBN 978-0-87021-232-1. – The definitive narrative history, useful for checking discrepancies between sources
Crothers, William L (1997). The American-built clipper ship, 1850–1856 : characteristics, construction, and details. Camden, ME: International Marine. ISBN 0-07-014501-6. – The comprehensive reference for design and construction of American-built clipper ships, with numerous drawings, diagrams, and charts. Gives examples of how each design feature varies in different ships.
Howe, Octavius T; Matthews, Frederick C. (1986) [First published 1926–1927]. American Clipper Ships 1833–1858. Volume 1 and 2. Salem, MA; New York: Marine Research Society; Dover Publications. ISBN 978-0-486-25115-8. Articles on individual ships, broader coverage than Crothers
Clipper ships by type
Lubbock, Basil (1984). The China clippers. The Century seafarers. London: Century. ISBN 978-0-7126-0341-6.
Lubbock, Basil (1968) [1921]. The Colonial Clippers (2nd ed.). Glasgow: James Brown & Son. pp. 86–87. OCLC 7831041. – British and Australian clippers
Lubbock, Basil (1932). The Nitrate Clippers (1st ed.). Glasgow: Brown, Son & Ferguson. pp. 86–87. ISBN 978-0-85174-116-1.
Lubbock, Basil (1967) [1933]. The Opium Clippers. Boston, MA: Charles E. Lauriat Co. ISBN 978-0-85174-241-0. – One of the few comprehensive books on these ships
City of Adelaide Clipper Ship, one of the few surviving clippers
Westward by Sea Library of Congress collection of sailing cards.
The Shipslist: Baltimore Clipper
The Clipper Ship Card Collection at the New-York Historical Society
Clojure (pronounced like "closure") is a dynamic and functional dialect of the Lisp programming language on the Java platform.
Like most other Lisps, Clojure's syntax is built on S-expressions that are first parsed into data structures by a reader before being compiled. Clojure's reader supports literal syntax for maps, sets and vectors along with lists, and these are compiled to the mentioned structures directly. Clojure treats code as data and has a Lisp macro system. Clojure is a Lisp-1 and is not intended to be code-compatible with other dialects of Lisp, since it uses its own set of data structures incompatible with other Lisps.
Clojure advocates immutability and immutable data structures and encourages programmers to be explicit about managing identity and its states. This focus on programming with immutable values and explicit progression-of-time constructs is intended to facilitate developing more robust, especially concurrent, programs that are simple and fast. While its type system is entirely dynamic, recent efforts have also sought the implementation of a dependent type system.
The language was created by Rich Hickey in the mid-2000s, originally for the Java platform; the language has since been ported to other platforms, such as the Common Language Runtime (.NET). Hickey continues to lead development of the language as its benevolent dictator for life.
History and development process
Rich Hickey is the creator of the Clojure language. Before Clojure, he developed dotLisp, a similar project based on the .NET platform, and three earlier attempts to provide interoperability between Lisp and Java: a Java foreign language interface for Common Lisp (jfli), A Foreign Object Interface for Lisp (FOIL), and a Lisp-friendly interface to Java Servlets (Lisplets).
Hickey spent about two and a half years working on Clojure before releasing it publicly in October 2007, much of that time working exclusively on Clojure with no outside funding. At the end of this time, Hickey sent an email announcing the language to some friends in the Common Lisp community.
The development process is restricted to the Clojure core team, though issues are publicly visible at the Clojure JIRA project page. Anyone can ask questions or submit issues and ideas at ask.clojure.org. If it's determined that a new issue warrants a JIRA ticket, a core team member will triage it and add it. JIRA issues are processed by a team of screeners and finally approved by Rich Hickey.
Clojure's name, according to Hickey, is a word play on the programming concept "closure" incorporating the letters C, L, and J for C#, Lisp, and Java respectively—three languages which had a major influence on Clojure's design.
Design philosophy
Rich Hickey developed Clojure because he wanted a modern Lisp for functional programming, symbiotic with the established Java platform, and designed for concurrency.
Clojure's approach to state is characterized by the concept of identities, which are represented as a series of immutable states over time. Since states are immutable values, any number of workers can operate on them in parallel, and concurrency becomes a question of managing changes from one state to another. For this purpose, Clojure provides several mutable reference types, each having well-defined semantics for the transition between states.
Language overview
Clojure runs on the Java platform and as a result, integrates with Java and fully supports calling Java code from Clojure, and Clojure code can be called from Java, too. The community uses tools such as Clojure command-line interface (CLI) or Leiningen for project automation, providing support for Maven integration. These tools handle project package management and dependencies and are configured using Clojure syntax.
As a Lisp dialect, Clojure supports functions as first-class objects, a read–eval–print loop (REPL), and a macro system. Clojure's Lisp macro system is very similar to that of Common Lisp with the exception that Clojure's version of the backquote (termed "syntax quote") qualifies symbols with their namespace. This helps prevent unintended name capture, as binding to namespace-qualified names is forbidden. It is possible to force a capturing macro expansion, but it must be done explicitly. Clojure does not allow user-defined reader macros, but the reader supports a more constrained form of syntactic extension. Clojure supports multimethods and for interface-like abstractions has a protocol based polymorphism and data type system using records, providing high-performance and dynamic polymorphism designed to avoid the expression problem.
Clojure has support for lazy sequences and encourages the principle of immutability and persistent data structures. As a functional language, emphasis is placed on recursion and higher-order functions instead of side-effect-based looping. Automatic tail call optimization is not supported as the JVM does not support it natively; it is possible to do so explicitly by using the recur keyword. For parallel and concurrent programming Clojure provides software transactional memory, a reactive agent system, and channel-based concurrent programming.
Clojure 1.7 introduced reader conditionals by allowing the embedding of Clojure and ClojureScript code in the same namespace. Transducers were added as a method for composing transformations. Transducers enable higher-order functions such as map and fold to generalize over any source of input data. While traditionally these functions operate on sequences, transducers allow them to work on channels and let the user define their own models for transduction.
Extensible Data Notation
Extensible Data Notation, or edn, is a subset of the Clojure language intended as a data transfer format. It can be used to serialize and deserialize Clojure data structures, and Clojure itself uses a superset of edn to represent programs.
edn is used in a similar way to JSON or XML, but has a relatively large list of built-in elements, shown here with examples:
booleans: true, false
strings: "foo bar"
characters: \c, \tab
symbols: name
keywords: :key
integers: 123
floating point numbers: 3.14
lists: (a b 42)
vectors: [a b 42]
maps: {:a 1, "foo" :bar, [1 2 3] four}
sets: #{a b [1 2 3]}
nil: nil (a null-like value)
In addition to those elements, it supports extensibility through the use of tags, which consist of the character # followed by a symbol. When encountering a tag, the reader passes the value of the next element to the corresponding handler, which returns a data value. For example, this could be a tagged element: #myapp/Person {:first "Fred" :last "Mertz"}, whose interpretation will depend on the appropriate handler of the reader.
This definition of extension elements in terms of the others avoids relying on either convention or context to convey elements not included in the base set.
Alternative platforms
The primary platform of Clojure is Java, but other target implementations exist. The most notable of these are ClojureScript, which compiles to ECMAScript 3, and ClojureCLR, a full port on the .NET platform, interoperable with its ecosystem. A survey of the Clojure community with 1,060 respondents conducted in 2013 found that 47% of respondents used both Clojure and ClojureScript when working with Clojure. In 2014, this number had risen to 55%; in 2015, based on 2,445 respondents, to 66%. Popular ClojureScript projects include implementations of the React library such as Reagent, re-frame, Rum, and Om.
Other implementations
Other implementations of Clojure on different platforms include:
Babashka, Native Clojure scripting language leveraging GraalVM native image and Small Clojure Interpreter
CljPerl, Clojure on Perl
ClojureCLR, Clojure on Common Language Runtime (CLR), the .NET virtual machine.
ClojureDart, Extend Clojure's reach to mobile & desktop apps by porting Clojure to Dart and Flutter
Clojerl, Clojure on BEAM, the Erlang virtual machine
clojure-py, Clojure in pure Python
ClojureRS, Clojure on Rust
ClojureScript, Compiler for Clojure that targets JavaScript. It emits JavaScript code compatible with the advanced compiling mode of the Google Closure optimizing compiler
Ferret, compiles to self-contained C++11 that can run on microcontrollers
jank, Clojure compatible language with gradual typing that is hosted in C++ on an LLVM-based JIT
Joker, an interpreter and linter written in Go
Las3r, a subset of Clojure that runs on the ActionScript Virtual Machine (the Adobe Flash Player platform)
Pixie, Clojure-inspired Lisp dialect written in RPython
Rouge, Clojure on YARV in Ruby
Popularity
With continued interest in functional programming, Clojure's adoption by software developers using the Java platform has continued to increase. The language has also been recommended by software developers such as Brian Goetz, Eric Evans, James Gosling, Paul Graham, and Robert C. Martin. ThoughtWorks, while assessing functional programming languages for their Technology Radar, described Clojure as "a simple, elegant implementation of Lisp on the JVM" in 2010 and promoted its status to "ADOPT" in 2012.
The "JVM Ecosystem Report 2018" (which was claimed to be "the largest survey ever of Java developers"), prepared in collaboration by Snyk and Java Magazine, ranked Clojure as the 2nd most used programming language on the JVM for "main applications". Clojure is used in industry by firms such as Apple, Atlassian, Funding Circle, Netflix, Nubank, Puppet, and Walmart as well as government agencies such as NASA. It has also been used for creative computing, including visual art, music, games, and poetry.
Tools
Tooling for Clojure development has seen significant improvement over the years. The following is a list of some popular IDEs and text editors with plug-ins that add support for programming in Clojure:
Emacs, with CIDER
IntelliJ IDEA, with Clojure-Kit or Cursive (a free license is available for non-commercial use)
Sublime Text, with Clojure Sublimed, or Tutkain,
Vim, with fireplace.vim, vim-iced, or Conjure (Neovim only)
Visual Studio Code, with Calva or Clover
In addition to the tools provided by the community, the official Clojure command-line interface (CLI) tools have also become available on Linux, macOS, and Windows since Clojure 1.9.
Features by example
The following examples can be run in a Clojure REPL such as one started with the Clojure CLI tools or an online REPL such as one available on REPL.it.
Simplicity
Because of its strong emphasis on simplicity, Clojure programs typically consist mostly of functions and simple data structures (i.e., lists, vectors, maps, and sets):
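A small illustration (the names are arbitrary):
(def languages ["Clojure" "Common Lisp" "Scheme"])          ; a vector
(def person {:name "Ada" :likes #{:logic :mathematics}})    ; a map containing a set

(defn describe [p]
  (str (:name p) " likes " (count (:likes p)) " things"))

(describe person)
;; => "Ada likes 2 things"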
Programming at the REPL
Like other Lisps, one of Clojure's iconic features is interactive programming at the REPL. In the following examples, ; starts a line comment and ;; => indicates the expected output:
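A sketch of such a session:
; evaluate an expression
(+ 1 2 3)
;; => 6

; define a function, then use it
(defn square [x] (* x x))
(map square [1 2 3 4])
;; => (1 4 9 16)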
Names at runtime
Unlike other runtime environments where names get compiled away, Clojure's runtime environment is easily introspectable using normal Clojure data structures:
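A hedged sketch; the var add-one exists only for illustration, and the metadata shown is abbreviated:
(defn add-one
  "Increments its argument."
  [x]
  (inc x))

; the var's metadata is an ordinary Clojure map
(meta #'add-one)
;; => {:name add-one, :doc "Increments its argument.", :arglists ([x]), ...}

; all public names interned in a namespace, as a map of symbols to vars
(keys (ns-publics 'user))
;; => (add-one ...)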
Code as data (homoiconicity)
Similar to other Lisps, Clojure is homoiconic (also known as "code as data"). In the example below, we can see how to write code that modifies code itself:
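A minimal sketch:
(def form '(+ 1 2 3 4))    ; code captured as a plain list

(eval form)
;; => 10

; build a new piece of code from the old one and evaluate it
(eval (cons '* (rest form)))
;; => 24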
Expressive operators for data transformation
The threading macros (->, ->>, and friends) can syntactically express the abstraction of piping a collection of data through a series of transformations:
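For example, a sketch:
(->> (range 10)
     (map inc)
     (filter even?)
     (reduce +))
;; => 30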
This can also be achieved more efficiently using transducers:
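A sketch of the transducer-based equivalent, which fuses the steps and avoids building intermediate sequences:
(transduce (comp (map inc) (filter even?)) + 0 (range 10))
;; => 30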
Thread-safe management of identity and state
A thread-safe generator of unique serial numbers (though, like many other Lisp dialects, Clojure has a built-in gensym function that it uses internally):
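A minimal sketch using an atom (the names are illustrative):
(def serial-counter (atom 0))

(defn next-serial-number
  "Returns the next serial number; swap! applies inc atomically."
  []
  (swap! serial-counter inc))

(next-serial-number)
;; => 1
(next-serial-number)
;; => 2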
Macros
An anonymous subclass of java.io.Writer that doesn't write to anything, and a macro using it to silence all prints within it:
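A hedged sketch; the names bit-bucket-writer and noprint are illustrative:
(def bit-bucket-writer
  (proxy [java.io.Writer] []
    (write ([buf])            ; discard single-argument writes
           ([buf off len]))   ; discard ranged writes
    (flush [])
    (close [])))

(defmacro noprint
  "Evaluates the given forms while silencing all printing to *out*."
  [& forms]
  `(binding [*out* bit-bucket-writer]
     ~@forms))

(noprint (println "Hello, nobody!"))
;; => nil, and nothing is printed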
Language interoperability with Java
Clojure was created from the ground up to embrace its host platforms as one of its design goals and thus provides excellent language interoperability with Java:
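A few illustrative interop calls (the values shown are indicative):
; instance method, static method, constructor
(.toUpperCase "clojure")
;; => "CLOJURE"
(System/getProperty "java.vm.name")
;; => e.g. "OpenJDK 64-Bit Server VM"
(def sb (StringBuilder. "hello"))
(-> sb (.append ", world") (.toString))
;; => "hello, world"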
Software transactional memory
10 threads manipulating one shared data structure, which consists of 100 vectors each one containing 10 (initially sequential) unique numbers. Each thread then repeatedly selects two random positions in two random vectors and swaps them. All changes to the vectors occur in transactions by making use of Clojure's software transactional memory system:
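A hedged sketch of such a program; the function and binding names are illustrative:
(defn run [nvecs nitems nthreads niters]
  (let [vec-refs (vec (map (comp ref vec)
                           (partition nitems (range (* nvecs nitems)))))
        ;; swap two randomly chosen elements of two randomly chosen vectors
        swap #(let [v1 (rand-int nvecs), v2 (rand-int nvecs)
                    i1 (rand-int nitems), i2 (rand-int nitems)]
                (dosync
                  (let [tmp (nth @(vec-refs v1) i1)]
                    (alter (vec-refs v1) assoc i1 (nth @(vec-refs v2) i2))
                    (alter (vec-refs v2) assoc i2 tmp))))
        ;; the count of distinct numbers stays constant if the transactions are correct
        report #(println "distinct:" (count (distinct (apply concat (map deref vec-refs)))))]
    (report)
    (dorun (apply pcalls (repeat nthreads #(dotimes [_ niters] (swap)))))
    (report)))

(run 100 10 10 100000)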
See also
List of JVM languages
List of CLI languages
Comparison of programming languages
Further reading
Official website
The term CLU can refer to:
Organizations
California Lutheran University
Claremont Lincoln University
Communion and Liberation – University
Czech Lacrosse Union
Other uses
CLU (gene), the gene for clusterin
CLU (programming language)
Clu (Tron), fictional character from the Tron franchise
Chartered Life Underwriter, a financial professional designation
Command Launch Unit for the FGM-148 Javelin
Common Land Unit
Containerized living unit
See also
Clue (disambiguation)
CoffeeScript is a programming language that compiles to JavaScript. It adds syntactic sugar inspired by Ruby, Python, and Haskell in an effort to enhance JavaScript's brevity and readability. Specific additional features include list comprehension and destructuring assignment.
CoffeeScript support is included in Ruby on Rails version 3.1 and Play Framework. In 2011, Brendan Eich referenced CoffeeScript as an influence on his thoughts about the future of JavaScript.
History
On December 13, 2009, Jeremy Ashkenas made the first Git commit of CoffeeScript with the comment: "initial commit of the mystery language." The compiler was written in Ruby. On December 24, he made the first tagged and documented release, 0.1.0. On February 21, 2010, he committed version 0.5, which replaced the Ruby compiler with a self-hosting version in pure CoffeeScript. By that time the project had attracted several other contributors on GitHub, and was receiving over 300 page hits per day.
On December 24, 2010, Ashkenas announced the release of stable 1.0.0 to Hacker News, the site where the project was announced for the first time.
On September 18, 2017, version 2.0.0 was introduced, which "aims to bring CoffeeScript into the modern JavaScript era, closing gaps in compatibility with JavaScript while preserving the clean syntax that is CoffeeScript’s hallmark."
Syntax
Almost everything is an expression in CoffeeScript, for example, if, switch and for expressions (which have no return value in JavaScript) return a value. As in Perl, these control statements also have postfix versions; for example, if can also be written in consequent if condition form.
Many unnecessary parentheses and braces can be omitted; for example, blocks of code can be denoted by indentation instead of braces, function calls are implicit, and object literals are often detected automatically.
To compute the body mass index in JavaScript, one could write:
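A sketch with placeholder values:
const mass = 72;
const height = 1.78;
const BMI = mass / (height * height);
if (18.5 < BMI && BMI < 25) {
    alert("You are within the normal range");
}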
With CoffeeScript the interval is directly described:
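A corresponding CoffeeScript sketch, with the interval written as a chained comparison:
mass = 72
height = 1.78
BMI = mass / (height * height)
alert "You are within the normal range" if 18.5 < BMI < 25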
To compute the greatest common divisor of two integers with the Euclidean algorithm, in JavaScript one usually needs a while loop:
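A JavaScript sketch:
function gcd(x, y) {
    while (y !== 0) {
        const t = y;
        y = x % y;
        x = t;
    }
    return x;
}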
Whereas in CoffeeScript one can use until instead:
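A CoffeeScript sketch:
gcd = (x, y) ->
  [x, y] = [y, x % y] until y is 0
  x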
The ? keyword quickly checks if a variable is null or undefined :
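For example, a sketch:
if person?
  alert "Have person"
else
  alert "No person"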
This would alert "No person" if the variable is null or undefined and "Have person" if there is something there.
A common pre-es6 JavaScript snippet using the jQuery library is:
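A sketch of such a snippet:
$(document).ready(function () {
    // Initialize the page here.
});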
Or even just:
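Equivalently (still a sketch):
$(function () {
    // Initialize the page here.
});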
In CoffeeScript, the function keyword is replaced by the -> symbol, and indentation is used instead of curly braces, as in other off-side rule languages such as Python and Haskell. Also, parentheses can usually be omitted, using indentation level instead to denote a function or block. Thus, the CoffeeScript equivalent of the snippet above is:
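A CoffeeScript sketch of the same idea:
$(document).ready ->
  # Initialize the page here.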
Or just:
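Or, more tersely (again a sketch):
$ ->
  # Initialize the page here.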
Ruby-style string interpolation is included in CoffeeScript. Double-quoted strings allow for interpolated values, using #{ ... }, and single-quoted strings are literal.
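For instance, a sketch:
author = "Ada Lovelace"
greeting = "Hello, #{author}!"        # interpolated
literal  = 'Hello, #{author}!'        # left exactly as written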
Any for loop can be replaced by a list comprehension; so that to compute the squares of the positive odd numbers smaller than ten (i.e. numbers whose remainder modulo 2 is 1), one can do:
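One possible version (a sketch):
squares = (n * n for n in [1..9] when n % 2 is 1)
# => [1, 9, 25, 49, 81]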
Alternatively, there is:
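A sketch stepping through the odd numbers directly:
squares = (n * n for n in [1..9] by 2)
# => [1, 9, 25, 49, 81]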
A linear search can be implemented with a one-liner using the when keyword:
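A sketch of such a one-liner; the names array is illustrative:
names = ["Ivan", "Joanna", "Nikolay", "Mihaela"]
linearSearch = (searchName) -> alert(index) for name, index in names when name is searchName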
The for ... in syntax allows looping over arrays while the for ... of syntax allows looping over objects.
CoffeeScript has been criticized for its unusual scoping rules. In particular, it completely disallows variable shadowing, which makes reasoning about code more difficult and error-prone in some basic programming patterns that have been established and taken for granted since procedural programming principles were defined.
For example, with the following code snippet in JavaScript, one does not have to look outside the {}-block to know for sure that no possible foo variable in the outer scope can be incidentally overridden:
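A minimal sketch of the kind of snippet meant here; computeSomething and use are hypothetical helpers:
{
    let foo = computeSomething();   // `let` declares a new, block-local variable
    use(foo);                       // no `foo` in an outer scope can be affected here
}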
In CoffeeScript there is no way to tell if the scope of a variable is limited to a block or not without looking outside the block.
Development and distribution
The CoffeeScript compiler has been self-hosting since version 0.5 and is available as a Node.js utility; however, the core compiler does not rely on Node.js and can be run in any JavaScript environment. One alternative to the Node.js utility is the Coffee Maven Plugin, a plugin for the Apache Maven build system. The plugin uses the Rhino JavaScript engine written in Java.
The official site at CoffeeScript.org has a "Try CoffeeScript" button in the menu bar; clicking it opens a modal window in which users can enter CoffeeScript, see the JavaScript output, and run it directly in the browser. The js2coffee site provides bi-directional translation.
Latest additions
Source maps allow users to debug their CoffeeScript code directly, supporting CoffeeScript tracebacks on run time errors.
CoffeeScript supports a form of Literate Programming, using the .coffee.md or .litcoffee file extension. This allows CoffeeScript source code to be written in Markdown. The compiler will treat any indented blocks (Markdown's way of indicating source code) as code, and ignore the rest as comments.
Extensions
Iced CoffeeScript is a superset of CoffeeScript which adds two new keywords: await and defer. These additions simplify asynchronous control flow, making the code look more like a procedural programming language, eliminating the call-back chain. It can be used on the server side and in the browser.
Adoption
On September 13, 2012, Dropbox announced that their browser-side code base had been rewritten from JavaScript to CoffeeScript; however, it was migrated to TypeScript in 2017.
GitHub's internal style guide once said "write new JS in CoffeeScript", though it no longer does, and their Atom text editor was also written in the language.
Pixel Game Maker MV makes use of CoffeeScript as part of its game development environment.
See also
Haxe
Nim (programming language)
Amber Smalltalk
Clojure
Dart (programming language)
Kotlin (programming language)
LiveScript
Opa (programming language)
Elm (programming language)
TypeScript
PureScript
Further reading
Lee, Patrick (May 14, 2014). CoffeeScript in Action (First ed.). Manning Publications. p. 432. ISBN 978-1617290626.
Grosenbach, Geoffrey (May 12, 2011). "Meet CoffeeScript" (First ed.). PeepCode. : Cite journal requires |journal= (help)
Bates, Mark (May 31, 2012). Programming in CoffeeScript (First ed.). Addison-Wesley. p. 350. ISBN 978-0-321-82010-5.
MacCaw, Alex (January 31, 2012). The Little Book on CoffeeScript (First ed.). O'Reilly Media. p. 62. ISBN 978-1449321055.
Burnham, Trevor (August 3, 2011). CoffeeScript: Accelerated JavaScript Development (First ed.). Pragmatic Bookshelf. p. 138. ISBN 978-1934356784.
Official website
COMIT was the first string processing language (compare SNOBOL, TRAC, and Perl), developed on the IBM 700/7000 series computers by Dr. Victor Yngve, University of Chicago, and collaborators at MIT from 1957 to 1965. Yngve created the language for supporting computerized research in the field of linguistics, and more specifically, the area of machine translation for natural language processing. The creation of COMIT led to the creation of SNOBOL.
Bob Fabry, University of Chicago, was responsible for COMIT II on Compatible Time Sharing System.
Coq is an interactive theorem prover first released in 1989. It allows for expressing mathematical assertions, mechanically checks proofs of these assertions, helps find formal proofs, and extracts a certified program from the constructive proof of its formal specification. Coq works within the theory of the calculus of inductive constructions, a derivative of the calculus of constructions. Coq is not an automated theorem prover but includes automatic theorem proving tactics (procedures) and various decision procedures.
The Association for Computing Machinery awarded Thierry Coquand, Gérard Huet, Christine Paulin-Mohring, Bruno Barras, Jean-Christophe Filliâtre, Hugo Herbelin, Chetan Murthy, Yves Bertot, and Pierre Castéran with the 2013 ACM Software System Award for Coq.
The name "Coq" is a wordplay on the name of Thierry Coquand, Calculus of Constructions or "CoC" and follows the French computer science tradition of naming software after animals (coq in French meaning rooster).
Overview
When viewed as a programming language, Coq implements a dependently typed functional programming language; when viewed as a logical system, it implements a higher-order type theory. The development of Coq has been supported since 1984 by INRIA, now in collaboration with École Polytechnique, University of Paris-Sud, Paris Diderot University, and CNRS. In the 1990s, ENS Lyon was also part of the project. The development of Coq was initiated by Gérard Huet and Thierry Coquand, and more than 40 people, mainly researchers, have contributed features to the core system since its inception. The implementation team has successively been coordinated by Gérard Huet, Christine Paulin-Mohring, Hugo Herbelin, and Matthieu Sozeau. Coq is mainly implemented in OCaml with a bit of C. The core system can be extended by way of a plug-in mechanism.
The name coq means 'rooster' in French and stems from a French tradition of naming research development tools after animals. Up until 1991, Coquand was implementing a language called the Calculus of Constructions and it was simply called CoC at this time. In 1991, a new implementation based on the extended Calculus of Inductive Constructions was started and the name was changed from CoC to Coq in an indirect reference to Coquand, who developed the Calculus of Constructions along with Gérard Huet and contributed to the Calculus of Inductive Constructions with Christine Paulin-Mohring.
Coq provides a specification language called Gallina ("hen" in Latin, Spanish, Italian and Catalan).
Programs written in Gallina have the weak normalization property, implying that they always terminate. This is a distinctive property of the language, since infinite loops (non-terminating programs) are common in other programming languages, and is one way to avoid the halting problem.
As an example, a proof of commutativity of addition on natural numbers in Coq:
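A hedged reconstruction of such a proof term, roughly as it might be displayed by the system; plus_n_O and plus_n_Sm are the standard-library lemmas referred to below:
plus_comm =
fun n m : nat =>
nat_ind (fun n0 : nat => n0 + m = m + n0) (plus_n_O m)
  (fun (y : nat) (H : y + m = m + y) =>
   eq_ind (S (m + y)) (fun n0 : nat => S (y + m) = n0) (f_equal S H)
     (m + S y) (plus_n_Sm m y)) n
     : forall n m : nat, n + m = m + n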
nat_ind stands for mathematical induction, eq_ind for substitution of equals, and f_equal for taking the same function on both sides of the equality. Earlier theorems are referenced showing m = m + 0 and S (m + y) = m + S y.
Notable uses
Four color theorem and SSReflect extension
Georges Gonthier of Microsoft Research in Cambridge, England and Benjamin Werner of INRIA used Coq to create a surveyable proof of the four color theorem, which was completed in 2002. Their work led to the development of the SSReflect ("Small Scale Reflection") package, which was a significant extension to Coq. Despite its name, most of the features added to Coq by SSReflect are general-purpose features and are not limited to the computational reflection style of proof. These features include:
Additional convenient notations for irrefutable and refutable pattern matching, on inductive types with one or two constructors
Implicit arguments for functions applied to zero arguments, which is useful when programming with higher-order functions
Concise anonymous arguments
An improved set tactic with more powerful matching
Support for reflection
SSReflect 1.11 is freely available, dual-licensed under the open source CeCILL-B or CeCILL-2.0 license, and compatible with Coq 8.11.
Other applications
CompCert: an optimizing compiler for almost all of the C programming language which is largely programmed and proven correct in Coq.
Disjoint-set data structure: correctness proof in Coq was published in 2007.
Feit–Thompson theorem: formal proof using Coq was completed in September 2012.
See also
Calculus of constructions
Curry–Howard correspondence
Intuitionistic type theory
List of proof assistants
The Coq proof assistant – the official English website
coq/coq – the project's source code repository on GitHub
JsCoq Interactive Online System – allows Coq to be run in a web browser, without the need for any software installation
Alectryon – a library to process Coq snippets embedded in documents, showing goals and messages for each Coq sentence
Coq Wiki
Mathematical Components library – widely used library of mathematical structures, part of which is the SSReflect proof language
Constructive Coq Repository at Nijmegen
Math Classes
Coq at Open Hub
Textbooks
The Coq'Art – a book on Coq by Yves Bertot and Pierre Castéran
Certified Programming with Dependent Types – online and printed textbook by Adam Chlipala
Software Foundations – online textbook by Benjamin C. Pierce et al.
An introduction to small scale reflection in Coq – a tutorial on SSReflect by Georges Gonthier and Assia Mahboubi
Tutorials
Introduction to the Coq Proof Assistant – video lecture by Andrew Appel at Institute for Advanced Study
Video tutorials for the Coq proof assistant by Andrej Bauer. |
CUDA (or Compute Unified Device Architecture) is a proprietary and closed source parallel computing platform and application programming interface (API) that allows software to use certain types of graphics processing units (GPUs) for general purpose processing, an approach called general-purpose computing on GPUs (GPGPU). CUDA is a software layer that gives direct access to the GPU's virtual instruction set and parallel computational elements, for the execution of compute kernels.
CUDA is designed to work with programming languages such as C, C++, and Fortran. This accessibility makes it easier for specialists in parallel programming to use GPU resources, in contrast to prior APIs like Direct3D and OpenGL, which required advanced skills in graphics programming. CUDA-powered GPUs also support programming frameworks such as OpenMP, OpenACC and OpenCL; and HIP by compiling such code to CUDA.
CUDA was created by Nvidia. When it was first introduced, the name was an acronym for Compute Unified Device Architecture, but Nvidia later dropped the common use of the acronym.
Background
The graphics processing unit (GPU), as a specialized computer processor, addresses the demands of real-time, high-resolution, compute-intensive 3D graphics tasks. By 2012, GPUs had evolved into highly parallel multi-core systems allowing efficient manipulation of large blocks of data. This design is more effective than general-purpose central processing units (CPUs) for algorithms in situations where processing large blocks of data is done in parallel, such as:
cryptographic hash functions
machine learning
molecular dynamics simulations
physics engines
Ontology
The following table offers an approximate description of the ontology of the CUDA framework.
Programming abilities
The CUDA platform is accessible to software developers through CUDA-accelerated libraries, compiler directives such as OpenACC, and extensions to industry-standard programming languages including C, C++ and Fortran. C/C++ programmers can use 'CUDA C/C++', compiled to PTX with nvcc, Nvidia's LLVM-based C/C++ compiler, or by clang itself. Fortran programmers can use 'CUDA Fortran', compiled with the PGI CUDA Fortran compiler from The Portland Group.
In addition to libraries, compiler directives, CUDA C/C++ and CUDA Fortran, the CUDA platform supports other computational interfaces, including the Khronos Group's OpenCL, Microsoft's DirectCompute, OpenGL Compute Shader and C++ AMP. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Common Lisp, Haskell, R, MATLAB, IDL, Julia, and native support in Mathematica.
In the computer game industry, GPUs are used for graphics rendering, and for game physics calculations (physical effects such as debris, smoke, fire, fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more.
CUDA provides both a low level API (CUDA Driver API, non single-source) and a higher level API (CUDA Runtime API, single-source). The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was later added in version 2.0, which supersedes the beta released February 14, 2008. CUDA works with all Nvidia GPUs from the G8x series onwards, including GeForce, Quadro and the Tesla line. CUDA is compatible with most standard operating systems.
CUDA 8.0 comes with the following libraries (for compilation & runtime, in alphabetical order):
cuBLAS – CUDA Basic Linear Algebra Subroutines library
CUDART – CUDA Runtime library
cuFFT – CUDA Fast Fourier Transform library
cuRAND – CUDA Random Number Generation library
cuSOLVER – CUDA based collection of dense and sparse direct solvers
cuSPARSE – CUDA Sparse Matrix library
NPP – NVIDIA Performance Primitives library
nvGRAPH – NVIDIA Graph Analytics library
NVML – NVIDIA Management Library
NVRTC – NVIDIA Runtime Compilation library for CUDA C++
CUDA 8.0 comes with these other software components:
nView – NVIDIA nView Desktop Management Software
NVWMI – NVIDIA Enterprise Management Toolkit
GameWorks PhysX – a multi-platform game physics engine
CUDA 9.0–9.2 comes with these other components:
CUTLASS 1.0 – custom linear algebra algorithms,
NVCUVID – NVIDIA Video Decoder was deprecated in CUDA 9.2; it is now available in NVIDIA Video Codec SDK
CUDA 10 comes with these other components:
nvJPEG – Hybrid (CPU and GPU) JPEG processing
CUDA 11.0–11.8 comes with these other components:
CUB – newly added; one of the supported C++ libraries
MIG – multi-instance GPU support
nvJPEG2000 – JPEG 2000 encoder and decoder
Advantages
CUDA has several advantages over traditional general-purpose computation on GPUs (GPGPU) using graphics APIs:
Scattered reads – code can read from arbitrary addresses in memory.
Unified virtual memory (CUDA 4.0 and above)
Unified memory (CUDA 6.0 and above)
Shared memory – CUDA exposes a fast shared memory region that can be shared among threads. This can be used as a user-managed cache, enabling higher bandwidth than is possible using texture lookups.
Faster downloads and readbacks to and from the GPU
Full support for integer and bitwise operations, including integer texture lookups
Limitations
Whether for the host computer or the GPU device, all CUDA source code is now processed according to C++ syntax rules. This was not always the case. Earlier versions of CUDA were based on C syntax rules. As with the more general case of compiling C code with a C++ compiler, it is therefore possible that old C-style CUDA source code will either fail to compile or will not behave as originally intended.
Interoperability with rendering languages such as OpenGL is one-way, with OpenGL having access to registered CUDA memory but CUDA not having access to OpenGL memory.
Copying between host and device memory may incur a performance hit due to system bus bandwidth and latency (this can be partly alleviated with asynchronous memory transfers, handled by the GPU's DMA engine).
Threads should be running in groups of at least 32 for best performance, with total number of threads numbering in the thousands. Branches in the program code do not affect performance significantly, provided that each of 32 threads takes the same execution path; the SIMD execution model becomes a significant limitation for any inherently divergent task (e.g. traversing a space partitioning data structure during ray tracing).
No emulator or fallback functionality is available for modern revisions.
Valid C++ may sometimes be flagged and prevented from compiling due to the way the compiler approaches optimization for target GPU device limitations.
C++ run-time type information (RTTI) and C++-style exception handling are only supported in host code, not in device code.
In single-precision on first generation CUDA compute capability 1.x devices, denormal numbers are unsupported and are instead flushed to zero, and the precision of both the division and square root operations are slightly lower than IEEE 754-compliant single precision math. Devices that support compute capability 2.0 and above support denormal numbers, and the division and square root operations are IEEE 754 compliant by default. However, users can obtain the prior faster gaming-grade math of compute capability 1.x devices if desired by setting compiler flags to disable accurate divisions and accurate square roots, and enable flushing denormal numbers to zero.
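As an illustration (the source file name is a placeholder), this trade-off is exposed through nvcc flags such as --ftz, --prec-div and --prec-sqrt, with --use_fast_math implying all of them:
    nvcc --ftz=true --prec-div=false --prec-sqrt=false kernel.cu
    nvcc --use_fast_math kernel.cu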
Unlike OpenCL, CUDA-enabled GPUs are only available from Nvidia. Attempts to implement CUDA on other GPUs include:
Project Coriander: Converts CUDA C++11 source to OpenCL 1.2 C. A fork of CUDA-on-CL intended to run TensorFlow.
CU2CL: Converts CUDA 3.2 C++ to OpenCL C.
GPUOpen HIP: A thin abstraction layer on top of CUDA and ROCm intended for AMD and Nvidia GPUs. Has a conversion tool for importing CUDA C++ source. Supports CUDA 4.0 plus C++11 and float16.
ZLUDA is a drop-in replacement for CUDA on Intel GPUs. ZLUDA allows unmodified CUDA applications to run on Intel GPUs with near-native performance.
chipStar can compile and run CUDA/HIP programs on advanced OpenCL 3.0 or Level Zero platforms.
Example
This example code in C++ loads a texture from an image into an array on the GPU:
Below is an example given in Python that computes the product of two arrays on the GPU. The unofficial Python language bindings can be obtained from PyCUDA.
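A sketch of such a PyCUDA program (the kernel, array size and launch configuration here are illustrative) compiles a small CUDA C kernel at run time and lets PyCUDA handle the host–device copies:
    import numpy as np
    import pycuda.autoinit                  # initializes CUDA and creates a context
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    # Compile a CUDA C kernel at run time.
    mod = SourceModule("""
    __global__ void multiply_them(float *dest, float *a, float *b)
    {
        const int i = threadIdx.x;
        dest[i] = a[i] * b[i];
    }
    """)
    multiply_them = mod.get_function("multiply_them")

    a = np.random.randn(400).astype(np.float32)
    b = np.random.randn(400).astype(np.float32)
    dest = np.zeros_like(a)

    # drv.In/drv.Out perform the host-to-device and device-to-host copies.
    multiply_them(drv.Out(dest), drv.In(a), drv.In(b), block=(400, 1, 1), grid=(1, 1))

    print(dest - a * b)                     # prints an array of zeros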
Additional Python bindings to simplify matrix multiplication operations can be found in the program pycublas.
while CuPy directly replaces NumPy:
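A minimal sketch of the CuPy approach (array sizes are illustrative); the NumPy-like calls allocate and compute on the GPU, and cupy.asnumpy copies results back to the host:
    import cupy as cp

    a = cp.random.randn(400).astype(cp.float32)
    b = cp.random.randn(400).astype(cp.float32)
    dest = a * b                       # element-wise product, computed on the GPU

    print(cp.asnumpy(dest - a * b))    # copy back to the host; prints zeros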
GPUs supported
Supported CUDA Compute Capability versions for CUDA SDK version and Microarchitecture (by code name):
Note: CUDA SDK 10.2 is the last official release for macOS, as support will not be available for macOS in newer releases.
CUDA Compute Capability by version with associated GPU semiconductors and GPU card models (separated by their various application areas):
'*' – OEM-only products
Version features and specifications
Data types
Note: Any missing lines or empty entries reflect a lack of information on that exact item.
Tensor cores
Note: Any missing lines or empty entries reflect a lack of information on that exact item.
Technical Specification
Multiprocessor Architecture
For more information read the Nvidia CUDA programming guide.
Current and future usages of CUDA architecture
Accelerated rendering of 3D graphics
Accelerated interconversion of video file formats
Accelerated encryption, decryption and compression
Bioinformatics, e.g. BarraCUDA for NGS DNA sequencing
Distributed calculations, such as predicting the native conformation of proteins
Medical analysis simulations, for example virtual reality based on CT and MRI scan images
Physical simulations, in particular in fluid dynamics
Neural network training in machine learning problems
Face recognition
Volunteer computing projects, such as SETI@home and other projects using BOINC software
Molecular dynamics
Mining cryptocurrencies
Structure from motion (SfM) software
See also
SYCL – an open standard from Khronos Group for programming a variety of platforms, including GPUs, with single-source modern C++, similar to higher-level CUDA Runtime API (single-source)
BrookGPU – the Stanford University graphics group's compiler
Array programming
Parallel computing
Stream processing
rCUDA – an API for computing on remote computers
Molecular modeling on GPUs
Vulkan – low-level, high-performance 3D graphics and computing API
OptiX – ray tracing API by NVIDIA
CUDA binary (cubin) – a type of fat binary
Official website |
Cython is a superset of the programming language Python, which allows developers to write Python code (with optional, C-inspired syntax extensions) that yields performance comparable to that of C.
Cython is a compiled language that is typically used to generate CPython extension modules. Annotated Python-like code is compiled to C (also usable from e.g. C++) and then automatically wrapped in interface code, producing extension modules that can be loaded and used by regular Python code using the import statement, but with significantly less computational overhead at run time. Cython also facilitates wrapping independent C or C++ code into Python-importable modules.
Cython is written in Python and C and works on Windows, macOS, and Linux, producing C source files compatible with CPython 2.6, 2.7, and 3.3 and later versions. The Cython source code that Cython compiles (to C) can use both Python 2 and Python 3 syntax, defaulting to Python 2 syntax in Cython 0.x and Python 3 syntax in Cython 3.x. The default can be overridden (e.g. in a source code comment) to Python 3 (or 2) syntax. Since Python 3 syntax has changed in recent versions, Cython may not be up to date with the latest additions. Cython has "native support for most of the C++ language" and "compiles almost all existing Python code". Cython 3.0.0 was released on 17 July 2023.
Design
Cython works by producing a standard Python module. However, the behavior differs from standard Python in that the module code, originally written in Python, is translated into C. While the resulting code is fast, it makes many calls into the CPython interpreter and CPython standard libraries to perform actual work. Choosing this arrangement saved considerably on Cython's development time, but modules have a dependency on the Python interpreter and standard library.
Although most of the code is C-based, a small stub loader written in interpreted Python is usually required (unless the goal is to create a loader written entirely in C, which may involve work with the undocumented internals of CPython). However, this is not a major problem due to the presence of the Python interpreter.
Cython has a foreign function interface for invoking C/C++ routines and the ability to declare the static type of subroutine parameters and results, local variables, and class attributes.
A Cython program that implements the same algorithm as a corresponding Python program may consume fewer computing resources such as core memory and processing cycles due to differences between the CPython and Cython execution models. A basic Python program is loaded and executed by the CPython virtual machine, so both the runtime and the program itself consume computing resources. A Cython program is compiled to C code, which is further compiled to machine code, so the virtual machine is used only briefly when the program is loaded.
Cython employs:
Optimistic optimizations
Type inference (optional)
Low overhead in control structures
Low function call overhead
Performance depends both on what C code is generated by Cython and how that code is compiled by the C compiler.
History
Cython is a derivative of the Pyrex language, and supports more features and optimizations than Pyrex. Cython was forked from Pyrex in 2007 by developers of the Sage computer algebra package, because they were unhappy with Pyrex's limitations and could not get patches accepted by Pyrex's maintainer Greg Ewing, who envisioned a much smaller scope for his tool than the Sage developers had in mind. They then forked Pyrex as SageX. When they found people were downloading Sage just to get SageX, and developers of other packages (including Stefan Behnel, who maintains the XML library LXML) were also maintaining forks of Pyrex, SageX was split off the Sage project and merged with cython-lxml to become Cython.
Cython files have a .pyx extension. At its most basic, Cython code looks exactly like Python code. However, whereas standard Python is dynamically typed, in Cython, types can optionally be provided, allowing for improved performance and allowing loops to be converted into C loops where possible. For example:
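A sketch of such typed Cython code, similar to the prime-finding example in the Cython documentation (the function and limits are illustrative, not the article's original listing):
    def primes(int kmax):              # the argument is converted to a C int
        cdef int n, k, i               # C-typed local variables
        cdef int p[1000]               # a C array
        result = []                    # an ordinary Python list
        if kmax > 1000:
            kmax = 1000
        k = 0
        n = 2
        while k < kmax:
            i = 0
            while i < k and n % p[i] != 0:
                i += 1
            if i == k:                 # n is not divisible by any smaller prime found so far
                p[k] = n
                k += 1
                result.append(n)
            n += 1
        return result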
Example
A sample hello world program for Cython is more complex than in most languages because it interfaces with the Python C API and setuptools or other PEP517-compliant extension building facilities. At least three files are required for a basic project:
A setup.py file to invoke the setuptools build process that generates the extension module
A main python program to load the extension module
Cython source file(s)
The following code listings demonstrate the build and launch process:
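A minimal sketch of such a project, assuming the extension module is called hello and the files are named hello.pyx, setup.py and launch.py (these names are illustrative, not the article's original listings):
    # hello.pyx - the Cython source file, compiled to C and then to an extension module
    def say_hello():
        print("Hello World")

    # setup.py - invokes the setuptools/Cython build process for the extension module
    from setuptools import setup
    from Cython.Build import cythonize

    setup(ext_modules=cythonize("hello.pyx"))

    # launch.py - a plain Python program that loads and uses the extension module
    import hello

    hello.say_hello()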
These commands build and launch the program:
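With the illustrative file names above, the sequence would be roughly:
    python setup.py build_ext --inplace
    python launch.py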
Using in IPython/Jupyter notebook
A more straightforward way to start with Cython is through command-line IPython (or through the in-browser Python console called Jupyter notebook):
which gives a 95-times improvement over the pure-Python version. More details on the subject can be found in the official quickstart page.
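A hedged sketch of that workflow is shown below; the function is illustrative, each snippet goes in its own cell, and the actual speed-up depends on the code being compiled:
    %load_ext Cython

    # pure-Python version, timed for comparison
    def f_py(n):
        total = 0
        for i in range(n):
            total += i
        return total

    %timeit f_py(1_000_000)

    # the same loop compiled by Cython with C-typed variables (a %%cython cell)
    %%cython
    def f_cy(int n):
        cdef long long total = 0
        cdef int i
        for i in range(n):
            total += i
        return total

    %timeit f_cy(1_000_000)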
Uses
Cython is particularly popular among scientific users of Python, where it has "the perfect audience" according to Python creator Guido van Rossum. Of particular note:
The free software SageMath computer algebra system depends on Cython, both for performance and to interface with other libraries.
Significant parts of the scientific computing libraries SciPy, pandas and scikit-learn are written in Cython.
Some high-traffic websites such as Quora use Cython.
Cython's domain is not limited to just numerical computing. For example, the lxml XML toolkit is written mostly in Cython, and like its predecessor Pyrex, Cython is used to provide Python bindings for many C and C++ libraries such as the messaging library ZeroMQ. Cython can also be used to develop parallel programs for multi-core processor machines; this feature makes use of the OpenMP library.
See also
PyPy
Numba
Official website
Cython on GitHub |
Stack Overflow is a question-and-answer website for programmers. It is the flagship site of the Stack Exchange Network. It was created in 2008 by Jeff Atwood and Joel Spolsky. It features questions and answers on certain computer programming topics. It was created to be a more open alternative to earlier question and answer websites such as Experts-Exchange. Stack Overflow was sold to Prosus, a Netherlands-based consumer internet conglomerate, on 2 June 2021 for $1.8 billion.
The website serves as a platform for users to ask and answer questions, and, through membership and active participation, to vote questions and answers up or down, similar to Reddit, and edit questions and answers in a fashion similar to a wiki. Users of Stack Overflow can earn reputation points and "badges"; for example, a person is awarded 10 reputation points for receiving an "up" vote on a question or an answer to a question, and can receive badges for their valued contributions, which represents a gamification of the traditional Q&A website. Users unlock new privileges with an increase in reputation, like the ability to vote, comment, and even edit other people's posts.
As of March 2022 Stack Overflow has over 20 million registered users, and has received over 24 million questions and 35 million answers. The site and similar programming question and answer sites have globally mostly replaced programming books for day-to-day programming reference in the 2000s, and today are an important part of computer programming. Based on the type of tags assigned to questions, the top eight most discussed topics on the site are: JavaScript, Java, C#, PHP, Android, Python, jQuery, and HTML.
History
The website was created by Jeff Atwood and Joel Spolsky in 2008. The name for the website was chosen by voting in April 2008 by readers of Coding Horror, Atwood's popular programming blog. On 31 July 2008, Jeff Atwood sent out invitations encouraging his subscribers to take part in the private beta of the new website, limiting its use to those willing to test out the new software. On 15 September 2008 it was announced that the public beta version was in session and that the general public was now able to use it to seek assistance on programming related issues. The design of the Stack Overflow logo was decided by a voting process.
On 3 May 2010, it was announced that Stack Overflow had raised $6 million in venture capital from a group of investors led by Union Square Ventures.
In 2019, Stack Overflow named Prashanth Chandrasekar as its chief executive officer and Teresa Dietrich as its chief product officer.
In June 2021, Prosus, a Netherlands-based subsidiary of South African media company Naspers, announced a deal to acquire Stack Overflow for $1.8 billion.
Security breach
In early May 2019, an update was deployed to Stack Overflow's development version. It contained a bug which allowed an attacker to grant themselves privileges in accessing the production version of the site. Stack Overflow published on their blog that approximately 184 public network users were affected by this breach, which "could have returned IP address, names, or emails".
2023 controversy over AI-generated content and moderation strike
Content
Stack Overflow only accepts questions about programming that are tightly focused on a specific problem. Questions of a broader nature—or those inviting answers that are inherently a matter of opinion—are usually rejected by the site's users, and marked as closed. The sister site softwareengineering.stackexchange.com is intended to be a venue for broader queries, e.g. general questions about software development.
Closing questions is a main differentiation from other Q&A sites like Yahoo! Answers and a way to prevent low quality questions. The mechanism was overhauled in 2013; questions edited after being put "on hold" now appear in a review queue. Jeff Atwood stated in 2010 that duplicate questions are not seen as a problem but rather they constitute an advantage if such additional questions drive extra traffic to the site by multiplying relevant keyword hits in search engines.
All user-generated content is licensed under the Creative Commons Attribution-ShareAlike license, version 2.5, 3.0, or 4.0, depending on the date the content was contributed.
Statistics
A 2013 study has found that 75% of users only ask one question, 65% only answer one question, and only 8% of users answer more than 5 questions. To empower a wider group of users to ask questions and then answer, Stack Overflow created a mentorship program resulting in users having a 50% increase in score on average. As of 2011, 92% of the questions were answered, in a median time of 11 minutes.
As of August 2012, 443,000 of the 1.3 million registered users had answered at least one question, and of those, approximately 6,000 (0.46% of the total user count) had earned a reputation score greater than 5000. Reputation can be gained fastest by answering questions related to tags with lower expertise density, doing so promptly (in particular being the first one to answer a question), being active during off-peak hours, and contributing to diverse areas.
Technology
Stack Overflow is written in C# using the ASP.NET MVC (Model–View–Controller) framework, and Microsoft SQL Server for the database and the Dapper object-relational mapper used for data access. Unregistered users have access to most of the site's functionality, while users who sign in can gain access to more functionality, such as asking or answering a question, establishing a profile and being able to earn reputation to allow functionality like editing questions and answers without peer review or voting to close a question.
Reception
Stack Overflow won the 2020 Webby People's Voice Award for Community in the category Web.
The site's culture has been criticized for being unfriendly, especially in the context of gender differences in participation and beginners learning computer science.
A study from the University of Maryland found that Android developers that used only Stack Overflow as their programming resource tended to write less secure code than those who used only the official Android developer documentation from Google, while developers using only the official Android documentation tended to write significantly less functional code than those who used only Stack Overflow.
See also
Askbot (free engine)
List of Internet forums
OSQA (Open Source Question and Answer)
Rosetta Code (multi-lingual algorithms)
Official website |
SuperCollider is an environment and programming language originally released in 1996 by James McCartney for real-time audio synthesis and algorithmic composition.
Since then it has been evolving into a system used and further developed by both scientists and artists working with sound. It is a dynamic programming language providing a framework for acoustic research, algorithmic music, interactive programming and live coding.
Originally released under the terms of the GPL-2.0-or-later in 2002, and from version 3.4 under GPL-3.0-or-later, SuperCollider is free and open-source software.
Architecture
Starting with version 3, the SuperCollider environment has been split into two components: a server, scsynth; and a client, sclang. These components communicate using OSC (Open Sound Control).
The SC language combines the object-oriented structure of Smalltalk and features from functional programming languages with a C-family syntax.
The SC Server application supports simple C and C++ plugin APIs, making it easy to write efficient sound algorithms (unit generators), which can then be combined into graphs of calculations. Because all external control in the server happens via OSC, it is possible to use it with other languages or applications.
The SuperCollider synthesis server (scsynth)
SuperCollider's sound generation is bundled into an optimised command-line executable (named scsynth). In most cases it is controlled from within the SuperCollider programming language, but it can be used independently. The audio server has the following features:
Open Sound Control access
Simple ANSI C and C++11 plugin APIs
Supports any number of input and output channels, including massively multichannel setups
Gives access to an ordered tree structure of synthesis nodes which define the order of execution
Bus system which allows dynamically restructuring the signal flow
Buffers for writing and reading
Calculation at different rates depending on the needs: audio rate, control rate, demand rate
Supernova, an independent implementation of the Server architecture, adds multi-processor support through explicit parallel grouping of synthesis nodes.
The SuperCollider programming language (sclang)
The SuperCollider programming language is a dynamically typed, garbage-collected, single-inheritance object-oriented and functional language similar to Smalltalk, with a syntax similar to Lisp or the C programming language. Its architecture strikes a balance between the needs of realtime computation and the flexibility and simplicity of an abstract language. Like many functional languages, it implements functions as first-class objects, which may be composed. Functions and methods can have default argument values and variable length argument lists and can be called with any order of keyword arguments. Closures are lexical, and scope is both lexical and dynamic. Further features typical of functional languages are supported, including creation of closures via partial application (explicit currying), tail call optimization, list comprehensions, and coroutines. Specifics include the implicit expansion of tuples and the stateless pattern system. Its constant-time message lookup and real-time garbage collection allow large systems to be efficient and to handle signal processing flexibly.
By supporting methods of reflective, conversational, and literate programming, SuperCollider makes it relatively easy to find new sound algorithms and to develop custom software as well as custom frameworks. With regards to domain specific knowledge, it is both general (e.g., it allows the representation of properties such as time and pitch in variable degrees of abstraction) and has copious example implementations for specific purposes.
GUI system
The SuperCollider language allows users to construct cross-platform graphical user interfaces for applications. The standard class library with user interface components may be extended by a number of available frameworks. For interactive programming, the system supports programmatic access to rich-text code files. It may be used to generate vector graphics algorithmically.
Interfacing and system support
Clients
Because the server is controlled using Open Sound Control (OSC), a variety of applications can be used to control the server. SuperCollider language environments (see below) are typically used, but other OSC-aware systems can be used such as Pure Data.
"Third-party" clients for the SuperCollider server exist, including rsc3, a Scheme client, hsc3, based on Haskell, ScalaCollider, based on Scala, Overtone, based on Clojure, and Sonic Pi. These are distinct from the development environments mentioned below because they do not provide an interface to SuperCollider's programming language; instead they communicate directly with the audio server and provide their own approaches to facilitating user expression.
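As a hedged illustration of such third-party control, a few lines of Python with the python-osc package can send a server command over UDP to scsynth's default port; the synthdef name "default" is an assumption (it is normally defined by sclang's class library and may not exist on a bare server):
    # Illustrative only: start a synth node on a locally running SuperCollider server.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 57110)   # scsynth's default UDP port
    # /s_new arguments: synthdef name, node ID (-1 = let the server assign one),
    # add action, target group, then control name/value pairs.
    client.send_message("/s_new", ["default", -1, 0, 0, "freq", 440])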
Supported operating systems
SuperCollider runs on macOS, Linux, Windows and FreeBSD. For each of these operating systems there are multiple language-editing environments and clients that can be used with SuperCollider (see below).
It has also been demonstrated that SuperCollider can run on Android and iOS.
Editing environments
SuperCollider code is most commonly edited and used from within its own cross-platform IDE, which is Qt-based and supports Linux, Mac, and Windows.
Other development environments with SuperCollider support include:
Emacs (Linux, Mac, Windows)
Vim (Linux, Mac)
Atom (Linux, Mac, Windows)
gedit (Linux, Windows)
Kate (Linux, Windows)
Code examples
Live coding
As a versatile dynamic programming language, SuperCollider can be used for live coding, i.e. performances which involve the performer modifying and executing code on the fly. Specific kinds of proxies serve as high level placeholders for synthesis objects which can be swapped in and out or modified at runtime. Environments allow sharing and modification of objects and process declarations over networks. Various extension libraries support different abstraction and access to sound objects, e.g. dewdrop_lib allows for the live creation and modification of pseudo-classes and pseudo-objects.
See also
List of music software
Comparison of audio synthesis environments
Official website |
To Live, also titled Lifetimes in some English versions, is a 1994 Chinese drama film directed by Zhang Yimou and written by Lu Wei, based on the novel of the same name by Yu Hua. It is produced by the Shanghai Film Studio and ERA International, starring Ge You and Gong Li, in her 7th collaboration with director Zhang Yimou.
This film is about a couple, portrayed by Ge You and Gong Li, living through tumultuous periods of modern Chinese history, from the Chinese Civil War in the late 1940s to the Cultural Revolution. After going through enormous personal difficulties and tragedies, the couple tenaciously survives and endures, witnessing the vast changes of modern China.
By applying chronological narration to address the social practices of China's ideology, the film demonstrates the difficulties of ordinary Chinese people, reflecting on how the government controls the nation as a collective community without considering its citizens. The portrayal of Chinese people living under social pressures creates the meaning of the film, as their grinding experience shows their resistance and struggles under political changes.
To Live was screened at the 1994 New York Film Festival before eventually receiving a limited release in the United States on November 18, 1994. The film has been used in the United States as a support to teach Chinese history in high schools and colleges.
Having achieved international success with his previous films (Ju Dou and Raise the Red Lantern), director Zhang Yimou's To Live came with high expectations, and it lived up to them, receiving critical acclaim. It is the first Chinese film that had its foreign distribution rights pre-sold. Furthermore, To Live brought home the Grand Prix, the Prize of the Ecumenical Jury, and the Best Actor Award (Ge You) from the 1994 Cannes Film Festival, the highest major international awards Zhang Yimou has ever won.
The film was denied a theatrical release in mainland China by the Chinese State Administration of Radio, Film, and Television due to its critical portrayal of various policies and campaigns of the government, such as the Great Leap Forward and the Cultural Revolution. However, the film has now been made available in China online, through various paid streaming websites (e.g. iQIYI).
Plot
In the 1940s, Xu Fugui (Ge You) is a rich man's son and compulsive gambler, who loses his family property to a man named Long'er. His behavior also causes his long-suffering wife Jiazhen (Gong Li) to leave him, along with their daughter, Fengxia, and their unborn son, Youqing.
Fugui eventually reunites with his wife and children but is forced to start a shadow puppet troupe with a partner named Chunsheng. The Chinese Civil War is occurring at the time, and both Fugui and Chunsheng are conscripted into the Kuomintang's Republic of China armed forces during a performance. Midway through the war, the two are captured by the communist People's Liberation Army and serve by performing their shadow puppet routine for the communist revolutionaries. After the Communist victory, Fugui is able to return home, only to find out that due to a week-long fever, Fengxia has become mute and partially deaf.
Soon after his return, Fugui learns that Long'er burned all his property just to deny the new regime from seizing it. No one helped put out the fire because Long'er was a gentry. He is eventually put on trial and sentenced to execution. As Long'er is pulled away, he recognizes Fugui in the crowd and tries to talk to him as he is dragged toward the execution grounds. Realizing that Long'er's fate would have been his if not for his "misfortune" years earlier, Fugui is filled with fear and runs into an alleyway before hearing five gunshots. He runs home to tell Jiazhen what has happened, and they quickly pull out the certificate stating that Fugui served in the communist People's Liberation Army. Jiazhen assures him they are no longer gentries and will not be killed.
The story moves forward a decade into the future, to the time of the Great Leap Forward. The local town chief enlists everyone to donate all scrap iron to the national drive to produce steel and make weaponry for retaking Taiwan. As an entertainer, Fugui performs for the entire town nightly, and is very smug about his singing abilities.
Soon after, some boys begin picking on Fengxia. Youqing decides to get back at one of the boys by dumping spicy noodles on his head during a communal lunch. Fugui is furious, but Jiazhen stops him and tells him why Youqing acted the way he did. Fugui realizes the love his children have for each other.
The children are exhausted from the hard labor they are doing in the town and try to sleep whenever they can. They eventually get a break during the festivities for meeting the scrap metal quota. The entire village eats dumplings in celebration. In the midst of the family eating, schoolmates of Youqing call for him to come prepare for the District Chief. Jiazhen tries to make Fugui let him sleep but eventually relents and packs her son twenty dumplings for lunch. Fugui carries his son to the school, and tells him to heat the dumplings before eating them, as he will get sick if he eats cold dumplings. He must listen to his father to have a good life.
Later on in the day, the older men and students rush to tell Fugui that his son has been killed by the District Chief. He was sleeping on the other side of a wall that the Chief's Jeep was on, and the car ran into the wall, injuring the Chief and crushing Youqing. Jiazhen, in hysterics, is forbidden to see her son's dead body, and Fugui screams at his son to wake up. Fengxia is silent in the background.
The District Chief visits the family at the grave, only to be revealed as Chunsheng. His attempts to apologize and compensate the family are rejected, particularly by Jiazhen, who tells him he owes her family a life. He returns to his Jeep in a haze, only to see his guard restraining Fengxia from breaking the Jeep's windows. He tells the guard to stop and walks home.
The story moves forward again another decade, to the Cultural Revolution. The village chief advises Fugui's family to burn their puppet drama props, which have been deemed as counter-revolutionary. Fengxia carries out the act, and is oblivious to the Chief's real reason for coming: to discuss a suitor for her. Fengxia is now grown up and her family arranges for her to meet Wan Erxi, a local leader of the Red Guards. Erxi, a man crippled by a workplace accident, fixes her parents' roof and paints depictions of Mao Zedong on their walls with his workmates. He proves to be a kind, gentle man; he and Fengxia fall in love and marry, and she soon becomes pregnant.
Chunsheng, still in the government, visits immediately after the wedding to ask for Jiazhen's forgiveness, but she refuses to acknowledge him. Later, he is branded a reactionary and a capitalist. He comes to tell them his wife has committed suicide and he intends to as well. He has come to give them all his money. Fugui refuses to take it. However, as Chunsheng leaves, Jiazhen commands him to live, reminding him that he still owes them a life.
Months later, during Fengxia's childbirth, her parents and husband accompany her to the county hospital. All doctors have been sent to do hard labor for being over educated, and the students are left as the only ones in charge. Wan Erxi manages to find a doctor to oversee the birth, removing him from confinement, but he is very weak from starvation. Fugui purchases seven steamed buns (mantou) for him and the family decides to name the son Mantou, after the buns. However, Fengxia begins to hemorrhage, and the nurses panic, admitting that they do not know what to do. The family and nurses seek the advice of the doctor, but find that he has overeaten and is semiconscious. The family is helpless, and Fengxia dies from postpartum hemorrhage (severe blood loss). The point is made that the doctor ate 7 buns, but that by drinking too much water at the same time, each bun expanded to the size of 7 buns: therefore Fengxia's death is a result of the doctor's having the equivalent of 49 buns in his belly.
The movie ends six years later, with the family now consisting of Fugui, Jiazhen, their son-in-law Erxi, and grandson Mantou. The family visits the graves of Youqing and Fengxia, where Jiazhen, as per tradition, leaves dumplings for her son. Erxi buys a box full of young chicks for his son, which they decide to keep in the chest formerly used for the shadow puppet props. When Mantou inquires how long it will take for the chicks to grow up, Fugui's response is a more tempered version of something he said earlier in the film. He expresses optimism for his grandson's future, and the film ends with his statement, "and life will get better and better" as the whole family sits down to eat.
Cast
Ge You as Xu Fugui (traditional: 徐福貴; simplified: 徐福贵; pinyin: Xú Fúguì; lit. "Lucky & Rich"):
Fugui came from a rich family, but he was addicted to gambling, so his pregnant wife walked away from him with their daughter. After he gambled away all his possessions, his father died of anger. After a year, his wife came back and they started their life over again. Fugui and Chunsheng together maintained a shadow puppet business for their livelihood, but they were forcibly conscripted by the Kuomintang army, and later by the Communist Party. When Fugui at last got home after the war, everything had changed.
Gong Li as Jiazhen (家珍; pinyin: Jiāzhēn; lit. "Precious Family"), Fugui's wife:
Jiazhen is a hard-working, kind, and virtuous woman. She is a strong spiritual pillar for Fugui. When her husband gambled his possessions away, Jiazhen angrily left him and took their daughter away. But when Fugui had lost everything, and she knew that Fugui had completely quit gambling, she returned to his side to share in weal and woe. She was not after a great fortune, just a peaceful life with her family.
Liu Tianchi as adult Xu Fengxia (traditional: 徐鳳霞; simplified: 徐凤霞; pinyin: Xú Fèngxiá; lit. "Phoenix & Rosy Clouds"), daughter of Fugui and Jiazhen
Xiao Cong as teenage Xu Fengxia;
Zhang Lu as child Xu Fengxia;
When Fengxia was a child, she had a serious fever that was not treated in time, so she became deaf. She married Erxi after she grew up, but when she gave birth, she died for lack of a qualified doctor.
Fei Deng as Xu Youqing (traditional: 徐有慶; simplified: 徐有庆; pinyin: Xú Yǒuqìng; lit. "Full of Celebration"), Fugui and Jiazhen's son:
Youqing was accidentally hit and killed by Chunsheng due to drowsy driving during the Great Leap Forward.
Jiang Wu as Wan Erxi (traditional: 萬二喜; simplified: 万二喜; pinyin: Wàn Èrxǐ; lit. "Double Happiness"), Fengxia's husband:
Erxi is honest, kind, and loyal. He often took care of Fengxia’s parents after Fengxia’s death.
Ni Dahong as Long'er (traditional: 龍二; simplified: 龙二; pinyin: Lóng'èr; lit. "Dragon the Second"):
Long'er used to be the head of a shadow puppet troupe and won all of Fu Gui's property by gambling. After liberation, he was classified as a landlord and his property was ordered to be confiscated. But he refused the confiscation, and set the property on fire. As a result, he was convicted of the crime of "counterrevolutionary sabotage" and sentenced to death by shooting.
Guo Tao as Chunsheng (春生; pinyin: Chūnshēng; lit. "Spring-born"):
Fugui's good friend, they served together as forced conscripts. Chunsheng then joined the People's Liberation Army, and became the district governor. Due to this position, he was criticized as a capitalist roader and endured struggle sessions during the Cultural Revolution.
Production
Development
Zhang Yimou originally intended to adapt Mistake at River's Edge, a thriller written by Yu Hua. Yu gave Zhang a set of all of the works that had been published at that point so Zhang could understand his works. Zhang said when he began reading To Live, one of the works, he was unable to stop reading it. Zhang met with Yu to discuss the script for Mistake at River's Edge, but they kept bringing up To Live. Thus, the two decided to adapt To Live instead.
Casting
Ge You, known for his comedic roles, was chosen by Zhang Yimou to play the title character, Fugui. Known for poker-faced comedy, he was not accustomed to expressing emotional states this character requires. Thus, he was not very confident in himself, even protesting going to the Cannes Film Festival where he would eventually garner a best actor award.
Director
Growing up, Zhang spent his youth during the Cultural Revolution. Having personally experienced what it was like in such a time and setting, he had a very strong understanding of and emotional connection with Chinese culture and society.
As a student of screen studies at university in the country’s capital city, he and his peers were heavily exposed to various movies from across the world and across time. His classmate, who is now the President of the Beijing Film Academy, stated that during their four years in university they went through over 500 films, spanning from Hollywood films of the 1930s to Italian Neo-Realism films. Zhang Yimou stated in a previous interview that even after many years, he still remembered the culture shock he experienced when first exposed to this wide variety of films.
The combination of these two crucial parts of his life provided him with a very strong vision for his films. He had a deep understanding of both the Chinese national outlook and the international outlook of films, and he applied them extensively throughout his career.
Zhang described To Live as the film he felt the strongest connection to because of the Cultural Revolution background in the film. The political background of Zhang’s family was the label “double-counterrevolutionary”, which was the worst kind of counterrevolutionary. Unlike other fifth-generation filmmakers such as Chen Kaige and Tian Zhuangzhuang, Zhang was in a desperate state and could not trace back things that were lost during the Cultural Revolution. Zhang said, “For me, that was an era without hope – I lived in a world of desperation”.
Zhang, in an interview, described how he used different elements that diverged from the original novel. The use of the shadow play and puppet theatre was to emphasize a different visual look. The ending of the film To Live is different from the novel’s because Zhang wanted to pass the censorship in China and gain approval from the audience in mainland China, even though the film has not been publicly screened in China yet. On the other hand, Zhang’s family had suffered enormously during the Cultural Revolution, but, as Zhang stated, they still survived. Thus, he felt that the book’s ending, where everyone in Fugui’s family had died, was not as reasonable. Furthermore, Zhang Yimou chose Ge You, who is famous for his comedic roles, to play the protagonist, Fugui. Ge You actually inspired Zhang to add more humorous elements to the film, making it more reasonable not to kill every character at the end.
Differences from the novel
Moved the setting from rural southern China to a small city in northern China.
Added elements of shadow puppetry, a symbol of wealth that is shown to be at the mercy of others, able to do nothing about its own future.
The second narrator and the ox are not present in the film.
Fugui had a sense of political idealism that he lost by the end of the film.
The novel is a retrospective, but Zhang adapts the film without the remembrance tone.
Zhang eliminated Yu Hua's first-person narration.
Only Fugui survived in the novel, but Fugui, Jiazhen, Erxi, and Mantou all survived in the film.
Adaptation
In the film “To Live”, Zhang Yimou chose not to express the theme of the novel directly, but rather to reduce the number of deaths, change the ways characters die, and cut into the doomed sense of fate in order to relieve the immediate depression the story itself brings to the audience. In the film, these deliberately set dramatic turns highlight the theme that those infinitely small people, as living “others”, can only rely on the instinct to live in order to bear suffering in history, the times, and social torrents. The theme of the novel – the ability to bear suffering and an optimistic attitude toward the world – is hidden in these little people who are helpless against their own fate but still live on strongly.
For a film art that can restore extreme reality and stimulate the audience’s audio-visual senses to the greatest extent, following the original “death fable” style of telling would undoubtedly be depressing and dark, with a certain negative impact on the audience. Therefore, a moderate deviation from and softening of the suffering was a feasible strategy for Zhang Yimou’s film adaptation, weakening the excessive impact of the film’s delivery. Drawing on his personal experiences, Zhang Yimou blended the story into its historical background and used a gentler technique to highlight the helplessness of the characters’ fate under the influence of the times. It indirectly expresses the main idea of the novel and weakens the feeling of suffocation, which makes it an excellent adaptation with “Zhang’s brand”. This work not only tries to restore the original, but also joins another creative subject: the director’s thinking and independent consciousness. The added shadow puppets and the innovative changes to the ending, to a certain extent, made the film, because the main thrust of the novel is shown in these innovations. The translation of text into a new film language, although bearing a strong “Zhang’s brand”, preserves the original spiritual connotation. The film “To Live” has thus become a successful attempt at a comprehensive transformation from literary art to film art.
Release
Limited release in North America
The film opened on September 16, 1994 in Canada. By the time the film opened in the United States on November 18, the film had grossed $67,408. On November 18, it expanded to 4 theaters, including Angelika Film Center and Lincoln Plaza in New York City, where it grossed $32,902, towards a weekend gross of $34,647. It went on to gross $2.3 million in the United States and Canada.
Chinese censorship
This film was banned in China due to a combination of factors. First, it has a critical portrayal of various policies and campaigns of the Communist Government, such as how the protagonists’ tragedies were caused as a result of the Great Leap Forward and the Cultural Revolution. Second, Zhang Yimou and his sponsors entered the film at the Cannes Film Festival without the usual government’s permission, ruffling the feathers of the party. Lastly, this film suffered from the bad timing of its release, following Farewell My Concubine and The Blue Kite, films which cover almost the same subject matter and historical period. Both of these films had alerted the Chinese government, due to their similar critical portrayals of Chinese policies, and made them very cautious and aware of the need to ban any future films that tried to touch on the same topics.
Despite being officially banned, the film was widely available on video in China upon its release and was even shown in some theaters.
Reception
Critical response
To Live received critical acclaim and various critics selected the film in their year-end lists. To Live has an approval rating of 87% on review aggregator website Rotten Tomatoes, based on 23 reviews, and an average rating of 8.3/10. The website's critical consensus states: "To Live (Huo zhe) offers a gut-wrenching overview of Chinese political upheaval through the lens of one family's unforgettable experiences".
There is, among film critics, almost a consensus that To Live is not merely a lament of difficult times, nor a critique of the evils of the totalitarian system, but more “an homage to the characters’ resilience and heroism in their odyssey of survival.” Some scholars further argue that the era’s hostile and chaotic environment is not the story itself, but simply serves as a stage for the story.
Accolades
Year-end lists
4th - Kevin Thomas, Los Angeles Times.
5th - Janet Maslin, The New York Times
5th - James Berardinelli, ReelViews
9th - Michael MacCambridge, Austin American-Statesman
Honourable mention - Mike Clark, USA Today
Honourable mention - Betsy Pickle, Knoxville News-Sentinel
Awards and nominations
Other accolades
Time Out 100 Best Chinese Mainland Films - #8
Included in The New York Times' list of The Best 1000 Movies Ever Made in 2004
included in CNN's list of 18 Best Asian Movie of All Time in 2008
The film ranked 41st in BBC's 2018 list of The 100 greatest foreign language films voted by 109 film critics from 43 countries around the world.
Symbolism
Food
Dumplings: Youqing’s lunch box with dumplings inside was never opened. These dumplings reappeared as an offering on Youqing’s tomb repeatedly. Rather than being eaten and absorbed, the dumplings are now lumps of dough and meat standing as reminders of a life that has been irreparably wasted.
Mantou (steamed wheat bun): When Fengxia is giving birth, Doctor Wang, the only qualified doctor, passes out from eating too many buns after a long period of hunger, and is thus unable to save Fengxia's life. The mantou, meant to ease his hunger, rehydrate and expand within his starvation-shrunken stomach. “Filling the stomach” thus ironically leads to Fengxia's death; the buns do not save a life but instead become an indirect cause of her death.
Noodles: Youqing used his meal as revenge for his sister. Although wasted in a literal sense, the noodles are not wasted in Youqing's mind. Food is not merely for “filling the stomach” or “to live”; similarly, “to live” does not depend solely on food.
Shadow puppetry
The usage of shadow puppetry, which carries a historical and cultural heritage, throughout the movie acts as a parallel to what characters experience in the events that they have to live through.
Recurring lines
In two places of the film, there is a similar line. The version that appears earlier in the film is: “The little chickens will grow to be ducks, the ducks will become geese, and the geese will become oxen, and tomorrow will be better because of communism.” The version that appears later in the film is: “The little chickens will grow to be ducks, the ducks will become geese, and the geese will become oxen, and tomorrow will be better.” This line acts as a picture of the Chinese people’s perseverance in the face of historical hardships, giving the feeling of hope for the audience.
Other facts
Referential meaning of the film: The scene in which the father publicly punishes the son in To Live can be read as a miniaturized re-rendering of the dramatic punishing scene watched by the entire world in June 1989, the Tiananmen Massacre.
There is a scene in which the local town chief calls on everyone to donate their family's iron products to make steel. This implies that the story has jumped forward to the Great Leap Forward. At that period of time, the Communist Party tried to copy the huge success of the industrial revolution in Britain; however, the method was wrong and unhelpful. It shows that when the flood of time comes upon a single family, they have no choice but to be carried forward.
In the second half of To Live, another tragedy befalls Fugui's family: Fengxia dies in childbirth. None of the nurses know how to treat postpartum hemorrhage. It is worth noting that the nurses are saying, "We don't know how to deal with this! We are just students!", while the most qualified doctor has been almost beaten to death. This section of the film suggests the suffering that the Cultural Revolution brought to people. Most doctors were replaced by Red Guards and were accused of being reactionaries. As the sign on the doctor's body shows, he had been put through many struggle sessions by the Red Guards.
See also
List of films banned in China
Censorship in the People's Republic of China
List of Chinese films
List of films featuring the deaf and hard of hearing
Bibliography
Further reading
Giskin, Howard and Bettye S. Walsh. An Introduction to Chinese Culture Through the Family. SUNY Press, 2001. ISBN 0-7914-5048-1, ISBN 978-0-7914-5048-2.
Xiao, Zhiwei. "Reviewed work(s): The Wooden Man's Bride by Ying-Hsiang Wang; Yu Shi; Li Xudong; Huang Jianxin; Yang Zhengguang Farewell My Concubine by Feng Hsu; Chen Kaige; Lillian Lee; Wei Lu The Blue Kite by Tian Zhuangzhuang To Live by Zhang Yimou; Yu Hua; Wei Lu; Fusheng Chin; Funhong Kow; Christophe Tseng." The American Historical Review. Vol. 100, No. 4 (Oct. 1995), pp. 1212–1215
Chow, Rey. "We Endure, Therefore We Are: Survival, Governance, and Zhang Yimou's To Live." South Atlantic Quarterly 95 (1996): 1039-1064.
Shi, Liang. "The Daoist Cosmic Discourse in Zhang Yimou's To Live." Film Criticism, vol. 24, no. 2, 1999, pp. 2–16.
Berry, Michael (2005). Speaking in images : interviews with contemporary Chinese filmmakers. New York: Columbia University Press. ISBN 0-231-13330-8. OCLC 56614243.
To Live at IMDb
To Live at AllMovie
To Live at Rotten Tomatoes
To Live at Box Office Mojo |
Andy On (Chinese: 安志杰; pinyin: Ān Zhìjié; Cantonese Yale: On Chi Kit) (born May 11, 1977) is an American actor and martial artist.
Life and career
On was born on May 11, 1977, in Providence, Rhode Island. He is a native of the US, and can speak English, Mandarin, and a bit of Cantonese.
Andy On did not graduate from high school. He worked as a bartender in Rhode Island, where he was approached by China Star founder Charles Heung and filmmaker Tsui Hark to take over the role of one of Jet Li's film characters, Black Mask, in Black Mask 2: City of Masks (2002). He had no martial arts background, so director Tsui Hark had to send him to the Shaolin Temple for one month of training; he even received some guidance from Jet Li himself. Despite the poor reviews and the bad box office ratings, On continued to act, improving in both martial arts and acting. Like fellow Hong Kong film star Nicholas Tse, On trained in martial arts under Chung Chi Li, the leader of the Jackie Chan Stunt Team. On began training with Chan in 2001 for the film Looking for Mister Perfect/Kei fung dik sau, which was shot before Black Mask 2: City of Masks but released one year later, in 2003; for that first film he also trained in wushu at the Shaolin Temple and studied film fighting under former Jackie Chan Stunt Team leader Nicky Li. On continued his acting career despite many injuries, sustaining a hamstring injury on the set of New Police Story (2004) in one of his two fights against Jackie Chan.
On was nominated for and won the Best New Performer award at the 2004 Hong Kong Film Awards for his role as Tank Wong in Star Runner/Siu nin a Fu (2003), beating favorite Vanness Wu by only one-tenth of the votes. He went on to share the screen with the man who influenced him, Jackie Chan, in New Police Story.
Aside from his filmmaking career, On is also a singer. He has released some tracks, including a duet with Taiwanese pop singer Jolin Tsai called "Angel of Love". His hobbies are martial arts and video games. He continues to train in Wing Chun Kung Fu with good friend, actor, and martial artist Philip Ng, and has studied Thai boxing under former world kickboxing champion and actor Billy Chau in preparation for Star Runner.
During production of the film Three Kingdoms: Resurrection of the Dragon (2008), On was hit in the face by a stuntman during an action sequence. On cut his lip, and, after seven surgeries, sports a small scar on his lip. He considers the scar a "trophy" of his hard work in the film.
Personal life
On had a short-lived relationship with Coco Lee which began in June 2002. He later dated model and actress Jennifer Tse from 2009 to 2013.
During the filming of Zombie Fight Club, On met actress Jessica Cambensy. The two began dating in November 2014. They became engaged in October 2015, a week after Jessica announced she was pregnant. Andy and Jessica became the parents of a 9-pound baby girl they named Tessa in March 2016. Andy and Jessica married on October 15, 2017, in a private ceremony in Hawaii. They later had a son, Elvis, on June 19, 2018.
Filmography
Music videos
2002 – Coco Lee ("有你就够了")
2004 – Miriam Yeung ("处处吻")
2004 – Miriam Yeung ("柳媚花娇")
Official blog of Andy On
Andy On at IMDb
Andy On Chi-Kit at the Hong Kong Movie DataBase
Andy On at the Hong Kong Cinemagic |
The 17th National Congress of the Chinese Communist Party was held in Beijing, China, at the Great Hall of the People from 15 to 21 October 2007. The Congress marked a significant shift in the political direction of the country as CCP General Secretary Hu Jintao solidified his position of leadership. Hu's signature policy doctrine, the Scientific Development Concept, which aimed to create a "Socialist Harmonious Society" through egalitarian wealth distribution and concern for the country's less well-off, was enshrined in the Party Constitution. It was succeeded by the 18th National Congress of the Chinese Communist Party.
The Congress also set the political scene for a smooth transition to the fifth generation of party leadership, introducing rising political stars Xi Jinping and Li Keqiang to the Politburo Standing Committee (PSC), the country's de facto top decision-making body. Vice-President Zeng Qinghong, an important ally of former General Secretary Jiang Zemin, retired from the PSC. Party anti-graft chief Wu Guanzheng and Legal and Political Commission chief Luo Gan also retired due to age, replaced by He Guoqiang and Zhou Yongkang in their respective posts.
Significance
A Communist Party Congress is a significant event in Chinese politics since it nominally decides the leadership of the People's Republic of China. (The Politburo Standing Committee makes major policy decisions for the government to implement, and the National People's Congress the following March elevates its members to top government positions.) The 17th Party Congress was estimated to attract over 1,350 foreign and domestic journalists.
Although the Congress formally elects the Central Committee and Politburo, in practice these positions are negotiated before the congress, and the Congress has never functioned as a deliberative assembly. Nominees to Party positions are invariably elected by wide margins, with a tightly controlled candidate-to-position ratio, leaving room only for symbolic protest votes ("no" or "abstain" votes) that embarrass the party leadership. Despite its symbolic nature, the Congress maintains an important role because it is the occasion at which the results of these deliberations are publicly announced, and at which the PRC leadership faces both domestic and foreign reporters in a press conference.
Since the mid-1980s, the Communist Party has attempted to maintain a smooth and orderly succession and avoid a cult of personality, by having a major shift in personnel every ten years in even-number party congresses, and by promoting people in preparation for this shift in odd-number party congresses. These mechanisms have been institutionalized by mandatory retirement ages, and provisions in both the Party and state constitutions that limit the term of office of officials to two five-year terms.
Effects on incumbent leadership
Based on established convention, Hu Jintao was confirmed for another term as the party's General Secretary, setting the scene for his re-election as state President at the National People's Congress in March 2008. Wen Jiabao, too, retained his seat on the PSC and continued to serve as Premier. In addition, odd-number party congresses have also served as forums in which the top leadership institutionalizes its policy views as additions to party doctrine, in preparation for its retirement at the next party congress. Hu's version of this doctrine, the Scientific Development Concept aimed at developing a "socialist harmonious society", followed Marxism-Leninism, Mao Zedong Thought, Deng Xiaoping Theory and the Three Represents as a guiding ideology in the Party's constitution.
Succession planning
More interesting and unpredictable was the selection of the younger cadres to be promoted to the Politburo, China's de facto ruling body. The youngest person on the Politburo prior to the congress was only two years younger than Hu, and consequently there was widespread speculation that Hu's successor would come not from the members already serving on the Politburo but from the next generation of leaders. Prior to the congress, speculation was rife as to who would be named Hu's successor. Although the subject of succession speculation is largely taboo within the mainland Chinese media, Hong Kong and Taiwan media, as well as international media, predicted that the top candidates would be Xi Jinping and Li Keqiang, then serving as party chiefs of Shanghai and Liaoning, respectively.
Effects on lower party officials
In addition, as people at the top level of the party retire, there is room for younger members of the party to move up one level. Hence the party congress is a time of a general personnel reshuffle, and the climax of negotiations that involve not only the top leadership but practically all significant political positions in Mainland China. Notably, fifth-generation leadership hopefuls Xi Jinping and Li Keqiang will leave vacancies in the top leadership positions of Shanghai and Liaoning. In addition, Hubei, Guangdong, Chongqing and possibly Tianjin will all go through regional leadership changes. Because of the pyramid structure of the party and the existence of mandatory retirement ages, cadres who are not promoted at a party congress are likely to face the end of their political careers. Provincial-level officials see the Congress as a chance for promotion to Beijing. The Congress will also be significant in determining the amount of influence still held by former General Secretary Jiang Zemin, as reflected by the personnel changes.
Although Hong Kong has its own separate political system, the Congress was watched closely in the Special Administrative Region as well. Hong Kong media have often been very vocal in speculating on and reporting events of the Congress, and the political direction set by its decisions will have a large impact on the direction of Hong Kong's development in the coming years. Taiwan, which had recently made another series of moves provoking Beijing, also paid attention to the 17th Congress due to possible variations in the direction determined by the current leadership, even though it was considered very unlikely that China's Taiwan policy would change.
Delegates
2,213 delegates were elected to the Congress through a series of staggered elections in which each level of the party elects delegates to the next higher party congress. An additional 57 veteran (mostly retired) communist leaders were appointed directly as delegates. This system has the effect that the party leadership, through the Organization Department of the Chinese Communist Party, can control the elections and block the election of anyone it finds unacceptable.
The great majority of these are cadres, but about 30% are model workers, and there are about 20 private businesspeople. The number of candidates shortlisted by local Party committees was 15% more than the number of delegates required, allowing local Party Congress members some degree of choice in the election. State media claimed this was "an improvement over past practices" (5% more in 1997 and 10% more in 2002), but noted heavy supervision of the election process by national Party authorities. In addition, elected delegates had to be approved by the 17th Delegate Status Inspection Committee, and the national Central Committee reserved the right to "select some veteran Party members who have quit their leading posts to attend the upcoming Party congress as specially-invited delegates".
Two prominent delegates are known to have died after the election finished in April 2007: Major-General Wang Shaojun and former Vice-Premier Huang Ju.
Elections and Work Reports
Many party positions will be elected, including the following:
The Politburo (about two dozen members elected by the Central Committee; expected to change about half its membership), including its Standing Committee
The wider Central Committee of the Chinese Communist Party (approximately 350 full and alternate members elected by the whole Congress; about 60% to change)
The General Secretary
Secretariat of the CCP Central Committee
The Central Military Commission, including the Supreme Military Command
The Discipline Inspection Commission
Central Committee election
The election process was supervised by Secretariat Secretary Zeng Qinghong, although he himself was not part of the new Central Committee. Most of those elected will take up the equivalent state positions after the National People's Congress in 2008, although key positions and existing vacancies on the State Council may change before and during the Congress. In the Central Committee elections on 21 October 2007, the margin of dropped-off candidates was 8.3%, a three-percentage-point increase over the previous congress. The increased percentage seems to signify greater "inner-Party democracy" and increased power among the delegates (only 204 out of 221 candidates shortlisted for the Central Committee survived the electoral process). In the new Central Committee, 107 of the 204 members are new.
Hu Jintao's work report
General Secretary Hu Jintao's keynote report was prepared by Wen Jiabao. It was delivered to the first session of the Congress on 15 October 2007, lasted well over two hours, and was broadcast on all major television and radio stations in the country. The event marked the first major live public address by Hu since taking over power in 2002. It laid heavy emphasis on Hu's Scientific Development Concept as the current guiding ideology in succession to Deng Xiaoping Theory and the Three Represents, with the goal of continuing Socialism with Chinese characteristics and an eventual socialist harmonious society.
Western media generally concentrated on the lack of novelty in Hu's speech, noting that the report contained no references to political reform; the Communist Party's grip on power was seen as unlikely to waver for some time to come. Domestically, however, Hu's ideology was a novel addition to the CCP's existing ideologies, adding more of a populist focus, even if the political rhetoric in the report was apparent. Hu stressed inner-party democracy and, according to Xinhua, repeated the word "democracy" 60 times in the speech. In addition, Hu received applause a total of over 40 times, well over Jiang's record of 16 five years earlier.
During the speech, former General Secretary Jiang Zemin seemed very tired, yawned constantly, and did not pay much attention. Jiang seldom talked to Wen Jiabao, who was sitting to his left, while Wen paid full attention to Hu's speech for its entire length. Hong Kong media noted that Jiang left the Great Hall without shaking anyone's hand and that no one came up to shake his. Surprisingly, Mao's successor Hua Guofeng also attended the Congress as a delegate. All the surviving members of the 14th and 15th PSCs were present, including former Premiers Li Peng and Zhu Rongji, with the exception of Jiang rival Qiao Shi.
There were work reports from key party leaders and institutions, providing the Party's analysis of the previous quinquennium and its agenda for the next five years. It is possible that the speech will also answer calls for inner-party democracy, i.e. decentralization within the one-party system.
Press conference
After the plenary sessions, there was a rare press conference by the Politburo Standing Committee. Newcomer Li Keqiang looked a bit stiff while Xi Jinping looked shy.
Issues before the Congress
September 2006: Shanghai Party chief Chen Liangyu is arrested on corruption charges. This is perceived as an attack on the Shanghai Gang by the Hu-Wen alliance.
16 October: Xinhua carries an official commentary attacking "cliques" within the Party, perceived as a reference to the Shanghai clique.
February 2007: Party elder Li Rui and retired academic Xie Tao published articles calling for the CCP to become a European-style socialist party; their remarks were condemned by the Party propaganda apparatus.
15 March: Prime Minister Wen Jiabao told foreign journalists he supported further political reform. The remarks were initially omitted from the official transcript, allegedly on the orders of hardline propaganda chief Li Changchun.
28 April: Academic Wan Gang becomes the first non-CCP minister in half a century, on being appointed Minister of Science and Technology
25 June: In a major speech at the Central Party School, General Secretary Hu announces the 'Four Steadfasts': an open-minded attitude, reform and opening up, and a moderately well-off (xiaokang) society by 2020.
July: Chen Liangyu is formally convicted and expelled from the Party.
Mid-August: Top CCP leaders discussed the Congress' decisions at their annual Beidaihe retreat. Some Hong Kong sources claim they decided the shortlists for the new Central Committee and Politburo, while others argued that basic PSC positions were still up for grabs.
19 August: Five national newspapers run identical front pages, all giving prominence to General Secretary Hu.
28 August: A Politburo meeting decides dates of the 17th Party Congress, and the final meeting of the 16th Central Committee.
30 August: A reshuffle promoted Meng Xuenong, former Mayor of Beijing and tuanpai politician, to Governor of Shanxi, whilst ousting Finance Minister Jin Renqing, who was allegedly placed in detention. Zhang Qingwei became the PRC's youngest-ever minister, taking over as Chairman of the Commission for Science, Technology, and Industry for National Defense after a career in the successful space programme. Ma Wen, deputy secretary of the Central Commission for Discipline Inspection (CCDI), added the Ministry of Supervision to her responsibilities.
6 September: Ma Wen gained a third role as head of a newly created National Bureau of Corruption Prevention. Unlike the CCDI, this does not investigate individual cases and is a government, rather than Party, organ. This led to speculation that the Congress will highlight the Hu-Wen leadership's anti-corruption drive.
Mid-September: The Ministry of Public Security conducted its largest-ever crackdown on Web sites and data hosts, a month before the Congress.
18 September: State media announced that the Politburo had submitted an amendment to the CCP Constitution that would entrench Hu's "Scientific Development Concept" ideology alongside the theories of Marxism-Leninism, Mao Zedong Thought, Deng Xiaoping Theory and Jiang Zemin's Three Represents. The announcement stressed the role of General Secretary Hu and phrases associated with him.
19 September: Petitioners in Beijing's Fengtai District ordered to move from their homes due to construction work for the 17th Party Congress; the work was completed by 26 September.
19 September: In a move predicted by the Hong Kong press, Ling Jihua, a tuanpai member and Hu ally, replaced Wang Gang as director of the Central Committee's General Office.
21 September: A People's Daily commentary heralded "new good tidings from Shanghai", adding to speculation that Shanghai chief Xi Jinping was headed for promotion, as the Shanghai Party emerged from the Chen Liangyu scandal.
27 September: U.S.-based Duowei reported that Wu Bangguo had undergone cancer surgery. The same day, he made his first public appearance since 31 August.
29 September: Wu Bangguo was noticeably not present at the Politburo meeting as broadcast by Xinwen Lianbo, while all other Politburo Standing Committee members were given camera time. Also unconventional was the fact that no Politburo Standing Committee members were named except for Hu Jintao.
1 October: Hu Jintao visits Shanghai during National Day, a day after all eight PSC members attended a National Day banquet in Beijing. The move is seen as an affirmation of Shanghai and symbolizes the unity between Shanghai and the central leadership. Hu is also to open the Special Olympics there.
4 October: Duowei makes its final predictions on the nine members of the new Politburo Standing Committee. In ranking order, they are Hu Jintao, Wu Bangguo, Wen Jiabao, Jia Qinglin, Li Changchun, Xi Jinping, Li Keqiang, He Guoqiang and Zhou Yongkang.
9 October: The 7th Plenum of the 16th Central Committee meets to finalize the agenda for the Congress. A key decision to entrench Hu's Scientific Development Concept and the Socialist Harmonious Society is taken after discussion among delegates of the 16th Central Committee.
14 October: Taiwan-based China Times announces their final speculative shortlist for the PSC. The list is identical to Duowei's shortlist 10 days earlier.
The leadership lineup
Hong Kong, Taiwan, and overseas media often speculate on the makeup of the leadership months before a Congress takes place. Before the 16th Party Congress, the speculation two months prior on the nine members of the Politburo Standing Committee (PSC) was entirely accurate.
Leaving the Politburo
Zeng Qinghong, CCP Secretariat Secretary, Vice-President, ranked 5th in Politburo Standing Committee, is out of the 17th Central Committee, likely due to age. Zeng's departure also signals the solidification of Hu Jintao's power.
Wu Guanzheng, anti-corruption chief, ranked 7th in PSC, due to age.
Luo Gan, Political and Legislative Affairs Committee Secretary, ranked 9th in the PSC, due to age.
Wu Yi, Vice-Premier, China's "iron-lady", the only woman in the 16th Politburo, due to age.
Zeng Peiyan, Vice-Premier, ranked 3rd, due to age.
Cao Gangchuan, Minister of Defence, due to age.
Politburo Standing Committee
The newly formed Politburo Standing Committee consisted of (in order ranking) Hu Jintao, Wu Bangguo, Wen Jiabao, Jia Qinglin, Li Changchun, from the 16th Central Committee, in addition to four newcomers:
Shanghai party chief Xi Jinping, 54
Liaoning party chief Li Keqiang, 52
CCP Organization Department head He Guoqiang, 64
Minister of Public Security Zhou Yongkang, 65
The Politburo
The Politburo is made up of a wider range of cadres whose average age is generally younger than that of the PSC, some of whom are slated for promotion at the 18th Party Congress. It has been noted that the Politburo represents a power balance between Hu's tuanpai, Jiang's Shanghai clique, and the Crown Prince Party.
In stroke order of surnames
Xi Jinping, Top-ranked Secretary of CCP Secretariat, Vice-President, Vice-Chairman of the Central Military Commission, President of the Central Party School
Wang Gang, Vice-Chair of CPPCC National Committee
Wang Lequan, Party chief of Xinjiang, later Deputy Secretary of the Political and Legislative Affairs Committee
Wang Zhaoguo, Vice-Chairman of National People's Congress, Chair of the All-China Federation of Trade Unions
Wang Qishan, Vice-Premier
Hui Liangyu, Vice-Premier
Liu Qi, Party chief of Beijing, head of Beijing Olympics organizing committee
Liu Yunshan, Secretary of CCP Central Secretariat, Head of the CCP Propaganda Department
Liu Yandong, State Councilor
Li Changchun, Chairman of the Central Guidance Commission for Building Spiritual Civilization
Li Keqiang, First Vice-Premier
Li Yuanchao, Secretary in CCP Central Secretariat, CCP Organization Department head
Wu Bangguo, Chairman of the Standing Committee of the National People's Congress
Wang Yang, Party chief of Guangdong
Zhang Gaoli, Party chief of Tianjin
Zhang Dejiang, Vice-Premier, Party chief of Chongqing
Zhou Yongkang, Secretary of the Political and Legislative Affairs Committee
Hu Jintao, CCP General Secretary, PRC President, Chairman of the Central Military Commission
Yu Zhengsheng, Party chief of Shanghai
He Guoqiang, Secretary of the Central Commission for Discipline Inspection
Jia Qinglin, Chairman of the National Committee of the Chinese People's Political Consultative Conference
Xu Caihou, Vice-Chairman of Central Military Commission
Guo Boxiong, Vice-Chairman of Central Military Commission
Wen Jiabao, Premier of the State Council
Bo Xilai, Party chief of Chongqing (dismissed April 2012)
Other Politburo places
Central Committee bureaucrat Wang Gang is expected to become a figurehead on the NPC or CPPCC (and implicitly a Politburo member), although he has an outside chance of a PSC place.
Wang Zhaoguo is Wu Bangguo's deputy at the NPC and Hu's former boss in the CYL. He has recently been considered to have an outside chance of a PSC place, given his age.
Regional Positions
Minister of Commerce Bo Xilai, after some reluctance following the Congress, took over as Chongqing Party Chief.
Hubei Party chief Yu Zhengsheng took over Shanghai as the municipality's Communist Party secretary.
Chongqing Party chief Wang Yang took over as Guangdong Party chief.
Beijing Mayor Wang Qishan left his municipal post to become Vice-Premier.
Central Military Commission positions
Chen Bingde may have already replaced Liang Guanglie as the PLA's Chief of General Staff.
Ministerial positions
Early speculation suggested a wide field for Vice-Premier responsible for the economy, namely National Development and Reform Commission (NDRC) chief Ma Kai, SASAC chief Kelin Ding, PBoC chief Zhou Xiaochuan, MOFCOM chief Bo Xilai, State Council official Lou Jiwei, Beijing Mayor Wang Qishan, Tianjin Mayor Dai Xianglong, Shanghai Mayor Han Zheng and Chongqing Party chief Wang Yang.
Former Shanxi Governor Yu Youjun is tipped for Minister of Culture.
State council positions will be confirmed at the 2008 National People's Congress.
See also
History of the People's Republic of China (2002–present)
Li Lianyu, a local official who mobilized a mass welcoming ceremony for himself upon his return from the 17th Party Congress
Shakeup in Beijing: Who's In, Who's Out Chi-Chu Tschang (Business Week)
The 17th Party Congress and Beyond
Official Website of the 17th National Congress of the Chinese Communist Party
China Daily |
Datalog is a declarative logic programming language. While it is syntactically a subset of Prolog, Datalog generally uses a bottom-up rather than top-down evaluation model. This difference yields significantly different behavior and properties from Prolog. It is often used as a query language for deductive databases. Datalog has been applied to problems in data integration, networking, program analysis, and more.
Example
A Datalog program consists of facts, which are statements that are held to be true, and rules, which say how to deduce new facts from known facts. For example, here are two facts that mean xerces is a parent of brooke and brooke is a parent of damocles:
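parent(xerces, brooke).
parent(brooke, damocles).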
The names are written in lowercase because strings beginning with an uppercase letter stand for variables. Here are two rules:
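ancestor(X, Y) :- parent(X, Y).
ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).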
The :- symbol is read as "if", and the comma is read "and", so these rules mean:
X is an ancestor of Y if X is a parent of Y.
X is an ancestor of Y if X is a parent of some Z, and Z is an ancestor of Y.
The meaning of a program is defined to be the set of all of the facts that can be deduced using the initial facts and the rules. This program's meaning is given by the following facts:
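parent(xerces, brooke).
parent(brooke, damocles).
ancestor(xerces, brooke).
ancestor(brooke, damocles).
ancestor(xerces, damocles).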
Some Datalog implementations don't deduce all possible facts, but instead answer queries:
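Query syntax varies between implementations; one common form for such a query, given the program above, is:

?- ancestor(xerces, X).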
This query asks: Who are all the X that xerces is an ancestor of? For this example, it would return brooke and damocles.
Comparison to relational databases
The non-recursive subset of Datalog is closely related to query languages for relational databases, such as SQL: Datalog relations correspond to tables, facts to rows, and non-recursive rules to queries or views over them.
More formally, non-recursive Datalog corresponds precisely to unions of conjunctive queries, or equivalently, negation-free relational algebra.
Syntax
A Datalog program consists of a list of rules (Horn clauses). If constant and variable are two countable sets of constants and variables respectively and relation is a countable set of predicate symbols, then a BNF grammar for Datalog programs can be sketched along the following lines:
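<program> ::= <rule> <program> | <rule>
<rule> ::= <atom> ":-" <atom-list> "."
<atom> ::= <relation> "(" <term-list> ")"
<atom-list> ::= <atom> | <atom> "," <atom-list> | ""
<term> ::= <constant> | <variable>
<term-list> ::= <term> | <term> "," <term-list>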
Atoms are also referred to as literals. The atom to the left of the :- symbol is called the head of the rule; the atoms to the right are the body. Every Datalog program must satisfy the condition that every variable that appears in the head of a rule also appears in the body (this condition is sometimes called the range restriction).
There are two common conventions for variable names: capitalizing variables, or prefixing them with a question mark ?.
Note that under this definition, Datalog does not include negation nor aggregates; see § Extensions for more information about those constructs.
Rules with empty bodies are called facts. For example, the following rule is a fact:
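parent(xerces, brooke) :- .

The body after the :- symbol is empty, so the head holds unconditionally.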
The set of facts is called the extensional database or EDB of the Datalog program. The set of tuples computed by evaluating the Datalog program is called the intensional database or IDB.
Syntactic sugar
Many implementations of logic programming extend the above grammar to allow writing facts without the :-, like so:
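parent(xerces, brooke).

This is the same fact as before, with the :- and the empty body omitted.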
Some also allow writing 0-ary relations without parentheses, like so:
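raining.
wet :- raining.

(Here the 0-ary relations raining and wet are hypothetical examples, written without the empty parentheses that the grammar above would otherwise require.)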
These are merely abbreviations (syntactic sugar); they have no impact on the semantics of the program.
Semantics
There are three widely-used approaches to the semantics of Datalog programs: model-theoretic, fixed-point, and proof-theoretic. These three approaches can be proven equivalent.
An atom is called ground if none of its subterms are variables. Intuitively, each of the semantics defines the meaning of a program to be the set of all ground atoms that can be deduced from the rules of the program, starting from the facts.
Model theoretic
A rule is called ground if all of its atoms (head and body) are ground. A ground rule R1 is a ground instance of another rule R2 if R1 is the result of a substitution of constants for all the variables in R2. The Herbrand base of a Datalog program is the set of all ground atoms that can be made with the constants appearing in the program. The Herbrand model of a Datalog program is the smallest subset of the Herbrand base such that, for each ground instance of each rule in the program, if the atoms in the body of the rule are in the set, then so is the head. The model-theoretic semantics define the minimal Herbrand model to be the meaning of the program.
Fixed-point
Let I be the power set of the Herbrand base of a program P. The immediate consequence operator for P is a map T from I to I that adds all of the new ground atoms that can be derived from the rules of the program in a single step. The least fixed point semantics define the least fixed point of T to be the meaning of the program; this coincides with the minimal Herbrand model.
The fixpoint semantics suggest an algorithm for computing the minimal model: start with the set of ground facts in the program, then repeatedly add consequences of the rules until a fixpoint is reached. This algorithm is called naïve evaluation.
Proof-theoretic
The proof-theoretic semantics defines the meaning of a Datalog program to be the set of facts with corresponding proof trees. Intuitively, a proof tree shows how to derive a fact from the facts and rules of a program.
One might be interested in knowing whether or not a particular ground atom appears in the minimal Herbrand model of a Datalog program, perhaps without caring much about the rest of the model. A top-down reading of the proof trees described above suggests an algorithm for computing the results of such queries. This reading informs the SLD resolution algorithm, which forms the basis for the evaluation of Prolog.
Evaluation
There are many different ways to evaluate a Datalog program, with different performance characteristics.
Bottom-up evaluation strategies
Bottom-up evaluation strategies start with the facts in the program and repeatedly apply the rules until either some goal or query is established, or until the complete minimal model of the program is produced.
Naïve evaluation
Naïve evaluation mirrors the fixpoint semantics for Datalog programs. Naïve evaluation uses a set of "known facts", which is initialized to the facts in the program. It proceeds by repeatedly enumerating all ground instances of each rule in the program. If each atom in the body of the ground instance is in the set of known facts, then the head atom is added to the set of known facts. This process is repeated until a fixed point is reached, and no more facts may be deduced. Naïve evaluation produces the entire minimal model of the program.
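As an illustration using the ancestor program above, naïve evaluation proceeds roughly as follows. The known facts start as parent(xerces, brooke) and parent(brooke, damocles). The first pass adds ancestor(xerces, brooke) and ancestor(brooke, damocles) via the first rule. The second pass adds ancestor(xerces, damocles) via the second rule. A third pass derives nothing new, so the fixed point has been reached and the minimal model consists of these five facts.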
Semi-naïve evaluation
Semi-naïve evaluation is a bottom-up evaluation strategy that can be asymptotically faster than naïve evaluation.
Performance considerations
Naïve and semi-naïve evaluation both evaluate recursive Datalog rules by repeatedly applying them to a set of known facts until a fixed point is reached. In each iteration, rules are only run for "one step", i.e., non-recursively. As mentioned above, each non-recursive Datalog rule corresponds precisely to a conjunctive query. Therefore, many of the techniques from database theory used to speed up conjunctive queries are applicable to bottom-up evaluation of Datalog, such as
Index selection and data structures (hash table, B-tree, etc.)
Query optimization, especially join order
Join algorithms
Many such techniques are implemented in modern bottom-up Datalog engines such as Soufflé. Some Datalog engines integrate SQL databases directly.
Bottom-up evaluation of Datalog is also amenable to parallelization. Parallel Datalog engines are generally divided into two paradigms:
In the shared-memory, multi-core setting, Datalog engines execute on a single node. Coordination between threads may be achieved using locking or lock-free data structures. Examples include Datalog engines using OpenMP.
In the shared-nothing setting, Datalog engines execute on a cluster of nodes. Such engines generally operate by splitting relations into disjoint subsets based on a hash function, performing computations (joins) on each node, and then exchanging newly-generated tuples over the network. Examples include Datalog engines based on MPI, Hadoop, and Spark.
Top-down evaluation strategies
SLD resolution is sound and complete for Datalog programs.
Magic sets
Top-down evaluation strategies begin with a query or goal. Bottom-up evaluation strategies can answer queries by computing the entire minimal model and matching the query against it, but this can be inefficient if the answer only depends on a small subset of the entire model. The magic sets algorithm takes a Datalog program and a query, and produces a more efficient program that computes the same answer to the query while still using bottom-up evaluation. A variant of the magic sets algorithm has been shown to produce programs that, when evaluated using semi-naïve evaluation, are as efficient as top-down evaluation.
Complexity
The decision problem formulation of Datalog evaluation is as follows: Given a Datalog program P split into a set of facts (EDB) E and a set of rules R, and an interpretation A, is A in the minimal model of P? In this formulation, there are three variations of the computational complexity of evaluating Datalog programs:
The data complexity is the complexity of the decision problem when A and E are inputs and R is fixed.
The program complexity is the complexity of the decision problem when A and R are inputs and E is fixed.
The combined complexity is the complexity of the decision problem when A, E, and R are inputs.
With respect to data complexity, the decision problem for Datalog is P-complete. With respect to program complexity, the decision problem is EXPTIME-complete. In particular, evaluating Datalog programs always terminates; Datalog is not Turing-complete.
Some extensions to Datalog do not preserve these complexity bounds. Extensions implemented in some Datalog engines, such as algebraic data types, can even make the resulting language Turing-complete.
Extensions
Several extensions have been made to Datalog, e.g., to support negation, aggregate functions, inequalities, to allow object-oriented programming, or to allow disjunctions as heads of clauses. These extensions have significant impacts on the language's semantics and on the implementation of a corresponding interpreter.
Datalog is a syntactic subset of Prolog, disjunctive Datalog, answer set programming, DatalogZ, and constraint logic programming. When evaluated as an answer set program, a Datalog program yields a single answer set, which is exactly its minimal model.
Many implementations of Datalog extend Datalog with additional features; see § Datalog engines for more information.
Aggregation
Datalog can be extended to support aggregate functions.
Notable Datalog engines that implement aggregation include:
LogicBlox
Soufflé
Negation
Adding negation to Datalog complicates its semantics, leading to whole new languages and strategies for evaluation. For example, the language that results from adding negation with the stable model semantics is exactly answer set programming.
Stratified negation can be added to Datalog while retaining its model-theoretic and fixed point semantics; a small example is sketched after the list below. Notable Datalog engines that implement stratified negation include:
LogicBlox
Soufflé
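As a sketch of how stratification works (the relations edge, node, reachable, and unreachable are hypothetical, and negation syntax varies between engines), a program can compute a relation in one stratum and negate it in a later one:

reachable(X, Y) :- edge(X, Y).
reachable(X, Y) :- edge(X, Z), reachable(Z, Y).
node(X) :- edge(X, Y).
node(Y) :- edge(X, Y).
unreachable(X, Y) :- node(X), node(Y), not reachable(X, Y).

Here unreachable depends on reachable only through negation, so the program can be evaluated stratum by stratum: reachable and node are computed to a fixed point first, and unreachable is then derived from their complement.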
Comparison to Prolog
Unlike in Prolog, statements of a Datalog program can be stated in any order. Datalog does not have Prolog's cut operator. This makes Datalog a fully declarative language.
In contrast to Prolog, Datalog
disallows complex terms as arguments of predicates, e.g., p(x, y) is admissible but not p(f(x), y),
disallows negation,
requires that every variable that appears in the head of a clause also appear in a literal in the body of the clause.
This article deals primarily with Datalog without negation (see also Syntax and semantics of logic programming § Extending Datalog with negation). However, stratified negation is a common addition to Datalog; the following list contrasts Prolog with Datalog with stratified negation. Datalog with stratified negation
also disallows complex terms as arguments of predicates,
requires that every variable that appears in the head of a clause also appear in a positive (i.e., not negated) atom in the body of the clause,
requires that every variable appearing in a negative literal in the body of a clause also appear in some positive literal in the body of the clause.
Expressiveness
The boundedness problem for Datalog asks, given a Datalog program, whether it is bounded, i.e., the maximal recursion depth reached when evaluating the program on an input database can be bounded by some constant. In other words, this question asks whether the Datalog program could be rewritten as a nonrecursive Datalog program. Solving the boundedness problem on arbitrary Datalog programs is undecidable, but it can be made decidable by restricting to some fragments of Datalog.
Datalog engines
Systems that implement languages inspired by Datalog, whether compilers, interpreters, libraries, or embedded DSLs, are referred to as Datalog engines. Datalog engines often implement extensions of Datalog, extending it with additional data types, foreign function interfaces, or support for user-defined lattices. Such extensions may allow for writing non-terminating or otherwise ill-defined programs.
Uses and influence
Datalog is quite limited in its expressivity. It is not Turing-complete, and doesn't include basic data types such as integers or strings. This parsimony is appealing from a theoretical standpoint, but it means Datalog per se is rarely used as a programming language or knowledge representation language. Most Datalog engines implement substantial extensions of Datalog. However, Datalog has a strong influence on such implementations, and many authors don't bother to distinguish them from Datalog as presented in this article. Accordingly, the applications discussed in this section include applications of realistic implementations of Datalog-based languages.
Datalog has been applied to problems in data integration, information extraction, networking, security, cloud computing and machine learning. Google has developed an extension to Datalog for big data processing.
Datalog has seen application in static program analysis. The Soufflé dialect has been used to write pointer analyses for Java and a control-flow analysis for Scheme. Datalog has been integrated with SMT solvers to make it easier to write certain static analyses. The Flix dialect is also suited to writing static program analyses.
Some widely used database systems include ideas and algorithms developed for Datalog. For example, the SQL:1999 standard includes recursive queries, and the Magic Sets algorithm (initially developed for the faster evaluation of Datalog queries) is implemented in IBM's DB2.
History
The origins of Datalog date back to the beginning of logic programming, but it became prominent as a separate area around 1977 when Hervé Gallaire and Jack Minker organized a workshop on logic and databases. David Maier is credited with coining the term Datalog.
See also
Answer set programming
Conjunctive query
DatalogZ
Disjunctive Datalog
Flix
SWRL
Tuple-generating dependency (TGD), a language for integrity constraints on relational databases with a similar syntax to Datalog
Ceri, S.; Gottlob, G.; Tanca, L. (March 1989). "What you always wanted to know about Datalog (and never dared to ask)" (PDF). IEEE Transactions on Knowledge and Data Engineering. 1 (1): 146–166. CiteSeerX 10.1.1.210.1118. doi:10.1109/69.43410. ISSN 1041-4347.
Abiteboul, S. (1995). Foundations of databases. Richard Hull, Victor Vianu. Reading, Mass.: Addison-Wesley. ISBN 0-201-53771-0. OCLC 30546436. |
dBase (also stylized dBASE) was one of the first database management systems for microcomputers and the most successful in its day. The dBase system includes the core database engine, a query system, a forms engine, and a programming language that ties all of these components together.
Originally released as Vulcan for PTDOS in 1978, the CP/M port caught the attention of Ashton-Tate in 1980. They licensed it and re-released it as dBASE II, and later ported it to IBM PC computers running DOS. On the PC platform, in particular, dBase became one of the best-selling software titles for a number of years. A major upgrade was released as dBase III, and ported to a wider variety of platforms, adding UNIX and VMS. By the mid-1980s, Ashton-Tate was one of the "big three" software publishers in the early business software market, the others being Lotus Development and WordPerfect.
Starting in the mid-1980s, several companies produced their own variations on the dBase product and especially the dBase programming language. These included FoxBASE+ (later renamed FoxPro), Clipper, and other so-called xBase products. Many of these were technically stronger than dBase, but could not push it aside in the market. This changed with the poor reception of dBase IV, whose design and stability were so lacking that many users switched to other products.
In the early 1990s, xBase products constituted the leading database platform for implementing business applications. The size and impact of the xBase market did not go unnoticed, and within one year, the three top xBase firms were acquired by larger software companies:
Borland purchased Ashton-Tate
Microsoft bought Fox Software
Computer Associates acquired Nantucket
By the opening decade of the 21st century, most of the original xBase products had faded from prominence and many had disappeared entirely. Products known as dBase still exist, owned by dBase LLC.
History
Origins
In the late 1960s, Fred Thompson at the Jet Propulsion Laboratory (JPL) was using a Tymshare product named RETRIEVE to manage a database of electronic calculators, which were at that time very expensive products. In 1971, Thompson collaborated with Jack Hatfield, a programmer at JPL, to write an enhanced version of RETRIEVE, which became the JPLDIS project. JPLDIS was written in FORTRAN on the UNIVAC 1108 mainframe, and was presented publicly in 1973. When Hatfield left JPL in 1974, Jeb Long took over his role.
While working at JPL as a contractor, C. Wayne Ratliff entered the office football pool. He had no interest in the game as such, but felt he could win the pool by processing the post-game statistics found in newspapers. In order to do this, he turned his attention to a database system and, by chance, came across the documentation for JPLDIS. He used this as the basis for a port to PTDOS on his kit-built IMSAI 8080 microcomputer, and called the resulting system Vulcan (after the home planet of Mr. Spock on Star Trek).
Ashton-Tate
George Tate and Hal Lashlee had built two successful start-up companies: Discount Software, which was one of the first to sell PC software programs through the mail to consumers, and Software Distributors, which was one of the first wholesale distributors of PC software in the world. They entered into an agreement with Ratliff to market Vulcan, and formed Ashton-Tate (the name Ashton was chosen purely for marketing reasons) to do so. Ratliff ported Vulcan from PTDOS to CP/M. Hal Pawluk, who handled marketing for the nascent company, decided to change the name to the more business-like "dBase". Pawluk devised the use of lower case "d" and all-caps "BASE" to create a distinctive name. Pawluk suggested calling the new product version two ("II") to suggest it was less buggy than an initial release. dBase II was the result and became a standard CP/M application along with WordStar and SuperCalc.
In 1981, IBM commissioned a port of dBase for the then-in-development PC. The resultant program was one of the initial pieces of software available when the IBM PC went on sale in the fall of 1981. dBase was one of a few "professional" programs on the platform then, and became a huge success. The customer base included not only end-users, but an increasing number of "value added resellers", or VARs, who purchased dBase, wrote applications with it, and sold the completed systems to their customers. The May 1983 release of dBase II RunTime further entrenched dBase in the VAR market by allowing the VARs to deploy their products using the lower-cost RunTime system.
Although some critics stated that dBase was difficult to learn, its success created many opportunities for third parties. By 1984, more than 1,000 companies offered dBase-related application development, libraries of code to add functionality, applications using dBase II Runtime, consulting, training, and how-to books. A company in San Diego (today known as Advisor Media) premiered a magazine devoted to the professional use of dBase, Data Based Advisor; its circulation exceeded 35,000 after eight months. All of these activities fueled the rapid rise of dBase as the leading product of its type.
dBase III
As platforms and operating systems proliferated in the early 1980s, the company found it difficult to port the assembly language-based dBase to target systems. This led to a rewrite of the platform in the C programming language, using automated code conversion tools. The resulting code worked, but was essentially undocumented and inhuman in syntax, a problem that would prove to be serious in the future.
In May 1984, the rewritten dBase III was released. Although reviewers widely panned its lowered performance, the product was otherwise well reviewed. After a few rapid upgrades, the system stabilized and was once again a best-seller throughout the 1980s, and formed the famous "application trio" of PC compatibles (dBase, Lotus 1-2-3, and WordPerfect). By the fall of 1984, the company had over 500 employees and was taking in US$40 million a year in sales (equivalent to $113 million in 2022), the vast majority from dBase products.
Cloning
There was also an unauthorized clone of dBase III called Rebus in the Soviet Union. Its adaptation to Russian amounted to mechanically replacing the product name, russifying the help files, and correcting the sorting tables for the Russian language.
dBase IV
Introduced in 1988, after delays, dBase IV had "more than 300 new or improved features". By then, FoxPro had made inroads, and even dBase IV's support for Query by Example and SQL was not enough. Along the way, Borland, which had bought Ashton-Tate, brought out a revised dBase IV in 1992, but with a focus described as "designed for programmers" rather than "for ordinary users".
Recent version history
dBASE product range
dBase, LLC products
dBASE PLUS: A Windows-based database.
dBASE 2019: Successor of dBASE PLUS 12. Requires Windows Vista or later; only 32-bit Windows Vista is supported, while both 32- and 64-bit Windows Server 2012 are supported.
dBASE CLASSIC: dBASE V for DOS without DOS emulator, originally found in dBASE PLUS 9. Also includes original documentation included in the installation in PDF format.
dbDOS: An MS-DOS emulator.
dbDOS PRO: Successor of dbDOS 1.5.1; starts with version 2.
dbDOS Open Source: Open source version of dbDOS.
dbDOSv: Successor of dbDOS PRO 7.
dbfUtilities: .dbf file processing utilities.
dbfCompare: Compares differences between tables.
dbfExport: Converts .dbf table to other file formats.
dbfImport: Converts other file formats into .dbf format.
dbfInspect: Read, modify, insert, delete, pack, and print using any dBASE IV and later tables.
SQL Utilities
dumpSQL: Extracts all of the records of an existing table into a new table in the supported file formats.
moveSQL: Transfers all of the records of an existing table into a new table in the supported database formats.
dBase / xBase programming language
For handling data, dBase provided detailed procedural commands and functions to
open and traverse records in data files (e.g., USE, SKIP, GO TOP, GO BOTTOM, and GO recno),
manipulate field values (REPLACE and STORE), and
manipulate text strings (e.g., STR() and SUBSTR()), numbers, and dates.
dBase is an application development language and integrated navigational database management system which Ashton-Tate labeled as "relational", although it did not meet the criteria defined by Dr. Edgar F. Codd's relational model. It used a runtime interpreter architecture, which allowed the user to execute commands by typing them in a command line "dot prompt". Similarly, program scripts (text files with PRG extensions) ran in the interpreter (with the DO command).
Over time, Ashton-Tate's competitors introduced so-called clone products and compilers that had more robust programming features such as user-defined functions (UDFs) and arrays for complex data handling. Ashton-Tate and its competitors also began to incorporate SQL, the ANSI/ISO standard language for creating, modifying, and retrieving data stored in relational database management systems.
Eventually, it became clear that the dBase world had expanded far beyond Ashton-Tate. A "third-party" community formed, consisting of Fox Software, Nantucket, Alpha Software, Data Based Advisor Magazine, SBT and other application development firms, and major developer groups. Paperback Software launched the flexible and fast VP-Info with a unique built-in compiler. The community of dBase variants sought to create a dBase language standard, supported by IEEE committee X3J19 and initiative IEEE 1192; the term "xBase" was used to distinguish the language from the Ashton-Tate product.
Ashton-Tate saw the rise of xBase as an illegal threat to its proprietary technology. In 1988 it filed suit against Fox Software and Santa Cruz Operation (SCO) for copying dBase's "structure and sequence" in FoxBASE+ (SCO marketed XENIX and UNIX versions of the Fox products). In December 1990, U.S. District judge Terry Hatter Jr. dismissed Ashton-Tate's lawsuit and invalidated Ashton-Tate's copyrights for not disclosing that dBase had been based, in part, on the public-domain JPLDIS. In October 1991, while the case was still under appeal, Borland International acquired Ashton-Tate, and as one of the merger's provisions the U.S. Justice Department required Borland to end the lawsuit against Fox and allow other companies to use the dBase/xBase language without the threat of legal action.
By the end of 1992, major software companies raised the stakes by acquiring the leading xBase products. Borland acquired Ashton-Tate's dBase products (and later WordTech's xBase products), Microsoft acquired Fox Software's FoxBASE+ and FoxPro products, and Computer Associates acquired Nantucket's Clipper products. Advisor Media built on its Data Based Advisor magazine by launching FoxPro Advisor and Clipper Advisor (and other) developer magazines and journals, and live conferences for developers. However, a planned dBase Advisor Magazine was aborted due to the market failure of dBase IV.
By the year 2000, the xBase market had faded as developers shifted to new database systems and programming languages. Computer Associates (later known as CA) eventually dropped Clipper. Borland restructured and sold dBase. Of the major acquirers, Microsoft stuck with xBase the longest, evolving FoxPro into Visual FoxPro, but the product is no longer offered. In 2006 Advisor Media stopped its last-surviving xBase magazine, FoxPro Advisor. The era of xBase dominance has ended, but there are still xBase products.
The dBase product line is now owned by dBase LLC, which currently sells dBASE PLUS 12.3 and a DOS-based dBASE CLASSIC (with dbDOS to run it on 64-bit Windows). Some open-source implementations are available, such as Harbour, xHarbour, and Clip.
In 2015, a new member of the xBase family was born: the XSharp (X#) language, maintained as an open-source project with a compiler, its own IDE, and Microsoft Visual Studio integration. XSharp produces .NET assemblies and uses the familiar xBase language. The XSharp product was originally created by a group of four enthusiasts who had worked on the Vulcan.NET project in the past. The compiler is built on top of the Roslyn compiler code, the code behind Microsoft's C# and VB compilers.
Programming examples
Today, implementations of the dBase language have expanded to include many features targeted for business applications, including object-oriented programming, manipulation of remote and distributed data via SQL, Internet functionality, and interaction with modern devices.
The following example opens an employee table ("empl"), gives every manager who supervises 1 or more employees a 10-percent raise, and then prints the names and salaries.
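A minimal sketch of such a program in dBase-style syntax (the field names supervises, fname, lname, and salary are illustrative, not taken from any actual schema) might look like:

USE empl
* give a 10% raise to everyone who supervises at least one employee
REPLACE ALL salary WITH salary * 1.10 FOR supervises > 0
* print names and salaries
LIST fname, lname, salary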
Note how one does not have to keep mentioning the table name: the assumed ("current") table stays the same until told otherwise. Because of its origins as an interpreted interactive language, dBase used a variety of contextual techniques to reduce the amount of typing needed. This facilitated incremental, interactive development but also made larger-scale modular programming difficult. A tenet of modular programming is that the correct execution of a program module must not be affected by external factors such as the state of memory variables or tables being manipulated in other program modules. Because dBase was not designed with this in mind, developers had to be careful when porting (borrowing) programming code that assumed a certain context. Work-area-specific references were still possible using the arrow notation ("B->customer") so that multiple tables could be manipulated at the same time. In addition, if the developer had the foresight to name their tables appropriately, they could clearly refer to a large number of tables open at the same time using notation such as ("employee->salary") and ("vacation->start_date"). Alternatively, an alias clause could be appended to the statement that first opens a table, which made references to table fields unambiguous and simple. For example, one can open a table and assign an alias to it with "use EMP alias Employee", and henceforth refer to table variables as "Employee->Name".
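A rough sketch of the alias and work-area conventions just described (the table and field names are the illustrative ones from the paragraph above) might look like:

* open two tables in separate work areas, each with an alias
SELECT 1
USE empl ALIAS Employee
SELECT 2
USE vacation ALIAS Vacation
* fields of either table can be referenced unambiguously via alias->field
? Employee->Name, Vacation->start_date

The ? command prints the listed expressions, drawing fields from both open tables by alias even though only one work area is current.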
Another notable feature is the re-use of the same clauses for different commands. For example, the FOR clause limits the scope of a given command (it is somewhat comparable to SQL's WHERE clause). Different commands such as LIST, DELETE, REPLACE, BROWSE, etc. could all accept a FOR clause to limit (filter) the scope of their activity. This simplifies the learning of the language.
dBase was also one of the first business-oriented languages to implement string evaluation.
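A minimal sketch of this facility (the variable names and the particular expression are illustrative):

i = 2
myMacro = "i + 10"
i = &myMacro
* i now holds the value 12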
Here the "&" tells the interpreter to evaluate the string stored in "myMacro" as if it were programming code. This is an example of a feature that made dBase programming flexible and dynamic, sometimes called "meta ability" in the profession. This could allow programming expressions to be placed inside tables, somewhat reminiscent of formulas in spreadsheet software.However, it could also be problematic for pre-compiling and for making programming code secure from hacking. But, dBase tended to be used for custom internal applications for small and medium companies where the lack of protection against copying, as compared to compiled software, was often less of an issue.
File formats
A major legacy of dBase is its .dbf file format, which has been adopted in a number of other applications. For example, the shapefile format, developed by ESRI for spatial data in its PC ArcInfo geographic information system, uses .dbf files to store feature attribute data.
Microsoft recommends saving a Microsoft Works database file in the dBase file format so that it can be read by Microsoft Excel. A package is available for Emacs to read xbase files. LibreOffice and OpenOffice Calc can read and write all generic dbf files.
dBase's database system was one of the first to provide a header section for describing the structure of the data in the file. This meant that the program no longer required advance knowledge of the data structure, but rather could ask the data file how it was structured. There are several variations on the .dbf file structure, and not all dBase-related products and .dbf file structures are compatible. VP-Info is unique in that it can read all variants of the dbf file structure.
A second filetype is the .dbt file format for memo fields. While character fields are limited to 254 characters each, a memo field is a 10-byte pointer into a .dbt file which can include a much larger text field. dBase was very limited in its ability to process memo fields, but some other xBase languages such as Clipper treated memo fields as strings just like character fields for all purposes except permanent storage.
dBase uses .ndx files for single indexes, and .mdx (multiple-index) files for holding between 1 and 48 indexes. Some xBase languages such as VP-Info include compatibility with .ndx files, while others use different file formats such as .ntx used by Clipper and .idx/.cdx used by FoxPro or FlagShip. Later iterations of Clipper included drivers for .ndx, .mdx, .idx and .cdx indexes.
Reception
Jerry Pournelle in July 1980 called Vulcan "infuriatingly excellent" because the software was powerful but the documentation was poor. He praised its speed and sophisticated queries, but said that "we do a lot of pounding at the table and screaming in rage at the documentation".
Official website
xBase (and dBase) File Format Description |
Delphi (; Greek: Δελφοί [ðelˈfi]), in legend previously called Pytho (Πυθώ), was an ancient sacred precinct and the seat of Pythia, the major oracle who was consulted about important decisions throughout the ancient classical world. The ancient Greeks considered the centre of the world to be in Delphi, marked by the stone monument known as the omphalos (navel).
According to the Suda, Delphi took its name from Delphyne, the she-serpent (drakaina) who lived there and was killed by the god Apollo (in other accounts the serpent was the male serpent (drakon) Python).
The sacred precinct occupies a delineated region on the south-western slope of Mount Parnassus.
It is now an extensive archaeological site, and since 1938 a part of Parnassos National Park. The precinct is recognized by UNESCO as a World Heritage Site for having had a great influence in the ancient world, as evidenced by the various monuments built there by most of the important ancient Greek city-states, demonstrating their fundamental Hellenic unity.
Adjacent to the sacred precinct is a small modern town of the same name.
Names
Delphi shares the same root with the Greek word for womb, δελφύς delphys.
Pytho (Πυθώ) is related to Pythia, the priestess serving as the oracle, and to Python, a serpent or dragon who lived at the site. "Python" is derived from the verb πύθω (pythō), "to rot".
Delphi and the Delphic region
Today Delphi is a municipality of Greece as well as a modern town adjacent to the ancient precinct. The modern town was created after buildings were removed from the sacred precinct so that the latter could be excavated. The two Delphis, old and new, are located on Greek National Road 48 between Amfissa in the west and Livadeia, capital of Voiotia, in the east. The road follows the northern slope of a pass between Mount Parnassus on the north and the mountains of the Desfina Peninsula on the south. The pass is that of the river Pleistos, which runs from east to west, forming a natural boundary across the north of the Desfina Peninsula and providing an easy route across it.
On the west side the valley joins the north–south valley between Amfissa and Itea.
On the north side of the valley junction, a spur of Parnassus looming over the valley, which it narrows, is the site of ancient Krisa, which once was the ruling power of the entire valley system. Both Amphissa and Krisa are mentioned in the Iliad's Catalogue of Ships. Krisa was a Mycenaean stronghold. Archaeological dates of the valley go back to the Early Helladic; Krisa itself is Middle Helladic. These early dates are comparable to the earliest dates at Delphi, suggesting Delphi was appropriated and transformed by Phocians from ancient Krisa. It is believed that the ruins of Kirra, now part of the port of Itea, were the port of Krisa of the same name.
Archaeology of the precinct
The site was first briefly excavated in 1880 by Bernard Haussoullier (1852–1926) on behalf of the French School at Athens, of which he was a sometime member. The site was then occupied by the village of Kastri, about 100 houses and 200 people. Kastri ("fort") had been there since the destruction of the place by Theodosius I in 390; he probably left a fort to make sure the site was not repopulated, but the fort grew into the new village. Its inhabitants mined the ancient stone for re-use in their own buildings. British and French travelers visiting the site suspected it was ancient Delphi. Before a systematic excavation of the site could be undertaken, the village had to be relocated, but the residents resisted.
The opportunity to relocate the village occurred when it was substantially damaged by an earthquake, with villagers offered a completely new village in exchange for the old site. In 1893, the French Archaeological School removed vast quantities of soil from numerous landslides to reveal both the major buildings and structures of the sanctuary of Apollo and of the temple to Athena, the Athena Pronoia, along with thousands of objects, inscriptions, and sculptures.

During the Great Excavation, architectural members from a fifth-century Christian basilica were discovered, dating to when Delphi was a bishopric. Other important Late Roman buildings are the Eastern Baths, the house with the peristyle, the Roman Agora, the large cistern, and so on. Late Roman cemeteries were located at the outskirts of the city.
To the southeast of the precinct of Apollo lay the so-called Southeastern Mansion, a building with a 65-meter-long façade, spread over four levels, with four triclinia and private baths. Large storage jars held the provisions, whereas other pottery vessels and luxury items were discovered in the rooms. Among the finds, a tiny leopard made of mother of pearl, possibly of Sassanian origin, stands out; it is on display in the ground-floor gallery of the Delphi Archaeological Museum. The mansion dates to the beginning of the fifth century and functioned as a private house until 580, after which it was transformed into a potter's workshop. It is only then, at the beginning of the sixth century, that the city seems to decline: its size was reduced and its trade contacts appear to have diminished drastically. Local pottery was produced in large quantities: it is coarser and made of reddish clay, aimed at satisfying the needs of the inhabitants.
The Sacred Way remained the main street of the settlement, transformed, however, into a street with commercial and industrial use. Around the agora were built workshops as well as the only intra muros early Christian basilica. The domestic area spread mainly in the western part of the settlement. The houses were rather spacious and two large cisterns provided running water to them.
Delphi Archaeological Museum
The museum houses artifacts associated with ancient Delphi, including the earliest known notation of a melody, the Charioteer of Delphi, Kleobis and Biton, golden treasures discovered beneath the Sacred Way, the Sphinx of Naxos, and fragments of reliefs from the Siphnian Treasury. Immediately adjacent to the exit is the inscription that mentions the Roman proconsul Gallio.
Architecture of the precinct
Most of the ruins that survive today date from the most intense period of activity at the site in the sixth century BC.
Temple of Apollo
The ruins of the Temple of Apollo that are visible today date from the fourth century BC, and are of a peripteral Doric building. It was erected by Spintharus, Xenodoros, and Agathon on the remains of an earlier temple, dated to the sixth century BC, which had been erected on the site of a seventh-century BC construction attributed in legend to the architects Trophonios and Agamedes.

Ancient tradition accounted for four temples that successively occupied the site before the 548/7 BC fire, following which the Alcmaeonids built a fifth. The poet Pindar celebrated the Alcmaeonids' temple in Pythian 7.8-9 and he also provided details of the third building (Paean 8.65–75). Other details are given by Pausanias (10.5.9-13) and the Homeric Hymn to Apollo (294 ff.). The first temple was said to have been constructed out of olive branches from Tempe. The second was made by bees out of wax and wings, but was miraculously carried off by a powerful wind and deposited among the Hyperboreans. The third, as described by Pindar, was created by the deities Hephaestus and Athena, but its architectural details included Siren-like figures or "Enchantresses", whose baneful songs eventually provoked the Olympian deities to bury the temple in the earth (according to Pausanias, it was destroyed by earthquake and fire). In Pindar's words (Paean 8.65-75, Bowra translation), addressed to the Muses:
Muses, what was its fashion, shown
By the skill in all arts
Of the hands of Hephaestus and Athena?
Of bronze the walls, and of bronze
Stood the pillars beneath,
But of gold were six Enchantresses
Who sang above the eagle.
But the sons of Cronus
Opened the earth with a thunderbolt
And hid the holiest of all things made.
Away from their children
And wives, when they hung
Their lives on the honey-hearted words.

The fourth temple was said to have been constructed from stone by Trophonius and Agamedes.
However, a 2019 theory gives a completely new explanation of the above myth of the four temples of Delphi.
Treasuries
From the entrance of the upper site, continuing up the slope on the Sacred Way almost to the Temple of Apollo, are a large number of votive statues, and numerous so-called treasuries. These were built by many of the Greek city-states to commemorate victories and to thank the oracle for her advice, which was thought to have contributed to those victories. These buildings held the offerings made to Apollo; these were frequently a "tithe" or tenth of the spoils of a battle. The most impressive is the now-restored Athenian Treasury, built to commemorate their victory at the Battle of Marathon in 490 BC.
The Siphnian Treasury was dedicated by the city of Siphnos, whose citizens gave a tithe of the yield from their silver mines until the mines came to an abrupt end when the sea flooded the workings.
One of the largest of the treasuries was that of Argos. Built in the late classical period, it reflects the Argives' great pride in establishing their place at Delphi amongst the other city-states. Completed in 380 BC, their treasury seems to draw inspiration mostly from the Temple of Hera located in the Argolis. However, recent analysis of the Archaic elements of the treasury suggests that its founding preceded this.
Other identifiable treasuries are those of the Sicyonians, the Boeotians, Massaliots, and the Thebans.
Altar of the Chians
Located in front of the Temple of Apollo, the main altar of the sanctuary was paid for and built by the people of Chios. It is dated to the fifth century BC by the inscription on its cornice. Made entirely of black marble, except for the base and cornice, the altar would have made a striking impression. It was restored in 1920.
Stoa of the Athenians
The stoa, or open-sided, covered porch, is placed in an approximately east–west alignment along the base of the polygonal wall retaining the terrace on which the Temple of Apollo sits. There is no archaeological suggestion of a connection to the temple. The stoa opened to the Sacred Way. The nearby presence of the Treasury of the Athenians suggests that this quarter of Delphi was used for Athenian business or politics, as stoas are generally found in market-places.
Although the architecture at Delphi is generally Doric, a plain style in keeping with the Doric traditions of Phocis, the Athenians did not prefer that order. The stoa was built in their own preferred style, the Ionic order, the capitals of the columns being a sure indicator. Ionic capitals are floral and ornate, although not so much as the Corinthian, which is scarcely represented at Delphi. The remaining porch structure contains seven fluted columns, unusually carved from single pieces of stone (most columns were constructed from a series of joined discs). The inscription on the stylobate indicates that it was built by the Athenians after their naval victory over the Persians in 478 BC, to house their war trophies. At that time the Athenians and the Spartans were on the same side.
Sibyl rock
The Sibyl rock is a pulpit-like outcrop of rock between the Athenian Treasury and the Stoa of the Athenians upon the Sacred Way that leads up to the temple of Apollo in the archaeological area of Delphi. The rock is claimed to be the location from which a prehistoric Sibyl pre-dating the Pythia of Apollo sat to deliver her prophecies. Other suggestions are that the Pythia might have stood there, or an acolyte whose function was to deliver the final prophecy. The rock seems ideal for public speaking.
Theatre
The ancient theatre at Delphi was built farther up the hill from the Temple of Apollo, giving spectators a view of the entire sanctuary and the valley below. It was originally built in the fourth century BC, but was remodeled on several occasions, particularly in 160/159 BC at the expense of King Eumenes II of Pergamon and, in 67 AD, on the occasion of emperor Nero's visit.

The koilon (cavea) leans against the natural slope of the mountain, whereas its eastern part overrides a little torrent that led the water of the fountain Cassotis right underneath the temple of Apollo. The orchestra was initially a full circle with a diameter of seven meters. The rectangular scene building terminated in two arched openings, of which the foundations are preserved today. Access to the theatre was possible through the parodoi, i.e. the side corridors. On the support walls of the parodoi are engraved large numbers of manumission inscriptions recording fictitious sales of slaves to the deity. The koilon was divided horizontally into two zones by a corridor called the diazoma. The lower zone had 27 rows of seats and the upper one only eight. Six radially arranged stairs divided the lower part of the koilon into seven tiers. The theatre could accommodate approximately 4,500 spectators.

On the occasion of Nero's visit to Greece in 67 AD various alterations took place. The orchestra was paved and delimited by a stone parapet. The proscenium was replaced by a low pedestal, the pulpitum; its façade was decorated in relief with scenes from the myths of Hercules. Further repairs and transformations took place in the second century AD; Pausanias mentions that these were carried out under the auspices of Herodes Atticus. In antiquity, the theatre was used for the vocal and musical contests that formed part of the programme of the Pythian Games in the late Hellenistic and Roman period. The theatre was abandoned when the sanctuary declined in Late Antiquity. After its excavation and initial restoration it hosted theatrical performances during the Delphic Festivals organized by A. Sikelianos and his wife, Eva Palmer, in 1927 and in 1930. It has recently been restored again, as serious landslides had posed a grave threat to its stability for decades.
Tholos
The tholos at the sanctuary of Athena Pronaea (Ἀθηνᾶ Προναία, "Athena of forethought") is a circular building that was constructed between 380 and 360 BC. It consisted of 20 Doric columns arranged with an exterior diameter of 14.76 meters, with 10 Corinthian columns in the interior.
The Tholos is located approximately half a mile (800 m) from the main ruins at Delphi (at 38°28′49″N 22°30′28″E). Three of the Doric columns have been restored, making it the most popular site at Delphi for tourists to take photographs.
The architect of the "vaulted temple at Delphi" is named by Vitruvius, in De architectura Book VII, as Theodorus Phoceus (not Theodorus of Samos, whom Vitruvius names separately).
Gymnasium
The gymnasium, which is half a mile away from the main sanctuary, was a series of buildings used by the youth of Delphi. The building consisted of two levels: a stoa on the upper level providing open space, and a palaestra, pool, and baths on the lower floor. These pools and baths were said to have magical powers, and to impart the ability to communicate directly with Apollo.
Stadium
The stadium is located farther up the hill, beyond the via sacra and the theatre. It was built in the fifth century BC, but was altered in later centuries. The last major remodelling took place in the second century AD under the patronage of Herodes Atticus when the stone seating was built and an (arched) entrance created. It could seat 6500 spectators and the track was 177 metres long and 25.5 metres wide.
Hippodrome
It was at the Pythian Games that prominent political leaders, such as Cleisthenes, tyrant of Sikyon, and Hieron, tyrant of Syracuse, competed with their chariots. The hippodrome where these events took place was referred to by Pindar, and this monument was sought by archaeologists for over two centuries.
Traces of it have recently been found at Gonia in the plain of Krisa in the place where the original stadium had been sited.
Polygonal wall
A retaining wall was built to support the terrace housing the construction of the second temple of Apollo in 548 BC. Its name is taken from the polygonal masonry of which it is constructed. At a later date, from 200 BC onwards, the stones were inscribed with the manumission (liberation) contracts of slaves who were consecrated to Apollo. Approximately a thousand manumissions are recorded on the wall.
Castalian spring
The sacred spring of Delphi lies in the ravine of the Phaedriades. The preserved remains of two monumental fountains that received the water from the spring date to the Archaic period and the Roman, with the latter cut into the rock.
Roman Agora
The first set of remains that the visitor sees upon entering the archaeological site of Delphi is the Roman Agora, which was just outside the peribolos, or precinct walls, of the sanctuary of Apollo at Delphi. The Roman Agora was built between the sanctuary and the Castalian Spring, approximately 500 meters away. This large rectangular paved square used to be surrounded by Ionic porticos on its three sides. The square was built in the Roman period, but the remains visible at present along the north and northwestern sides date to the Late Antique period.
An open market was probably established here, where visitors would buy ex-votos, such as statuettes and small tripods, to leave as offerings to the gods. It also served as an assembly area for processions during sacred festivals. During the empire, statues of the emperor and other notable benefactors were erected here, as evidenced by the remaining pedestals. In Late Antiquity, workshops of artisans were also created within the agora.
Athletic statues
Delphi is famous for its many preserved athletic statues. It is known that Olympia originally housed far more of these statues, but time brought ruin to many of them, leaving Delphi as the main site of athletic statues. Kleobis and Biton, two brothers renowned for their strength, are modeled in two of the earliest known athletic statues at Delphi. The statues commemorate their feat of pulling their mother's cart several miles to the Sanctuary of Hera in the absence of oxen. The neighbors were most impressed and their mother asked Hera to grant them the greatest gift. When they entered Hera's temple, they fell into a slumber and never woke, dying at the height of their admiration, the perfect gift.

The Charioteer of Delphi is another ancient relic that has withstood the centuries. It is one of the best known statues from antiquity. The charioteer has lost many features, including his chariot and his left arm, but he stands as a tribute to athletic art of antiquity.
Myths regarding the origin of the precinct
In the Iliad, Achilles would not accept Agamemnon's peace offering even if it included all the wealth in the "stone floor" of "rocky Pytho" (I 404). In the Odyssey (θ 79) Agamemnon crosses a "stone floor" to receive a prophecy from Apollo in Pytho, the first known of proto-history. Hesiod also refers to Pytho "in the hollows of Parnassus" (Theogony 498). These references imply that the earliest date of the oracle's existence is the eighth century BC, the probable date of composition of the Homeric works.
The main myths of Delphi are given in three literary "loci". H. W. Parke, the Delphi scholar, complained that they are self-contradictory, thereby unconsciously adopting the Plutarchian epistemology that they reflect some common, objective historical reality against which the accounts can be compared. Parke asserts that there is no Apollo, no Zeus, no Hera, and certainly never was a great, serpent-like monster, and that the myths are pure Plutarchian figures of speech, meant to be aetiologies of some oracular tradition.
Homeric Hymn 3, "To Apollo", is the oldest of the three loci, dating to the seventh century BC (estimate). Apollo travels about after his birth on Delos seeking a place for an oracle. He is advised by Telephus to choose Crissa "below the glade of Parnassus", which he does, and has a temple built. Killing the serpent that guards the spring. Subsequently, some Cretans from Knossos sail up on a mission to reconnoitre Pylos. Changing into a dolphin, Apollo casts himself on deck. The Cretans do not dare to remove him but sail on. Apollo guides the ship around Greece, ending back at Crisa, where the ship grounds. Apollo enters his shrine with the Cretans to be its priests, worshipping him as Delphineus, "of the dolphin".
Zeus, a Classical deity, reportedly determined the site of Delphi when he sought to find the centre of his "Grandmother Earth" (Gaia). He sent two eagles flying from the eastern and western extremities, and the path of the eagles crossed over Delphi, where the omphalos, or navel of Gaia, was found.

According to Aeschylus in the prologue of the Eumenides, the oracle had origins in prehistoric times and the worship of Gaia, a view echoed by H. W. Parke, who described the evolution of beliefs associated with the site. He established that the prehistoric foundation of the oracle is described by three early writers: the author of the Homeric Hymn to Apollo, Aeschylus in the prologue to the Eumenides, and Euripides in a chorus in the Iphigeneia in Tauris. Parke goes on to say, "This version [Euripides] evidently reproduces in a sophisticated form the primitive tradition which Aeschylus for his own purposes had been at pains to contradict: the belief that Apollo came to Delphi as an invader and appropriated for himself a previously existing oracle of Earth. The slaying of the serpent is the act of conquest which secures his possession; not as in the Homeric Hymn, a merely secondary work of improvement on the site. Another difference is also noticeable. The Homeric Hymn, as we saw, implied that the method of prophecy used there was similar to that of Dodona: both Aeschylus and Euripides, writing in the fifth century, attribute to primeval times the same methods as used at Delphi in their own day. So much is implied by their allusions to tripods and prophetic seats... [he continues on p. 6] ...Another very archaic feature at Delphi also confirms the ancient associations of the place with the Earth goddess. This was the Omphalos, an egg-shaped stone which was situated in the innermost sanctuary of the temple in historic times. Classical legend asserted that it marked the 'navel' (Omphalos) or center of the Earth and explained that this spot was determined by Zeus who had released two eagles to fly from opposite sides of the earth and that they had met exactly over this place". On p. 7 he writes further, "So Delphi was originally devoted to the worship of the Earth goddess whom the Greeks called Ge, or Gaia. Themis, who is associated with her in tradition as her daughter and partner or successor, is really another manifestation of the same deity: an identity that Aeschylus recognized in another context. The worship of these two, as one or distinguished, was displaced by the introduction of Apollo. His origin has been the subject of much learned controversy: it is sufficient for our purpose to take him as the Homeric Hymn represents him – a northern intruder – and his arrival must have occurred in the dark interval between Mycenaean and Hellenic times. His conflict with Ge for the possession of the cult site was represented under the legend of his slaying the serpent."

One tale of the sanctuary's discovery states that a goatherd, who grazed his flocks on Parnassus, one day observed his goats playing with great agility upon nearing a chasm in the rock; the goatherd, noticing this, held his head over the chasm, causing the fumes to go to his brain and throwing him into a strange trance.

The Homeric Hymn to Delphic Apollo recalled that the ancient name of this site had been Krisa. Others relate that the site was named Pytho (Πυθώ) and that Pythia, the priestess serving as the oracle, was chosen from among the priestesses who officiated at the temple.
Apollo was said to have slain Python, a drako (a male serpent or a dragon) who lived there and protected the navel of the Earth. "Python" (derived from the verb πύθω (pythō), "to rot") is claimed by some to be the original name of the site in recognition of the Python that Apollo defeated.

The name Delphi comes from the same root as δελφύς delphys, "womb", and may indicate archaic veneration of Gaia at the site. Several other scholars discuss the likely prehistoric beliefs associated with the site.

Apollo is connected with the site by his epithet Δελφίνιος Delphinios, "the Delphinian". The epithet is connected with dolphins (Greek δελφίς,-ῖνος) in the Homeric Hymn to Apollo (line 400), recounting the legend of how Apollo first came to Delphi in the shape of a dolphin, carrying Cretan priests on his back. The Homeric name of the oracle is Pytho (Πυθώ). Another legend held that Apollo walked to Delphi from the north and stopped at Tempe, in Thessaly, to pick laurel (also known as the bay tree), which he considered to be a sacred plant. In commemoration of this legend, the winners at the Pythian Games received a wreath of laurel picked in the temple.
Oracle of Delphi
The prophetic process
Perhaps Delphi is best known for its oracle, the Pythia, or sibyl, the priestess prophesying from the tripod in the sunken adyton of the Temple of Apollo. The Pythia was known as a spokesperson for Apollo. She was a woman of blameless life chosen from the peasants of the area. Alone in an enclosed inner sanctum (Ancient Greek adyton – "do not enter") she sat on a tripod seat over an opening in the earth (the "chasm"). According to legend, when Apollo slew Python its body fell into this fissure and fumes arose from its decomposing body. Intoxicated by the vapors, the sibyl would fall into a trance, allowing Apollo to possess her spirit. In this state she prophesied. The oracle could not be consulted during the winter months, for this was traditionally the time when Apollo would live among the Hyperboreans. Dionysus would inhabit the temple during his absence. Notably, the release of fumes is limited in colder weather.
The time to consult the Pythia for an oracle during the year was determined from astronomical and geological grounds related to the constellations of Lyra and Cygnus. A similar practice was followed in other oracles of Apollo.

Hydrocarbon vapors were emitted from the chasm. While in a trance the Pythia "raved" – probably a form of ecstatic speech – and her ravings were "translated" by the priests of the temple into elegant hexameters. It has been speculated that the ancient writers, including Plutarch who had worked as a priest at Delphi, were correct in attributing the oracular effects to the sweet-smelling pneuma (Ancient Greek for breath, wind, or vapor) escaping from the chasm in the rock. That exhalation could have been high in the known anaesthetic and sweet-smelling ethylene, or in other hydrocarbons such as ethane, known to produce violent trances. Although, given the limestone geology, this theory remains debatable, the authors offered a detailed answer to their critics.

Ancient sources describe the priestess using "laurel" to inspire her prophecies. Several alternative plant candidates have been suggested, including Cannabis, Hyoscyamus, Rhododendron, and Oleander. Harissis claims that a review of contemporary toxicological literature indicates that oleander causes symptoms similar to those shown by the Pythia, and his study of ancient texts shows that oleander was often included under the term "laurel". The Pythia may have chewed oleander leaves and inhaled their smoke prior to her oracular pronouncements, sometimes dying from the toxicity. The toxic substances of oleander produced symptoms similar to those of epilepsy, the "sacred disease", which may have been seen as the possession of the Pythia by the spirit of Apollo.
Influence, devastations and a temporary revival
The Delphic oracle exerted considerable influence throughout the Greek world, and she was consulted before all major undertakings including wars and the founding of colonies. She also was respected by the Greek-influenced countries around the periphery of the Greek world, such as Lydia, Caria, and even Egypt.
The oracle was also known to the early Romans. Rome's seventh and last king, Lucius Tarquinius Superbus, after witnessing a snake near his palace, sent a delegation including two of his sons to consult the oracle.

In 278 BC, a Thracian (Celtic) tribe raided Delphi, burned the temple, plundered the sanctuary and stole the "unquenchable fire" from the altar. During the raid, part of the temple roof collapsed. The same year, the temple was severely damaged by an earthquake, thus it fell into decay and the surrounding area became impoverished. The sparse local population led to difficulties in filling the posts required. The oracle's credibility waned due to doubtful predictions.

The oracle flourished again in the second century AD, during the rule of emperor Hadrian, who is believed to have visited the oracle twice and offered complete autonomy to the city.
By the 4th century, Delphi had acquired the status of a city.

Constantine the Great looted several monuments in the Eastern Mediterranean, including Delphi, to decorate his new capital, Constantinople. One of those famous items was the bronze column of Plataea (the Serpent Column; Ancient Greek: Τρικάρηνος Ὄφις, Three-headed Serpent; Turkish: Yılanlı Sütun, Serpentine Column) from the sanctuary (dated 479 BC), relocated from Delphi in AD 324. It can still be seen today, damaged, standing in a square of Istanbul on the site of the former Hippodrome of Constantinople, built by Constantine (Ottoman Turkish: Atmeydanı, "Horse Square"), with part of one of its heads kept in the Istanbul Archaeology Museums (İstanbul Arkeoloji Müzeleri).
Despite the rise of Christianity across the Roman Empire, the oracle remained a religious center throughout the fourth century, and the Pythian Games continued to be held at least until 424 AD; however, the decline continued. The attempt of Emperor Julian to revive polytheism did not survive his reign. Excavations have revealed a large three-aisled basilica in the city, as well as traces of a church building in the sanctuary's gymnasium. The site was abandoned in the sixth or seventh centuries, although a single bishop of Delphi is attested in an episcopal list of the late eighth and early ninth centuries.
Religious significance of the oracle
Delphi became the site of a major temple to Phoebus Apollo, as well as the Pythian Games and the prehistoric oracle. Even in Roman times, hundreds of votive statues remained, described by Pliny the Younger and seen by Pausanias. Carved into the temple were three phrases: γνῶθι σεαυτόν (gnōthi seautón = "know thyself"), μηδὲν ἄγαν (mēdén ágan = "nothing in excess"), and Ἑγγύα πάρα δ'ἄτη (engýa pára d'atē = "make a pledge and mischief is nigh"). In antiquity, the origin of these phrases was attributed to one or more of the Seven Sages of Greece by authors such as Plato and Pausanias. Additionally, according to Plutarch's essay on the meaning of the "E at Delphi"—the only literary source for the inscription—there was also inscribed at the temple a large letter E. Among other things, epsilon signifies the number 5. However, ancient as well as modern scholars have doubted the legitimacy of such inscriptions. According to one pair of scholars, "The actual authorship of the three maxims set up on the Delphian temple may be left uncertain. Most likely they were popular proverbs, which tended later to be attributed to particular sages."

According to the Homeric hymn to the Pythian Apollo, Apollo as an infant shot his first arrow, which effectively slew the serpent Pytho, the son of Gaia, who guarded the spot. To atone for the murder of Gaia's son, Apollo was forced to flee and spend eight years in menial service before he could return forgiven. A festival, the Septeria, was held every year, at which the whole story was represented: the slaying of the serpent, and the flight, atonement, and return of the god.

The Pythian Games took place every four years to commemorate Apollo's victory. Another regular Delphi festival was the "Theophania" (Θεοφάνεια), an annual festival in spring celebrating the return of Apollo from his winter quarters in Hyperborea. The culmination of the festival was a display of an image of the deities, usually hidden in the sanctuary, to worshippers. The theoxenia was held each summer, centred on a feast for "gods and ambassadors from other states".

Myths indicate that Apollo killed the chthonic serpent Python guarding the Castalian Spring and named his priestess Pythia after her. Python, who had been sent by Hera, had attempted to prevent Leto, while she was pregnant with Apollo and Artemis, from giving birth. The spring at the site flowed toward the temple but disappeared beneath it, creating a cleft which emitted chemical vapors that purportedly caused the oracle at Delphi to reveal her prophecies. Apollo killed Python, but had to be punished for it, since Python was a child of Gaia. The shrine dedicated to Apollo was originally dedicated to Gaia and shared with Poseidon. The name Pythia remained as the title of the Delphic oracle.
Erwin Rohde wrote that the Python was an earth spirit, who was conquered by Apollo and buried under the omphalos, and that it is a case of one deity setting up a temple on the grave of another. Another view holds that Apollo was a fairly recent addition to the Greek pantheon, coming originally from Lydia. The Etruscans, coming from northern Anatolia, also worshipped Apollo, and it may be that he was originally identical with the Mesopotamian Aplu, an Akkadian title meaning "son", originally given to the plague god Nergal, son of Enlil. Apollo Smintheus (Greek Απόλλων Σμινθεύς), the mouse-killer, eliminates mice, a primary cause of disease; hence he promotes preventive medicine.
History
Occupation of the site at Delphi can be traced back to the Neolithic period, with extensive occupation and use beginning in the Mycenaean period (1600–1100 BC). In Mycenaean times Krisa was a major Greek land and sea power, perhaps one of the first in Greece, if the Early Helladic date of Kirra is to be believed. The ancient sources indicate that the previous name of the Gulf of Corinth was the "Krisaean Gulf". Like Krisa, Corinth was a Dorian state, and the Gulf of Corinth was a Dorian lake, so to speak, especially since the migration of Dorians into the Peloponnesus starting about 1000 BC. Krisa's power was finally broken by the recovered Aeolic and Attic-Ionic speaking states of southern Greece over the issue of access to Delphi. Control of it was assumed by the Amphictyonic League, an organization of states with an interest in Delphi, in the early Classical period. Krisa was destroyed for its arrogance. The gulf was given Corinth's name. Corinth by then was similar to the Ionic states: ornate and innovative, not resembling the spartan style of the Doric.
Ancient Delphi
Earlier myths include traditions that Delphi, seat of the Pythia, or Delphic oracle, already was the site of an important oracle in the pre-classical Greek world (as early as 1400 BC) and that, rededicated from about 800 BC, it served as the major site during classical times for the worship of the god Apollo.
Delphi was since ancient times a place of worship for Gaia, the mother goddess connected with fertility. The town started to gain pan-Hellenic relevance as both a shrine and an oracle in the seventh century BC. Initially under the control of Phocaean settlers based in nearby Kirra (currently Itea), Delphi was reclaimed by the Athenians during the First Sacred War (597–585 BC). The conflict resulted in the consolidation of the Amphictyonic League, which had both a military and a religious function revolving around the protection of the Temple of Apollo. This shrine was destroyed by fire in 548 BC and then fell under the control of the Alcmaeonids who were banned from Athens. In 449–448 BC, the Second Sacred War (fought in the wider context of the First Peloponnesian War between the Peloponnesian League led by Sparta and the Delian-Attic League led by Athens) resulted in the Phocians gaining control of Delphi and the management of the Pythian Games.
In 356 BC, the Phocians under Philomelos captured and sacked Delphi, leading to the Third Sacred War (356–346 BC), which ended with the defeat of the former and the rise of Macedon under the reign of Philip II. This led to the Fourth Sacred War (339 BC), which culminated in the Battle of Chaeronea (338 BC) and the establishment of Macedonian rule over Greece.
In Delphi, Macedonian rule was superseded by the Aetolians in 279 BC, when a Gallic invasion was repelled, and by the Romans in 191 BC. The site was sacked by Lucius Cornelius Sulla in 86 BC, during the Mithridatic Wars, and by Nero in 66 AD. Although subsequent Roman emperors of the Flavian dynasty contributed toward the restoration of the site, it gradually lost importance.

The anti-pagan legislation of the late Roman Imperial era deprived ancient sanctuaries of their assets. The emperor Julian attempted to reverse this religious climate, yet his "pagan revival" was particularly short-lived. When the doctor Oreibasius visited the oracle of Delphi, in order to question the fate of paganism, he received a pessimistic answer:
Tell the king that the flute has fallen to the ground. Phoebus does not have a home any more, neither an oracular laurel, nor a speaking fountain, because the talking water has dried out
It was shut down during the persecution of pagans in the late Roman Empire by Theodosius I in 381 AD.
Amphictyonic Council
The Amphictyonic Council was a council of representatives from six Greek tribes who controlled Delphi and also the quadrennial Pythian Games. They met biannually and came from Thessaly and central Greece. Over time, the town of Delphi gained more control of itself and the council lost much of its influence.
The sacred precinct in the Iron Age
Excavation at Delphi, which was a post-Mycenaean settlement of the late ninth century, has uncovered artifacts increasing steadily in volume beginning with the last quarter of the eighth century BC. Pottery and bronze as well as tripod dedications continue in a steady stream, in contrast to Olympia. Neither the range of objects nor the presence of prestigious dedications proves that Delphi was a focus of attention for a wide range of worshippers, but the large quantity of valuable goods, found in no other mainland sanctuary, encourages that view.
Apollo's sacred precinct in Delphi was a Panhellenic Sanctuary, where every four years, starting in 586 BC, athletes from all over the Greek world competed in the Pythian Games, one of the four Panhellenic Games, precursors of the Modern Olympics. The victors at Delphi were presented with a laurel crown (stephanos) that was ceremonially cut from a tree by a boy who re-enacted the slaying of the Python. (These competitions are also called stephanitic games, after the crown.) Delphi was set apart from the other games sites because it hosted the mousikos agon, musical competitions.

These Pythian Games rank second among the four stephanitic games chronologically and in importance. These games, however, were different from the games at Olympia in that they were not of such vast importance to the city of Delphi as the games at Olympia were to the area surrounding Olympia. Delphi would have been a renowned city regardless of whether it hosted these games; it had other attractions that led to it being labeled the "omphalos" (navel) of the earth, in other words, the centre of the world.
In the inner hestia (hearth) of the Temple of Apollo, an eternal flame burned. After the battle of Plataea, the Greek cities extinguished their fires and brought new fire from the hearth of Greece, at Delphi; in the foundation stories of several Greek colonies, the founding colonists were first dedicated at Delphi.
Abandonment and rediscovery
The Ottomans finalized their domination over Phocis and Delphi in about 1410 AD. Delphi itself remained almost uninhabited for centuries. It seems that one of the first buildings of the early modern era was the monastery of the Dormition of Mary or of Panagia (the Mother of God) built above the ancient gymnasium at Delphi. It must have been toward the end of the fifteenth or in the sixteenth century that a settlement started forming there, which eventually ended up forming the village of Kastri.
Ottoman Delphi gradually began to be investigated. The first Westerner to describe the remains in Delphi was Cyriacus of Ancona, a fifteenth-century merchant turned diplomat and antiquarian, considered the founding father of modern classical archeology. He visited Delphi in March 1436 and remained there for six days. He recorded all the visible archaeological remains based on Pausanias for identification. He described the stadium and the theatre at that date as well as some freestanding pieces of sculpture. He also recorded several inscriptions, most of which are now lost. His identifications, however, were not always correct: for example he described a round building he saw as the temple of Apollo while this was simply the base of the Argives' ex-voto. A severe earthquake in 1500 caused much damage.
In 1766, an English expedition funded by the Society of Dilettanti included the Oxford epigraphist Richard Chandler, the architect Nicholas Revett, and the painter William Pars. Their studies were published in 1769 under the title Ionian Antiquities, followed by a collection of inscriptions, and two travel books, one about Asia Minor (1775), and one about Greece (1776). Apart from the antiquities, they also related some vivid descriptions of daily life in Kastri, such as the crude behaviour of the Muslim Albanians who guarded the mountain passes.

In 1805 Edward Dodwell visited Delphi, accompanied by the painter Simone Pomardi. Lord Byron visited in 1809, accompanied by his friend John Cam Hobhouse:
Yet there I've wandered by the vaulted rill
Yes! Sighed o'er Delphi's long deserted shrine,
where, save that feeble fountain, all is still.
He carved his name on the same column in the gymnasium as Lord Aberdeen, later Prime Minister, who had visited a few years before. Proper excavation did not start until the late nineteenth century (see "Excavations" section) after the village had moved.
Delphi in later art
From the sixteenth century onward, woodcuts of Delphi began to appear in printed maps and books. The earliest depictions of Delphi were totally imaginary; for example, those created by Nikolaus Gerbel, who published in 1545 a text based on the map of Greece by N. Sofianos. The ancient sanctuary was depicted as a fortified city.

The first travelers with archaeological interests, apart from the precursor Cyriacus of Ancona, were the British George Wheler and the French Jacob Spon, who visited Greece in a joint expedition in 1675–1676. They published their impressions separately. In Wheler's "Journey into Greece", published in 1682, a sketch of the region of Delphi appeared, where the settlement of Kastri and some ruins were depicted. The illustrations in Spon's publication "Voyage d'Italie, de Dalmatie, de Grèce et du Levant, 1678" are considered original and groundbreaking.
Travelers continued to visit Delphi throughout the nineteenth century and published books which contained diaries, sketches, and views of the site, as well as pictures of coins. The illustrations often reflected the spirit of romanticism, as is evident in the works of Otto Magnus von Stackelberg, where, apart from the landscapes (La Grèce. Vues pittoresques et topographiques, Paris 1834), human types are also depicted (Costumes et usages des peuples de la Grèce moderne dessinés sur les lieux, Paris 1828). The philhellene painter W. Williams included the landscape of Delphi among his subjects (1829). Influential personalities such as F.Ch.-H.-L. Pouqueville, W.M. Leake, Chr. Wordsworth and Lord Byron are amongst the most important visitors of Delphi.
After the foundation of the modern Greek state, the press also became interested in these travelers. Thus "Ephemeris" writes (17 March 1889):
In the Revues des Deux Mondes Paul Lefaivre published his memoirs from an excursion to Delphi. The French author relates in a charming style his adventures on the road, praising particularly the ability of an old woman to put back in place the dislocated arm of one of his foreign traveling companions, who had fallen off the horse. "In Arachova the Greek type is preserved intact. The men are rather athletes than farmers, built for running and wrestling, particularly elegant and slender under their mountain gear." Only briefly does he refer to the antiquities of Delphi, but he refers to a pelasgian wall 80 meters long, "on which innumerable inscriptions are carved, decrees, conventions, manumissions".

Gradually the first travelling guides appeared. The revolutionary "pocket" guidebooks invented by Karl Baedeker, accompanied by maps useful for visiting archaeological sites such as Delphi (1894) and by informative plans, made guidebooks practical and popular. The photographic lens revolutionized the way of depicting the landscape and the antiquities, particularly from 1893 onward, when the systematic excavations of the French Archaeological School started. However, artists such as Vera Willoughby continued to be inspired by the landscape.

Delphic themes inspired several graphic artists. Besides the landscape, Pythia and Sibylla became illustration subjects, even on Tarot cards. Famous examples include Michelangelo's Delphic Sibyl (1509), the nineteenth-century German engraving Oracle of Apollo at Delphi, and the recent ink-on-paper drawing "The Oracle of Delphi" (2013) by M. Lind.
Modern artists are inspired also by the Delphic Maxims. Examples of such works are displayed in the "Sculpture park of the European Cultural Center of Delphi" and in exhibitions taking place at the Archaeological Museum of Delphi.
Delphi in later literature
Delphi inspired literature as well. In 1814 W. Haygarth, friend of Lord Byron, referred to Delphi in his work "Greece, a Poem". In 1888 Charles Marie René Leconte de Lisle published his lyric drama L'Apollonide, accompanied by music by Franz Servais. More recent French authors have used Delphi as a source of inspiration, such as Yves Bonnefoy (Delphes du second jour) and Jean Sullivan (pen name of Joseph Lemarchand) in L'Obsession de Delphes (1967); the site also features in Rob MacGregor's Indiana Jones and the Peril at Delphi (1991).
Delphi features prominently in Greek literature. Poets who have written of it include Kostis Palamas (The Delphic Hymn, 1894), Kostas Karyotakis (Delphic festival, 1927), Nikephoros Vrettakos (Return from Delphi, 1957), Yannis Ritsos (Delphi, 1961–62) and Kiki Dimoula (Gas omphalos and Appropriate terrain, 1988), to mention only the most renowned. Angelos Sikelianos wrote The Dedication (of the Delphic speech) (1927), the Delphic Hymn (1927) and the tragedy Sibylla (1940), whereas in the context of the Delphic idea and the Delphic festivals he published an essay entitled "The Delphic union" (1930). The Nobel laureate George Seferis wrote an essay entitled "Delphi" in the book Dokimes.
See also
Aristoclea, Delphic priestess of the 6th century BC, said to have been tutor to Pythagoras
Ex voto of the Attalids (Delphi)
Franz Weber (activist) - made an honorary citizen of Delphi in 1997
Greek art
List of traditional Greek place names
Portico of the Aetolians
Further reading
E. Partida (2012). "Delphi Archaeological Museum". Odysseus. Ministry of Culture and Sports, Hellenic Republic.
PL/I (Programming Language One, pronounced "P-L-one" and sometimes written PL/1) is a procedural, imperative computer programming language initially developed by IBM. The PL/I ANSI standard, X3.53-1976, was published in 1976. It is designed for scientific, engineering, business and system programming. It has been in continuous use by academic, commercial and industrial organizations since it was introduced in the 1960s.

PL/I's main domains are data processing, numerical computation, scientific computing, and system programming. It supports recursion, structured programming, linked data structure handling, fixed-point, floating-point, complex, character string and bit string handling. The language syntax is English-like and suited for describing complex data formats, with a wide set of functions available to verify and manipulate them.
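As an illustration of the English-like syntax, a minimal complete program might look like the following sketch (the OPTIONS(MAIN) attribute and list-directed output follow the classic IBM compilers; exact behaviour varies by implementation):

   HELLO: PROCEDURE OPTIONS(MAIN);
      /* A fixed-point counter and a varying-length character string */
      DECLARE COUNT FIXED DECIMAL(3) INIT(0);
      DECLARE GREETING CHARACTER(20) VARYING INIT('Hello, world');
      DO COUNT = 1 TO 3;
         PUT SKIP LIST(GREETING, COUNT);   /* list-directed stream output */
      END;
   END HELLO;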
Early history
In the 1950s and early 1960s, business and scientific users programmed for different computer hardware using different programming languages. Business users were moving from Autocoders via COMTRAN to COBOL, while scientific users programmed in Fortran, ALGOL, GEORGE, and others. The IBM System/360 (announced in 1964 and delivered in 1966) was designed as a common machine architecture for both groups of users, superseding all existing IBM architectures. Similarly, IBM wanted a single programming language for all users. It hoped that Fortran could be extended to include the features needed by commercial programmers. In October 1963 a committee was formed composed originally of three IBMers from New York and three members of SHARE, the IBM scientific users group, to propose these extensions to Fortran. Given the constraints of Fortran, they were unable to do this and embarked on the design of a new programming language based loosely on ALGOL labeled NPL. This acronym conflicted with that of the UK's National Physical Laboratory and was replaced briefly by MPPL (MultiPurpose Programming Language) and, in 1965, with PL/I (with a Roman numeral "I"). The first definition appeared in April 1964.

IBM took NPL as a starting point and completed the design to a level that the first compiler could be written: the NPL definition was incomplete in scope and in detail. Control of the PL/I language was vested initially in the New York Programming Center and later at the IBM UK Laboratory at Hursley. The SHARE and GUIDE user groups were involved in extending the language and had a role in IBM's process for controlling the language through their PL/I Projects. The experience of defining such a large language showed the need for a formal definition of PL/I. A project was set up in 1967 in IBM Laboratory Vienna to make an unambiguous and complete specification. This led in turn to one of the first large scale Formal Methods for development, VDM.
Fred Brooks is credited with ensuring PL/I had the CHARACTER data type.

The language was first specified in detail in the manual "PL/I Language Specifications. C28-6571", written in New York in 1965, and superseded by "PL/I Language Specifications. GY33-6003", written by Hursley in 1967. IBM continued to develop PL/I in the late sixties and early seventies, publishing it in the GY33-6003 manual. These manuals were used by the Multics group and other early implementers.
The first compiler was delivered in 1966. The Standard for PL/I was approved in 1976.
Goals and principles
The goals for PL/I evolved during the early development of the language. Competitiveness with COBOL's record handling and report writing was required. The language's scope of usefulness grew to include system programming and event-driven programming. Additional goals for PL/I were:
Performance of compiled code competitive with that of Fortran (but this was not achieved)
Extensibility for new hardware and new application areas
Improved productivity of the programming process, transferring effort from the programmer to the compiler
Machine independence to operate effectively on the main computer hardware and operating systems

To achieve these goals, PL/I borrowed ideas from contemporary languages while adding substantial new capabilities and casting it with a distinctive concise and readable syntax. Many principles and capabilities combined to give the language its character and were important in meeting the language's goals:
Block structure, with underlying semantics (including recursion), similar to Algol 60. Arguments are passed using call by reference, using dummy variables for values where needed (call by value).
A wide range of computational data types, program control data types, and forms of data structure (strong typing).
Dynamic extents for arrays and strings with inheritance of extents by procedure parameters (see the sketch after this list).
Concise syntax for expressions, declarations, and statements with permitted abbreviations. Suitable for a character set of 60 glyphs and sub-settable to 48.
An extensive structure of defaults in statements, options, and declarations to hide some complexities and facilitate extending the language while minimizing keystrokes.
Powerful iterative processing with good support for structured programming.
There were to be no reserved words (although this goal initially proved impossible to meet for the built-in function names DATE and TIME). New attributes, statements and statement options could be added to PL/I without invalidating existing programs. Not even IF, THEN, ELSE, and DO were reserved.
Orthogonality: each capability to be independent of other capabilities and freely combined with other capabilities wherever meaningful. Each capability to be available in all contexts where meaningful, to exploit it as widely as possible and to avoid "arbitrary restrictions". Orthogonality helps make the language "large".
Exception handling capabilities for controlling and intercepting exceptional conditions at run time.
Programs divided into separately compilable sections, with extensive compile-time facilities (a.k.a. macros), not part of the standard, for tailoring and combining sections of source code into complete programs. External names to bind separately compiled procedures into a single program.
Debugging facilities integrated into the language.
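The inheritance of array extents by parameters, mentioned above, can be sketched as follows. LBOUND and HBOUND are standard built-in functions that return the bounds the parameter picked up from its argument, and the asterisk extent in the declaration of X is what lets the procedure accept an array of any size; the program itself is only a sketch:

   AVERAGE: PROCEDURE OPTIONS(MAIN);
      DECLARE SCORES(4) FLOAT DECIMAL(6) INIT(70, 80, 90, 100);
      CALL REPORT(SCORES);
      REPORT: PROCEDURE(X);
         DECLARE X(*) FLOAT DECIMAL(6);   /* bounds inherited from the argument */
         DECLARE I FIXED BINARY(15);
         DECLARE TOTAL FLOAT DECIMAL(6) INIT(0);
         DO I = LBOUND(X, 1) TO HBOUND(X, 1);
            TOTAL = TOTAL + X(I);
         END;
         PUT SKIP LIST('MEAN =', TOTAL / (HBOUND(X, 1) - LBOUND(X, 1) + 1));
      END REPORT;
   END AVERAGE;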
Language summary
The language is designed to be all things to all programmers. The summary is extracted from the ANSI PL/I Standard and the ANSI PL/I General-Purpose Subset Standard.

A PL/I program consists of a set of procedures, each of which is written as a sequence of statements. The %INCLUDE construct is used to include text from other sources during program translation. All of the statement types are summarized here in groupings which give an overview of the language (the Standard uses this organization).
(Features such as multi-tasking and the PL/I preprocessor are not in the Standard but are supported in the PL/I F compiler and some other implementations are discussed in the Language evolution section.)
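A sketch of the %INCLUDE construct mentioned above is shown below; the library and member names are hypothetical, and the way a member is located (for example, a partitioned data set on the mainframe) depends on the implementation:

   PAYROLL: PROCEDURE OPTIONS(MAIN);
      /* Pull shared declarations from the (hypothetical) member PAYDEFS at translation time */
      %INCLUDE SYSLIB(PAYDEFS);
      PUT SKIP LIST('DECLARATIONS INCLUDED');
   END PAYROLL;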
Names may be declared to represent data of the following types, either as single values, or as aggregates in the form of arrays, with a lower-bound and upper-bound per dimension, or structures (comprising nested structure, array and scalar variables):
The arithmetic type comprises these attributes:
The base, scale, precision and scale factor of the Picture-for-arithmetic type are encoded within the picture-specification. The mode is specified separately, with the picture specification applied to both the real and the imaginary parts.
Values are computed by expressions written using a specific set of operations and builtin functions, most of which may be applied to aggregates as well as to single values, together with user-defined procedures which, likewise, may operate on and return aggregate as well as single values. The assignment statement assigns values to one or more variables.
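For example, whole arrays can appear in expressions and assignments, and many built-in functions accept aggregates; the following fragment is a sketch of this behaviour (SUM is a standard built-in function):

   AGG: PROCEDURE OPTIONS(MAIN);
      DECLARE A(5) FIXED BINARY(31);
      DECLARE B(5) FIXED BINARY(31);
      DECLARE TOTAL FIXED BINARY(31);
      A = 3;                 /* the scalar is assigned to every element of A   */
      B = A * 2 + 1;         /* the expression is evaluated element by element */
      TOTAL = SUM(B);        /* a built-in function applied to an aggregate    */
      PUT SKIP LIST(TOTAL);  /* TOTAL is 35: five elements, each 7             */
   END AGG;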
There are no reserved words in PL/I. A statement is terminated by a semi-colon. The maximum length of a statement is implementation-defined. A comment may appear anywhere in a program where a space is permitted and is preceded by the characters forward slash, asterisk and is terminated by the characters asterisk, forward slash (i.e. /* This is a comment. */). Statements may have a label-prefix introducing an entry name (ENTRY and PROCEDURE statements) or label name, and a condition prefix enabling or disabling a computational condition – e.g. (NOSIZE). Entry and label names may be single identifiers or identifiers followed by a subscript list of constants (as in L(12,2):A=0;).
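Because keywords are not reserved, they can be used as ordinary identifiers. The following fragment, a classic illustration rather than useful code, is a sketch of legal PL/I in which the variables are deliberately named after keywords:

   NOKEYS: PROCEDURE OPTIONS(MAIN);
      /* IF, THEN and ELSE are ordinary variables here, not reserved words */
      DECLARE (IF, THEN, ELSE) FIXED BINARY INIT(0);
      IF IF = THEN THEN
         THEN = ELSE;
      ELSE
         ELSE = IF;
      PUT SKIP LIST(IF, THEN, ELSE);
   END NOKEYS;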
A sequence of statements becomes a group when preceded by a DO statement and followed by an END statement. Groups may include nested groups and begin blocks. The IF statement specifies a group or a single statement as the THEN part and the ELSE part (see the sample program). The group is the unit of iteration. The begin block (BEGIN; stmt-list END;) may contain declarations for names and internal procedures local to the block. A procedure starts with a PROCEDURE statement and is terminated syntactically by an END statement. The body of a procedure is a sequence of blocks, groups, and statements and contains declarations for names and procedures local to the procedure or EXTERNAL to the procedure.
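A short sketch of these constructs, under the same assumptions as the earlier examples: a DO ... END group performs the iteration, and a begin block containing its own local declaration serves as the THEN unit of an IF statement:

   DEMO: PROCEDURE OPTIONS(MAIN);
      DECLARE I FIXED BINARY(15);
      DECLARE TOTAL FIXED BINARY(31) INIT(0);
      DO I = 1 TO 10;                 /* DO ... END delimit a group */
         TOTAL = TOTAL + I;
      END;
      IF TOTAL > 50 THEN
         BEGIN;                       /* a begin block with a declaration local to it */
            DECLARE MSG CHARACTER(30) VARYING;
            MSG = 'TOTAL EXCEEDS FIFTY';
            PUT SKIP LIST(MSG, TOTAL);
         END;
      ELSE
         PUT SKIP LIST('TOTAL IS', TOTAL);
   END DEMO;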
An ON-unit is a single statement or block of statements written to be executed when one or more of these conditions occur:
a computational condition,
or an Input/Output condition,
or one of the conditions:
AREA, CONDITION (identifier), ERROR, FINISH

A declaration of an identifier may contain one or more of the following attributes (but they need to be mutually consistent):
Current compilers from Micro Focus, and particularly that from IBM implement many extensions over the standardized version of the language. The IBM extensions are summarised in the Implementation sub-section for the compiler later. Although there are some extensions common to these compilers the lack of a current standard means that compatibility is not guaranteed.
Standardization
Language standardization began in April 1966 in Europe with ECMA TC10. In 1969 ANSI established a "Composite Language Development Committee", nicknamed "Kludge", later renamed X3J1 PL/I. Standardization became a joint effort of ECMA TC/10 and ANSI X3J1. A subset of the GY33-6003 document was offered to the joint effort by IBM and became the base document for standardization. The major features omitted from the base document were multitasking and the attributes for program optimization (e.g. NORMAL and ABNORMAL).
Proposals to change the base document were voted upon by both committees. In the event that the committees disagreed, the chairs, initially Michael Marcotty of General Motors and C.A.R. Hoare representing ICL had to resolve the disagreement. In addition to IBM, Honeywell, CDC, Data General, Digital Equipment Corporation, Prime Computer, Burroughs, RCA, and Univac served on X3J1 along with major users Eastman Kodak, MITRE, Union Carbide, Bell Laboratories, and various government and university representatives. Further development of the language occurred in the standards bodies, with continuing improvements in structured programming and internal consistency, and with the omission of the more obscure or contentious features.
As language development neared an end, X3J1/TC10 realized that there were a number of problems with a document written in English text. Discussion of a single item might appear in multiple places which might or might not agree. It was difficult to determine if there were omissions as well as inconsistencies. Consequently, David Beech (IBM), Robert Freiburghouse (Honeywell), Milton Barber (CDC), M. Donald MacLaren (Argonne National Laboratory), Craig Franklin (Data General), Lois Frampton (Digital Equipment Corporation), and editor, D.J. Andrews of IBM undertook to rewrite the entire document, each producing one or more complete chapters. The standard is couched as a formal definition using a "PL/I Machine" to specify the semantics. It was the first programming language standard to be written as a semi-formal definition.
A "PL/I General-Purpose Subset" ("Subset-G") standard was issued by ANSI in 1981 and a revision published in 1987. The General Purpose subset was widely adopted as the kernel for PL/I implementations.
Implementations
IBM PL/I F and D compilers
PL/I was first implemented by IBM, at its Hursley Laboratories in the United Kingdom, as part of the development of System/360. The first production PL/I compiler was the PL/I F compiler for the OS/360 Operating System, built by John Nash's team at Hursley in the UK: the runtime library team was managed by I.M. (Nobby) Clarke. The PL/I F compiler was written entirely in System/360 assembly language. Release 1 shipped in 1966. OS/360 is a real-memory environment and the compiler was designed for systems with as little as 64 kilobytes of real storage – F being 64 kB in S/360 parlance. To fit a large compiler into the 44 kilobytes of memory available on a 64-kilobyte machine, the compiler consists of a control phase and a large number of compiler phases (approaching 100). The phases are brought into memory from disk, one at a time, to handle particular language features and aspects of compilation. Each phase makes a single pass over the partially-compiled program, usually held in memory.
Aspects of the language were still being designed as PL/I F was implemented, so some were omitted until later releases. PL/I RECORD I/O was shipped with PL/I F Release 2. The list processing functions – Based Variables, Pointers, Areas and Offsets and LOCATE-mode I/O – were first shipped in Release 4. In a major attempt to speed up PL/I code to compete with Fortran object code, PL/I F Release 5 does substantial program optimization of DO-loops facilitated by the REORDER option on procedures.
A version of PL/I F was released on the TSS/360 timesharing operating system for the System/360 Model 67, adapted at the IBM Mohansic Lab. The IBM La Gaude Lab in France developed "Language Conversion Programs" to convert Fortran, Cobol, and Algol programs to the PL/I F level of PL/I.
The PL/I D compiler, using 16 kilobytes of memory, was developed by IBM Germany for the DOS/360 low end operating system. It implements a subset of the PL/I language requiring all strings and arrays to have fixed extents, thus simplifying the run-time environment. Reflecting the underlying operating system, it lacks dynamic storage allocation and the controlled storage class. It was shipped within a year of PL/I F.
Multics PL/I and derivatives
Compilers were implemented by several groups in the early 1960s. The Multics project at MIT, one of the first to develop an operating system in a high-level language, used Early PL/I (EPL), a subset dialect of PL/I, as their implementation language in 1964. EPL was developed at Bell Labs and MIT by Douglas McIlroy, Robert Morris, and others. Initially, it was developed using the TMG compiler-compiler. The influential Multics PL/I compiler was the source of compiler technology used by a number of manufacturers and software groups. EPL was a system programming language and a dialect of PL/I that had some capabilities absent in the original PL/I.
The Honeywell PL/I compiler (for Series 60) is an implementation of the full ANSI X3J1 standard.
IBM PL/I optimizing and checkout compilers
The PL/I Optimizer and Checkout compilers produced in Hursley support a common level of PL/I language and aimed to replace the PL/I F compiler. The checkout compiler is a rewrite of PL/I F in BSL, IBM's PL/I-like proprietary implementation language (later PL/S). The performance objectives set for the compilers are shown in an IBM presentation to the BCS. The compilers had to produce identical results – the Checkout Compiler is used to debug programs that would then be submitted to the Optimizer. Given that the compilers had entirely different designs and were handling the full PL/I language this goal was challenging: it was achieved.
IBM introduced new attributes and syntax including BUILTIN, case statements (SELECT/WHEN/OTHERWISE), loop controls (ITERATE and LEAVE) and null argument lists to disambiguate, e.g., DATE().
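A brief hedged sketch of these additions (the names Grade, Table, Key and I, and the literal values, are purely illustrative):
SELECT (Grade);
   WHEN ('A', 'B') PUT LIST ('Pass');
   WHEN ('C')      PUT LIST ('Marginal');
   OTHERWISE       PUT LIST ('Fail');
END;
DO I = 1 TO HBOUND(Table, 1);
   IF Table(I) = 0   THEN ITERATE;      /* skip to the next iteration       */
   IF Table(I) = Key THEN LEAVE;        /* exit the loop at the first match */
END;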
The PL/I optimizing compiler took over from the PL/I F compiler and was IBM's workhorse compiler from the 1970s to the 1990s. Like PL/I F, it is a multiple pass compiler with a 44 kilobyte design point, but it is an entirely new design. Unlike the F compiler, it has to perform compile time evaluation of constant expressions using the run-time library, reducing the maximum memory for a compiler phase to 28 kilobytes. A second-time around design, it succeeded in eliminating the annoyances of PL/I F such as cascading diagnostics. It was written in S/360 Macro Assembler by a team, led by Tony Burbridge, most of whom had worked on PL/I F. Macros were defined to automate common compiler services and to shield the compiler writers from the task of managing real-mode storage, allowing the compiler to be moved easily to other memory models. The gamut of program optimization techniques developed for the contemporary IBM Fortran H compiler were deployed: the Optimizer equaled Fortran execution speeds in the hands of good programmers. Announced with IBM S/370 in 1970, it shipped first for the DOS/360 operating system in August 1971, and shortly afterward for OS/360, and the first virtual memory IBM operating systems OS/VS1, MVS, and VM/CMS. (The developers were unaware that while they were shoehorning the code into 28 kb sections, IBM Poughkeepsie was finally ready to ship virtual memory support in OS/360). It supported the batch programming environments and, under TSO and CMS, it could be run interactively. This compiler went through many versions covering all mainframe operating systems including the operating systems of the Japanese plug-compatible machines (PCMs).
The compiler has been superseded by "IBM PL/I for OS/2, AIX, Linux, z/OS" below.
The PL/I checkout compiler, (colloquially "The Checker") announced in August 1970 was designed to speed and improve the debugging of PL/I programs. The team was led by Brian Marks. The three-pass design cut the time to compile a program to 25% of that taken by the F Compiler. It can be run from an interactive terminal, converting PL/I programs into an internal format, "H-text". This format is interpreted by the Checkout compiler at run-time, detecting virtually all types of errors. Pointers are represented in 16 bytes, containing the target address and a description of the referenced item, thus permitting "bad" pointer use to be diagnosed. In a conversational environment when an error is detected, control is passed to the user who can inspect any variables, introduce debugging statements and edit the source program. Over time the debugging capability of mainframe programming environments developed most of the functions offered by this compiler and it was withdrawn (in the 1990s?)
DEC PL/I
Perhaps the most commercially successful implementation aside from IBM's was Digital Equipment Corporation's VAX PL/I, later known as DEC PL/I. The implementation is "a strict superset of the ANSI X3.74-1981 PL/I General Purpose Subset and provides most of the features of the new ANSI X3.74-1987 PL/I General Purpose Subset", and was first released in 1988. It originally used a compiler backend named the VAX Code Generator (VCG) created by a team led by Dave Cutler. The front end was designed by Robert Freiburghouse, and was ported to VAX/VMS from Multics. It runs on VMS on VAX and Alpha, and on Tru64. During the 1990s, Digital sold the compiler to UniPrise Systems, who later sold it to a company named Kednos. Kednos marketed the compiler as Kednos PL/I until October 2016 when the company ceased trading.
Teaching subset compilers
In the late 1960s and early 1970s, many US and Canadian universities were establishing time-sharing services on campus and needed conversational compiler/interpreters for use in teaching science, mathematics, engineering, and computer science. Dartmouth was developing BASIC, but PL/I was a popular choice, as it was concise and easy to teach. As the IBM offerings were unsuitable, a number of schools built their own subsets of PL/I and their own interactive support. Examples are:
In the 1960s and early 1970s, Allen-Babcock implemented the Remote Users of Shared Hardware (RUSH) time sharing system for an IBM System/360 Model 50 with custom microcode and subsequently implemented IBM's CPS, an interactive time-sharing system for OS/360 aimed at teaching computer science basics, which offered a limited subset of the PL/I language in addition to BASIC and a remote job entry facility.
PL/C, a teaching dialect with a compiler developed at Cornell University, had the unusual capability of never failing to compile any program, through the use of extensive automatic correction of many syntax errors and by converting any remaining syntax errors to output statements. The language was almost all of PL/I as implemented by IBM, and PL/C was a very fast compiler.
SL/1 (Student Language/1, Student Language/One or Subset Language/1) was a PL/I subset, initially available late 1960s, that ran interpretively on the IBM 1130; instructional use was its strong point.
PLAGO, created at the Polytechnic Institute of Brooklyn, used a simplified subset of the PL/I language and focused on good diagnostic error messages and fast compilation times.
The Computer Systems Research Group of the University of Toronto produced the SP/k compilers which supported a sequence of subsets of PL/I called SP/1, SP/2, SP/3, ..., SP/8 for teaching programming. Programs that ran without errors under the SP/k compilers produced the same results under other contemporary PL/I compilers such as IBM's PL/I F compiler, IBM's checkout compiler or Cornell University's PL/C compiler.
Other examples are PL0 by P. Grouse at the University of New South Wales, PLUM by Marvin Victor Zelkowitz at the University of Maryland, and PLUTO from the University of Toronto.
IBM PL/I for OS/2, AIX, Linux, z/OS
In a major revamp of PL/I, IBM Santa Teresa in California launched an entirely new compiler in 1992. The initial shipment was for OS/2 and included most ANSI-G features and many new PL/I features. Subsequent releases provided additional platforms (MVS, VM, OS/390, AIX and Windows), but as of 2021, the only supported platforms are z/OS and AIX. IBM continued to add functions to make PL/I fully competitive with other languages (particularly C and C++) in areas where it had been overtaken. The corresponding "IBM Language Environment" supports inter-operation of PL/I programs with Database and Transaction systems, and with programs written in C, C++, and COBOL, the compiler supports all the data types needed for intercommunication with these languages.
The PL/I design principles were retained and withstood this major extension, comprising several new data types, new statements and statement options, new exception conditions, and new organisations of program source. The resulting language is a compatible super-set of the PL/I Standard and of the earlier IBM compilers. Major topics added to PL/I were:
New attributes for better support of user-defined data types – the DEFINE ALIAS, DEFINE ORDINAL, and DEFINE STRUCTURE statements to introduce user-defined types, the HANDLE locator data type, the TYPE data type itself, the UNION data type, and built-in functions for manipulating the new types.
Additional data types and attributes corresponding to common PC data types (e.g. UNSIGNED, VARYINGZ).
Improvements in readability of programs – often rendering implied usages explicit (e.g. BYVALUE attribute for parameters)
Additional structured programming constructs.
Interrupt handling additions.
Compile time preprocessor extended to offer almost all PL/I string handling features and to interface with the Application Development Environment.
The latest series of PL/I compilers for z/OS, called Enterprise PL/I for z/OS, leverages code generation for the latest z/Architecture processors (z14, z13, zEC12, zBC12, z196, z114) via the ARCHLVL parameter passed during compilation, and was the second high-level language supported by z/OS Language Environment to do so (XL C/C++ being the first, and Enterprise COBOL v5 the last).
Data types
ORDINAL is a new computational data type. The ordinal facilities are like those in Pascal,
e.g. DEFINE ORDINAL Colour (red, yellow, green, blue, violet);
but in addition the name and internal values are accessible via built-in functions. Built-in functions provide access to an ordinal value's predecessor and successor.
The DEFINE-statement (see below) allows additional TYPEs to be declared composed from PL/I's built-in attributes.
The HANDLE(data structure) locator data type is similar to the POINTER data type, but strongly typed to bind only to a particular data structure. The => operator is used to select a data structure using a handle.
The UNION attribute (equivalent to CELL in early PL/I specifications) permits several scalar variables, arrays, or structures to share the same storage in a unit that occupies the amount of storage needed for the largest alternative.
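A sketch of the ORDINAL, TYPE and UNION facilities is shown below; the names Colour, Paint and Overlay are invented here, and ORDINALNAME and ORDINALSUCC are the built-in functions provided by the IBM compilers:
DEFINE ORDINAL Colour (red, yellow, green, blue, violet);
DECLARE Paint TYPE Colour;
Paint = yellow;
PUT LIST (ORDINALNAME(Paint));          /* the name of the current value          */
Paint = ORDINALSUCC(Paint);             /* the successor value, i.e. green        */
DECLARE 1 Overlay UNION,                /* Full and Bytes occupy the same storage */
          2 Full  FIXED BINARY (31),
          2 Bytes (4) UNSIGNED FIXED BINARY (8);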
Competitiveness on PC and with C
These attributes were added:
The string attributes VARYINGZ (for zero-terminated character strings), HEXADEC, WIDECHAR, and GRAPHIC.
The optional arithmetic attributes UNSIGNED and SIGNED, BIGENDIAN and LITTLEENDIAN. UNSIGNED necessitated the UPTHRU and DOWNTHRU option on iterative groups enabling a counter-controlled loop to be executed without exceeding the limit value (also essential for ORDINALs and good for documenting loops).
The DATE(pattern) attribute for controlling date representations, and additions to bring time and date handling to current best practice. New functions for manipulating dates include DAYS and DAYSTODATE for converting between dates and numbers of days, and a general DATETIME function for changing date formats.
New string-handling functions were added – to centre text, to edit using a picture format, and to trim blanks or selected characters from the head or tail of text – along with VERIFYR to VERIFY from the right, and the SEARCH and TALLY functions.
Compound assignment operators à la C (e.g. +=, &=, -=, ||=) were added. A+=1 is equivalent to A=A+1.
Additional parameter descriptors and attributes were added for omitted arguments and variable length argument lists.
Program readability – making intentions explicit
The VALUE attribute declares an identifier as a constant (derived from a specific literal value or restricted expression).
Parameters can have the BYADDR (pass by address) or BYVALUE (pass by value) attributes.
The ASSIGNABLE and NONASSIGNABLE attributes prevent unintended assignments.
DO FOREVER; obviates the need for the contrived construct DO WHILE ( '1'B );.
The DEFINE-statement introduces user-specified names (e.g. INTEGER) for combinations of built-in attributes (e.g. FIXED BINARY(31,0)). Thus DEFINE ALIAS INTEGER FIXED BINARY(31,0) creates the TYPE name INTEGER as an alias for the set of built-in attributes FIXED BINARY(31,0). DEFINE STRUCTURE applies to structures and their members; it provides a TYPE name for a set of structure attributes and corresponding substructure member declarations for use in a structure declaration (a generalisation of the LIKE attribute).
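A hedged sketch of these declarations (Max_Rows, Integer, Point, Count and Origin are invented names):
DECLARE Max_Rows FIXED BINARY (31) VALUE (100);   /* a named constant                 */
DEFINE ALIAS Integer FIXED BINARY (31,0);
DEFINE STRUCTURE
   1 Point,
     2 X TYPE Integer,
     2 Y TYPE Integer;
DECLARE Count  TYPE Integer;
DECLARE Origin TYPE Point;
Origin.X, Origin.Y = 0;                           /* multiple assignment              */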
Structured programming additions
A LEAVE statement to exit a loop, and an ITERATE to continue with the next iteration of a loop.
UPTHRU and DOWNTHRU options on iterative groups.
The package construct consisting of a set of procedures and declarations for use as a unit. Variables declared outside of the procedures are local to the package, and can use STATIC, BASED or CONTROLLED storage. Procedure names used in the package also are local, but can be made external by means of the EXPORTS option of the PACKAGE-statement.
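A minimal sketch of a package, assuming the IBM-style PACKAGE ... EXPORTS syntax (the names Counter, Increment and Current are invented):
Counter: PACKAGE EXPORTS (Increment, Current);
   DECLARE Count FIXED BINARY (31) STATIC INITIAL (0);   /* local to the package */
   Increment: PROCEDURE;
      Count = Count + 1;
   END Increment;
   Current: PROCEDURE RETURNS (FIXED BINARY (31));
      RETURN (Count);
   END Current;
END Counter;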
Interrupt handling
The RESIGNAL-statement executed in an ON-unit terminates execution of the ON-unit, and raises the condition again in the procedure that called the current one (thus passing control to the corresponding ON-unit for that procedure).
The INVALIDOP condition handles invalid operation codes detected by the PC processor, as well as illegal arithmetic operations such as subtraction of two infinite values.
The ANYCONDITION condition is provided to intercept conditions for which no specific ON-unit has been provided in the current procedure.
The STORAGE condition is raised when an ALLOCATE statement is unable to obtain sufficient storage.
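A short hedged sketch of these facilities, intercepting otherwise-unhandled conditions and passing them on to the caller:
ON ANYCONDITION
   BEGIN;
      PUT SKIP LIST ('Unexpected condition - logging it here');
      RESIGNAL;          /* re-raise the condition in the calling procedure */
   END;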
Other mainframe and minicomputer compilers
A number of vendors produced compilers to compete with IBM PL/I F or Optimizing compiler on mainframes and minicomputers in the 1970s. In the 1980s the target was usually the emerging ANSI-G subset.
In 1974 Burroughs Corporation announced PL/I for the B6700 and B7700.
UNIVAC released a UNIVAC PL/I, and in the 1970s also used a variant of PL/I, PL/I PLUS, for system programming.
From 1978 Data General provided PL/I on its Eclipse and Eclipse MV platforms running the AOS, AOS/VS & AOS/VS II operating systems. A number of operating system utility programs were written in the language.
Paul Abrahams of NYU's Courant Institute of Mathematical Sciences wrote CIMS PL/I in 1972 in PL/I itself, bootstrapping via PL/I F. It supported "about 70%" of PL/I, compiling to the CDC 6600.
CDC delivered an optimizing subset PL/I compiler for Cyber 70, 170 and 6000 series.
Fujitsu delivered a PL/I compiler equivalent to the PL/I Optimizer.
Stratus Technologies PL/I is an ANSI G implementation for the VOS operating system.
IBM Series/1 PL/I is an extended subset of ANSI Programming Language PL/I (ANSI X3.53-1976) for the IBM Series/1 Realtime Programming System.
PL/I compilers for Microsoft .NET
In 2011, Raincode designed a full legacy compiler for the Microsoft .NET and .NET Core platforms, named The Raincode PL/I compiler.
PL/I compilers for personal computers and Unix
In the 1970s and 1980s Digital Research sold a PL/I compiler for CP/M (PL/I-80), CP/M-86 (PL/I-86) and Personal Computers with DOS. It was based on Subset G of PL/I and was written in PL/M.
Micro Focus implemented Open PL/I for Windows and UNIX/Linux systems, which they acquired from Liant.
IBM delivered PL/I for OS/2 in 1994, and PL/I for AIX in 1995.
Iron Spring PL/I for OS/2 and later Linux was introduced in 2007.
PL/I dialects
PL/S, a dialect of PL/I, initially called BSL was developed in the late 1960s and became the system programming language for IBM mainframes. Almost all IBM mainframe system software in the 1970s and 1980s was written in PL/S. It differed from PL/I in that there were no data type conversions, no run-time environment, structures were mapped differently, and assignment was a byte by byte copy. All strings and arrays had fixed extents, or used the REFER option. PL/S was succeeded by PL/AS, and then by PL/X, which is the language currently used for internal work on current operating systems, OS/390 and now z/OS. It is also used for some z/VSE and z/VM components. IBM Db2 for z/OS is also written in PL/X.
PL/C is an instructional dialect of the PL/I computer programming language, developed at Cornell University in the 1970s.
Two dialects of PL/I named PL/MP (Machine Product) and PL/MI (Machine Interface) were used by IBM in the system software of the System/38 and AS/400 platforms. PL/MP was used to implement the so-called Vertical Microcode of these platforms, and targeted the IMPI instruction set. PL/MI targets the Machine Interface of those platforms, and is used in the System/38 Control Program Facility, and the XPF layer of OS/400. The PL/MP code was mostly replaced with C++ when OS/400 was ported to the IBM RS64 processor family, although some was retained and retargeted for the PowerPC/Power ISA architecture. The PL/MI code was not replaced, and remains in use in IBM i.
PL/8 (or PL.8), so-called because it was about 80% of PL/I, was originally developed by IBM Research in the 1970s for the IBM 801 architecture. It later gained support for the Motorola 68000 and System/370 architectures. It continues to be used for several IBM internal systems development tasks (e.g. millicode and firmware for z/Architecture systems) and has been re-engineered to use a 64-bit gcc-based backend.
Honeywell, Inc. developed PL-6 for use in creating the CP-6 operating system.
Prime Computer used two different PL/I dialects as the system programming language of the PRIMOS operating system: PL/P, starting from version 18, and then SPL, starting from version 19.
XPL is a dialect of PL/I used to write other compilers using the XPL compiler techniques. XPL added a heap string datatype to its small subset of PL/I.
HAL/S is a real-time aerospace programming language, best known for its use in the Space Shuttle program. It was designed by Intermetrics in the 1970s for NASA. HAL/S was implemented in XPL.
IBM and various subcontractors also developed another PL/I variant in the early 1970s to support signal processing for the Navy called SPL/I.
SabreTalk, a real-time dialect of PL/I used to program the Sabre airline reservation system.
Usage
PL/I implementations were developed for mainframes from the late 1960s, mini computers in the 1970s, and personal computers in the 1980s and 1990s. Although its main use has been on mainframes, there are PL/I versions for DOS, Microsoft Windows, OS/2, AIX, OpenVMS, and Unix.
It has been widely used in business data processing and for system use for writing operating systems on certain platforms. Very complex and powerful systems have been built with PL/I:
The SAS System was initially written in PL/I; the SAS data step is still modeled on PL/I syntax.
The pioneering online airline reservation system Sabre was originally written for the IBM 7090 in assembler. The S/360 version was largely written using SabreTalk, a purpose built subset PL/I compiler for a dedicated control program.
The Multics operating system was largely written in PL/I.
PL/I was used to write an executable formal definition to interpret IBM's System Network Architecture.
Some components of the OpenVMS operating system were originally written in PL/I, but were later rewritten in C during the port of VMS to the IA64 architecture.
PL/I did not fulfill its supporters' hopes that it would displace Fortran and COBOL and become the major player on mainframes. It remained a minority but significant player. There cannot be a definitive explanation for this, but some trends in the 1970s and 1980s militated against its success by progressively reducing the territory on which PL/I enjoyed a competitive advantage.
First, the nature of the mainframe software environment changed. Application subsystems for database and transaction processing (CICS and IMS and Oracle on System 370) and application generators became the focus of mainframe users' application development. Significant parts of the language became irrelevant because of the need to use the corresponding native features of the subsystems (such as tasking and much of input/output). Fortran was not used in these application areas, confining PL/I to COBOL's territory; most users stayed with COBOL. But as the PC became the dominant environment for program development, Fortran, COBOL and PL/I all became minority languages overtaken by C++, Java and the like.
Second, PL/I was overtaken in the system programming field. The IBM system programming community was not ready to use PL/I; instead, IBM developed and adopted a proprietary dialect of PL/I for system programming – PL/S. With the success of PL/S inside IBM, and of C outside IBM, the unique PL/I strengths for system programming became less valuable.
Third, the development environments grew capabilities for interactive software development that, again, made the unique PL/I interactive and debugging strengths less valuable.
Fourth, features such as structured programming, character string operations, and object orientation were added to COBOL and Fortran, which further reduced PL/I's relative advantages.
On mainframes there were substantial business issues at stake too. IBM's hardware competitors had little to gain and much to lose from success of PL/I. Compiler development was expensive, and the IBM compiler groups had an in-built competitive advantage. Many IBM users wished to avoid being locked into proprietary solutions. With no early support for PL/I by other vendors it was best to avoid PL/I.
Evolution of the PL/I language
This article uses the PL/I standard as the reference point for language features. But a number of features of significance in the early implementations were not in the Standard; and some were offered by non-IBM compilers. And the de facto language continued to grow after the standard, ultimately driven by developments on the Personal Computer.
Significant features omitted from the standard
Multithreading
Multithreading, under the name "multitasking", was implemented by PL/I F, the PL/I Checkout and Optimizing compilers, and the newer AIX and Z/OS compilers. It comprised the data types EVENT and TASK, the TASK-option on the CALL-statement (Fork), the WAIT-statement (Join), the DELAY(delay-time), EVENT-options on the record I/O statements and the UNLOCK statement to unlock locked records on EXCLUSIVE files. Event data identify a particular event and indicate whether it is complete ('1'B) or incomplete ('0'B): task data items identify a particular task (or process) and indicate its priority relative to other tasks.
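A minimal sketch of this style of multitasking, as supported by those compilers (Step1, Step2, Work, Other and Done are invented names):
DECLARE Done EVENT;
CALL Step1 (Work) EVENT (Done);     /* run Step1 as an attached subtask ("fork")  */
CALL Step2 (Other);                 /* the parent task continues concurrently     */
WAIT (Done);                        /* "join": wait until Step1 has completed     */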
Preprocessor
The first IBM Compile time preprocessor was built by the IBM Boston Advanced Programming Center located in Cambridge, Mass, and shipped with the PL/I F compiler. The %INCLUDE statement was in the Standard, but the rest of the features were not. The DEC and Kednos PL/I compilers implemented much the same set of features as IBM, with some additions of their own. IBM has continued to add preprocessor features to its compilers. The preprocessor treats the written source program as a sequence of tokens, copying them to an output source file or acting on them. When a % token is encountered the following compile time statement is executed: when an identifier token is encountered and the identifier has been DECLAREd, ACTIVATEd, and assigned a compile time value, the identifier is replaced by this value. Tokens are added to the output stream if they do not require action (e.g. +), as are the values of ACTIVATEd compile time expressions. Thus a compile time variable PI could be declared, activated, and assigned using %PI='3.14159265'. Subsequent occurrences of PI would be replaced by 3.14159265.
The data types supported are FIXED DECIMAL integers and CHARACTER strings of varying length with no maximum length. The structure statements are:
%[label-list:]DO iteration: statements; %[label-list:]END;
%procedure-name: PROCEDURE (parameter list) RETURNS (type); statements...;
%[label-list:]END;
%[label-list:]IF...%THEN...%ELSE...
and the simple statements, which also may have a [label-list:]
%ACTIVATE(identifier-list) and %DEACTIVATE
assignment statement
%DECLARE identifier-attribute-list
%GO TO label
%INCLUDE
null statement
The feature allowed programmers to use identifiers for constants – e.g. product part numbers or mathematical constants – and was superseded in the standard by named constants for computational data. Conditional compiling and iterative generation of source code, possible with compile-time facilities, was not supported by the standard. Several manufacturers implemented these facilities.
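A hedged sketch of the compile-time facilities described above (the names PI, A and R are illustrative):
%DECLARE PI CHARACTER;
%PI = '3.14159265';
A = PI * R * R;     /* the preprocessor output text reads:  A = 3.14159265 * R * R;  */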
Structured programming additions
Structured programming additions were made to PL/I during standardization but were not accepted into the standard. These features were the LEAVE-statement to exit from an iterative DO, the UNTIL-option and REPEAT-option added to DO, and a case statement of the general form:
SELECT (expression) {WHEN (expression) group}... OTHERWISE group
These features were all included in IBM's PL/I Checkout and Optimizing compilers and in DEC PL/I.
Debug facilities
PL/I F had offered some debug facilities that were not put forward for the standard but were implemented by others – notably the CHECK(variable-list) condition prefix, CHECK on-condition and the SNAP option. The IBM Optimizing and Checkout compilers added additional features appropriate to the conversational mainframe programming environment (e.g. an ATTENTION condition).
Significant features developed since the standard
Several attempts had been made to design a structure member type that could have one of several datatypes (CELL in early IBM). With the growth of classes in programming theory, approaches to this became possible on a PL/I base – UNION, TYPE etc. have been added by several compilers.
PL/I had been conceived in a single-byte character world. With support for Japanese and Chinese language becoming essential, and the developments on International Code Pages, the character string concept was expanded to accommodate wide non-ASCII/EBCDIC strings.
Time and date handling were overhauled to deal with the millennium problem, with the introduction of the DATETIME function that returned the date and time in one of about 35 different formats. Several other date functions deal with conversions to and from days and seconds.
Criticisms
Implementation issues
Though the language is easy to learn and use, implementing a PL/I compiler is difficult and time-consuming. A language as large as PL/I needed subsets that most vendors could produce and most users master. This was not resolved until "ANSI G" was published. The compile time facilities, unique to PL/I, took added implementation effort and additional compiler passes. A PL/I compiler was two to four times as large as comparable Fortran or COBOL compilers, and also that much slower – supposedly offset by gains in programmer productivity. This was anticipated in IBM before the first compilers were written.
Some argue that PL/I is unusually hard to parse. The PL/I keywords are not reserved so programmers can use them as variable or procedure names in programs. Because the original PL/I(F) compiler attempts auto-correction when it encounters a keyword used in an incorrect context, it often assumes it is a variable name. This leads to "cascading diagnostics", a problem solved by later compilers.
The effort needed to produce good object code was perhaps underestimated during the initial design of the language. Program optimization (needed to compete with the excellent program optimization carried out by available Fortran compilers) is unusually complex owing to side effects and pervasive problems with aliasing of variables. Unpredictable modification can occur asynchronously in exception handlers, which may be provided by "ON statements" in (unseen) callers. Together, these make it difficult to reliably predict when a program's variables might be modified at runtime. In typical use, however, user-written error handlers (the ON-unit) often do not make assignments to variables. In spite of the aforementioned difficulties, IBM produced the PL/I Optimizing Compiler in 1971.
PL/I contains many rarely used features, such as multitasking support (an IBM extension to the language) which add cost and complexity to the compiler, and its co-processing facilities require a multi-programming environment with support for non-blocking multiple threads for processes by the operating system. Compiler writers were free to select whether to implement these features.
An undeclared variable is, by default, declared by first occurrence—thus misspelling might lead to unpredictable results. This "implicit declaration" is no different from FORTRAN programs. For PL/I(F), however, an attribute listing enables the programmer to detect any misspelled or undeclared variable.
Programmer issues
Many programmers were slow to move from COBOL or Fortran due to a perceived complexity of the language and immaturity of the PL/I F compiler. Programmers were sharply divided into scientific programmers (who used Fortran) and business programmers (who used COBOL), with significant tension and even dislike between the groups. PL/I syntax borrowed from both COBOL and Fortran syntax. So instead of noticing features that would make their job easier, Fortran programmers of the time noticed COBOL syntax and had the opinion that it was a business language, while COBOL programmers noticed Fortran syntax and looked upon it as a scientific language.
Both COBOL and Fortran programmers viewed it as a "bigger" version of their own language, and both were somewhat intimidated by the language and disinclined to adopt it. Another factor was pseudo-similarities to COBOL, Fortran, and ALGOL. These were PL/I elements that looked similar to one of those languages, but worked differently in PL/I. Such frustrations left many experienced programmers with a jaundiced view of PL/I, and often an active dislike for the language. An early UNIX fortune file contained the following tongue-in-cheek description of the language:
Speaking as someone who has delved into the intricacies of PL/I, I am sure that only Real Men could have written such a machine-hogging, cycle-grabbing, all-encompassing monster. Allocate an array and free the middle third? Sure! Why not? Multiply a character string times a bit string and assign the result to a float decimal? Go ahead! Free a controlled variable procedure parameter and reallocate it before passing it back? Overlay three different types of variable on the same memory location? Anything you say! Write a recursive macro? Well, no, but Real Men use rescan. How could a language so obviously designed and written by Real Men not be intended for Real Man use?
On the positive side, full support for pointers to all data types (including pointers to structures), recursion, multitasking, string handling, and extensive built-in functions meant PL/I was indeed quite a leap forward compared to the programming languages of its time. However, these were not enough to persuade a majority of programmers or shops to switch to PL/I.
The PL/I F compiler's compile time preprocessor was unusual (outside the Lisp world) in using its target language's syntax and semantics (e.g. as compared to the C preprocessor's "#" directives).
Special topics in PL/I
Storage classes
PL/I provides several 'storage classes' to indicate how the lifetime of variables' storage is to be managed – STATIC, AUTOMATIC, CONTROLLED, and BASED. The simplest to implement is STATIC, which indicates that memory is allocated and initialized at load-time, as is done in COBOL "working-storage" and early Fortran. This is the default for EXTERNAL variables.
PL/I's default storage class for INTERNAL variables is AUTOMATIC, similar to that of other block-structured languages influenced by ALGOL, like the "auto" storage class in the C language, and default storage allocation in Pascal and "local-storage" in IBM COBOL. Storage for AUTOMATIC variables is allocated upon entry into the BEGIN-block, procedure, or ON-unit in which they are declared. The compiler and runtime system allocate memory for a stack frame to contain them and other housekeeping information. If a variable is declared with an INITIAL-attribute, code to set it to an initial value is executed at this time. Care is required to manage the use of initialization properly. Large amounts of code can be executed to initialize variables every time a scope is entered, especially if the variable is an array or structure. Storage for AUTOMATIC variables is freed at block exit: STATIC, CONTROLLED, or BASED variables are used to retain variables' contents between invocations of a procedure or block. CONTROLLED storage is also managed using a stack, but the pushing and popping of allocations on the stack is managed by the programmer, using ALLOCATE and FREE statements. Storage for BASED variables is managed using ALLOCATE/FREE, but instead of a stack these allocations have independent lifetimes and are addressed through OFFSET or POINTER variables.
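A hedged sketch of the four storage classes (the names Calls, Temp, Work, Node and P are invented for this example):
DECLARE Calls FIXED BINARY (31) STATIC INITIAL (0);  /* allocated and initialized at load time     */
DECLARE Temp  FLOAT DECIMAL (6) AUTOMATIC;           /* allocated at each entry to the block       */
DECLARE Work  CHARACTER (80) CONTROLLED;             /* programmer-managed stack of generations    */
DECLARE Node  FIXED BINARY (31) BASED (P);           /* addressed through the pointer P            */
DECLARE P POINTER;
ALLOCATE Work;              /* push a new generation of Work            */
FREE Work;                  /* pop it again                             */
ALLOCATE Node SET (P);      /* independent lifetime, located through P  */
FREE Node;                  /* release the generation addressed by P    */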
The AREA attribute is used to declare programmer-defined heaps. Data can be allocated and freed within a specific area, and the area can be deleted, read, and written as a unit.
Storage type sharing
There are several ways of accessing allocated storage through different data declarations. Some of these are well defined and safe, some can be used safely with careful programming, and some are inherently unsafe and/or machine dependent. Passing a variable as an argument to a parameter by reference allows the argument's allocated storage to be referenced using the parameter. The DEFINED attribute (e.g. DCL A(10,10), B(2:9,2:9) DEFINED A) allows part or all of a variable's storage to be used with a different, but consistent, declaration. The language definition includes a CELL attribute (later renamed UNION) to allow different definitions of data to share the same storage. This was not supported by many early IBM compilers. These usages are safe and machine independent.
Record I/O and list processing produce situations where the programmer needs to fit a declaration to the storage of the next record or item, before knowing what type of data structure it has. Based variables and pointers are key to such programs. The data structures must be designed appropriately, typically using fields in a data structure to encode information about its type and size. The fields can be held in the preceding structure or, with some constraints, in the current one. Where the encoding is in the preceding structure, the program needs to allocate a based variable with a declaration that matches the current item (using expressions for extents where needed). Where the type and size information are to be kept in the current structure ("self defining structures") the type-defining fields must be ahead of the type dependent items and in the same place in every version of the data structure. The REFER-option is used for self-defining extents (e.g. string lengths, as in DCL 1 A BASED, 2 N BINARY, 2 B CHAR(LENGTH REFER(N));), where LENGTH is used to allocate instances of the data structure. For self-defining structures, any typing and REFERed fields are placed ahead of the "real" data. If the records in a data set, or the items in a list of data structures, are organised this way they can be handled safely in a machine independent way.
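A minimal sketch of a self-defining structure using REFER (the names Rec, N, Text, Length and P are invented):
DECLARE 1 Rec BASED (P),
          2 N    FIXED BINARY (15),
          2 Text CHARACTER (Length REFER (N));
DECLARE Length FIXED BINARY (15);
DECLARE P POINTER;
Length = 40;
ALLOCATE Rec;       /* Text is given 40 characters and N is set to 40 in the new generation */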
PL/I implementations do not (except for the PL/I Checkout compiler) keep track of the data structure used when storage is first allocated. Any BASED declaration can be used with a pointer into the storage to access the storage – inherently unsafe and machine dependent. However, this usage has become important for "pointer arithmetic" (typically adding a certain amount to a known address). This has been a contentious subject in computer science. In addition to the problem of wild references and buffer overruns, issues arise due to the alignment and length for data types used with particular machines and compilers. Many cases where pointer arithmetic might be needed involve finding a pointer to an element inside a larger data structure. The ADDR function computes such pointers, safely and machine independently.
Pointer arithmetic may be accomplished by aliasing a binary variable with a pointer as in DCL P POINTER, N FIXED BINARY(31) BASED(ADDR(P));
N=N+255;
It relies on pointers being the same length as FIXED BINARY(31) integers and aligned on the same boundaries.
With the prevalence of C and its free and easy attitude to pointer arithmetic, recent IBM PL/I compilers allow pointers to be used with the addition and subtraction operators, giving the simplest syntax (but compiler options can disallow these practices where safety and machine independence are paramount).
ON-units and exception handling
When PL/I was designed, programs only ran in batch mode, with no possible intervention from the programmer at a terminal. An exceptional condition such as division by zero would abort the program yielding only a hexadecimal core dump. PL/I exception handling, via ON-units, allowed the program to stay in control in the face of hardware or operating system exceptions and to recover debugging information before closing down more gracefully. As a program became properly debugged, most of the exception handling could be removed or disabled: this level of control became less important when conversational execution became commonplace.
Computational exception handling is enabled and disabled by condition prefixes on statements, blocks (including ON-units) and procedures – e.g. (SIZE, NOSUBSCRIPTRANGE): A(I)=B(I)*C;. Operating system exceptions for Input/Output and storage management are always enabled.
The ON-unit is a single statement or BEGIN-block introduced by an ON-statement. Executing the ON statement enables the condition specified, e.g., ON ZERODIVIDE ON-unit. When the exception for this condition occurs and the condition is enabled, the ON-unit for the condition is executed. ON-units are inherited down the call chain. When a block, procedure or ON-unit is activated, the ON-units established by the invoking activation are inherited by the new activation. They may be over-ridden by another ON-statement and can be reestablished by the REVERT-statement. The exception can be simulated using the SIGNAL-statement – e.g. to help debug the exception handlers. The dynamic inheritance principle for ON-units allows a routine to handle the exceptions occurring within the subroutines it uses.
If no ON-unit is in effect when a condition is raised a standard system action is taken (often this is to raise the ERROR condition). The system action can be reestablished using the SYSTEM option of the ON-statement. With some conditions it is possible to complete executing an ON-unit and return to the point of interrupt (e.g., the STRINGRANGE, UNDERFLOW, CONVERSION, OVERFLOW, AREA, and FILE conditions) and resume normal execution. With other conditions such as (SUBSCRIPTRANGE), the ERROR condition is raised when this is attempted. An ON-unit may be terminated with a GO TO preventing a return to the point of interrupt, but permitting the program to continue execution elsewhere as determined by the programmer.
An ON-unit needs to be designed to deal with exceptions that occur in the ON-unit itself. The ON ERROR SYSTEM; statement allows a nested error trap; if an error occurs within an ON-unit, control might pass to the operating system where a system dump might be produced, or, for some computational conditions, continue execution (as mentioned above).
The PL/I RECORD I/O statements have relatively simple syntax as they do not offer options for the many situations from end-of-file to record transmission errors that can occur when a record is read or written. Instead, these complexities are handled in the ON-units for the various file conditions. The same approach was adopted for AREA sub-allocation and the AREA condition.
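A hedged sketch bringing these pieces together (the variable and label names are invented; SYSIN is the standard input file):
DECLARE (A, B, Ratio) FLOAT DECIMAL (6);
ON ENDFILE (SYSIN) GO TO Done;
ON ZERODIVIDE
   BEGIN;
      PUT SKIP LIST ('Zero divisor - result forced to 0');
      Ratio = 0;
      GO TO Show;            /* leave the ON-unit without returning to the point of interrupt */
   END;
Next: GET LIST (A, B);
   Ratio = A / B;            /* raises ZERODIVIDE when B = 0 */
Show: PUT SKIP LIST (A, B, Ratio);
   GO TO Next;
Done: PUT SKIP LIST ('End of input');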
The existence of exception handling ON-units can have an effect on optimization, because variables can be inspected or altered in ON-units. Values of variables that might otherwise be kept in registers between statements, may need to be returned to storage between statements. This is discussed in the section on Implementation Issues above.
GO TO with a non-fixed target
PL/I has counterparts for COBOL and FORTRAN's specialized GO TO statements.
Both COBOL and FORTRAN provide syntax for coding two special types of GO TO, each of which has a target that is not always the same.
ALTER (COBOL), ASSIGN (FORTRAN):
ALTER paragraph_name_xxx TO PROCEED TO para_name_zzz.
ASSIGN 1860 TO IGOTTAGO
GO TO IGOTTAGO
One enhancement, which adds built-in documentation, is GO TO IGOTTAGO (1860, 1914, 1939)
(which restricts the variable's value to "one of the labels in the list.")
GO TO ... based on a variable's subscript-like value:
GO TO (1914, 1939, 2140), MYCHOICE
GO TO para_One para_Two para_Three DEPENDING ON IDECIDE.
PL/I has statement label variables (with the LABEL attribute), which can store the value of a statement label and later be used in a GOTO statement. There are other/helpful restrictions on these, especially "in programs ... RECURSIVE attribute, in methods, or .. THREAD option."
LABL1: ....
.
.
LABL2: ...
.
.
.
MY_DEST = LABL1;
.
GO TO MY_DEST;
GO TO HERE(LUCKY_NUMBER); /* minus 1, zero, or ... */
HERE(-1): PUT LIST ("I O U"); GO TO Lottery;
HERE(0): PUT LIST ("No Cash"); GO TO Lottery;
HERE(1): PUT LIST ("Dollar Bill"); GO TO Lottery;
HERE(2): PUT LIST ("TWO DOLLARS"); GO TO Lottery;
Statement label variables can be passed to called procedures, and used to return to a different statement in the calling routine.
Sample programs
Hello world program
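A conventional PL/I hello-world program, given here as a minimal sketch, takes the following form:
Hello: PROCEDURE OPTIONS (MAIN);
   PUT LIST ('Hello, world!');
END Hello;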
Search for a string
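A minimal sketch of a string search using the INDEX built-in function (the names Text, Pattern and Where, and the literals, are invented for this example):
Search: PROCEDURE OPTIONS (MAIN);
   DECLARE Text    CHARACTER (40) INITIAL ('THE QUICK BROWN FOX');
   DECLARE Pattern CHARACTER (5)  INITIAL ('BROWN');
   DECLARE Where   FIXED BINARY (31);
   Where = INDEX (Text, Pattern);       /* 0 when the pattern does not occur */
   IF Where > 0 THEN
      PUT SKIP LIST ('Found at position', Where);
   ELSE
      PUT SKIP LIST ('Not found');
END Search;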
See also
List of programming languages
Timeline of programming languages
Textbooks
Neuhold, E.J. & Lawson, H.W. (1971). The PL/I Machine: An Introduction to Programming. Addison-Wesley. ISBN 978-0-2010-5275-6.
Barnes, R.A. (1979). PL/I for Programmers. North-Holland.
Hughes, Joan K. (1973). PL/I Programming (1st ed.). Wiley. ISBN 0-471-42032-8.
Hughes, Joan K. (1986). PL/I Structured Programming (3rd ed.). Wiley. ISBN 0-471-83746-6.
Groner, G.F. (1971). PL/I Programming in Technological Applications. Books on Demand, Ann Arbor, MI.
Anderson, M.E. (1973). PL/I for Programmers. Prentice-Hall.
Stoutemyer, D.R. (1971). PL/I Programming for Engineering & Science. Prentice-Hall.
Ziegler, R.R. & C. (1986). PL/I: Structured Programming and Problem Solving (1st ed.). West. ISBN 978-0-314-93915-9.
Sturm, E. (2009). The New PL/I ... for PC, Workstation and Mainframe. Vieweg-Teubner, Wiesbaden, Germany. ISBN 978-3-8348-0726-7.
Vowels, R.A. (1997). Introduction to PL/I, Algorithms, and Structured Programming (3rd ed.). R.A. Vowels. ISBN 978-0-9596384-9-3.
Abrahams, Paul (1979). The PL/I Programming Language (PDF). Courant Mathematics and Computing Laboratory, New York University.
Standards
ANSI ANSI X3.53-1976 (R1998) Information Systems - Programming Language - PL/I
ANSI ANSI X3.74-1981 (R1998) Information Systems - Programming Language - PL/I General-Purpose Subset
ANSI ANSI X3.74-1987 (R1998) Information Systems - Programming Language - PL/I General-Purpose Subset
ECMA 50 Programming Language PL/I, 1st edition, December 1976
ISO 6160:1979 Programming languages—PL/I
ISO/IEC 6522:1992 Information technology—Programming languages—PL/I general purpose subset
Reference manuals
Burroughs Corporation, "B 6700 / B 7700 PL/I Language Reference", 5001530. Detroit, 1977.
CDC. R. A. Vowels, "PL/I for CDC Cyber". Optimizing compiler for the CDC Cyber 70 series.
Digital Equipment Corporation, "decsystem10 Conversational Programming Language User's Manual", DEC-10-LCPUA-A-D. Maynard, 1975.
Fujitsu Ltd, "Facom OS IV PL/I Reference Manual", 70SP5402E-1,1974. 579 pages. PL/I F subset.
Honeywell, Inc., "Multics PL/I Language Specification", AG94-02. 1981.
IBM, IBM Operating System/360 PL/I: Language Specifications, C28-6571. 1965.
IBM, OS PL/I Checkout and Optimizing Compilers: Language Reference Manual, GC33-0009. 1976.
IBM, "NPL Technical Report", December 1964.
IBM, Enterprise PL/I for z/OS Version 4 Release 1 Language Reference Manual Archived 2020-07-28 at the Wayback Machine, SC14-7285-00. 2010.
IBM, OS/2 PL/I Version 2: Programming: Language Reference, 3rd Ed., Form SC26-4308, San Jose. 1994.
Kednos PL/I for OpenVMS Systems. Reference Manual, AA-H952E-TM. Nov 2003.
Liant Software Corporation (1994), Open PL/I Language Reference Manual, Rev. Ed., Framingham (Mass.).
Nixdorf Computer, "Terminalsystem 8820 Systemtechnischer Teil PL/I-Subset", 05001.17.8.93-01, 1976.
Ing. C. Olivetti, "Mini PL/I Reference Manual", 1975, No. 3970530 V
Q1 Corporation, "The Q1/LMC Systems Software Manual", Farmingdale, 1978.
IBM PL/I Compilers for z/OS, AIX, MVS, VM and VSE
Iron Spring Software, PL/I for Linux and OS/2
Micro Focus' Mainframe PL/I Migration Solution
OS PL/I V2R3 grammar Version 0.1
Pliedit, PL/I editor for Eclipse
Power vs. Adventure - PL/I and C, a side-by-side comparison of PL/I and C.
Softpanorama PL/1 page
The PL/I Language
PL1GCC project in SourceForge
PL/1 software to print signs, source code in book form, by David Sligar (1977), for IBM PL/1 F compiler.
An open source PL/I Compiler for Windows NT
ECMAScript (ES) is a standard for scripting languages, including JavaScript, JScript, and ActionScript. It is best known as a JavaScript standard intended to ensure the interoperability of web pages across different web browsers. It is standardized by Ecma International in the document ECMA-262.
ECMAScript is commonly used for client-side scripting on the World Wide Web, and it is increasingly being used to write server-side applications and services using Node.js and other runtime environments.
ECMAScript, ECMA-262, JavaScript
ECMA-262, or the ECMAScript Language Specification, defines the ECMAScript Language, or just ECMAScript. ECMA-262 specifies only language syntax and the semantics of the core application programming interface (API), such as Array, Function, and globalThis, while valid implementations of JavaScript add their own functionality such as input/output and file system handling.
History
The ECMAScript specification is a standardized specification of a scripting language developed by Brendan Eich of Netscape; initially named Mocha, then LiveScript, and finally JavaScript. In December 1995, Sun Microsystems and Netscape announced JavaScript in a press release. In November 1996, Netscape announced a meeting of the Ecma International standards organization to advance the standardization of JavaScript. The first edition of ECMA-262 was adopted by the Ecma General Assembly in June 1997. Several editions of the language standard have been published since then. The name "ECMAScript" was a compromise between the organizations involved in standardizing the language, especially Netscape and Microsoft, whose disputes dominated the early standards sessions. Eich commented that "ECMAScript was always an unwanted trade name that sounds like a skin disease." ECMAScript has been formalized through operational semantics by work at Stanford University and the Department of Computing, Imperial College London for security analysis and standardization.
"ECMA" stood for "European Computer Manufacturers Association" until 1994.
Version history
Features
The ECMAScript language includes structured, dynamic, functional, and prototype-based features.
Imperative and structured
ECMAScript supports C-style structured programming. Previously, JavaScript only supported function scoping using the keyword var, but ECMAScript 2015 added the keywords let and const, allowing JavaScript to support both block scoping and function scoping. JavaScript supports automatic semicolon insertion, meaning that semicolons that are normally used to terminate a statement in C may be omitted in JavaScript.
Like C-style languages, control flow is done with the while, for, do / while, if / else, and switch statements. Functions are weakly typed and may accept and return any type. Arguments not provided default to undefined.
Weakly typed
ECMAScript is weakly typed. This means that certain types are assigned implicitly based on the operation being performed. However, there are several quirks in JavaScript's implementation of the conversion of a variable from one type to another. These quirks have been the subject of a talk entitled Wat.
Dynamic
ECMAScript is dynamically typed. Thus, a type is associated with a value rather than an expression. ECMAScript supports various ways to test the type of objects, including duck typing.
Transpiling
Since ES 2015, transpiling JavaScript has become very common. Transpilation is a source-to-source compilation in which newer versions of JavaScript are used, and a transpiler rewrites the source code so that it is supported by older browsers. Usually, transpilers transpile down to ES3 to maintain compatibility with all versions of browsers. The settings to transpile to a specific version can be configured according to need. Transpiling adds an extra step to the build process and is sometimes done to avoid needing polyfills. Polyfills create new features for older environments that lack them. Polyfills do this at runtime in the interpreter, such as the user's browser or on the server. Instead, transpiling rewrites the ECMA code itself during the build phase of development before it reaches the interpreter.
Conformance
In 2010, Ecma International started developing a conformance test suite for ECMA-262 (ECMAScript).
Test262 is an ECMAScript conformance test suite that can be used to check how closely a JavaScript implementation follows the ECMAScript Specification. The test suite contains thousands of individual tests, each of which tests some specific requirement(s) of the ECMAScript specification. The development of Test262 is a project of the Ecma Technical Committee 39 (TC39). The testing framework and individual tests are created by member organizations of TC39 and contributed to Ecma for use in Test262.
Important contributions were made by Google (Sputnik testsuite) and Microsoft who both contributed thousands of tests.
The Test262 testsuite consisted of 38014 tests as of January 2020. ECMAScript specifications through ES7 are well-supported in major web browsers. The table below shows the conformance rate for current versions of software with respect to the most recent editions of ECMAScript.
See also
ECMAScript for XML (E4X)
JavaScript
JScript
List of ECMAScript engines
F* (pronounced F star) is a functional programming language inspired by ML and aimed at program verification. Its type system includes dependent types, monadic effects, and refinement types. This allows expressing precise specifications for programs, including functional correctness and security properties. The F* type-checker aims to prove that programs meet their specifications using a combination of SMT solving and manual proofs.
Programs written in F* can be translated to OCaml, F#, and C for execution. Previous versions of F* could also be translated to JavaScript.
It was introduced in 2011 and is under active development on GitHub.
History
Versions
Up until version 2022.03.24 F* was written entirely in a common subset of F* and F# and supported bootstrapping in both OCaml and F#. This was dropped beginning in version 2022.04.02.
Ahman, Danel; Hriţcu, Cătălin; Maillard, Kenji; Martínez, Guido; Plotkin, Gordon; Protzenko, Jonathan; Rastogi, Aseem; Swamy, Nikhil (2017). "Dijkstra Monads for Free". 44th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages.
Swamy, Nikhil; Hriţcu, Cătălin; Keller, Chantal; Rastogi, Aseem; Delignat-Lavaud, Antoine; Forest, Simon; Bhargavan, Karthikeyan; Fournet, Cédric; Strub, Pierre-Yves; Kohlweiss, Markulf; Zinzindohoue, Jean-Karim; Zanella-Béguelin, Santiago (2016). "Dependent Types and Multi-Monadic Effects in F*". 43rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages.
F* Homepage
F* source code on GitHub
F* tutorial
BeanShell is a small, free, embeddable Java source interpreter with object scripting language features, written in Java. It runs in the Java Runtime Environment (JRE), dynamically executes standard Java syntax and extends it with common scripting conveniences such as loose types, commands, and method closures, like those in Perl and JavaScript.
Features
While BeanShell allows its users to define functions that can be called from within a script, its underpinning philosophy has been to not pollute its syntax with too many extensions and "syntactic sugar", thereby ensuring that code written for a Java compiler can usually be executed interpretively by BeanShell without any changes and, to a large extent, vice versa. This makes BeanShell a popular testing and debugging tool for the Java virtual machine (JVM) platform.
BeanShell supports scripted objects as simple method closures like those in Perl and JavaScript.
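For comparison, the closure-as-object pattern referred to above looks roughly like the following JavaScript-flavoured code (a TypeScript sketch of the general idea; BeanShell's own syntax is not reproduced here):

function makeAccount(initialBalance: number) {
  // The returned methods close over `balance`, which acts as private state.
  let balance = initialBalance;
  return {
    deposit(amount: number): void { balance += amount; },
    getBalance(): number { return balance; },
  };
}

const account = makeAccount(100);
account.deposit(25);
console.log(account.getBalance()); // prints 125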
BeanShell is an open source project and has been incorporated into many applications, such as Apache OpenOffice, Apache Ant, the WebLogic Server application server, Apache JMeter, jEdit, ImageJ, JUMP GIS, Apache Taverna, and many others. BeanShell provides an easy-to-integrate application programming interface (API). It can also be run in command-line mode or within its own graphical environment.
History
The first versions of BeanShell (0.96, 1.0) were released by Patrick Niemeyer in 1999, followed by a series of versions. BeanShell 1.3.0 was released in August 2003. Version 2.0b1 was released in September 2003, culminating with version 2.0b4 in May 2005, which as of January 2015 is the newest release posted on the official webpage. BeanShell has been included in the Linux distribution Debian since 1999. BeanShell was undergoing standardization through the Java Community Process (JCP) under JSR 274. Following the JCP approval of the BeanShell JSR Review Ballot in June 2005, no visible activity was taking place around BeanShell. The JSR 274 status is "Dormant".
Since Java 9, Java instead includes JShell, a different read–eval–print loop (REPL) shell based on Java syntax, indicating that BeanShell will not be continued. A fork of BeanShell, BeanShell2, was created in May 2007 on the now-defunct Google Code Web site. The beanshell2 project has made a number of fixes and enhancements to BeanShell and has produced multiple releases. As of January 2020, the latest version of BeanShell2 is v2.1.9, released March 2018. This fork was merged back into the original tree in 2018, retaining all the independent changes from both, and the official project has been hosted at GitHub. In December 2012, following a proposal to accept BeanShell as an Apache Incubator project, BeanShell was licensed to The Apache Software Foundation and migrated to Apache Extras, changing the license to Apache License 2.0. The project was not accepted but instead projected to become part of the Apache Commons at a future time.
Due to changes in the developers' personal circumstances, the BeanShell community did not, however, complete the move to Apache, but remained at Apache Extras. The project has since released BeanShell 2.0b5, which is used by Apache OpenOffice and Apache Taverna.
A Windows automated installer, BeanShell Double-Click, was created in 2013. It includes desktop integration features.
See also
List of JVM languages
Comparison of programming languages
Comparison of command shells
BeanShell at Apache Extras |
BlooP and FlooP (Bounded loop and Free loop) are simple programming languages designed by Douglas Hofstadter to illustrate a point in his book Gödel, Escher, Bach. BlooP is a non-Turing-complete programming language whose main control flow structure is a bounded loop (i.e. recursion is not permitted). All programs in the language must terminate, and this language can only express primitive recursive functions. FlooP is identical to BlooP except that it supports unbounded loops; it is a Turing-complete language and can express all computable functions. For example, it can express the Ackermann function, which (not being primitive recursive) cannot be written in BlooP. Borrowing from standard terminology in mathematical logic, Hofstadter calls FlooP's unbounded loops MU-loops. Like all Turing-complete programming languages, FlooP suffers from the halting problem: programs might not terminate, and it is not possible, in general, to decide which programs do.
BlooP and FlooP can be regarded as models of computation, and have sometimes been used in teaching computability.
BlooP examples
The only variables are OUTPUT (the return value of the procedure) and CELL(i) (an unbounded sequence of natural-number variables, indexed by constants, as in the Unlimited Register Machine). The only operators are ⇐ (assignment), + (addition), × (multiplication), < (less-than), > (greater-than) and = (equals).
Each program uses only a finite number of cells, but the numbers in the cells can be arbitrarily large. Data structures such as lists or stacks can be handled by interpreting the number in a cell in specific ways, that is, by Gödel numbering the possible structures.
Control flow constructs include bounded loops, conditional statements, ABORT jumps out of loops, and QUIT jumps out of blocks. BlooP does not permit recursion, unrestricted jumps, or anything else that would have the same effect as the unbounded loops of FlooP. Named procedures can be defined, but these can call only previously defined procedures.
Factorial function
DEFINE PROCEDURE FACTORIAL [N]:
BLOCK 0: BEGIN
OUTPUT ⇐ 1;
CELL(0) ⇐ 1;
LOOP AT MOST N TIMES:
BLOCK 1: BEGIN
OUTPUT ⇐ OUTPUT × CELL(0);
CELL(0) ⇐ CELL(0) + 1;
BLOCK 1: END;
BLOCK 0: END.
Subtraction function
This is not a built-in operation and (being defined on natural numbers) never gives a negative result (e.g. 2 − 3 := 0). Note that OUTPUT starts at 0, like all the CELLs, and therefore requires no initialization.
DEFINE PROCEDURE MINUS [M,N]:
BLOCK 0: BEGIN
IF M < N, THEN:
QUIT BLOCK 0;
LOOP AT MOST M + 1 TIMES:
BLOCK 1: BEGIN
IF OUTPUT + N = M, THEN:
ABORT LOOP 1;
OUTPUT ⇐ OUTPUT + 1;
BLOCK 1: END;
BLOCK 0: END.
FlooP example
The example below, which implements the Ackermann function, relies on simulating a stack using Gödel numbering: that is, on previously defined numerical functions PUSH, POP, and TOP satisfying PUSH [N, S] > 0, TOP [PUSH [N, S]] = N, and POP [PUSH [N, S]] = S. Since an unbounded MU-LOOP is used, this is not a legal BlooP program. The QUIT BLOCK instructions in this case jump to the end of the block and repeat the loop, unlike the ABORT, which exits the loop.
DEFINE PROCEDURE ACKERMANN [M, N]:
BLOCK 0: BEGIN
CELL(0) ⇐ M;
OUTPUT ⇐ N;
CELL(1) ⇐ 0;
MU-LOOP:
BLOCK 1: BEGIN
IF CELL(0) = 0, THEN:
BLOCK 2: BEGIN
OUTPUT ⇐ OUTPUT + 1;
IF CELL(1) = 0, THEN: ABORT LOOP 1;
CELL(0) ⇐ TOP [CELL(1)];
CELL(1) ⇐ POP [CELL(1)];
QUIT BLOCK 1;
BLOCK 2: END
IF OUTPUT = 0, THEN:
BLOCK 3: BEGIN
OUTPUT ⇐ 1;
CELL(0) ⇐ MINUS [CELL(0), 1];
QUIT BLOCK 1;
BLOCK 3: END
OUTPUT ⇐ MINUS [OUTPUT, 1];
CELL(1) ⇐ PUSH [MINUS [CELL(0), 1], CELL(1)];
BLOCK 1: END;
BLOCK 0: END.
See also
Machine that always halts
Dictionary of Programming Languages - BLooP
Dictionary of Programming Languages - FLooP
The Retrocomputing Museum
Portland Pattern Repository: Bloop Floop and Gloop
A compiler for BlooP and FlooP |
Flapjax is a programming language built on JavaScript. It provides a spreadsheet-like, dataflow style of reactive programming, termed functional reactive programming, which makes it easier to create reactive web pages without the burden of callbacks and potentially inconsistent mutation. Flapjax can be viewed in two ways: either as a library, for use in regular JavaScript programs, or as a new language that the compiler converts into generic JavaScript. In either case, the resulting programs can be run in a regular web browser. Flapjax comes with persistent storage and a simple application programming interface (API) that masks the complexity of using Ajax, as well as sharing and access control (AC) for server data. It is free and open-source software released under a 3-clause BSD license.
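The reactive style can be sketched in a few lines. The following is a minimal TypeScript illustration of the behavior concept (a time-varying value that pushes updates to derived values), written generically rather than with the actual Flapjax API, whose function names are not reproduced here:

// A tiny stand-in for a Flapjax-style "behavior": a time-varying value
// that notifies subscribers whenever it changes.
type Listener<T> = (value: T) => void;

class Behavior<T> {
  private listeners: Listener<T>[] = [];
  constructor(private current: T) {}
  get(): T { return this.current; }
  set(value: T): void {
    this.current = value;
    this.listeners.forEach((listener) => listener(value));
  }
  subscribe(listener: Listener<T>): void {
    listener(this.current);      // push the current value immediately
    this.listeners.push(listener);
  }
  // Derive a new behavior whose value always tracks f(this behavior's value).
  map<U>(f: (value: T) => U): Behavior<U> {
    const derived = new Behavior(f(this.current));
    this.subscribe((value) => derived.set(f(value)));
    return derived;
  }
}

const celsius = new Behavior(20);
const fahrenheit = celsius.map((c) => c * 9 / 5 + 32);
fahrenheit.subscribe((degF) => console.log("fahrenheit is now", degF));
celsius.set(25); // logs: fahrenheit is now 77

In Flapjax itself, behaviors and event streams built from user input, timers, and Ajax responses are composed in this same dataflow style, so derived values stay consistent without hand-written callback plumbing.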
The Flapjax compiler is written in the language Haskell.
Further reading
Leo Meyerovich, Arjun Guha, Jacob Baskin, Greg Cooper, Michael Greenberg, Aleks Bromfield, Shriram Krishnamurthi. "Flapjax: A Programming Language for Ajax Applications". OOPSLA 2009.
Leo Meyerovich, Arjun Guha, Jacob Baskin, Greg Cooper, Michael Greenberg, Aleks Bromfield, Shriram Krishnamurthi. "Flapjax: A Programming Language for Ajax Applications". Brown University Tech Report CS-09-04.
Arjun Guha, Shriram Krishnamurthi, Trevor Jim. "Using Static Analysis for Ajax Intrusion Detection". WWW 2009.
Arjun Guha, Jacob Matthews, Robert Bruce Findler, Shriram Krishnamurthi. "Relationally-Parametric Polymorphic Contracts". DLS 2007.
Official website
Flapjax on GitHub |
Forth or FORTH may refer to:
Arts and entertainment
forth magazine, an Internet magazine
Forth (album), by The Verve, 2008
Forth, a 2011 album by Proto-Kaw
Radio Forth, a group of independent local radio stations in Scotland
People
Eric Forth (1944–2006), British politician
Frederick Forth (1808–1876), British colonial administrator
Hugh Forth (1610–1676), English politician
Jane Forth (born 1953), American actress and model
John Forth (c. 1769 – 1848), British jockey and racehorse trainer
Lisette Denison Forth (c. 1786 – 1866), American slave who became a landowner and philanthropist
Tasman Forth, pen name of Alexander Rud Mills (1885–1964), Australian Odinist
Places
Forth, Tasmania, Australia
Forth, Eckental, Germany
Forth, South Lanarkshire, Scotland
River Forth, in Scotland
River Forth (Tasmania), Australia
Forth (County Carlow barony), Ireland
Forth (County Wexford barony), Ireland
Forth (Edinburgh ward), Scotland
Ships
HMS Forth, the name of several ships of the Royal Navy
Forth (1814 ship), a sailing ship built at Calcutta, British India
Forth (1826 ship), a sailing ship built at Leith, Scotland
Other uses
Forth (programming language)
Foundation for Research & Technology – Hellas, a research centre in Greece
See also
All pages with titles beginning with Forth
All pages with titles containing Forth
Fort (disambiguation)
Fourth (disambiguation)
Sally Forth (disambiguation)
Firth of Forth, an estuary in Scotland
Islands of the Forth |
Visual FoxPro is a Microsoft data-centric procedural programming language with object-oriented programming (OOP) features.
It was derived from FoxPro (originally known as FoxBASE), which was developed by Fox Software beginning in 1984. Fox Software merged with Microsoft in 1992, after which the software acquired further features and the prefix "Visual". FoxPro 2.6 worked on Mac OS, DOS, Windows, and Unix.
Visual FoxPro 3.0, the first "Visual" version, reduced platform support to only Mac and Windows, and later versions 5, 6, 7, 8 and 9 were Windows-only. The current version of Visual FoxPro is COM-based and Microsoft has stated that they do not intend to create a Microsoft .NET version.
Version 9.0, released in December 2004 and updated in October 2007 with the SP2 patch, was the final version of the product. Mainstream support ended in January 2010 and extended support ended in January 2015.
History
Visual FoxPro originated as a member of the class of languages commonly referred to as "xBase" languages, which have syntax based on the dBase programming language. Other members of the xBase language family include Clipper and Recital.
Visual FoxPro, commonly abbreviated as VFP, is tightly integrated with its own relational database engine, which extends FoxPro's xBase capabilities to support SQL query and data manipulation. Unlike most database management systems, Visual FoxPro is a full-featured, dynamic programming language that does not require the use of an additional general-purpose programming environment. It can be used to write not just traditional "fat client" applications, but also middleware and web applications.
In late 2002, it was demonstrated that Visual FoxPro can run on Linux under the Wine Windows compatibility suite. In 2003, this led to complaints by Microsoft: it was claimed that the deployment of runtime FoxPro code on non-Windows machines violates the End User License Agreement. Visual FoxPro had a rapid rise and fall in popularity as measured by the TIOBE Programming Community Index. In December 2005, VFP broke into the top 20 for the first time. In June 2006 it peaked at position 12, making it (at the time) a "B" language. As of January 2023, Visual FoxPro holds position 21 on the TIOBE index. In March 2007, Microsoft announced that there would be no VFP 10, thus making VFP9 (released to manufacturing on December 17, 2004) the last commercial VFP release from Microsoft. Service Pack 2 for Microsoft Visual FoxPro 9.0 was released on October 16, 2007. Support for Version 9 ended on January 13, 2015. At the time of the end-of-life announcement, work had already begun on the next release, codenamed Sedna (named after a recently discovered dwarf planet) and built on top of the VFP9 codebase. "Sedna" is a set of add-ons to VFP 9.0 of xBase components to support a number of interoperability scenarios with various Microsoft technologies including SQL Server 2005, .NET Framework, Windows Vista, Office 2007, Windows Search and Team Foundation Server (TFS). Microsoft released Sedna under the Shared Source license on the CodePlex site. Microsoft has clarified that the VFP core will still remain closed source. Sedna was released on January 25, 2008. As of March 2008, all xBase components of the VFP 9 SP2 (including Sedna) were available for community development on CodePlex.
In late March 2007, a grassroots campaign was started by the Spanish-speaking FoxPro community at MásFoxPro ("MoreFoxPro" in English) to sign a petition to Microsoft to continue updating Visual FoxPro or release it to the community as open-source. On April 3, 2007, the movement was noted by the technical press. On April 3, 2007, Microsoft responded to the petition with this statement from Alan Griver:
"We're very aware of the FoxPro community and that played a large part in what we announced on March 13th. It's never an easy decision to announce that we're not going to release another version of a product and it's one that we consider very carefully.
"We're not announcing the end of FoxPro: Obviously, FoxPro applications will continue to work. By some of our internal estimates, there are more applications running in FoxPro 2.6 than there are in VFP and FoxPro 2.6 hasn't been supported in many years. Visual FoxPro 9 will be supported by Microsoft through 2015.
"For Microsoft to continue to evolve the FoxPro base, we would need to look at creating a 64-bit development environment and that would involve an almost complete rewrite of the core product. We've also invested in creating a scalable database with SQL Server, including the freely available SQL Server Express Edition. As far as forming a partnership with a third-party is concerned, we've heard from a number of large FoxPro customers that this would make it impossible for them to continue to use FoxPro since it would no longer be from an approved vendor. We felt that putting the environment into open source on CodePlex, which balances the needs of both the community and the large customers, was the best path forward."
Version timeline
All versions listed are for Windows.
Code samples
The FoxPro language contains commands quite similar to those of other programming languages such as BASIC.
Some basic syntax samples:
Hello World examples:
Object
VFP has an extensive library of predefined classes and visual objects, which are accessed in the IDE through a Property Sheet (including methods), so code such as the above that defines classes and objects is only needed for special purposes and for the framework of large systems.
Data handling
The language also has extensive database manipulation and indexing commands.
The "help" index of commands in VFP 9 has several hundred commands and functions described.
The examples below show how to code the creation and indexing of tables; however, VFP also has table and database builder screens that create the tables and indexes without requiring any code to be written.
ODBC access using SQL passthrough
See also
Visual Objects
Xbase++
Harbour
XSharp
Microsoft pages
Main Visual FoxPro Microsoft page
MSDN FoxPro support board
VFP's online help
Other pages
A site devoted to the history of FoxPro
VFPx A Visual FoxPro Community effort to create open source add-ons for VFP 9.0
Fox In Cloud online clone homepage |
FoxPro was a text-based, procedurally oriented programming language and database management system (DBMS) with object-oriented programming features, originally published by Fox Software and later by Microsoft, for MS-DOS, Windows, Macintosh, and UNIX. The final published release of FoxPro was 2.6. Development continued under the Visual FoxPro label, which in turn was discontinued in 2007.
FoxPro was derived from FoxBASE (Fox Software, Perrysburg, Ohio), which was in turn derived from dBase III (Ashton-Tate) and dBase II. dBase II was the first commercial version of a database program written by Wayne Ratliff, originally called Vulcan; like Vulcan, dBase II ran on CP/M. FoxPro was both a DBMS and a relational database management system (RDBMS), since it extensively supported multiple relationships between multiple DBF files (tables). However, it lacked transactional processing.
FoxPro was sold and supported by Microsoft after they acquired Fox Software in its entirety in 1992. At that time there was an active worldwide community of FoxPro users and programmers. FoxPro 2.6 for UNIX (FPU26) has even been successfully installed on Linux and FreeBSD using the Intel Binary Compatibility Standard (ibcs2) support library.
Version information
Operating system compatibility
Technical aspects
FoxPro 2 included the "Rushmore" optimizing engine, which used indices to accelerate data retrieval and updating. Rushmore technology examined every data-related statement and looked for filter expressions. If one was used, it looked for an index matching the same expression.
FoxPro 2 was originally built on Watcom C/C++, which used the DOS/4GW memory extender to access expanded and extended memory. It could also use almost all available RAM even if no HIMEM.SYS was loaded.
Version timeline
History of FoxPro - Timeline
A site devoted to the history of FoxPro |
In computer programming, Franz Lisp is a discontinued Lisp programming language system written at the University of California, Berkeley (UC Berkeley, UCB) by Professor Richard Fateman and several students, based largely on Maclisp and distributed with the Berkeley Software Distribution (BSD) for the Digital Equipment Corporation (DEC) VAX minicomputer. Piggybacking on the popularity of the BSD package, Franz Lisp was probably the most widely distributed and used Lisp system of the 1970s and 1980s. The name is a pun on the composer and pianist Franz Liszt.
It was written specifically to be a host for running the Macsyma computer algebra system on VAX. The project began at the end of 1978, soon after UC Berkeley took delivery of their first VAX 11/780 (named Ernie CoVax, after Ernie Kovacs, the first of many systems with pun names at UCB). Franz Lisp was available free of charge to educational sites, and was also distributed on Eunice, a Berkeley Unix emulator that ran on VAX VMS.
History
At the time of Franz Lisp's creation, the Macsyma computer algebra system ran mainly on a DEC PDP-10. This computer's limited address space caused difficulties. Attempted remedies included ports of Maclisp to Multics or Lisp machines, but even if successful, these would only be solutions for the Massachusetts Institute of Technology (MIT) as these machines were costly and uncommon. Franz Lisp was the first example of a framework where large Lisp programs could be run outside the Lisp machines environment; Macsyma was then considered a very large program. After being ported to Franz Lisp, Macsyma was distributed to about 50 sites under a license restricted by MIT's interest in making Macsyma proprietary. The VAX Macsyma that ran on Franz Lisp was called Vaxima. When Symbolics Inc., bought the commercial rights to Macsyma from MIT to sell along with its Lisp machines, it eventually was compelled to sell Macsyma also on DEC VAX and Sun Microsystems computers, paying royalties to the University of California for the use of Franz Lisp.
Other Lisp implementations for the VAX were MIT's NIL (never fully functional), University of Utah's Portable Standard Lisp, DEC's VAX Lisp, Xerox's Interlisp-VAX, and Le Lisp.
In 1982, the port of Franz Lisp to the Motorola 68000 processor was begun. In particular, it was ported to a prototype Sun-1 made by Sun Microsystems, which ran a variant of Berkeley Software Distribution (BSD) Unix called SunOS. In 1986, at Purdue University, Franz Lisp was ported to the CCI Power 6/32 platform, code named Tahoe.
The major contributors to Franz Lisp at UC Berkeley were John K. Foderaro, Keith Sklower, and Kevin Layer.
A company called Franz Inc. was formed to provide support for Franz Lisp by founders Richard Fateman, John Foderaro, Fritz Kunze, Kevin Layer, and Keith Sklower, all associated with UC Berkeley. After that, development and research on Franz Lisp continued for a few years, but the acceptance of Common Lisp greatly reduced the need for Franz Lisp. The first product of Franz Inc. was Franz Lisp running on various Motorola 68000-based workstations. A port of Franz Lisp was even done to VAX VMS for Lawrence Berkeley National Laboratory. However, almost immediately Franz Inc. began work on their implementation of Common Lisp, Allegro Common Lisp.
Features
The Franz Lisp interpreter was written in C and Franz Lisp. It was bootstrapped solely using the C compiler. The Franz Lisp compiler, written entirely in Franz Lisp, was called Liszt, completing the pun on the name of the composer Franz Liszt.
Some notable features of Franz Lisp were arrays in Lisp interchangeable with arrays in Fortran and a foreign function interface (FFI) which allowed interoperation with other languages at the binary level. Many of the implementation methods were borrowed from Maclisp: bibop memory organization (BIg Bag Of Pages), small integers represented uniquely by pointers to fixed values in fields, and fast arithmetic.
Important applications
Franz Lisp was used as the example language in Robert Wilensky's first edition of Lispcraft
An implementation of OPS5 by DEC on Franz Lisp was used as the basis for a rule-based system for configuring VAX-11 computer system orders and was important to DEC's sales of these computers
Slang: a circuit simulator used to design and test the reduced instruction set computer RISC-I microprocessor
As a derivative: Cadence Design Systems Skill programming language
See also
PC-LISP is an implementation of Franz Lisp for the operating system DOS which still runs on emulators and Microsoft Windows today.
Franz Lisp Opus 38.92 for VAX source code
other Franz Lisp resources
Franz Lisp at History of LISP |
A rune is a letter in a set of related alphabets known as runic alphabets native to the Germanic peoples. Runes were used to write Germanic languages (with some exceptions) before they adopted the Latin alphabet, and for specialised purposes thereafter. In addition to representing a sound value (a phoneme), runes can be used to represent the concepts after which they are named (ideographs). Scholars refer to instances of the latter as Begriffsrunen ('concept runes'). The Scandinavian variants are also known as futhark or fuþark (derived from the first six letters of the script: F, U, Þ, A, R, and K); the Anglo-Saxon variant is futhorc or fuþorc (due to sound-changes undergone in Old English by the names of those six letters).
Runology is the academic study of the runic alphabets, runic inscriptions, runestones, and their history. Runology forms a specialised branch of Germanic philology.
The earliest secure runic inscriptions date from around AD 150, with a potentially earlier inscription dating to AD 50 and Tacitus's potential description of rune use from around AD 98. The Svingerud Runestone dates from between AD 1 and 250. Runes were generally replaced by the Latin alphabet as the cultures that had used runes underwent Christianisation, by approximately AD 700 in central Europe and 1100 in northern Europe. However, the use of runes persisted for specialized purposes beyond this period. Up until the early 20th century, runes were still used in rural Sweden for decorative purposes in Dalarna and on runic calendars.
The three best-known runic alphabets are the Elder Futhark (c. AD 150–800), the Anglo-Saxon Futhorc (400–1100), and the Younger Futhark (800–1100). The Younger Futhark is divided further into the long-branch runes (also called Danish, although they were also used in Norway, Sweden, and Frisia); short-branch or Rök runes (also called Swedish-Norwegian, although they were also used in Denmark); and the stavlösa or Hälsinge runes (staveless runes). The Younger Futhark developed further into the medieval runes (1100–1500), and the Dalecarlian runes (c. 1500–1800).
The exact development of the early runic alphabet remains unclear, but the script ultimately stems from the Phoenician alphabet. Candidates for the immediate source of the early runes include the Raetic, Venetic, Etruscan, and Old Latin alphabets. At the time, all of these scripts had the same angular letter shapes suited for epigraphy, which would become characteristic of the runes and related scripts in the region.
The process of transmission of the script is unknown. The oldest clear inscriptions are found in Denmark and northern Germany. A "West Germanic hypothesis" suggests transmission via Elbe Germanic groups, while a "Gothic hypothesis" presumes transmission via East Germanic expansion. Runes continue to be used in a wide variety of ways in modern popular culture.
Name
Etymology
The name stems from a Proto-Germanic form reconstructed as *rūnō, which may be translated as 'secret, mystery; secret conversation; rune'. It is the source of Gothic rūna (𐍂𐌿𐌽𐌰, 'secret, mystery, counsel'), Old English rún ('whisper, mystery, secret, rune'), Old Saxon rūna ('secret counsel, confidential talk'), Middle Dutch rūne ('id'), Old High German rūna ('secret, mystery'), and Old Norse rún ('secret, mystery, rune'). The earliest Germanic epigraphic attestation is the Primitive Norse rūnō (accusative singular), found on the Einang stone (AD 350–400) and the Noleby stone (AD 450). The term is related to Proto-Celtic *rūna ('secret, magic'), which is attested in Old Irish rún ('mystery, secret'), Middle Welsh rin ('mystery, charm'), Middle Breton rin ('secret wisdom'), and possibly in the ancient Gaulish Cobrunus (< *com-rūnos 'confident'; cf. Middle Welsh cyfrin, Middle Breton queffrin, Middle Irish comrún 'shared secret, confidence') and Sacruna (< *sacro-runa 'sacred secret'), as well as in Lepontic Runatis (< *runo-ātis 'belonging to the secret'). However, it is difficult to tell whether they are cognates (linguistic siblings from a common origin), or if the Proto-Germanic form reflects an early borrowing from Celtic. Various connections have been proposed with other Indo-European terms (for example: Sanskrit ráuti रौति 'roar', Latin rūmor 'noise, rumor'; Ancient Greek eréō ἐρέω 'ask' and ereunáō ἐρευνάω 'investigate'), although linguist Ranko Matasović finds them difficult to justify for semantic or linguistic reasons. Because of this, some scholars have speculated that the Germanic and Celtic words may have been a shared religious term borrowed from an unknown non-Indo-European language.
Related terms
In early Germanic, a rune could also be referred to as *rūna-stabaz, a compound of *rūnō and *stabaz ('staff; letter'). It is attested in Old Norse rúna-stafr, Old English rún-stæf, and Old High German rūn-stab. Other Germanic terms derived from *rūnō include *runōn ('counsellor'), *rūnjan and *ga-rūnjan ('secret, mystery'), *raunō ('trial, inquiry, experiment'), *hugi-rūnō ('secret of the mind, magical rune'), and *halja-rūnō ('witch, sorceress'; literally '[possessor of the] Hel-secret'). It is also often part of personal names, including Gothic Runilo (𐍂𐌿𐌽𐌹𐌻𐍉), Frankish Rúnfrid, Old Norse Alfrún, Dagrún, Guðrún, Sigrún, Ǫlrún, Old English Ælfrún, and Lombardic Goderūna. The Finnish word runo, meaning 'poem', is an early borrowing from Proto-Germanic, and the source of the term for rune, riimukirjain, meaning 'scratched letter'. The root may also be found in the Baltic languages, where Lithuanian runoti means both 'to cut (with a knife)' and 'to speak'. The Old English form rún survived into the early modern period as roun, which is now obsolete. The modern English rune is a later formation that is partly derived from Late Latin runa, Old Norse rún, and Danish rune.
History and use
The runes were in use among the Germanic peoples from the 1st or 2nd century AD. This period corresponds to the late Common Germanic stage linguistically, with a continuum of dialects not yet clearly separated into the three branches of later centuries: North Germanic, West Germanic, and East Germanic.
No distinction is made in surviving runic inscriptions between long and short vowels, although such a distinction was certainly present phonologically in the spoken languages of the time. Similarly, there are no signs for labiovelars in the Elder Futhark (such signs were introduced in both the Anglo-Saxon futhorc and the Gothic alphabet as variants of p; see peorð.)
Origins
The formation of the Elder Futhark was complete by the early 5th century, with the Kylver Stone being the first evidence of the futhark ordering as well as of the p rune.
Specifically, the Rhaetic alphabet of Bolzano is often advanced as a candidate for the origin of the runes, with only five Elder Futhark runes (ᛖ e, ᛇ ï, ᛃ j, ᛜ ŋ, ᛈ p) having no counterpart in the Bolzano alphabet. Scandinavian scholars tend to favor derivation from the Latin alphabet itself over Rhaetic candidates. A "North Etruscan" thesis is supported by the inscription on the Negau helmet dating to the 2nd century BC. This is in a northern Etruscan alphabet but features a Germanic name, Harigast. Giuliano and Larissa Bonfante suggest that runes derived from some North Italic alphabet, specifically Venetic: since the Romans conquered Veneto after 200 BC, after which the Latin alphabet became prominent and Venetic culture diminished in importance, Germanic people could have adopted the Venetic alphabet by the 3rd century BC or even earlier. The angular shapes of the runes are shared with most contemporary alphabets of the period that were used for carving in wood or stone. There are no horizontal strokes: when carving a message on a flat staff or stick, it would be along the grain, thus both less legible and more likely to split the wood. This characteristic is also shared by other alphabets, such as the early form of the Latin alphabet used for the Duenos inscription, but it is not universal, especially among early runic inscriptions, which frequently have variant rune shapes, including horizontal strokes. Runic manuscripts (that is, written rather than carved runes, such as Codex Runicus) also show horizontal strokes.
The "West Germanic hypothesis" speculates on an introduction by West Germanic tribes. This hypothesis is based on claiming that the earliest inscriptions of the 2nd and 3rd centuries, found in bogs and graves around Jutland (the Vimose inscriptions), exhibit word endings that, being interpreted by Scandinavian scholars to be Proto-Norse, are considered unresolved and long having been the subject of discussion.
In the early Runic period, differences between Germanic languages are generally presumed to be small. Another theory presumes a Northwest Germanic unity preceding the emergence of Proto-Norse proper from roughly the 5th century. An alternative suggestion explaining the impossibility of classifying the earliest inscriptions as either North or West Germanic is forwarded by È. A. Makaev, who presumes a "special runic koine", an early "literary Germanic" employed by the entire Late Common Germanic linguistic community after the separation of Gothic (2nd to 5th centuries), while the spoken dialects may already have been more diverse.
The Meldorf fibula and Tacitus's Germania
With the potential exception of the Meldorf fibula, a possible runic inscription found in Schleswig-Holstein dating to around AD 50, the earliest reference to runes (and runic divination) may occur in Roman Senator Tacitus's ethnographic Germania. In this work, dating from around AD 98, Tacitus describes the Germanic peoples as utilizing a divination practice involving rune-like inscriptions:
For divination and casting lots they have the highest possible regard. Their procedure for casting lots is uniform: They break off the branch of a fruit tree and slice into strips; they mark these by certain signs and throw them, as random chance will have it, on to a white cloth. Then a state priest, if the consultation is a public one, or the father of the family, if it is private, prays to the gods and, gazing to the heavens, picks up three separate strips and reads their meaning from the marks scored on them. If the lots forbid an enterprise, there can be no further consultation about it that day; if they allow it, further confirmation by divination is required.
As Victoria Symons summarizes, "If the inscriptions made on the lots that Tacitus refers to are understood to be letters, rather than other kinds of notations or symbols, then they would necessarily have been runes, since no other writing system was available to Germanic tribes at this time."
Early inscriptions
Runic inscriptions from the 400-year period 150–550 AD are described as "Period I". These inscriptions are generally in Elder Futhark, but the set of letter shapes and bindrunes employed is far from standardized. Notably the j, s, and ŋ runes undergo considerable modifications, while others, such as p and ï, remain unattested altogether prior to the first full futhark row on the Kylver Stone (c. 400 AD).
Artifacts such as spear heads or shield mounts have been found that bear runic marking that may be dated to 200 AD, as evidenced by artifacts found across northern Europe in Schleswig (North Germany), Funen, Zealand, Jutland (Denmark), and Scania (Sweden). Earlier—but less reliable—artifacts have been found in Meldorf, Süderdithmarschen, in northern Germany; these include brooches and combs found in graves, most notably the Meldorf fibula, and are supposed to have the earliest markings resembling runic inscriptions.
Magical or divinatory use
Stanza 157 of Hávamál attributes to runes the power to bring that which is dead back to life. In this stanza, Odin recounts a spell:
The earliest runic inscriptions found on artifacts give the name of either the craftsman or the proprietor, or sometimes, remain a linguistic mystery. Due to this, it is possible that the early runes were not used so much as a simple writing system, but rather as magical signs to be used for charms. Although some say the runes were used for divination, there is no direct evidence to suggest they were ever used in this way. The name rune itself, taken to mean "secret, something hidden", seems to indicate that knowledge of the runes was originally considered esoteric, or restricted to an elite. The 6th-century Björketorp Runestone warns in Proto-Norse using the word rune in both senses:
Haidzruno runu, falahak haidera, ginnarunaz. Arageu haeramalausz uti az. Weladaude, sa'z þat barutz. Uþarba spa.
I, master of the runes(?) conceal here runes of power. Incessantly (plagued by) maleficence, (doomed to) insidious death (is) he who breaks this (monument). I prophesy destruction / prophecy of destruction.
The same curse and use of the word, rune, is also found on the Stentoften Runestone. There also are some inscriptions suggesting a medieval belief in the magical significance of runes, such as the Franks Casket (AD 700) panel.
Charm words, such as auja, laþu, laukaʀ, and most commonly, alu, appear on a number of Migration period Elder Futhark inscriptions as well as variants and abbreviations of them. Much speculation and study has been produced on the potential meaning of these inscriptions. Rhyming groups appear on some early bracteates that also may be magical in purpose, such as salusalu and luwatuwa. Further, the Gummarp Runestone (500–700 AD) bears a cryptic inscription describing the use of three runic letters followed by the Elder Futhark f-rune written three times in succession. Nevertheless, it has proven difficult to find unambiguous traces of runic "oracles": although Norse literature is full of references to runes, it nowhere contains specific instructions on divination. There are at least three sources on divination with rather vague descriptions that may, or may not, refer to runes: Tacitus's 1st-century Germania, Snorri Sturluson's 13th-century Ynglinga saga, and Rimbert's 9th-century Vita Ansgari.
The first source, Tacitus's Germania, describes "signs" chosen in groups of three and cut from "a nut-bearing tree", although the runes do not seem to have been in use at the time of Tacitus' writings. A second source is the Ynglinga saga, where Granmar, the king of Södermanland, goes to Uppsala for the blót. There, the "chips" fell in a way that said that he would not live long (Féll honum þá svo spánn sem hann mundi eigi lengi lifa). These "chips", however, are easily explainable as a blótspánn (sacrificial chip), which was "marked, possibly with sacrificial blood, shaken, and thrown down like dice, and their positive or negative significance then decided." The third source is Rimbert's Vita Ansgari, where there are three accounts of what some believe to be the use of runes for divination, but Rimbert calls it "drawing lots". One of these accounts is the description of how a renegade Swedish king, Anund Uppsale, first brings a Danish fleet to Birka, but then changes his mind and asks the Danes to "draw lots". According to the story, this "drawing of lots" was quite informative, telling them that attacking Birka would bring bad luck and that they should attack a Slavic town instead. The tool in the "drawing of lots", however, is easily explainable as a hlautlein (lot-twig), which according to Foote and Wilson would be used in the same manner as a blótspánn.
The lack of extensive knowledge on historical use of the runes has not stopped modern authors from extrapolating entire systems of divination from what few specifics exist, usually loosely based on the reconstructed names of the runes and additional outside influence.
A recent study of runic magic suggests that runes were used to create magical objects such as amulets, but not in a way that would indicate that runic writing was any more inherently magical, than were other writing systems such as Latin or Greek.
Medieval use
As Proto-Germanic evolved into its later language groups, the words assigned to the runes and the sounds represented by the runes themselves began to diverge somewhat and each culture would create new runes, rename or rearrange its rune names slightly, or stop using obsolete runes completely, to accommodate these changes. Thus, the Anglo-Saxon futhorc has several runes peculiar to itself to represent diphthongs unique to (or at least prevalent in) the Anglo-Saxon dialect.
Some later runic finds are on monuments (runestones), which often contain solemn inscriptions about people who died or performed great deeds. For a long time it was presumed that this kind of grand inscription was the primary use of runes, and that their use was associated with a certain societal class of rune carvers.
In the mid-1950s, however, approximately 670 inscriptions, known as the Bryggen inscriptions, were found in Bergen. These inscriptions were made on wood and bone, often in the shape of sticks of various sizes, and contained inscriptions of an everyday nature—ranging from name tags, prayers (often in Latin), personal messages, business letters, and expressions of affection, to bawdy phrases of a profane and sometimes even of a vulgar nature. Following this find, it is nowadays commonly presumed that, at least in late use, Runic was a widespread and common writing system.
In the later Middle Ages, runes also were used in the clog almanacs (sometimes called Runic staff, Prim, or Scandinavian calendar) of Sweden and Estonia. The authenticity of some monuments bearing runic inscriptions found in North America is disputed; most of them have been dated to modern times.
Runes in Eddic poetry
In Norse mythology, the runic alphabet is said to have a divine origin (Old Norse: reginkunnr). This is attested as early as on the Noleby Runestone from c. 600 AD that reads Runo fahi raginakundo toj[e'k]a..., meaning "I prepare the suitable divine rune..." and in an attestation from the 9th century on the Sparlösa Runestone, which reads Ok rað runaʀ þaʀ rægi[n]kundu, meaning "And interpret the runes of divine origin".
The poem Hávamál explains that the originator of the runes was the major deity, Odin. Stanza 138 describes how Odin received the runes through self-sacrifice:
In stanza 139, Odin continues:
In the Poetic Edda poem Rígsþula another origin is related of how the runic alphabet became known to humans. The poem relates how Ríg, identified as Heimdall in the introduction, sired three sons—Thrall (slave), Churl (freeman), and Jarl (noble)—by human women. These sons became the ancestors of the three classes of humans indicated by their names. When Jarl reached an age when he began to handle weapons and show other signs of nobility, Ríg returned and, having claimed him as a son, taught him the runes. In 1555, the exiled Swedish archbishop Olaus Magnus recorded a tradition that a man named Kettil Runske had stolen three rune staffs from Odin and learned the runes and their magic.
Runic alphabets
Elder Futhark (2nd to 8th centuries)
The Elder Futhark, used for writing Proto-Norse, consists of 24 runes that often are arranged in three groups of eight; each group is referred to as an ætt (Old Norse, meaning 'clan, group'). The earliest known sequential listing of the full set of 24 runes dates to approximately AD 400 and is found on the Kylver Stone in Gotland, Sweden.
Most probably each rune had a name, chosen to represent the sound of the rune itself. The names are, however, not directly attested for the Elder Futhark themselves. Germanic philologists reconstruct names in Proto-Germanic based on the names given for the runes in the later alphabets attested in the rune poems and the linked names of the letters of the Gothic alphabet. For example, the letter /a/ was named from the runic letter called Ansuz. An asterisk before the rune names means that they are unattested reconstructions. The 24 Elder Futhark runes are the following:
Anglo-Saxon runes (5th to 11th centuries)
The futhorc (sometimes written "fuþorc") are an extended alphabet, consisting of 29, and later 33 characters. It was probably used from the 5th century onwards. There are competing theories as to the origins of the Anglo-Saxon Futhorc. One theory proposes that it was developed in Frisia and later spread to England, while another holds that Scandinavians introduced runes to England, where the futhorc was modified and exported to Frisia. Some examples of futhorc inscriptions are found on the Thames scramasax, in the Vienna Codex, in Cotton Otho B.x (Anglo-Saxon rune poem) and on the Ruthwell Cross.
The Anglo-Saxon rune poem gives the following characters and names: ᚠ feoh, ᚢ ur, ᚦ þorn, ᚩ os, ᚱ rad, ᚳ cen, ᚷ gyfu, ᚹ ƿynn, ᚻ hægl, ᚾ nyd, ᛁ is, ᛄ ger, ᛇ eoh, ᛈ peorð, ᛉ eolh, ᛋ sigel, ᛏ tir, ᛒ beorc, ᛖ eh, ᛗ mann, ᛚ lagu, ᛝ ing, ᛟ œthel, ᛞ dæg, ᚪ ac, ᚫ æsc, ᚣ yr, ᛡ ior, ᛠ ear.
Extra runes attested to outside of the rune poem include ᛢ cweorð, ᛣ calc, ᚸ gar, and ᛥ stan. Some of these additional letters have only been found in manuscripts. Feoh, þorn, and sigel stood for [f], [þ], and [s] in most environments, but voiced to [v], [ð], and [z] between vowels or voiced consonants. Gyfu and wynn stood for the letters yogh and wynn, which became [g] and [w] in Middle English.
"Marcomannic runes" (8th to 9th centuries)
A runic alphabet consisting of a mixture of Elder Futhark with Anglo-Saxon futhorc is recorded in a treatise called De Inventione Litterarum, ascribed to Hrabanus Maurus and preserved in 8th- and 9th-century manuscripts mainly from the southern part of the Carolingian Empire (Alemannia, Bavaria). The manuscript text attributes the runes to the Marcomanni, quos nos Nordmannos vocamus, and hence traditionally, the alphabet is called "Marcomannic runes", but it has no connection with the Marcomanni; rather, it is an attempt by Carolingian scholars to represent all letters of the Latin alphabet with runic equivalents.
Wilhelm Grimm discussed these runes in 1821.
Younger Futhark (9th to 11th centuries)
The Younger Futhark, also called Scandinavian Futhark, is a reduced form of the Elder Futhark, consisting of only 16 characters. The reduction correlates with phonetic changes when Proto-Norse evolved into Old Norse. They are found in Scandinavia and Viking Age settlements abroad, probably in use from the 9th century onward. They are divided into long-branch (Danish) and short-twig (Swedish and Norwegian) runes. The difference between the two versions is a matter of controversy. A general opinion is that the difference between them was functional (viz., the long-branch runes were used for documentation on stone, whereas the short-twig runes were in everyday use for private or official messages on wood).
Medieval runes (12th to 15th centuries)
In the Middle Ages, the Younger Futhark in Scandinavia was expanded, so that it once more contained one sign for each phoneme of the Old Norse language. Dotted variants of voiceless signs were introduced to denote the corresponding voiced consonants, or vice versa, voiceless variants of voiced consonants, and several new runes also appeared for vowel sounds. Inscriptions in medieval Scandinavian runes show a large number of variant rune forms, and some letters, such as s, c, and z often were used interchangeably. Medieval runes were in use until the 15th century. Of the total number of Norwegian runic inscriptions preserved today, most are medieval runes. Notably, more than 600 inscriptions using these runes have been discovered in Bergen since the 1950s, mostly on wooden sticks (the so-called Bryggen inscriptions). This indicates that runes were in common use side by side with the Latin alphabet for several centuries. Indeed, some of the medieval runic inscriptions are written in Latin.
Dalecarlian runes (16th to 19th centuries)
According to Carl-Gustav Werner, "In the isolated province of Dalarna in Sweden a mix of runes and Latin letters developed." The Dalecarlian runes came into use in the early 16th century and remained in some use up to the 20th century. Some discussion remains on whether their use was an unbroken tradition throughout this period or whether people in the 19th and 20th centuries learned runes from books written on the subject. The character inventory was used mainly for transcribing Swedish in areas where Elfdalian was predominant.
Differences from Roman script
While Roman script would ultimately replace runes in most contexts, it differed significantly from runic script. For example, on the differences between the use of Anglo-Saxon runes and the Latin script that would come to replace them, runologist Victoria Symons says:
As well as being distinguished from the roman alphabet in visual appearance and letter order, the fuþorc is further set apart by the fact that, unlike their roman counterparts, runic letters are often associated not only with sound values but also with names. These names are often nouns and, in almost all instances, they begin with the sound value represented by the associated letter. ... The fact that each rune represents [both] a sound value and a word gives this writing system a multivalent quality that further distinguishes it from roman script. A roman letter simply represents its sound value. When used, for example, for the purpose of pagination, such letters can assume added significance, but this is localised to the context of an individual manuscript. Runic letters, on the other hand, are inherently multivalent; they can, and often do, represent several different kinds of information simultaneously. This aspect of runic letters is one that is frequently employed and exploited by writers and scribes who include them in their manuscripts.
Use as ideographs (Begriffsrunen)
In addition to their historic use as letters in the runic alphabets, runes were also used to represent their names (ideographs). Such instances are sometimes referred to by way of the modern German loan word Begriffsrunen, meaning 'concept-runes' (singular Begriffsrune). The criteria for the use of Begriffsrunen and the frequency of their use by ancient rune-writers remain controversial, and the topic has produced much discussion among runologists. Runologist Klaus Düwel has proposed two criteria for the identification of runes as Begriffsrunen: a graphic argument and a semantic argument. Examples of Begriffsrunen (or potential Begriffsrunen) include the following:
In addition to the instances above, several different runes occur as ideographs in Old English and Old Norse manuscripts (featuring Anglo-Saxon runes and Younger Futhark runes respectively). Runologist Thomas Birkett summarizes these numerous instances as follows:
The maðr rune is found regularly in Icelandic manuscripts, the fé rune somewhat less frequently, whilst in Anglo-Saxon manuscripts the runes mon, dæg, wynn and eþel are all used on occasion. These are some of the most functional of the rune names, occurring relatively often in written language, unlike the elusive peorð, for example, which would be of little or no use as an abbreviation because of its rarity. The practicality of using an abbreviation for a familiar noun such as 'man' is demonstrated clearly in the Old Norse poem Hávamál, where the maðr rune is used a total of forty-five times, saving a significant amount of space and effort (Codex Regius: 5–14)
Academic study
The modern study of runes was initiated during the Renaissance, by Johannes Bureus (1568–1652). Bureus viewed runes as holy or magical in a kabbalistic sense. The study of runes was continued by Olof Rudbeck Sr (1630–1702) and presented in his collection Atlantica. Anders Celsius (1701–1744) further extended the science of runes and travelled around the whole of Sweden to examine the runstenar. From the "golden age of philology" in the 19th century, runology formed a specialized branch of Germanic linguistics.
Body of inscriptions
The largest group of surviving runic inscriptions consists of Viking Age Younger Futhark runestones, commonly found in Denmark and Sweden. Another large group comprises medieval runes, most commonly found on small objects, often wooden sticks. The largest concentration of runic inscriptions is the Bryggen inscriptions found in Bergen, more than 650 in total. Elder Futhark inscriptions number around 350, about 260 of which are from Scandinavia, of which about half are on bracteates. Anglo-Saxon futhorc inscriptions number around 100 items.
Modern use
Runic alphabets have seen numerous uses since the 18th-century Viking revival, in Scandinavian Romantic nationalism (Gothicismus) and Germanic occultism in the 19th century, and in the context of the Fantasy genre and of modern Germanic paganism in the 20th century.
Esotericism
Germanic mysticism and Nazi Germany
The pioneer of the Armanist branch of Ariosophy and one of the more important figures in esotericism in Germany and Austria in the late 19th and early 20th century was the Austrian occultist, mysticist, and völkisch author, Guido von List. In 1908, he published in Das Geheimnis der Runen ("The Secret of the Runes") a set of eighteen so-called, "Armanen runes", based on the Younger Futhark and runes of List's own introduction, which allegedly were revealed to him in a state of temporary blindness after cataract operations on both eyes in 1902. The use of runes in Germanic mysticism, notably List's "Armanen runes" and the derived "Wiligut runes" by Karl Maria Wiligut, played a certain role in Nazi symbolism. The fascination with runic symbolism was mostly limited to Heinrich Himmler, and not shared by the other members of the Nazi top echelon. Consequently, runes appear mostly in insignia associated with the Schutzstaffel ("SS"), the paramilitary organization led by Himmler. Wiligut is credited with designing the SS-Ehrenring, which displays a number of "Wiligut runes".
Modern paganism and esotericism
Runes are popular in New Age esotericism, modern Germanic paganism, and to a lesser extent in other forms of modern paganism. Various systems of Runic divination have been published since the 1980s, notably by Ralph Blum (1982), Stephen Flowers (1984, onward), Stephan Grundy (1990), and Nigel Pennick (1995). The Uthark theory originally was proposed as a scholarly hypothesis by Sigurd Agrell in 1932.
In 2002, Swedish esotericist Thomas Karlsson popularized this "Uthark" runic row, which he refers to as, the "night side of the runes", in the context of modern occultism.
Bluetooth
The Bluetooth logo is the combination of two runes of the Younger Futhark, ᚼ hagall and ᛒ bjarkan, equivalent to the letters H and B, the initials of Harald Blåtand ("Bluetooth" in English), a Viking Age king of Denmark.
Fantasy literature
In J. R. R. Tolkien's novel The Hobbit (1937), the Anglo-Saxon runes are used on a map and on the title page to emphasize its connection to the Dwarves. They also were used in the initial drafts of The Lord of the Rings, but later were replaced by the Cirth rune-like alphabet invented by Tolkien, used to write the language of the Dwarves, Khuzdul. Following Tolkien, historical and fictional runes appear commonly in modern popular culture, particularly in fantasy literature, such as J. K. Rowling's Harry Potter series, in which Runes is a subject taught at Hogwarts; in the seventh book, Harry Potter and the Deathly Hallows, Dumbledore gave Hermione a children's book called The Tales of Beedle the Bard, which is written in runes.
Video, board and role-playing games
Runes feature extensively in many video games that incorporate themes from early Germanic cultures, including Hellblade: Senua's Sacrifice, Jøtun, Northgard and God of War. They are used for a range of purposes including as names, symbols, decoration and on runestones that provide information about Nordic mythology and background for the game's narrative. The 1992 video game Heimdall used runes as "magical symbols" associated with unnatural forces. Role-playing games, such as the Ultima series, use a runic font for in-game signs and printed maps and booklets, and Metagaming's The Fantasy Trip used a rune-based cipher for clues and jokes throughout its publications.
Unicode
Runic alphabets were added to the Unicode Standard in September 1999 with the release of version 3.0.
The Unicode block for Runic alphabets is U+16A0–U+16FF. It is intended to encode the letters of the Elder Futhark, the Anglo-Frisian runes, and the Younger Futhark long-branch and short-twig (but not the staveless) variants, resorting to "unification" in cases where cognate letters have the same shape.
The block as of Unicode 3.0 contained 81 symbols: 75 runic letters (U+16A0–U+16EA), 3 punctuation marks (Runic Single Punctuation U+16EB ᛫, Runic Multiple Punctuation U+16EC ᛬ and Runic Cross Punctuation U+16ED ᛭), and three runic symbols that are used in early modern runic calendar staves ("Golden number Runes", Runic Arlaug Symbol U+16EE ᛮ, Runic Tvimadur Symbol U+16EF ᛯ, Runic Belgthor Symbol U+16F0 ᛰ). As of Unicode 7.0 (2014), eight characters were added, three attributed to J. R. R. Tolkien's mode of writing Modern English in Anglo-Saxon runes, and five for the "cryptogrammic" vowel symbols used in an inscription on the Franks Casket.
See also
Bautil
Gothic runic inscriptions
List of runestones
Pentadic numerals – Runic notation for presenting numbers
Runiform (disambiguation), various scripts having a "rune-like" appearance
Runic magic
Sveriges runinskrifter
Footnotes
Nytt om Runer (runology journal), NO: UIO
Bibliography of Runic Scholarship, Galinn grund, archived from the original on 2008-09-05
Gamla Runinskrifter, SE: Christer hamp
Gosse, Edmund William (1911). "Runes, Runic Language and Inscriptions". Encyclopædia Britannica. Vol. 23 (11th ed.). pp. 852–853.
Smith, Nicole; Beale, Gareth; Richards, Julian; Scholma-Mason, Nela (2018), "Maeshowe: The Application of RTI to Norse Runes (Data Paper)", Internet Archaeology (47), doi:10.11141/ia.47.8, S2CID 165773006
Old Norse Online by Todd B. Krause and Jonathan Slocum, free online lessons at the Linguistics Research Center at the University of Texas at Austin, contains a lesson on runic inscriptions |
The Z shell (Zsh) is a Unix shell that can be used as an interactive login shell and as a command interpreter for shell scripting. Zsh is an extended Bourne shell with many improvements, including some features of Bash, ksh, and tcsh.
Zsh was created by Paul Falstad in 1990 while he was a student at Princeton University. It combines features from both ksh and tcsh, offering functionality such as programmable command-line completion, extended file globbing, improved variable/array handling, and themeable prompts.
Zsh is available for Microsoft Windows as part of the UnxUtils collection and has been adopted as the default shell for macOS and Kali Linux. The "Oh My Zsh" user community website provides a platform for third-party plug-ins and themes, featuring a large and active contributor base.
History
Paul Falstad wrote the first version of Zsh in 1990 while a student at Princeton University. The name zsh derives from the name of Yale professor Zhong Shao (then a teaching assistant at Princeton University) – Paul Falstad regarded Shao's login-id, "zsh", as a good name for a shell.
Zsh was at first intended to be a subset of csh for the Amiga, but expanded far beyond that. By the time of the release of version 1.0 in 1990, the aim was to be a cross between ksh and tcsh – a powerful "command and programming language" that is well-designed and logical (like ksh), but also built for humans (like tcsh), with all the neat features like spell checking, login/logout watching and termcap support that were "probably too weird to make it into an AT&T product".
Zsh is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.
In 2019, macOS Catalina adopted Zsh as the default login shell, replacing the GPLv2-licensed version of Bash; when Bash is run interactively on Catalina, a warning is shown by default.
In 2020, Kali Linux adopted Zsh as the default shell with its 2020.4 release.
Features
Features include:
Programmable command-line completion that can help the user type both options and arguments for most used commands, with out-of-the-box support for several hundred commands
Sharing of command history among all running shells
Extended file globbing allows file specification without needing to run an external program such as find
Improved variable/array handling
Editing of multi-line commands in a single buffer
Spelling correction and autofill of command names (and optionally arguments, assumed to be file names)
Various compatibility modes, e.g. Zsh can pretend to be a Bourne shell when run as /bin/sh
Themeable prompts, including the ability to put prompt information on the right side of the screen and have it auto-hide when typing a long command
Loadable modules, providing among other things: full TCP and Unix domain socket controls, an FTP client, and extended math functions.
The built-in where command. Works like the which command but shows all locations of the target command in the directories specified in $PATH rather than only the one that will be used.
Named directories. This allows the user to set up shortcuts such as ~mydir, which then behave the way ~ and ~user do; a brief sketch of this feature and of the where builtin follows this list.
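As a small, hypothetical illustration of the where builtin and named directories (and of driving zsh non-interactively), the Python sketch below runs a few zsh commands via subprocess; it assumes zsh is installed and on the PATH, and the directory used for the named-directory shortcut is arbitrary.

```python
# Run a few zsh-specific commands from Python and print their output.
# Assumes a zsh binary is available on the PATH.
import subprocess

ZSH_SNIPPET = r"""
# 'where' lists every match of a command on $PATH, not just the first one.
where ls

# Named directories: after this, ~work expands the way ~ or ~user would.
hash -d work=/tmp
print -r -- ~work
"""

result = subprocess.run(["zsh", "-c", ZSH_SNIPPET],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```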
Community
A user community website known as "Oh My Zsh" collects third-party plug-ins and themes for the Z shell. As of 2021, its GitHub repository had over 1,900 contributors, over 300 plug-ins, and over 140 themes. It also comes with an auto-update tool that makes it easier to keep installed plug-ins and themes updated.
See also
Comparison of command shells
Official website
Z shell on SourceForge
zsh at Curlie
Oh My Zsh on GitHub |
The Z notation is a formal specification language used for describing and modelling computing systems. It is targeted at the clear specification of computer programs and computer-based systems in general.
History
In 1974, Jean-Raymond Abrial published "Data Semantics". He used a notation that would later be taught at the University of Grenoble until the end of the 1980s. While at EDF (Électricité de France), working with Bertrand Meyer, Abrial also worked on developing Z. The Z notation is used in the 1980 book Méthodes de programmation.
Z was originally proposed by Abrial in 1977 with the help of Steve Schuman and Bertrand Meyer. It was developed further at the Programming Research Group at Oxford University, where Abrial worked in the early 1980s, having arrived at Oxford in September 1979.
Abrial has said that Z is so named "Because it is the ultimate language!" although the name "Zermelo" is also associated with the Z notation through its use of Zermelo–Fraenkel set theory.
In 1992, the Z User Group (ZUG) was established to oversee activities concerning the Z notation, especially meetings and conferences.
Usage and notation
Z is based on the standard mathematical notation used in axiomatic set theory, lambda calculus, and first-order predicate logic. All expressions in Z notation are typed, thereby avoiding some of the paradoxes of naive set theory. Z contains a standardized catalogue (called the mathematical toolkit) of commonly used mathematical functions and predicates, defined using Z itself. It is augmented with Z schema boxes, which can be combined using their own operators, based on standard logical operators, and also by including schemas within other schemas. This allows Z specifications to be built up into large specifications in a convenient manner.
Because Z notation (just like the APL language, long before it) uses many non-ASCII symbols, the specification includes suggestions for rendering the Z notation symbols in ASCII and in LaTeX. There are also Unicode encodings for all standard Z symbols.
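To give a flavour of the LaTeX rendering mentioned above, the fragment below shows a small Z schema written in the style of commonly used Z LaTeX packages such as fuzz or zed-csp (a simplified version of the well-known birthday-book example from the literature); it is illustrative only and assumes one of those style files is loaded.

```latex
% A small Z schema, assuming a Z style file such as fuzz.sty or zed-csp.sty.
% It declares a partial function from names to dates and constrains the
% set of known names to be exactly the domain of that function.
\begin{schema}{BirthdayBook}
  known: \power NAME \\
  birthday: NAME \pfun DATE
\where
  known = \dom birthday
\end{schema}
```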
Standards
ISO completed a Z standardization effort in 2002. This standard and a technical corrigendum are available from ISO free:
the standard is publicly available from the ISO ITTF site free of charge and, separately, available for purchase from the ISO site;
the technical corrigendum is available from the ISO site free of charge.
Award
In 1992, Oxford University Computing Laboratory was awarded The Queen's Award for Technological Achievement for their joint development with IBM of Z notation.
See also
Z User Group (ZUG)
Community Z Tools (CZT) project
Other formal methods (and languages using formal specifications):
VDM-SL, the main alternative to Z
B-Method, developed by Jean-Raymond Abrial (creator of Z notation)
Z++ and Object-Z : object extensions for the Z notation
Alloy, a specification language inspired by Z notation and implementing the principles of Object Constraint Language (OCL).
Verus, a proprietary tool built by Compion, Champaign, Illinois (later purchased by Motorola), for use in the multi-level secure UNIX project pioneered by its Addamax division.
Fastest is a model-based testing tool for the Z notation.
Syntropy
Unified Modeling Language – Software system design modeling tool by Object Management Group
Further reading
Spivey, John Michael (1992). The Z Notation: A reference manual. International Series in Computer Science (2nd ed.). Prentice Hall.
Davies, Jim; Woodcock, Jim (1996). Using Z: Specification, Refinement and Proof. International Series in Computer Science. Prentice Hall. ISBN 0-13-948472-8.
Bowen, Jonathan (1996). Formal Specification and Documentation using Z: A Case Study Approach. International Thomson Computer Press, International Thomson Publishing. ISBN 1-85032-230-9.
Jacky, Jonathan (1997). The Way of Z: Practical Programming with Formal Methods. Cambridge University Press. ISBN 0-521-55976-6. |
ZOPL is a programming language created by Geac Computer Corporation in the early 1970s for use on their mainframe computer systems used in libraries and banking institutions. It had similarities to C and Pascal.
ZOPL stood for "Version Z, Our Programming Language".
ZOPL is still in use at CGI Group (formerly known as RealTime Datapro), which ported it to VAX/VMS and Unix in the 1980s and to Windows in 1998. By 2010 it had been ported to run on Windows XP/2000/2003 and Red Hat Linux. The RTM (formerly ZUG) language compiler and runtime framework are written in ZOPL.
Outside of CGI, ZOPL has not been in general use since the late 1980s, although there is still one known working system where it is found embedded in programs written in the KARL programming language. |
Yorick is a character in William Shakespeare's play Hamlet. He is the dead court jester whose skull is exhumed by the First Gravedigger in Act 5, Scene 1, of the play. The sight of Yorick's skull evokes a reminiscence by Prince Hamlet of the man, who apparently played a role during Hamlet's upbringing:
Alas, poor Yorick! I knew him, Horatio; a fellow of infinite jest, of most excellent fancy; he hath borne me on his back a thousand times; and now, how abhorred in my imagination it is! My gorge rises at it. Here hung those lips that I have kissed I know not how oft. Where be your gibes now? Your gambols? Your songs? Your flashes of merriment, that were wont to set the table on a roar? (Hamlet, V.i)
It is suggested that Shakespeare may have intended his audience to connect Yorick with the Elizabethan comedian Richard Tarlton, a celebrated performer of the pre-Shakespearean stage, who had died a decade or so before Hamlet was first performed.
Vanitas imagery
The contrast between Yorick as "a fellow of infinite jest, of most excellent fancy" and his grim remains reflects on the theme of earthly vanity: death being unavoidable, the things of this life are inconsequential.
This theme of Memento mori ("Remember you shall die") is common in 16th- and 17th-century painting, appearing in art throughout Europe. Images of Mary Magdalene regularly showed her contemplating a skull. It is also a very common motif in 15th- and 16th-century British portraiture.
Memento mori are also expressed in images of playful children or young men, depicted looking at a skull as a sign of the transience of life. It was also a familiar motif in emblem books and tombs.
Hamlet meditating upon the skull of Yorick has become a lasting embodiment of this idea, and has been depicted by later artists as part of the vanitas tradition.
Name
The name Yorick has been interpreted as an attempt to render a Scandinavian forename: usually either "Eric" or "Jørg", a form of the name George. The name "Rorik" has also been suggested, since it appears in Saxo Grammaticus, one of Shakespeare's source texts, as the name of the queen's father. There has been no agreement about which name is most likely.
Alternative suggestions include the ideas that it may be derived from the Viking name of the city of York (Jórvík), or that it is a near-anagram of the Greek word 'Kyrios' and thus a reference to the Catholic martyr Edmund Campion.
The name was used by Laurence Sterne in his comic novels Tristram Shandy and A Sentimental Journey as the surname of one of the characters, a parson who is a humorous portrait of the author. Parson Yorick is supposed to be descended from Shakespeare's Yorick.
Portrayals
The earliest known printed image of Hamlet holding Yorick's skull is a 1773 engraving by John Hall after a design by Edward Edwards in Bell's edition of Shakespeare's plays. It has since become a common subject. While Yorick normally only appears as the skull, there have been scattered portrayals of him as a living man, such as Philip Hermogenes Calderon's painting The Young Lord Hamlet (1868), which depicts him carrying the child Hamlet on his back, as if being ridden like a horse by the prince. He was portrayed by comedian Ken Dodd in a flashback during the gravedigging scene in Kenneth Branagh's 1996 film Hamlet.
Pianist André Tchaikowsky donated his skull to the Royal Shakespeare Company for use in theatrical productions, hoping that it would be used as the skull of Yorick. Tchaikowsky died in 1982. His skull was used during rehearsals for a 1989 RSC production of Hamlet starring Mark Rylance, but the company eventually decided to use a replica skull in the performance. Musical director Claire van Kampen, who later married Rylance, recalled:
As a company, we all felt most privileged to be able to work the gravedigger scene with a real skull ... However, collectively as a group we agreed that as the real power of theatre lies in the complicity of illusion between actor and audience, it would be inappropriate to use a real skull during the performances, in the same way that we would not be using real blood, etc. It is possible that some of us felt a certain primitive taboo about the skull, although the gravedigger, as I recall, was all for it!
Although Tchaikowsky's skull was not used in the performances of this production, its use during rehearsals affected some interpretations and line readings: for example, Rylance delivered the line "That skull had a tongue in it, and could sing once" with especial reproach. In this production, Hamlet retained Yorick's skull throughout subsequent scenes, and it was eventually placed on a mantelpiece as a "talisman" during his final duel with Laertes. In 2008, Tchaikowsky's skull was used by David Tennant in an RSC production of Hamlet at the Courtyard Theatre, Stratford-upon-Avon. It was later announced that the skull had been replaced after it became apparent that news of the skull distracted the audience too much from the play. However, this was untrue, and the skull was used as a prop throughout the run of the production after its move to London's West End.
Yorick appears as a principal character in the novel The Skull of Truth by Bruce Coville. |
In computer programming, Yoix is a high-level, general-purpose, interpreted, object-based, dynamic programming language. The Yoix interpreter is implemented using standard Java technology without any add-on packages and requires only a Sun-compliant JVM to operate. Initially developed by AT&T Labs researchers for internal use, it has been available as free and open source software since late 2000.
History
In 1998, Java technology was still emerging: the Swing toolkit was an add-on package; interruptible I/O, regular expressions, and a printf capability were not yet features; nor had Java Web Start been developed. Moreover, Java scripting languages were largely non-existent at that time: Groovy and JRuby had not yet been invented and Jython had just been created in late 1997. Browsers in 1998 had limited feature sets, were too unstable for production use in an 8-hour shift and were still fighting skirmishes in the Browser Wars. In this environment, Yoix technology was created in response to a pressing need for a reliable, easy to distribute and maintain, GUI front-end for a mission-critical application being developed within AT&T, namely its Global Fraud Management System, which to this day monitors and tracks fraud activity related to voice traffic on AT&T's expanding networks: wireline, wireless, and IP. Yoix technology was first released to the public in late 2000 under the Open Source Initiative Common Public License V1.0.
The Yoix name came about partially from the fox hunting cry of encouragement to the hounds, partially to echo another familiar four-letter name that ends in ix, and partially to avoid too many false-positives in a Google search.
Overview
Yoix technology provides a pure Java programming language implementation of a general purpose dynamic programming language developed by researchers at AT&T Labs. Its syntax and grammar should be easy to learn for those familiar with the C programming language and Java. To an end-user, a Yoix application is indistinguishable from a Java application, but to the application developer Yoix should provide a simpler coding experience than working in Java directly, much like writing Perl code can be simpler than writing C code.
Features
The Yoix language is not an object-oriented language, but makes use of over 165 object types that provide access to most of the standard Java classes. Because the Yoix interpreter is built entirely using Java technology, Yoix applications are cross-platform, GUI-capable and both network and thread friendly, yet Yoix developers find themselves insulated from the more complex and error-prone parts of coding the same functionality directly in Java. It does not use reflection to access Java functionality, and thus adds value not only by simplifying access to that functionality, but also by improving application reliability, coding through both Java glitches and complicated Java features one time, behind the scenes. The Yoix language includes safe pointers, addressing, declarations, and global and local variables. In addition to supporting native user functions, users can add their own builtin functions written in Java.
Design
The two central elements in the Yoix design are borrowed from the PostScript language: dictionaries as language components and permissions-protected dictionaries as exposed system components. Homage should also be paid to the Tcl language and its exposure philosophy, though it did not have a direct influence on Yoix.
Another key Yoix design element involves pointers and addressing. Pointers and pointer arithmetic in the Yoix language are syntactically similar to what is found in the C language, but the Yoix implementation prevents using a pointer outside its bounds. In addition, the address operator always produces a valid, usable result.
Overall, the Yoix design attempted to make the language easy to learn by programmers experienced with the C and Java languages.
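The bounds-checking behaviour described above can be sketched, purely for illustration and without reproducing Yoix syntax, as a small Python class in which pointer arithmetic is allowed but an out-of-range dereference is refused:

```python
# Toy model of a bounds-checked pointer over an array: arithmetic is
# permitted, but dereferencing outside the array raises an error.
class SafePointer:
    def __init__(self, array, offset=0):
        self.array = array
        self.offset = offset

    def __add__(self, n):
        # Moving the pointer is always allowed ...
        return SafePointer(self.array, self.offset + n)

    def deref(self):
        # ... but an out-of-bounds dereference is caught immediately.
        if not 0 <= self.offset < len(self.array):
            raise IndexError(f"pointer out of bounds at offset {self.offset}")
        return self.array[self.offset]


p = SafePointer([10, 20, 30])
print((p + 2).deref())        # 30
try:
    (p + 3).deref()
except IndexError as exc:
    print("caught:", exc)
```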
Applications
The Yoix distribution includes the Yoix Web Application Instant Template (YWAIT), a software framework for building a Yoix web application. A Yoix web application resides on a web server and is downloaded piecemeal at run-time on an as-needed basis by Yoix interpreters running on client machines. This model, analogous to the familiar model of client web browsers downloading a website piecemeal as-needed at run-time, permits simple, efficient distribution and maintenance of applications and relies only on the ubiquitous web server and the Yoix interpreter. Building a web application using the YWAIT framework requires just a few standard Unix tools available in most modern operating systems, such as Linux or Mac OS X, or under Microsoft Windows with the help of add-on packages such as U/Win. The client side of a YWAIT-based application relies only on the Yoix interpreter and is thus platform independent, running wherever Java runs. Because the Yoix software development philosophy aims to keep things simple by eschewing the popular tendency for multiple embedded specialized languages and the YWAIT framework permits easy, incremental screen development in a simple, logical source tree hierarchy, development of a Yoix web application is reduced to the basics: a command prompt and a text editor. IDE enthusiasts may be nonplussed, but this Small Is Beautiful approach to software development has been highly effective in practice at AT&T.
Data visualization
In addition to its role as a tool for building GUI applications, Yoix technology supports several modes of data visualization.
Data mining
A data visualization module called YDAT (Yoix Data Analysis Tool) has been included in the public Yoix distribution since release 2.1.2. YDAT uses a data manager component to coordinate data display and filtering among its several visualization components that include an event plot, a graph drawing pane, histogram filters and tabular detail. YDAT is able to display graphs generated by the GraphViz graph drawing and layout tool, which is another open source tool freely available from AT&T Labs. YDAT is highly configurable at the Yoix language level. The image below is a screenshot of a Yoix YDAT instantiation, which in this example is being used to analyze vehicle auction transactions.
Graph drawing
Yoix technology provides good support for graph drawing. In addition to the graph display mentioned above as part of the YDAT module, data types in the Yoix language support building, manipulating and traversing graph structures. Native Yoix functions support output in the DOT language, and a built-in DOT language parser facilitates interaction with the GraphViz layout engines.
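For readers unfamiliar with the DOT format referred to above, the short Python sketch below writes a minimal DOT description of a three-node graph; rendering it (for example with `dot -Tpng graph.dot -o graph.png`) assumes Graphviz is installed separately.

```python
# Emit a tiny directed graph in the DOT language used by the Graphviz
# layout engines, and save it to graph.dot.
edges = [("start", "work"), ("work", "done")]

lines = ["digraph Example {"]
for src, dst in edges:
    lines.append(f'    "{src}" -> "{dst}";')
lines.append("}")

dot_text = "\n".join(lines) + "\n"
with open("graph.dot", "w", encoding="utf-8") as fh:
    fh.write(dot_text)

print(dot_text)
```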
Organizing cells of data
The YChart data visualization toolkit was added to the Yoix distribution with release 2.2.0. YChart allows one to organize and display cells of data. Two interactive YChart applications contained in the Yoix distribution are a Periodic Table of the Elements and a Unicode Chart. A program to demonstrate using YChart with variable width cells, as might occur with a schedule, is also available in the Yoix distribution.
Interactive 2D graphics
The Yoix distribution also includes a Yoix package, called Byzgraf, for rendering basic data plots such as line charts, histograms and statistical box plots.
Limitations and focus
As currently implemented, the Yoix language is interpreted, which means that, for example, it is probably not the right choice for computationally intensive applications unless one codes those computations in a Java module extension. Similarly, excessive looping will also display the limitations of this interpreted language. The focus of the language is interactive standalone or client/server GUI and data visualization applications.
Licensing
Yoix technology is free software licensed under the Open Source Initiative Common Public License. Yoix is a registered trademark of AT&T Inc.
Examples
1. Extract all HTML directives from the AT&T home page and write them to standard output (a rough Python analogue is sketched after this list).
2. Build and display a GUI with two buttons in a titled frame (i.e., window) that also has a titled border. One button pops up a message when pressed, the other quits the example. The window is sized automatically to just fit its components, and some additional code calculates its location to put it in the center of the screen before making it visible.
3. The code shown here was used to generate the Yoix logo image in PNG format that can be seen in the language description box near the top of this page. Command-line arguments allow the size of the image to be specified as well as select between PNG image output or display in an on-screen window.
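The original Yoix source for these examples is not reproduced here. As a rough, hypothetical Python analogue of the first example, the sketch below fetches the AT&T home page and writes each HTML start tag it encounters to standard output; the URL and the tag-only output are assumptions made for illustration.

```python
# Rough Python analogue of the first example above: fetch a page and
# print every HTML start tag found in it to standard output.
from html.parser import HTMLParser
from urllib.request import urlopen


class TagPrinter(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print(tag)


with urlopen("https://www.att.com/") as response:
    page = response.read().decode("utf-8", errors="replace")

TagPrinter().feed(page)
```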
Archive of original AT&T Labs-Research: Yoix Home Page
Web Engineering Workshop Paper
Software - Practice & Experience Paper |
Q# (pronounced as Q sharp) is a domain-specific programming language used for expressing quantum algorithms. It was initially released to the public by Microsoft as part of the Quantum Development Kit.
History
Historically, Microsoft Research had two teams interested in quantum computing: the QuArC team, based in Redmond and directed by Krysta Svore, which explored the construction of quantum circuitry, and Station Q, initially located in Santa Barbara and directed by Michael Freedman, which explored topological quantum computing.
During a Microsoft Ignite keynote on September 26, 2017, Microsoft announced that it was going to release a new programming language geared specifically towards quantum computers. On December 11, 2017, Microsoft released Q# as part of the Quantum Development Kit.
At Build 2019, Microsoft announced that it would be open-sourcing the Quantum Development Kit, including its Q# compilers and simulators.
Bettina Heim currently leads the Q# language development effort.
Usage
Q# is available as a separately downloaded extension for Visual Studio, but it can also be run as an independent tool from the command line or Visual Studio Code. The Quantum Development Kit ships with a quantum simulator that is capable of running Q#.
In order to invoke the quantum simulator, another .NET programming language, usually C#, is used; it provides the (classical) input data for the simulator and reads the (classical) output data from the simulator.
Features
A primary feature of Q# is the ability to create and use qubits for algorithms. As a consequence, some of the most prominent features of Q# are the ability to entangle qubits and to place them in superposition via controlled-NOT gates and Hadamard gates, respectively, as well as Toffoli gates, the Pauli X, Y and Z gates, and many more which are used for a variety of operations; see the list at the article on quantum logic gates.
The hardware stack that will eventually come together with Q# is expected to implement qubits as topological qubits. The quantum simulator that is shipped with the Quantum Development Kit today is capable of processing up to 32 qubits on a user machine and up to 40 qubits on Azure.
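As a language-neutral illustration of the gate operations mentioned above (using NumPy rather than Q# itself), the sketch below applies a Hadamard gate and then a controlled-NOT to two qubits initialised to |00⟩, producing the entangled Bell state (|00⟩ + |11⟩)/√2.

```python
# Build a Bell state with explicit matrices: H on qubit 0, then CNOT with
# qubit 0 as control and qubit 1 as target (qubit 0 is the leftmost bit).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)       # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1, 0, 0, 0], dtype=complex)      # |00>
state = np.kron(H, I) @ state                      # superposition on qubit 0
state = CNOT @ state                               # entangle the two qubits

print(np.round(state, 3))  # amplitudes ~ [0.707, 0, 0, 0.707]: (|00>+|11>)/sqrt(2)
```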
Documentation and resources
Currently, the resources available for Q# are scarce, but the official documentation is published: Microsoft Developer Network: Q#. The Microsoft Quantum GitHub repository is also a large collection of sample programs implementing a variety of quantum algorithms and their tests.
Microsoft has also hosted a quantum coding contest on Codeforces, called the Microsoft Q# Coding Contest, and has provided related material to help answer the questions in blog posts, plus detailed solutions in tutorials.
Microsoft hosts a set of learning exercises to help learn Q# on GitHub (microsoft/QuantumKatas), with links to resources and answers to the problems.
Syntax
Q# is syntactically related to both C# and F# yet also has some significant differences.
Similarities with C#
Uses namespace for code isolation
All statements end with a ;
Curly braces are used for statements of scope
Single line comments are done using //
Variable data types such as Int, Double, String and Bool are similar, although capitalised (and Int is 64-bit)
Qubits are allocated and disposed inside a using block.
Lambda functions are defined using the => operator.
Results are returned using the return keyword.
Similarities with F#
Variables are declared using either let or mutable
First-order functions
Modules, which are imported using the open keyword
The datatype is declared after the variable name
The range operator ..
for … in loops
Every operation/function has a return value, rather than void. Instead of void, an empty Tuple is returned.
Definition of record datatypes (using the newtype keyword, instead of type).
Differences
Functions are declared using the function keyword
Operations on the quantum computer are declared using the operation keyword
Lack of multiline comments
Asserts instead of throwing exceptions
Documentation is written in Markdown instead of XML-based documentation tags
Example
The following source code is a multiplexer from the official Microsoft Q# library repository.
Official website
qsharp-language on GitHub |
In Islamic philosophy, the qalb (Arabic: قلب) or heart is the center of the human personality. The Quran mentions "qalb" 132 times and its root meaning suggests that the heart is always in a state of motion and transformation. According to the Quran and the prophetic tradition, the heart plays a central role in human existence, serving as the source of good and evil, right and wrong. In Islam, God is more concerned with the motives of one's heart than their actions. The heart is also a medium for God's revelations to human beings, and is associated with virtues such as knowledge, faith, purity, piety, love, and repentance. Without purification, however, the heart can become plagued with negative attributes such as sickness, sinfulness, evil, and hate.
Theologically, the heart is regarded as the barzakh or isthmus between this world and the next, and between the visible and invisible worlds, the human realm, and the realm of the Spirit.
In the Quran
The Quran frequently employs the term "qalb" (heart), which appears 132 times, and at times substitutes it with similar terms. The word's root meaning denotes concepts of change, transformation, and fluctuation, implying that the heart is constantly in motion and may undergo reversal or alteration. The Quran uses the term "heart" in various ways that highlight its central role in human existence. These diverse uses of the word imply that its original meaning - involving ideas of turning, changing, and overturning - remains relevant, as the heart is regarded as the source of good and evil, right and wrong. The Quran teaches that both believers and non-believers possess hearts. In general, the Quran portrays the heart "as the locus of that which makes a human being human, the center of the human personality". This importance of the heart is due to the profound relationship between humans and God, with the heart being the point of convergence where they can meet God. This interaction is multi-dimensional, encompassing both cognitive and moral dimensions.
God pays special attention to the heart, as it is viewed as the true center of a person. Quranic verses highlight that God is more concerned with the motives of one's heart than their actions. While mistakes can be forgiven, the intentions of the heart are critical. For example, in 33:5 the Quran states: "There is no fault in you if you make mistakes, but only in what your hearts premeditate". In 2:225, it says: "God will not take you to task for a slip in your oaths; but He will take you to task for what your hearts have earned; and God is Forgiving, Clement" (cf. 2:118, 8:70).
According to the Quran, the heart serves as a medium for God's revelations to human beings. Prophets receive revelations in their hearts, and it is also a place for vision, understanding, and remembrance. The heart plays a crucial role in fostering faith and directing guidance towards the right path. However, it can also serve as a breeding ground for doubt, denial, unbelief, and misguidance, which Satan may try to instill. The heart is associated with virtues such as purity, piety, love, and repentance, but these virtues are not inherent and must be placed by God. Without God's purification, the heart can become plagued with negative attributes such as sickness, sinfulness, evil, and hate. The heart is meant to be open and receptive to the divine guidance, light, and love. However, the hearts of those who do wrong can become hard and harsh. The Quran teaches that God has sent down a beautiful scripture, and those who fear Him tremble when they read it, causing their skin and hearts to soften. However, if the heart is not receptive, it can become hard like stone, or even harder, as the hearts of some have become.
In prophetic tradition
The Prophet Muhammad frequently used supplications where he called upon God as the one who makes hearts fluctuate or turn about. He described the heart as being like a feather in the desert, blown by the wind to and fro. One of his wives reported that he used to pray for his heart to be fixed in God's religion, and when she asked him about it, he explained that every person's heart lies between two fingers of God and that God can make it go straight or swerve as He wishes.
Theological aspects
In Islamic thought, the heart is considered the core of the human being, encompassing not only physical and emotional aspects but also intellectual and spiritual aspects. It serves as a connection between individuals and the larger, transcendent realms of existence. According to Seyyed Hossein Nasr, modern society rejects the importance of heart-knowledge because it fails to recognize the existence of individuals beyond their individualistic levels of being.
The heart is not a center of our being; it is the supreme center, its uniqueness resulting from the metaphysical principle that for any specific realm of manifestation there must exist a principle of unity. The heart is the barzakh or isthmus between this world and the next, between the visible and invisible worlds, between the human realm and the realm of the Spirit, between the horizontal and vertical dimensions of existence.
Stages of taming qalb
Qalb also refers to the second among the six purities or Lataif-e-sitta in Sufi philosophy.
To attain Tasfiya-e-Qalb (purification of the heart), the Salik needs to achieve the following sixteen goals.
Zuhd or abstention from evil
Taqwa or God-consciousness
War' a or attempt to get away from things that are not related to Allah.
Tawakkul or being content with whatever Allah gives
Sabır or patience regarding whatever Allah Subhanahu wa ta'âlâ does
Şukr or gratefulness for whatever Allah gives
Raza or seeking the happiness of Allah
Khauf or fear of Allah's wrath
Rija or hope of Allah's blessing
Yaqeen or complete faith in Allah
Ikhlas or purity of intention
Sidq or bearing the truth of Allah
Muraqabah or total focus on Allah
Khulq or humbleness for Allah
Dhikr or remembrance of Allah
Khuloot or isolation from everyone except Allah
Murata, S. (1992). "The Heart". The Tao of Islam: A Sourcebook on Gender Relationships in Islamic Thought. State University of New York Press. ISBN 978-0-7914-0913-8.
Nasr, S.H.; Chittick, W.C. (2007). The Essential Seyyed Hossein Nasr. Library of perennial philosophy The perennial philosophy series. World Wisdom. ISBN 978-1-933316-38-3.
Rothman, Abdallah; Coyle, Adrian (2018). "Toward a Framework for Islamic Psychology and Psychotherapy: An Islamic Model of the Soul". Journal of Religion and Health. Springer Science and Business Media LLC. 57 (5): 1731–1744. doi:10.1007/s10943-018-0651-x. ISSN 0022-4197. PMC 6132620.
See also
Lataif-e-sitta
Nafs
Ruh
Sufism |
QtScript is a scripting engine that has been part of the Qt cross-platform application framework since version 4.3.0. It was first deprecated and then dropped as of Qt 6.5 (which has Qt QML as its replacement).
The scripting language is based on the ECMAScript standard with a few extensions, such as QObject-style signal and slot connections. The library contains the engine, and a C++ API for evaluating QtScript code and exposing custom QObject-derived C++ classes to QtScript.
The QtScript Binding Generator provides bindings for the Qt API to access directly from ECMAScript. QtScript and the binding generator are used for Amarok 2's scripting system.
The current (as of Qt 4.7) implementation uses JavaScriptCore and will not be further developed. The module was deprecated as of Qt 5.5.
Qt Script for Applications (QSA)
An earlier and unrelated scripting engine, called Qt Script for Applications (QSA), was shipped by Trolltech as a separate Qt-based library, dual-licensed under GPL and a commercial license.
With the release of QtScript, QSA has been deprecated and reached its end of life in 2008.
Qt: Making applications scriptable
QtScript module
QSA documentation (version 1.2.2)
Last working snapshot of QSA homepage from archive.org
QSA download directory |
ZPL may refer to:
ZPL (complexity), a complexity class
ZPL (programming language), for scientific applications
Zebra Programming Language, for label printers
Zope Public License
Lachixío Zapotec language (ISO 639-3 language code) |
Jinn (Arabic: جن, jinn) – also romanized as djinn or anglicized as genies – are invisible creatures in early religion in pre-Islamic Arabia and later in Islamic culture and beliefs.
Like humans, they are accountable for their deeds and can be either believers (Muslims) or unbelievers (kafir), depending on whether they accept God's guidance. Since jinn are neither innately evil nor innately good, Islam acknowledged spirits from other religions and was able to adapt them during its expansion. Jinn are not a strictly Islamic concept; they may represent several pagan beliefs integrated into Islam. To assert a strict monotheism and the Islamic concept of tawhid (oneness of God), Islam denies all affinities between the jinn and God, thus placing the jinn parallel to humans, also subject to God's judgment and afterlife. The Quran condemns the pre-Islamic Arabian practice of worshipping or seeking protection from them.
Although generally invisible, jinn are supposed to be composed of thin and subtle bodies (Arabic: أَجْسَام, romanized: ʾajsām), which they can change at will. They favour a snake form, but can also choose to appear as scorpions, lizards, or as humans. They may even engage in sexual affairs with humans and produce offspring. If they are injured by someone, they usually seek revenge or possess the assailant's body, refusing to leave it until forced to do so by exorcism. Jinn do not usually meddle in human affairs, preferring to live with their own kind in tribes similar to those of pre-Islamic Arabia.
Individual jinn appear on charms and talismans. They are called upon for protection or magical aid, often under the leadership of a king. Many people who believe in jinn wear amulets to protect themselves against the assaults of jinn, sent out by sorcerers and witches. A commonly-held belief maintains that jinn cannot hurt someone who wears something with the name of God written upon it. While some Muslim scholars in the past have had ambivalent attitudes towards sorcery, believing that good jinn do not require one to commit sin, most contemporary Muslim scholars associate dealing with jinn with idolatry.
Etymology and translation
Jinn is an Arabic collective noun deriving from the Semitic root JNN (Arabic: جَنّ / جُنّ, jann), whose primary meaning is 'to hide' or 'to adapt'. Some authors interpret the word to mean, literally, 'beings that are concealed from the senses'. Cognates include the Arabic majnūn (مَجْنُون, 'possessed' or, generally, 'insane'), jannah (جَنَّة, 'garden', 'eden' or 'heaven'), and janīn (جَنِين, 'embryo'). Jinn is properly treated as a plural (however, in Classical Arabic it may also appear as jānn, جَانّ), with the singular being jinnī (جِنِّيّ).
The origin of the word jinn remains uncertain.(p22) Some scholars relate the Arabic term jinn to the Latin genius – a guardian spirit of people and places in Roman religion – as a result of syncretism during the reign of the Roman empire under Tiberius and Augustus;(p38) however, this derivation is also disputed.(p25) Another suggestion holds that jinn may be derived from Aramaic ginnaya (Classical Syriac: ܓܢܝܐ) with the meaning of 'tutelary deity'(p24) or 'guardian'. Others claim a Persian origin of the word, in the form of the Avestic Jaini, a wicked (female) spirit. Jaini were among various creatures in the possibly even pre-Zoroastrian mythology of peoples of Iran.
The anglicized form genie is a borrowing of the French génie, also from the Latin genius. It first appeared in 18th-century translations of the Thousand and One Nights from the French, where it had been used owing to its rough similarity in sound and sense, and further applies to benevolent intermediary spirits, in contrast to the malevolent spirits called 'demon' and 'heavenly angels', in literature. In Assyrian art, creatures ontologically between humans and divinities are also called genie. Although the term spirit is frequently used, it has been criticised for not capturing the corporeal nature of the jinn, and it has been suggested that the term genie be used instead. Though not a precise fit, descriptive analogies that have been used for these beings in Western thought include demon, spirit and fairy, depending on source.(p22)
Pre-Islamic era
The exact origins of belief in jinn are not entirely clear.(pp 1–10) Belief in jinn in the pre-Islamic Arab religion is attested not only by the Quran, but also by pre-Islamic literature in the seventh century.: 54 Some scholars of the Middle East hold that they originated as malevolent spirits residing in deserts and unclean places, who often took the forms of animals;(p 1–10) others hold that they were originally pagan nature deities who gradually became marginalized as other deities took greater importance.(pp 1–10) Since the term jinn seems to be not of Arabic, but of Aramaic origin, denoting demonized pagan deities, the jinn probably entered Arabian belief in the late pre-Islamic period.: 54 Still, jinn had been worshipped by many Arabs during the pre-Islamic period,(p 34) though, unlike gods, jinn were not regarded as immortal. Emilie Savage-Smith, who asserted that jinn are malevolent in contrast to benevolent gods, does not hold this distinction to be absolute, admitting jinn-worship in pre-Islamic Arabia.: 39 In the regions north of the Hejaz, Palmyra and Baalbek, the terms jinni and ilah were often used interchangeably. Julius Wellhausen likewise states that in pre-Islamic Arabia it was assumed that there were friendly and helpful beings among the jinn. He asserts that the distinction between a god and a jinni is that the jinn are worshipped in private while the gods are worshipped in public.: 39 Although their mortality ranks them lower than gods, it seems that the veneration of jinn played a greater role in the everyday life of pre-Islamic Arabs than the gods themselves. According to common Arabian belief, soothsayers, pre-Islamic philosophers, and poets were inspired by the jinn.(p 34)(pp 1–10) Their culture and society were analogous to those of pre-Islamic Arabian culture, having tribal leaders, protecting their allies and avenging murder for any member of their tribe or allies.(p 424) Although the powers of jinn exceed those of humans, it is conceivable that a man could kill a jinni in single combat. Jinn were thought to shift into different shapes, but were feared especially in their invisible form, since then they could attack without being seen.
Jinn were also feared because they had been thought to be responsible for various diseases and mental illnesses.(p 122)(pp 1–10) Julius Wellhausen observed that such spirits were thought to inhabit desolate, dingy, and dark places and that they were feared.
One had to protect oneself from them, but they were not the objects of a true cult. Al-Jahiz credits the pre-Islamic Arabs with believing that the society of jinn constituted several tribes and groups, and some natural events, such as storms, were attributed to them. They also thought jinn could protect, marry, kidnap, possess, and kill people. Although they were often feared or inspired awe, the jinn were also pictured as having romantic feelings for humans. According to a famous pre-Islamic story, the jinni Manzur fell in love with a human woman called Habbah, teaching her the arts of healing.
Some scholars argue that angels and devils were introduced to Arabia by the Islamic prophet Muhammad and did not exist among the jinn. On the other hand, Amira el-Zein argues that angels were known to the pagan Arabs, but the term jinn was used for all kinds of supernatural entities among various religions and cults; thus, Zoroastrian, Christian, and Jewish angels and devils were conflated with jinn.(p 34)
Islamic beliefs
In scripture
Jinn are mentioned literally 32 times in the Quran, as an individual by the name "Iblees" 11 times, and as the devil and passively many more times than that. By that, the Quran confirms their existence to Muslims, but does not elaborate on them any further. In Islamic tradition, Muhammad was sent as a prophet to both human and jinn communities, and prophets and messengers were sent to both communities. Traditionally, the 72nd surah, Al-Jinn, named after them, is held to tell about the revelation to the jinn, and several stories mention that one of Muhammad's followers accompanied him, witnessing the revelation to the jinn.(p64)
The Quran condemns the pre-Islamic practice of worshipping jinn as a means of protection (72:6). The Quran reduced the status of jinn from that of tutelary deities to that of minor spirits, usually paralleling humans. They are, like humans, rational beings formed of nations (7:38). Surah 51:56 states that both jinn and humans were created to worship God. Surah 6:130 states that God has sent messengers to both humans and jinn. Individuals among both communities are held accountable for their deeds, and will be punished or rewarded in the afterlife, in accordance with their deeds (7:179, 55:56). It is impossible for both jinn and humans to approach God both physically (55:33) and mentally (17:90).
Unlike humans, jinn are not vicegerents of the earth. Al-Baqara only credits Adam as a successor (khalifa). However, some exegetes, like Tabari, argue that the jinn inherited the world before humans, and that when the angels complain about God creating humans who "will shed blood", they link humans to the jinn who ruled the earth previously.
In the story of Solomon, it is implied that the jinn live on the earth alongside humans. Solomon is granted dominion over humans, ants, birds and jinn. The jinn served him as soldiers and builders of the First Temple. According to the Quran, when Solomon died, the jinn did not realize that his soul had left his body until he fell to the ground. This is understood to be proof that the jinn, despite being generally invisible themselves, do not know the unseen (Al-Ghaib).
The jinn are also mentioned in collections of canonical hadiths. According to the reports of the hadiths, the jinn eat like humans, but instead of fresh food, they prefer rotten flesh and bones.(p51) Another hadith advises closing doors and keeping children close at night, for the jinn go around and snatch things away. One hadith divides them into three groups, with one type of jinn flying through the air; another that takes the form of snakes and dogs; and a third that moves from place to place like humans. This account parallels the jinn to humans, similar to the Quran, as another hadith divides humans into three groups, with one kind which is like a four-legged beast and is said to remain ignorant of God's message; a second one which is under the protection of God; and a last one with the body of a human, but the soul of a devil (shaitan).
A famous, yet weak (da'if), hadith narrates that ibn Masud accompanied Muhammad to a lecture to the jinn somewhere in the mountains. Muhammad would have drawn a line around ibn Masud and commanded him not to leave the circle. Then ibn Masud watched Muhammad being surrounded by silhouettes, and he was afraid that Muhammad would be attacked by his enemies. Remembering Muhammad's words, he decided not to intervene. When Muhammad returned, he told ibn Masud that, if he had left his place, he would have been killed by these jinn.
Exegesis
Belief in jinn is not included among the six articles of Islamic faith, as belief in angels is; however, many Muslim scholars believe it essential to the Islamic faith. Many scholars regard their existence and ability to enter human bodies as part of the aqida (theological doctrines) in the tradition of Ashari.
In Quranic interpretation, the term jinn can be used in two different ways:
as invisible beings, offspring of abu Jann considered to be, along with humans, thaqalān (accountable for their deeds), created out of "fire and air" (Arabic: مَارِجٍ مِن نَّار, mārijin min nār).
as the opposite of al-Ins (something in shape), referring to any object that cannot be detected by human sensory organs, including angels, devils, and the interior of human beings.
Tabari records from ibn Abbas yet another usage for the term jinn, as a reference to a tribe of angels created from the fires of samūm (Arabic: سَمُوم, 'poisonous fire'). They got their name from jannah ("heaven" or "paradise") instead. They would have waged war against the jinn before the creation of Adam. According to Tabari, the angels were created on Wednesday, the jinn on Thursday, and humans on Friday, though not in succession, but rather, more than 1000 years later, respectively.(p 43) With the revelation of Islam, the jinn were given a new chance to access salvation. However, because of their prior creation, the jinn would attribute to themselves a superiority over humans and envy them for their place and rank on earth.(p 43)
The different jinn known in Islamic folklore are disregarded among most mufassirs – authors of tafsir – Tabari being an exception (though he is not specific about them, probably due to lack of theological significance). Since Tabari is one of the earliest commentators, the several jinn have been known since the earliest stages of Islam.(p 132) The ulama (scholars of Islamic law) discuss the permissibility of jinn marriage. Since the Quran talks about marriage with human women only, many regard it as prohibited. Some argue that someone who marries a jinn will lose fear of God.
Although conjuring jinn is considered unbelief (kufr) by Islamic scholars, most agree that they are capable of performing magic.
Classic theology
The notion that jinn can possess individuals is generally accepted by the majority of Muslim scholars, and considered part of the doctrines (aqidah) of the "people of the Sunnah" (ahl as-sunnah wal-jammah'a) in the tradition of Ash'ari.(p 68) A minority of Muslim scholars, associated with the Muʿtazila, denied that jinn could possess a human physically, asserting they could only influence humans by whispering to them, like the devils do.(p 73) Some, like ibn Sina,(p 89) even denied their existence altogether. Sceptics refused to believe in a literal reading of jinn in Islamic sacred texts, preferring to view them as "unruly men" or as metaphorical.
Other critics, such as Jahiz and Mas'udi, explained jinn and demons as merely psychological phenomena. Jahiz states in his Kitāb al-Hayawān that loneliness induces humans to mind-games and wishful thinking, causing waswās (Arabic: وَسْوَاس, 'demonic whisperings in the mind'), causing a fearful man to see things which are not real. These alleged appearances are told to other generations in bedtime stories and poems, and when they grow up, they remember these stories when they are alone or afraid, encouraging their imaginations and causing another alleged sighting of jinn.(p37)
According to the Asharites, the existence of jinn and demons cannot be proven or falsified, because arguments concerning the existence of such entities are beyond human comprehension. Adepts of Ashʿari theology explain that jinn are invisible to humans because humans lack the appropriate sensory organs to envision them.(p22) Hanbali scholar ibn Taymiyya and Zahiri scholar ibn Hazm regarded denial of jinn as "unbelief" (kufr), since they are mentioned in Islamic sacred texts. They further point towards demons and spirits in other religions, such as Christianity, Zoroastrianism and Judaism, as evidence for their existence.(p33) Ibn Taymiyya believed the jinn to be generally "ignorant, untruthful, oppressive, and treacherous". He held that the jinn account for much of the "magic" that is perceived by humans, cooperating with magicians to lift items in the air, delivering hidden truths to fortune tellers, and mimicking the voices of deceased humans during seances.
Al-Maturidi relates the jinn to their depiction as former minor deities, writing that humans seek refuge among the jinn, but the jinn are actually weaker than humans. It is not the jinn but a human's own mind and attitude towards them that are the sources of fear. By submitting to the jinn, humans allow the jinn to have power over them, humiliate themselves, increase their dependence on them, and commit shirk. Abu l-Lait as-Samarqandi, a disciple of the Maturidi school of theology, is credited with the opinion that, unlike angels and devils, humans and jinn are created with fitra, neither born as believers nor as unbelievers; their attitude depends on whether they accept God's guidance.
Still, jinn were not perceived as necessarily evil or hostile beings. In the story of Nasir Khusraw's (1004 – after 1070 CE) burial, his brother is assisted by two jinn. They cut a rock and shape it into a tombstone.
Modern theology
Many modernists tried to reconcile the traditional perspective on jinn with modern sciences. Muhammad Abduh understood references to jinn in the Quran to denote anything invisible, be it an undefined force or a simple inclination towards good or evil. He further asserted that jinn might be an ancient description of germs, since both are associated with diseases and cannot be perceived by the human eye alone, an idea adapted by the Ahmadi sect.
On the other hand, Salafism rejects metaphorical reinterpretation of the jinn and their identification with microorganisms, advocating a literal belief in jinn. It further rejects the protection and healing rituals common across Islamic culture used to ward off jinn or to prevent possession, taking the position that these are a form of idolatry (shirk) and associating the jinn with devils. Further, Salafi teaching on such otherworldly beings is limited in scope but univocal, in contrast to earlier conceptualizations of jinn, angels and devils. Many modern preachers substitute devils for (evil) jinn. For that reason, Saudi Arabia, following the Wahhabi tradition of Salafism, imposes a death penalty for dealing with jinn to prevent sorcery and witchcraft. The importance of belief in jinn to Islamic belief in contemporary Muslim society was underscored by the judgment of apostasy by an Egyptian Sharia court in 1995 against liberal theologian Nasr Abu Zayd. Zayd was declared an unbeliever of Islam for – among other things – arguing that the reason for the presence of jinn in the Quran was that they (jinn) were part of Arab culture at the time of the Quran's revelation, rather than that they were part of God's creation. Death threats led to Zayd leaving Egypt several weeks later.
In Turkey, Süleyman Ateş's Quran commentary describes the jinn as hostile beings to whom the pagans made sacrifices in order to please them. They would have erroneously assumed that the jinn (and angels) were independent deities and thus fell into širk. By that, humans would associate partners with God and humiliate themselves towards the jinn spiritually.
Belief in jinn
Folklore
The jinn are of pre-Islamic Arabian origin. Since the Quran affirms their existence, belief in jinn was adopted by later Islamic culture as Islam spread outside of Arabia. The Quran reduced the status of the jinn from that of tutelary deities to something parallel to humans, subject to the judgement of the supreme deity of Islam. By that, the jinn were considered a third class of invisible beings, not consequently equated with devils,(p52) and Islam was able to integrate local beliefs about spirits and deities from Iran, Africa, Turkey and India into a monotheistic framework.
The jinn are believed to live in societies resembling those of humans, practicing religion (including Islam, Christianity and Judaism), having emotions, needing to eat and drink, and being able to procreate and raise families. Muslim jinn are usually thought to be benign, Christian and Jewish jinn indifferent unless angered, and pagan jinn evil. Other common characteristics include fear of iron and wolves, generally appearing in desolate or abandoned places, and being stronger and faster than humans. Night is considered a particularly dangerous time, because the jinn would then leave their hiding places.: 15 Since the jinn share the earth with humans, Muslims are often cautious not to accidentally hurt an innocent jinn.
Jinn are often believed to be able to take control over a human's body. Although this is a strong belief among many Muslims, some authors argue that since the Quran does not explicitly attribute possession to the jinn, it derives from pre-Islamic beliefs. Morocco, especially, has many possession traditions, including exorcism rituals. However, a jinni cannot enter a person whenever it wants; rather, the victim must be predisposed for possession in a state of dha'iyfah (Arabic: ضَعِيفَة, "weakness"). Feelings of insecurity, mental instability, unhappy love and depression (being "tired from the soul") are forms of dha'iyfah.
In Artas (Bethlehem), benevolent jinn might support humans and teach them moral lessons. The evil jinn frequently ascend to the surface, causing sickness to children, snatching food, and taking revenge when humans mistreat them. In later Albanian lore too, jinn (Xhindi) live either on earth or under the surface rather than in the air, and may possess people who have insulted them, for example if their children are trodden upon or hot water is thrown on them. In Senegal, jinn are believed to provide magical aid if the powers of a spiritual healer are too weak. As in the previously mentioned regions, jinn can also be dangerous. They can scare or devour a human being if they desire. Therefore, most people try to avoid contacting jinn, or offer gifts when it is believed jinn are preying on a human.
Among Turks, jinn (Turkish: Cin) often appear along with other demonic entities, such as the divs, as in Azerbaijani mythology. The divs are from Persian mythology. Some early Persian translations of the Quran translated jinn as peri (fairy) and used div for evil spirits, such as shayatin and Iblis. In some Turkic language varieties, the jinn are known as cor and chort, distinguished from iye. While the iye is bound to a specific place, Turkish sources too describe jinn as mobile creatures with a physical body that remains invisible until they die, causing illnesses and mental disorders and inhabiting desolate places. The term in, used only in the form in-cin, has the same meaning as jinn. In-cin is used in Turkish phrases to refer to a place so deserted that such beings, which usually hide from sight, would gather there, as in "in cin top oynuyor" (the in and the cin are playing ball).
In Central Asian shamanism, jinn might aid shamans as familiars to protect against malicious spirits, a practice taken up by Muslim guides (mullahs) in the 19th century.
In folk literature
The jinn can be found in various stories of the One Thousand and One Nights, including in:
"The Fisherman and the Jinni";
"Ma‘ruf the Cobbler": more than three different types of jinn are described;
"Aladdin and the Wonderful Lamp": two jinn help young Aladdin; and
"Tale of Núr al-Dín Alí and his Son Badr ad-Dīn Ḥasan": Ḥasan Badr al-Dīn weeps over the grave of his father until sleep overcomes him, and he is awoken by a large group of sympathetic jinn.In some stories, the jinn are credited with the ability of instantaneous travel (from China to Morocco in a single instant); in others, they need to fly from one place to another, though quite fast (from Baghdad to Cairo in a few hours).
Modern and post-modern era
Prevalence of belief
Though discouraged by some teachings of modern Islam, cultural beliefs about jinn remain popular among Muslim societies, shaping their understanding of cosmology and anthropology. Affirmation of the existence of jinn as sapient creatures living alongside humans is still widespread in the Middle Eastern world, and mental illnesses are still often attributed to jinn possession.
According to a survey undertaken by the Pew Research Center in 2012, at least 86% of Muslims in Morocco, 84% in Bangladesh, 63% in Turkey, 55% in Iraq, 53% in Indonesia, 47% in Thailand and 15% in Central Asia affirm a belief in the existence of jinn. The low rate in Central Asia might be influenced by Soviet religious oppression. 36% of Muslims in Bosnia and Herzegovina believe in jinn, which is higher than the European average (30%), although only 21% believe in sorcery and 13% would wear talismans for protection against jinn; 12% support making offerings and appeals to the jinn.
Sleep paralysis is understood as a "jinn attack" by many sleep paralysis sufferers in Egypt, as discovered by a Cambridge neuroscience study, Jalal, Simons-Rudolph, Jalal, & Hinton (2013). The study found that as many as 48% of those who experience sleep paralysis in Egypt believe it to be an assault by the jinn. Almost all of these sleep paralysis sufferers (95%) would recite verses from the Quran during sleep paralysis to prevent future "jinn attacks". In addition, some (9%) would increase their daily Islamic prayer (salah) to get rid of these assaults by jinn. Sleep paralysis is generally associated with great fear in Egypt, especially if believed to be supernatural in origin.
However, despite belief in jinn being prevalent in Iran's folklore, especially among more observant believers of Islam, some phenomena such as sleep paralysis were traditionally attributed to other supernatural beings; in the case of sleep paralysis, it was the bakhtak (night hag). But at least in some areas of Iran, an epileptic seizure was thought to be a jinn attack or jinn possession, and people would try to exorcise the jinn by citing the name of God and using iron blades to draw protective circles around the victim.
Most of the Islamic-majority countries in West Africa have a long tradition of jinn stories and populations that mostly believe in their existence, although there are some Islamic movements in the area that reject their existence.
Post-modern literature and movies
Jinn feature in the magical realism genre, introduced into Turkish literature by Tekin (1983), who uses magical elements known from pre-Islamic and Islamic Anatolian lore. Since the 1980s, this genre has become prominent in Turkish literature. A story by Tekin combines elements of folkloric and religious belief with a rationalized society. The protagonist is a girl who befriends inanimate objects and several spirits, such as jinn and peri (fairy). While the existence of jinn is generally accepted by the people within the novel, when her family moves from rural Anatolia into the city, the jinn do not appear anymore.
Jinn are still accepted as real by Muslims in the novel's urban setting, but they play no part in modern life there; their loss of importance once the setting changes to the city symbolizes the replacement of tradition by modernization for Anatolian immigrants.
Contrary to the neutral to positive depiction of jinn in Tekin's novels, jinn became a common trope in Middle Eastern horror movies. In Turkish horror, jinn have been popular since 2004. Out of 89 films, 59 make direct reference to jinn as the antagonist, 12 use other sorts of demons, while other types of horror, such as the impending apocalypse, hauntings, or ghosts, account for only 14 films. Unlike other horror elements, such as ghosts and zombies, the existence of jinn is affirmed by the Quran, and thus accepted by the majority of Muslims. The presentation of jinn usually combines Quranic with oral and cultural beliefs about jinn. The jinn are presented as inactive inhabitants of the earth, only interfering with human affairs when summoned by a sorcerer or witch. Although the jinn, often summoned by pagan rituals or sorcery, appear to pose a challenge to Islam, the films assure that Islamic law protects Muslims from their presence; it is the one who summoned them in the first place who is punished or suffers from the presence of jinn.
Similarly, jinn appear in Iranian horror movies, despite the belittling of the popular understanding of jinn by an increasing number of Islamic fundamentalist reformists. In the post-revolution Iranian psychological horror movie Under the Shadow, the protagonist is afraid of the jinn, who are completely veiled and concealed and intrude into her life frequently. In the end, however, she is forced by the Iranian guards to put on a chador, and thus becomes like the jinn she feared. The jinn symbolize the Islamic regime and its intrusion into private life; the film thereby criticises the Islamic regime and patriarchal structures.
Physicality and relationships with humans
Jinn are not supernatural in the sense of being purely spiritual and transcendent to nature; while they are believed to be invisible (or often invisible), they also eat, drink, sleep, breed with the opposite sex, and produce offspring that resemble their parents. Intercourse is not limited to other jinn alone, but is also possible between human and jinn.
Despite being invisible, jinn are usually thought to have bodies (ajsām). Zakariya al-Qazwini includes the jinn (angels, jinn, and devils all created from different parts of fire) among animals, along with humans, beasts of burden (like horses), cattle, wild beasts, birds, and finally insects and reptiles.(p135) The Qanoon-e-Islam, written in 1832 by Sharif Ja'far and describing jinn-belief in India, states that their bodies are constituted of 90% spirit and 10% flesh. They resemble humans in many regards, their subtle matter being the only main difference. But it is this very nature that enables them to change their shape, move quickly, fly, and, by entering human bodies, cause epilepsy and illness; hence the temptation for humans to make them allies by means of magical practices.
Jinn are further known as gifted shapeshifters, often assuming the form of an animal. In Islamic culture, many narratives concern a serpent who is actually a jinni.(p116) Other chthonic animals regarded as forms of jinn include scorpions and lizards. Both scorpions and serpents have been venerated in the ancient Near East. Some sources even speak of killed jinn leaving behind a carcass similar to either a serpent or a scorpion.: 91–93 When they shift into a human form, however, they are said to stay partly animal and are not fully human. Individual jinn are thus often depicted as monstrous and anthropomorphized creatures with body parts from different animals, or as humans with animal traits.(p164)
Certain hadith, though considered fabricated (mawḍūʻ) by some muhaddith (hadith scholars), support the belief in human-jinn relationships: "The Hour will come when the children of jinn will become many among you."
"Among you are those who are expatriated (mugharrabûn);" and
this, he explained, meant "crossed with jinn." Among those scholars who hold to these beliefs, some consider marriage between a jinni and a human permissible, though undesirable (makruh), while others strongly forbid it. Offspring of human-jinn relationships are often considered to be gifted and talented people with special abilities, and some historical persons were considered to have jinnic ancestry. In a study of exorcism culture in the Hadhramaut of Yemen, love was one of the most frequently cited causes of relationships between humans and jinn.
Visual art
Although there are very few visual representations of jinn in Islamic art, when they do appear, it is usually related to a specific event or individual jinn.
Visual representations of jinn appear in manuscripts, and their existence is often implied in works of architecture by the presence of apotropaic devices such as serpents, which were intended to ward off evil spirits. Lastly, King Solomon is very often illustrated together with jinn, as the commander of an army that included them.
The seven jinn kings
In the Book of Wonders compiled in the 14th century by Abd al-Hasan al-Isfahani, there are illustrations of "The seven jinn kings".(p27) In general, each 'King of the Jinn' was represented alongside his helpers and alongside the corresponding talismanic symbols.(p27) For instance, the 'Red King of Tuesday' was depicted in the Book of Wonders as a sinister form astride a lion. In the same illustration, he holds a severed head and a sword. This was because the 'Red King of Tuesday' was aligned with Mars, the god of war.(p27) Alongside that, there were illustrations of the 'Gold King' and the 'White King'.(p27)
Aside from the seven 'Kings of the Jinn', the Book of Wonders included an illustration of Huma, or the 'Fever'. Huma was depicted as three-headed and as embracing the room around him, in order to capture someone and bring on a fever in them.(p28)
Architectural representation
In addition to these representations of jinn in the vicinity of kingship, there were also architectural references to jinn throughout the Islamic world. In the Citadel of Aleppo, the entrance gate Bab al-Hayyat made reference to jinn in the stone relief carvings of serpents; likewise, the water gate at Ayyubid Harran housed two copper sculptures of jinn, serving as talismans to ward off both snakes and evil jinn in the form of snakes.(p408)
Alongside these depictions of the jinn found at the Aleppo Citadel, depictions of the jinn can be found in the Rūm Seljuk palace. There is a phenomenal range of creatures on the eight-pointed tiles of the Seal of Sulaymān device.(p390) Among these were the jinn, who belonged to Solomon's army; and just as Solomon claimed to have control over the jinn, so did the Rūm Seljuk sultan, who claimed to be the Sulaymān of his time.(p393) In fact, one of the most common representations of jinn is alongside or in association with King Solomon. It was thought that King Solomon had very close ties to the jinn, and even had control over many of them.(p399) The concept that a great and just ruler has the ability to command jinn extended far beyond King Solomon alone; it was also thought that emperors, such as Alexander the Great, could control an army of jinn in a similar way.(p399) Given this association, jinn were often seen with Solomon in a princely or kingly context, such as the small, animal-like jinn sitting beside King Solomon on his throne illustrated in an illuminated manuscript of Aja'ib al-Makhluqat by Zakariya al-Qazwini, written in the 13th century.
Talismanic representation
The jinn had an indirect impact on Islamic art through the creation of talismans that were alleged to guard the bearer from the jinn; these were enclosed in leather and included Qur'anic verses.(p80) It was not unusual for such talismans to be inscribed with separated Arabic letters, because the separation of those letters was thought to positively affect the potency of the talisman overall.(p82) An object inscribed with the word of Allah was thought to have the power to ward off evil from the person who possessed it, though many of these objects also bore astrological signs, depictions of prophets, or religious narratives.
In witchcraft and magical literature
Witchcraft (Arabic: سِحْر, sihr, which is also used to mean 'magic, wizardry') is often associated with jinn and afarit around the Middle East. Therefore, a sorcerer may summon a jinni and force it to carry out orders. Summoned jinn may be sent to the chosen victim to cause demonic possession. Such summonings were done by invocation,(p153) by aid of talismans, or by satisfying the jinn, thus making a contract.
Jinn are also regarded as assistants of soothsayers. Soothsayers reveal information from the past and present; the jinn can be a source of this information because their lifespans exceed those of humans. Another way to subjugate them is by inserting a needle into their skin or dress; since jinn are afraid of iron, they are unable to remove it with their own power.
Ibn al-Nadim, a Muslim scholar, in his Kitāb al-Fihrist describes a book that lists 70 jinn led by Fuqṭus (Arabic: فقْطس), including several jinn appointed over each day of the week.(p38) Bayard Dodge, who translated al-Fihrist into English, notes that most of these names appear in the Testament of Solomon. A collection of late 14th- or early 15th-century magico-medical manuscripts from Ocaña, Spain, describes a different set of 72 jinn (termed "Tayaliq"), again under Fuqtus (here named "Fayqayțūš" or Fiqitush), blaming them for various ailments. According to these manuscripts, each jinni was brought before King Solomon and ordered to divulge their "corruption" and "residence", while the jinn king Fiqitush gave Solomon a recipe for curing the ailments associated with each jinni as they confessed their transgressions.
A widely disseminated treatise on the occult written by al-Ṭabasī, called the Shāmil, deals with subjugating devils and jinn by incantations, charms and the combination of written and recited formulae, in order to obtain supernatural powers through their aid. Al-Ṭabasī distinguished between licit and illicit magic, the latter founded on disbelief, the former on purity. Allegedly, he was able to show the jinn to Mohammad Ghazali, to whom they appeared as "a shadow on the wall."
Seven kings of the jinn are traditionally associated with days of the week.(p87) They are also attested in the Book of Wonders; although many passages are damaged, they remain in Ottoman copies. These jinn-kings (sometimes afarit instead) are invoked to legitimate spells performed with amulets.
During the Rwandan genocide, both Hutus and Tutsis avoided searching local Rwandan Muslim neighborhoods because they widely believed the myth that local Muslims and mosques were protected by the power of Islamic magic and the efficacious jinn. In the Rwandan city of Cyangugu, arsonists ran away instead of destroying the mosque because they feared the wrath of the jinn, whom they believed were guarding the mosque.
Comparative mythology
Ancient Mesopotamian religion
Beliefs in entities similar to the jinn are found throughout pre-Islamic Middle Eastern cultures.: 1–10 The ancient Sumerians believed in Pazuzu, a wind demon,: 1–10 : 147–148 who was shown with "a rather canine face with abnormally bulging eyes, a scaly body, a snake-headed penis, the talons of a bird and usually wings."(p147) Ancient Mesopotamian religion has udug, Babylonian utukku, a class of demons that were believed to haunt remote wildernesses, graveyards, mountains, and the sea, all locations where jinn were later thought to reside.: 1–10 The Babylonians also believed in the rabisu, a vampiric demon believed to leap out and attack travelers at unfrequented locations, similar to the post-Islamic ghūl,: 1–10 a specific kind of jinn whose name is etymologically related to that of the Sumerian galla, a class of Underworld demon.
Lamashtu, also known as Labartu, was a divine demoness said to devour human infants.: 1–10 (p115) Lamassu, also known as Shedu, were guardian spirits, sometimes with evil propensities.: 1–10 : 115–116 The Assyrians believed in the alû, sometimes described as a wind demon residing in desolate ruins who would sneak into people's houses at night and steal their sleep.: 1–10 In the ancient Syrian city of Palmyra, entities similar to jinn were known as ginnayê,: 1–10 an Aramaic name which may be etymologically derived from the name of the genii from Roman mythology.: 1–10 Like jinn among modern-day Bedouin, ginnayê were thought to resemble humans.: 1–10 They protected caravans, cattle, and villages in the desert: 1–10 and tutelary shrines were kept in their honor.: 1–10 They were frequently invoked in pairs.: 1–10
Judaism
The Jewish depiction of the jinn (Hebrew: Shedim) closely resembles the Islamic depiction in many regards. The story of Solomon being replaced by the evil jinn-king is well known in both Quranic exegesis and the Talmud.(p120) Likewise, they may be rebellious and evil, or lawful, obeying the holy scripture (i.e. the Torah). Their resemblance to humans is captured in a description in the Babylonian Talmud: "In three regards the shedim are like angels, and in three like humans: They have wings, they fly from one end of the world to another, they know the future listening from behind the veil of the angels; and in three regards they resemble humans: They eat and drink, procreate, and die like humans."
In earlier midrashim they are corporeal beings. If they take on human forms, their feet remain those of roosters (instead of hooves, as in Muslim depictions). Later, in Judaism, such entities developed into more abstract beings, in contrast to Islam, where they retained their corporeal image. However, like their Islamic counterparts, they are credited with possession. Like Muslim exorcism of jinn, Jewish exorcism also includes negotiations with these beings, asking for their religion, sex, name, and intention. The treatment of possession by jinn (jnun, shedim, etc.) differs from the traditional Jewish cure of spirit possession associated with ghosts (Dybbuk).
Buddhism
As in Islam, the idea of spiritual entities converting to one's own religion can be found in Buddhism. According to lore, Buddha preached to Devas and Asura, spiritual entities who, like humans, are subject to the cycle of life, and who resemble the Islamic notion of jinn, who are also ontologically placed among humans in regard to eschatological destiny.(p165)
Christianity
Abraham Ecchellensis writes that the jinn are the children of Lilith and devils; the jinn therefore share three qualities with humans, namely procreation, eating, and dying, but share three qualities with devils, namely flying, invisibility, and passing through solid substances, a depiction linked to the Jewish account of the shedim. Because of their human-like qualities, they are less noxious to humans than devils, and many are said to live in some familiarity and even friendship with humans. In India, certain young jinn are said to assume a human form to play games with native children of human parents.
Van Dyck's Arabic translation of the Old Testament uses the alternative collective plural "jann" (Arabic: الجان, al-jānn) to render the Hebrew word usually translated into English as "familiar spirit" (אוב, Strong #0178) in several places (Leviticus 19:31, 20:6; 1 Samuel 28:3,7,9; 1 Chronicles 10:13).
Some scholars have evaluated whether the jinn might be compared to fallen angels in Christian traditions. Comparable to Augustine's descriptions of fallen angels as ethereal, the jinn seem to be conceived of as being of the same substance. Although the concept of fallen angels is not absent from the Quran, the jinn nevertheless differ in their major characteristics from fallen angels: while fallen angels fell from heaven, the jinn did not, but try to climb up to it in order to listen in on the news of the angels. The jinn are closer to daemons.
See also
Further reading
Etymology of genie
FP may refer to:
Arts, media and entertainment
Music
Fortepiano, an early version of the piano
Fortepiano (musical dynamic), an Italian musical term meaning 'loud, then soft'
Flux Pavilion, a British dubstep artist
Francis Poulenc, an early 20th century pianist and composer
FP (catalogue), the catalogue of his compositions
Publications
Financial Post, a Canadian business newspaper, published from 1907 to 1998, now National Post
Foreign Policy, a bimonthly American magazine founded in 1970
Other media
Facepunch Studios, a British video game company
The FP, a 2011 comedy film
Fast Picket, a class of fictional artificially intelligent starship in The Culture universe of the late Scottish author Iain Banks
Science and technology
Computing
FP (complexity), in computational complexity theory, a complexity class
FP (programming language) designed by John Backus in the 1970s
Feature Pack, a software update for various devices that includes new features
Floating point, a numerical-representation system in computing
Frame pointer, a processor register that points to the current stack frame
Microsoft FrontPage, an HTML editor
Functional programming, a programming paradigm
Function point, a measurement of the business functionality an information system provides
Transportation
F Production, a class of race cars
F engine, a piston engine by Mazda
New Zealand FP class electric multiple unit (Matangi), a class of electric multiple unit trains
Other uses in science and technology
False positive, in statistics, a result that indicates, inaccurately, that a condition has been fulfilled
Ilford FP, a cubic-grain black-and-white photographic film
Fabry–Pérot interferometer, a device in optics
Fire protection, in the construction industry
Fluoroprotein foam, a type of fire retardant foam
Forensic psychiatry, a subspeciality of psychiatry related to criminology
Cyclopentadienyliron(II) dicarbonyl group (Fp = (η5-C5H5)Fe(CO)2, colloquially pronounced as "fip"), see: cyclopentadienyliron dicarbonyl dimer (Fp2)
Prostaglandin F receptor, a receptor on cells which mediates responses to prostaglandin F2alpha
Politics
Freedom Party (disambiguation), the name of various political parties
Folkpartiet, a former name of the Liberal People's Party in Sweden
The Federalist Papers, a series of essays advocating the ratification of the United States Constitution
Força Portugal, a Portuguese political alliance
Force Publique (French for "public force"), the military force of the Belgian Congo during the colonial period
Framework Programmes for Research and Technological Development of the European Union
Fuerza Popular, a right-wing populist Fujimorist political party in Peru
Virtue Party (Fazilet Partisi), an Islamist political party in Turkey
Other uses
50 metre pistol, a shooting sport formerly known as free pistol
Facepalm, a gesture indicating frustration
Family planning, the use of birth control and planning when to have children
Family practice, a general practitioner or family physician
Finance Park, an educational program co-managed by the Stavros Institute in Pinellas County, Florida
First person, a grammatical person referring to the speaker
FlyPelican, an Australian airline (IATA code FP)
Force protection, in the US military
French Polynesia (FIPS 10-4 country code)
Friends Provident, a British insurance company
Friday prayer, a congregational prayer service among Muslims
Fp, a runestone style characterized by runic bands that end in animal heads seen from above
The nickname of Falakata Polytechnic, West Bengal
FP grade tea
Factor, a Latin word meaning "who/which acts", may refer to:
Commerce
Factor (agent), a person who acts for another, notably a mercantile or colonial agent
Factor (Scotland), a person or firm managing a Scottish estate
Factors of production, resources used in the production of goods and services
Science and technology
Biology
Coagulation factors, substances essential for blood coagulation
Environmental factor, any abiotic or biotic factor that affects life
Enzyme, a protein that catalyzes chemical reactions
Factor B and factor D, peptides involved in the alternative pathway of immune system complement activation
Transcription factor, a protein that binds to specific DNA sequences
Computer science and information technology
Factor (programming language), a concatenative stack-oriented programming language
Factor (Unix), a utility for factoring an integer into its prime factors
Factor, a substring, i.e. a sequence of consecutive symbols in a string
Authentication factor, a piece of information used to verify a person's identity for security purposes
Decomposition (computer science), also known as factoring, the organization of computer code
Enumerated type, a data type consisting of a set of named values, called a factor in the R programming language
Other uses in science and technology
Factor, in the design of experiments, a phenomenon presumed to affect an experiment
Human factors, a profession that focuses on how people interact with products, tools, or procedures
Sun protection factor, a unit describing reduction in transmitted ultraviolet light
Mathematics
General mathematics
Factor (arithmetic), either of two numbers involved in a multiplication
Divisor, an integer which evenly divides a number without leaving a remainder
Factorization, the decomposition of an object into a product of other objects
Integer factorization, the process of breaking down a composite number into smaller non-trivial divisors
A coefficient, a multiplicative factor in an expression, usually a number
The act of forming a factor group or quotient ring in abstract algebra
A von Neumann algebra with a trivial center
Factor (graph theory), a spanning subgraph
Any finite contiguous sub-sequence of a word in combinatorics or of a word in group theory
Statistics
An independent categorical variable.
In experimental design, the factor is a category of treatments controlled by the experimenter.
In factor analysis, the factors are unobserved underlying hidden variables that explain variability in a set of correlated variables.
People
Factor (producer), Canadian hip hop artist
John Factor (1892–1984), British-American Prohibition-era gangster
Max Factor Sr. (1872–1938), Polish-American businessman and cosmetician
Max Factor Jr. (1904–1996), son of the above, born Francis Factor
Places
Factor, Arecibo, Puerto Rico, a barrio
Other uses
Factor (chord), a member or component of a chord
FACTOR, the Foundation to Assist Canadian Talent on Records
The Factor, an April 2017 TV show on Fox News Channel
All pages with titles containing factor
All pages with titles beginning with Factor
Co-factor (disambiguation)
Factoring (disambiguation)
Clean may refer to:
Cleaning, the process of removing unwanted substances, such as dirt, infectious agents, and other impurities, from an object or environment
Cleanliness, the state of being clean and free from dirt
Arts and media
Music
Albums
Clean (Cloroform album), 2007
Clean (Deitiphobia album), 1994
Clean (Severed Heads album), 1981
Clean (Shane & Shane album), 2004
Clean (Soccer Mommy album), 2018
Clean (The Japanese House EP), the second EP by English indie pop act The Japanese House
Clean (Whores EP), the second EP by American rock band Whores
Clean, an Edwin Starr album
Songs
"Clean", a song by Depeche Mode from their 1990 album Violator
"Clean" (song), a song by Taylor Swift from her album 1989, also covered by Ryan Adams from his album 1989
"Clean", a song by KSI and Randolph from the 2019 album New Age
Other uses in music
Clean, an amplifier sound in guitar terminology
Clean vocals, a term used for singing to distinguish it from unclean vocals, such as screaming or growling
Clean, a term used for the edited or censored version of a piece of media; see Parental Advisory#Application
The Clean, an influential first-wave indie rock band
Other uses in arts, entertainment, and media
Clean (2004 film), a 2004 French drama film directed by Olivier Assayas
Clean (2021 film), a 2021 American crime drama film directed by Paul Solet
Clean comedy (or clean performance), entertainment which avoids profanity and other objectionable material; the opposite of blue comedy
Sports
Clean and jerk, a weightlifting movement
Clean climbing, the choice to employ non-destructive hardware and techniques in rock climbing
Other uses
Clean (programming language), a purely functional programming language
Clean language, a questioning technique used in psychotherapy and coaching
See also
CLEAN (disambiguation)
Cleaning (disambiguation)
CPL or Cpl may refer to:
Organizations
CPFL Energia (NYSE: CPL), the largest non-state-owned electric energy generation and distribution group in Brazil
CPL Aromas, a British fragrance company formerly known as Contemporary Perfumers Limited
CPL Resources, a resourcing/placement company based in Dublin
Libraries
Chicago Public Library, the public library system that serves the city of Chicago, Illinois, US
Cleveland Public Library, the public library system that serves the city of Cleveland, Ohio, US
Calgary Public Library, the public library system that serves the city of Calgary, Alberta, Canada
Coquitlam Public Library, a public library that serves Coquitlam, British Columbia, Canada
Codices Palatini latini, the Latin section of the medieval manuscript collection in the Bibliotheca Palatina in Heidelberg.
Sports
Canadian Premier League, a men's professional soccer league, sanctioned by Canada Soccer, that represents the sport's highest level in Canada
Caribbean Premier League, a Twenty20 cricket league
Coastal Plain League, a wood-bat collegiate summer league
Coastal Plain League (Class D), a former minor league baseball affiliation
Cyberathlete Professional League, a professional sports tournament organization specializing in computer and console video game competitions
Science and technology
Caprolactam, an organic compound with the formula (CH2)5C(O)NH
Chemical Physics Letters, a peer-reviewed scientific journal
Chinese Physics Letters, an open access scientific journal published in China, from the Chinese Physical Society
Circular polarizing filter, a type of photographic filter
Crackle Photolithography, a cost-efficient photolithography process that uses a random crack template as a mask
Computing
Call-Processing Language, a language that can be used to describe and control Internet telephony services
Characters per line, the maximal number of monospaced characters that may appear on a single line
Common Public License, a free software/open-source software license published by IBM
Composition Playlist, a file that defines the playback order of a Digital Cinema Package
Command Programming Language, a scripting language used by the PRIMOS operating system
Complementary pass-transistor logic, one of many logic families of pass transistor logic used in the design of integrated circuits
.cpl files, the Control Panel applets in Microsoft Windows
CPL (programming language), a multi-paradigm programming language
Current privilege level, of a task or program on x86 CPUs
Other uses
Centre for Professional Learning, on the Campus The Hague of Leiden University, Netherlands
Cents Per Line, a measurement of how much transcriptionists are paid for their work
Certified Professional Landman, the highest designation offered to landmen in the oil/gas industry by the American Association of Professional Landmen
Certified Professional Locksmith, a trade qualification awarded to members of the Associated Locksmiths of America
Clavis Patrum Latinorum, a numbered list of the Latin authors in the Corpus Christianorum
Color position light, a type of North American railroad signal
Commercial pilot licence, a qualification that permits the holder to act and be paid as an aircraft pilot
Concealed Pistol License, a permit for concealed carry in the United States
Contemporary Pictorial Literature, a 1970s comic book fanzine published by the CPL Gang
Continuous pressure laminate, decorative laminate produced using pressure in a continuous process
Corporal (Cpl or CPL), a rank in use in some form by most militaries and some police forces
Cost per Lead, an online advertising pricing model
Criminal procedure law
See also
Communist Party of Latvia, a political party in Latvia
Communist Party of Lithuania, a political party in Lithuania
Communist Party of Luxembourg, more commonly known by its French (PCL) or German (KPL) initials
Curl or CURL may refer to:
Science and technology
Curl (mathematics), a vector operator that shows a vector field's rate of rotation
Curl (programming language), an object-oriented programming language designed for interactive Web content
cURL, a program and application library for transferring data with URLs
Antonov An-26, an aircraft, NATO reporting name CURL
Sports and weight training
Curl (association football), spin on the ball that makes it swerve when kicked
Curl, in the sport of curling, the curved path a stone makes on the ice or the act of playing; see Glossary of curling
Biceps curl, a weight training exercise
Leg curl, a weight training exercise
Wrist curl, a weight training exercise
Other uses
Curl (Japanese snack), a brand of corn puffs
Curl or ringlet, a lock of hair that grows in a curved, rather than straight, direction
Consortium of University Research Libraries, an association of UK academic and research libraries
Executive curl, the ring above a naval officer's gold lace or braid rank insignia
People with the surname
Kamren Curl (born 1999), American football player
Martina Gangle Curl (1906–1994), American artist and activist
Robert Curl (1933–2022), Nobel Laureate and emeritus professor of chemistry at Rice University
Rod Curl (born 1943), American professional golfer
Phil Curls (1942–2007), American politician
See also
Curling (disambiguation)
Overlap (disambiguation)
Spiral
PostScript (PS) is a page description language in the electronic publishing and desktop publishing realm. It is a dynamically typed, concatenative programming language. It was created at Adobe Systems by John Warnock, Charles Geschke, Doug Brotz, Ed Taft and Bill Paxton from 1982 to 1984.
History
The concepts of the PostScript language were seeded in 1976 by John Gaffney at Evans & Sutherland, a computer graphics company. At that time Gaffney and John Warnock were developing an interpreter for a large three-dimensional graphics database of New York Harbor.
Concurrently, researchers at Xerox PARC had developed the first laser printer and had recognized the need for a standard means of defining page images. In 1975-76 Bob Sproull and William Newman developed the Press format, which was eventually used in the Xerox Star system to drive laser printers. But Press, a data format rather than a language, lacked flexibility, and PARC mounted the Interpress effort to create a successor.
In 1978 John Gaffney and Martin Newell, then at Xerox PARC, wrote J & M or JaM (for "John and Martin"), which was used for VLSI design and the investigation of type and graphics printing. This work later evolved and expanded into the Interpress language.
Warnock left with Chuck Geschke and founded Adobe Systems in December 1982. They, together with Doug Brotz, Ed Taft and Bill Paxton created a simpler language, similar to Interpress, called PostScript, which went on the market in 1984. At about this time they were visited by Steve Jobs, who urged them to adapt PostScript to be used as the language for driving laser printers.
In March 1985, the Apple LaserWriter was the first printer to ship with PostScript, sparking the desktop publishing (DTP) revolution in the mid-1980s. The combination of technical merits and widespread availability made PostScript a language of choice for graphical output for printing applications. An interpreter for the PostScript language (sometimes referred to as a RIP, for Raster Image Processor) remained a common component of laser printers into the 1990s.
However, the cost of implementation was high; computers output raw PS code that would be interpreted by the printer into a raster image at the printer's natural resolution. This required high-performance microprocessors and ample memory. The LaserWriter used a 12 MHz Motorola 68000, making it faster than any of the Macintosh computers to which it attached. When the laser printer engines themselves cost over a thousand dollars, the added cost of PS was marginal. But as printer mechanisms fell in price, the cost of implementing PS became too great a fraction of overall printer cost; in addition, with desktop computers becoming more powerful, it no longer made sense to offload the rasterization work onto the resource-constrained printer. By 2001, few lower-end printer models came with support for PostScript, largely due to growing competition from much cheaper non-PostScript ink jet printers and new software-based methods to render PostScript images on the computer, making them suitable for any printer; PDF, a descendant of PostScript, provides one such method, and has largely replaced PostScript as the de facto standard for electronic document distribution.
On high-end printers, PostScript processors remain common, and their use can dramatically reduce the CPU work involved in printing documents, transferring the work of rendering PostScript images from the computer to the printer.
PostScript Level 1
The first version of the PostScript language was released to the market in 1984. The qualifier Level 1 was added when Level 2 was introduced.
PostScript Level 2
PostScript Level 2 was introduced in 1991 and included several improvements: better speed and reliability, support for in-RIP (Raster Image Processor) separations, image decompression (for example, JPEG images could be rendered by a PostScript program), support for composite fonts, and the form mechanism for caching reusable content.
PostScript 3
PostScript 3 (Adobe dropped the "level" terminology in favor of simple versioning) came at the end of 1997, and along with many new dictionary-based versions of older operators, introduced better color handling and new filters (which allow in-program compression/decompression, program chunking, and advanced error-handling).
PostScript 3 was significant in terms of replacing the existing proprietary color electronic prepress systems, then widely used for magazine production, through the introduction of smooth shading operations with up to 4096 shades of grey (rather than the 256 available in PostScript Level 2), as well as DeviceN, a color space that allowed the use of additional ink colors (called spot colors) in composite color pages.
Use in printing
Before PostScript
Prior to the introduction of Interpress and PostScript, printers were designed to print character output given the text—typically in ASCII—as input. There were a number of technologies for this task, but most shared the property that the glyphs were physically difficult to change, as they were stamped onto typewriter keys, bands of metal, or optical plates.
This changed to some degree with the increasing popularity of dot matrix printers. The characters on these systems were drawn as a series of dots, as defined by a font table inside the printer. As they grew in sophistication, dot matrix printers started including several built-in fonts from which the user could select, and some models allowed users to upload their own custom glyphs into the printer.
Dot matrix printers also introduced the ability to print raster graphics. The graphics were interpreted by the computer and sent as a series of dots to the printer using a series of escape sequences. These printer control languages varied from printer to printer, requiring program authors to create numerous drivers.
Vector graphics printing was left to special-purpose devices, called plotters. Almost all plotters shared a common command language, HPGL, but were of limited use for anything other than printing graphics. In addition, they tended to be expensive and slow, and thus rare.
PostScript printing
Laser printers combine the best features of both printers and plotters. Like plotters, laser printers offer high quality line art, and like dot-matrix printers, they are able to generate pages of text and raster graphics. Unlike either printers or plotters, a laser printer makes it possible to position high-quality graphics and text on the same page. PostScript made it possible to exploit fully these characteristics by offering a single control language that could be used on any brand of printer.
PostScript went beyond the typical printer control language and was a complete programming language of its own. Many applications can transform a document into a PostScript program, the execution of which results in the original document. This program can be sent to an interpreter in a printer, which results in a printed document, or to one inside another application, which will display the document on-screen. Since the document-program is the same regardless of its destination, it is called device-independent.
PostScript is noteworthy for implementing 'on-the-fly' rasterization in which everything, even text, is specified in terms of straight lines and cubic Bézier curves (previously found only in CAD applications), which allows arbitrary scaling, rotating and other transformations. When the PostScript program is interpreted, the interpreter converts these instructions into the dots needed to form the output. For this reason, PostScript interpreters are occasionally called PostScript raster image processors, or RIPs.
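As an illustration of this imaging model, the following sketch describes a single cubic Bézier segment directly as a path; the coordinates are arbitrary choices for demonstration only:
newpath
100 100 moveto                    % begin a path at (100, 100)
140 200 220 200 260 100 curveto   % one cubic Bézier segment: two control points, then the end point
stroke                            % the interpreter rasterizes the stroked path at device resolution
showpage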
Font handling
Almost as complex as PostScript itself is its handling of fonts. The font system uses the PS graphics primitives to draw glyphs as curves, which can then be rendered at any resolution. A number of typographic issues had to be considered with this approach.
One issue is that fonts do not scale linearly at small sizes: features of the glyphs become proportionally too large or too small and start to look displeasing. PostScript avoided this problem with the inclusion of font hinting, in which additional information is provided in horizontal or vertical bands to help identify the features in each letter that are important for the rasterizer to maintain. The result was significantly better-looking fonts even at low resolution. It had formerly been believed that hand-tuned bitmap fonts were required for this task.
At the time, the technology for including these hints in fonts was carefully guarded, and the hinted fonts were compressed and encrypted into what Adobe called a Type 1 Font (also known as PostScript Type 1 Font, PS1, T1 or Adobe Type 1). Type 1 was effectively a simplification of the PS system to store outline information only, as opposed to being a complete language (PDF is similar in this regard). Adobe would then sell licenses to the Type 1 technology to those wanting to add hints to their own fonts. Those who did not license the technology were left with the Type 3 Font (also known as PostScript Type 3 Font, PS3 or T3). Type 3 fonts allowed for all the sophistication of the PostScript language, but without the standardized approach to hinting.
The Type 2 font format was designed to be used with Compact Font Format (CFF) charstrings, and was implemented to reduce the overall font file size. The CFF/Type2 format later became the basis for handling PostScript outlines in OpenType fonts.
The CID-keyed font format was also designed to solve the problems in the OCF/Type 0 fonts, for addressing the complex Asian-language (CJK) encoding and very large character set issues. The CID-keyed font format can be used with the Type 1 font format for standard CID-keyed fonts, or Type 2 for CID-keyed OpenType fonts.
To compete with Adobe's system, Apple designed their own system, TrueType, around 1991. Immediately following the announcement of TrueType, Adobe published the specification for the Type 1 font format. Retail tools such as Altsys Fontographer (acquired by Macromedia in January 1995, owned by FontLab since May 2005) added the ability to create Type 1 fonts. Since then, many free Type 1 fonts have been released; for instance, the fonts used with the TeX typesetting system are available in this format.
In the early 1990s there were several other systems for storing outline-based fonts, developed by Bitstream and Metafont for instance, but none included a general-purpose printing solution and they were therefore not widely used.
In the late 1990s, Adobe joined Microsoft in developing OpenType, essentially a functional superset of the Type 1 and TrueType formats. When printed to a PostScript output device, the unneeded parts of the OpenType font are omitted, and what is sent to the device by the driver is the same as it would be for a TrueType or Type 1 font, depending on which kind of outlines were present in the OpenType font.
Other implementations
In the 1980s, Adobe drew most of its revenue from the licensing fees for their implementation of PostScript for printers, known as a raster image processor or RIP. As a number of new RISC-based platforms became available in the mid-1980s, some found Adobe's support of the new machines to be lacking.
This and issues of cost led to third-party implementations of PostScript becoming common, particularly in low-cost printers (where the licensing fee was the sticking point) or in high-end typesetting equipment (where the quest for speed demanded support for new platforms faster than Adobe could provide). At one point, Microsoft licensed to Apple a PostScript-compatible interpreter it had bought called TrueImage, and Apple licensed to Microsoft its new font format, TrueType. Apple ended up reaching an accord with Adobe and licensed genuine PostScript for its printers, but TrueType became the standard outline font technology for both Windows and the Macintosh.
Today, third-party PostScript-compatible interpreters are widely used in printers and multifunction peripherals (MFPs). For example, CSR plc's IPS PS3 interpreter, formerly known as PhoenixPage, is standard in many printers and MFPs, including those developed by Hewlett-Packard and sold under the LaserJet and Color LaserJet lines. Other third-party PostScript solutions used by print and MFP manufacturers include Jaws and the Harlequin RIP, both by Global Graphics. A free software version, with several other applications, is Ghostscript. Several compatible interpreters are listed on the Undocumented Printing Wiki.
Some basic, inexpensive laser printers do not support PostScript, instead coming with drivers that simply rasterize the platform's native graphics formats rather than converting them to PostScript first. When PostScript support is needed for such a printer, Ghostscript can be used. There are also a number of commercial PostScript interpreters, such as TeleType Co.'s T-Script.
Use as a display system
PostScript became commercially successful due to the introduction of the graphical user interface (GUI), allowing designers to directly lay out pages for eventual output on laser printers. However, the GUI's own graphics systems were generally much less sophisticated than PostScript; Apple's QuickDraw, for instance, supported only basic lines and arcs, not the complex B-splines and advanced region filling options of PostScript. In order to take full advantage of PostScript printing, applications on the computers had to re-implement those features using the host platform's own graphics system. This led to numerous issues where the on-screen layout would not exactly match the printed output, due to differences in the implementation of these features.
As computer power grew, it became possible to host the PS system in the computer rather than the printer. This led to the natural evolution of PS from a printing system to one that could also be used as the host's own graphics language. There were numerous advantages to this approach; not only did it help eliminate the possibility of different output on screen and printer, but it also provided a powerful graphics system for the computer, and allowed the printers to be "dumb" at a time when the cost of the laser engines was falling. In a production setting, using PostScript as a display system meant that the host computer could render low-resolution to the screen, higher resolution to the printer, or simply send the PS code to a smart printer for offboard printing.
However, PostScript was written with printing in mind, and had numerous features that made it unsuitable for direct use in an interactive display system. In particular, PS was based on the idea of collecting up PS commands until the showpage command was seen, at which point all of the commands read up to that point were interpreted and output. In an interactive system this was clearly not appropriate. Nor did PS have any sort of interactivity built in; for example, supporting hit detection for mouse interactivity obviously did not apply when PS was being used on a printer.
When Steve Jobs left Apple and started NeXT, he pitched Adobe on the idea of using PS as the display system for his new workstation computers. The result was Display PostScript, or DPS. DPS added basic functionality to improve performance by changing many string lookups into 32 bit integers, adding support for direct output with every command, and adding functions to allow the GUI to inspect the diagram. Additionally, a set of "bindings" was provided to allow PS code to be called directly from the C programming language. NeXT used these bindings in their NeXTStep system to provide an object oriented graphics system. Although DPS was written in conjunction with NeXT, Adobe sold it commercially and it was a common feature of most Unix workstations in the 1990s.
Sun Microsystems took another approach, creating NeWS. Instead of DPS's concept of allowing PS to interact with C programs, NeWS instead extended PS into a language suitable for running the entire GUI of a computer. Sun added a number of new commands for timers, mouse control, interrupts and other systems needed for interactivity, and added data structures and language elements to allow it to be completely object oriented internally. Complete GUIs, three in fact, were written in NeWS and provided for a time on their workstations. However, the ongoing efforts to standardize the X11 system led to its introduction and widespread use on Sun systems, and NeWS never became widely used.
Portable Document Format
PDF and PostScript share the same imaging model, and documents in the two formats are mutually convertible; both produce the same result when printed. The difference between PDF and PostScript is that PDF lacks the general-purpose programming language framework of the PostScript language. A PDF document is a static data structure made for efficient access and embeds navigational information suitable for interactive viewing.: 9
The language
PostScript is a Turing-complete programming language, belonging to the concatenative group. Typically, PostScript programs are not produced by humans, but by other programs. However, it is possible to write computer programs in PostScript just like any other programming language.
PostScript is an interpreted, stack-based language similar to Forth but with strong dynamic typing, data structures inspired by those found in Lisp, scoped memory and, since language level 2, garbage collection. The language syntax uses reverse Polish notation, which makes the order of operations unambiguous, but reading a program requires some practice, because one has to keep the layout of the stack in mind. Most operators (what other languages term functions) take their arguments from the stack, and place their results onto the stack. Literals (for example, numbers) have the effect of placing a copy of themselves on the stack. Sophisticated data structures can be built on the array and dictionary types, but cannot be declared to the type system, which sees them all only as arrays and dictionaries, so any further typing discipline to be applied to such user-defined "types" is left to the code that implements them.
The character "%" is used to introduce comments in PostScript programs. As a general convention, every PostScript program should start with the characters "%!PS" as an interpreter directive so that all devices will properly interpret it as PostScript.
"Hello world"
A Hello World program, the customary way to show a small example of a complete program in a given language, might look like this in PostScript (level 2):
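(The font name Courier, the 20-point size and the coordinates used here are arbitrary illustrative choices.)
%!PS
/Courier 20 selectfont   % make 20-point Courier the current font (selectfont is a Level 2 operator)
72 500 moveto            % move to x = 72, y = 500; the origin is the lower-left corner of the page, in points
(Hello world!) show      % paint the string at the current point in the current font
showpage                 % render and emit the finished page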
or, if the output device has a console, it might be written as:
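(Hello world!) =   % = pops the string and writes it, followed by a newline, to standard output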
Units of length
PostScript uses the point as its unit of length. However, unlike some of the other versions of the point, PostScript uses exactly 72 points to the inch. Thus:
1 point = 1/72 inch = 25.4/72 mm = 127/360 mm = 352.777… micrometers
For example, in order to draw a vertical line of 4 cm length, it is sufficient to type:
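(A minimal sketch; 113.385827 is simply 4 cm converted to points, 4 × 72 ÷ 2.54.)
newpath
0 0 moveto            % start at the origin, the lower-left corner of the page
0 113.385827 lineto   % a vertical segment of 4 cm, expressed directly in points
stroke
showpage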
More readably and idiomatically, one might use the following equivalent, which demonstrates a simple procedure definition and the use of the mathematical operators mul and div:
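(The procedure name cm is an arbitrary choice for this sketch.)
/cm { 72 mul 2.54 div } def   % convert centimetres to points (1 inch = 2.54 cm = 72 points)
newpath
0 0 moveto
0 4 cm lineto                 % the cm procedure converts 4 cm to about 113.39 points
stroke
showpage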
Most implementations of PostScript use single-precision reals (24-bit mantissa), so it is not meaningful to use more than 9 decimal digits to specify a real number, and performing calculations may produce unacceptable round-off errors.
Software
List of software that can be used to render PostScript documents:
Ghostscript
pstoedit
Zathura
See also
Adobe StandardEncoding (PostScript character set)
Computer font
Document Structuring Conventions
Encapsulated PostScript
LaTeX
PostScript Printer Description (PPD)
Printer Command Language (PCL)
Typeface
Further reading
Adobe Systems Incorporated (February 1999) [1985]. PostScript Language Reference Manual (PDF) (1st printing, 3rd ed.). Addison-Wesley Publishing Company. ISBN 0-201-37922-8. Retrieved 2023-07-14. (NB. This book (PLR3) together with the Supplement (PDF), archived from the original (PDF) on 2016-03-05, retrieved 2006-04-29 is the de facto defining work on PostScript 3 and is informally called "red book" due to its red cover.)
Adobe Systems Incorporated (1990) [1985]. PostScript Language Reference Manual (2nd ed.). Addison-Wesley Publishing Company. (NB. This edition (PLR2) covers PostScript Level 2 and also contains a description of Display PostScript, which is no longer discussed in the third edition.)
Adobe Systems Incorporated (1985). PostScript Language Reference Manual (1st ed.). Addison-Wesley Publishing Company. (NB. This edition (PLR1) covers PostScript Level 1.)
Geschke, Charles (1986) [1985]. Preface. PostScript Language Tutorial and Cookbook. By Adobe Systems Incorporated (27th printing, August 1998, 1st ed.). Addison Wesley Publishing Company. ISBN 0-201-10179-3. 9-780201-101799. Retrieved 2017-02-27. (NB. This introductory text is informally called "blue book" due to its blue cover.)
PostScript language program design. Adobe Systems. Archived from the original (Zip) on 2011-06-13. (NB. This book is informally called "green book" due to its green cover.)
The Type 1 Font Format (PDF), Adobe, archived from the original (PDF) on 2015-03-21 (NB. This book is informally called "black book" due to its black cover.)
PostScript vs. PDF, Adobe, archived from the original on 2016-04-13 (NB. Official introductory comparison of PS, EPS vs. PDF.)
A First Guide to PostScript, Tail recursive
Casselman, William 'Bill'. Mathematical Illustrations: A Manual of Geometry and PostScript (PDF).
Reid, Glenn (1990). Thinking in PostScript (PDF). Colorado, USA: Addison-Wesley. (NB. A thorough tutorial available online courtesy of the author.)
Computer History Museum: article about early development of PostScript
BWO can refer to:
BW Offshore, an FPSO owner and operator
BWO (band), a Swedish pop group formerly known as Bodies Without Organs
Body without organs, a sociological concept developed by Gilles Deleuze and Félix Guattari
Backward wave oscillator, a vacuum tube used to generate microwaves
The Blue World Order, a stable of professional wrestlers
BWO, Blue-winged Olive mayflies used in fly fishing
Bricket Wood railway station, Hertfordshire, England, by its National Rail station code
Paul van der Sterren (born 17 March 1956) is a Dutch chess grandmaster. He won the Dutch Chess Championship twice, in 1985 and 1993. In 1993 he qualified for the Candidates Tournament for the FIDE World Chess Championship 1996, but was eliminated in the first round (+1 −3 =3) by Gata Kamsky.
Van der Sterren represented the Netherlands in 11 consecutive Chess Olympiads from 1982 through 2000. He is the author of the two-volume opening encyclopedia Fundamental Chess Openings, which was published in 2009 and 2011. He is also the author of the book Your First Chess Lessons, published in 2016.
Paul van der Sterren player profile and games at Chessgames.com
John Denis Martin Nunn (born 25 April 1955) is an English chess grandmaster, a three-time world champion in chess problem solving, a chess writer and publisher, and a mathematician. He is one of England's strongest chess players and was formerly in the world's top ten.
Education and early life
Nunn was born in London. As a junior, he showed a prodigious talent for the game and in 1967, at 12 years of age, he won the British under-14 Championship. At 14, he was London Under-18 Champion for the 1969–70 season and less than a year later, at just 15 years of age, he proceeded to Oriel College, Oxford, to study mathematics. At the time, Nunn was Oxford's youngest undergraduate since Cardinal Wolsey in 1520. Graduating in 1973, he went on to gain a Doctor of Philosophy degree in 1978 with a thesis on finite H-spaces supervised by John Hubbuck. Nunn remained in Oxford as a mathematics lecturer until 1981, when he became a professional chess player.
Career
In 1975, he became the European Junior Chess Champion. He gained the Grandmaster title in 1978 and was British champion in 1980. Nunn has twice won individual gold medals at Chess Olympiads. In 1989, he finished sixth in the inaugural 'World Cup', a series of tournaments in which the top 25 players in the world competed. His best performance in the World Chess Championship cycle came in 1987, when he lost a playoff match against Lajos Portisch for a place in the Candidates Tournament. At the prestigious Hoogovens tournament (held annually in Wijk aan Zee) he was a winner in 1982, 1990 and 1991.
Nunn achieved his highest Elo rating of 2630 in January 1995. Six years earlier, in January 1989, his then rating of 2620 was high enough to elevate him into the world's top ten, where he shared ninth place. This was close to the peak of the English chess boom, and there were two English players above him on the list: Nigel Short (world number three, 2650) and Jonathan Speelman (world number five, 2640). Nunn has now retired from serious tournament play and, until he resurfaced as a player in two Veterans events in 2014 and 2015, had not played a FIDE-rated game since August 2006; however, he has been active in the ECF rapid play.
As well as being a strong player, Nunn is regarded as one of the best contemporary authors of chess books. He has penned many books, including Secrets of Grandmaster Chess, which won the British Chess Federation Book of the Year award in 1988, and John Nunn's Best Games, which took the award in 1995. He is the director of chess publishers Gambit Publications. Chess historian Edward Winter has written of him:
A polymath, Nunn has written authoritative monographs on openings, endings and compositions, as well as annotated games collections and autobiographical volumes. As an annotator he is equally at home presenting lucid prose descriptions for the relative novice and analysis of extreme depth for the expert.
In a 2010 interview, Magnus Carlsen explained that he thought extreme intelligence could actually be a hindrance to one's chess career. As an example of this, he cited Nunn:
I am convinced that the reason the Englishman John Nunn never became world champion is that he is too clever for that. ... He has so incredibly much in his head. Simply too much. His enormous powers of understanding and his constant thirst for knowledge distracted him from chess.
Nunn is also involved with chess problems, composing several examples and solving as part of the British team on several occasions. On this subject he wrote Solving in Style (1985). He won the World Chess Solving Championship in Halkidiki, Greece, in September 2004 and also made his final GM norm in problem solving. There were further wins of the World Solving Championship in 2007 and in 2010. He is the third person ever to gain both over-the-board and solving GM titles (the others being Jonathan Mestel and Ram Soffer; Bojan Vučković has been the fourth since 2008).
Nunn has long been interested in computer chess. In 1984, he began annotating games between computers for Personal Computer World magazine, and joined the editorial board of Frederic Friedel's Computerschach & Spiele magazine. In 1987, he was announced as the first editor of the newly created Chessbase magazine. The 1992 release of his first book making use of chess endgame tablebases, Secrets of Rook Endings, was later followed by Secrets of Minor-Piece Endings and Secrets of Pawnless Endings. These books include human-usable endgame strategies found by Nunn (and others) through extensive experimentation with tablebases; new editions have appeared, and more are due, as further tablebases are created and mined more deeply. Nunn is thus (as of 2004) the foremost data miner of chess endgame tablebases.
Nunn finished third in the World Senior Chess Championship (over-50 section) of 2014 in Katerini, Greece, second in the European Senior Chess Championship (over-50) of 2015 in Eretria, Greece, and first in the World Senior Chess Championship (over-65 section) in Assisi, Italy in 2022.
Notable games
Jacob Øst-Hansen vs John Nunn, World Student Olympiad, Teesside 1974, Vienna Game, 0–1: the Frankenstein-Dracula Variation of the Vienna Game regularly provides swashbuckling play and Nunn's game with Jacob Øst-Hansen at Teesside 1974, was no exception. The latter part of the game was played in a frantic time scramble, with Nunn sacrificing pieces to bring the enemy king into the open and deliver checkmate.
Alexander Beliavsky vs John Nunn, Wijk aan Zee 1985, King's Indian Defence, Sämisch Variation, 0–1: this game is sometimes referred to as "Nunn's Immortal", and was included in the book The Mammoth Book of the World's Greatest Chess Games (Robinson Publishing, 2010). In his book Winning Chess Brilliancies, Yasser Seirawan called this game the best of the 1980s.
Personal life
Nunn is married to Petra Fink-Nunn, a German chess player with the title Woman FIDE Master. They have a son, Michael.
Astronomy
Coincident with a reduction in his over-the-board chess, Nunn has developed a passion for astronomy, a hobby he shares with ex-world chess champion Viswanathan Anand. Nunn has published various articles and lectures in Chessbase News. |
Savielly Tartakower (also known as Xavier or Ksawery Tartakower, less often Tartacover or Tartakover; 21 February 1887 – 4 February 1956) was a Polish chess player. He was awarded the title of International Grandmaster in its inaugural year, 1950. Tartakower was also a leading chess journalist and author of the 1920s and 1930s.
Early career
Tartakower was born on 21 February 1887 in Rostov-on-Don, Russia, to Austrian citizens of Jewish origin. His father, a first-generation Christian, had him christened with the Latin form of his name, Sabelius. His parents were killed in a robbery in Rostov-on-Don in 1911. Tartakower stayed mainly in Austria. He graduated from the law faculties of universities in Geneva and Vienna. He spoke German and French. During his studies he became interested in chess and started attending chess meetings in various cafés for chess players in Vienna. He met many notable masters of the time, among them Carl Schlechter, Géza Maróczy (against whom he played what was probably his most famous brilliancy), Milan Vidmar, and Richard Réti. His first achievement was first place in a tournament in Nuremberg in 1906. Three years later he achieved second place in the tournament in Vienna, losing only to Réti.
During World War I Tartakower was drafted into the Austro-Hungarian army and served as a staff officer on various posts. He went to the Russian front with the Viennese infantry house-regiment. After the war he emigrated to France, and settled in Paris. Although Tartakower did not speak Polish, after Poland regained its independence in 1918 he accepted Polish citizenship and became one of the country's most prominent honorary ambassadors. He was the captain and trainer of the Polish chess team in six international tournaments, winning a gold medal for Poland at the Hamburg Olympiad in 1930.
Chess professional
In France, Tartakower decided to become a professional chess player. He also started cooperating with various chess magazines, and wrote several books and brochures on chess. The most famous of these, Die Hypermoderne Schachpartie (The Hypermodern Chess Game) was published in 1924 and has been issued in almost 100 editions since. Tartakower took part in many of the most important chess tournaments of his day. In 1927 and 1928 he won two tournaments in Hastings and shared first place with Aron Nimzowitsch in London. On the latter occasion, he defeated such notable players as Frank Marshall, Milan Vidmar, and Efim Bogoljubov. In 1930 he won the Liège tournament, beating Mir Sultan Khan by two points. Further down the list were, among others, Akiba Rubinstein, Nimzowitsch, and Marshall.
Tartakower won the Polish Chess Championship twice, at Warsaw 1935 and Jurata 1937. In the 1930s he represented Poland in six Chess Olympiads, and France in 1950, winning three individual medals (gold in 1931 and bronze in 1933 and 1935), as well as five team medals (gold in 1930, two silver in 1931 and 1939, and two bronze in 1935 and 1937).
In 1930, at second board at the 3rd Chess Olympiad in Hamburg (+9−1=6);
In 1931, at second board at the 4th Chess Olympiad in Prague (+10−1=7);
In 1933, at first board at the 5th Chess Olympiad in Folkestone (+6−2=6);
In 1935, at first board at the 6th Chess Olympiad in Warsaw (+6−0=11);
In 1937, at first board at the 7th Chess Olympiad in Stockholm (+1−2=10);
In 1939, at first board at the 8th Chess Olympiad in Buenos Aires (+7−3=7);
In 1950, at first board at the 9th Chess Olympiad in Dubrovnik (+5−5=5).
In 1935 he was one of the main organizers of the Chess Olympiad in Warsaw.
In 1939, the outbreak of World War II found him in Buenos Aires, where he was playing the 8th Chess Olympiad, representing Poland on a team which included Miguel Najdorf, who always called Tartakower "my teacher".
Final years
After a short stay in Argentina Tartakower returned to Europe. He arrived in France shortly before its collapse in 1940. Under the pseudonym Cartier, he joined the forces of general Charles de Gaulle.
After World War II and the Soviet takeover of Poland, Tartakower became a French citizen. He played in the first Interzonal tournament at Saltsjöbaden 1948, but did not qualify for the Candidates tournament. He represented France at the 1950 Chess Olympiad. FIDE instituted the title of International Grandmaster in 1950; Tartakower was in the first group of players to receive it. In 1953, he won the French Chess Championship in Paris.
He died on 4 February 1956 in Paris, 18 days before his 69th birthday.
Personality and chess contributions
Tartakower is regarded as one of the most notable chess personalities of his time. Harry Golombek translated Tartakower's book of his best games, and in the foreword wrote: Dr. Tartakower is far and away the most cultured and the wittiest of all the chess masters I have ever met. His extremely well stored mind and ever-flowing native wit make conversation with him a perpetual delight. So much so that I count it as one of the brightest attractions an international tournament can hold out for me that Dr. Tartakower should also be one of the participants. His talk and thought are rather like a modernized blend of Baruch Spinoza and Voltaire; and with it all a dash of paradoxical originality that is essential Tartakower.
A talented chess player, Tartakower is also known for his countless aphorisms, sometimes called Tartakoverisms. One variation of the Dutch Defence is named after him. The Tartakower Defence in the Queen's Gambit Declined (also known as the Tartakower–Makogonov–Bondarevsky System) also bears his name, as does the most common variation of the Torre Attack. He is alleged to be the inventor of the Orangutan Opening, 1.b4, so named after Tartakower had admired a great ape during his visit to the zoo whilst playing in the great 1924 tournament in New York. Tartakower originated the Catalan Opening at Barcelona 1929. This system starts with 1.d4 d5 2.c4 Nf6 3.g3. It remains very popular today at all levels. Also, a very solid variation in the Caro–Kann Defence, which starts with 1.e4 c6 2.d4 d5 3.Nc3 dxe4 4.Nxe4 Nf6 5.Nxf6+ exf6 is named after Tartakower.
José Raúl Capablanca scored +5−0=7 against Tartakower, but they had many hard fights. After their fighting draw in London 1922 (where Tartakower played his new defense), Capablanca said, "You are lacking in solidity", and Tartakower replied in his usual banter, "That is my saving grace." But in Capablanca's reports of the 1939 Chess Olympiad in Buenos Aires for the Argentine newspaper Crítica, he wrote: The Polish team … is captained and led by Dr S. Tartakower, a master with profound knowledge and great imagination, qualities which make him a formidable adversary. … Luckily for the others, the Polish team has only one Tartakower.
Sugden and Damsky stated that, like other chess players of all ages and ranks, among whom there is generally no lack of idiosyncrasy or superstition, Tartakower, a trenchant wit, took a most unsightly old hat with him from tournament to tournament. He would wear it only in the last round, and he would win. Notably, this hat did not guarantee him success in casinos, which he visited as though it were a job of work. The roulette table would regularly acquire both the Grandmaster's prizes and the numerous fees from his endless string of articles.
Quotations
Several chess witticisms are attributed to Tartakower:
"It's always better to sacrifice your opponent's men."
"An isolated pawn spreads gloom all over the chessboard."
"The blunders are all there on the board, waiting to be made."
"The winner of the game is the player who makes the next-to-last mistake."
"The move is there, but you must see it." (Horowitz 1971:137)
"No game was ever won by resigning."
"I never defeated a healthy opponent." (This refers to players who blame an illness, sometimes imaginary, for their loss.)
"Tactics is what you do when there is something to do; strategy is what you do when there is nothing to do."
"Moral victories do not count."
"Chess is a fairy tale of 1001 blunders."
"The great master places a knight on e5; checkmate follows by itself."
"A master can sometimes play badly, a fan never!"
"A match demonstrates less than a tournament. But a tournament demonstrates nothing at all."
"Chess is a struggle against one's own errors."
"Every chessplayer should have a hobby."
"A game of chess has three phases: the opening, where you hope you stand better; the middlegame, where you think you stand better; and the ending, where you know you stand to lose."
"As long as an opening is reputed to be weak it can be played."
"Stalemate is the tragicomedy of chess."
"Erro, ergo sum."
Talking about 1.Nf3 Réti Opening: "An opening of the past, which became, towards 1923, the opening of the future."
"To avoid losing a piece, many a person has lost the game."
"A draw can be obtained normally by repeating three moves, but also by playing one bad move."
"Some part of a mistake is always correct."
"Whenever you have to make a rook move, and both rooks are available for said move, you should evaluate which rook to move and, once you have made up your mind, move the other one."
Writings
500 Master Games of Chess by Savielly Tartakower and Julius du Mont, Dover Publications, June 1, 1975, ISBN 0-486-23208-5. (Previously published in two volumes by G. Bell & Sons, 1952.)
100 Master Games of Modern Chess by Savielly Tartakower and Julius du Mont, Dover Publications, June 1, 1975, ISBN 0-486-20317-4. (Previously published by G. Bell & Sons, 1955.)
Bréviaire des échecs, one of the best known introductory texts for chess in the French language. (English edition: A Breviary of Chess, translated by J. Du Mont, London: George Routledge & Sons, Ltd., 1937)
Die hypermoderne Schachpartie by Savielly Tartakower, published in German by Wiener Schachzeitung in 1924 (English translation of the second edition: The Hypermodern Game of Chess, translated by Jared Becker, Russell Enterprises, 2015)
My Best Games of Chess 1905–1954 by S.G. Tartakower, Dover Publications, 1985, ISBN 0-486-24807-0. The definitive recollection of Tartakower's career, written in his unique style; translated by Harry Golombek.
Notable games
Rudolf Spielmann vs Savielly Tartakower, Copenhagen 1923, Caro–Kann Defense: Exchange Variation (B13), 0–1
Savielly Tartakower vs Akiba Rubinstein, Moscow International Tournament 1925, Bishop's Opening: Vienna Hybrid (C28), 1–0
Savielly Tartakower vs Jacques Mieses, Baden-Baden 1925, Dutch Defense: Staunton Gambit, Tartakower Variation (A82), 1–0
Alexander Alekhine vs Savielly Tartakower, Folkestone ol 1933, Queen's Gambit Declined: Tartakower Defense, (D58), 0–1
See also
Hypermodernism
List of Jewish chess players
Bibliography
Damsky, Yakov; Sugden, John (TRN) (2005-08-28). The Batsford Book of Chess Records. London: Batsford. ISBN 978-0-7134-8946-0. OCLC 66717591.
Horowitz, I. A. (1971). All About Chess. New York: Collier Books. OCLC 2522287.
Saviely Tartakower chess games at 365Chess.com
Savielly Tartakower player profile and games at Chessgames.com
Savielly Tartakower Chess Olympiad record at OlimpBase.org
Kmoch, Hans (2004). "Grandmasters I Have Known: Sawielly Grigoriewitsch Tartakower" (PDF) – via Chesscafe.com (subscription required).
Ree, Hans (2006). "Tartakower's Poetry". – via Chesscafe.com (subscription required)
Steve Goldberg review of Moral Victories: the Story of Savielly Tartakower by David Lovejoy – via Chesscafe.com (subscription required)
"Savielly Tartakower" by Edward Winter |
Vogue may refer to:
Business
Vogue (magazine), a US fashion magazine
British Vogue, a British fashion magazine
Vogue Arabia, an Arab fashion magazine
Vogue Australia, an Australian fashion magazine
Vogue China, a Chinese fashion magazine
Vogue France, a French fashion magazine
Vogue Greece, a Greek fashion magazine
Vogue India, an Indian fashion magazine
Vogue Italia, an Italian fashion magazine
Vogue México y Latinoamérica, a Mexican/Latin American fashion magazine
Vogue Nederland, a Dutch fashion magazine
Vogue Polska, a Polish fashion magazine
Vogue Scandinavia, a Scandinavian fashion magazine
Vogue Singapore, a Singaporean fashion magazine
Vogue Ukraine, a Ukrainian fashion magazine
Vogue Records, a short-lived American 1940s label
Disques Vogue, a French jazz record company
Singer Vogue, two generations of British cars manufactured by Singer
Vogue Tyre, a wheel manufacturer based in Chicago
The Vogue Theater, Chula Vista, California, United States
Vogue Theatre, Vancouver, British Columbia, Canada
Vogue Theatre - see List of theatres in Louisville, Kentucky
Vogue (cigarette), an upmarket cigarette brand
HTC Vogue, a codename for the HTC Touch Pocket PC
The Vogue, venue in Broad Ripple Village, Indianapolis, US
Music
The Vogue, an American rock band from Seattle
"Vogue" (Madonna song), 1990
"Vogue" (KMFDM song), 1992
"Vogue" (Ayumi Hamasaki song), 2000
"The Vogue", a song by Antonelli Electr. featuring Miss Kittin
Places
Vogue, Cornwall, UK, a hamlet
Vogüé, a village in Ardèche department, France
People
Eugène-Melchior de Vogüé (1848–1910), French diplomat
Melchior de Vogüé (1829–1916), French archaeologist, uncle of the above
Nelly de Vogüé (1908–2003), French writer, painter, and business executive, granddaughter-in-law of the above
Vogue Williams (born 1985), Irish model
Other uses
Vogue (dance), a highly stylized, modern house dance
See also
New Vogue (dance), an Australian form of sequence dancing
Voulge, a medieval weapon
All pages with titles beginning with Vogue
All pages with titles containing Vogue |
Machgielis "Max" Euwe (Dutch: [ˈøːʋə]; May 20, 1901 – November 26, 1981) was a Dutch chess player, mathematician, author, and chess administrator. He was the fifth player to become World Chess Champion, a title he held from 1935 until 1937. He served as President of FIDE, the World Chess Federation, from 1970 to 1978.
Early years, education and professional career
Euwe was born in the Watergraafsmeer, in Amsterdam. He studied mathematics at the University of Amsterdam under the founder of intuitionistic logic, L.E.J. Brouwer (who later became his friend and for whom he held a funeral oration), and earned his doctorate in 1926 under Roland Weitzenböck. He taught mathematics, first in Rotterdam, and later at a girls' Lyceum in Amsterdam. After World War II, Euwe became interested in computer programming and was appointed professor in this subject at the universities of Rotterdam and Tilburg, retiring from Tilburg University in 1971. He published a mathematical analysis of the game of chess from an intuitionistic point of view, in which he showed, using the Thue–Morse sequence, that the then-official rules (in 1929) did not exclude the possibility of infinite games.
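Euwe's construction exploited the fact that the Thue–Morse sequence is cube-free: it never contains the same non-empty block three times in a row, so a game whose choices are driven by the sequence need never repeat a sequence of moves three times consecutively. A small Python sketch of that property (a brute-force check on a finite prefix, an illustration rather than Euwe's proof):

def thue_morse(length):
    # t(n) is the parity of the number of 1-bits in the binary expansion of n.
    return [bin(n).count("1") % 2 for n in range(length)]

def has_cube(seq):
    # True if some non-empty block occurs three times in immediate succession.
    n = len(seq)
    for size in range(1, n // 3 + 1):
        for start in range(n - 3 * size + 1):
            if (seq[start:start + size]
                    == seq[start + size:start + 2 * size]
                    == seq[start + 2 * size:start + 3 * size]):
                return True
    return False

prefix = thue_morse(512)
print(prefix[:16])        # [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
print(has_cube(prefix))   # False: no block is ever repeated three times in a row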
Early chess career
Euwe played his first tournament at age 10, winning every game.
He won every Dutch chess championship that he entered from 1921 until 1952, and won the title in 1955; his 12 titles are still a record. The only other winners during this period were Salo Landau in 1936, when Euwe, then world champion, did not compete; and Jan Hein Donner in 1954. He became the world amateur chess champion in 1928, at The Hague, with a score of 12/15.
Euwe married in 1926, started a family soon afterwards, and could play competitive chess only during school vacations, so his opportunities for top-level international chess competition were limited. But he performed well in the few tournaments and matches for which he could find time, from the early 1920s to mid-1930s. He lost a training match to Alexander Alekhine in the Netherlands in December 1926 / January 1927, with 4½/10 (+2−3=5). The match was played to help Euwe prepare for a future encounter with José Raúl Capablanca, then world champion. Euwe lost both the first and second FIDE Championship matches to Efim Bogoljubow, held in the Netherlands in 1928 and 1928‒29 respectively, scoring 4½/10 in each match (+2−3=5 in the first match, +1−2=7 in the second). He lost a match to Capablanca in Amsterdam in 1931 with 4/10 (+0−2=8). He won a match against Spielmann in Amsterdam in 1932, 3–1, played to help Euwe prepare for his upcoming match with Salo Flohr.
In 1932, Euwe drew a match with Flohr 8–8, and was equal second with Flohr, behind Alekhine, at a major tournament in Bern. According to Reuben Fine, these results established Euwe and Flohr as Alekhine's most credible challengers.
At Zürich 1934, Euwe again finished equal second with Flohr, behind Alekhine, and he defeated Alekhine in their game.
World Champion
In 1933, Max Euwe challenged Alekhine to a championship match. Alekhine accepted the challenge for October 1935. Earlier that year, Dutch radio sports journalist Han Hollander asked Capablanca for his views on the forthcoming match. In the rare archival film footage where Capablanca and Euwe both speak, Capablanca replies: "Dr. Alekhine's game is 20% bluff. Dr. Euwe's game is clear and straightforward. Dr. Euwe's game—not so strong as Alekhine's in some respects—is more evenly balanced." Then Euwe gives his assessment in Dutch, explaining that his feelings alternated from optimism to pessimism, but in the previous ten years, their score had been evenly matched at 7–7.
On December 15, 1935, after 30 games played in 13 different cities around the Netherlands over a period of 80 days, Euwe defeated Alekhine by 15½–14½, becoming the fifth World Chess Champion. Alekhine quickly went three games ahead, but Euwe managed to even out and eventually win the match. His title gave a huge boost to chess in the Netherlands. It was also the first world championship where the players had seconds to help them with analysis during adjournments.
Euwe's win was regarded as a major upset – he reportedly had believed that beating Alekhine was unlikely – and is sometimes attributed to Alekhine's alcoholism. But Salo Flohr, who helped Euwe during the match, thought Alekhine's over-confidence was more of a problem than alcohol; Alekhine himself said he would win easily. Former World Champions Vasily Smyslov, Boris Spassky, Anatoly Karpov, and Garry Kasparov later analysed the match and concluded that Euwe deserved to win and that the standard of play was worthy of a world championship. Former World Champion Vladimir Kramnik has said that Euwe won the 1935 match on merit and that the result was not affected by Alekhine's drinking before or during the match.
Euwe's performance in the great tournament of Nottingham 1936 (equal third, half a point behind Botvinnik and Capablanca, half a point ahead of Alekhine) indicated he was a worthy champion, even if he was not as dominant as the earlier champions. Reuben Fine wrote, "In the two years before the return match, Euwe's strength increased. Although he never enjoyed the supremacy over his rivals that his predecessors had, he had no superiors in this period."
Euwe lost the title to Alekhine in a rematch in 1937, also played in the Netherlands, by the lopsided margin of 15½–9½. Alekhine had given up alcohol and tobacco to prepare for the rematch, although he resumed drinking later. He returned to the sort of form he had shown from 1927 to 1934, when he dominated chess. The match was a real contest initially, but Euwe's play collapsed near the end, and he lost four of the last five games. Fine, who was Euwe's second, attributed the collapse to nervous tension, possibly aggravated by Euwe's attempts to maintain a calm appearance.
The two world title matches against Alekhine represent the heart of Euwe's career. Altogether, they played 86 competitive games, and Alekhine had a +28−20=38 lead. Many of Alekhine's wins came early in their series; he was nine years older, and had more experience during that time. The rematch was also one-sided in Alekhine's favour.
Later chess career
Euwe finished equal fourth with Alekhine and Reshevsky in the AVRO tournament of 1938 in the Netherlands, which featured the world's top eight players and was an attempt to decide who should challenge Alekhine for the world championship. Euwe also had a major organizational role in the event.
He played a match with Paul Keres in the Netherlands in 1939–40, losing 6½–7½.
After Alekhine's death in 1946, Euwe was considered by some to have a moral right to the position of world champion, based at least partially on his clear second-place finish in the great tournament at Groningen in 1946, behind Mikhail Botvinnik. But Euwe consented to participate in a five-player tournament to select the new champion, the World Chess Championship 1948. At 47, Euwe was significantly older than the other players, and well past his best. He finished last. In 1950, FIDE granted Euwe the title of international grandmaster on its inaugural list. He took part in the Gijón international tournament in 1951, winning ahead of Pilnik and Rossolimo with a score of (+7 =2).
Euwe's final major tournament was the double round robin Candidates' Tournament in Zürich, 1953, where he finished next to last. He was in the top half of the field after the first half of the tournament, but tired in the second half.
Euwe played for the Netherlands in seven Chess Olympiads from 1927 to 1962, a 35-year span, always on first board. He scored 10½/15 at London 1927, 9½/13 at Stockholm 1937 for a bronze medal, 8/12 at Dubrovnik 1950, 7½/13 at Amsterdam 1954, 8½/11 at Munich 1958 for a silver medal at age 57, 6½/16 at Leipzig 1960, and finally 4/7 at Varna 1962. His aggregate was 54½/87 for 62.6 percent.
In 1957, Euwe played a short match against 14-year-old future world champion Bobby Fischer, winning one game and drawing the other. His lifetime score against Fischer was one win, one loss, and one draw.
Euwe won a total of 102 first prizes in tournaments during his career, many of them local.
He became a computer science professor at Tilburg University in 1964.
FIDE President
From 1970 (at age 69) until 1978, Euwe was president of FIDE. As president, he usually did what he considered morally right rather than what was politically expedient. On several occasions this brought him into conflict with the USSR Chess Federation, which thought it had the right to dominate matters because it contributed a very large share of FIDE's budget and Soviet players dominated the world rankings – in effect, they treated chess as an extension of the Cold War. These conflicts included:
The events leading up to Bobby Fischer's participation in the World Chess Championship 1972 match against Boris Spassky, which led to Fischer's becoming the first non-Soviet champion since World War II. Euwe thought it important for the game's health and reputation that Fischer have the opportunity to challenge for the title as soon as possible, and interpreted the rules very flexibly to enable Fischer to play in the 1970 Interzonal Tournament, which he won by a commanding score.
The defection of Gennadi Sosonko in 1972. The Soviets demanded that Sosonko should be treated as an "unperson", excluded from competitive chess, television or any other event that might be evidence of his defection. When Euwe refused, Soviet players boycotted the 1974 Wijk aan Zee tournament in the Netherlands because Sosonko competed.
In 1976, world championship contender Viktor Korchnoi sought political asylum in the Netherlands. In a discussion a few days earlier, Euwe told Korchnoi: "... of course you will retain all your rights ..." and opposed Soviet efforts to prevent Korchnoi from challenging Anatoly Karpov's title in 1978.
Later in 1976, Euwe supported FIDE's decision to hold the 1976 Chess Olympiad in Israel, which the Soviet Union did not recognize as a country, although the Soviets had won the 1964 Olympiad, which had also been held in Israel. The Central Committee of the Communist Party of the Soviet Union then started plotting to depose Euwe as president of FIDE.
Euwe lost some of his battles with the Soviets. According to Sosonko, in 1973, he accepted the Soviets' demand that Bent Larsen and Robert Hübner, the two strongest non-Soviet contenders (Fischer was now champion), should play in the Leningrad Interzonal tournament rather than the weaker one in Petrópolis. Larsen and Hübner were eliminated from the competition for the World Championship because Korchnoi and Karpov took the first two places at Leningrad.
Some commentators have also questioned whether Euwe did as much as he could have to prevent Fischer from forfeiting his world title in 1975.
It is also notable that in 1976, Rohini Khadilkar became the first female to compete in the Indian Men's Championship. Her involvement in a male competition caused a furore that necessitated a successful appeal to the High Court and caused Euwe to rule that women could not be barred from national or international championships.
Despite the turbulence of the period, most assessments of Euwe's performance as president of FIDE are sympathetic:
Spassky, who had nominated Euwe for the job: "He should certainly not have disqualified Fischer, and he should have been a little tougher with the Soviets ... you get a pile of complicated problems. But Euwe, of course, was the man for the job."
Karpov said Euwe was a very good FIDE President, although he did commit one very serious error, rapidly extending the membership of FIDE to many small third-world countries. "But neither he nor I could have foreseen what this would lead to. ... This led not only to the inflation of the grandmaster title, but also to the leadership vacuum at the head of the world of chess."
Garry Kasparov was blunter: "... unfortunately, he could not foresee the dangers flowing from a FIDE practically under Soviet dominance."
Korchnoi regarded Euwe as the last honorable president of FIDE.
Yuri Averbakh, who was a Soviet chess official as well as a grandmaster: "... he always sought to understand the opposing point of view ... Such behavior was in sharp contrast to the behavior of the Soviet delegation leaders ... Max Euwe was, without a doubt, the best President FIDE ever had."
Euwe died in 1981, at the age of 80, of a heart attack. Revered around the chess world for his many contributions, he had travelled extensively while FIDE President, bringing many new members into the organization.
Assessment of Euwe's chess
Euwe was noted for his logical approach and for his knowledge of openings, in which he made major contributions to chess theory. Paradoxically, his two title matches with Alekhine were displays of tactical ferocity from both sides. But the comments by Kmoch and Alekhine (below) may explain this: Euwe "strode confidently into some extraordinarily complex variations" if he thought logic was on his side; and he was extremely good at calculating these variations. On the other hand, he "often lacked the stamina to pull himself out of bad positions".
Alekhine was allegedly more frank in his Russian-language articles than in those he wrote in English, French or German. In his Russian articles he often described Euwe as lacking in originality and in the mental toughness required of a world champion. Sosonko thought Euwe's modesty was a handicap in top-class chess (although Euwe was well aware of how much stronger he was than "ordinary" grandmasters).
Vladimir Kramnik also says Euwe anticipated Botvinnik's emphasis on technical preparation, and Euwe was usually in good shape physically because he was a keen sportsman.
Chess books by Euwe
Euwe wrote over 70 chess books, far more than any other World Champion; some of the best-known are The Road to Chess Mastery, Judgement and Planning in Chess, The Logical Approach to Chess, and Strategy and Tactics in Chess. Former Soviet grandmaster Sosonko used Euwe and den Hertog's 1927 Practische Schaaklessen as a textbook when teaching in the Leningrad House of Pioneers, and considers it "one of the best chess books ever". Fischer World Champion, an account of the 1972 World Chess Championship match, co-authored by Euwe with Jan Timman, was written in 1972 but not published in English until 2002. Euwe's book From My Games, 1920–1937 was originally published in 1939 by Harcourt, Brace and Company, and was republished by Dover in 1975 (ISBN 0-486-23111-9). He also did not forget children in his published writings: in the year he won the world championship he wrote a Dutch children's chess book, Oom Jan leert zijn neefje schaken ("Uncle Jan Teaches His Nephew to Play Chess", EAN 9789043900669).
Bibliography
Strategy and Tactics in Chess. 1937. McKay.
My Best Games 1920–1937: My Rise to Become World Champion. 2003 [1939]. Hardinge Simpole.
Meet The Masters: Pen Portraits to the Greats by a World Champion. 2004 [1940]. Hardinge Simpole.
The Hague/Moscow 1948 Match/Tournament for the World Chess Championship. 2013 [1948]. Russell Enterprises.
Judgement and Planning in Chess. 1998 [1954]. Batsford.
The Logical Approach to Chess. 1982 [1958]. Dover.
Chess Master vs. Chess Amateur. with Walter Meiden. 1994 [1963]. Dover.
The Middlegame Book One Static Features. with H. Kramer. 1994 [1964]. Hays Pub.
The Middlegame Book Two Dynamic & Subjective Features. with H. Kramer. 1994 [1964]. Hays Pub.
The Road to Chess Mastery. with Walter Meiden. 1966. David McKay.
The Development of Chess Style. with John Nunn. 1997 [1968]. International Chess Enterprises.
Fischer World Champion. with Jan Timman. 2009 [1972]. New In Chess.
Euwe vs. Alekhine Match 1935. 1973. Chess Digest.
A Guide to Chess Endings. with David Hooper. 1976. Dover.
Bobby Fischer The Greatest? 1979 [1976]. Sterling.
Chess Master vs. Chess Master with Walter Meiden. 1977. McKay
Legacy
In Amsterdam, there is a Max Euwe Plein (square), near the Leidseplein, with a large chess set and a statue. The 'Max Euwe Stichting' is located there in a former jailhouse; it houses a Max Euwe museum and a large collection of chess books.
Honours
In 1936, Euwe was appointed Officer of the Order of Orange-Nassau.
In 1979, Euwe was promoted to Commander of the Order of Orange-Nassau.
Kasparov, Garry (2003). My Great Predecessors, part II. Everyman Chess. ISBN 1-85744-342-X.
Winter, Edward, ed. (2006). "World Chess Champions". ISBN 0-08-024094-1.
Max Euwe player profile and games at Chessgames.com
Machgielis Euwe's biography
Max Euwe Centrum, Amsterdam
Machgielis (Max) Euwe a short history of Euwe's playing career
Albert Silver, "Alekhine-Euwe 1935: powerful images", ChessBase, 13 December 2013.
"Max Euwe (1901-81)" by Edward Winter |
Akiba Kiwelowicz Rubinstein (1 December 1880 – 14 March 1961) was a Polish chess player. He is considered to have been one of the greatest players never to have become World Chess Champion. Rubinstein was granted the title International Grandmaster in 1950, at its inauguration.
In his youth, he defeated top players José Raúl Capablanca and Carl Schlechter and was scheduled to play a match with Emanuel Lasker for the World Chess Championship in 1914, but it was cancelled due to the outbreak of World War I. He was unable to re-create consistently the same form after the war, and his later life was plagued by mental illness.
Biography
Early life
Akiba Kiwelowicz Rubinstein was born in Stawiski, Congress Poland, to a Jewish family. He was the youngest of 12 children, but only one sister survived to adulthood. Rubinstein learned to play chess at the relatively late age of 14, and his family had planned for him to become a rabbi. He trained with and played against the strong master Gersz Salwe in Łódź and in 1903, after finishing fifth in a tournament in Kyiv, Rubinstein decided to abandon his rabbinical studies and devote himself entirely to chess.
Chess career
Between 1907 and 1912, Rubinstein established himself as one of the strongest players in the world. In 1907, he won the Carlsbad tournament and the All-Russian Masters' tournament, and shared first at Saint Petersburg. In 1912 he had a record string of wins, finishing first in five consecutive major tournaments: San Sebastián, Pöstyén, Breslau, Warsaw and Vilna (All-Russian Masters' tournament), although none of these events included Lasker or Capablanca. Some sources believe that he was stronger than World Champion Emanuel Lasker at this time. Ratings from Chessmetrics support this conclusion, placing him as world No. 1 between mid-1912 and mid-1914.
During the first decade of the 20th century, the playing field for competitive chess was relatively thin. Wilhelm Steinitz, the first universally recognized world champion, died in 1900 after having been largely retired from chess for several years, Russian master Mikhail Chigorin was nearing the end of his life, while American master Frank Marshall lived on the other side of the Atlantic, far from the center of chess activity in Europe. Another promising American master, Harry Nelson Pillsbury, had died in 1906 at just 33. In the pre-FIDE era, the reigning world champion handpicked his challenger, and Emanuel Lasker demanded a high sum of money that Rubinstein could not produce. In the St. Petersburg tournament in 1909, he had tied with Lasker and won his individual encounter with him. However, he had a poor showing at the 1914 St. Petersburg tournament, not placing in the top five. A match with Lasker was arranged for October 1914, but it did not take place because of the outbreak of World War I.
Rubinstein's peak as a player is generally considered to have been between 1907 and 1914. During World War I, he was confined to Poland, although he played in a few organized chess events there and traveled to Berlin in early 1918 for a tournament. His playing after the war never regained the same consistency as it had pre-1914. He and his family moved to Sweden following the Armistice in November 1918, where they stayed until 1922, and then moved to Germany. Rubinstein won at Vienna in 1922, ahead of future World Champion Alexander Alekhine, and was the leader of the Polish team that won the 1930 Chess Olympiad at Hamburg with a record of thirteen wins and four draws. He also won an Olympic silver at the 1931 Chess Olympiad, again leading the Polish team.
Rubinstein came in fourth place in the London 1922 tournament, after which the new world champion Jose Raul Capablanca offered to play him in a match if he could raise the money, which once again he was unable to do. At Hastings 1922, he came in second place, followed by a fifth-place finish at Teplitz-Schönau late in the year, and then won in Vienna brilliantly. This triumph, however, was soured when Austrian border guards impounded most of the prize money he had won. Rubinstein closed out 1922 with another appearance at Hastings, which he won, but his tournament record during 1923 was disappointing as he came in just twelfth place at Carlsbad and tenth at Maehrisch-Ostrau.
His first tournament of 1924, at Meran, saw him come in third. He attempted to participate in the New York tournament that spring but was excluded from the event due to a limited number of available slots, all of which were filled. Rubinstein's 1925 tournament record was reasonably good, but his year-end appearance in Moscow saw him come in 14th. His record in 1926 was fair but not outstanding. That year, the Rubinstein family moved to Belgium permanently.
In 1927, Rubinstein visited his birthplace in Poland, where he won the Polish Championship in Łódź. He embarked on an exhibition tour of the United States in early 1928; although a match with reigning US chess champion Frank Marshall was proposed along with an international tournament, it never materialized. He tied third with Max Euwe at Bad Kissingen and then delivered a poor performance in Berlin. Rubinstein had his best post-WWI showing during 1929, when he dominated the Ramsgate tournament in Britain and had excellent showings at Carlsbad and Budapest. He won Rogaška-Slatina.
As the 1930s started, Rubinstein contested the San Remo tournament, coming in fourth. He played well in a few Belgian events that year, and then third place at Scarborough. His performance at Liege was weak, possibly due to exhaustion. He skipped Bled 1931 despite an invitation, played well at Antwerp, but came in dead last at Rotterdam. This was the last major chess event he participated in.
Mental health problems and later life
After 1932, he withdrew from tournament play as his noted anthropophobia showed traces of schizophrenia during a mental health breakdown. In one period, after making a chess move he would go and hide in the corner of the tournament hall while awaiting his opponent's reply. Regardless, his former strength was recognized by FIDE when he was one of 27 players awarded the inaugural Grandmaster title in 1950.
It is not clear how Rubinstein, who was Jewish, survived World War II in Nazi-occupied Belgium. Chess historian Edward Winter has written on the subject. Citing a number of Rubinstein's peers in the chess world and people who were close to him, it seems that Rubinstein spent the war in a sanatorium. He cites a story about Rubinstein that has, since the war, been published in various books and articles, with varying details: "Nazi investigators once descended on the place and asked Rubinstein, "Are you happy here?" "Not at all", Rubinstein replied. "Would you prefer to go to Germany and work for the Wehrmacht?" "I'd be delighted to", Rubinstein replied. "Then he really must be barmy", the Nazis decided", but Winter quotes Rubinstein's biographers as saying "Most stories concerning Rubinstein are at best half truths, which have become so embellished over time that they bear little resemblance to what actually transpired", before adding "That is indisputable."
Rubinstein was also a well-known coffee drinker, and was known to consume the hot beverage in large quantities before important matches. Unlike many other top grandmasters, he left no literary legacy, which has been attributed to his mental health problems. He spent the last 29 years of his life living at home with his family and in a sanatorium because of his severe mental illness. Rubinstein is a tragic, mentally ill character in the novel The Lüneburg Variation, about chess masters, obsession and revenge, by Italian writer Paolo Maurensig.
However, while in the mental clinic Rubinstein was visited by Alberic O'Kelly on a number of occasions and he provided the latter with some chess guidance.
Legacy
He was one of the earliest chess players to take the endgame into account when choosing and playing the opening. He was exceptionally talented in the endgame, particularly in rook endings, where he broke new ground in knowledge. Jeremy Silman ranked him as one of the five best endgame players of all time, and a master of rook endgames.
He originated the Rubinstein System against the Tarrasch Defense variation of the Queen's Gambit Declined: 1.d4 d5 2.Nf3 c5 3.c4 e6 4.cxd5 exd5 5.Nc3 Nc6 6.g3 Nf6 7.Bg2 (Rubinstein–Tarrasch, 1912). He is also credited with inventing the Meran Variation, which stems from the Queen's Gambit Declined but reaches a position of the Queen's Gambit Accepted with an extra move for Black.
Many opening variations are named for him. According to Grandmaster Boris Gelfand, "Most of the modern openings are based on Rubinstein." The "Rubinstein Attack" often refers to 1.d4 d5 2.c4 e6 3.Nc3 Nf6 4.Bg5 Be7 5.e3 0-0 6.Nf3 Nbd7 7.Qc2. The Rubinstein Variation of the French Defence arises after 1.e4 e6 2.d4 d5 3.Nc3 (or 3.Nd2) dxe4 4.Nxe4. In the Nimzo-Indian Defence, the Rubinstein Variation, 1.d4 Nf6 2.c4 e6 3.Nc3 Bb4 4.e3, is the main alternative to 4.Qc2. There are also the Rubinstein Variation of the Four Knights Game, which arises after 1.e4 e5 2.Nf3 Nc6 3.Nc3 Nf6 4.Bb5 Nd4, and the Rubinstein Variation of the Symmetrical English, 1.c4 c5 2.Nc3 Nf6 3.g3 d5 4.cxd5 Nxd5 5.Bg2 Nc7, a complex system that is very popular at the grandmaster level.
The Rubinstein Trap, an opening trap in the Queen's Gambit Declined that loses at least a pawn for Black, is named for him because he fell into it twice. One version of it runs 1.d4 d5 2.c4 e6 3.Nc3 Nf6 4.cxd5 exd5 5.Bg5 Be7 6.e3 0-0 7.Nf3 Nbd7 8.Bd3 c6 10.0-0 Re8 11.Rc1 h6 12. Bf4 Nh5? 13. Nxd5! Now 13...cxd5?? is met by 14.Bc7, winning the queen, while 13...Nxf4 14.Nxf4 leaves White a pawn ahead.
The Rubinstein Memorial tournament in his honour has been held annually since 1963 in Polanica Zdrój, with a glittering list of top-flight winners. Boris Gelfand has named Rubinstein as his favourite player, and once said, "what I like in chess ... comes from Akiba."
Notable games
Hermanis Mattison vs. Akiba Rubinstein, Carlsbad 1929, (C68), 0–1 This game contains a rook and pawn ending that seemed "hopelessly drawn" but was won by Rubinstein. The editor of the tournament book said that if this game had been played 300 years earlier, Rubinstein would have been burned at the stake for dealing with evil spirits.
George Rotlewi vs. Akiba Rubinstein, Lodz 1907, Tarrasch Defense: Symmetrical Variation (D02), 0–1 This game contains an attacking combination that was called "perhaps the most magnificent ... of all time" by Carl Schlechter.
Akiba Rubinstein vs. Emanuel Lasker, St.Petersburg 1909, Queen's Gambit Declined: Orthodox Variation (D30), 1–0 This game ends in a position where Lasker has no good moves (zugzwang).
Akiba Rubinstein vs. Karel Hromádka, Moravská Ostrava 1923, King's Gambit Declined: Classical Variation (C30), 1–0 A game full of tactics and hanging pieces in which Rubinstein beat former Czech champion Karel Hromádka.
Akiba Rubinstein vs. Carl Schlechter, San Sebastian 1912, 1–0 Capablanca called this game "a monument of magnificent precision".
Akiba Rubinstein vs. Milan Vidmar Sr., Berlin 1908, 0–1 This game was the sensation of the tournament, in that Vidmar defeated Rubinstein, the winner of six previous tournaments. Vidmar employed the then novel Budapest Gambit. The game featured a spectacular King hunt, with the White King fleeing from e1 to h5. White resigned on move 24, one move shy of checkmate.
Personal life
In 1917, Rubinstein married Eugénie Lew. They had two sons, Jonas in 1918 and Sammy in 1927. For a time, they lived above the restaurant that Eugénie operated. After she died in 1954, Rubinstein lived in an old-people's home until his death in 1961 at the age of 80. He reportedly still followed chess in his final years; his sons recalled going over the games of the 1954 Botvinnik–Smyslov world championship match with him.
See also
List of chess grandmasters
Further reading
Donaldson, John and Nikolay Minev (1994). Akiva Rubinstein: Uncrowned King. International Chess Enterprises. ISBN 1-879479-19-2.
Chernev, Irving (1995). Twelve Great Chess Players and Their Best Games. New York: Dover. pp. 14–28. ISBN 0-486-28674-6.
Kmoch, Hans (1960). Rubinstein's Chess Masterpieces/100 Selected Games. Barnie F. Winkelman. Dover. ISBN 0-486-20617-3.
Pritchett, Craig (2009). Heroes of Classical Chess: Learn from Carlsen, Anand, Fischer, Smyslov and Rubinstein. London: Everyman Chess. pp. 12-50. ISBN 978-1857446197.
Donaldson, John and Nikolay Minev (2018, 2nd edition). Akiva Rubinstein, Volume 1: Uncrowned King. Milford, CT: Russell Enterprises. ISBN 978-1-941270-88-2.
Donaldson, John and Nikolay Minev (2011, 2nd edition). Akiva Rubinstein, Volume 2: The Later Years. Milford, CT: Russell Enterprises. ISBN 978-1-888690-51-4.
Franco, Zenón (2016). Rubinstein: Move by Move. London: Everyman Chess. ISBN 978-1781943144.
Razuvaev, Yuri and Valery Murakhveri (2023, 1st English edition). Akiba Rubinstein. Stockholm: Verendel Publishing. ISBN 978-91-519-7645-7.
Akiba Rubinstein player profile and games at Chessgames.com
Starfire bio
Supreme Chess bio
Akiba Rubinstein Foundation Archived 2021-12-10 at the Wayback Machine |
A++ stands for abstraction plus reference plus synthesis, which is used as a name for the minimalistic programming language that is built on ARS-based programming. ARS-based programming is the name for a style of programming that consists mainly of applying patterns derived from ARS to programming in any language. ARS is an abstraction from the Lambda Calculus, taking its three basic operations and giving them a more general meaning, thus providing a foundation for the three major programming paradigms: functional programming, object-oriented programming and imperative programming.
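A++'s own syntax is not reproduced in this article; as a rough analogy, the three ARS operations correspond to constructs found in any language with first-class functions. A minimal Python sketch (an analogy only, not A++ code):

# Abstraction: wrap an expression in a parameterised function (a lambda abstraction).
square = lambda x: x * x

# Reference: mention a previously bound name in order to reuse the abstraction.
f = square

# Synthesis: apply an abstraction to an argument, combining the two into a new value.
print(f(7))          # 49

# Nesting the same three operations yields richer structures, e.g. a curried
# two-argument function built from an abstraction inside an abstraction:
add = lambda a: lambda b: a + b
print(add(3)(4))     # synthesis applied twice -> 7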
The technical texts in this article are taken from the online version of the 1st edition of the A++-book published in 2004. The 2nd edition of the book A++ The Smallest Programming Language in the World (292 pages) was published in 2018.
History
A++ was developed between 1996 and 2002 by Georg P. Loczewski and Britain Hamm. Loczewski was working at the time as a software developer for Bull's Software-Haus in Langen, Germany, and as a freelance programmer; the language's purpose was to serve as a learning instrument rather than as a programming language used to solve practical problems.
The development of A++ is based on the 'Lambda Calculus' by Alonzo Church and is influenced by Guy L. Steele's Programming Language Scheme.
A++ is intended to be an effective tool to become familiar with the core of programming and with programming patterns that can be applied in other languages needed to face the real world.
Publications
The first published documentation appeared in German in January 2003 with the title 'Programmierung pur --- Programmieren fundamental und ohne Grenzen' ('Undiluted Programming') (919 pages), ISBN 978-3-87820-108-3. In 2005, an introduction to A++ in English followed, with the title 'A++ The Smallest Programming Language in the World --- An Educational Language' (242 pages), ISBN 978-3-87820-116-8.
Purpose
A++, with interpreters available in Scheme, Java, C, C++ and Python, offers an ideal environment for basic training in programming, enforcing a rigorous confrontation with the essentials of programming languages.
Constitutive principles
ARS (basic operations)
Abstraction
+ Reference
+ Synthesis
Lexical scope
Closure
Programming paradigms supported
Functional programming, (directly supported)
(writing expressions to be evaluated),
Object-oriented programming (directly supported)
(sending messages to objects),
Imperative programming (directly supported)
(writing statements to be executed), including structured programming.
Logic programming (indirectly supported)
(rule based programming)
Core features
Logical abstractions
(true, false, if, not, and, or),
Numerical abstractions
(natural numbers, zerop, succ, pred, add, sub, mult),
Relational abstractions,
(equalp, gtp, ltp, gep)
Recursion,
Creation and processing of lists
(cons, car, cdr, nil, nullp, llength, remove, nth, assoc),
Higher order functions
(compose, curry, map, mapc, map2, filter, locate, for-each),
Set operations
(memberp, union, addelt),
Iterative control structure
('while').
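The logical and numerical abstractions listed above are in the spirit of Church encodings, in which booleans and numbers are represented by functions alone. A minimal Python sketch of that idea (an analogy; the actual A++ definitions and names may differ):

# Church booleans: a boolean selects one of two alternatives.
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f
IF    = lambda c: lambda t: lambda f: c(t)(f)
NOT   = lambda c: c(FALSE)(TRUE)
AND   = lambda a: lambda b: a(b)(FALSE)

# Church numerals: the number n applies a function n times.
ZERO = lambda s: lambda z: z
SUCC = lambda n: lambda s: lambda z: s(n(s)(z))
ADD  = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

def to_int(n):
    # Convert a Church numeral to a Python int (for display only).
    return n(lambda x: x + 1)(0)

ONE = SUCC(ZERO)
TWO = SUCC(ONE)
print(to_int(ADD(TWO)(SUCC(TWO))))             # 5
print(IF(AND(TRUE)(NOT(FALSE)))("yes")("no"))  # 'yes'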
Development of applications with A++
The purpose of A++ is not to be used as a programming language for writing applications that meet the needs of the real world. Nevertheless, it is possible to write simple application programs in A++, such as object-oriented implementations of simple account handling and of a library management system.
To write real-world application programs, the language ARS++ is provided, which extends A++ to a language similar to Scheme. The name ARS++ is derived from ARS plus Scheme plus extensions.
See also
The information on the following internal link referring to ARS++ and ARS-based programming may not be up to date or accurate. It is recommended to use the following external link instead:
ARS-based programming and ARS++:
ARS-based programming
Educational programming language
ARS++
A++ Official Web-Site
The A++ Book (online-edition)
ARS/ARS++ Web-Site
The Lambda Calculus and A++ |
ABC ALGOL is an extension of the programming language ALGOL 60 with arbitrary data structures and user-defined operators, intended for computer algebra (symbolic mathematics). Despite its advances, it was never used as widely as Algol proper.
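ABC ALGOL's own syntax is not reproduced here; as a generic illustration of what arbitrary data structures plus user-defined operators make possible in formula manipulation, the following Python sketch (an analogy, not ABC ALGOL code) overloads + and * to build expression trees and differentiates them symbolically:

# A tiny symbolic-expression type: the user-defined operators (+, *) build an
# expression tree instead of computing a number, and d() differentiates it.
class Expr:
    def __add__(self, other):  return Add(self, wrap(other))
    def __radd__(self, other): return Add(wrap(other), self)
    def __mul__(self, other):  return Mul(self, wrap(other))
    def __rmul__(self, other): return Mul(wrap(other), self)

class Const(Expr):
    def __init__(self, v): self.v = v
    def d(self, x): return Const(0)
    def __repr__(self): return str(self.v)

class Var(Expr):
    def __init__(self, name): self.name = name
    def d(self, x): return Const(1 if self.name == x.name else 0)
    def __repr__(self): return self.name

class Add(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def d(self, x): return Add(self.a.d(x), self.b.d(x))
    def __repr__(self): return f"({self.a} + {self.b})"

class Mul(Expr):
    def __init__(self, a, b): self.a, self.b = a, b
    def d(self, x):  # product rule
        return Add(Mul(self.a.d(x), self.b), Mul(self.a, self.b.d(x)))
    def __repr__(self): return f"({self.a} * {self.b})"

def wrap(v):
    return v if isinstance(v, Expr) else Const(v)

x = Var("x")
f = 3 * x * x + x + 5   # built with the overloaded operators
print(f)                # prints the unsimplified expression tree
print(f.d(x))           # derivative by the sum and product rules (unsimplified)

Simplification, substitution and so on can be layered onto the same representation; the point is only that both the data structure and the operators are defined by the programmer rather than built into the language.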
van de Riet, R.P. (1973). ABC Algol: A Portable Language for Formula Manipulation Systems. Matematisch Centrum (Amsterdam). Retrieved May 26, 2017.
van de Riet, R.P. (1973). ABC Algol: The language. Mathematisch Centrum. Retrieved May 26, 2017.
van de Riet, R.P. (1973). ABC Algol: The compiler. Mathematisch Centrum. Retrieved May 26, 2017. |
Action may refer to:
Action (narrative), a literary mode
Action fiction, a type of genre fiction
Action game, a genre of video game
Film
Action film, a genre of film
Action (1921 film), a film by John Ford
Action (1980 film), a film by Tinto Brass
Action 3D, a 2013 Telugu language film
Action (2019 film), a Kollywood film.
Music
Action (music), a characteristic of a stringed instrument
Action (piano), the mechanism which drops the hammer on the string when a key is pressed
The Action, a 1960s band
Albums
Action (B'z album) (2007)
Action! (Desmond Dekker album) (1968)
Action Action Action or Action, a 1965 album by Jackie McLean
Action! (Oh My God album) (2002)
Action (Oscar Peterson album) (1968)
Action (Punchline album) (2004)
Action (Question Mark & the Mysterians album) (1967)
Action (Uppermost album) (2011)
Action (EP), a 2012 EP by NU'EST
Action, a 1984 album by Kiddo
Songs
"Action" (Freddy Cannon song) (1965), the theme song to the TV series Where the Action Is
"Action" (Sweet song) (1975), covered by various artists
"Action", a version of "Feeling This" by Blink-182
"Action", a 1994 song by Terror Fabulous featuring Nadine Sutherland
"Action", a 1984 song by The Fits
"Action", a 1960 song by Lance Fortune
"Action", a 1988 song by Pearly Gates
"Action", a 1988 song by Girlschool from Take a Bite
"Action", a 1989 song by Gorky Park from Gorky Park (album)
"Action", a 2003 song by Powerman 5000 from Transform
"Action", a 1972 song by Scorpions from Lonesome Crow
"Actions", a 1980 song by The Stingrays
"Action Action Action Action Action", a 2008 song by We Are the Physics from We Are the Physics Are OK at Music
Literature
Action (comics), a British comic book published in 1976–1977
Action Comics, a DC Comics comic book series
Action: A Book about Sex, a 2016 book by Amy Rose Spiegel
Action (newspaper), a newspaper of Oswald Mosley's British Union of Fascists
People
Action Bronson (born 1983), American rapper, reality television star, author, and talk show host
Television and radio
Action (Canadian TV channel)
Action (French TV channel)
The Action Channel (US TV channel), a subsidiary of Luken Communications
Action (radio), a 1945 radio program
Action (TV series), a comedy series on Fox in 1999–2000
CBS Action (2009–2018), now known as CBS Justice
Sky Sports Action, a TV channel
Theatre
Action (theatre), a principle in Western theatre practice
Action (play), a 1975 play by Sam Shepard
Organizations
Businesses
Action (store), a Dutch discount store chain with branches in many European countries
Action (supermarkets), an Australian supermarket chain
Actions Semiconductor, a Chinese semiconductor company
ACTION, an Australian public transport company
The Action Network (branded as Action), an American sports betting analytics company
Political parties
Action (Cypriot political party), a Cypriot political alliance
Action (Greek political party), a Greek political party
Action (Italian political party), an Italian political party
Other organizations
ACTION (U.S. government agency), a former US government federal domestic volunteer agency
Science, technology, and mathematics
Action (physics), an attribute of the dynamics of a physical system
Action at a distance, an outdated term for nonlocal interaction in physics
Group action (mathematics)
Continuous group action
Semigroup action
Ring action (mathematics)
Action (firearms), the mechanism that manipulates cartridges and/or seals the breech
Action! (programming language), for the Atari 8-bit family of microcomputers
Action (UML), in the Unified Modeling Language
Dudek Action, a Polish paraglider design
Diia, a brand of e-governance in Ukraine
Other uses
Action (philosophy), something which is done by a person
Lawsuit or action
Action Force (disambiguation)
Action Jackson (disambiguation)
Action Man (disambiguation)
Action theory (disambiguation)
Acteon (disambiguation)
Actaeon (disambiguation)
Acción (disambiguation)
Structural load, forces, deformations, or accelerations applied to a structure or its components
All pages with titles beginning with Action
All pages with titles containing action
Adenine (symbol A or Ade) is a purine nucleobase. It is one of the four nucleobases in the nucleic acids of DNA, the other three being guanine (G), cytosine (C), and thymine (T). Adenine derivatives have various roles in biochemistry including cellular respiration, in the form of both the energy-rich adenosine triphosphate (ATP) and the cofactors nicotinamide adenine dinucleotide (NAD), flavin adenine dinucleotide (FAD) and Coenzyme A. It also has functions in protein synthesis and as a chemical component of DNA and RNA. The shape of adenine is complementary to either thymine in DNA or uracil in RNA.
The adjacent image shows pure adenine, as an independent molecule. When connected into DNA, a covalent bond is formed between deoxyribose sugar and the bottom left nitrogen (thereby removing the existing hydrogen atom). The remaining structure is called an adenine residue, as part of a larger molecule. Adenosine is adenine reacted with ribose, as used in RNA and ATP; deoxyadenosine is adenine attached to deoxyribose, as used to form DNA.
Structure
Adenine forms several tautomers, compounds that can be rapidly interconverted and are often considered equivalent. However, in isolated conditions, i.e. in an inert gas matrix and in the gas phase, mainly the 9H-adenine tautomer is found.
Biosynthesis
Purine metabolism involves the formation of adenine and guanine. Both adenine and guanine are derived from the nucleotide inosine monophosphate (IMP), which in turn is synthesized from a pre-existing ribose phosphate through a complex pathway using atoms from the amino acids glycine, glutamine, and aspartic acid, as well as the coenzyme tetrahydrofolate.
Manufacturing method
Patented on August 20, 1968, the currently recognized method of industrial-scale production of adenine is a modified form of the formamide method. In this method, formamide is heated at 120 degrees Celsius in a sealed flask for 5 hours to form adenine. The yield is greatly increased by using phosphorus oxychloride (phosphoryl chloride) or phosphorus pentachloride as an acid catalyst and by applying sunlight or ultraviolet light. After the 5 hours have passed and the formamide-phosphorus oxychloride-adenine solution has cooled, water is added to the flask containing the formamide and newly formed adenine. The water-formamide-adenine solution is then poured through a filtering column of activated charcoal. The small water and formamide molecules pass through the charcoal into the waste flask, while the larger adenine molecules attach, or "adsorb", to the charcoal because of the van der Waals forces between the adenine and the carbon in the charcoal. Because charcoal has a large surface area, it captures the majority of molecules above a certain size (larger than water and formamide). To extract the adenine from the charcoal, ammonia gas dissolved in water (aqua ammonia) is poured onto the activated charcoal-adenine structure, liberating the adenine into the ammonia-water solution. The solution containing water, ammonia, and adenine is then left to air-dry; as the ammonia escapes, the adenine loses solubility (the solution is no longer basic enough to dissolve it) and crystallizes into a pure white powder that can be stored.
Function
Adenine is one of the two purine nucleobases (the other being guanine) used in forming nucleotides of the nucleic acids. In DNA, adenine binds to thymine via two hydrogen bonds to assist in stabilizing the nucleic acid structures. In RNA, which is used for protein synthesis, adenine binds to uracil.
Adenine forms adenosine, a nucleoside, when attached to ribose, and deoxyadenosine when attached to deoxyribose. It forms adenosine triphosphate (ATP), a nucleoside triphosphate, when three phosphate groups are added to adenosine. Adenosine triphosphate is used in cellular metabolism as one of the basic means of transferring chemical energy between chemical reactions. Thus adenosine, cyclic adenosine monophosphate (cAMP), adenosine diphosphate (ADP), and ATP are all derivatives of adenine.
History
In older literature, adenine was sometimes called Vitamin B4. However, because it is synthesized by the body and does not need to be obtained from the diet, it does not meet the definition of a vitamin and is no longer part of the Vitamin B complex. Two B vitamins, niacin and riboflavin, do bind with adenine to form the essential cofactors nicotinamide adenine dinucleotide (NAD) and flavin adenine dinucleotide (FAD), respectively. Hermann Emil Fischer was one of the early scientists to study adenine.
It was named in 1885 by Albrecht Kossel after the Greek ἀδήν aden, "gland", in reference to the pancreas, from which Kossel's sample had been extracted. Experiments performed in 1961 by Joan Oró showed that a large quantity of adenine can be synthesized from the polymerization of ammonia with five hydrogen cyanide (HCN) molecules in aqueous solution; whether this has implications for the origin of life on Earth is still under debate. On August 8, 2011, a report based on NASA studies of meteorites found on Earth was published, suggesting that building blocks of DNA and RNA (adenine, guanine and related organic molecules) may have formed extraterrestrially in outer space. In 2011, physicists reported that adenine has an "unexpectedly variable range of ionization energies along its reaction pathways", which suggested that "understanding experimental data on how adenine survives exposure to UV light is much more complicated than previously thought"; according to one report, these findings have implications for spectroscopic measurements of heterocyclic compounds.
Vitamin B4 MS Spectrum
Axum, or Aksum, is a town in the Tigray Region of Ethiopia with a population of 66,900 residents (as of 2015). It is the site of the historic capital of the Aksumite Empire, a naval and trading power that ruled the whole region, as well as parts of West Asia such as Saudi Arabia and Yemen, from about 400 BCE into the 10th century. Axum is located in the Central Zone of the Tigray Region, near the base of the Adwa mountains. It has an elevation of 2,131 metres (6,991 feet) and is surrounded by La'ilay Maychew, a separately administered woreda of the Tigray Region.
In 1980, UNESCO added Axum's archaeological sites to its list of World Heritage Sites due to their historic value. Prior to the beginning of the Tigray War in 2020, Axum was a leading tourist destination for foreign visitors.
History
Axum was the hub of the marine trading power known as the Aksumite Empire, which predated the earliest mentions in Roman-era writings. Around 356 CE, its ruler was converted to an Abyssinian variety of Christianity by Frumentius. Later, under the reign of the Emperor Kaleb, Axum was a quasi-ally of Byzantium against the Sasanian Empire which had adopted Zoroastrianism. The historical record is unclear with ancient church records being the primary contemporary sources.
It is believed the empire began a long and slow decline after the 7th century due partly to the Persians and then the Arabs contesting old Red Sea trade routes. Eventually the empire was cut off from its principal markets in Alexandria, Byzantium and Southern Europe and its share of trade captured by Arab traders of the era.
The Aksumite Empire was finally destroyed in the 10th century by Empress Gudit, and eventually some of the people of Axum were forced south and their old way of life declined. As the empire's power declined so did the influence of the city, which is believed to have lost population in the decline, similar to Rome and other cities thrust away from the flow of world events. The last known (nominal) emperor to reign was crowned in about the 10th century, but the empire's influence and power had ended long before that.
Its decline in population and trade then contributed to the shift of the power hub of the Ethiopian Empire south to the Amhara region as it moved further inland. In this period the city of Axum became the administrative seat of an empire spanning one million square miles. Eventually, the alternative name of Ethiopia was adopted by the central region and then by the modern state that presently exists. "Axum" (or its Greek and Latin equivalents) appears as an important centre on indigenous maps of the northern Horn of Africa in the 15th century. The Adal leader Ahmed ibn Ibrahim al-Ghazi led the conquest of Axum in the sixteenth century.
The Aksumite Empire and the Ethiopian Church
The Aksumite Empire had its own written language, Geʽez, and developed a distinctive architecture exemplified by giant obelisks. The oldest of these, though relatively small, dates from 5000–2000 BCE. The empire was at its height under Emperor Ezana, baptized as Abreha in the 4th century (which was also when the empire officially embraced Christianity). The Ethiopian Orthodox Tewahedo Church claims that the Church of Our Lady Mary of Zion in Axum houses the Biblical Ark of the Covenant, in which lie the Tablets of Stone upon which the Ten Commandments are inscribed. Ethiopian traditions suggest that it was from Axum that Makeda, the Queen of Sheba, journeyed to visit King Solomon in Jerusalem and that the two had a son, Menelik, who grew up in Ethiopia but travelled to Jerusalem as a young man to visit his father's homeland. He lived several years in Jerusalem before returning to his country with the Ark of the Covenant. According to the Ethiopian Church and Ethiopian tradition, the Ark still exists in Axum. This same church was the site where Ethiopian emperors were crowned for centuries until the reign of Fasilides, then again beginning with Yohannes IV until the end of the empire.
Axum is considered to be the holiest city in Ethiopia and is an important destination of pilgrimages. Significant religious festivals are the Timkat festival (known as Epiphany in western Christianity) on 19 January (20 January in leap years) and the Festival of Maryam Zion on 30 November (21 Hidar on the Ethiopian calendar).
In 1937, a 24 m (79 ft) tall, 1,700-year-old Obelisk of Axum, was broken into five parts by the Italians and shipped to Rome to be erected. The obelisk is widely regarded as one of the finest examples of engineering from the height of the Axumite empire. Despite a 1947 United Nations agreement that the obelisk would be shipped back, Italy balked, resulting in a long-standing diplomatic dispute with the Ethiopian government, which views the obelisk as a symbol of national identity. In April 2005, Italy finally returned the obelisk pieces to Axum amidst much official and public rejoicing; Italy also covered the US$4 million costs of the transfer. UNESCO assumed responsibility for the re-installation of this stele in Axum, and by the end of July 2008 the obelisk had been reinstalled. It was unveiled on 4 September 2008.
Axum and Islam
The Aksumite Empire had a long-standing relationship with Islam. According to ibn Hisham, when Muhammad faced oppression from the Quraysh clan in Mecca, he sent a small group of his original followers, including his daughter Ruqayya and her husband Uthman, to Axum. The Negus, the Aksumite monarch (known as An-Najashi (النجاشي) in the Islamic tradition), gave them refuge and protection and refused the requests of the Quraysh clan to send the refugees back to Arabia. These refugees did not return until the sixth Hijri year (628 CE), and even then many remained in Ethiopia, eventually settling at Negash in what is now the Misraqawi Zone. There are different traditions concerning the effect these early Muslims had on the ruler of Axum. The Muslim tradition is that the ruler of Axum was so impressed by these refugees that he became a secret convert. On the other hand, Arabic historians and Ethiopian tradition state that some of the Muslim refugees who lived in Ethiopia during this time converted to Orthodox Christianity. There is also a second Ethiopian tradition that, on the death of Ashama ibn Abjar, Muhammad is reported to have prayed for the king's soul, and told his followers, "Leave the Abyssinians in peace, as long as they do not take the offensive."
Earlier researches
In February 1893 the British explorers James Theodore Bent and his wife, Mabel Bent, travelled by boat to Massawa on the west coast of the Red Sea. They then made their way overland to excavate at Axum and Yeha, in the hope of researching possible links between early trading networks and cultures on both sides of the Red Sea. They reached Axum by 24 February 1893, but their work was curtailed by tensions between the Italian occupiers and local warlords, together with the continuing ramifications of the First Italo-Ethiopian War, and by the end of March they had to make a hasty retreat to Zula for passage back to England.
3D documentation with laser-scanning
The Zamani Project documents cultural heritage sites in 3D, based on terrestrial laser scanning, to create a record for future generations. The 3D documentation of parts of the Axum Stelae Field was carried out in 2006; the resulting 3D models, plans and images can be viewed on the Zamani Project website.
1989 air raid
During the Ethiopian Civil War, on 30 March 1989, Axum was bombed from the air by the Ethiopian National Defence Forces and three people were killed.
Maryam Ts'iyon massacre
Thousands of civilians died during the Axum massacre that took place in and around the Maryam Ts'iyon Church in Axum during the Tigray War in December 2020. There was indiscriminate shooting by the Eritrean Defence Forces (EDF) throughout Axum and focussed killings at the Church of Our Lady Mary of Zion (Maryam Ts'iyon) by the Ethiopian National Defense Force (ENDF) and Amhara militia. The church was also a place where the corpses of civilians killed elsewhere were collected for burial. A tight government communications blackout ensured that news of the massacre (or two separate massacres; reports are still emerging) was only revealed internationally in early January 2021 after survivors escaped to safer locations.
Main sites of Axum
The major Aksumite monuments in the town are steles. These obelisks are around 1,700 years old and have become a symbol of the Ethiopian people's identity. The largest number are in the Northern Stelae Park, ranging up to the 33-metre-long (108 ft) Great Stele, believed to have fallen and broken during construction. The Obelisk of Axum was removed by the Italian army in 1937, and returned to Ethiopia in 2005 and reinstalled 31 July 2008. The next tallest is the 24 m (79 ft) King Ezana's Stele. Three more stelae measure 18.2 m (60 ft) high, 15.8 m (52 ft) high, 15.3 m (50 ft) high. The stelae are believed to mark graves and would have had cast metal discs affixed to their sides, which are also carved with architectural designs. The Gudit Stelae to the west of town, unlike the northern area, are interspersed with mostly 4th century tombs.
The other major features of the town are the old and new churches of Our Lady Mary of Zion. The Church of Our Lady Mary of Zion was built in 1665 by Emperor Fasilides and said to have previously housed the Ark of the Covenant. The original cathedral, said to have been built by Ezana and augmented several times afterwards, was believed to have been massive with an estimated 12 naves. It was burned to the ground by Gudit, rebuilt, and then destroyed again during the Abyssinian–Adal war of the 1500s. It was again rebuilt by Emperor Gelawdewos (completed by his brother and successor Emperor Minas) and Emperor Fasilides replaced that structure with the present one. Only men are permitted entry into the Old St. Mary's Cathedral (some say as a result of the destruction of the original church by Gudit). The New Cathedral of St. Mary of Zion stands next to the old one, and was built to fulfil a pledge by Emperor Haile Selassie to Our Lady of Zion for the liberation of Ethiopia from the Fascist occupation. Built in a neo-Byzantine style, work on the new cathedral began in 1955, and allows entry to women. Emperor Haile Selassie interrupted the state visit of Queen Elizabeth II to travel to Axum to attend the dedication of the new cathedral and pay personal homage, showing the importance of this church in the Ethiopian Empire. Queen Elizabeth visited the Cathedral a few days later. Between the two cathedrals is a small chapel known as The Chapel of the Tablet built at the same time as the new cathedral, and which is believed to house the Ark of the Covenant. Emperor Haile Selassie's consort, Empress Menen Asfaw, paid for its construction from her private funds. Admittance to the chapel is closed to all but the guardian monk who resides there. Entrance is even forbidden to the Patriarch of the Orthodox Church, and to the Emperor of Ethiopia during the monarchy. The two cathedrals and the chapel of the Ark are the focus of pilgrimage and considered the holiest sites in Ethiopia to members of its Orthodox Church.
Other attractions in Axum include archaeological and ethnographic museums, the Ezana Stone written in Sabaean, Geʽez and Ancient Greek in a similar manner to the Rosetta Stone, King Bazen's Tomb (a megalith considered to be one of the earliest structures), the so-called Queen of Sheba's Bath (actually a reservoir), the 4th-century Ta'akha Maryam and 6th-century Dungur palaces, Pentalewon Monastery and Abba Liqanos and about 2 km (1.2 mi) west is the rock art called the Lioness of Gobedra.
Local legend claims the Queen of Sheba lived in the town.
Climate
The Köppen-Geiger climate classification system classifies its climate as subtropical highland (Cwb).
Demographics
According to the Central Statistical Agency of Ethiopia (CSA), as of 1 July 2012 the town of Axum had an estimated population of 56,576, of whom 30,293 were female and 26,283 were male. The 2007 national census recorded a town population of 44,647, of whom 20,741 were male and 23,906 female. The majority of the inhabitants said they practised Ethiopian Orthodox Christianity, with 88.03% reporting that as their religion, while 10.89% of the population were Ethiopian Muslim. The 1994 national census reported the population of the city as 27,148, of whom 12,536 were men and 14,612 were women. The largest ethnic group reported was the Tigrayans at 98.54%, and Tigrinya was spoken as a first language by 98.68%. The majority of the population practised Ethiopian Orthodox Christianity, with 85.08% reported as embracing that religion, while 14.81% were Muslim.
Transport
Axum Airport, also known as Emperor Yohannes IV Airport, is located just 5.5 km (3.4 miles) to the east of the city.
Education
Aksum University was established in May 2006 on a greenfield site, 4 km (2.5 mi) from Axum's central area. The inauguration ceremony was held on 16 February 2007 and the current area of the campus is 107 ha (260 acres), with ample room for expansion. The establishment of a university in Axum is expected to contribute much to the ongoing development of the country in general and of the region in particular.
Notable people
Abune Mathias (b. 1941), among his titles he is the "Archbishop of Axum"
Abay Tsehaye (1953–2021), politician and a founding member of the Tigray People's Liberation Front
Zera Yacob (1599–1692), philosopher
Zeresenay Alemseged (b. 1969), palaeoanthropologist and was Chair of the Anthropology Department at the California Academy of Sciences in San Francisco, United States
Gallery
See also
List of megalithic sites
List of World Heritage Sites in Ethiopia
Further reading
Francis Anfray. Les anciens ethiopiens. Paris: Armand Colin, 1991.
Yuri M. Kobishchanov. Axum (Joseph W. Michels, editor; Lorraine T. Kapitanoff, translator). University Park, Pennsylvania: University of Pennsylvania, 1979. ISBN 0-271-00531-9
David W. Phillipson. Ancient Ethiopia. Aksum: Its antecedents and successors. London: The British Museum, 1998.
David W. Phillipson. Archaeology at Aksum, Ethiopia, 1993–7. London: British Institute in Eastern Africa, 2000. ISBN 1-872566-13-8
Stuart Munro-Hay. Aksum: An African Civilization of Late Antiquity. Edinburgh: University Press. 1991. ISBN 0-7486-0106-6 online edition
Stuart Munro-Hay. Excavations at Aksum: An account of research at the ancient Ethiopian capital directed in 1972-74 by the late Dr Nevill Chittick London: British Institute in Eastern Africa, 1989 ISBN 0-500-97008-4
Sergew Hable Sellassie. Ancient and Medieval Ethiopian History to 1270 Addis Ababa: United Printers, 1972.
African Zion, the Sacred Art of Ethiopia. New Haven: Yale University Press, 1993.
J. Theodore Bent. The Sacred City of the Ethiopians: Being a Record of Travel and Research in Abyssinia in 1893. London: Longmans, Green and Co, 1894. online edition
Ethiopian Treasures — Queen of Sheba, Aksumite Kingdom — Aksum
Kingdom of Aksum article from "About Archaeology"
UNESCO – World Heritage Sites — Aksum
The Metropolitan Museum of Art — "Foundations of Aksumite Civilization and Its Christian Legacy (1st–7th century)"
On Axum
More on Axum
Axum from Catholic Encyclopedia
Final obelisk section in Ethiopia, BBC, 25 April 2005
Axum Heritage Site on Aluka digital library
Aksum World Heritage Site in panographies – 360 degree interactive imaging
Averest is a synchronous programming language and set of tools to specify, verify, and implement reactive systems. It includes a compiler for synchronous programs, a symbolic model checker, and a tool for hardware/software synthesis.
It can be used to model and verify finite- and infinite-state systems at various levels of abstraction. It is useful for hardware design, modeling communication protocols, concurrent programs, software in embedded systems, and more.
Its components include a compiler that translates synchronous programs into transition systems, a symbolic model checker, and a tool for hardware/software synthesis. Together these cover large parts of the design flow of reactive systems, from specification to implementation. Although the tools are part of a common framework, they are largely independent of one another and can be used with third-party tools.
See also
Synchronous programming language
Esterel
Averest Toolbox Official home site
Embedded Systems Group Research group that develops the Averest Toolbox
Aleph (or alef or alif, transliterated ʾ) is the first letter of the Semitic abjads, including Phoenician ʾālep 𐤀, Hebrew ʾālef א, Aramaic ʾālap 𐡀, Syriac ʾālap̄ ܐ, Arabic ʾalif ا, and North Arabian 𐪑. It also appears as South Arabian 𐩱 and Ge'ez ʾälef አ.
These letters are believed to have derived from an Egyptian hieroglyph depicting an ox's head to describe the initial sound of *ʾalp, the West Semitic word for ox (compare Biblical Hebrew אֶלֶף ʾelef, "ox"). The Phoenician variant gave rise to the Greek alpha (Α), being re-interpreted to express not the glottal consonant but the accompanying vowel, and hence the Latin A and Cyrillic А.
Phonetically, aleph originally represented the onset of a vowel at the glottis. In Semitic languages, this functions as a prosthetic weak consonant, allowing roots with only two true consonants to be conjugated in the manner of a standard three consonant Semitic root. In most Hebrew dialects as well as Syriac, the aleph is an absence of a true consonant, a glottal stop ([ʔ]), the sound found in the catch in uh-oh. In Arabic, the alif represents the glottal stop pronunciation when it is the initial letter of a word. In texts with diacritical marks, the pronunciation of an aleph as a consonant is rarely indicated by a special marking, hamza in Arabic and mappiq in Tiberian Hebrew. In later Semitic languages, aleph could sometimes function as a mater lectionis indicating the presence of a vowel elsewhere (usually long). When this practice began is the subject of some controversy, though it had become well established by the late stage of Old Aramaic (ca. 200 BCE). Aleph is often transliterated as U+02BE ʾ , based on the Greek spiritus lenis ʼ; for example, in the transliteration of the letter name itself, ʾāleph.
Origin
The name aleph is derived from the West Semitic word for "ox" (as in the Biblical Hebrew word Eleph (אֶלֶף) 'ox'), and the shape of the letter derives from a Proto-Sinaitic glyph that may have been based on an Egyptian hieroglyph, which depicts an ox's head.
In Modern Standard Arabic, the word أليف /ʔaliːf/ literally means 'tamed' or 'familiar', derived from the root ʔ-L-F, from which the verb ألِف /ʔalifa/ means 'to be acquainted with; to be on intimate terms with'. In modern Hebrew, the same root ʔ-L-P (alef-lamed-peh) gives me’ulaf, the passive participle of the verb le’alef, meaning 'trained' (when referring to pets) or 'tamed' (when referring to wild animals).
Ancient Egyptian
The Egyptian "vulture" hieroglyph (Gardiner G1), by convention pronounced [a]) is also referred to as aleph, on grounds that it has traditionally been taken to represent a glottal stop ([ʔ]), although some recent suggestions tend towards an alveolar approximant ([ɹ]) sound instead. Despite the name it does not correspond to an aleph in cognate Semitic words, where the single "reed" hieroglyph is found instead.
The phoneme is commonly transliterated by a symbol composed of two half-rings, in Unicode (as of version 5.1, in the Latin Extended-D range) encoded at U+A722 Ꜣ LATIN CAPITAL LETTER EGYPTOLOGICAL ALEF and U+A723 ꜣ LATIN SMALL LETTER EGYPTOLOGICAL ALEF. A fallback representation is the numeral 3, or the Middle English character ȝ Yogh; neither are to be preferred to the genuine Egyptological characters.
Aramaic
The Aramaic reflex of the letter is conventionally represented with the Hebrew א in typography for convenience, but the actual graphic form varied significantly over the long history and wide geographic extent of the language. Maraqten identifies three different aleph traditions in East Arabian coins: a lapidary Aramaic form that realizes it as a combination of a V-shape and a straight stroke attached to the apex, much like a Latin K; a cursive Aramaic form he calls the "elaborated X-form", essentially the same tradition as the Hebrew reflex; and an extremely cursive form of two crossed oblique lines, much like a simple Latin X.
Hebrew
Hebrew spelling: אָלֶף
In Modern Israeli Hebrew, the letter either represents a glottal stop ([ʔ]) or indicates a hiatus (the separation of two adjacent vowels into distinct syllables, with no intervening consonant). It is sometimes silent (word-finally always, word-medially sometimes: הוּא [hu] "he", רָאשִׁי [ʁaˈʃi] "main", רֹאשׁ [ʁoʃ] "head", רִאשׁוֹן [ʁiˈʃon] "first"). The pronunciation varies in different Jewish ethnic divisions.
In gematria, aleph represents the number 1, and when used at the beginning of Hebrew years, it means 1000 (e.g. א'תשנ"ד in numbers would be the Hebrew date 1754, not to be confused with 1754 CE).
Aleph, along with ayin, resh, he and heth, cannot receive a dagesh. (However, there are few very rare examples of the Masoretes adding a dagesh or mappiq to an aleph or resh. The verses of the Hebrew Bible for which an aleph with a mappiq or dagesh appears are Genesis 43:26, Leviticus 23:17, Job 33:21 and Ezra 8:18.)
In Modern Hebrew, the frequency of the usage of alef, out of all the letters, is 4.94%.
Aleph is sometimes used as a mater lectionis to denote a vowel, usually /a/. That use is more common in words of Aramaic and Arabic origin, in foreign names, and some other borrowed words.
Rabbinic Judaism
Aleph is the subject of a midrash that praises its humility in not demanding to start the Bible. (In Hebrew, the Bible begins with the second letter of the alphabet, bet.) In the story, aleph is rewarded by being allowed to start the Ten Commandments. (In Hebrew, the first word is אָנֹכִי, which starts with an aleph.)
In the Sefer Yetzirah, the letter aleph is king over breath, formed air in the universe, temperate in the year, and the chest in the soul.
Aleph is also the first letter of the Hebrew word emet (אֶמֶת), which means truth. In Judaism, it was the letter aleph that was carved into the head of the golem that ultimately gave it life.
Aleph also begins the three words that make up God's name in Exodus, I Am who I Am (in Hebrew, Ehyeh Asher Ehyeh אהיה אשר אהיה), and aleph is an important part of mystical amulets and formulas.
Aleph represents the oneness of God. The letter can be seen as being composed of an upper yud, a lower yud, and a vav leaning on a diagonal. The upper yud represents the hidden and ineffable aspects of God while the lower yud represents God's revelation and presence in the world. The vav ("hook") connects the two realms.
Judaism relates aleph to the element of air, and the Scintillating Intelligence (#11) of the path between Kether and Chokmah in the Tree of the Sephiroth.
Yiddish
In Yiddish, aleph is used for several orthographic purposes in native words, usually with different diacritical marks borrowed from Hebrew niqqud:
With no diacritics, aleph is silent; it is written at the beginning of words before vowels spelled with the letter vov or yud. For instance, oykh 'also' is spelled אויך. The digraph וי represents the initial diphthong [oj], but that digraph is not permitted at the beginning of a word in Yiddish orthography, so it is preceded by a silent aleph. Some publications use a silent aleph adjacent to such vowels in the middle of a word as well when necessary to avoid ambiguity.
An aleph with the diacritic pasekh, אַ, represents the vowel [a] in standard Yiddish.
An aleph with the diacritic komets, אָ, represents the vowel [ɔ] in standard Yiddish.Loanwords from Hebrew or Aramaic in Yiddish are spelled as they are in their language of origin.
Syriac
In the Syriac alphabet, the first letter is ܐ, Classical Syriac: ܐܵܠܲܦ, alap (in eastern dialects) or olaph (in western dialects). It is used in word-initial position to mark a word beginning with a vowel, but some words beginning with i or u do not need its help, and sometimes, an initial alap/olaph is elided. For example, when the Syriac first-person singular pronoun ܐܸܢܵܐ is in enclitic positions, it is pronounced no/na (again west/east), rather than the full form eno/ana. The letter occurs very regularly at the end of words, where it represents the long final vowels o/a or e. In the middle of the word, the letter represents either a glottal stop between vowels (but West Syriac pronunciation often makes it a palatal approximant), a long i/e (less commonly o/a) or is silent.
South Arabian/Ge'ez
In the Ancient South Arabian alphabet, 𐩱 appears as the seventeenth letter of the South Arabian abjad. The letter is used to render a glottal stop /ʔ/.
In the Ge'ez alphabet, ʾälef አ appears as the thirteenth letter of its abjad. This letter is also used to render a glottal stop /ʔ/.
Arabic
Written as ا or 𐪑, spelled as ألف or 𐪑𐪁𐪐 and transliterated as alif, it is the first letter in Arabic and North Arabian. Together with Hebrew aleph, Greek alpha and Latin A, it is descended from Phoenician ʾāleph, from a reconstructed Proto-Canaanite ʾalp "ox".
Alif is written in one of the following ways depending on its position in the word:
Arabic variants
Alif mahmūza: أ and إ
The Arabic letter was used to render either a long /aː/ or a glottal stop /ʔ/. That led to orthographical confusion and to the introduction of the additional marking hamzat qaṭ‘ ﺀ to fix the problem. Hamza is not considered a full letter in Arabic orthography: in most cases, it appears on a carrier, either a wāw (ؤ), a dotless yā’ (ئ), or an alif.
The choice of carrier depends on complicated orthographic rules. Alif إ أ is generally the carrier if the only adjacent vowel is fatḥah. It is the only possible carrier if hamza is the first phoneme of a word. Where alif acts as a carrier for hamza, hamza is added above the alif, or, for initial alif-kasrah, below it and indicates that the letter so modified is indeed a glottal stop, not a long vowel.
A second type of hamza, hamzat waṣl (همزة وصل) whose diacritic is normally omitted outside of sacred texts, occurs only as the initial letter of the definite article and in some related cases. It differs from hamzat qaṭ‘ in that it is elided after a preceding vowel. Alif is always the carrier.
Alif mamdūda: آ
The alif maddah is a double alif, expressing both a glottal stop and a long vowel. Essentially, it is the same as a أا sequence: آ (final ـآ) ’ā /ʔaː/, for example in آخر ākhir /ʔaːxir/ 'last'.
"It has become standard for a hamza followed by a long ā to be written as two alifs, one vertical and one horizontal." (the "horizontal" alif being the maddah sign).
Alif maqṣūrah: ى
The ى ('limited/restricted alif', alif maqṣūrah), commonly known in Egypt as alif layyinah (ألف لينة, 'flexible alif'), may appear only at the end of a word. Although it looks different from a regular alif, it represents the same sound /aː/, often realized as a short vowel. When it is written, alif maqṣūrah is indistinguishable from final Persian ye or Arabic yā’ as it is written in Egypt, Sudan and sometimes elsewhere.
The letter is transliterated as y in Kazakh, representing the vowel /ə/. Alif maqsurah is transliterated as á in ALA-LC, ā in DIN 31635, à in ISO 233-2, and ỳ in ISO 233.
In Arabic, alif maqsurah ى is not used initially or medially, and it is not joinable initially or medially in any font. However, the letter is used initially and medially in the Uyghur Arabic alphabet and the Arabic-based Kyrgyz alphabet, representing the vowel /ɯ/: (ىـ ـىـ).
Numeral
As a numeral, alif stands for the number one. It may be modified as follows to represent other numbers.
Other uses
Mathematics
In set theory, the Hebrew aleph glyph is used as the symbol to denote the aleph numbers, which represent the cardinality of infinite sets. This notation was introduced by mathematician Georg Cantor. In older mathematics books, the letter aleph is often printed upside down by accident, partly because a Monotype matrix for aleph was mistakenly constructed the wrong way up.
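A minimal notational sketch (standard set-theoretic facts, written here in LaTeX, where the symbol is produced by the \aleph command): the cardinality of the natural numbers is the smallest aleph number, and the aleph numbers form a strictly increasing sequence of infinite cardinals:
$|\mathbb{N}| = \aleph_0, \qquad \aleph_0 < \aleph_1 < \aleph_2 < \cdots, \qquad |\mathbb{R}| = 2^{\aleph_0}.$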
Character encodings
See also
ʾ
Al-
Aleph number
Arabic yāʼ
Hamzah
"The Letter Aleph (א)". Hebrew Today. Retrieved 2019-05-05. |
ALGOL W is a programming language. It is based on a proposal for ALGOL X by Niklaus Wirth and Tony Hoare as a successor to ALGOL 60. ALGOL W is a relatively simple upgrade of the original ALGOL 60, adding string, bitstring, complex number and reference to record data types and call-by-result passing of parameters, introducing the while statement, replacing switch with the case statement, and generally tightening up the language.
Wirth's entry was considered too little of an advance over ALGOL 60, and the more complex entry from Adriaan van Wijngaarden that would later become ALGOL 68 was selected in a highly contentious meeting. Wirth later published his version as A contribution to the development of ALGOL. With a number of small additions, this eventually became ALGOL W.
Wirth supervised a high quality implementation for the IBM System/360 at Stanford University that was widely distributed. The implementation was written in PL360, an ALGOL-like assembly language designed by Wirth. The implementation includes influential debugging and profiling abilities.
ALGOL W served as the basis for the Pascal language, and the syntax of ALGOL W will be immediately familiar to anyone with Pascal experience. The key differences are improvements to record handling in Pascal and, oddly, the loss of ALGOL W's ability to define the length of an array at runtime; the absence of this ability became one of Pascal's most-complained-about limitations.
Syntax and semantics
ALGOL W's syntax is built on a subset of the EBCDIC character encoding set. In ALGOL 60, reserved words are distinct lexical items, but in ALGOL W they are only sequences of characters, and do not need to be stropped. Reserved words and identifiers are separated by spaces. In these ways ALGOL W's syntax resembles that of Pascal and later languages.
The ALGOL W Language Description defines ALGOL W in an affix grammar that resembles Backus–Naur form (BNF). This formal grammar was a precursor of the Van Wijngaarden grammar. Much of ALGOL W's semantics is defined grammatically:
Identifiers are distinguished by their definition within the current scope. For example, a ⟨procedure identifier⟩ is an identifier that has been defined by a procedure declaration, a ⟨label identifier⟩ is an identifier that is being used as a goto label.
The types of variables and expressions are represented by affixes. For example ⟨τ function identifier⟩ is the syntactic entity for a function that returns a value of type τ, if an identifier has been declared as an integer function in the current scope then that is expanded to ⟨integer function identifier⟩.
Type errors are grammatical errors. For example, ⟨integer expression⟩ / ⟨integer expression⟩ and ⟨real expression⟩ / ⟨real expression⟩ are valid but distinct syntactic entities that represent expressions, but ⟨real expression⟩ DIV ⟨integer expression⟩ (i.e., integer division performed on a floating-point value) is an invalid syntactic entity.
Example
This demonstrates ALGOL W's record type facility.
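A minimal hedged sketch of the facility is given below, using an invented point record; in ALGOL W, a record designator such as point(3, 4) allocates a new record and yields a reference to it, and fields are selected with function-call syntax. Identifier spelling and lettercase follow common renderings of the language rather than any particular compiler listing.
begin
    comment a record class with two integer fields, and a reference variable bound to a new instance;
    record point (integer x, y);
    reference(point) p;
    p := point(3, 4);
    comment fields are selected like function calls, e.g. x(p) and y(p);
    write(x(p), y(p))
end.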
aw2c – ALGOL W compiler for Linux
awe – aw2c updated version
ALGOL W @ Everything2 – informal but detailed description of the language by a former user, with sidebars extolling ALGOL W over Pascal as an educational programming language
1969 ALGOL W compiler listing at bitsavers.org
The Michigan Terminal System Manuals, Volume 16: ALGOL W in MTS
ALGOL W materials More than 200 ALGOL W programs and documentation
Alma-0 is a multi-paradigm computer programming language, an augmented version of the imperative language Modula-2 with logic-programming features and convenient backtracking. It is small and strongly typed, and it combines constraint programming, a limited number of features inspired by logic programming, and imperative programming. The language advocates declarative programming. The designers claim that search-oriented solutions built with it are substantially simpler than their counterparts written in a purely imperative or logic programming style. Alma-0 provides natural, high-level constructs for building search trees.
Overview
Since the designers of Alma-0 wanted to create a distinct and substantially simpler proposal than prior attempts to integrate declarative programming constructs (such as automatic backtracking) into imperative programming, the design of Alma-0 was guided by four principles:
The logic-based extension should be downward compatible with the underlying imperative programming language
The logic-based extension should be upward compatible with a future extension that will support constraint programming
The constructs that will implement the extension should support and encourage declarative programming
The extension should be kept small: nine new features have been proposed and implemented.
Alma-0 can be viewed not only as a specific and concrete programming language proposal, but also as an example of a generic method for extending any imperative programming language with features that support declarative programming.
The feasibility of the Alma-0 approach has been demonstrated through a full implementation of the language (including a description of its semantics) for a subset of Modula-2.
Features
The implemented features in Alma-0 include:
Use of boolean expressions as statements and vice versa
A dual for the FOR statement that introduces non-determinism in the form of choice points and backtracking (see the sketch after this list)
A FORALL statement that introduces a controlled form of iteration over the backtracking
Unification which, although limited to the use of equality as assignment, yields a new parameter-passing mechanism.
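As a rough illustration of the first two features, the sketch below is modelled on the constructs described in the Alma-0 papers; the exact surface syntax may differ from the actual implementation, and the procedure and identifiers are invented for illustration:
PROCEDURE Find(a: ARRAY OF INTEGER; x: INTEGER; VAR i: INTEGER);
BEGIN
  (* SOME is the nondeterministic dual of FOR: it creates a choice point and
     succeeds if the body succeeds for some value of i *)
  SOME i := 0 TO HIGH(a) DO
    a[i] = x    (* a boolean expression used as a statement; failure triggers backtracking *)
  END
END Find;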
Imperative and logic programming modes
The Alma-0 designers claim that assignment, which is usually shunned in pure declarative and logic programming, is actually needed in a number of natural situations, including counting and recording purposes. They also argue that the ways such "natural" uses of assignment must be expressed within the logic programming paradigm are unnatural.
Jacob Brunekreef (1998). "Annotated Algebraic Specification of the Syntax and Semantics of the Programming Language Alma-0".
Krzysztof R. Apt, Jacob Brunekreef, Vincent Partington, Andrea Schaerf (1998). "Alma-0: An Imperative Language that Supports Declarative Programming".
Krzysztof R. Apt, Andrea Schaerf (1998). "Programming in Alma-0, or Imperative and Declarative Programming Reconciled".
Krzysztof R. Apt, Andrea Schaerf (1998). "Integrating Constraints into an Imperative Programming Language".
Krzysztof R. Apt, Andrea Schaerf (1999). "The Alma Project, or How First-Order Logic Can Help Us in Imperative Programming".
AmbientTalk is an experimental object-oriented distributed programming language developed at the Programming Technology Laboratory at the Vrije Universiteit Brussel, Belgium. The language is primarily targeted at writing programs deployed in mobile ad hoc networks.
AmbientTalk is meant to serve as an experimentation platform to experiment with new language features or programming abstractions to facilitate the construction of software that has to run in highly volatile networks exhibiting intermittent connectivity and little infrastructure. It is implemented in Java which enables interpretation on various platforms, including Android. The interpreter standard library also provides a seamless interface between Java and AmbientTalk objects, called the symbiosis.
The language's concurrency features, which include support for futures and event-loop concurrency, are founded on the actor model and have been largely influenced by the E programming language. The language's object-oriented features find their influence in languages like Smalltalk (i.e. block closures, keyworded messages) and Self (prototype-based programming, traits, delegation).
Hello world
system.println("Hello world");
The classical "Hello, World!" program is not very representative of the language features. However, consider its distributed version:
AmbientTalk official site
Open-source interpreter
Amiga E is a programming language created by Wouter van Oortmerssen on the Amiga computer. The work on the language started in 1991 and was first released in 1993. The original incarnation of Amiga E was being developed until 1997, when the popularity of the Amiga platform dropped significantly after the bankruptcy of Amiga intellectual property owner Escom AG.
According to Wouter van Oortmerssen: "It is a general-purpose programming language, and the Amiga implementation is specifically targeted at programming system applications. [...]" In his own words: "Amiga E was a tremendous success, it became one of the most popular programming languages on the Amiga."
Overview
Amiga E combines features from several languages but follows the original C programming language most closely in terms of basic concepts. Amiga E's main benefits are fast compilation (allowing it to be used in place of a scripting language), very readable source code, a flexible type system, a powerful module system, exception handling (not the C++ variant), and object-oriented programming. Amiga E was used to create the core of the popular Amiga graphics software Photogenics.
"Hello, world" example
A "hello world" program in Amiga E looks like this:
PROC main()
WriteF('Hello, World!')
ENDPROC
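Beyond this minimal program, the following hedged sketch illustrates Amiga E's OBJECT types and formatted output; the point object and its fields are invented for illustration:
OBJECT point
  x, y                            -> untyped fields default to LONG
ENDOBJECT

PROC main()
  DEF p:PTR TO point
  NEW p                           -> allocate and zero the object
  p.x := 3
  p.y := 4
  WriteF('x=\d, y=\d\n', p.x, p.y)
  END p                           -> free memory allocated with NEW
ENDPROC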
History
1993: The first public release of Amiga E; the first release on Aminet was in September, although the language's source code had been published on the Amiga E mailing list since at least May.
1997: The last version of Amiga E (3.3a) is released.
1999: An unlimited compiler executable of Amiga E is released.
1999: The source code of the Amiga E compiler, written in m68k assembly, is released under the GPL.
Implementations and derivatives
Discontinued
Amiga E
The first compiler. It was written by Wouter van Oortmerssen in the m68k assembler. It supports tools that are written in E. The compiler generates 68000 machine code directly.
Platforms: AmigaOS and compatibles.
Targets: Originally AmigaOS with 68000 CPU, but has modules that can handle 68060 architecture.
Status: Stable, mature, discontinued, source available, freeware.
CreativE
It was created by Tomasz Wiszkowski. It is based on the GPL sources of Amiga E and adds many extensions to the compiler.
Platforms: AmigaOS and compatibles.
Targets: Like Amiga E, plus some limited support for the last generations of m68k CPUs.
Status: Stable, mature, discontinued in 2001, source available, freeware.
PowerD
It was created by Martin Kuchinka, who cooperated with Tomasz Wiszkowski in the Amiga development group "The Blue Suns." It is derived from the Amiga E and CreativE languages but is incompatible with the former due to syntax changes.
Platforms: AmigaOS and compatibles.
Targets: AmigaOS 3.0 or newer; at least 68020 CPU+FPU or PowerPC (PPC); and 4MB of RAM.
Status: Stable, mature, closed source, freeware. The project has been dormant since 2010.
YAEC
Written from scratch in Amiga E by Leif Salomonsson and published in 2001. It uses an external assembler and linker. The project was abandoned in favor of ECX.
Platforms: AmigaOS and compatibles.
Targets: AmigaOS 3.0 with 68020 CPU and FPU.
Status: Obsolete, unfinished, discontinued, closed source, freeware.
ECX
A compiler and tools written from scratch by Leif Salomonsson in Amiga E, with internal functions developed in m68k and PPC assemblers. It can compile itself, supports multiple targets, and adds many extensions.
Platforms: AmigaOS compatibles and derivatives.
Targets: AmigaOS 3.0, AmigaOS 4, and MorphOS with m68k or PPC architecture.
Status: Stable, mature, open source, freeware. The project has been dormant since 2013.
RE
RE was created by Marco Antoniazzi in PowerD. It is not fully compatible with the Amiga E.
Platforms: AmigaOS and compatibles.
Targets: AmigaOS 3.0 68020 CPU+FPU; PPC.
Status: Stable, closed source, freeware. Dormant since 2008.
Under development
Portabl E
Created by Christopher Handley. It is a meta-compiler written from scratch in Amiga E. It can compile itself and supports multiple targets.
Platforms: AmigaOS (m68k), AmigaOS 4 (PPC), AROS, MorphOS, Linux, and Windows,
Targets: C++ and Amiga E. The Amiga E code is compatible with CreativE, and with proper settings, it can be compatible with the ECX compiler.
Status: Stable, mature, under development, closed source, freeware.
E-VO
It is a derivative of the Amiga E compiler, written by Darren Coles. It expands upon the original language and incorporates features from the CreativE compiler.
Platforms: AmigaOS and compatibles.
Targets: Like Amiga E; AmigaOS with 68000 and 020+ CPU.
Status: Stable, mature, under development, source available, freeware.
Amiga E home page
A Beginner's Guide to Amiga E – Published by Jason R. Hulance in 1997
The original Amiga E manual (for v3.3a) – Compiler manual written by Wouter van Oortmerssen.
Amiga E mailing list
Amiga E packages on Aminet
PortablE home page (a free Windows & Amiga-compatibles implementation)
E-VO compiler on GitHub
How to code in Amiga E – Written by Jan Stötzer, a member of the Amiga Zentrum Thueringen and Neuhaus13 groups. The tutorial was published in 1994 on Aminet in AmigaGuide document format; lha archive.
Amiga E object-oriented framework - Published by Damien Guichard in 1996.
Absolute Beginners Amiga E code examples – Published by Edward Farrow in 1997.
Beginner's Guide to Amiga E – Published by Jason R. Hulance in 1997; AmigaGuide format.
System programming with Amiga E – Published by Damien Guichard in 2007; AmigaGuide format.
Arc may refer to:
Mathematics
Arc (geometry), a segment of a differentiable curve
Circular arc, a segment of a circle
Arc (topology), a segment of a path
Arc length, the distance between two points along a section of a curve
Arc (projective geometry), a particular type of set of points of a projective plane
arc (function prefix) (arcus), a prefix for inverse trigonometric functions
Directed arc, a directed edge in graph theory
Minute and second of arc, a unit of angular measurement equal to 1/60 of one degree.
Wild arc, a concept from geometric topology
Science and technology
Geology
Arc, in geology a mountain chain configured as an arc due to a common orogeny along a plate margin or the effect of back-arc extension
Hellenic arc, the arc of islands positioned over the Hellenic Trench in the Aegean Sea off Greece
Back-arc basin, a subsided region caused by back-arc extension
Back-arc region, the region created by back-arc extension, containing all the basins, faults, and volcanoes generated by the extension
Island arc, an arc-shaped archipelago, usually so configured for geologic causes, such as sea-floor spreading, common orogeny on the margin of the same plate, or back-arc extension
Northeastern Japan Arc, an island arc
Banda Arc, a set of island arcs in Indonesia
Continental arc, in geology a continental mountain chain or parallel alignment of chains (as opposed to island arcs), configured in an arc
Eastern Arc Mountains, a continental arc of Africa
Volcanic arc, a chain of volcanoes positioned in an arc shape as seen from above
Aleutian Arc, a large volcanic arc in the U.S. state of Alaska
Nastapoka arc, a circular coastline in Hudson Bay
Technology
arc, the command-line interface for ArcInfo
ARC (file format), a file name extension for archive files
ARC (processor), 32-bit RISC architecture
ARC (adaptive replacement cache), a page replacement algorithm for high-performance filesystems
Arc (browser), a freeware browser developed by The Browser Company
Arc (programming language), a Lisp dialect designed by Paul Graham
Sony Ericsson Xperia Arc, a cellphone
Audio Return Channel, an audio technology working over HDMI
Authenticated Received Chain, an email authentication system
Arc lamp, a lamp that produces light by an electric arc
Xenon arc lamp, a highly specialized type of gas discharge lamp
Deuterium arc lamp, a low-pressure gas-discharge light source
Hydrargyrum medium-arc iodide lamp, the trademark name of Osram's brand of metal-halide gas discharge medium arc-length lamp
Electric arc furnace, a furnace that heats charged material by means of an electric arc
Arc welding, a welding process that is used to join metal to metal
Arc-fault circuit interrupter, a specialized circuit breaker
Arc converter, a spark transmitter
Intel Arc, brand of graphics processing units designed by Intel
Other science
Electric arc, an ongoing plasma discharge (an electric current through a gas), producing light and heat
Arc flash, the light and heat produced as part of an arc fault
Arc (protein), a name of product of an immediate early gene, also called Arg3.1
Reflex arc, a neural pathway that controls a reflex
Circumhorizontal arc, an optical phenomenon
Circumzenithal arc, an optical phenomenon
Arts and entertainment
Music
The Arcs, an American garage rock band formed by Dan Auerbach in 2015
A.R.C. (album), by pianist Chick Corea with bassist David Holland and drummer Barry Altschul recorded in 1971
Arc (Neil Young & Crazy Horse album), 1991
Arc (Everything Everything album), 2013
Arc (EP), a 2016 EP by Agoraphobic Nosebleed
"Arc", a song by Pearl Jam from Riot Act
Video games
Arc System Works, a video game developer
Luminous Arc, a video game series
The title character of Arc the Lad, a series of role-playing video games for the PlayStation and PlayStation 2
Armored Response Coalition, a fictional military/resistance organization seen in the 2020 video game “DOOM Eternal”
Other arts and entertainment
Tilted Arc, a controversial public art installation by Richard Serra
Arc Poetry Magazine, a Canadian literary journal
Character arc, the status of a character as it unfolds throughout the story
Story arc, an extended or continuing storyline
Arcs, one of the twelve basic principles of animation
Codes
Arcata Transit Center, Amtrak code for the station in Arcata, California
IATA airport code of Arctic Village Airport, a public use airport in Alaska
ISO 639-2 and -5 language code of the Aramaic language, a Semitic language
ISO 639-3 language code of the Official Aramaic language, spoken between 700 BCE and 300 BCE
Companies and organizations
Arc Infrastructure, an Australian railway company
Arc International, a French manufacturer and distributor of household goods
Arc Publications, a UK independent publisher of poetry
Arc @ UNSW, the principal student organisation at the University of New South Wales
Arc of the United States, a charitable organization serving people with intellectual and developmental disabilities
Arc Holdings, a French manufacturer of household goods
Places
Arc (Provence), a river of southern France, flowing into the Étang de Berre
Arc (Savoie), a river of eastern France, tributary of the Isère river
Les Arcs, a ski resort in the French Alps
Arc, short for "Arcade"; a Street suffix as used in the US
Other uses
Arc (Bahá'í), a number of administrative buildings for the Bahá'í Faith, located on Mount Carmel in Israel
Arc (greyhounds), a major greyhound race in the Greyhound Board of Great Britain calendar
See also
Joan of Arc (c. 1412–1431), national heroine of France and Catholic saint
ARC (disambiguation)
Arc Angel (disambiguation)
Arc reactor (disambiguation)
Arch (disambiguation)
Arch of Triumph (disambiguation)
Ark (disambiguation)
ARexx is an implementation of the Rexx language for the Amiga, written in 1987 by William S. Hawes, with a number of Amiga-specific features beyond standard REXX facilities. Like most REXX implementations, ARexx is an interpreted language. Programs written for ARexx are called "scripts", or "macros"; several programs offer the ability to run ARexx scripts in their main interface as macros.
ARexx can easily communicate with third-party software that implements an "ARexx port". Any Amiga application or script can define a set of commands and functions for ARexx to address, thus making the capabilities of the software available to the scripts written in ARexx.
ARexx can direct commands and functions to several applications from the same script, thus offering the opportunity to mix and match functions from the different programs. For example, an ARexx script could extract data from a database, insert the data into a spreadsheet to perform calculations on it, then insert tables and charts based on the results into a word processor document.
History
ARexx was first created in 1987, developed for the Amiga by William S. Hawes. It is based on the REXX language described by Mike Cowlishaw in the book The REXX Language: A Practical Approach to Programming. ARexx was included by Commodore with AmigaOS 2.0 in 1990, and has been included with all subsequent AmigaOS releases. This later version of ARexx follows the official REXX language closely; Hawes was later involved in drafting the ANSI standard for REXX.
ARexx is written in 68000 assembly and therefore cannot function at full speed on newer PPC CPUs; it has not been rewritten for them and is still missing from MorphOS 3.0. William Hawes is no longer involved in the development of Amiga programs, and no other Amiga-related firm is financing new versions of ARexx. Notwithstanding this, the existing version of ARexx continues to be used, although it is not distributed with MorphOS.
From the ARexx manual:
ARexx was developed on an Amiga 1000 computer with 512k bytes of
memory and two floppy disk drives. The language prototype was
developed in C using Lattice C, and the production version was written
in assembly-language using the Metacomco assembler. The documentation
was created using the TxEd editor, and was set in TeX using AmigaTeX.
This is a 100% Amiga product.
Characteristics
ARexx is a programming language that can communicate with other applications. Using ARexx, for example, one could request data from a database application and send it to a spreadsheet application. To support this facility, an application must be "ARexx compatible", meaning it can receive commands from ARexx and execute them. A database program might have commands to search for, retrieve, and save data (the MicroFiche Filer database has an extensive ARexx command set), and a text editor might have ARexx commands corresponding to its editing command set (the Textra editor supplied with JForth can be used to provide an integrated programming environment). The AmigaVision multimedia presentation program also has an ARexx port built in and can control other programs through ARexx.
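The fragment below is only a sketch of what driving such a host looks like; the port name EDITPORT and the OPEN, GETLINE, and SAVE commands are invented for illustration rather than taken from any real program.

/* Sketch of addressing an application's ARexx port (names are invented). */
options results                 /* ask the host to return replies in RESULT */
address 'EDITPORT'              /* subsequent commands go to this port      */
'OPEN "RAM:notes.txt"'          /* the command string is parsed by the host */
if rc ~= 0 then do
   say 'OPEN failed with return code' rc
   exit 10
end
'GETLINE 1'                     /* a host command that returns a value      */
say 'First line of the file:' result
'SAVE'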
ARexx can increase the power of a computer by combining the capabilities of various programs. Because of the popularity of the stand-alone ARexx package, Commodore included it with Release 2 of AmigaOS.
Like all REXX implementations, ARexx uses a typeless data representation. Whereas other programming languages distinguish between integers, floating-point numbers, strings, characters, vectors, and so on, REXX systems treat all data as strings of characters, which makes expressions and algorithms simpler to write.
As is often the case in dynamic languages, variables are not declared before use; they come into existence the first time they are used.
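Both points can be seen in a few lines of plain REXX, which run unchanged under ARexx:

/* Values are character strings; variables need no declaration. */
a = "2"
b = 3
say a + b             /* arithmetic on numeric strings: prints 5  */
say a || b            /* string concatenation: prints 23          */
greeting = 'hello'    /* the variable comes into being right here */
say length(greeting)  /* prints 5                                 */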
ARexx scripts benefit from a built-in error-handling system that monitors execution and responds to errors and other exceptional conditions; the programmer can choose to trap these conditions and to suspend or resume execution of the program as needed.
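A sketch of this mechanism using the standard REXX SIGNAL ON facility follows; the AmigaDOS command is only an example of something that might fail.

/* Sketch of trapping errors in an ARexx script. */
signal on error                  /* jump to the ERROR: label if a host command fails  */
signal on syntax                 /* jump to the SYNTAX: label on an interpreter error */

address command 'copy RAM:missing.file RAM:T'    /* may fail and raise ERROR */
say 'finished normally'
exit 0

error:
syntax:
say 'Problem at line' sigl 'with return code' rc
exit 10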
The ARexx command set is simple, but in addition to the commands there are the functions of its Amiga reference library (rexxsyslib.library). It is also easy to add other libraries or individual functions. ARexx scripts can also be invoked as functions from other ARexx scripts. Any Amiga program which has an ARexx port built in can share its functions with ARexx scripts.
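For example, the usual idiom for making the functions of the standard support library available from a script looks roughly like this (a sketch; -30 is the conventional library offset):

/* Sketch: add a function library at run time, then use one of its functions. */
if ~show('L', 'rexxsupport.library') then
   call addlib('rexxsupport.library', 0, -30, 0)
say showlist('P')                /* showlist() lists the public message ports */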
Examples of ARexx solutions to common problems
Implementing new features and capabilities via scripts
If an end user has a program that builds animations by joining various bitmap image files but lacks image-processing capabilities, they could write an ARexx script that performs the following actions (an illustrative sketch of such a script appears after the list):
ARexx locates the image files in their directories
ARexx loads the first image
ARexx loads the paint program
The image is loaded into the paint program, which applies the modifications
The modified image is stored in another directory
ARexx repeats the procedure for every image in the directory
The paint program is closed and the animation program is loaded
The animation is built
The animation is saved in its directory
The animation program is closed
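A compressed sketch of such a script is shown below. The port names PAINTPORT and ANIMPORT and their commands (LOADPIC, DITHER, SAVEPIC, ADDFRAMES, SAVEANIM, QUIT) are invented for illustration and do not correspond to any particular paint or animation program; file names are assumed to contain no spaces.

/* Hypothetical batch sketch: port and command names are invented. */
if ~show('L', 'rexxsupport.library') then
   call addlib('rexxsupport.library', 0, -30, 0)   /* needed for showdir() */
source = 'Work:frames'
dest   = 'Work:processed'
files  = showdir(source, 'FILE', ' ')              /* space-separated file names */

address 'PAINTPORT'                                /* assumed paint-program port */
do i = 1 to words(files)
   name = word(files, i)
   'LOADPIC "' || source || '/' || name || '"'
   'DITHER'                                        /* assumed image operation    */
   'SAVEPIC "' || dest || '/' || name || '"'
end
'QUIT'

address 'ANIMPORT'                                 /* assumed animation-program port */
'ADDFRAMES "' || dest || '"'
'SAVEANIM "Work:result.anim"'
'QUIT'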
Avoiding repetitive procedures
EqFiles.rexx is a well-known example of a simple ARexx script written to automate repetitive and tedious procedures. The script uses the ALeXcompare program to compare files, finds all duplicates in a set of files, and highlights the results in its output in a different color.
Expanding AmigaOS capabilities
One of the main features of ARexx is that it can extend the capabilities of AmigaOS by adding procedures the operating system lacks. For example, a simple ARexx program could print a warning message on the screen, or play an audio alert, when a particular Amiga program stops, faults, or finishes its scheduled job.
A minimal ARexx script of this kind displays a warning when a given event takes place.
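The version below is a sketch only; it assumes the monitored program invokes the script with an event name (and, optionally, a message) as its arguments.

/* warn.rexx - minimal sketch: report a named event with a message and a beep. */
parse arg event message
if event = '' then event = 'unknown event'
if message = '' then message = 'the monitored program has stopped or finished'
say 'WARNING (' || event || '): ' || message
say '07'x                        /* ASCII BEL; may flash or beep the console */
exit 0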
See also
REXX
Beginning ARexx Tutorial
Command and Function Reference
Design Tool |
Argus is the Latinized form of the Ancient Greek word Argos. It may refer to:
Greek mythology
See Argus (Greek myth) for mythological characters named Argus
Argus (king of Argos), son of Zeus (or Phoroneus) and Niobe
Argus (son of Arestor), builder of the ship Argo in the tale of the Argonauts
Argus Panoptes (Argus "All-Eyes"), a giant with a hundred eyes
Argus, the eldest son of Phrixus and Chalciope
Argus, the son of Phineus and Danaë, in a variant of the myth
Argus or Argos (dog), belonging to Odysseus
Argus or Argeus (king of Argos), son of Megapenthes
Argus, one of Actaeon's dogs
Argus, a watchful guardian
Arts and entertainment
Fictional entities
Argus (comics), in the DC Comics Universe
Argus (Mortal Kombat), a deity
ARGUS (Splinter Cell), a military contractor
A.R.G.U.S., a government agency in the DC Universe
Argus Filch, in the Harry Potter series
Argus, a planet in the Warcraft franchise
Argus, a hero in Mobile Legends: Bang Bang
Argus, in the video game Shadow of the Colossus
KNRB-0 Argus, a weapons platform in the game Vanquish
The Manhattan Argus, a newspaper in the film The Hudsucker Proxy
Games
Argus (video game), a 1986 game by NMK
Argus no Senshi, the original Japanese title for the arcade game Rygar
Music
Argus (album), a 1972 album by Wishbone Ash
"The Argus", a song by Ween from the album Quebec
Television
"Argus" (30 Rock), a 2010 episode
Argus (TV series), a Norwegian TV debate series that aired between 1993 and 1994
Businesses
Argus (camera company), a camera manufacturer
Argus Brewery, a brewing company located in Chicago, Illinois
Argus Corporation, a Canadian holding company
Argus Media, a business information company
Argos (retailer), a British catalogue retailer
Places
Iran
Argus, Iran, a village in Kerman Province
Spain
Argos (river), a river in the region of Murcia
United States
Argus, California, an unincorporated community
Argus, Pennsylvania, an unincorporated community
Argus Range, a mountain range in Inyo County, California
Publishing
See The Argus (disambiguation) for publications named "The Argus"
United Kingdom
The Argus (Brighton), a newspaper serving Brighton and Hove, England; a member of the Newsquest Media Group
South Wales Argus, published in Newport, South Wales; a member of the Newsquest Media Group
Argus Press, a British publishing company
Telegraph and Argus, a newspaper serving Bradford and surrounding areas
United States
Barre Montpelier Times Argus, a daily morning newspaper serving the capital region of Vermont
Carlsbad Current-Argus, a New Mexico newspaper
Livingston County Daily Press & Argus, a newspaper that covers Livingston County, Michigan
The Dispatch / The Rock Island Argus, American newspaper that covers the Quad Cities in Illinois and Iowa
Argus Leader, American newspaper that covers Sioux Falls, South Dakota
Argus, a newspaper in Albany, New York, which long functioned as the organ of the Albany Regency
Argus, Midwood High School's school newspaper
Elsewhere
The Argus (Dundalk), a newspaper serving Dundalk, Ireland; a member of the Independent News & Media group also known as Independent.ie
The Argus (Melbourne), former Australian newspaper of record, established in 1846 and closed in 1957
Cape Argus, a newspaper printed in Cape Town, South Africa
Weekend Argus, a newspaper in South Africa, owned by Independent News & Media
Goondiwindi Argus, a newspaper in Goondiwindi, Queensland, Australia, owned by Fairfax Media
Sport
Argus finals system, used in Australian rules football in the early 20th century
Cape Argus Cycle Race in South Africa, colloquially referred to as "The Argus"
Science and technology
Biology
Argus (bird), pheasants from the genera Argusianus and Rheinartia
Argus butterflies, including:
Nymphalidae, e.g., Erebia, Junonia
Polyommatinae (Lycaenidae), e.g., as Aricia, Plebeius, Polyommatus
Theclinae (Lycaenidae): the invalid genus Argus (described by Gerhard, 1850), now in Satyrium
Argus monitor (Varanus panoptes), a species of lizard
Scatophagus argus, a species of fish of the family Scatophagidae
Terebra argus, a mollusk of the family Terebridae
Electronics and computing
Argus (monitoring software), a network and systems monitoring application
Argus (programming language), an extension of the CLU language
Argus - Audit Record Generation and Utilization System, a network auditing system
Argus retinal prosthesis, a bionic eye implant manufactured by Second Sight
Ferranti Argus, a line of industrial control computers
ARGUS-IS, a surveillance system produced by BAE Systems
Honeywell ARGUS, a low-level computer programming language
Oracle Argus Safety, a pharmacovigilance system from Oracle Health Sciences
Other uses in science and technology
Argus (camera company), a brand of camera
Argus Coastal Monitoring, a video system for observing coastal processes and related phenomena
ARGUS distribution, a probability distribution used in particle physics
Argus Motoren, a German aircraft engine manufacturing firm
ARGUS reactor, a nuclear reactor at the Russian Kurchatov Institute
Operation Argus, a 1958 US military effort to create orbital electron belts using atomic bombs
ARGUS (experiment), a particle physics experiment at DESY
Project Argus, a project to search for extraterrestrial intelligence
Vehicles
Argus (automobile), a German automobile manufactured between 1901 and 1909
Aircraft
Fairchild Argus, a British version of the UC-61 Forwarder transport aircraft
Canadair CP-107 Argus, a Royal Canadian Air Force maritime patrol aircraft
Saab 340 AEW&C, designated the S 100B Argus by the Swedish Air Force
Named vessels
French brig Argus (1800), a French naval ship that took part in the Battle of Trafalgar
HMS Argus, the name of many ships in the British Royal Navy
RFA Argus (A135), a 1981 Primary Casualty Receiving Ship in Britain's Royal Fleet Auxiliary
USS Argus, various ships of the United States Navy
SS Argus, a steel-hulled ship lost in the Great Lakes Storm of 1913
See also
The Argus (disambiguation), the name of several newspapers
Argos (disambiguation)
All pages with titles beginning with Argus
All pages with titles containing Argus |
RStudio is an integrated development environment for R, a programming language for statistical computing and graphics. It is available in two formats: RStudio Desktop is a regular desktop application while RStudio Server runs on a remote server and allows accessing RStudio using a web browser.
Licensing model
The RStudio integrated development environment (IDE) is available with the GNU Affero General Public License version 3. The AGPL v3 is an open source license that guarantees the freedom to share the code.
RStudio Desktop and RStudio Server are both available in free and fee-based (commercial) editions. OS support depends on the format/edition of the IDE. Prepackaged distributions of RStudio Desktop are available for Windows, macOS, and Linux. RStudio Server and Server Pro run on Debian, Ubuntu, Red Hat Linux, CentOS, openSUSE and SLES.
Overview and history
The RStudio IDE is written partly in the C++ programming language and uses the Qt framework for its graphical user interface; a larger share of the code is written in Java, and JavaScript is also used. Work on the RStudio IDE started around December 2010, and the first public beta version (v0.92) was officially announced in February 2011. Version 1.0 was released on 1 November 2016, and version 1.1 on 9 October 2017.
In April 2018, RStudio PBC (at the time RStudio, Inc.) announced that it would provide operational and infrastructure support to Ursa Labs, in support of the lab's focus on building a new data science runtime powered by Apache Arrow.
In April 2019, RStudio PBC (at the time RStudio, Inc.) released a new product, the RStudio Job Launcher, an adjunct to RStudio Server. The launcher can start processes within various batch-processing systems (e.g. Slurm) and container orchestration platforms (e.g. Kubernetes). This functionality is only available in RStudio Server Pro, the fee-based edition.
Packages
In addition to the RStudio IDE, RStudio PBC and its employees develop, maintain, and promote a number of R packages. These include:
Tidyverse – R packages for data science, including ggplot2, dplyr, tidyr, and purrr
Shiny – a web application framework for building interactive web applications with R
RMarkdown – Markdown documents make it easy for users to mix text with code of different languages, most commonly R. However, the platform supports mixing R with Python, shell scripts, SQL, Stan, JavaScript, CSS, Julia, C, Fortran, and other languages in the same RMarkdown document.
flexdashboard – publish a group of related data visualizations as a dashboard
TensorFlow – an R interface to TensorFlow, an open-source software library for machine learning
Tidymodels – install and load tidyverse packages related to modeling and analysis
Reticulate – provides a comprehensive set of tools for interoperability between Python and R.
knitr – dynamic reports combining R, TeX, Markdown & HTML
packrat – package dependency tool
devtools – a package development tool that also helps install R packages from GitHub
sf – support for simple features, a standardized way to encode spatial vector data. Binds to 'GDAL' for reading and writing data, to 'GEOS' for geometrical operations, and to 'PROJ' for projection conversions and datum transformations.
Addins
The RStudio IDE provides a mechanism for executing R functions interactively from within the IDE through the Addins menu. This enables packages to include Graphical User Interfaces (GUIs) for increased accessibility. Popular packages that use this feature include:
bookdown – a knitr extension to create books
colourpicker – a graphical tool to pick colours for plots
datasets.load – a graphical tool to search and load datasets
googleAuthR – a tool to authenticate with Google APIs
Development
The RStudio IDE is developed by Posit, PBC, a public-benefit corporation founded by J. J. Allaire, creator of the programming language ColdFusion. Posit has no formal connection to the R Foundation, a not-for-profit organization located in Vienna, Austria, which is responsible for overseeing development of the R environment for statistical computing. Posit was formerly known as RStudio, Inc.; in July 2022 the company announced that it had changed its name to Posit to reflect its broadening focus on programming languages beyond R, such as Python.
See also
R interfaces
Comparison of integrated development environments
Official website |