content
string
openbmb-fasttext-classifier-score
float64
source
string
fineweb-edu-classifier-score
float64
Science Misconceptions – How Should Teachers Deal With Them?
We’ve all heard or expressed the common teacher refrain or some variation of “I taught it to them so many times and in so many different ways and yet they still got it wrong on the exam!” It’s frustrating and hard to comprehend how something which may have been thoroughly and skillfully taught, and by all indications well understood by the students, just doesn’t take hold. Perhaps what is happening is that we are trying to teach something that contradicts the students’ existing erroneous conceptions on the subject. Unfortunately such existing misconceptions have more “sticking” power and often remain as the student’s dominant explanation.
For example, if you ask your Secondary 2 students to explain why summers are warmer than winters, you may often get the explanation that in summer the Sun is closer to the Earth than in winter. Many teachers have found that even if you take them through a teaching unit which explains the seasons as the result of the tilt of the earth’s axis, students will often remain faithful to their original misconception that seasons are a result of the earth’s relative proximity to (or of a possible variation of the intensity of) the sun.
Dr. Patrice Potvin, a science education professor at the Université du Québec à Montréal (UQAM), has done considerable research into student misconceptions in science (more correctly referred to as alternate conceptions!). He has studied the nature of these conceptions with an eye to helping teachers help their students deal with them and direct them to more acceptable scientific understandings. But he has discovered, as so many science teachers have too, that student misconceptions can be very tenacious. Dr. Potvin notes that “a growing number of studies have argued that many frequent non-scientific conceptions (sometimes designated as “misconceptions”) will not vanish or be recycled during learning, but will on the contrary survive or persist in learners’ minds even though these learners eventually become able to produce scientifically correct answers” (Potvin et al., 2015).
What then can teachers do in the classroom to mitigate the learning obstacles presented by these misconceptions? Dr. Potvin has recently done research in which he has exposed students in different science disciplines and of different ages to “treatments”. In all cases students were given a pre-test, then exposed to a “treatment”, i.e. a teaching situation designed to teach the correct concept, and then a post-test to see if the initial misconception had changed for the better. In one study of Grade 5 and 6 students, for example, he tackled the factors which influence an object’s buoyancy in water – trying to steer them away from the erroneous idea that size or weight alone determine buoyancy. In another study of physics students he worked to correct incorrect notions of electric currents – that a single wire can light a bulb or that a bulb consumes current, for example. Both of these studies involved large numbers of students, rigorous experimental methodology and sophisticated statistical analysis to determine whether or not the results were significant. The results, which confirmed the tenacity of student misconceptions, were written up in peer-reviewed journals.
Dr. Potvin’s research makes a couple of suggestions to teachers: - Be aware that initial misconceptions may persist and so teach with durability in mind. 
- Provoke “conceptual conflicts” by giving examples which dramatically illustrate the differences between the correct and the erroneous conceptions. For example, when trying to dispel the idea that the weight of an object is the main factor in its buoyancy, he suggests that “comparing the buoyancy of a giant tanker boat (that floats even though it weighs thousands of tons) to that of a sewing needle would provoke a stronger conceptual conflict than, say, comparing a wooden ball with a slightly bigger lead ball” (Potvin, 2015).
This is just a brief glimpse of the research being carried out in this complex area of science education, both locally at UQAM and internationally, and reported in many academic journals of science education. With this in mind, an interesting project is being undertaken at McGill University to help teachers tackle the science misconceptions that their students bring to class. As a joint bilingual undertaking of McGill and UQAM, its aim is to help teachers of Cycle 1 secondary Science and Technology (S&T) diagnose and hopefully correct their students’ alternate conceptions in as many of the 85 concepts of the MELS S&T program as possible. Teachers from three school boards (two English and one French) have been working hard to develop diagnostic questions for the concepts – questions whose incorrect answers help identify misconceptions their students have. Corrective measures are also being developed to help teachers guide their students. LEARN Quebec is a partner in the project and will be the online distributor to teachers across the province once the question bank has been completed. Hopefully, along with the current research being done, this will help advance our students’ understanding of the science concepts needed to make them scientifically literate members of society.
References:
Potvin, P., Mercier, J., Charland, P., & Riopel, M. (2012). Does classroom explicitation of initial conceptions favour conceptual change or is it counter-productive? Research in Science Education, 42(3), 401–414.
Potvin, P., Sauriol, É., & Riopel, M. (2015). Experimental evidence of the superiority of the prevalence model of conceptual change over the classical models and repetition. Journal of Research in Science Teaching, 52, 1082–1108. doi:10.1002/tea.21235
0.915
FineWeb
3.578125
Alera Interval Task Chair, Compact Design, Tilt Controls, Green Fabric, Black Frame. Item #: ALEIN4871. Free shipping on this item (1-2 business days). Description: The Alera Interval Series Task Chair is ideal for all-day seating in tight spaces. - Designed to fit in tight workspaces. - Molded plastic shell resists impact. - Waterfall seat edge helps relieve pressure points on the underside of legs. - Five-star base with casters for easy mobility. - Optional arms sold and shipped separately. - Supports up to 250 lbs. - 360 Degree Swivel: Chair rotates a full 360 degrees in either direction for ease of motion. - Back Height Adjustment: Simple lift motion positions lumbar support within a fixed range to alleviate back stress. - Pneumatic Seat Height Adjustment: Quick and easy adjustment regulates height of chair relative to floor. - Tilt: Pivot point located directly above center of chair base. - Tilt Lock: Locks out tilt function when chair is in upright position. - Tilt Tension: Controls the rate and ease with which the chair reclines to suit users of different weights and strengths. - Seat: 19-1/2"W x 17-3/4"D - Back: 16-1/2"W x 15-1/4"H - Seat Height Range: 18-3/4" to 23-1/2" - Overall Height: 34" to 39" Some assembly required. Five 2" hooded casters. Alera Interval Series. For use with: Alera Fixed Height T-Arms, Alera Optional Height-Adjustable T-Arms. Meets or exceeds ANSI/BIFMA standards. Casters supplied with this chair are not suitable for all floor types.
0.8031
FineWeb
0.914063
An Ethical Framework for Global Vaccine Allocation
Emanuel, E., Persad, G., Kern, A., et al. (2020). An Ethical Framework for Global Vaccine Allocation. (Added 12/28/2020.) Science, 369(6509): 1309-1312.
The authors of this article describe a three-phased Fair Priority Model for distributing COVID-19 vaccines that prioritizes preventing the most urgent harms in its earlier phases. Phase 1 addresses premature deaths and other irreversible health effects, phase 2 addresses other enduring health harms and economic and social deprivations, and phase 3 addresses community transmission.
0.6436
FineWeb
1.773438
International Journal of Mathematics and Mathematical Sciences Volume 20 (1997), Issue 1, Pages 19-32 Generalized transforms and convolutions 1Department of Mathematics, Northwestern College, Orange City 51041, IA, USA 2Department of Mathematics and Statistics, Miami University, Oxford 45056, OH, USA 3Department of Mathematics and Statistics, University of Nebraska, Lincoln 68588, NE, USA Received 27 June 1995; Revised 8 August 1995 Copyright © 1997 Timothy Huffman et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In this paper, using the concept of a generalized Feynman integral, we define a generalized Fourier-Feynman transform and a generalized convolution product. Then for two classes of functionals on Wiener space we obtain several results involving and relating these generalized transforms and convolutions. In particular we show that the generalized transform of the convolution product is a product of transforms. In addition we establish a Parseval's identity for functionals in each of these classes.
0.5535
FineWeb
1.460938
Peg + Cat is a new animated preschool series that follows Peg and her sidekick Cat as they embark on adventures and learn foundational math concepts and skills. In each episode, Peg and Cat encounter an unexpected challenge that requires them to use math and problem-solving skills in order to save the day. Their adventures take viewers from a farm to a distant planet, from a pirate island to a prehistoric valley, from Romeo and Juliet’s Verona to Cleopatra’s Egypt to New York’s Radio City Music Hall. While teaching specific math lessons, the series displays the value of resilience and perseverance in problem-solving. The program’s curriculum is grounded in principles and standards for school mathematics as established by the National Council of Teachers of Mathematics and the Common Core State Standards for Mathematics for kindergarten and first grade. Peg and Cat’s website provides viewers with interactive games, videos, apps and more. - Go on a treasure hunt - Journey on a math adventure - Play dozens of games like Knights of the Round Table - Watch videos
Airs weekdays at 10:30 a.m. and Saturdays at 7:30 a.m. on WCNY.
0.8038
FineWeb
2.671875
By packaging Leukemia Inhibitory Factor (LIF) inside biodegradable nanoparticles, scientists developed a nanoparticle-based system to deliver growth factors to stem cells in culture, resulting in cell colony growth with a 10,000-fold lower dose of LIF when using the nanoparticle-based delivery system compared to traditional methods using soluble LIF in a growth medium.
Stem cells – unspecialized cells that have the potential to develop into different types of cells – play an important role in medical research. In the embryonic stage of an organism’s growth, stem cells develop into specialized heart, lung, and skin cells, among others; in adults, they can act as repairmen, replacing cells that have been damaged by injury, disease, or simply by age. Given their enormous potential in future treatments against disease, the study and growth of stem cells in the lab is widespread and critical. But growing the cells in culture offers numerous challenges, including the constant need to replenish a culture medium to support the desired cell growth.
Tarek Fahmy, Associate Professor of Biomedical Engineering & Chemical & Environmental Engineering, and colleagues have developed a nanoparticle-based system to deliver growth factors to stem cells in culture. These growth factors, which directly affect the growth of stem cells and their differentiation into specific cell types, are ordinarily supplied in a medium that is exchanged every day. Using the researchers’ new approach, this would no longer be necessary. “Irrespective of their scale or nature, all cell culture systems currently in practice conventionally supply exogenous bioactive factors by direct addition to the culture medium,” says Paul de Sousa, a University of Edinburgh researcher and co-principal investigator on the paper. With that approach, he explains, “Cost is one issue, especially during prolonged culture and when there is a requirement for complex cocktails of factors to expand or direct differentiation of cells to a specific endpoint.” A second issue, says de Sousa, is specificity: growth factors supplied by direct addition to the culture medium can lead to the growth of undesired cell populations, which can end up competing with the growth of the desired cell types. “A relatively unexplored strategy to improve the efficiency of stem cell culture is to affinity-target critical bioactive factors sequestered in biodegradable micro or nanoparticles to cell types of interest,” explains de Sousa, “thereby achieving a spatially and temporally controlled local ‘paracrine’ stimulation of cells.”
Fahmy and his colleagues packaged leukemia inhibitory factor, which supports stem cell growth and viability, inside biodegradable nanoparticles. The nanoparticles were “targeted” by attaching an antibody – one specific to an antigen on the surface of mouse embryonic stem cells being grown in culture. As a result, the nanoparticles target and attach themselves to the stem cells, ensuring direct delivery of the bioactive factors packaged inside. The researchers have previously demonstrated the potential uses of this approach in drug delivery and vaccination, including targeted delivery of leukemia inhibitory factor (LIF), which prevents certain types of white blood cells from migrating, in order to regulate immune responses. In stem cell cultures, LIF is also the key factor required to keep the cells alive and let them retain their ability to develop into specialized types of cells. 
In this research, Fahmy and his colleagues packed LIF into the biodegradable nanoparticles for slow-release delivery to the stem cells in culture. Their results showed cell colony growth with a 10,000-fold lower dose of LIF when using the nanoparticle-based delivery system compared to traditional methods using soluble LIF in a growth medium. While a stem cell culture sustained using a traditional method of exchanging growth medium consumes as much as 25 nanograms of LIF in a day – about 875 nanograms after five weeks of culture – only 0.05 total nanograms of LIF would be required to achieve the same level of growth using the nanoparticle delivery system, a remarkable reduction in the required materials. The next step is to use these systems with human cells to direct their differentiation into hematopoietic cells (blood products). Clinical and industrial translation of this ability requires efficient and cost-effective strategies for cell manufacturing. In principle, this method offers a means to produce standardized or individually tailored cells to overcome challenges associated with donated blood products. Reference: “Paracrine signalling events in embryonic stem cell renewal mediated by affinity targeted nanoparticles” by Bruna Corradetti, Paz Freile, Steve Pells, Pierre Bagnaninchi, Jason Park, Tarek M. Fahmy and Paul A. de Sousa, 30 June 2012, Biomaterials.
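As a quick sanity check of the savings quoted above, the arithmetic can be reproduced in a few lines of Python (a minimal sketch; the 25 ng/day, five-week and 0.05 ng figures come from the article, everything else is simple arithmetic):

```python
# Rough back-of-the-envelope check of the LIF savings reported in the article.
soluble_per_day_ng = 25.0        # soluble LIF consumed per day (from the article)
days = 5 * 7                     # five weeks of culture
nanoparticle_total_ng = 0.05     # total LIF needed with the nanoparticle system

soluble_total_ng = soluble_per_day_ng * days          # about 875 ng over five weeks
fold_reduction = soluble_total_ng / nanoparticle_total_ng

print(f"Soluble LIF used over five weeks: {soluble_total_ng:.0f} ng")
print(f"Nanoparticle LIF used: {nanoparticle_total_ng} ng")
print(f"Approximate fold reduction: {fold_reduction:,.0f}x")
# ~17,500x, consistent with the article's "more than 10,000-fold lower dose"
```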
0.9745
FineWeb
3.015625
The 5G Evolution: An Advancement in Technology; How Can It Affect Us? According to industry proponents, 5G technology is considered a necessary evolution in wireless transmission to accommodate the increasing number of wireless devices, such as mobile phones, internet-transmitting devices and many cutting-edge technologies, such as robotics. The technological advancement to 5G allows more devices to communicate and more data to be transmitted, more rapidly. The high-frequency microwaves required will necessitate denser 5G networks, and thus more cell phone towers, to support the increased speed and capacity of 5G. Cell phone towers in closer proximity, such as in our neighborhoods, make it more difficult to minimize the amount of radiation we are exposed to. Research conducted by University of Washington professor Dr. Henry Lai demonstrated that brain cells are clearly damaged by microwave levels far below the US government’s safety guidelines. Dr. Lai notes that even minimal doses of radio frequency can accumulate over time and lead to harmful effects. What is our solution to the potentially harmful side effects of an expeditiously expanding wireless network industry? Our proven patented and proprietary products function to help neutralize the adverse effects of increasing daily exposure to harmful radiation. Implement The Cell Phone Chip Store's full-scale product line of radiation guards as a front line of defense against the long-term, cumulative effects of harmful radiation!
0.8792
FineWeb
2.734375
The present study was conducted to determine the effect of raw anchovy (Engraulis encrasicolus L.) as wet feed on growth performance and production cost of rainbow trout (Oncorhynchus mykiss W.) reared in net pens during the winter season in the Black Sea. Fish with an initial body weight of 100 g were hand-fed to apparent satiation with only raw anchovy, only pellet, or an anchovy/pellet combination over 58 days. Final mean body weights of the groups fed anchovy and anchovy/pellet were significantly higher (P<0.05) than that of the group fed only pellet. However, no difference was found between the groups fed anchovy and the anchovy/pellet combination. Raw anchovy was better accepted than the pellet by the fish during the period of low water temperature. The use of raw anchovy as wet feed had a positive effect on the production cost. In conclusion, by-catch anchovy should be evaluated as a supplemental diet to the pellet for rainbow trout, especially over periods of low water temperature in the Black Sea. Keywords: rainbow trout (Oncorhynchus mykiss W.), wet feed, anchovy (Engraulis encrasicolus L.), growth, fisheries feed, aquatic (both freshwater and marine) systems, aquatic health management, ciguatera fish poisoning
0.5779
FineWeb
2.3125
What does it mean to be a hero? In The Heroic Heart, Tod Lindberg traces the quality of heroic greatness from its most distant origin in human prehistory to the present day. The designation of “hero” once conjured mainly the prowess of conquerors and kings slaying their enemies on the battlefield. Heroes in the modern world come in many varieties, from teachers and mentors making a lasting impression on others by giving of themselves, to firefighters no less willing than their ancient counterparts to risk life and limb. They don’t do so to assert a claim of superiority over others, however. Rather, the modern heroic heart acts to serve others and save others. The spirit of modern heroism is generosity, what Lindberg calls “the caring will,” a primal human trait that has flourished alongside the spread of freedom and equality. Through its intimate portraits of historical and literary figures and its subtle depiction of the most difficult problems of politics, The Heroic Heart offers a startlingly original account of the passage from the ancient to the modern world and the part the heroic type has played in it. Lindberg deftly combines social criticism and moral philosophy in a work that ranks with such classics as Thomas Carlyle’s nineteenth-century On Heroes, Hero-Worship and the Heroic in History and Joseph Campbell’s twentieth-century The Hero with a Thousand Faces.
0.8493
FineWeb
3.109375
Introduced in the 1950s, skinny jeans were first worn by film stars like Roy Rogers, the Lone Ranger, the Cisco Kid, Zorro, Gene Autry, Marilyn Monroe, and Sandra Dee. Known for their thigh-hugging, form-fitting silhouette, skinny jeans taper at the ankle and are widely recognized for a slim cut that exudes sex appeal. By the 1960s, women began pushing the boundaries of gender roles by widely adopting slim-cut denim and other male-dominated fashion statements. Skinny jeans served as a means of communicating gender empowerment and equality by channeling female sexuality while drawing attention to feminine curves. Skinny jeans epitomized both sexuality and sex appeal when Elvis, the king of rock n’ roll, began wearing them during his tantalizing performances in the late 1950s and early 1960s. During the 1970s, skinny jeans became synonymous with a ‘bad boy’ rock n’ roll image and served as a uniform staple for fashion-forward rockers in the alternative music industry. Rock legends like Mick Jagger of The Rolling Stones and The Beatles also helped pave the way for the skinny jean phenomenon by fusing fashion with performance and entertainment. The 1970s set the British punk rock movement in motion, in which self-proclaimed ‘scenester’ bands like The Clash, The Sex Pistols and The Ramones put a notorious punk spin on the growing skinny jean trend. By incorporating dark color palettes and leather and zipper embellishments, the punk rock movement was the first fashion wave to truly individualize and stylize slim-cut denim. In 1971, fashion designer Vivienne Westwood opened SEX, a boutique that was one of the first stores ever to specialize in punk and fetish-inspired clothing. Never before had a retail boutique been solely dedicated to selling skinny jeans and other British ‘scenester’ attire, bringing slim-cut denim to the masses. Tight-fitting clothing like skinny jeans functioned as a form of rebellion for fashion-conscious nonconformists in the 1970s. Skinny jeans’ fashion uprising was sustained well into the 1980s with the rise of the heavy metal and glam metal movements. Bands like Poison, Mötley Crüe, Bon Jovi, Guns N’ Roses and Kiss were prominent in the 1980s and all donned skinny jeans, along with other form-fitting bottoms such as spandex, during their concert performances. The skinny jeans trend went into steep decline in the 1990s with the advancement of hip hop and grunge music. Both grunge and hip hop dictated a uniform consisting of baggy jeans, flannel shirts and oversized outerwear, starkly contrasting with the considerably contoured fashion trends of the 1970s and 1980s. In 2000, skinny jeans made a comeback thanks to fashion icon Kate Moss, garage rock and the formation of indie rock in popular music culture. Moss, who once dated Peter Doherty of The Libertines, was photographed with Doherty dressed in skinny jeans and boots, letting fashionistas around the world know that wearing skinny jeans was once again appropriate for daily attire. The overlap between trends in the fashion and music industries is undeniable, and the history of skinny jeans greatly exemplifies it. Today, the appeal of skinny jeans has reached other industries that have little to do with fashion or music. While many find skinny jeans rather restricting, professional skateboarders and BMX riders prefer sporting skinny jeans because their stretchy material accommodates movement and flexibility.
0.6228
FineWeb
2.28125
People at Google must be aficionados of the Spanish painter Diego Velázquez [1599-1660], because they've celebrated his birthday by creating a graphic Google banner based upon the famous painting called Las Meninas [Maids of Honor]. Here's a fragment of the original Velázquez masterpiece: The intriguing nature of this painting was first brought to my attention back in 1966 when I read a popular work of modern philosophy, Les mots et les choses by Michel Foucault [translated into English as The Order of Things], which starts with an in-depth analysis of the Velázquez painting. Foucault suggests that this painting demonstrates, or at least symbolizes, the existence of an invisible emptiness at the heart of the world that we attempt vainly to circumscribe... not by images, but by language. So, let us see rapidly what is so upsetting about this painting. At first sight, one has the impression that the subject of the painting is the blonde child between the two maids. Her name is Margarita, and she's the eldest daughter of the Spanish queen. When we examine the individuals more closely, however, we find that the artist Velázquez himself is present, standing behind the left-hand maid, and that he is looking directly, not at the little princess, but at us, the viewers. Then a blurry mirror on the rear wall, just to the right of the painter's head (as we see things), reveals the true subject of the painter's work: the barely-recognizable king and queen of Spain, Philip IV and Marianna. The painting is inverted in such a way that we see, not the true subject, but rather the regard of those who can see this subject. In the antipodean sense that I evoke often in this blog, the painter has turned his world upside-down and inside-out. At a visual level, the two most prominent subjects in the foreground of the painting, from our viewpoint, are a bulky pet dog and a plump male dwarf in female attire (said to be an Italian jester). Meanwhile, supposedly major individuals such as the royal couple and a noble man are seen as mere images on rear-wall mirrors, suggesting that Velázquez himself was not overly preoccupied with the task of reproducing their image on his canvas. This complex work of art (designated by many admirers as the greatest painting ever made) is an excellent symbol for Google. We throng to Google in the hope of receiving profound knowledge about our world... whereas Google, in reality, is simply throwing back at us, through its endless lists of websites of all kinds, our own imperfect image. Maybe a vast but essentially empty image.
0.5895
FineWeb
2.140625
- Education’s Woes and Pros: A new study conducted by UNESCO reveals that less than 30% of schools have access to electricity and only half of them have toilets for girls. In order to address such woeful capacity, Rajasthan’s state government has signed a public-private partnership with UNICEF to expand education across the state; the program will particularly focus on educating young girls. - Healthcare’s Woes and Pros: A new report by the UN reveals that India suffers from the highest absenteeism rate among healthcare workers, and that these no-shows will likely result in India failing to meet the Millennium Development Goals. However, a more positive story is that a new HIV test can be administered rapidly to pregnant women in rural areas, enabling doctors to provide the necessary treatment to prevent transmission to the baby. - Mobile Technology: With the advent of 3G coming to India soon, Bharat Sanchar Nigam Limited (BSNL) is looking at new ways to use the increased speeds to connect to the rural poor of India. - Energy: In Jharkhand, the government looks to wind to help power the future of that region.
0.5287
FineWeb
2.1875
Bullying is a serious workplace issue. According to the Canadian Center for Occupational Health and Safety, workplace bullying generally involves repeated incidents intended to “intimidate, offend, degrade or humiliate a particular person or group of people.” CCOHS notes that although a fine line exists between strong management and constructive criticism and bullying, workplace bullying exists and can lead to a number of issues. The agency provides a number of examples of workplace bullying. Those include: - Spreading malicious, untrue rumors - Socially isolating someone - Purposefully hindering someone’s work - Physically injuring someone or threatening abuse - Taking away a worker’s responsibility without justification - Yelling or swearing - Not assigning enough work or assigning an unreasonable amount of work - Setting impossible-to-meet deadlines in an effort to make the worker fail - Blocking a worker’s request for leave, training or a promotion Bullying can have serious repercussions. Victims of bullying may feel angry or helpless and experience a loss of confidence. Additionally, bullying can cause physical side effects, including an inability to sleep, loss of appetite, headaches, or panic attacks. According to CCOHS, organizations with a culture of bullying may experience many unfavorable side effects, including increased turnover and absenteeism, increased stress among workers, and decreased morale. CCOHS states that the most important thing management can do to express a commitment to preventing workplace bullying is to have a comprehensive written policy. The agency provides the following advice for creating a policy: - Involve both management and employees in the creation of the policy. - Be very clear in your definition of workplace bullying. Provide examples of what is and is not acceptable behavior. - Clearly state the consequences of bullying. - Encourage workers to report bullying behavior by making the reporting process completely confidential. Let workers know they will not be punished in any way for reporting bullying. - If your workplace has an Employee Assistance Program, encourage workers experiencing problems to use it. - Regularly review the policy and update it as needed.
0.9921
FineWeb
3.296875
Amy's husband deploys for months at a time, so she discusses the 5 things you should never say to a military spouse. 1. "You must be used to this" 2. "Do you worry about his safety?" 3. "My spouse travels for work too. I totally know what you're going thru" 4. "Wow, you must miss him" 5. "How do you go such a long time without... (being physical)" When Amy's husband gets deployed he sends her flowers and Bobby a stick.
0.6357
FineWeb
0.601563
Use these 6 times 7 table worksheets to evaluate your kid’s multiplication skill. It may sound so basic, but it can prove to be useful for you or your child to memorize your times table. Once your child has a full set of 6 or 7 times tables, they need to practice so that they are automatic in their times table drill. Ensure that your child learns the standard methods of multiplication using these worksheets, for better evaluation and assessment. Good times-tables knowledge is vital for quick mental multiplication math. If a child knows that 6 x 3 = 18 they will be able to comprehend that 6 x 30 = 180 or 60 x 3 = 180. Using these 6 times 7 multiplication worksheets will help develop a good understanding of the relationship between numbers in multiplication. Try the worksheet below for more practice with basic multiplication facts. A strong grasp of times tables helps increase enjoyment of the subject. The multiplication printable worksheets below will take your child through their multiplication learning step-by-step so that they are learning the math skills to solve and master multiplication. Help your students achieve the ability to rapidly recall their times table facts with these fabulous new times table worksheets that your students are going to love! These fun math worksheets are free to download and print for educational use.
0.9997
FineWeb
3.796875
Writing for the screen : creative and critical approaches / Craig Batty and Zara Waldeback. Houndmills, Basingstoke, Hampshire [England] ; New York : Palgrave Macmillan, 2008.
- Physical description: ix, 201 p. ; 22 cm.
- Series: Approaches to writing.
- Includes filmography (p. 192-194); includes bibliographical references (p. 189-191) and index.
- Contents: Acknowledgments; Introduction; PART I: FOUNDATIONS; Establishing Practice; Subject: Ideas into Character; Structure and Narrative; Visual Storytelling; Dialogue and Voice; The Cultures of Screenwriting; Key Points and Foundations Exercises; PART II: SPECULATIONS; Exploring Possibilities; Subjects: Ideas into Character; Structures and Narratives; Visual Storytelling; Dialogues and Voices; Further Cultures of Screenwriting; Key Points and Speculations Exercises; Notes; Bibliography; Index.
- Publisher's Summary (source: Nielsen Book Data): This book presents an innovative and fresh approach to the art and practice of screenwriting, developing creative and critical awareness for writers, students and critics. It includes contemporary case studies, in-depth analysis and unique writing exercises. The book explores a wide variety of techniques, from detailed scene writing and non-linear structure, to documentary drama and the short film. This fresh approach to scriptwriting, innovative in style and approach, incorporates both creativity and critical appraisal as essential methods in writing for the screen. Contemporary case studies, in-depth analysis and interactive exercises create a wealth of ideas for those wishing to work in the industry or deepen their study of the practice.
- Subject: Motion picture authorship.
- ISBN: 9780230550759 (pbk.); 0230550754 (pbk.)
0.6682
FineWeb
2.203125
Do you know how to translate the Chinese word 九? The pronunciation in pinyin is written jiǔ or jiu3. Here is the English translation of that Chinese word (it means "nine"), along with an audio file (mp3).
Example sentences in Chinese (English translations):
- Today is the 9th of September.
- At half past nine I go to sleep.
- In China, every child has to go to school for 9 years (compulsory education).
- This school has 20 teachers, among them 9 Chinese teachers (from China).
- Today I learned one, two, three, four, five, six, seven, eight, nine, ten.
- There are 194 countries in the world.
0.9036
FineWeb
2.640625
In Myanmar, the number of medical professionals, including doctors and nurses, is remarkably low, and the training of medical assistance personnel known as caregivers will be key to the future development of medical care. We aim to supply Myanmar with a large number of caregivers trained to a high global standard by providing educational programs that apply Japanese KAIGO (nursing care) know-how.
Why do we train medical personnel? Two social issues in Myanmar
In Myanmar's medical environment, the shortage of medical personnel is becoming a major issue. Because nurses are even scarcer than doctors, improving the human medical infrastructure by training nursing staff has become a major development task. One reason is the lack of public educational institutions: there are only three nursing colleges in the country whose degrees are officially recognized (University of Nursing, Yangon / Mandalay Institute of Nursing / Defence Services Institute of Nursing and Paramedical Science). In Myanmar, caregivers often carry out medical assistance in place of nurses, but there is no curriculum standard for the private educational institutions that train these personnel. Nursing assistants who graduate from private institutions earn low salaries, and many either change jobs to industries outside medical care or go overseas through opaque intermediaries. By utilizing an internship program in Japan, we can help solve these social problems in Myanmar.
We aim to greatly improve the quality of medical care and welfare in Myanmar
Developing advanced human resources by drawing on the strengths of each country
We have trained more than 15,000 KAIGO professionals in Japan, and we aim to develop human resources that meet a global standard. Caregivers who acquire advanced Japanese KAIGO techniques will be able to work in countries around the world. KAIGO training is conducted in Japan after Japanese language education is completed. We are aiming for the world's highest level of KAIGO education by combining the "virtuous culture" rooted in Myanmar with the "hospitality heart" of Japan. In collaboration with the Yangon Japanese language school "Better Life", we aim to bring students from N5 level to N3 level in six months. Our own curriculum covers not only daily expressions but also Japanese phrases frequently used in nursing care practice, so that interns can come to Japan and settle into work smoothly. Native Japanese speakers will also support your studies and will carefully answer your questions and concerns before you travel to Japan.
The Myanmar caregivers we train aim to acquire skills that make them employable not only in their own country but also in Japan, Thailand and Singapore. For that reason, we teach not only nursing skills and language but also the business manners and communication that form the basis of work, and thereby aim to produce global-standard professionals who can work internationally.
0.6386
FineWeb
2.0625
In this suggestive VIS image, taken by the NASA - Mars Odyssey Orbiter on December 29th, 2015, during its 62,289th orbit around the Red Planet, we can see a (truly) small portion of the Martian Region known as Nilus Chaos. Located to the North of the Kasei Valles System, this Chaotic Region formed (approximately) at the Elevation Boundary between the aforementioned Kasei Valles System and the surrounding (and relatively flat) Northern Plains.
Latitude (centered): 25.7934° North
Longitude (centered): 283.4270° East
This image (which is an Original Mars Odyssey Orbiter b/w and Map Projected frame published on the NASA - Planetary Photojournal with the ID n. PIA 20417) has been additionally processed, magnified to aid the visibility of the details, extra-contrast enhanced and sharpened, Gamma corrected and then colorized in Absolute Natural Colors (such as the colors that a normal human eye would actually perceive if someone were onboard the NASA - Mars Odyssey Orbiter and then looked down, towards the Surface of Mars), by using an original technique created, and in time dramatically improved, by the Lunar Explorer Italia Team.
0.7976
FineWeb
2.15625
At a glance - Legitimate interests is the most flexible lawful basis for processing, but you cannot assume it will always be the most appropriate. - It is likely to be most appropriate where you use people’s data in ways they would reasonably expect and which have a minimal privacy impact, or where there is a compelling justification for the processing. - If you choose to rely on legitimate interests, you are taking on extra responsibility for considering and protecting people’s rights and interests. - Public authorities can only rely on legitimate interests if they are processing for a legitimate reason other than performing their tasks as a public authority. - There are three elements to the legitimate interests basis. It helps to think of this as a three-part test. You need to: - identify a legitimate interest; - show that the processing is necessary to achieve it; and - balance it against the individual’s interests, rights and freedoms. - The legitimate interests can be your own interests or the interests of third parties. They can include commercial interests, individual interests or broader societal benefits. - The processing must be necessary. If you can reasonably achieve the same result in another less intrusive way, legitimate interests will not apply. - You must balance your interests against the individual’s. If they would not reasonably expect the processing, or if it would cause unjustified harm, their interests are likely to override your legitimate interests. - Keep a record of your legitimate interests assessment (LIA) to help you demonstrate compliance if required. - You must include details of your legitimate interests in your privacy information. - We have checked that legitimate interests is the most appropriate basis. - We understand our responsibility to protect the individual’s interests. - We have conducted a legitimate interests assessment (LIA) and kept a record of it, to ensure that we can justify our decision. - We have identified the relevant legitimate interests. - We have checked that the processing is necessary and there is no less intrusive way to achieve the same result. - We have done a balancing test, and are confident that the individual’s interests do not override those legitimate interests. - We only use individuals’ data in ways they would reasonably expect, unless we have a very good reason. - We are not using people’s data in ways they would find intrusive or which could cause them harm, unless we have a very good reason. - If we process children’s data, we take extra care to make sure we protect their interests. - We have considered safeguards to reduce the impact where possible. - We have considered whether we can offer an opt out. - If our LIA identifies a significant privacy impact, we have considered whether we also need to conduct a DPIA. - We keep our LIA under review, and repeat it if circumstances change. - We include information about our legitimate interests in our privacy information. What’s new under the GDPR? The concept of legitimate interests as a lawful basis for processing is essentially the same as the equivalent Schedule 2 condition in the 1998 Act, with some changes in detail. You can now consider the legitimate interests of any third party, including wider benefits to society. And when weighing against the individual’s interests, the focus is wider than the emphasis on ‘unwarranted prejudice’ to the individual in the 1998 Act. 
For example, unexpected processing is likely to affect whether the individual’s interests override your legitimate interests, even without specific harm. The GDPR is clearer that you must give particular weight to protecting children’s data. Public authorities are more limited in their ability to rely on legitimate interests, and should consider the ‘public task’ basis instead for any processing they do to perform their tasks as a public authority. Legitimate interests may still be available for other legitimate processing outside of those tasks. The biggest change is that you need to document your decisions on legitimate interests so that you can demonstrate compliance under the new GDPR accountability principle. You must also include more information in your privacy information. In the run up to 25 May 2018, you need to review your existing processing to identify your lawful basis and document where you rely on legitimate interests, update your privacy information, and communicate it to individuals. What is the ‘legitimate interests’ basis? Article 6(1)(f) gives you a lawful basis for processing where:“processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.” This can be broken down into a three-part test: - Purpose test: are you pursuing a legitimate interest? - Necessity test: is the processing necessary for that purpose? - Balancing test: do the individual’s interests override the legitimate interest? A wide range of interests may be legitimate interests. They can be your own interests or the interests of third parties, and commercial interests as well as wider societal benefits. They may be compelling or trivial, but trivial interests may be more easily overridden in the balancing test. The GDPR specifically mentions use of client or employee data, marketing, fraud prevention, intra-group transfers, or IT security as potential legitimate interests, but this is not an exhaustive list. It also says that you have a legitimate interest in disclosing information about possible criminal acts or security threats to the authorities. ‘Necessary’ means that the processing must be a targeted and proportionate way of achieving your purpose. You cannot rely on legitimate interests if there is another reasonable and less intrusive way to achieve the same result. You must balance your interests against the individual’s interests. In particular, if they would not reasonably expect you to use data in that way, or it would cause them unwarranted harm, their interests are likely to override yours. However, your interests do not always have to align with the individual’s interests. If there is a conflict, your interests can still prevail as long as there is a clear justification for the impact on the individual. When can we rely on legitimate interests? Legitimate interests is the most flexible lawful basis, but you cannot assume it will always be appropriate for all of your processing. If you choose to rely on legitimate interests, you take on extra responsibility for ensuring people’s rights and interests are fully considered and protected. Legitimate interests is most likely to be an appropriate basis where you use data in ways that people would reasonably expect and that have a minimal privacy impact. 
Where there is an impact on individuals, it may still apply if you can show there is an even more compelling benefit to the processing and the impact is justified. You can rely on legitimate interests for marketing activities if you can show that how you use people’s data is proportionate, has a minimal privacy impact, and people would not be surprised or likely to object – but only if you don’t need consent under PECR. See ICO’s Guide to PECR for more on when you need consent for electronic marketing. You can consider legitimate interests for processing children’s data, but you must take extra care to make sure their interests are protected. See our detailed guidance on children and the GDPR. You may be able to rely on legitimate interests in order to lawfully disclose personal data to a third party. You should consider why they want the information, whether they actually need it, and what they will do with it. You need to demonstrate that the disclosure is justified, but it will be their responsibility to determine their lawful basis for their own processing. You should avoid using legitimate interests if you are using personal data in ways people do not understand and would not reasonably expect, or if you think some people would object if you explained it to them. You should also avoid this basis for processing that could cause harm, unless you are confident there is nevertheless a compelling reason to go ahead which justifies the impact. If you are a public authority, you cannot rely on legitimate interests for any processing you do to perform your tasks as a public authority. However, if you have other legitimate purposes outside the scope of your tasks as a public authority, you can consider legitimate interests where appropriate. This will be particularly relevant for public authorities with commercial interests. See our guidance page on the lawful basis for more information on the alternatives to legitimate interests, and how to decide which basis to choose. How can we apply legitimate interests in practice? If you want to rely on legitimate interests, you can use the three-part test to assess whether it applies. We refer to this as a legitimate interests assessment (LIA) and you should do it before you start the processing. An LIA is a type of light-touch risk assessment based on the specific context and circumstances. It will help you ensure that your processing is lawful. Recording your LIA will also help you demonstrate compliance in line with your accountability obligations under Articles 5(2) and 24. In some cases an LIA will be quite short, but in others there will be more to consider. First, identify the legitimate interest(s). Consider: - Why do you want to process the data – what are you trying to achieve? - Who benefits from the processing? In what way? - Are there any wider public benefits to the processing? - How important are those benefits? - What would the impact be if you couldn’t go ahead? - Would your use of the data be unethical or unlawful in any way? Second, apply the necessity test. Consider: - Does this processing actually help to further that interest? - Is it a reasonable way to go about it? - Is there another less intrusive way to achieve the same result? Third, do a balancing test. Consider the impact of your processing and whether this overrides the interest you have identified. You might find it helpful to think about the following: - What is the nature of your relationship with the individual? - Is any of the data particularly sensitive or private? 
- Would people expect you to use their data in this way? - Are you happy to explain it to them? - Are some people likely to object or find it intrusive? - What is the possible impact on the individual? - How big an impact might it have on them? - Are you processing children’s data? - Are any of the individuals vulnerable in any other way? - Can you adopt any safeguards to minimise the impact? - Can you offer an opt-out? You then need to make a decision about whether you still think legitimate interests is an appropriate basis. There’s no foolproof formula for the outcome of the balancing test – but you must be confident that your legitimate interests are not overridden by the risks you have identified. Keep a record of your LIA and the outcome. There is no standard format for this, but it’s important to record your thinking to help show you have proper decision-making processes in place and to justify the outcome. Keep your LIA under review and refresh it if there is a significant change in the purpose, nature or context of the processing. If you are not sure about the outcome of the balancing test, it may be safer to look for another lawful basis. Legitimate interests will not often be the most appropriate basis for processing which is unexpected or high risk. If your LIA identifies significant risks, consider whether you need to do a DPIA to assess the risk and potential mitigation in more detail. See our guidance on DPIAs for more on this. What else do we need to consider? You must tell people in your privacy information that you are relying on legitimate interests, and explain what these interests are. If you want to process the personal data for a new purpose, you may be able to continue processing under legitimate interests as long as your new purpose is compatible with your original purpose. We would still recommend that you conduct a new LIA, as this will help you demonstrate compatibility. If you rely on legitimate interests, the right to data portability does not apply. If you are relying on legitimate interests for direct marketing, the right to object is absolute and you must stop processing when someone objects. For other purposes, you must stop unless you can show that your legitimate interests are compelling enough to override the individual’s rights. See our guidance on individual rights for more on this. The Article 29 Working Party includes representatives from the data protection authorities of each EU member state. It adopts guidelines for complying with the requirements of the GDPR. There are no immediate plans for Article 29 Working Party guidance on legitimate interests under the GDPR, but WP29 Opinion 06/2014 (9 April 2014)gives detailed guidance on the key elements of the similar legitimate interests provisions under the previous Data Protection Directive 95/46/EC. Thank you for reading.
0.7598
FineWeb
2.203125
Cumulative Impact Study
This study aims to examine the cumulative effects of activities and practices in the Lower Platte River Corridor over time and their impact on the terrestrial and aquatic habitats of the Platte River. Scope development was completed in August 2005. Data acquisition was the focus of Phase II. Compiling aerial photos and transect data for six time periods (1850, 1938, 1950s, 1970s, 1993, and 2003) with land-use classification led to a hydrologic study looking at changes in the river over time and the development of an online internet mapping service to access the GIS information. A final report on the Cumulative Impact Study (CIS), Phase II was completed in September 2008. For access to the CIS interactive GIS program, click here.
Prediction Model Development: Meetings for the development of a Conceptual Ecological Model are continuously being held throughout Phase III to identify missing information needed to: determine the character of the river, assess threats to endangered and threatened species, identify the processes of concern, and prioritize research and management actions. A select group of representatives of UNL, USFWS, USACE, USGS, NGPC, and the NRDs continues to identify components of the conceptual model and to identify "knowns" and "gaps" as far as research is concerned. In spring 2011, this group of representatives and the LPRCA made significant headway in articulating the basic components of the river's system and how they are related to one another.
Research: Phase III research has focused on water flow and how it affects sediment transfer. Using data tools from Phase II and Phase III, we can identify how changes in water flow and sediment could affect the amount of habitat for threatened and endangered species. The USGS, in coordination with the Army Corps of Engineers, spent the summer of 2010 collecting sediment samples and GIS cross-sections of the river, and then conducted a sediment budget analysis. Draft results of their studies, entitled "Sediment Samples and Channel-Geometry Data, Lower Platte River Watershed in Nebraska, 2010" and "Geomorphic Classification and Evaluation of Channel Width and Emergent Sandbar Habitat Relationships on the Lower Platte River, Nebraska", are available and can be viewed via the link below. The sediment budget analysis was completed by USGS and the Corps of Engineers in 2014. The USGS report can be found below or in the Publications section of the website. A final full report of all three phases of the CIS is expected in 2015.
Future of the CIS: Items identified as priorities for the next phase of the CIS include: a 3-year Sandbar Monitoring study with USGS; a full reconnaissance study of bank stabilization along the Lower Platte; and continued development of the conceptual model.
0.7664
FineWeb
2.953125
Build from scratch an Automatic Speech Recognition system that can recognise spoken numerical digits from 0 to 9. We discuss how Convolutional Neural Networks, the current state of the art for image recognition systems, might just provide the perfect solution! This is a beginner-level tutorial to practice coding in Python. Prove a piece of trivia about the famous sitcom Friends using simple pattern recognition and basic scripting in Python. You will also get familiar with some built-in modules in Python.
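To make the CNN idea concrete, here is a minimal, hypothetical sketch (not the tutorial's actual code) of how such a spoken-digit classifier might be defined in Python with TensorFlow/Keras. The input shape, layer sizes, and the assumption that each recording has already been converted to a log-mel spectrogram are illustrative choices, not taken from the tutorial.

```python
# Minimal sketch: a small convolutional network that classifies
# log-mel spectrograms of spoken digits (0-9). Assumes each utterance has
# already been turned into a 40x98x1 spectrogram (e.g. with librosa);
# all shapes and layer sizes here are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_DIGITS = 10                   # classes: spoken "zero" through "nine"
INPUT_SHAPE = (40, 98, 1)         # (mel bands, time frames, channels)

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_DIGITS, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(train_spectrograms, train_labels, ...)
```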
0.9533
FineWeb
2.25
All of the conflicts selected for inclusion in the Shenandoah Valley Study have been referred to by historians as battles, but the range of comparison among these battles is so large that use of the term "battle" to describe all equally could be questioned. The nineteenth- and early twentieth-century archivists who compiled the Official Records and other event lists and chronologies used a ranking system of "battle," "engagement," and "action" based on the command structure of the forces engaged (typically the Union forces engaged). Rather than providing guidance as to the size and intensity of an encounter, these terms tell us only that: a battle was directed by the ranking general of the military district and involved the bulk of the forces under his command; an engagement might be directed by a subordinate leader or involve only a portion of the armies in the field; an action was a conflict, typically limited in scope, that could not be easily labeled a battle or an engagement. This early ranking system was not designed to describe or interpret events but to award appropriate plaudits to the commanding officers and the units involved.
Figure 10 portrays a range of comparison among the battlefields selected for the Shenandoah Valley Study, ranking them according to the relative size of the forces engaged and indicating their traditional ranking of battle (B), engagement (E), or action (A). The figures provided are the best approximations that can be offered, considering the uneven reliability of the sources. Confederate strengths, in particular, are often only estimated since many Confederate records were lost. Also, the full forces of one army or the other were not always brought to the field and were not all engaged. The number of troops on the field and actively engaged must be estimated, and existing estimates often differ widely.
A second way to compare battles is to rank the number of fatalities incurred at each. More deaths in a conflict typically equated to determined, close-quarters fighting. Battles of maneuver and surprise, on the other hand, often resulted in lower numbers of fatalities and higher numbers of captured and missing. Figure 11 shows the Shenandoah Valley battlefields ranked according to the approximate number of fatalities.
A third way to compare the battles is to rank attrition (total killed, wounded, captured, and missing) of the forces engaged, a useful measure of a battle's influence on the progress of its campaign. High attrition rates incurred by one side or the other in a single battle might cripple its force and compel a retreat. In many cases, higher-than-average attrition rates resulted from a disastrous rout by one side or the other, with large numbers of prisoners falling into enemy hands. Figure 12 provides a ranking by estimating the combined attrition of the forces engaged.
The battles of Opequon and Cedar Creek stand out in terms of size, fatalities, and attrition. Although the size of Confederate armies in the Valley remained surprisingly consistent from 1862 to 1864, averaging 16,000-24,000 men, the size of the Union armies increased dramatically under Sheridan's command in 1864, to nearly 40,000. At Opequon, Sheridan outnumbered Early 2.6 to 1, and both armies were fully engaged. Together, Opequon and Cedar Creek accounted for nearly 52 percent of the fatalities of the fifteen battles and 43 percent of the combined attrition. 
Considering that these two battles were fought only a month apart, the toll, in the context of Valley warfare, is staggering. In the six representative battles of Jackson's 1862 Campaign, the Confederate army inflicted 393 fatalities at a cost of 367 dead (total 760). This ratio is near parity. Looking at attrition, the tally diverges more dramatically. The Union armies suffered about 6,400 casualties compared to Confederate losses of 2,745 (total 9,145). Many of the surplus Union casualties were prisoners taken at First Winchester and Front Royal. In the six representative battles of the Early-Sheridan 1864 campaign, the Confederate army inflicted 1,587 fatalities at a cost of 776 dead (total 2,363), a two-to-one ratio. Overall, however, the Union armies closed the gap somewhat, suffering about 12,890 casualties compared to Confederate losses of 9,130 (total 22,020), a ratio of about three-to-two. These figures provide a useful comparison of scale between the 1862 and 1864 campaigns. Numbers engaged, fatalities, and attrition rates are indicators of how intensely a battle was fought. Yet these indicators tend to obscure the strategic significance of some of the smaller conflicts. While it is true that the larger battles achieved significance by sheer firepower and weight of numbers, the significance of a battle is best determined by its campaign context, a context that must be carefully assessed as to its influence on regional and national events. Often it was the battle that was not fought or the conflict cheaply won, that determined the course of a campaign and the ultimate strategic and political outcome. Thus a battle, such as Front Royal, which was won at little cost to Stonewall Jackson, attains a heightened importance when examined in light of his strategy of flanking the main Union army at Strasburg. Jackson's tactical loss at First Kernstown, for example, achieved strategic success by diverting thousands of Union soldiers as reinforcements to the Valley. Future historians will continue to debate the relative significance of these events.
0.704
FineWeb
3.71875
Written and illustrated by Honoria Tox The moon flickers like a gaslight behind the torn, torrid clouds as I watch out the upper window, straining my ears for the sound of horse-hooves. The earth falls away from my home and down to the river, only one thin horse-trail separating its wildness from mine; and the darkness courses above us. I sigh at the silence, leaving the window to move about the room: first to the stack of thick azure paper that sits on my work-bench. I cut the paper into cottony slices with my knife in strong, swooping gestures, like a factory-woman tossing the shuttle-cock back and forth across a loom. I fold the paper with quick, skilled strokes, my dainty fingers darting them into points and curves. Then I fit them with their mechanisms, small gears and springs thrust into their wings, and set them free: a hundred tiny blue-birds, my automata, winding their way through the air and into the night, flapping all their pretty wings against the moonlight as they go.
0.7993
FineWeb
1.4375
Bradenton Christian School was established in 1960 with the goal of providing an academically rich education built on the infallible Word of God. Over the years, the curriculum offerings have expanded and include both Christian and secular texts to provide the best possible educational tools. Yet each subject is taught from a Christian perspective to ensure each child understands how the Word of God applies. Admission is offered to students with a broad range of academic abilities. Even so, BCS students consistently score an average of one and a half to two years above their grade level on the Iowa Test of Basic Skills. This is an assessment tool given each year through grade 7. The curriculum elements include: - Social Studies - Language Arts - Resource Room - Physical Education - Bible / Spiritual Development - Band / Strings / Music Appreciation (Grades 5-6) Innovative activities inside and outside the classroom bring learning to life.
0.9964
FineWeb
2
Excel always interprets the "." key on my keyboard as ",". My regional settings are correct ("," is the decimal separator) and my keyboard layout is correct (FR-BE). I can't find how to change this behavior, which is specific to Excel (and PowerPoint) only. All the other applications use the actual keyboard key (".") but Excel instead uses the decimal separator from the regional setting. In my case, the sign on the numeric pad of my keyboard is a dot, not a comma, so it's NOT a decimal separator. How can I change that behavior? What is the operating system? Which version of Excel is installed on the computer? You may try following these steps: Excel 2000 – 2003: Tools menu > Options > International tab > Separators > check 'Use system separators' > then click OK. Excel 2007: Office button > Excel Options > Advanced > Editing options > check 'Use system separators' > click OK. Click on Start > Control Panel > Clock, Language and Region (Regional and Languages) > Change the date, time, or number format > click the Additional settings button. This displays the Customize Format window where the decimal separator is defined.
0.5006
FineWeb
2.5625
Why should we care about lake mud? Part II The Great Lakes hold 20% of the Earth’s surface fresh water, and are important natural and economic resources for the US and Canada. During the past several thousand years they have been strongly influenced by climate change and the evolving glacial landscape of the Great Lakes region. Lake Erie, the shallowest, has been very sensitive to environmental changes during its Holocene evolution and also to human influences during the modern era. Lake level changes impact erosion rates; temperature shifts affect productivity and water chemistry; precipitation changes influence inflow from rivers and the Upper Great Lakes. Understanding these relationships and predicting future trends is important to maintaining these crucial natural resources. Also, understanding Lake Erie’s past climate is essential to predicting how the Great Lakes region will respond to both natural and human-induced climate change in the future. My senior thesis project investigates Lake Erie’s history by analyzing sediments deposited in the lake’s eastern basin during the Holocene (to about 3,500 years ago). Since changes in the physical, biological, and chemical proxies found in these lake sediments can be influenced by a variety of factors, clear identification of the primary factor or factors acting at the time of deposition is not always possible. However, good interpretations can be made based on a critical analysis of the combined data. In these Lake Erie sediments, we use a variety of proxies, looking at relationships between them to better understand the lake’s paleo-depositional environment. How do we know when proxy changes happen? (Or, how do we date mud?) Cores can be correlated by matching magnetic susceptibility peaks that appear in sediment across the central and eastern basins. Radiocarbon dates from above the magnetic susceptibility shift are out of stratigraphic order, indicating contamination or sediment re-working. An approximate age of about 2900 14C yrs BP for the shift in our Station 23 sediment was estimated from radiocarbon dates immediately above and below the shift. CONCLUSIONS (so far) Multi-proxy data from a Lake Erie sediment core indicate a warm climate event, peaking at about 2900 14C years BP, followed by a period of greater climate variability. Lake Erie’s climate record differs from New York lake records, potentially indicating high regional variability. Understanding the response of Lake Erie to climate change is crucial to predicting and preparing for future changes. Because it has such a shallow basin, Lake Erie’s water levels are particularly sensitive to climate. As we continue to see shifts in regional and global temperatures, and as human impact on the Great Lakes increases, we need to prepare for major environmental consequences. Lake level fluctuations will impact coastal wetlands, commercial shipping, pleasure boating, and beach erosion. Temperature and water chemistry changes will impact primary productivity, fisheries, and invasive organisms. Further high-resolution paleo-climate work needs to be done in order to address these concerns.
0.9945
FineWeb
3.515625
Finally, you can slim and contour the area below and around your chin and neck – without surgery. Dr. Covey is proud to be one of the first physicians to offer the new revolutionary Kybella procedure: the first and only FDA – approved, non-surgical treatment to reduce submental fullness, more commonly known as “double chin.” Submental fullness affects both men and women, and can be influenced by several factors such as aging, genetics and weight gain. Submental fullness is often resistant to diet and exercise and can detract from a balanced facial appearance- resulting in an older and heavier look. According to a 2014 survey conducted by the American Society for Dermatologic Surgery, 68 percent of people said they are bothered by their double chin. With Kybella, we can achieve surgical results without the pain and downtime typically associated with traditional surgery. These injections have the potential, even, to replace liposuction. Kybella is a series of injections in the chin and neck area to contour and improve the appearance of moderate to severe submental fullness due to submental fat. The active ingredient in Kybella is deoxycholic acid, a naturally-occurring molecule in the body that aids in the breakdown and absorption of dietary fat. When injected into the fat around the chin and neck, Kybella causes the destruction of fat cells. Once destroyed, the cells in the treated area can no longer store or accumulate fat so re-treatment is not expected. Kybella injections are tolerable, as topical anesthesia is used to numb the skin. Post treatment, you can immediately return to work or normal daily activities as Kybella treatments require no downtime. Benefits of Kybella: - Kybella is safe, effective and non-invasive - Kybella requires no downtime – you can resume normal activities immediately - Kybella treatments are performed in approximately 15-20 minutes - Kybella is capable of providing permanent improvement to treated areas Frequently Asked Questions How many treatments are necessary? Dr. Covey will provide a tailored treatment plan depending on your needs and aesthetic goals. A series of injections will be administered at each treatment session. Usually, two to four treatment sessions will achieve your desired look. How often will I need to visit Dr. Covey for treatments? Kybella treatments are usually spaced a month or more apart. How long do the results last? Kybella works by destroying the unwanted fat cells so they cannot store or accumulate future fat in the treated areas. So, your newly contoured look will last, and last.
0.5524
FineWeb
1.351563
Changes in literacy practices, created by rapidly evolving technologies, have had many implications for the teaching and learning of literacy. This synthesis will reflect on the ten annotated articles to highlight how literacy teaching and learning has changed, and how teachers can best assist students in their learning. Traditionally, literacy was taught via approaches such as "drilled in skills", or in immersion processes, drowning students in experiences of print and visuals prior to developing semantics, syntax, or phonological skills (Henderson, R. slide 3). These pedagogies were at a time when texts were explicitly from a two-dimensional print-based world of books and images (NSW Department of Education and Training, p. 3). Nowadays, the very concept of "text" encompasses print and digital modes through what Cope and Kalantzis (2009) define as Learning by Design. These designs set out how students make meaning in all modes of texts via the linguistic, visual, audio, gestural, spatial and multimodal aspects (New London Group, p.78). Having the ability to comprehend or interpret the design modes in literacy ensures students become multiliterate in today's technological society. This requires not only a cognitive practice but also having an understanding, or an awareness, of the social concepts (Anstey and Bull, p.), aiming to empower students through literacy to read the "word and the world", encouraging them to firstly identify texts as social constructions and then to analyse their meanings (Freire & Macedo, 1987). It is little wonder that students are more adept at newer technologies than teachers, as these technologies have embedded themselves into the culture of the students, taking on complex roles and new mindsets in regard to communication (Asselin and Moayeri, p.1). Blogs, Skype and texting are just a snippet of the new forms of communications, transforming the very act of literacy learning, and progressing at such a rate that pedagogical practices are falling behind (Marsh, p.13). There is no one right way to teach literacy skills, but there are a number of pedagogical approaches that benefit the learning processes; didactic teaching, discovery-based and exploratory approaches are just a few (National Curriculum Board, p.16). These styles provide grounded experiences that are meaningful to students and relatable to their personal experiences both in and out of school. Structured dialogue is another approach teachers can take on board when teaching as it builds on robust learning environments and improves learning outcomes (Abbey, 2010). Dialogue, along with pedagogies and technologies, develops the cognitive mind as well as having social functions that enhance students' vocabulary. There are also a number of frameworks that assist both with the teaching and learning of literacy, and the Four Resource Model is one framework that lends itself to all subject areas (Santoro, p.52, Stewart-Dore, p.6), seeing literacy taught across all domains of the curriculum, not just in English. Faced with a digitally driven and globalised world, teachers must adopt a pedagogy of multiliteracies and embed the new technologies into the learning frame in order to develop inclusivity, cultural knowledge and connectedness to the real world (Mills, p.7). Abbey is an Australian consultant and researcher with much experience in the government and community sectors.
His article explores the benefits of structured dialogue and examines a four-dimensional model and a stage-by-stage process for teachers to implement in the learning environment. Through his research, he suggests that new pedagogies and technologies need to align in order to bring optimal performance in the classrooms. Abbey argues that pedagogy and technology need to merge in order to transform classroom conversations into a structured dialogue, developing cognitive as well as social functions. How students are read to is just as important as how often they are read to, as this will enhance their vocabulary. Anstey and Bull's article explores the term multiliteracies and the skills required by students to be cognitively and socially literate within the technology used. The implication for pedagogy begins at examining what constitutes text in an age of multimedia. Previously, education worked within paper-based text, hence a linguistic semiotic system dominated literacy pedagogy; however, as texts are increasingly multimodal, the term "literate persons" requires knowledge of all five semiotic systems as well as an understanding of how they work together. This means that teachers need to help students explore the changing nature of texts as they develop understandings about them. Asselin and Moayeri's article offers examples of classroom practices drawing on social elements of "social webbing" (Web 2.0), which they believe are necessary in extending students' ideas of new literacies. Expanding literacies for learning with Web 2.0 include criticality, metacognition, reflection and skills, all needed for creating and publishing, yet schools still largely confine themselves to Web 1.0 for games/activities and resources. The authors suggest social bookmarking sites as examples of collaborative cataloguing and indexing tools due to their collaborative nature of ranking information based on the number of people who have bookmarked them. The use of these technologies provides students with a collaborative environment, with them being active participants in the development of new social literacy practices. Cope and Kalantzis refer to the New London Group's theory of multiliteracies pedagogy. They believe that due to a changing world and changing environment, pedagogy needs to change also. Instead of the traditional basics of reading, they call for a transformative pedagogy, allowing the learner to actively analyse and apply meaning making in four major dimensions of teaching. This article suggests that empirical activities will aid in the development of strategies for diversity among students, enabling equity within the classroom and enabling students to be active participants in their learning, providing them with the framework to be literate participants in society. Marsh's analysis suggests that schools take into account the way in which students are engaged in "innovative literacy practices" in order to adopt productive pedagogies. Because of the range of learning opportunities afforded by digital technologies, new pedagogical approaches are required in schools if the content is to be engaging and appropriate, and if students are to become competent and effective analysers and producers within a range of multimodal texts. Marsh draws on Bernstein's (2000) Pedagogic Recontextualizing Field in relation to literacy learning and education to critique two different pedagogies (the National Literacy Strategy and Productive Pedagogies).
Schools need to revisit how they teach literacy, and Information and Communication Technologies, and attempt to meld the two in order to achieve a more productive pedagogy. Mills' research paper looks at the findings of research regarding the interactions between pedagogy and access to multiliteracies among culturally and linguistically diverse learners. Ms Mills conducted her research in an upper primary classroom in a low socio-economic area, using the multiliteracies pedagogy and critical ethnographic methodology. Unfortunately, the observations made by Ms Mills showed the teacher's relapse into existing pedagogies and traditional text, thus prohibiting access to culturally diverse textual practices and multimodality. This article highlights the shortcomings of putting theories into practice in classrooms, as well as the importance for teachers to constantly re-evaluate their pedagogical beliefs and practices. The New South Wales Government delves into how digital technologies affect learning environments via teacher pedagogy, the nature of the learner, and reading and writing. They acknowledge that although these are still central to being literate, globalisation has created new literacy needs, which should equip students to become critical creators and consumers of the information they encounter. They draw on a range of frameworks they believe are influential in determining curriculum content, yet applying these frameworks alone does not ensure success in literacy learning amongst students. Pedagogical beliefs and knowledge in technology are also important, ensuring teachers have understandings of what technology and media do. Educators need to adjust their literacy practices in order to stay at least on par with the changes occurring in literacies. Santoro's perspective in this article is that literacy learning is a complex set of practices operating within a variety of texts and within certain sets of social situations. He contrasts this to teachers who believe that once students have learnt to read and write, they are able to do so in all contexts. Santoro counters these beliefs by pointing out that there are many distinctive school and social literacies characterised by written, oral, aural, visual, digital and multimodal texts. Santoro advocates the use of the four-resource model as a "valuable tool" for middle-years teachers and student teachers. Students need to be strategic learners, acquiring a multitude of skills and strategies enabling them to gain, construct and communicate new knowledge whilst building higher order thinking skills and experiences, according to Stewart-Dore. He examines the popular reading frameworks and touches on their shortcomings (linear, systematic progression, lacking in critical reflections regarding contents and processes). In turn, Stewart-Dore proposes an alternative framework through the Practicing Multiliteracies Learning Model, comprising four phases: accessing knowledge, interrogating meanings, selecting and organising information, and representing knowledge. This article suggests that teachers require some guidelines ensuring their teaching strategies are appropriate to literacy education. The New London Group argues that the cultural and linguistic diversity occurring in society calls for extensive views on literacy rather than traditional language-based approaches. This article, written by ten academics, is concerned about the changes occurring in literacy due to globalisation, technology and the social and cultural diversity.
It was through them that the term "multiliteracies" was coined, acknowledging the many diverse ways that literacy is used. This new approach to literacy pedagogy combats the "limitations of traditional" pedagogies, taking on a transformative approach by introducing the "what" and "how" of literacy pedagogy. This article has been very influential regarding literacy within the educational system.
0.9935
FineWeb
3.5625
I'm currently developing a game, and one of the features involves breeding the creatures you collect. Since it's a game, a large amount of promiscuity is to be expected, and individuals could potentially have hundreds of siblings. How exactly can I go about presenting this information? My current setup is along these lines: |Grandparent 1|Grandparent 2|Grandparent 3|Grandparent 4| | Parent 1 | Parent 2 | |Individual | Siblings (if any) in list form | |Partner 1 | Children of Individual and Partner 1 | |Partner 2 | Children of Individual and Partner 2 | ..... It works, and by clicking on a relative you can make them the focus of the tree. But it just seems clunky and I don't think it's particularly user-friendly. Can anyone suggest a suitable way to go about presenting this information?
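Before settling on a presentation, it can help to separate the data model from the display. Below is a hypothetical Python sketch (the names and fields are invented for illustration and are not taken from the question) showing one way to store only parent links and derive the per-partner child groupings that the table above displays:

```python
# Hypothetical data model: each creature keeps references to its two parents,
# and relatives are derived rather than stored, so huge sibling counts stay
# manageable and the UI can group children by partner on demand.
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Creature:
    name: str
    mother: Optional["Creature"] = None
    father: Optional["Creature"] = None
    children: list = field(default_factory=list)

def breed(a: Creature, b: Creature, child_name: str) -> Creature:
    child = Creature(child_name, mother=a, father=b)
    a.children.append(child)
    b.children.append(child)
    return child

def children_by_partner(individual: Creature) -> dict:
    """Group an individual's children by the other parent (one row per partner)."""
    groups = defaultdict(list)
    for child in individual.children:
        partner = child.father if child.mother is individual else child.mother
        groups[partner.name].append(child.name)
    return dict(groups)

# Usage: the focus creature's tree rows, with children collapsed per partner.
a, b, c = Creature("Asha"), Creature("Brontu"), Creature("Cinder")
breed(a, b, "Kit1"); breed(a, b, "Kit2"); breed(a, c, "Kit3")
print(children_by_partner(a))  # {'Brontu': ['Kit1', 'Kit2'], 'Cinder': ['Kit3']}
```

With relatives computed on demand, the display layer is free to paginate or collapse long sibling lists without changing how the data is stored.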
0.7314
FineWeb
1.195313
Mangle: Hide your e-mail address from spammers. This script changes a spam-proof e-mail address into a readable, mailto address link. Simply click inside the window below, use your cursor to highlight the script, and copy (type Control-c or Apple-c) the script into a new file in your text editor (such as Note Pad or Simple Text) and save (Control-s or Command-s). The script is yours!!!
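The script itself is not reproduced above. Purely to illustrate the general idea (the real Mangle tool is a browser script, and this is not its code), here is a small Python sketch of turning an assumed spam-proofed form of an address back into a mailto link:

```python
# Illustration only: one assumed obfuscation scheme and the HTML it produces.
def demangle(obfuscated: str) -> str:
    """Turn 'user AT example DOT com' back into a normal address."""
    return obfuscated.replace(" AT ", "@").replace(" DOT ", ".")

def mailto_link(obfuscated: str, label: str = "Email me") -> str:
    address = demangle(obfuscated)
    return f'<a href="mailto:{address}">{label}</a>'

print(mailto_link("webmaster AT example DOT com"))
# <a href="mailto:webmaster@example.com">Email me</a>
```

The point of the technique is that the readable address never appears verbatim in the page source, so simple address-harvesting bots miss it.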
0.8048
FineWeb
0.625
The impact plan sets out what the prospective impact is, and how the organisation proposes to generate it. The assessment of impact risk appraises the plan for its validity, and for the confidence it inspires that the organisation, through carrying out its activities and delivering its outputs, will achieve the intended outcomes, and generate real positive change. - impact risk Impact risk is a measure of the certainty that an organisation will deliver on its proposed impact, as detailed in the impact plan. The question implied is: How sure is the impact plan to work, and what is the risk that the impact won’t be generated? An assessment of impact risk looks to the impact plan for six key qualities: Is the impact plan explicit in all particulars? The starting point for any structured and rational treatment of impact is being explicit. This involves ensuring that the impact plan displays: The impact plan articulates clearly each of its components and the linkages between them. This includes setting out what will be done, what processes will be used, and how the activities — within the defined context, and in combination with other conditions — will bring about the desired change. The impact plan is specific and concrete about what is to be used (resources, budget), who will be effected (target beneficiaries and their context), what is to be achieved (how much, how many), and the timelines involved (when will the activities be carried out, and the change happen). The impact plan is concrete also regarding the measurement system that will be used to track what is taking place. The impact plan gives a fair, true and complete picture of the processes and changes it presents, including implicit claims and assumptions, and appropriate consideration of how the change relates to other factors and the surrounding environment (including impacts upon other stakeholders). These are covered in the conditions for change and context of change sections of the impact plan. An impact plan that covers only the organisation’s own processes, with no address of the context, is deemed to be incomplete. A full address of the context, and all the ramifications of change (including deadweight, displacement, attribution, drop off, and unintended consequences), is likely to be beyond the scope of most impact plans, and the organisation must therefore make an assessment of materiality — i.e. a determination of the bounds of what is relevant and material to include in a true account of the impact. The impact plan is explicit as to where these bounds of materiality lie. The information that is deemed material is therefore provided, and gaps or holes in the information, or links that are unproven, are acknowledged and justified. Does the impact plan present a compelling and well-reasoned theory of change? Once the impact plan and its various components have been laid out explicitly, attention turns to how well reasoned an overall narrative or theory of change it presents. Pertinent questions include: - Do the mission and activities express a coherent response to the context (i.e. the problem and the target beneficiaries)? - Is the link between the proposed outputs and the anticipated outcomes thought-through and convincing? Do the outputs really drive the outcomes? Have the conditions for change been addressed, and their role in the change soundly reasoned? - Is the address of the context of change credible and fair, with the bounds of materiality set at a sensible level? 
A full address of the context of change can most likely only be achieved through conducting a control experiment (typically a randomised control trial, or RCT). However this is often impractical given the resources and the scale of operations. Under such circumstances, investors and organisations are often reliant upon a reasoned treatment of the counterfactual (a hypothetical scenario of “what would have happened anyway, what is happening elsewhere, and the role of other factors” that can be used to deal with questions of deadweight, displacement, and attribution). There may be uncertainties, and therefore impact risk, around how the outcomes are really brought about, and how reliably they are a result of the organisation’s work. Most important to the impact is that the organisation can make a compelling case for how it plays a critical role in the desired change (i.e. without it the change wouldn’t have happened). Backwards-mapping can be a powerful tool for testing the reasoning involved throughout the impact plan. Is the generation of impact integral to the organisation’s business and operations? A form of impact risk may arise if there is a potential tension within the organisation between its impact-generating and revenue-generating activities. Where there is a clear financial motive for the organisation to pursue less impactful strategies, and the business and impact interests are in this sense not well-aligned, there is a risk that the operational needs of the business will threaten the impact. This risk however is greatly reduced if the impact plan is integral to the organisation’s business strategy, operations, and revenue model. In this case, the business plan clearly supports the impact plan, with impact and operational sustainability going hand in hand. Where there is tension and potential risk regarding the integration of impact into the business model, the investor may look to some form of mission lock or protection via the governance or legal structure of the organisation (e.g. governance obligations, incorporation as a registered charity or CIC). Is the impact plan feasible? The question of feasibility focuses mainly on the links in the impact plan between the organisation, its activities and its outputs. For the impact plan to be feasible, it must show: - the organisation has the resources, capacity, skills and relevant experience to execute the plan - the operational risks inherent in the plan are identified and addressed, with measures in place to mitigate them where appropriate A significant aspect of the overall feasibility of the plan will relate to the financial and operational strength of the organisation. This however will generally fall within financial due diligence considerations, and typically go into a credit rating, and be given separate consideration. The question of feasibility, for impact risk therefore, focuses on those aspects not covered in the financial analysis — i.e. assuming credit-related issues are secure, is the impact plan feasible in other respects? This may include attention to: - key personnel Does the organisation have the right people to carry out the plan with respect to impact, with the necessary skills and relevant experience, as well as the vision, leadership and drive? - operational processes Does the organisation have processes in place to manage activities, and ensure they are reaching the right beneficiaries, and having the desired effect? Are the activities an effective means to deliver the desired outputs? 
Does the organisation have the staff, time, technology and facilities required to carry out activities? - projections around other factors Where the impact is reliant upon factors beyond the organisation’s direct control (e.g. conditions in the local economy, support or services to be delivered by other organisations, among the conditions for change), and assumptions are therefore made about them, are these assumptions feasible? Is there evidence to support the impact plan’s approach to impact generation? Evidence may include: - track record The organisation has carried out similar activities in the past, with robust impact measurement of past performance demonstrating the validity and effectiveness of the approach. For evaluating the track record, see quality of information and verification of results (in 4.2 Impact Reporting). To be considered as convincing evidence, a track record must demonstrate a change in the measured outcome (typically involving pre- and post-intervention measurements), and that, where used, samples are representative, and survey questions are neutral and non-leading. An independent evaluation of the activities and outputs of the organisation, where available, provides the best evidence on this front (and thereby lowest impact risk). The track records of other organisations, working with similar methods and assumptions, and again appropriately evidenced by measurement, may be used to demonstrate the validity of the approach. Studies or relevant expert knowledge may be used to back up the claims involved. Research can situate the organisation’s approach in the context of the problem and other relevant interventions, which it may align with or differ from according to the position taken. Research may in particular be used to support the assumptions implicit in the conditions for change, and the treatment of the counterfactual in the context of change. Where available, research on benchmarks can provide an anchor for the organisation’s past results and proposed future performance. - control groups The most conclusive evidence of the effectiveness of an intervention is to demonstrate through the use of a control group the difference between the outcomes achieved when the organisation is active, and when it is not. This, properly speaking, is the demonstrable impact: the real change brought about as a clear result of the organisation’s work. However, while randomised control trials (RCTs) represent the gold standard in evidence, they are expensive to carry out, and require specialised skills. It is also important to note that RCTs are significantly more practicable, and therefore favour, interventions of a very specific nature, with easily isolated, testable, and relatively short-term outcomes. Furthermore, RCTs are meaningful only when the sample sizes are large enough for other factors to cancel each other out, and therefore are often applicable only when the intervention is taking place at a relatively large scale. While all this means that it is unlikely there will be a widespread adoption of RCTs throughout the social-purpose sector anytime soon (and especially not at the early-stage end of the spectrum), the lesson is nevertheless a powerful one: that for an intervention to be truly valid, it must be able to outperform a control group. 
If a specific control group is not set up and monitored, then some evidence as to what such a control group might look like, typically based on research with comparable situations elsewhere, can serve to lower impact risk significantly on this front. The availability of a track record, precedents, extensive research, and control groups, will depend on a combination of the organisation’s stage of development, and the originality of its approach. Rarely will an organisation be able to provide an exhaustively evidenced treatment of the change, and its interplay with other factors, though it is important to look at what evidence there is, and to consider the impact risk it leaves. Evidence, in so far as it is available, should serve to promote confidence in the impact plan, and in particular in the relationship between the organisation’s proposed activities and outputs, and the outcomes and impact that it is hoped will follow. For an organisation proposing a completely new idea, and therefore with little or no direct evidence of how well it works, there may still be relevant research it is responding to, and that has informed the development of the approach (i.e. less proving the approach than showing how different approaches have failed in the past, and how this one learns from them). However an organisation working with well-established methods will inevitably have more to draw upon regarding evidence. As a result, excessive investor demand for high levels of evidence would lead to an inevitable bias toward mature organisations working with tried and tested methods, at the expense of investing in innovative, and in some cases possibly more effective, forms of intervention. The balance between conflicting desires for the impact plans to be, on the one hand evidenced, and on the other, to deliver something new, will depend upon an investor’s mission, strategy and appetite for impact risk. A less well-evidenced, and therefore riskier, approach may ultimately prove to be game-changing, and thereby high impact. These considerations will play into the investment decision when weighing impact risk against other criteria. Where there is less evidence available, it becomes increasingly important, with regard to impact risk, for the impact plan to be convincingly reasoned (see 2.2.2 above), and evidenceable (see 2.2.6 below). Will the impact be evidenced by carrying out the impact plan? An evidenceable impact plan is one that incorporates processes to ensure that carrying out the plan will produce sufficient evidence to demonstrate the outcomes and impact, and prove the approach. This requires that: - a robust impact measurement system is in place to track outputs and outcomes - where a link, relationship, assumption or claim is unproven, it is identified, and checks are in place to validate it in the future - measures will be taken to assess the other factors involved and the true role of the organisation’s outputs in the change (i.e. there is an anticipated address of the conditions for change and context of change — e.g. 
a reference is identified, or a control group set up, to establish a sense of what happens without the intervention, and to provide a degree of evidence in support of the hypothetical scenario of what would have happened anyway, what is happening elsewhere, and the role of other factors) - the anticipated evidence is inclusive of the beneficiary perspective (evidence features feedback from beneficiaries, and is communicated to beneficiaries) The impact plans of potential investee organisations are likely to present theories, links and impacts that are under-evidenced, and in some cases altogether untested. However these may still be testable, and the subject of planned tests. For the confidence of the investor to be gained, it is crucial that the organisation can show effective measures are in place to evidence its impact going into the future, especially when there is a lack of evidence currently. The impact plan must be clear as to which parts are evidenced, which are unevidenced but will be evidenced by the activities and measurement system proposed, and which will remain essentially reasoned. The timeline for the evidence is also important: if an impact plan is full of unproven elements, the investor will want to know, if the investment is made, what evidence there will be to show whether or not the plan is working by year one, three, five etc.. As the organisation carries out its plan, over the course of operations, and the period of the investment, it is expected that more and more elements will become evidenced. Also, as the organisation matures and scales, its measurement system may be expected to grow in scope proportionally, thus expanding the range of evidenceable and subsequently evidenced aspects of the plan. This will correspond naturally with diminishing impact risk, as operations successfully manifest the impact. Alternatively, if the approach is failing, the presence of evidence systems will be able to show this, giving the organisation and the investor the opportunity to change course.
0.9856
FineWeb
3.109375
Color is practically the "lifeblood" of good design, in this case beaded jewelry designs. Color can work for or against your design. Color can set the mood of a jewelry piece. As an example: use a playful mix of bright colors to express a "happy" design. The key, really, is to "combine" colors harmoniously in such a way that the combination attracts the eye rather than repelling it. Here are some tidbits and practical tips on using colors that can work for your designs: - Use color schemes to build your design ideas with. I personally find it a lot easier to start working on a design idea using my favorite scheme (I usually go for a "monochromatic" look) as the framework. As a refresher, here are 4 of the most used color schemes for every serious artist: - Monochromatic—uses a key color (example: amethyst) in combination with its various tones, shades and tints (lightness and darkness) to achieve a balanced look - Complementary—uses a dominant/base color (example: brown) in contrast with the color directly across it in the color wheel; use the complementary color as accent - Analogous—the curious combination of colors right next to each other (example: red); may not be as vibrant as the complementary scheme though a lot richer than the monochromatic - Split-complementary—this is a variation on the standard "complementary" scheme in that it uses a key color (example: violet) in combination with its complementary color's two adjacent colors, achieving a higher contrast - Working around a theme will add character (personal signature) to your designs. With a color scheme in place, a chosen theme will guide the process of creating your design ideas. Choosing design themes can be really easy: it can be according to each season (example: winter), or style (example: classic), or culture (example: ethnic), or occasion (example: bridal) - Give careful consideration to different color symbolism across cultures (that is, of course, if you plan to sell jewelry across the globe!). Colors can convey different meanings as much as the written words. Here are a few samples of this cross-cultural color symbolism: - Black—it symbolizes death, as well as style and elegance in most Western nations. It also implies trust and high quality in China. - Red—expresses mourning for South Africans, but it signals good luck and fortune for the Chinese. It can also signify masculinity in some parts of Europe. - Yellow—distinguishes a feminine character in the US and many countries, but it can convey mourning in Mexico - Purple—is a symbol of expense for most Asian nations, but it signifies mourning in Brazil. It also expresses freshness and good health in many Western nations. - Green—it signifies hi-tech in Japan, but it is a forbidden color in Indonesia. It can also mean luck for Middle East nations - Blue—it symbolizes immortality in Iran - Pink—it is the symbol of femininity in the US and most Asian nations - White—it signifies mourning in Japan and other far eastern nations, but it also conveys purity and cleanliness in most Western nations - Brown—it means disapproval for the Nicaraguans The choice and combination of colors make up your color palette. Use your palettes to achieve a pleasant color harmony to make your jewelry designs stand out.
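For readers who like to experiment, the hue relationships described above can be computed numerically. This is a small illustrative sketch using Python's standard colorsys module; the 30-degree step used for "adjacent" wheel positions is an assumption, since physical color wheels vary:

```python
# Illustrative only: computing related hues for the schemes described above.
import colorsys

def scheme_hues(base_hue):
    """Return related hues (0-1 range) for a few classic color schemes."""
    step = 30 / 360                 # one wheel position, assumed to be 30 degrees
    comp = (base_hue + 0.5) % 1.0   # directly across the wheel
    return {
        "complementary": [comp],
        "analogous": [(base_hue - step) % 1.0, (base_hue + step) % 1.0],
        "split-complementary": [(comp - step) % 1.0, (comp + step) % 1.0],
    }

base = 270 / 360  # violet, as in the split-complementary example above
for name, hues in scheme_hues(base).items():
    rgb = [tuple(round(c, 2) for c in colorsys.hsv_to_rgb(h, 1, 1)) for h in hues]
    print(name, rgb)
```

The printed RGB triples are only starting points; beads rarely match screen colors exactly, so the scheme is a guide rather than a rule.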
0.9798
FineWeb
2.171875
PROVIDING for relatives comes more naturally than reaching out to strangers. Nevertheless, it may be worth being kind to people outside the family as the favour might be reciprocated in future. But when it comes to anonymous benevolence, directed to causes that, unlike people, can give nothing in return, what could motivate a donor? The answer, according to neuroscience, is that it feels good. Researchers at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, wanted to find the neural basis for unselfish acts. They decided to peek into the brains of 19 volunteers who were choosing whether to give money to charity, or keep it for themselves. To do so, they used a standard technique called functional magnetic resonance imaging, which can map the activity of the various parts of the brain. The results were reported in this week's Proceedings of the National Academy of Sciences. The subjects of the study were each given $128 and told that they could donate anonymously to any of a range of potentially controversial charities. These embraced a wide range of causes, including support for abortion, euthanasia and sex equality, and opposition to the death penalty, nuclear power and war. The experiment was set up so that the volunteers could choose to accept or reject choices such as: to give away money that cost them nothing; to give money that was subtracted from their pots; to oppose donation but not be penalised for it; or to oppose donation and have money taken from them. The instances where money was to be taken away were defined as “costly”. Such occasions set up a conflict between each volunteer's motivation to reward themselves by keeping the money and the desire to donate to or oppose a cause they felt strongly about. Faced with such dilemmas in the minds of their subjects, the researchers were able to examine what went on inside each person's head as they made decisions based on moral beliefs. They found that the part of the brain that was active when a person donated happened to be the brain's reward centre—the mesolimbic pathway, to give it its proper name—responsible for doling out the dopamine-mediated euphoria associated with sex, money, food and drugs. Thus the warm glow that accompanies charitable giving has a physiological basis. But it seems there is more to altruism. Donating also engaged the part of the brain that plays a role in the bonding behaviour between mother and child, and in romantic love. This involves oxytocin, a hormone that increases trust and co-operation. When subjects opposed a cause, the part of the brain right next to it was active. This area is thought to be responsible for decisions involving punishment. And a third part of the brain, an area called the anterior prefrontal cortex—which lies just behind the forehead, evolved relatively recently and is thought to be unique to humans—was involved in the complex, costly decisions when self-interest and moral beliefs were in conflict. Giving may make all sorts of animals feel good, but grappling with this particular sort of dilemma would appear to rely on a uniquely human part of the brain. This article appeared in the Science and technology section of the print edition under the headline "The joy of giving"
0.8689
FineWeb
3.09375
Humans look to nature for inspiration. Fortunately, cancer researchers don’t have to look too hard. Elephants and naked mole rats do exceptionally well at resisting cancer, and we are starting to learn why. Cancer is caused by mutations—a chance mistake in the genetic code. The greater the number of cells and the longer they live, the greater the chance of mutations. Elephants don’t go through menopause. By that logic, however, elephants—which have perhaps 100 times as many cells as humans—should have gone extinct from the sheer number of cancers they face. But only 5% of elephants die of cancer in comparison to more than 20% of humans. So why the discrepancy? A new study in the Journal of the American Medical Association has one answer. When researchers from a host of US universities studied the genome of elephants, they found 20 copies of the TP53 gene, which is known to help resist cancers by repairing damaged DNA. Humans have only one copy of that gene. Although the human lifespan has doubled in the last few centuries, it is only because of help from modern medicine. Elephants, on the other hand, have longer lifespans naturally and could not have that without the evolutionary advantage endowed by TP53. Also, unlike humans, as far as we know, elephants don’t go through menopause. So to ensure that elephant babies born to older females weren’t riddled with badly mutated genes, evolutionary pressure would have created resistance to DNA damage via more copies of TP53. The enigma of naked mole rats. The story of naked mole rats is even more inspiring. These weird creatures live underground, survive on little oxygen and food, are nearly blind, and, as far as we know, never develop cancer, even when researchers try to induce it through artificial means. Once the mutations set in, cancer cells proliferate by uncontrolled growth. This happens because the mechanisms inside the cell that usually regulate this process are broken. There are, however, external mechanisms that can help regulate this process. According to a 2013 study published in Nature, naked mole rats seem to exploit this mechanism to resist cancer. The study found a polymer in between the cells of a naked mole rat, called hyaluronan, which was providing mechanical strength to the cells but also regulating cell growth. The thickness of the polymer determined whether cells grew or not. When the researchers used an enzyme that degraded the polymer, they found that the rats’ cells started to grow in clusters, just like normal rats’ cells do when they form a tumor. Even better, when they knocked out the genes responsible for producing the polymer and then injected a cancer-causing virus, the rats’ cells became cancerous. We may not yet have a cure for cancer, but such exceptional cases give hope. As Rochelle Buffenstein, a physiologist at the University of Texas Health Science Center, once told me, “As we learn more about these cancer-resistant mechanisms that are effective and can be directly pertinent to humans, we may find new cancer prevention strategies.”
0.9111
FineWeb
3.78125
There is a right answer to that. But it may not be what you’ve been taught. So in this podcast, Bill and Bryan will review the difference and what the new model of selling requires from you. Here is the secret: Better positioning leads to less need for persuasion. Listen in and learn! Also mentioned in the podcast: - Want a second opinion on your slide deck? Bill and Bryan offer to help 2 people out! Send your slide deck to [email protected] and we’ll let you know if you’ve been chosen. - The Golden Circle – How Great Leaders Inspire Action by Simon Sinek - A clip from Vacation with Chevy Chase
0.5647
FineWeb
0.703125
Dr. Erich Jauch is a mathematics instructor at UW-Eau Claire. He currently teaches Algebra for Calculus for UW Independent Learning. He enjoys teaching introductory math courses and working with students at the beginning of their mathematical journey. Recently, during a revision of Algebra for Calculus, Dr. Jauch added open educational resources to the course, removing the cost barrier of a textbook and online homework platform for students. He also added two types of activities to incorporate equity, diversity, and inclusion (EDI) principles and connect with his asynchronous, self-paced students. Moreover, the course is mindful of reducing math anxiety in students. The first activity is a series of math chats. In every unit, students are given a space to ask or answer a question about the material covered or read and reflect on an article pertaining to a mathematical topic. Through the math chats, students are able to: • Discuss diversity within the math community • Highlight the work of underrepresented mathematicians • See fun applications of math, like the “mathematically perfect” way to slice a pizza ✅ See an example of a math chat discussion The second activity is a three-part math mystery. Students apply course concepts to a fictional story about an international criminal stealing precious artifacts. Before and after students solve problems related to the math mystery, they are asked to reflect in discussions: • First, students discuss the concepts they might apply to the problem in an introductory discussion for each math mystery scenario. • Second, after they’ve worked out the problem and seen the answer, students complete a reflection discussion on what they understood from the activity, what they struggled with, and how they might apply concepts in the future. ✅ See an example of an introductory discussion and a reflection discussion from the math mystery activity. These discussions help reduce math anxiety and create equity by allowing students to see how others are thinking about and approaching the problems in the activity. The math mystery activities keep the focus on the learning process, not just the correct answer, by asking students to reflect on their solutions to the problems. In this spotlight, Dr. Jauch gives us more details about adding EDI to math courses and the benefits of these activities for students. Often, math and science courses are perceived to be “difficult” to incorporate EDI principles into. What has helped you include more EDI in your courses? While trying to source these principles from classic material is certainly more difficult, if we take the time to look we can find many opportunities to witness EDI topics in mathematics. Especially if we are willing to look into the applications of mathematics. Can you give a brief description of how these strategies work in your course? Tell us what students are expected to do when they complete this activity. How are they evaluated and what kind of feedback do they get? The math mysteries are a way for students to work through some problems that are interconnected and in a fun and playful way. Too often students are given math problems as busy work, so these were designed to be light-hearted but also an assessment of their abilities to that point in the class. Additionally, the types of problems were selected to best fit the written setting. The main process of the assignment is for students to first complete a pre-assessment of the topics and skills they may need for the assignment. 
Then they complete the worksheet by hand and upload their work to Canvas. Afterward, they are presented with partial answers and asked to reflect on the experience. Can you talk a little more about developing and including these strategies in your course? With the course being fully online, one benefit of the math chats is an opportunity for the students to interact with each other and see different perspectives about interesting current and EDI topics. This was important to me because student interactions are an important piece of a standard class and this brings it to an IL course. It was important however to not link the score [course grade] to the interactions as the number of students concurrently enrolled can vary greatly. What advice would you have for other faculty who may want to try similar activities in their courses? Be willing to look outside the normal topics covered in your course that are accessible to students. There are usually many modern topics that students have an interest in that you can make approachable to them. Using OERs, adding opportunities to reflect and collaborate, and reducing student anxiety are effective ways to add more equity, diversity, and inclusion into a course. Our course reflection tool is also a helpful resource when considering EDI-related changes to your course. Reach out to your instructional designer if you want to learn more!
0.9806
FineWeb
2.90625
This might make a solid link to share with some folks: I've installed Kali Linux, or I'm trying to install it. Why is it so hard? Why doesn't it recognize my hardware? Why do I need to set up so many things manually? Why can't I install the application...
1
FineWeb
0.890625
It’s never too soon (or too late) to start saving for retirement. We’ll find a plan that works for you. - No Setup or Maintenance Fees - Tax Advantages* - Competitive interest above standard savings rates - Traditional and Roth IRA options - No setup charges - No monthly or annual maintenance fees - $5,500 contribution limit per year - Additional $1,000 "catch-up" contribution allowed for ages 50+ - Funds can be used to purchase CDs within IRA - $500 minimum deposit to open *Consult a tax advisor. When do you want to enjoy your tax advantage? A traditional IRA provides potential tax relief today, while a Roth IRA has the potential for the most tax benefit at the time of retirement. - No income limits to open - No minimum contribution requirement - Contributions are tax deductible on state and federal income tax* - Earnings are tax deferred until withdrawal (when usually in lower tax bracket) - Withdrawals can begin at age 59½ - Early withdrawals subject to penalty** - Mandatory withdrawals at age 70½ - Income limits to be eligible to open Roth IRA*** - Contributions are NOT tax deductible - Earnings are 100% tax free at withdrawal* - Principal contributions can be withdrawn without penalty* - Withdrawals on interest can begin at age 59½ - Early withdrawals on interest subject to penalty** - No mandatory distribution age - No age limit on making contributions as long as you have earned income *Subject to some minimal conditions. Consult a tax advisor. **Certain exceptions apply, such as healthcare, purchasing first home, etc. ***Consult a tax advisor. - Set aside funds for your child's education - No setup or annual fee - Dividends grow tax free - Withdrawals are tax free and penalty free when used for qualified education expenses* - Designated beneficiary must be under 18 when contributions are made - To contribute to an ESA, certain income limits apply** - Contributions are not tax deductible - $2,000 maximum annual contribution per child - The money must be withdrawn by the time he or she turns 30*** - The ESA may be transferred without penalty to another member of the family *Qualified expenses include tuition and fees, books, supplies, board, etc. **Consult your tax advisor to determine your contribution limit. ***Those earnings are subject to income tax and a 10% penalty.
0.8173
FineWeb
1.242188
10.14 Young People, Risk and the Benefit of Saving Early We know that teens take risks, but we may not know why or that this tendency can continue even into our thirties. This is important for parents who waste much breath—and for growing money, too. A recent New Yorker article presents the two dominant neuroscience theories for why teens embrace risk. Neurologist Frances Jensen asserts that the electric lines from all over the brain to the frontal lobe are not fully developed until our twenties or even thirties. The frontal lobe is the center of planning, self-awareness, and judgment, so if it doesn’t receive enough impulses, it can’t exercise those functions to override poor decisions. The young aren’t heedless; they simply lack proper wiring. The second is Laurence Steinberg’s theory that the pleasure center, the nucleus accumbens, grows from childhood to its maximum size in our teens and declines thereafter. Therefore at puberty our dopamine receptors, which signal pleasure, multiply. He says this is why nothing ever feels as good again as when we are teens, whether listening to music, being with friends, or other things not printable in a family newspaper. Steinberg maintains that teens are “no worse than their elders at assessing danger. It’s just that the potential rewards seem—and from a neurological standpoint, genuinely are—way greater.” Teen brains balance risk and reward and choose the greater risk for greater potential reward. But if the young are wired to enjoy now, when the rewards seem greater, they may miss the key element of growing money: time. Money is wet snow rolling down a hill. The money snowball grows larger the longer the hill. Ergo, the younger you are, the longer your hill, the more money you will have. Take Jo and Joe. Each of them saves the same amount but Jo starts at 21 and Joe at 40. Each increases savings each year at the same rate and earns the same return. Joe’s savings never catch up to Jo’s; they remain 19 years behind hers forever. In fact, Jo could stop saving and investing at age 40 and Joe wouldn’t have the same amount until he’s 59. This is an artificial example, of course. There are all sorts of rational reasons for a late start—earning an advanced degree, investing in children, starting a business—but they are ones formed by more developed brains. So because we can see the obvious benefits of time, we must be creative to counter the money decisions of higher risk-taking young brains. My Dad required me to have a part-time job starting at 15, which eliminated after-school activities. However, he told me that so long as I saved x for college, the rest was mine. That got me over the resentment of missed activities and made me work longer hours, which not only produced spending money but also forced me to manage my scarce time better. Change “college” to any number of savings vehicles, and presto, you have an incentive plan. Brain science tells us young people take risks because they can’t help it and may miss the benefits of saving and investing earlier. A win-win approach like my wily Dad’s can work wonders.
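The Jo and Joe comparison is easy to reproduce with a few lines of arithmetic. The deposit size, 2% annual increase in contributions, and 6% annual return below are assumed figures for illustration only; the column does not specify them:

```python
# Rough sketch of the early-vs-late saver comparison, with assumed numbers.
def balance(start_age, end_age=65, deposit=3000, raise_rate=0.02, return_rate=0.06):
    """Balance at end_age for someone who saves every year from start_age."""
    total = 0.0
    for i in range(end_age - start_age):
        contribution = deposit * (1 + raise_rate) ** i  # savings grow a little each year
        total = (total + contribution) * (1 + return_rate)
    return total

print(f"Jo, saving from 21 to 65:  {balance(21):,.0f}")
print(f"Joe, saving from 40 to 65: {balance(40):,.0f}")
# With these assumptions Jo ends up with several times Joe's balance,
# even though each year's deposit is identical; the extra years do the work.
```

Changing the assumed rates shifts the exact totals but not the shape of the result: the longer hill always wins.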
Across a range of industries, data analytics are helping businesses to become smarter, more productive, and better at making predictions. Analysing data sets at a functional level constrains your thinking, but piecing them together opens new opportunities to extract value, such as understanding customer journeys, designing customer segmentation models and enhancing pricing. We can support you with:
- General portfolio reporting – we can assess performance trends, identify risk segments and highlight future areas of focus for your portfolios.
- In-depth portfolio analysis (one-off or regular analysis of data provided for externally managed portfolios).
- Business forecasting – facilitating pro-active capacity planning.
- Data cuts and reporting – provision of reporting for third parties (e.g. credit reference agencies).
- Self-servicing capability – providing online and mobile solutions for receipt of arrears payments.
- Consultancy to identify and implement intelligent arrears management strategies for your portfolios.
- Identifying portfolio trends for early arrears forecasting.
- Pricing loan pools for servicing as a part of a bid process.
- Data scrubbing for accuracy and conformity to your specified criteria.
Occupational bifocals and trifocals are specialized multifocal lenses created for specific jobs, hobbies or tasks. They are designed for people – generally over 40 – who have developed presbyopia, a condition in which the lens of the eye weakens and it becomes difficult to see objects that are close up. They differ from regular multifocal lenses in that the magnified power areas used to see close and intermediate objects are typically larger and positioned in a different area on the lens, according to the needs of the designated task. Occupational bifocal and trifocal lenses are intended for specific tasks and not for everyday use. Here are a few examples:
The most popular type of occupational lens is the Double-D lens. The lens is divided into three segments, with the top designed for intermediate vision, the bottom segment for near vision and the rest for distance. This design is ideal for people who need to see close both when looking down (to read something) and when looking overhead. Professionals who frequently use Double-D lenses are auto mechanics (who have to look overhead when under a car), librarians, clerks or office workers (who have to look at shelves overhead) or electricians (who are often involved in close work on a ceiling). They are called Double-D lenses because the intermediate and near segments of the lens are shaped like the letter "D".
E-D Trifocal Lenses
As opposed to Double-D lenses, which devote the majority of the lens to distance vision, E-D lenses focus on intermediate vision, with an area for distance on the top and for near vision on the bottom. These are ideal for individuals who are working at about arm's length away the majority of the time, such as on multiple computer or television screens, but frequently need to look up into the distance or close to read something. The "E" in the name stands for "Executive Style", which refers to the division between the top distance vision lens and the bottom intermediate vision lens running all the way across the lens. The "D" in the name is due to the fact that the near section in the bottom of the lens is shaped like a "D".
Office or Computer Glasses
Multifocal lenses designed for office work provide the largest section with an intermediate lens designated for viewing the computer screen and a smaller area for limited distance vision. You can have progressive or trifocal lenses that incorporate near vision as well.
That's right, there are even specialized lenses made for golfers! Golfers need to see a wide range of distances during their game, from their scorecard, to their ball on the tee, to the hole far away to line up their drive. In these lenses, the close segment is small and placed on an outer corner of one lens, to allow for brief close vision but not interfere with the distance game. Usually, right-handed golfers will have the lens on the right side and vice versa.
Standard multifocals can be redesigned to adapt to specific tasks or hobbies simply by changing the size, shape or location of the different segments. Many adults over 40 would benefit from having multiple pairs of multifocals to give optimal vision for different tasks or hobbies they enjoy. Note that occupational lenses are made specifically for the task they are designed for and should not be worn full-time, especially while driving.
Diabetes in Pregnancy
Gestational diabetes does not increase the risk of birth defects or the risk that the baby will be diabetic at birth.
Also called gestational diabetes mellitus (GDM), this type of diabetes affects between 3% and 20% of pregnant women. It presents with a rise in blood glucose (sugar) levels toward the end of the 2nd and 3rd trimester of pregnancy. In 90% of cases, it disappears after the birth, but the mother is at greater risk of developing type 2 diabetes in the future.
It occurs when cells become resistant to the action of insulin, which is naturally caused during pregnancy by the hormones of the placenta. In some women, the pancreas is not able to secrete enough insulin to counterbalance the effect of these hormones, causing hyperglycemia, then diabetes.
Pregnant women generally have no apparent diabetes symptoms. Sometimes, these symptoms occur:
- Unusual fatigue
- Excessive thirst
- Increase in the volume and frequency of urination
Importance of screening
These symptoms can go undetected because they are very common in pregnant women.
Women at risk
Several factors increase the risk of developing gestational diabetes:
- Being 35 years of age or older
- Being overweight
- Family members with type 2 diabetes
- Having previously given birth to a baby weighing more than 4 kg (9 lb)
- Gestational diabetes in a previous pregnancy
- Belonging to a high-risk ethnic group (Aboriginal, Latin American, Asian, Arab or African)
- Having had abnormally high blood glucose (sugar) levels in the past, whether a diagnosis of glucose intolerance or prediabetes
- Regular use of a corticosteroid medication
- Suffering from polycystic ovary syndrome (PCOS)
- Suffering from acanthosis nigricans, a discoloration of the skin, often darkened patches on the neck or under the arms
The Canadian Diabetes Association 2018 Clinical Practice Guidelines for the Prevention and Treatment of Diabetes in Canada recommends diabetes screening for all pregnant women, between the 24th and 28th week of pregnancy. Women with a higher risk of developing gestational diabetes should be tested earlier.
Two screening methods:
1. Most centres use a method done at two separate times. It begins with a blood test measuring blood glucose (sugar) levels 1 hour after drinking a sugary liquid containing 50 g of glucose, at any time of day. If the result is:
- Below 7.8 mmol/L, the test is normal.
- Above 11.0 mmol/L, it is gestational diabetes.
- If it is between 7.8 and 11.0 mmol/L, the attending physician will ask for a second blood test measuring fasting blood glucose (sugar) levels, then for blood tests taken 1 hour and 2 hours after drinking 75 g of glucose. This will confirm gestational diabetes if the values are equal to or greater than:
- 5.3 mmol/L fasting
- 10.6 mmol/L 1 hour after drinking the sugary liquid
- 9.0 mmol/L 2 hours after drinking the sugary liquid
2. The second method is the oral glucose tolerance test (OGTT), with a sweetened liquid containing 75 g of glucose and three blood tests. A diagnosis is made if at least one of the three blood tests has values equal to or greater than:
- 5.1 mmol/L fasting
- 10 mmol/L 1 hour after drinking the sugary liquid
- 8.5 mmol/L 2 hours after drinking the sugary liquid
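The two-step 50 g screen above translates directly into simple decision logic. The Python sketch below only restates the published cut-offs for illustration; the function names are invented, it is not a clinical tool, and any real decision belongs to the treating physician.

```python
def fifty_gram_screen(glucose_1h_mmol_per_l):
    """Interpret the 50 g glucose challenge (step 1 of the two-step method)."""
    if glucose_1h_mmol_per_l < 7.8:
        return "normal"
    if glucose_1h_mmol_per_l > 11.0:
        return "gestational diabetes"
    return "borderline: order the 75 g confirmatory test"

def seventy_five_gram_confirmation(fasting, one_hour, two_hours):
    """Step 2: any value at or above its threshold confirms the diagnosis."""
    thresholds = (5.3, 10.6, 9.0)   # mmol/L: fasting, 1 h, 2 h
    exceeded = [v >= t for v, t in zip((fasting, one_hour, two_hours), thresholds)]
    return "gestational diabetes" if any(exceeded) else "not confirmed"

print(fifty_gram_screen(8.4))                          # borderline result
print(seventy_five_gram_confirmation(5.0, 10.8, 8.2))  # 1 h value is over its threshold
```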
Risks and possible complications
There are numerous risks when gestational diabetes is not properly controlled and blood glucose (sugar) levels remain high.
For the mother:
- Excess amniotic fluid, increasing the risk of premature birth
- Risk of caesarean section or a more difficult vaginal birth (because of the baby's weight, among other reasons)
- Gestational hypertension or preeclampsia (high blood pressure and swelling)
- Higher risk of staying diabetic after the birth or of developing type 2 diabetes in the future (a 20% to 50% risk within 5 to 10 years of the birth)
For the baby:
- Bigger than normal at birth (more than 4 kg or 9 lb)
- Hypoglycemia (drop in blood sugar levels) at birth
- Risk of the baby's shoulders getting stuck in the birth canal during the birth
- Risk of obesity and glucose intolerance in early adulthood (especially if birth weight was above 4 kg or 9 lb)
Slight risk of:
- Jaundice, especially if the baby is premature
- Lack of calcium in the blood
- Breathing problems
Proper diabetes control considerably reduces the risks of complications. When gestational diabetes is diagnosed, a personalized meal plan should be developed to control the mother's glycemia. Generally, a healthy diet with proper portion control and distribution of carbohydrates (sugars), as well as a healthy lifestyle (stress management, enough sleep and physical activity), are sufficient to control gestational diabetes. If blood glucose (sugar) levels remain too high, the physician will prescribe insulin injections or, in some cases, oral antidiabetics.
Target blood glucose (sugar) levels for the majority of pregnant women:
- Fasting <5.3 mmol/L
- 1 hour after a meal <7.8 mmol/L
- 2 hours after a meal <6.7 mmol/L
The target values for controlling gestational diabetes differ from those of other types of diabetes.
Importance of a balanced diet
A balanced diet is essential for the control of blood glucose (sugar) levels and for a healthy pregnancy. When there is gestational diabetes, certain modifications need to be made to the mother's diet, including to the amount of carbohydrates in each meal. A carbohydrate-controlled diet is the foundation of the treatment. It is essential not to eliminate carbohydrates completely but rather to distribute them throughout the day.
Your meal plan
A dietitian will help you establish or modify your meal plan based on your energy needs. The dietitian will also advise you about the important nutrients to incorporate in your diet during your pregnancy. For more information about balanced meals, consult The Balanced Plate.
Importance of being physically active
Physical activity helps control diabetes during pregnancy and has numerous health benefits for pregnant women. It is recommended that most pregnant women do a total of 150 minutes of physical activity per week, ideally in at least 3 to 5 sessions of 30 to 45 minutes each. If you weren't active before your pregnancy, start gradually.
Safe cardiovascular activities (done at light to moderate intensity) during pregnancy include:
- stationary exercise equipment
- cross-country skiing
Consult your doctor before starting these activities and avoid physical activities where you risk falling, losing your balance or having sudden changes in direction (for example: soccer, badminton, etc.). Stay well hydrated before, during and after exercise, in addition to having with you at all times your blood glucose (sugar) meter and a source of rapidly absorbed carbohydrates in case of hypoglycemia.
Before engaging in physical activity, your insulin dosage may have to be reduced to limit the risk of hypoglycemia. Your medical team will help you adjust your dosage as required.
During the birth
During the birth, the medical team regularly monitors the mother's blood glucose (sugar) levels and adjusts treatment based on the readings. The baby's blood glucose (sugar) levels are also monitored in the hours following the birth.
After the birth
In the majority of cases, the diabetes disappears after the birth. However, the risk of developing diabetes in the future increases, especially if you keep your excess weight. To avoid this situation, you should maintain a healthy weight, eat a balanced diet and exercise regularly. Furthermore, it is recommended that you have a blood glucose (sugar) test between 6 weeks and 6 months after the birth to check whether your blood glucose (sugar) levels have returned to normal values. Before getting pregnant again, you should consult a doctor.
Breastfeeding is recommended for all women, diabetic or not. Mother's milk is an excellent food for your infant. Breastfeeding not only helps the mother lose the weight gained during pregnancy, it also reduces blood pressure and helps control blood glucose (sugar) levels and thus prevent type 2 diabetes. It also reduces the risk of obesity and diabetes later on in the child. The nutritional needs of nursing mothers are essentially the same as in the last trimester of pregnancy. It is recommended to start breastfeeding immediately after birth to prevent hypoglycemia in the newborn, and to continue for a minimum of 6 months.
See the list of high-risk pregnancy clinics (French only).
Research and text: Diabetes Québec Team of Health Care Professionals
Adapted from: Diabète Québec (2013), "Diabète et grossesse."
June 2014 (updated July 2018) ©All rights reserved Diabetes Quebec
Feig D, Berger H, Donovan L et al. Diabetes Canada 2018 Clinical Practice Guidelines for the Prevention and Management of Diabetes in Canada: Diabetes and Pregnancy. Can J Diabetes 2018; 42 (Suppl 1): S255-S282.
Canadian Paediatric Society (Feb 28 2018). Weaning from the breast [Online]. Found at https://www.cps.ca/fr/documents/position/sevrage-de-allaitement (Web page consulted on July 18, 2018).
Department of Archaeology Contributes to the Anthropocene Curriculum The Mississippi: An Anthropocene River initiative seeks to explore the ecological, historical, and social interactions between humans and the environment across the Mississippi River Basin. Scholars from both sides of the Atlantic are working directly with local and international scientists, social theorists, artists, and activists with interests and backgrounds spanning the biological and social sciences as well as the humanities and visual arts. The Department of Archaeology at the Max Planck Institute for the Science of Human History has teamed up with researchers from the Max Planck Institute for the History of Science and the Haus der Kulturen der Welt in Berlin, to better understand the extent of human impacts on the ecology of the Mississippi. The Anthropocene River initiative has proven to be an exemplary experiment in mingling intellectuals with diverse ways of thinking and ultimately producing a product greater than the sum of the constituent parts. Members of the Department of Archaeology have presented research at Anthropocene River conferences in Berlin and St. Louis, and three ongoing research sub-projects, directed by its scholars, fit into the broader mission of the Mississippi branch of the global Anthropocene Curriculum. The MPI for the Science of Human History researchers are probing the deeply intertwined history of humanity and the Muddy River, they are studying the legacy of human cultural practice in prehistory, and, ultimately, they are trying to understand the ways that ancient human populations shaped the course of the river and impacted the biotic communities that the river supports. The Mississippi is the drainage for the largest watershed in North America, it fosters some of the greatest temperate biodiversity on the continent, and facilitates the migration of millions of birds biannually. Additionally, it has been a life-supporting artery for humans for millennia, providing an abundance of aquatic and terrestrial resources, fresh water for irrigation, a rapid transit system, and a leach way, removing toxins and excess nitrogen from the Midwest. The river today would be unrecognizable to Audubon or Lewis and Clark; however, it still provides a life-support system to millions of Americans. In the face of recent political unrest in historically segregated towns along the course of the river, the recognition of the ongoing mass die-offs of birds and pollinating insects, hundreds of kilometers of dead zones in the Gulf of Mexico, and shrinking water tables, the research being conducted by the Mississippi initiative is more timely than ever. The projects spearheaded by the MPI for the Science of Human History seek to explain the processes that led to the Anthropocene. These scholars are studying the ripple effects that human impacts of the past have had on the landscape of today. Prior to the mid-1800s, herds of wild bison still existed across parts of the North American Midwest. These herds represented some of the last mass-herds of megafaunal grazers of the temperate north. Prior to the post-Pleistocene extinctions, dense populations of these megafaunal mammals would have had significant ecological impacts. While scholars have speculated about the role that these extinct animals played in ecological services, such as nitrogen cycling, woody vegetation suppression, and seed dispersal, there have been few attempts to systematically test these theories. 
Bison provide a unique case study in this regard as they: 1) were still providing these ecological services until two centuries ago and the impacts are still visible; and 2) are not extinct (unlike glyptodons or mastodons), and can, therefore, still be studied. Drs. Spengler and Mueller have theorized in their study Grazing Animals Drove Domestication of Grain Crops that the seed dispersal processes of the massive herds of bison shaped the ecology of the Midwest in a way that allowed the earliest farmers of this region to target certain plants that are rare on the landscape today. The bison herds may have concentrated these plants into easily harvestable wild fields, supporting the domestication of what some scholars refer to as the North American Lost Crops (Figure 2). To better understand the evolutionary links between the ancient bison herds and the progenitors of these Lost Crops, watch the short lectures presented by Dr. Spengler (Plant Domestication and Dispersal) and Dr. Mueller (Understanding the North American Lost Crops) at the Anthropocene conference held at Cahokia Mounds (Figure 1), outside St. Louis, last autumn. A more detailed discussion of the project is available to read on the Anthropocene Curriculum website.
The introduction of domesticated megafaunal animals, such as cattle and horses, has also provided clues to a greater understanding of the role of the extinct megafaunal herds of North America. European horses provide seed-dispersal services for several large-fruiting North American tree species, such as the Osage Orange, which may have been formerly dispersed by now-extinct Pleistocene horses. However, we know little about when horses were reintroduced into various parts of North America after European contact. Dr. Taylor of the MPI for the Science of Human History, as part of the Anthropocene River Project, is studying archaeological remains of horses across the American Midwest in an attempt to understand the rates of dispersal and how rapidly these animals were adopted by Plains Indian groups for transportation, bison hunting, and warfare. Dr. Taylor is also using ZooMS and ancient DNA to distinguish different kinds of equid - donkeys, horses, and mules - among bones recovered from archaeological sites, and to understand how each of these domesticates was used by early indigenous societies in the Mississippi Region. To read more about Dr. Taylor's contributions to the initiative, see the summary page or read about the Horses and Donkeys project.
Ultimately, the majority of the anthropogenic ecological reshaping of the American Midwest has centered around farming, especially through intensive forms of maize cultivation. Modern varieties of maize grown in the Midwest are mostly GMO and hybrid, requiring heavy water, fertilizer, herbicide, and pesticide inputs. However, these are not the first varieties of maize grown in North America. The topics of 1) when maize arrived in different regions and 2) how long after its introduction it took people to start intensively cultivating it have received considerable scholarly attention. The switch to intensive maize cultivation eventually facilitated the loss of the Lost Crops and may have fueled demographic and social expansions, as seen in archaeological remains at sites across the Midwest, such as Cahokia (Figure 1). Dr. Fernandes is running the IsoMaize project as a partner project of his IsoMemo initiative.
Dr. Fernandes is collaborating with other scholars at the MPI for the Science of Human History and at universities in America to isotopically trace the ancient spread and intensification of maize cultivation across North America. A more detailed discussion of IsoMaize can be found here.
60% of global land and species loss is down to meat-based diets
Climate change and conflict lead to a rise in global hunger
Fruit, nuts and vegetables need bees, and the bees are dying
There is concern around how food companies are treating farm animals
Farm Animal Welfare: A benchmark measuring companies' performance.
Save the bees: A Greenpeace campaign.
Conflict, climate change and hunger: See how climate change leads to serious conflicts and hunger.
Vast animal-feed crops to satisfy our meat needs are destroying the planet: Overproduction of meat and animal feed is destroying whole species of animal and the environment.
10 things you need to know about sustainable agriculture - The Guardian: An article summarising ten important points about sustainable agriculture from a panel of industry experts.
Feast or Famine: Business and Insurance Implications of Food Safety and Security - Lloyd's: A report on food insecurity and the role that insurance can play in risk mitigation and management by Lloyd's, an international specialist insurance marketplace.
Sustainable Agriculture in the UK - Farming and Countryside Education: A review of sustainable agriculture in the UK by Farming and Countryside Education, a charity providing education to children and young people about sustainable farming.
The Living Planet Report 2014 – World Wide Fund for Nature (WWF): A report analysing the impact of human activity on the health of the planet by the WWF, an international non-governmental organisation working to conserve, research and restore the environment.
How to Find and Understand Your Animal Spirit Guides
From the introduction: "Wild animals are reaching out to connect with us all of the time. You can benefit personally from exploring a relationship with spirit animals in a multitude of ways. Learning your spirit animal can change the way you look at yourself by bringing you a great sense of confidence and empowerment. You may have already started to notice that certain animals keep reappearing in your life. This is called synchronicity and this means that the spirits are talking to you. Congratulations! Now let's learn what's next!"
Interested in finding the answer to the question, "What is my spirit animal?" In this free and comprehensive guide, not only will you be given instructions on how to find your spirit animal, you will also learn the following:
- The definition of a spirit animal
- Common myths and fears surrounding spirit animals
- How to find spirit animal meanings and interpretations
- Where to look up a lot more information about your spirit animals online
- Tips for how to create your own spirit animal reading
Throughout the book is also a collection of meanings of different animals such as rabbits, magpies, ravens, and more. Instead of having an online quiz tell you what your spirit animal is, how about using your own intuition and know-how to discover your spirit animal yourself? This guide will lead the way.
Two recent articles have got my blood racing and the excitement has led to this post. Both relate to citizen science, a concept that involves the common man in science and makes big science accessible to everyone. And how does citizen science work?
One example is the Ardusat satellites (and such like), which are tiny satellites on which time can be hired by the high school student, the amateur astronomer, the layman virtually! These carry simple equipment like temperature sensors, Geiger counters, digital cameras and the like. For costs of $35-45 per day, people can hire time (in blocks of a few days) on these to perform experiments in space, take photographs of Earth/celestial objects from space…and much more. These firms, such as NanoSatisfi, mostly raised money through crowdsourcing but have made it possible for everyone to work with a slice of space!
The second example is just as exciting because the possibilities are endless! The GalaxyZoo project launched in 2007 by Chris Lintott and Kevin Schawinski asked volunteers to classify galaxies as spiral or other shapes. By their estimation, the data they had collected from the Sloan Digital Sky Survey, about a million images of galaxies, would take years to sort through. Machine algorithms are still not as efficient as humans at recognising shapes. They reckoned that they would get 50-odd volunteers and finish the work in a year and a half. Instead thousands of volunteers from all over the world trawled through the data in 3 weeks!
The amazing thing about this project is that it allows the average Joe and Jill to do big science, to connect with big projects and be part of the romance of Science even if they are not professional scientists. It cuts across age, race, culture, gender, profession… it brings people together in their love and wonder of the natural world.
So now the question is: can we design other studies and experiments using this concept of citizen science and solve big problems, not just in astronomy but in all disciplines (projects that take forward the zooniverse principle)? Imagine the power of harnessing the talent and effort of hundreds of thousands of people who enjoy the discipline (even if they are amateurs). This diversity in thought and experience enriches the work so much, while sparking interest and a common sense of purpose amongst so many people. Countries like China, India, Brazil (and others) would do especially well to engage the millions of people who could contribute, and maybe this can even help bring down the costs of certain kinds of research! This much I know: I am itching to create a project like this of my own!
Understanding Performance in Human Development: A Cross-National Study
This paper introduces a new and comprehensive Human Development Index (HDI) trends dataset for 135 countries and 40 years of annual data. We apply this dataset to answer several empirical questions related to the evolution of human development over the last 40 years. The data reveal overall global improvements, yet significant variability across all regions. While we confirm the existence of continued divergence in per capita income, we find the inverse for HDI. We find no statistically significant correlation between growth and non-income HDI improvements over a forty-year period. We also examine some basic correlates that are associated with countries' performance in HDI.
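For readers unfamiliar with how the index is built: since the 2010 Human Development Report the HDI has been the geometric mean of three dimension indices (health, education and income), each scaled between fixed goalposts. The Python sketch below only illustrates that aggregation; the goalpost values shown are assumptions for illustration and have changed across UNDP reports, so they are not necessarily the ones used in the paper's dataset.

```python
import math

def dimension_index(value, minimum, maximum):
    """Scale an indicator onto [0, 1] between fixed goalposts."""
    return (value - minimum) / (maximum - minimum)

def hdi(life_expectancy, education_index, gni_per_capita,
        life_goalposts=(20, 85), income_goalposts=(100, 75000)):
    """Geometric mean of the three dimension indices (post-2010 formulation).
    education_index is itself an average of schooling indicators, taken here as given."""
    health = dimension_index(life_expectancy, *life_goalposts)
    # Income is compressed with a logarithm to reflect diminishing returns.
    income = dimension_index(math.log(gni_per_capita),
                             math.log(income_goalposts[0]),
                             math.log(income_goalposts[1]))
    return (health * education_index * income) ** (1 / 3)

print(round(hdi(72.0, 0.65, 12000), 3))  # illustrative inputs only
```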
Round each value to 1 significant figure, then perform the calculation (without a calculator).
Applets: Begin by rounding to one significant figure, then adjust your answer. Aim for less than 10% error (good), or less than 5% error (excellent!). Two practice sets are provided: "Up to 100 multiplied by 100" and "A little more difficult".
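The estimation strategy is easy to express in code. The Python sketch below rounds each factor to one significant figure, multiplies, and reports the percentage error against the exact product; the helper names and sample numbers are just an illustration.

```python
import math

def round_to_1_sig_fig(x):
    """Round a non-zero number to one significant figure."""
    exponent = math.floor(math.log10(abs(x)))
    return round(x, -exponent)

def estimate_product(a, b):
    exact = a * b
    estimate = round_to_1_sig_fig(a) * round_to_1_sig_fig(b)
    error_pct = abs(estimate - exact) / exact * 100
    return estimate, exact, error_pct

est, exact, err = estimate_product(48, 73)   # 50 * 70 = 3500 versus 3504
print(f"estimate {est:g}, exact {exact}, error {err:.1f}%")
```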
This section focuses on the medicinal chemistry of beta-lactam antibiotics; the second part of our series on the medicinal chemistry of antibacterial compounds. Penicillin derivatives, cephalosporins, monobactams and carbapenems all belong to this popular class of drugs. A four-membered lactam ring, known as a β-lactam ring, is a common structural feature of this class (see below). To this day, the pharmacology of beta-lactam antibiotics has clearly bore out an excellent safety and efficacy profile. Most of these medicines work by interfering with bacterial cell wall synthesis; the cell wall being an optimum drug target because it is something that bacterial cells possess, but not human cells. Penicillin and its Derivatives Penicillin consists of a fused a β-lactam ring and a thiazolidine ring; part of the heterocyclic bicyclic system is the β-lactam ring. The bicyclic system confers greater ring strain on the β-lactam ring, an aspect important for activity. An amide and a carboxylic acid group are also present. The carboxylic acid group is a possible site of modification to make prodrugs. Also – note the stereochemistry of the acylamino side chain with respect to the 4-membered ring and the cis stereochemistry for the hydrogen atoms highlighted in green. The key structural features of penicillins can be summarised as follows: - Fused β-lactam and thiazolidine ring forming a bicyclic system (Penam) - Free carboxylic acid - Acylamino side chain - Cis stereochemistry for the hydrogen Texts describing penicillins may appear to have conflicting numbering systems; as there are two different, widely used numbering systems. The USP assigns the nitrogen atom at number 1 and the sulfur atom at number 4. In contrast, the Chemical Abstracts system assigns sulfur as number 1 and the nitrogen as number 4. Keep these differing numbering systems in mind when reviewing texts on beta-lactam medicinal chemistry. Chemical Properties & Reactions Penicillin’s overall shape is similar to a half-open book. As we talked about earlier, the bicyclic ring system has large, torsional strain and angle strain. Unlike typical tertiary amides, the carbonyl group of the strained four-membered ring is very reactive and susceptible to nucleophilic attack. Think about amide resonance from introductory organic chemistry and its effect on amide reactivity. In the case of the β-lactam ring, amide resonance is diminished. For steric reasons the bonds to the nitrogen cannot be planar; the opening of the four-membered ring relieves strain. Penicillins can react with amines to form inactive amides. This has implications on co-administration and formulation. Activity – Can you draw a reaction mechanism for the scheme shown below? Penicillins are also generally susceptible to hydrolysis under alkaline conditions. Alkaline hydrolysis can be catalysed by the presence of metal ions such as Cu2+.The resulting hydrolysis products do not possess antibacterial activity; this is valuable knowledge for the storage, analysis, and processing of these medicinal chemistry compounds. Penicillins also tend to be sensitive to acids (see reaction scheme below). Penillic acid is the major product of acid degradation. In the stomach where conditions are acidic, the drug breaks down. Acidic conditions must also be avoided during production and analysis. Penicillins, such as phenoxymethylpenicillin (Penicillin V), have enhanced acid stability as they have electron withdrawing R groups. 
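As a quick computational aside, the core features described above (the fused bicyclic penam, the β-lactam carbonyl and the free carboxylic acid) can be inspected with an open-source cheminformatics toolkit such as RDKit. The sketch below is only illustrative: the SMILES string is intended to represent the 6-aminopenicillanic acid skeleton with stereochemistry omitted, and should be checked against a structure database before being relied upon.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

# 6-aminopenicillanic acid (6-APA) skeleton, stereochemistry omitted (assumption).
six_apa = Chem.MolFromSmiles("O=C1C(N)C2SC(C)(C)C(C(=O)O)N12")

beta_lactam = Chem.MolFromSmarts("O=C1CCN1")           # strained four-membered amide
carboxylic_acid = Chem.MolFromSmarts("C(=O)[OX2H1]")   # free acid, a prodrug handle

print(rdMolDescriptors.CalcMolFormula(six_apa))        # expected: C8H12N2O3S
print(round(Descriptors.MolWt(six_apa), 1))
print(six_apa.HasSubstructMatch(beta_lactam))          # True: β-lactam ring present
print(six_apa.HasSubstructMatch(carboxylic_acid))      # True: free carboxylic acid
```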
Like many other drugs, penicillins face enzyme-catalysed degradation in vivo. Amidases catalyse the conversion of the C6 amide to an amine. Amidases are useful in industry for the production of 6-Aminopenicillanic acid (6-APA). This compound is used a precursor for many semisynthetic penicillins. Drug resistance is also a growing problem.β-lactamases (beta lactamases) are mainly responsible for this. β-lactamases are serine protease enzymes that act against the β-lactam drugs through a similar mechanism as the transpeptidase enzyme; the bacterial enzyme targeted by penicillin. The mechanism of action will be shown later. β-lactamase inhibitors such as clavulanic acid are given to patients in combination with penicillins such as amoxicillin. The use of clavulanic acid in penicillin formulations allows for a reduction in dosage. Furthermore, the spectrum of activity is also improved. Note that there is no single β-lactamase enzyme. Clavulanic acid does not inhibit all β-lactamase enzymes. - Augmentin®: Amoxicillin + clavulanic acid - Timentin®: Ticarcillin + clavulanic acid Tazobactam and sulbactam are examples of β-lactamase inhibitors that contain the β-lactam ring. Avibactam is an example of a β-lactamase inhibitor that does not have the β-lactam ring in its structure. Now lets turn our attention to the synthesis of semisynthetic derivatives. Synthesis of Semisynthetic Derivatives We briefly mentioned 6-APA earlier. 6-APA is acquired through enzymatic hydrolysis of Penicillin G or Penicillin V, or through traditional organic chemistry. Reaction with an acid chloride at the C6 primary amine allows the synthesis of many semisynthetic penicillin derivatives (try to draw the reaction mechanism as practice!). The free carboxylic acid is, though, a possible site of modification. Ester prodrugs such as pivampicillin were developed to improve the pharmacokinetic properties of their parent drug. Pivampicillin is a pivaloyloxymethyl ester prodrug of ampicillin. We emphasized the cis stereochemistry of the H5-H6 protons at the start of this beta-lactam review. Generally speaking, the H5-H6 coupling constant (1H NMR) is in the range of 4-5 Hz. 13C NMR studies and DEPT experiments would help distinguish between CH, CH2 and CH3. When studying the IR spectra of penicillins, one must be on the lookout for the carbonyl stretches – particularly for the characteristic β-lactam carbonyl stretch at around 1770-1790 cm-1. Depending on which penicillin is being studied, the side chain would give characteristic NMR and IR signals.The predicted 1H NMR spectrum of ampicillin is shown below. Note that the spectrum below is merely an estimate. Can you explain why the benzylic protons appear shifted downfield than expected? Mechanism of Action Recall the structure of bacteria from microbiology. The bacterial cell wall is needed by most bacteria in order to survive. Gram-positive bacteria possess a thick peptidoglycan layer in the cell wall and an inner cell membrane. Gram-negative bacteria, on the other hand, possess an outer membrane and an inner cell membrane. Porins are present in the outer membrane of Gram-negative bacteria. The much thinner peptidoglycan layer of Gram-negative bacteria is found between these two membranes. The significance of this outer membrane in drug design of broad-spectrum penicillins is assessed later. Peptidoglycan (or murein) consists of sugar and amino acid units. This mesh-like polymeric layer outside the inner cell membrane forms the cell wall of bacteria. 
The sugars N-acetylglucosamine (GlcNAc or NAG) and N-acetylmuramic acid (MurNAc or NAM) alternate, and are connected through a β-(1,4)-glycosidic bond. An amino acid chain is found in each NAM. D-amino acids are also found in these amino acid chains. The sugars are cross-linked via these peptides; cross-linking adds structural integrity to the bacterial cell wall. Transpeptidase enzymes are bacterial enzymes responsible for the formation of these crosslinks. Penicillins act by interfering with the cross-linking of peptidoglycan by inhibiting the transpeptidase enzyme. Without an intact cell wall, bacteria are generally unable to survive. Thus, penicillins are bactericidal in effect.
Through the synthesis and studies of many semisynthetic penicillins, the following conclusions were reached:
- Cis stereochemistry of H5 and H6 is essential
- The bicyclic ring is very important
- The free carboxylate is essential
- The acylamino side chain is necessary
Variation is mostly limited to the R group of the amide, and, as mentioned earlier, prodrugs have been developed by modifying the carboxylate group. So far, we've reviewed the following structural modifications:
- Enhancing acid stability by using electron-withdrawing R groups
- Converting the carboxylate functional group to an ester to give a prodrug
We will now consider several other structural modifications.
Other Structural Modifications
Gram-negative bacteria possess an outer lipopolysaccharide membrane which surrounds the thinner cell wall. The outer membrane serves as a protective layer against compounds that may pose harm to the bacteria. This partly explains why Gram-negative bacteria are generally resistant to antibacterial compounds. As examined earlier, porins are proteins present in the outer membrane. Water and essential nutrients can pass through these proteins. Small drugs can also pass through porins. The ability of drugs to pass through porins is dependent on their size, structure and charge. Generally speaking, molecules that are large, anionic and hydrophobic are unable to pass through porins. On the other hand, molecules that are small, zwitterionic and hydrophilic tend to pass through easily. During the search for broad-spectrum penicillins through variation of the acylamino side chain, the following conclusions were reached:
- Hydrophobic groups tend to enhance activity against Gram-positive bacteria
- Hydrophilic groups generally increase activity against Gram-negative bacteria
- Attachment of a hydrophilic group at Cα appears to improve activity against Gram-negative bacteria
Broad-spectrum antibiotics such as amoxicillin and ampicillin both have -NH2 groups attached to Cα (as shown below); both compounds are orally active. As well as this, the presence of the electron-withdrawing amino group increases acid stability. Ampicillin is poorly absorbed by the gut due to ionisation of both the amino and carboxylic groups. Oral absorption of amoxicillin is, in contrast, much higher. Modifications, such as changing the carboxylic acid to esters, were made at the carboxylic group to alleviate this problem. Pivampicillin and bacampicillin are examples of ester prodrugs of ampicillin. The prodrugs undergo metabolism in the body to give ampicillin. Can you remember the names of the enzymes involved in the ester hydrolysis of prodrugs?
The ureidopenicillins are broad-spectrum penicillins typically used parenterally, and are active against Pseudomonas aeruginosa. As the name suggests, a urea group is present in the molecule.
The urea group is situated at Cα. Azlocillin and piperacillin are examples of ureidopenicillins.
β-lactamases gave bacteria resistance to the traditional penicillins, driving the need for β-lactamase-resistant penicillins. One strategy involves designing penicillins with steric shields to resist β-lactamases. By placing a bulky group on the acylamino side chain, degradation of the drug by β-lactamases is minimized. However, if a steric shield is too bulky, the penicillin is not able to bind to transpeptidase. Methicillin is an example of a penicillin with a bulky group. This semi-synthetic penicillin possesses a dimethoxybenzene R group; both methoxy groups of the benzene are at the ortho position. Nafcillin possesses a naphthalene ring in its acylamino side chain which acts as a steric shield. Flucloxacillin contains a bulky and electron-withdrawing heterocyclic acylamino side chain. Thus, flucloxacillin is an acid-resistant, narrow-spectrum, β-lactamase-resistant penicillin.
- Penicillins are bactericidal beta-lactam antibiotics
- Penicillin's core structure consists of a fused β-lactam ring and a thiazolidine ring
- The bicyclic system is highly strained
- Modifications can also be made at the acylamino side chain
- Cis stereochemistry of H5 and H6 is essential
- 6-Aminopenicillanic acid (6-APA) is mainly used as a precursor for semisynthetic penicillin drugs
- The carboxylic acid group can be modified to give ester prodrugs
- Attach electron-withdrawing groups at the amide to enhance acid stability
- Hydrophilic groups at Cα of the acylamino side chain improve the spectrum of activity
- Steric shields at the acylamino side chain generally improve resistance to β-lactamase enzymes
Cephalosporins
Cephalosporin C was the first cephalosporin, discovered from a fungus obtained from Sardinian sewer waters during the mid-1940s. Just like penicillins, Cephalosporin C has a bicyclic system made up of a β-lactam ring fused with a sulfur heterocycle, which, in this case, is the dihydrothiazine ring. The side chain is referred to as the aminoadipic side chain. The acetoxy group of Cephalosporin C is a key feature, examined in more detail later. Cephalosporin C is less potent compared to penicillins, but this compound has many advantageous properties. Cephalosporin C is more acid-resistant and has a better spectrum of activity, for example. Moreover, the likelihood of causing allergic reactions is considerably less. As a result, cephalosporin C became a useful lead compound for the development of better, more clinically robust antibiotics.
Chemical Properties & Reactivity
Compared to the penicillins, the ring system strain is not as great, but, like the β-lactam carbonyl of penicillins, the β-lactam carbonyl group of Cephalosporin C is also reactive for similar reasons. Diminished amide resonance and ring strain confer reactivity. The four-membered ring is also susceptible to nucleophilic attack. Cephalosporin C inhibits the transpeptidase enzyme through the mechanism shown below. A serine residue is involved. As shown below, the acetoxy group serves as a leaving group.
Synthesis of Semisynthetic Derivatives
7-aminocephalosporanic acid (7-ACA) is used as the precursor of many cephalosporins. Unlike 6-APA, which can be acquired from enzymatic hydrolysis of certain penicillins, 7-ACA cannot be acquired by enzymatic hydrolysis of Cephalosporin C. 7-ACA is produced by chemical hydrolysis of Cephalosporin C.
Due to the presence of the reactive β-lactam ring, a special method of chemical hydrolysis was devised. Can you draw a mechanism for the formation of the imino chloride? Cephalosporin analogues may be formed by reacting 7-ACA with acid chlorides.
- The β-lactam ring is crucial for activity
- The bicyclic ring system is important in increasing ring strain
- The cis stereochemistry at the positions highlighted in green is important
- Other groups may be substituted for the acetoxy group and may or may not serve as good leaving groups. The nature of the leaving group is important for activity. Better leaving groups tend to give cephalosporin C analogues with better activity.
- The acylamino side chain may be altered
- Sites of possible modifications are highlighted in red boxes.
First Generation Cephalosporins
Cefalexin, cephalothin (cefalotin), cephaloglycin, and cephaloridine are examples of first-generation cephalosporins. The methyl group in cefalexin is a poor leaving group, which is bad for activity. However, the use of a methyl group appears to improve absorption. Cefalexin may be synthesized through an acid-catalysed ring expansion of a penicillin. Cephalothin has an acetoxy group as a leaving group and a thiophen-2-ylacetyl group in its acylamino side chain. Despite being a good leaving group, the acetoxy moiety is susceptible to enzyme-catalysed hydrolysis. Cephaloglycin's acylamino side chain is the same as that of ampicillin. Cephaloridine has pyridinium as a leaving group, giving pyridine. Unlike the acetoxy group, the pyridinium is stable to metabolism. First-generation cephalosporins share the following features:
- In general, their activity is comparatively lower than that of penicillins, but they possess a broader spectrum of activity
- Apart from the methyl-substituted cephalosporins, gut wall absorption is poor
- Most of the first-generation cephalosporins are administered by injection
- Activity against Gram-positive bacteria is greater than against Gram-negative bacteria
Second Generation Cephalosporins
Cefamandole, cefaclor, and cefuroxime are examples of second-generation cephalosporins. The second-generation agents have increased activity against Gram-negative species of bacteria such as Neisseria gonorrhoeae, while some have decreased Gram-positive activity. Many cephalosporins that fall under the second-generation group are also able to cross the blood-brain barrier. Cefuroxime is also an example of an oximinocephalosporin; the presence of the methoxyimino group appears to increase stability against certain β-lactamases.
Third Generation Cephalosporins
Cefdinir and ceftriaxone belong to the third generation of cephalosporins. During 2008, cefdinir was one of the highest-selling cephalosporins. Ceftriaxone is marketed by Hoffmann-La Roche under the trade name Rocephin® (known for its painful administration). Overall, third-generation cephalosporins are more stable to β-lactamase degradation and have even greater anti-Gram-negative activity. Like the previous generation, third-generation agents are able to cross the blood-brain barrier, making them useful against meningococci.
Fourth Generation Cephalosporins
The fourth-generation compounds are zwitterionic. The fourth-generation cephalosporins are not only better at traversing the outer membrane of Gram-negative bacteria, but these compounds also have activity against Gram-positive bacteria similar to that of first-generation cephalosporins. Moreover, β-lactamase resistance is also greater.
Like the 2nd and 3rd generations, many of the 4th generation can cross the blood-brain barrier. They are also used against Pseudomonas aeruginosa.
Fifth Generation Cephalosporins
Currently, members of the scientific community have not reached agreement with regard to the use of the term 'fifth-generation cephalosporins'. Fifth-generation compounds have demonstrable activity against MRSA. Ceftobiprole is often described as a fifth-generation cephalosporin. This compound possesses good anti-Pseudomonal activity. Ceftaroline fosamil is another example of a cephalosporin described as fifth-generation.
Cephamycins
Recall that one of the hydrogens emphasised earlier is a possible site of modification. The methoxy-substituted versions are referred to as cephamycins. A compound called cephamycin C can be isolated from Streptomyces clavuligerus. A urethane group is present instead of the acetoxy group, enhancing the compound's metabolic stability. Can you explain why? Derivatives may be synthesised from the methoxy-substituted analogue of 7-ACA, or through reactions with cephalosporins. Note that some refer to cephamycins as a separate class of antibacterial compounds altogether. The cephamycins appear to have greater resistance against β-lactamases. Cephalosporins and cephamycins are sometimes collectively referred to as cephems.
- Fused β-lactam and dihydrothiazine rings form a bicyclic system
- 7-aminocephalosporanic acid (7-ACA) is used as a precursor for many semisynthetic cephalosporins
- Structure-activity relationships are similar to those of penicillins
- The nature of the leaving group is important to activity
Generations of cephalosporins:
- 1st: Activity is comparatively lower than that of penicillins but with a broader spectrum of activity. Greater activity against Gram-positive organisms than Gram-negative organisms.
- 2nd: Have increased activity against Gram-negative bacteria, but some have a concomitant reduction of Gram-positive activity. Many can cross the blood-brain barrier.
- 3rd: Even better activity against Gram-negative bacteria. Some compounds have the same problem of decreased Gram-positive activity as the previous generation. They are also associated with improved β-lactamase resistance, and many can also cross the blood-brain barrier.
- 4th: Better Gram-negative activity and β-lactamase resistance, and many can also cross the blood-brain barrier. Gram-positive activity is similar to the 1st generation.
- 5th: Currently no universal agreement on its definition. Some drugs classified as 'fourth-generation' are classified as 'fifth-generation' and vice versa. Examples often include ceftobiprole, ceftaroline and ceftolozane. Ceftobiprole has potent anti-Pseudomonal activity and confers little resistance. Fifth-generation drugs also show activity against MRSA.
Monobactams
As the name suggests, the monobactams, such as aztreonam, are β-lactam compounds that are not fused to another ring. Monobactams exhibit moderate activity against certain Gram-negative bacteria in vitro, including Neisseria and Pseudomonas.
Carbapenems
The carbapenem class of β-lactam antibiotics has broad-spectrum activity and exhibits resistance to many β-lactamases. Carbapenems are used as antibiotics of last resort for infections by bacteria such as Escherichia coli and Klebsiella pneumoniae. Thienamycin is a carbapenem first discovered and isolated from Streptomyces cattleya in 1976. Thienamycin exhibits excellent activity against Gram-positive and Gram-negative bacteria and displays resistance to many β-lactamases.
Meropenem and doripenem are analogues of thienamycin and are examples of carbapenems currently in clinical use. Both compounds have been described as ultra-broad-spectrum. From the structure of the compounds shown, it is easy to see that the carbapenems have some structural features that penicillins do not. The double bond in the five-membered ring leads to high ring strain. A sulfur atom is also missing from the five-membered ring. The acylamino side chain is absent. Also note the trans stereochemistry of the hydrogens.
This concludes our review of the medicinal chemistry of beta-lactam antibiotics, a subject often featured on pharmacy, pharmaceutical chemistry and other medicine-related courses. It informs the reader of structure-activity relationships, the development of synthetic derivatives, and how medicinal chemistry relates to broader concepts such as formulation and dosage. The growing problem of resistance is an ever-present challenge for medicinal chemists, a biological mechanism that spurs on the engine of research in this field.
Total synthesis of thienamycin:
- J. Org. Chem., 1990, 55 (10), pp 3098–3103
Antibacterial resistance worldwide: causes, challenges and responses
- Nature Medicine, 2004, 10, S122–S129
* '''Published:''' 1st August 2005 * '''Publisher:''' Wizards of the Coast * '''Author:''' David Noonan, Rich Burlew * '''Format:''' 160 page hardback * '''Rules:''' D&D 3.5 Edition * '''Product:''' * [[wp>Explorer's Handbook|Wikipedia]] The ultimate sourcebook for players wishing to explore the world of Eberron. The Explorer’s Handbook showcases the multi-continental aspect of the Eberron setting. The chapter on travel discusses instantaneous and played out travel and provides deck plans for airships, the lightning rail, and galleons, plus other methods of conveyance. A chapter on Explorer’s Essentials offers information on travel papers, pre-assembled equipment kits, how to join the Wayfarers’ Foundation, and more. This handbook encourages players to explore the entire world rather than remain fixed in one region.
Historian Amanda Foreman, author of the bestselling Georgiana, Duchess of Devonshire, has written a new book, A World on Fire: Britain’s Crucial Role in the American Civil War. In an article for the Wall Street Journal‘s “Word Craft” column about her creative process, Foreman provided a valuable lesson for presenters: The fruit of my 11 years of research meant that I had more than 400 characters scattered over four regions … This vast mass of material was so unwieldy that I could hardly work my way through the first day of the conflict, let alone all four years. While few presenters spend 11 years developing their stories about their businesses, they, like Foreman, have a vast mass of unwieldy material that they have to communicate to various audiences. Unfortunately, most presenters then proceed to deliver that mass to their audiences as is, inflicting the dreaded effect known as MEGO, “My Eyes Glaze Over.” Although Foreman is a respected scholar with a doctorate in history from Oxford University, she has storytelling in her DNA. Her father was Carl Foreman, on Oscar-winning screenwriter who wrote the classic The Bridge on the River Kwai. At the end of her research, Amanda Foreman realized that, even for a story as immense and complex as the Civil War, she had too much information for both writer and reader to process. Her solution: I plotted the time lines of my 400 characters and identified and discarded people who, no matter how interesting their stories, had no connection to anyone else in the book. This winnowed my cast down to 197 characters, all bound to one another by acquaintance or one degree of separation. Foreman was tapping into a practice — well-known among professional writers — called “kill your darlings.” In fact, a community of writers in Atlanta has adopted that name for its website. The phrase is often attributed to novelist William Faulkner, but it was actually coined by Sir Arthur Quiller-Couch, a British writer and critic who, in his 1916 publication, On the Art of Writing, said: Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it — whole-heartedly — and delete it before sending your manuscript to press. Murder your darlings. The sentiment was echoed by Christopher Markus and Stephen McFeely, the screenwriters of Captain America, the current Hollywood action film based on the 70-year-old comic strip character. In another Wall Street Journal “Word Craft” article, the team wrote: Adapting an existing work for film is usually a process of reduction. Whether it’s a novel or a short story, a true-crime tale or 70 years’ worth of comic books, the first job is distillation. If this means losing someone’s favorite character, so be it. The simple fact is that we can’t put everything on the screen. Darlings must die. The phrase rings true because writers, who labor over their ideas and words like expectant mothers, invariably fall in love with their offspring and are reluctant to find fault, and even more reluctant to part with them. In the same manner, presenters who live, breathe, walk, and talk their businesses want to share every last detail about them with their prospective audiences. But audiences do not share their interest, and so presenters, like writers, must kill their darlings. In presentations, the process begins by assembling all your story elements. A chef prepares for a meal by gathering all the ingredients, seasonings, and utensils, but doesn’t use every last one of them. 
Once you have assembled all your presentation ingredients, assess every item for its relevance and importance to your audience — not to you. Your audience cannot possibly know your subject as well as you do, and so they do not need to know all that you do. Tell them the time, not how to build a clock. Delete, discard, omit, slice, dice, or whatever surgical method you chose to eliminate excess baggage. Be merciless. Retain only what your audience needs to know. Once you have made that first cut, make another pass, and then another. Each time you do, you will see your draft with fresh eyes and find another candidate for your scalpel. Follow the advice of the classic Strunk and White’s The Elements of Style: “It is always a good idea to reread your writing later and ruthlessly delete the excess.” Bestselling horror novelist Stephen King — who knows a thing or two about ruthless killing — follows a similar practice. In his 2000 book On Writing, he shared a note his editor once sent to him: You need to revise for length. Formula: 2nd Draft = 1st Draft – 10%. Deal with your vast mass of unwieldy material in your preparation, not in your presentation; behind the scenes, not in front of the room. A gentler way of saying “kill your darlings” is, “when in doubt, leave it out.” A footnote: Amazon lists Amanda Foreman’s new book at 1,008 pages. Imagine how many more pages it would have run had she not killed those 203 characters.
This document describes topics related to BTRFS that are not specific to the tools.
Mount options
This section describes mount options specific to BTRFS. For the generic mount options please refer to the mount(8) manpage. The options are sorted alphabetically (discarding the no prefix).
Most mount options apply to the whole filesystem, and only the options in the first mounted subvolume will take effect. This is due to lack of implementation and may change in the future. This means that (for example) you can't set per-subvolume nodatacow, nodatasum, or compress using mount options. This should eventually be fixed, but it has proved to be difficult to implement correctly within the Linux VFS framework. Mount options are processed in order; only the last occurrence of an option takes effect, and it may disable other options due to constraints (see e.g. nodatacow and compress). The output of the mount command shows which options have been applied.
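Since the applied options can also be read back from the kernel, a small script can confirm what actually took effect after mounting. The Python sketch below simply parses /proc/self/mounts and prints the option string for every btrfs mount; it is an illustration, not part of the btrfs tooling.

```python
def btrfs_mount_options(mounts_file="/proc/self/mounts"):
    """Return {mount point: [options]} for every mounted btrfs filesystem."""
    result = {}
    with open(mounts_file) as f:
        for line in f:
            # Fields: device, mount point, fstype, options, ...
            # Note: paths containing spaces appear octal-escaped in this file.
            device, mount_point, fstype, options = line.split()[:4]
            if fstype == "btrfs":
                result[mount_point] = options.split(",")
    return result

for mount_point, options in btrfs_mount_options().items():
    print(mount_point, " ".join(options))
```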
Higher interval values lead to a larger amount of unwritten data, which has obvious consequences when the system crashes. The upper bound is not forced, but a warning is printed if it's more than 300 seconds (5 minutes). Use with care.

compress, compress=type[:level], compress-force, compress-force=type[:level] Control BTRFS file data compression. Type may be specified as zlib, lzo, zstd or no (for no compression, used for remounting). If no type is specified, zlib is used. If compress-force is specified, then compression will always be attempted, but the data may end up uncompressed if the compression would make them larger. Both zlib and zstd (since version 5.1) expose the compression level as a tunable knob, with higher levels trading speed and memory (zstd) for higher compression ratios. This can be set by appending a colon and the desired level. Zlib accepts the range [1, 9] and zstd accepts [1, 15]. If no level is set, both currently use a default level of 3. The value 0 is an alias for the default level. Without the force variant, some simple heuristics are applied to detect an incompressible file: if the first blocks written to a file are not compressible, the whole file is permanently marked to skip compression. As this is too simple, compress-force is a workaround that will compress most of the files at the cost of some wasted CPU cycles on failed attempts. Since kernel 4.15, the heuristic algorithms have been improved by using frequency sampling, repeated pattern detection and Shannon entropy calculation to avoid that.

datacow, nodatacow Enable data copy-on-write for newly created files. Nodatacow implies nodatasum and disables compression. All files created under nodatacow also have the NOCOW file attribute set (see chattr(1)).

datasum, nodatasum Enable data checksumming for newly created files. Datasum implies datacow, i.e. the normal mode of operation. All files created under nodatasum inherit the "no checksums" property, however there's no corresponding file attribute (see chattr(1)).

degraded Allow mounts with fewer devices than the RAID profile constraints require. A read-write mount (or remount) may fail when there are too many devices missing, for example if a stripe member is completely missing from RAID0. Since 4.14, the constraint checks have been improved and are verified on the chunk level, not at the device level. This allows degraded mounts of filesystems with mixed RAID profiles for data and metadata, even if the device number constraints would not be satisfied for some of the profiles. Example: metadata: raid1, data: single, devices: /dev/sda and /dev/sdb. Suppose the data are completely stored on sda; then a missing sdb will not prevent the mount, even though 1 missing device would normally prevent any single profile from mounting. In case some of the data chunks are stored on sdb, the constraint of single/data is not satisfied and the filesystem cannot be mounted.

discard, discard=sync, discard=async, nodiscard Enable discarding of freed file blocks. This is useful for SSD devices, thinly provisioned LUNs, or virtual machine images; however, every storage layer must support discard for it to work. In the synchronous mode (sync, or without an option value), lack of asynchronous queued TRIM on the backing device can severely degrade performance, because a synchronous TRIM operation will be attempted instead. Queued TRIM requires chipsets and devices newer than SATA revision 3.1. The asynchronous mode (async) gathers extents in larger chunks before sending them to the devices for TRIM.
The overhead and performance impact should be negligible compared to the previous mode, and it's supposed to be the preferred mode if discard is needed at all. If it is not necessary to immediately discard freed blocks, then the fstrim tool can be used to discard all free blocks in a batch. Scheduling a TRIM during a period of low system activity will prevent latent interference with the performance of other operations. Also, a device may ignore the TRIM command if the range is too small, so running a batch discard has a greater probability of actually discarding the blocks.

enospc_debug, noenospc_debug Enable verbose output for some ENOSPC conditions. It's safe to use but can be noisy if the system reaches a near-full state.

fatal_errors=action Action to take when encountering a fatal error.

flushoncommit, noflushoncommit This option forces any data dirtied by a write in a prior transaction to commit as part of the current commit, effectively a full filesystem sync. This makes the committed state a fully consistent view of the file system from the application's perspective (i.e. it includes all completed file system operations). This was previously the behavior only when a snapshot was created. When off, the filesystem is consistent but buffered writes may last more than one transaction commit.

fragment=type A debugging helper to intentionally fragment the given type of block groups. The type can be data, metadata or all. This mount option should not be used outside of debugging environments and is not recognized if the kernel config option BTRFS_DEBUG is not enabled.

inode_cache, noinode_cache Enable free inode number caching. Not recommended unless files on your filesystem get assigned inode numbers that are approaching 2^64. Normally, new files in each subvolume get assigned incrementally (plus one from the last time) and are not reused. The mount option turns on caching of the existing inode numbers and reuse of inode numbers of deleted files. This option may slow down your system at first run, or after mounting without the option.

logreplay, nologreplay The tree-log contains pending updates to the filesystem until the full commit. The log is replayed on the next mount; this can be disabled by this option. See also treelog. Note that nologreplay is the same as norecovery.

max_inline=bytes Specify the maximum amount of space that can be inlined in a metadata B-tree leaf. The value is specified in bytes, optionally with a K suffix (case insensitive). In practice, this value is limited by the filesystem block size (named sectorsize at mkfs time) and the memory page size of the system. In the case of the sectorsize limit, there's some space unavailable due to leaf headers. For example, with a 4k sectorsize, the maximum size of inline data is about 3900 bytes. Inlining can be completely turned off by specifying 0. This will increase data block slack if file sizes are much smaller than the block size but will reduce metadata consumption in return.

metadata_ratio=value Specifies that 1 metadata chunk should be allocated after every value data chunks. The default behaviour depends on internal logic: some percentage of unused metadata space is attempted to be maintained, but this is not always possible if there's not enough space left for chunk allocation. The option can be useful to override the internal logic in favor of metadata allocation if the expected workload is metadata intensive (snapshots, reflinks, xattrs, inlined files).

norecovery Do not attempt any data recovery at mount time. This will disable logreplay and avoid other write operations. Note that this option is the same as nologreplay.

rescan_uuid_tree Force a check and rebuild of the UUID tree. This should not normally be needed.
skip_balance Skip automatic resume of an interrupted balance operation. The operation can later be resumed with btrfs balance resume, or the paused state can be removed with btrfs balance cancel. The default behaviour is to resume an interrupted balance immediately after a volume is mounted.

space_cache, space_cache=version, nospace_cache Options to control the free space cache. The free space cache greatly improves performance when reading block group free space into memory. However, managing the space cache consumes some resources, including a small amount of disk space. There are two implementations of the free space cache. The original one, referred to as v1, is the safe default. The v1 space cache can be disabled at mount time with nospace_cache without clearing. On very large filesystems (many terabytes) and certain workloads, the performance of the v1 space cache may degrade drastically. The v2 implementation, which adds a new B-tree called the free space tree, addresses this issue. Once enabled, the v2 space cache will always be used and cannot be disabled unless it is cleared. Use clear_cache,space_cache=v1 or clear_cache,nospace_cache to do so. If v2 is enabled, kernels without v2 support will only be able to mount the filesystem in read-only mode. The btrfs(8) command currently only has read-only support for v2. A read-write command may be run on a v2 filesystem by clearing the cache, running the command, and then remounting with space_cache=v2. If a version is not explicitly specified, the default implementation will be chosen, which is v1.

ssd, ssd_spread, nossd, nossd_spread Options to control SSD allocation schemes. By default, BTRFS will enable or disable SSD optimizations depending on the status of the device with respect to rotational or non-rotational type, as determined by the contents of /sys/block/DEV/queue/rotational. If it is 0, the ssd option is turned on. The option nossd will disable the autodetection. The optimizations make use of the absence of the seek penalty that is inherent to rotational devices; blocks can typically be written faster and are not offloaded to separate threads.

thread_pool=number The number of worker threads to start. NRCPUS is the number of on-line CPUs detected at the time of mount. A small number leads to less parallelism in processing data and metadata; higher numbers could lead to a performance hit due to increased locking contention, process scheduling, cache-line bouncing or costly data transfers between local CPU memories.

treelog, notreelog Enable the tree logging used for fsync and O_SYNC writes. The tree log stores changes without the need of a full filesystem sync. The log operations are flushed at sync and transaction commit. If the system crashes between two such syncs, the pending tree log operations are replayed during mount.

usebackuproot Enable autorecovery attempts if a bad tree root is found at mount time. Currently this scans a backup list of several previous tree roots and tries to use the first readable one. This can be used with read-only mounts as well.

user_subvol_rm_allowed Allow subvolumes to be deleted by their respective owner. Otherwise, only the root user can do that.

The following is a list of mount options that have been removed, kept for backward compatibility.

alloc_start=bytes Debugging option to force all block allocations above a certain byte threshold on each block device. The value is specified in bytes, optionally with a K, M, or G suffix (case insensitive).

subvolrootid=objectid A workaround option from the times (pre 3.2) when it was not possible to mount a subvolume that did not reside directly under the toplevel subvolume.
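As a quick illustration of how several of the options above combine on the command line, assuming a hypothetical device /dev/sdb1 and mount point /mnt (placeholder names), a typical mount and a later remount might look like:

# mount -o noatime,compress=zstd:3,space_cache=v2 /dev/sdb1 /mnt
# mount -o remount,compress=no /mnt

Because options are processed in order and only the last occurrence of a given option takes effect, the remount here simply overrides the compression setting chosen earlier.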
Some of the general mount options from mount(8) also affect BTRFS and are worth mentioning. Note that noatime may break applications that rely on atime updates, like the venerable Mutt (unless you use maildir mailboxes).

The basic set of filesystem features gets extended over time. Backward compatibility is maintained and the features are optional; they need to be explicitly asked for, so accidental use will not create incompatibilities. There are several classes, and the respective tools to manage the features: at mkfs time only; after mkfs, on an unmounted filesystem; after mkfs, on a mounted filesystem. Whether a particular feature can be turned on for a mounted filesystem can be found in the directory /sys/fs/btrfs/features/, one file per feature. The value 1 means the feature can be enabled.

List of features (see also mkfs.btrfs(8) section FILESYSTEM FEATURES):
- big_metadata: the filesystem uses nodesize for metadata blocks, which can be bigger than the page size
- compress_lzo: the lzo compression has been used on the filesystem, either as a mount option or via btrfs filesystem defrag
- compress_zstd: the zstd compression has been used on the filesystem, either as a mount option or via btrfs filesystem defrag
- default_subvol: the default subvolume has been set on the filesystem
- extended_iref: increased hardlink limit per file in a directory to 65536; older kernels supported a varying number of hardlinks depending on the sum of all file name sizes that can be stored into one metadata block
- free_space_tree: free space representation using a dedicated b-tree, successor of the v1 space cache
- metadata_uuid: the main filesystem UUID is the metadata_uuid, which stores the new UUID only in the superblock while all metadata blocks still have the UUID set at mkfs time; see btrfstune(8) for more
- mixed_backref: the last major disk format change, improved backreferences, now the default
- mixed_groups: mixed data and metadata block groups, i.e. the data and metadata are not separated and occupy the same block groups; this mode is suitable for small volumes as there are no constraints on how the remaining space should be used (compared to the split mode, where empty metadata space cannot be used for data and vice versa); on the other hand, the final layout is quite unpredictable and possibly highly fragmented, which means worse performance
- no_holes: improved representation of file extents where holes are not explicitly stored as an extent; saves a few percent of metadata if sparse files are used
- raid1c34: extended RAID1 mode with copies on 3 or 4 devices respectively
- raid56: the filesystem contains or contained a raid56 profile of block groups
- rmdir_subvol: indicates that the rmdir(2) syscall can delete an empty subvolume just like an ordinary directory; note that this feature only depends on the kernel version
- skinny_metadata: reduced-size metadata for extent references, saves a few percent of metadata
- supported_checksums: list of checksum algorithms supported by the kernel module; the respective modules or built-ins implementing the algorithms need to be present to mount the filesystem

The swapfile is supported since kernel 5.0. Use swapon(8) to activate the swapfile. There are some limitations of the implementation in btrfs and the Linux swap subsystem. The limitations come namely from the COW-based design and the mapping layer of blocks that allows the advanced features like relocation and multi-device filesystems. However, the swap subsystem expects simpler mapping and no background changes of the file blocks once they've been attached to swap. With active swapfiles, certain whole-filesystem operations will skip swapfile extents or may fail.
When there are no active swapfiles and a whole-filesystem exclusive operation is running (i.e. balance, device delete, shrink), the swapfiles cannot be temporarily activated. The operation must finish first. A swapfile can be created, for example, like this:

# truncate -s 0 swapfile
# chattr +C swapfile
# fallocate -l 2G swapfile
# chmod 0600 swapfile
# mkswap swapfile
# swapon swapfile

There are several checksum algorithms supported. The default and backward compatible one is crc32c. Since kernel 5.5 there are three more, with different characteristics and trade-offs regarding speed and strength. The following list may help you decide which one to select: CRC32C (32-bit digest), XXHASH (64-bit digest), SHA256 (256-bit digest), BLAKE2b (256-bit digest). The digest size affects the overall size of data block checksums stored in the filesystem. The metadata blocks have a fixed area of up to 256 bits (32 bytes), so there's no increase there. Each data block has a separate checksum stored, with additional overhead in the b-tree leaves. Approximate relative performance of the algorithms has been measured against CRC32C using reference software implementations on a 3.5 GHz Intel CPU.

BTRFS has limits on the following quantities: maximum file name length; maximum symlink target length (the symlink target may not be a valid path, i.e. the path name components can exceed the limits (NAME_MAX); there's no content validation at symlink(3) creation); maximum number of inodes; maximum file length; maximum number of subvolumes; maximum number of hardlinks of a file in a directory.

GRUB2 (https://www.gnu.org/software/grub) has the most advanced support for booting from BTRFS with respect to features. U-boot (https://www.denx.de/wiki/U-Boot/) has decent support for booting, but not all BTRFS features are implemented; check the documentation. EXTLINUX (from the https://syslinux.org project) can boot but does not support all features. Please check the upstream documentation before you use it.

The btrfs filesystem supports setting file attributes or flags. Note there are old and new interfaces, with confusing names. Several of the attributes, when set on a directory, are inherited by all newly created files in it. No other attributes are supported. For the complete list please refer to the chattr(1) manual page. There's an overlap of the letters assigned to the bits with the attributes; the list refers to what xfs_io(8) provides.

There's a character special device /dev/btrfs-control with major and minor numbers 10 and 234 (the device can be found under the misc category).

$ ls -l /dev/btrfs-control
crw------- 1 root root 10, 234 Jan 1 12:00 /dev/btrfs-control

The device accepts some ioctl calls that can perform certain actions on the filesystem module. The device is usually created by a system device node manager (e.g. udev), but can be created manually:

# mknod --mode=600 /dev/btrfs-control c 10 234

The control device is not strictly required, but without it device scanning will not work and a workaround would be needed to mount a multi-device filesystem. The mount option device can trigger the device scanning during mount.

It is possible that a btrfs filesystem contains multiple block group profiles of the same type. This could happen when a profile conversion using balance filters is interrupted (see btrfs-balance(8)). Some btrfs commands perform a test to detect this kind of condition and print a warning like this:

WARNING: Multiple block group profiles detected, see 'man btrfs(5)'.
WARNING: Data: single, raid1
WARNING: Metadata: single, raid1

The corresponding output of btrfs filesystem df might look like:

WARNING: Multiple block group profiles detected, see 'man btrfs(5)'.
WARNING: Data: single, raid1
WARNING: Metadata: single, raid1
Data, RAID1: total=832.00MiB, used=0.00B
Data, single: total=1.63GiB, used=0.00B
System, single: total=4.00MiB, used=16.00KiB
Metadata, single: total=8.00MiB, used=112.00KiB
Metadata, RAID1: total=64.00MiB, used=32.00KiB
GlobalReserve, single: total=16.25MiB, used=0.00B

There's more than one line for the types Data and Metadata, while the profiles are single and RAID1. This state of the filesystem is OK, but it most likely needs the user/administrator to take action and finish the interrupted task. This cannot easily be done automatically; besides, only the user knows the expected final profiles. In the example above, the filesystem started as a single device with the single block group profile. Then another device was added, followed by a balance with convert=raid1 that for some reason hasn't finished. Restarting the balance with convert=raid1 will continue and end up with a filesystem where all block group profiles are RAID1. If you're familiar with balance filters, you can use convert=raid1,profiles=single,soft, which will take only the unconverted single profiles and convert them to raid1. This may speed up the conversion as it will not try to rewrite the already converted raid1 profiles. Having just one profile is desired, as this also clearly defines the profile of newly allocated block groups; otherwise this depends on internal allocation policy. When there are multiple profiles present, the order of selection is RAID6, RAID5, RAID10, RAID1, RAID0, as long as the device number constraints are satisfied. The commands that print the warning were chosen so that the condition is brought to the user's attention when the filesystem state is being changed in that regard. These are: device add, device delete, balance cancel, balance pause. Commands that report space usage: filesystem df, device usage. The command filesystem usage provides a line in the overall summary:

Multiple profiles: yes (data, metadata)
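As a minimal sketch of finishing such an interrupted conversion, assuming the filesystem is mounted at /mnt (a placeholder path) and raid1 is the desired target profile as in the example above, the balance could be restarted with:

# btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt

The soft filter tells the balance to skip chunks that already use the target profile, so only the remaining single block groups are rewritten.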
Innovative Thinker Through Human Intelligence (HI) Being exam smart or academically adept will not be sufficient for your child to succeed in an unpredictable future world. It is clear that machines are fast replacing humans in mundane, repetitive jobs at the workplace. Unlike machines, we can develop intelligence that is unique to humans. Human Intelligence (HI) is the cognitive capacity to learn from experiences, adapt to changing situations, and comprehend and apply abstract concepts. To prepare for the future, it is therefore necessary for your child to develop HI. At Cambridge Pre-school, we do so through our HI curriculum, which emphasizes Multiple Intelligences (MI), Character Development and Executive Function (EF).
Rejection of your research paper by a journal does not necessarily imply that your research is fundamentally unsuitable for publication. This is because rejection depends on several factors that might not be solely linked to the main thrust of your research. Besides, the reviewers who evaluate your paper are not familiar with your credentials and therefore might not emphasize the positive factors in your paper. Therefore, it is important that you do not get disheartened or overly disappointed. With certain modifications and perseverance, it is definitely possible to resurrect your research and see it through to publication. In fact, there are several positive takeaways from a rejection. The well-known chemistry journal Angewandte Chemie carried out a systematic study of the rejection procedure and concluded that most manuscripts do not go through large-scale modifications on their way from a rejection to eventual publication. Therefore, a rejection does not signify that your paper is beyond redemption. In fact, there is every chance that the paper will ultimately find its destined forum for publication. On the other hand, a study by Vincent Calcagno, ecologist at the French Institute for Agricultural Research in Sophia-Antipolis, has concluded that a research paper goes through several iterations and modifications from the time of its first submission until its final acceptance. These changes contribute significantly to the improvement of the research. The study also observed that research papers that have gone through one or more rejections before publication tend to be cited more than those that have been published following their first submission. This trend is evident after about three to six years following publication. Calcagno argues that the influence of peer reviews and the inputs from referees and editors makes papers better and each rejection improves the quality of the manuscript from the last attempt. There is also a theory among certain editors to “reject more, because more rejections improve quality.” Therefore, instead of giving in to despair, it is important to patiently evaluate the reasons for rejection and the associated comments, and to act on them in future submissions of the paper. You can also take recourse to professional editing services to refine your manuscript and help in the submission of the paper to other journals. The following are some guidelines for first-time writers in making their papers more acceptable: - Select an innovative and interesting research topic. - Ensure that your writing is well-organized and lucid as it flows from its aim to the conclusion through the methodology, results, and discussion sections. - Stay away from plagiarized text and ensure that your research is original and unpublished. - Select the most suitable journal that has a good scope for your research topic. - Follow the reviewer’s suggestions on your paper in case of a rejection, so that it is in better shape for the next submission. In case the reviewers cite the reason of unsuitability of your research for the target journal, it is important to prepare and resubmit it to another more suitable journal. If it gets rejected again, keep working on your paper and make repeated attempts at submission until it gets accepted. After all, patience and perseverance are two important virtues of any writer. As the well-known 19th-century American writer Elbert Hubbard said, “A little more persistence, a little more effort, and what seemed hopeless failure may turn to glorious success.”
There are many foods in Nature that contribute to human health in no uncertain terms, but very few people are aware of the positives of these foods! It is one such food that you are going to read about here: the benefits of the banana flower. Plenty of information is available on the health benefits of bananas, but not much is written about the banana flower. This article will show you that the banana flower is as healthy as the banana fruit, if not more so. So, just go ahead and expand the horizon of your knowledge.

About Banana Flower Before you start to read about the exact health benefits of the banana flower, it would help to have a basic understanding of the nutrients found in it. This will put you in a better position to fully appreciate the contents of this article. - Minerals: The banana flower is an abundant source of several key minerals such as phosphorus, calcium, potassium, copper, magnesium and iron. These minerals are vital to several bodily functions. - Methanol extract: The flower's methanol extract is packed with compounds that have powerful antioxidant effects. - Vitamin E: You also find plenty of vitamin E in this food. It goes without saying that this vitamin is very good for your health. - Other nutrients: The banana flower also contains protein, dietary fiber and fat. So, start to have this healthy food regularly and watch your health improve!
This rule book applies to all members of the Western media corps and is copyright of the BBC, CNN, Sky News, the Sun, Guardian, Independent and Times.

Rule 1 (Rocket attacks against Israel) The rule here is simple: Never report on any Hamas/Hizbollah rocket attacks against Israel (also note: there is no Israeli town called Sderot). There are only two exceptions to this rule: - If Israel eventually attempts to stop the rocket fire. In that case you can lead with a major story reporting that Israel has launched a massive attack against Palestinian civilians, and you can include the following statement at the end of your 26-page report: "The Israelis claim that their attack was in response to home made rocket attacks from Gaza/Lebanon." - If (as is very common) the Hamas rockets fall short (or explode before being launched) and hence cause casualties in Gaza, then you must write a major report with the headline "Israeli attack kills X civilians in Gaza" (where the number X is determined by Rule 3 below). - If you are forced to mention Hamas and/or Hizbollah rockets (many of which are powerful long range missiles provided by Iran and Syria) you must refer to them using the terms "home made", "harmless", and "nothing more than fire crackers". - You may also say things like "rockets were intercepted by the Iron Dome system" so that it is not even clear that rockets were fired or by whom. However, if you do mention the Iron Dome system, ensure you say that it is an "American supplied system" even though it was actually funded, designed and built by Israel (Rafael Advanced Defence Systems). - Never refer to the fact that the missiles cause deaths, injuries, and major damage in Israel and require several million people in Israel to stay in shelters for hours and days on end. In fact simply say "These so-called rockets have never caused any damage or injuries".

Rule 2 (War) Every two years Hamas will launch a concentrated series of terrorist attacks and multiple rocket attacks (which, no matter how intense, must still not be reported) to draw Israel into a more robust response. In this case you must drop everything and use the following front-page template for the next 4 weeks.

Rule 3 (Arab casualties) If there is any incident either in Israel or near its borders in which there are claims of Arab casualties, you must write a report with the following headline: "Israelis kill X Arab civilians including Y children". For the numbers X and Y simply choose the highest figures from the following sources: - The Palestinian Authority - Islamic Jihad - Syrian State Television - Islamic State - Al Jazeera or Press TV - Any person within 20 miles of the incident who is wearing a kaffiya or a burka. You can close the report with: "An Israeli spokesman claimed, without evidence, that they 'acted in self-defence'." Note that there are certain circumstances where Arab deaths from violence should not be reported at all. This is when Arabs themselves openly claim to be the killers. This applies, for example, in the following cases: - Mass slaughter in fighting between different rival groups (Shia v Sunni, Hamas vs PA, Hamas vs Al Qaeda, PA vs Islamic Jihad etc.) - Where the victims are accused of being Israeli collaborators. In such cases, if anything is written at all, it should be: "The underlying cause of the violence was the oppressive Israeli occupation". Any claims of deaths of Israelis during such incidents can be assumed to be false and hence ignored.
Rule 4 (Demographics) Remember the following important demographics in any report, especially relating to casualties: - Every Palestinian, especially every member of Hamas and Islamic Jihad, is a civilian. Those who produce suicide videos armed with machine guns vowing to kill as many Jews as possible are simply civilians forced by the Israeli occupation into becoming 'militants'. - Any Palestinian under the age of 26 is a child. - Any Palestinian under the age of 16 is a baby. - Any Palestinian over the age of 32 is a grandfather/grandmother. - Any Palestinian with any type of injury (especially including Hamas terrorists injured when launching attacks against Israel) is disabled. - Every building in Gaza is either a Hospital, School, Mosque, or a house filled exclusively with women and children. - There are no Israeli 'civilians' and certainly no Israeli 'children'. They are just soldiers or settlers. - Palestinian cities and towns are refugee camps. - Israeli cities and towns are settlements.

Rule 5 (Arab Terrorist attacks) If Israelis are killed in a terrorist attack, then treat this as an opportunity to take a vacation from reporting. However, you should immediately return from your vacation if it is discovered that an Israeli family in Jerusalem is planning to build an extra bedroom to accommodate their new baby. In that case you should write a story with the headline "Israelis destroy chance of peace by announcing new West Bank settlement plans". At the end of the article you can use the following statement: "An Israeli government spokesman claimed that the settlement plans were in response to what they claimed was a 'terrorist attack'." The only exceptions to this rule are as follows: - If the terrorist was killed by Israeli security forces attempting to stop further slaughter, then you may post a brief report with a very large headline saying "Palestinian murdered by Israeli security forces in occupied West Bank" (note that Tel Aviv can be considered part of the West Bank for such reporting). Under no circumstances must the actual terrorist attack or any of its victims be mentioned. You should, however, seek quotes and photos from the terrorist's family members showing him/her to be a loving person who supported Real Madrid. - In the event of a suicide bombing you may interview the suicide bomber's family and write a sympathetic piece stating how the bomber was driven to his/her actions by the Israeli occupation. Be careful to refer to the actual suicide bombing only in vague abstract terms, never mentioning the victims or their families. - In the event of a particularly brutal terrorist attack, such as the slaughter of an entire family in their home in which a baby is decapitated, you can, if news of the attack reaches outside Israel, write a brief report with the following words: "Although the Israelis claim that a terrorist attack occurred at X, it is more likely to have been the result of a disgruntled Thai worker, Jewish militants intent on sparking anti-Arab violence, or simply a family dispute."

Rule 6 (Jewish "Terrorist" attacks) Make sure you never mention the widespread celebrations that take place throughout the Palestinian territories.
Instead you should quote a Palestinian Authority spokesman, who having just led the celebrations with proclamations such as "This will be the fate of all Jews" in Arabic tells you in English that the Palestinian Authority do not approve of the attack as it damages their cause, and that in any case such actions are the natural response to the Israeli occupation. Never mention the fact that in Palestinian suicide terrorist attacks, children are always specifically targeted. Just stress that such attacks are the inevitable result of the 'occupation' and 'poverty'. - In contrast to the multiple Arab terrorist attacks that must not be reported, the extremely rare attacks that might be attributed to Jews must be given total and uninterrupted prominence, irrespective of any other news story (including the death of the Queen). - Any rules we may have about not using the word 'terrorist' and not using the religion of the person carrying out an attack can be ignored. The words "Jewish terrorist(s)" must appear in the headline and in the first sentence of every paragraph. - No act of aggression should be considered too minor to report. This includes any acts against property (e.g. Hebrew graffiti sprayed on a mosque) and any physical contact by a Jew against any Arab. This is one area of reporting where you are allowed to listen to main stream Israeli reporters since they are especially eager to promote such stories widely. - Even when there is absolutely no evidence (as in most cases) that the acts were carried out by Jews never suggest that there may be some doubt (again you can follow main stream Israeli reporters in this respect since they are world-leaders in self-flagellation). The fact that 99% of all reported acts of 'Jewish terrorism' turn out to have actually been committed by Arabs - either to incite violence or simply as part of local tribal feuds - must never be mentioned. - Do not under any circumstances mention the fact that (in total contrast to Palestinians and their leaders who celebrate every terrorist attack against Jews and honour the terrorists) Israelis universally condemn every rare attack carried out by Jews and severely punish those responsible (even before any trial). - In addition to the story itself make sure that there are several editorials with headings such as "As a Jew I am ashamed of Israel", "If Israelis had any self-respect they would declare their state illegitimate and leave", "Proof that right wing Israelis seek the murder of all Arabs", "All Jews are responsible for this violence" "All Jews must now declare Israel a criminal state", "Jewish terrorism inevitable result of Israeli apartheid" Rule 7 (Terminology) - The word ‘terrorist’ must never be used except when referring to Jews (as in Rule 5) or to any Jewish resident of the West Bank accused of acting provocatively in the presence of an Arab. - Any Arab in a combat zone is unarmed if they are not carrying a mobile missile launcher. Any kind of guns, knives, swords, rocks, sticks or heavy metal items are, as a matter of course carried by unarmed Arabs since it is part of their cultural heritage. - In any report you must insert the word “settler” after the word “Israeli”, unless it is known that they live outside of Central Tel Aviv, in which case you must add the word “fanatical” before “Israeli”. - When mentioning the name of any Israeli city or town (i.e. settlement) you must also include the words occupied territory. 
- When mentioning any Israeli politician you must insert the word “hardline”, “extremist”, or “right-wing” before their name (if in doubt it is best to insert all three words). - In any report about Palestinians use the following words: “authentic”, “welcoming”, “poor”. - When mentioning any Palestinian politician you must insert the word “moderate” before their name, even if they are a leader of Hamas, Islamic Jihad or even Islamic State. - Be especially careful when referring to Arab-Israelis. If an Arab-Israeli is killed in a Palestinian terrorist attack, or in a cross border shooting/rocket attack then you must refer to them as "Palestinian" (but be careful not to mention who caused the killing). In other circumstances you must refer to them as Israelis to ensure that nobody knows that there are Arab-Israeli citizens prominent and represented in all strata of Israeli society. - Israel is the only country in the world that must behave with absolutely perfect ethics and morality in every single aspect of public and private life. Hence, even the smallest deviation from perfection (such as if you can find an Arab worker earning less than a Jewish worker, or an Arab rioter being manhandled by a policeman) can be treated as a major news story. - Arab countries, and especially the Palestinians, do not have to abide by any rules of morality or ethics at all. Hence, institutional antisemitism, violence, child abuse, racism, sexism, indoctrination of children, and the fact that 99.99% of the population dreams of massacring every Jew in the world are all examples of acceptable cultural behaviour and so must never be reported (doing so will result in you being disciplined for Islamaphobia or for being a Zionist stooge). - Never look at a map of the whole Middle East. That way you will never have to reveal that Israel is less than the size of Wales or New Jersey surrounded by Muslim countries with a land mass over a thousand times as large. - Although you can refer to the fact that 20% of Israel's population are Arabs you should only do so if you include the words "oppressed minority", "abused", "underclass", "impoverished". - Never ask why over 50% of Israel’s Jewish population appears to have dark or even black skin. This might otherwise force you to reveal that not all Israeli Jews are of European/American origin and that they are actually native Middle Eastern and Ethiopian. - Never ask why there are 0 Jews living in Gaza, 0 Jews living in the PA occupied part of the "West Bank", 0 Jews living in Jordan, 0 Jews living in Syria (down from 16,000 in 1947), 0 Jews living in Iraq (down from 200,000 in 1940), 0 Jews living in Algeria (down from 150,000 in 1940), 0 Jews living in Libya (down from 50,000 in 1940), 0 Jews living in Lebanon (down from 10,000 in 1940), 30 Jews living in Egypt (down from 100,000 in 1940), 2000 Jews living in Morocco (down from 270,000 in 1940) , 0 Jews living in Saudi Arabia, etc. Do, however, say that Israel's policies make it an apartheid state committed to ethnic cleansing of all Arabs. - Never ask why Israel is the only country in the Middle East whose Christian population is expanding. - Never ask why Jerusalem is mentioned over 600 times in the Old Testament and 0 times in the Koran. Do, however, refer at least once in every report to the fact that Jerusalem is sacred to Muslims. 
Rule 10 ('War crimes' and Tactics of War) In simple terms military acts against terrorist groups and regimes which - when carried out by say American, European, Arab or even Russian forces would be cause for widespread celebration and international approval - are heinous war crimes if carried out by Israelis. This applies especially to: - Targeted assassinations of terrorists: If carried out by Israel then, when reporting such crimes always invoke the spectre of a UN Security Council resolution when expressing your outrage and condemnation. Never mention that the assassination was carried out with such careful planning and skill that no civilian was harmed during the operation. Also never mention the direct threat the terrorist posed to Israel, nor the Israelis previously killed by the terrorist. In contrast, if carried out by others then praise the act and make sure you never mention the fact that, unlike assassinations carried out by Israelis, dozens of innocent civilians were also killed. Also never mention the fact that, unlike when Israel targets terrorists, the terrorist did not pose a direct threat to the country carrying out the attack. - Blockading borders and bombing: If carried out by Israel, such as the blockading of Gaza's sea border to stop weapons deliveries, and the targeted bombing of its terror facilities then additionally stress that it must be the subject of multiple UN resolutions (ignore the fact that the blockade is legal etc), enquiries, and 'peace' flotillas. Never mention the tyrannical nature of the Hamas regime but focus only on the civilians living there (without mentioning that they support the regime and all rejoice in the deaths of Israeli citizens). You should also point out the 'hypocrisy of the West' in not imposing a no-fly zone on Israel. In contrast, when the blockade and bombing is carried out by nations from the other side of the world (as in, e.g. the 'West' against Libya in 2011) report that this is a highly commendable tactic (even though the country poses no threat to its neighbours). Stress the tyrannical nature of the regime being bombed (and do not remind readers that 6 months earlier you were telling them what a wonderful reformer their leader was) and never remind readers about the civilians living there (who mostly hate the regime) Rule 11 (Using photographs) - Any photo that you find of dead children in conflict from anywhere in the world must be posted immediately to twitter with the caption "New proof that Israel murders babies for fun". Photos of victims from the civil wars in Syria and Iraq are an especially good source. You do not have to worry even if they are over 10 years old. You may even use stills from horror films. - Since Arabs/Muslims are always the victims and Israelis are always the aggressors any photograph showing: Muslims attacking Jews, dead or injured Jews, damage to Israeli property from Muslim violence or rockets, can all be assumed to be faked and must never be shown. There is one exception to this rule and that is if the photograph does not obviously identify Muslims as the aggressors and Jews as the victim. In that case you are allowed to show the photograph with an appropriate caption making it clear that Jews are the aggressors and Muslims the victims. - If at any time you cannot find relevant photos that can be used to portray Israel as aggressor and Arabs as victim, then simply use photoshop. 
You can either just doctor existing photos (update this is typical) or better still create your own scene using the ever obliging Palestinians. If you have no volunteers then simply find a pile of rubbish anyway and place a child's toy or doll on the top - this enables you to tweet a photo with the caption "all that remains of nursery destroyed by Israel". Other variations of this are to place a broken wheelchair on top and use the caption "all that remains of disabled people's home destroyed by Israel". Rule 12 (Speaking to the natives) Never interview an Israeli Jew (Israeli Arabs, especially those belonging to the Islamic Movement can be used instead). The only exceptions to this rule are: - Haaretz reporters - Anti-Zionist academics, writers, politicians and lawyers - Amos Oz - Any recent American Jewish immigrant living in the West Bank who uses Biblical quotations every other sentence. Rule 13 (Peace activists) Any non-Arab in the region who is involved in violent anti-Israel activities is a peace activist. If such a person dies under any circumstances then you must state before any evidence is produced that this person “was killed by Israeli troops”. You must also put out an urgent call to all journalists in the world to write about nothing else for the next 7 days. Also contact every playwright to request a play dedicated to this martyr of the Palestinian cause. Rule 14 (Arabic speeches) Never use the services of an Arabic translator to find out what Arab leaders and clerics are telling their own people, as in “We will not rest until every Jew is dead”. This will ensure you do not have to waste time telling readers that they have directly contradicted what they told you in English (as in “We want peace with Israel”). It will also save you having to explain (or even report) that wild anti-Semitic conspiracy theories are believed by 99.999% of all Arabs. A simple rule of thumb is the following: Any statement X made to you by an Arab in English is to be treated as unimpeachable truth. Any statement Y made by an Arab in Arabic that contradicts X, can be treated as false and hence ignored.In the unlikely event that news of the translation gets into the main stream media you simply have to report that the translation is the work of MEMRI – a Zionist organisation dedicated to mis-translation to make Arabs look like anti-Semitic, homophobic, misogynistic psychopaths. If, even after that, the translation is proved to be accurate then you can simply say that the statement was "theatrical rhetoric which was never supposed to be meant literally". Rule 15 (Israeli evidence) Be aware that in most major stories where you are able to cast Israel as the devil incarnate, the Israelis subsequently produce hard evidence (videos, documents, etc) that prove they were not guilty of the claims that had been made. In such cases there is no need to report this evidence. However, if you wish to remind readers of the original story you can include the following words at the end: “Israel has claimed that some of these accusations are not true, but sources cast doubt on the authenticity of the Israeli evidence”. Rule 16 (Peace Partners) Under no circumstances ever mention that the Hamas Charter calls for the death of all Jews and cites the Protocols of the Elders of Zion for its inspiration. 
Similarly, never mention that, even after the Oslo accords, the Palestinian Authority has never renounced its own charter calling for the destruction of Israel and that every one of its leaders openly seeks the destruction of the Jewish State. Rule 17 (Writing feature-length insight articles) This is covered by Daniel Greenfield in "How to write about Israel" Rule 18 (The Final Option) If you have any doubt about the content of a story, or if you simply cannot be bothered to write anything yourself, then simply copy and paste whatever Al Jazeera or Press TV is saying on its website. Or just use the template provided here for every day that the current conflict lasts. - Guidelines for reporting attacks against ISIS and Hamas - Simple questions about the Middle East that are never asked - Saving the British media a lot of effort in their Gaza reporting - The maps that lie - Following up on the guidelines ... - Imagine if the Tunisian attack had been by a Palestinian on Tel Aviv beach .... - A dedication to the useful idiots at the BBC and CNN
US 7018712 B2 Volatile organic compound or other materials are produced in the thermoplastic manufacture of thermoplastic polyester beverage containers. Such materials can be eluted into beverages such as carbonated beverages, sparkling or still water from the polyester. Such thermoplastic polyester resins can be manufactured with a material that can prevent the formation of, or react with, and absorb volatile by-products during the formation of thermoplastic preforms or containers from the thermoplastic pellet or chip. Further, as the preform is blown into a polyester container, the active materials of the invention prevent the generation of additional undesirable volatile materials. Lastly, the scavenger material can act as a barrier that prevents transport of materials from the exterior of the container into the container contents. 1. A method of making a polyester chip comprising the steps of: (a) forming a liquid comprising an effective amount of a substituted cyclodextrin compound to trap or prevent transport of a material through a polymer layer; (b) introducing the liquid comprising the substituted cyclodextrin into a stream of molten polyester proximate to a mixing means in a process device to form a treated stream; (c) passing the treated stream through an extruder exit orifice forming the polyester chip; and (d) subjecting the polyester chip to solid state polymerization. 2. The method of 3. The method of 4. The method of 5. The method of 6. The method of 7. The method of 8. The method of 9. The method of 10. The method of 11. The method of 12. The method of This application is a divisional of application Ser. No. 10/692,650, filed Oct. 24, 2003 now U.S. Pat. No. 6,878,457, which is a divisional of application Ser No. 10/163,817, filed Jun. 5, 2002, now U.S. Pat. No. 6,709,746 B2, which applications are incorporated herein by reference. Container structures can comprise an oriented thermoplastic polyester resin material. Such resins can be a source of reactive organic materials that can be eluted from the packaging into, for example, a food material held within the container. Such reactive materials, including an aldehyde material, can result in undesirable off-odors or off-flavors in a food, or off-taste in water or beverage drink. The invention relates to polyester pellet or chip coated with active materials that can prevent the formation of or scavenge the organic material during preform and bottle manufacturing methods. The invention further relates to the polyester preform comprising thermoplastic polyester and, dispersed in the thermoplastic resin, an active material that can act to prevent the formation of or scavenge volatile organic components. Lastly, the invention relates to a thermoplastic beverage container and methods of making the chip, preform or container. Polyethylene terephthalate (PET) packaging materials in the form of film, shaped containers, bottles, etc. have been known. Further, rigid, or semi-rigid, thermoplastic beverage containers have been made from preforms that are in turn molded from pellets or chips etc. Biaxially oriented blow molded thermoformed polyester beverage containers are disclosed in J. Agranoff (Ed) Modern Plastics, Encyclopedia, Vol. 16, No. 10A, P. (84) pp. 192–194. These beverage containers are typically made from a polyester, a product of a condensation polymerization. The polyester is typically made by reacting a dihydroxy compound and a diacid compound in a condensation reaction with a metallic catalyst. 
Dihydroxy compounds such as ethylene glycol, 1,4-butane diol, 1,4-cyclohexane diol and other diol can be copolymerized with an organic diacid compound or lower diester thereof such diacid. Such diacidic reactants include terephthalic acid, 2,6-naphthalene dicarboxylic acid, methyl diester thereof, etc. The condensation/polymerization reaction occurs between the dicarboxylic acid, or a dimethyl ester thereof and the glycol material in a heat driven metal catalyzed reaction that releases water or methanol as a reaction by-product leaving, a high molecular weight polyester material. Bulk resin is formed as a convenient flake, chip or pellet adapted for future thermal processing. Bulk polyester material can be injection blow molded directly into a container. Alternately, the polyester can be formed into an intermediate preform that can then be introduced into a blow-molding machine. The polyester is heated and blown to an appropriate shape and volume for a beverage container. The preform can be a single layer material, a bilayer or a multilayer preform. Metallic catalysts are used to promote a polymerization reaction between diacid material and the dihydroxy compound. At the beginning of the melt phase, ethylene glycol, terephthalic acid, or ester thereof, and metallic catalysts are added to the reactor vessel. Various catalysts are known in the art to be suitable for the transesterification step. Salts of organic acids with bivalent metals (e.g. manganese, zinc, cobalt or calcium acetate) are preferably used as—direct esterification or trans-esterification catalysts, which in themselves also catalyze the polycondensation reaction. Antimony, germanium and titanium compounds are preferably used as polycondensate catalysts. Catalysts that may be used include organic and inorganic compounds of one or more metals alone or in combination with the above-described antimony, also including germanium and titanium. Suitable forms of antimony can be used, including inorganic antimony oxides, and organic compounds of antimony, such as antimony acetate, antimony oxalate, antimony glycoxide, antimony butoxide, and antimony dibutoxide. Antimony-containing compounds are currently in widespread commercial use as catalysts that provide a desirable combination of high reaction rate and low color formation. Titanium may be chosen from the group consisting of the following organic titanates and titanium complexes: titanium oxalate, titanium acetate, titanium butylate, titanium benzoate, titanium isoproprylate, and potassium titanyl oxalate. Organic titanates are not generally used in commercial production. At the end of the melt phase, after polymerization is complete and molecular weight is maximized, the product is pelletized. The pellets are treated in solid-state polycondensation to increase intrinsic viscosity in order to obtain bottle resin of sufficient strength. The catalysts typically comprise metallic divalent or trivalent cations. The treatment of polyester materials containing such catalysts can result in byproduct formation. Such byproduct can comprise reactive organic materials such as an aldehyde material, commonly analyzed as acetaldehyde. The formation of acetaldehyde materials can cause off odor or off taste in the beverage and can provide a yellowish cast to the plastic at high concentrations. Polyester manufacturers have added phosphorus-based additives as metal stabilizers to reduce acetaldehyde formation. Many attempts to reduce aldehyde formation have also caused problems. 
Antimony present as Sb+1, Sb+2 and Sb+3 in the polyester as catalyst residues from manufacture can be reduced to antimony metal, Sb0, by the additives used to prevent aldehyde formation or scavenge such materials. Formation of metallic antimony can cause a gray or black appearance to the plastic from the dispersed, finely divided metallic residue. The high molecular weight thermoplastic polyester can contain a large variety of relatively low molecular weight compound, (i.e.) a molecular weight substantially less than 500 grams per mole as a result of the catalytic mechanism discussed above or from other sources. These compounds can be extractable into food, water or the beverage within the container. These beverage extractable materials typically comprise impurities in feed streams of the diol or diacid used in making the polyester. Further, the extractable materials can comprise by-products of the polymerization reaction, the preform molding process or the thermoforming blow molding process. The extractable materials can comprise reaction byproduct materials including formaldehyde, formic acid, acetaldehyde, acetic acid, 1,4-dioxane, 2-methyl-1,3-dioxolane, and other organic reactive aldehyde, ketone and acid products. Further, the extractable materials can contain residual diester, diol or diacid materials including methanol, ethylene glycol, terephthalic acid, dimethyl terephthalic, 2,6-naphthalene dicarboxylic acid and esters or ethers thereof. Relatively low molecular weight (compared to the polyester resin) oligomeric linear or cyclic diesters, triesters or higher esters made by reacting one mole of ethylene glycol with one mole of terephthalic acid may be present. These relatively low molecular oligomers can comprise two or more moles of diol combined with two or more moles of diacid. Schiono, Journal of Polymer Science: Polymer Chemistry Edition, Vol. 17, pp. 4123–4127 (1979), John Wiley & Sons, Inc. discusses the separation and identification of PET impurities comprising poly(ethylene terephthalate) oligomers by gel permeation chromatography. Bartl et al., “Supercritical Fluid Extraction and Chromatography for the Determination of Oligomers and Poly(ethylene terephthalate) Films”, Analytical Chemistry, Vol. 63, No. 20, Oct. 15, 1991, pp. 2371–2377, discusses experimental supercritical fluid procedures for separation and identification of a lower oligomer impurity from polyethylene terephthalate films. Foods or beverages containing these soluble/extractables derived from the container, can have a perceived off-taste, a changed taste or even, in some cases, reduced taste when consumed by a sensitive consumer. The extractable compounds can add to or interfere with the perception of either an aroma note or a flavor note from the beverage material. Additionally, some substantial concern exists with respect to the toxicity or carcinogenicity of any organic material that can be extracted into beverages for human consumption. The technology relating to compositions used in the manufacture of beverage containers is rich and varied. In large part, the technology is related to coated and uncoated polyolefin containers and to coated and uncoated polyester that reduce the permeability of gasses such as carbon dioxide and oxygen, thus increasing shelf life. The art also relates to manufacturing methods and to bottle shape and bottom configuration. Deaf et al., U.S. Pat. No. 5,330,808 teaches the addition of a fluoroelastomer to a polyolefin bottle to introduce a glossy surface onto the bottle. 
Visioli et al., U.S. Pat. No. 5,350,788 teaches methods for reducing odors in recycled plastics. Visioli et al. disclose the use of nitrogen compounds including polyalkylenimine and polyethylenimine to act as odor scavengers in polyethylene materials containing a large proportion of recycled polymer. Wyeth et al., U.S. Pat. No. 3,733,309 shows a blow molding machine that forms a layer of polyester that is blown in a blow mold. Addleman, U.S. Pat. No. 4,127,633 teaches polyethylene terephthalate preforms which are heated and coated with a polyvinylidene chloride copolymer latex that forms a vapor or gas barrier. Halek et al., U.S. Pat. No. 4,223,128 teaches a process for preparing polyethylene terephthalate polymers useful in beverage containers. Bonnebat et al., U.S. Pat. No. 4,385,089 teaches a process for preparing biaxially oriented, hollow thermoplastic shaped articles in bottles using a biaxial draw and blow molding technique. A preform is blow molded and then maintained in contact with hot walls of a mold to at least partially reduce internal residual stresses in the preform. The preform can be cooled and then blown to the proper size in a second blow molding operation. Gartland et al., U.S. Pat. No. 4,463,121 teaches a polyethylene terephthalate polyolefin alloy having increased impact resistance, high temperature dimensional stability and improved mold release. Ryder, U.S. Pat. No. 4,473,515 teaches an improved injection blow molding apparatus and method. In the method, a parison or preform is formed on a cooled rod from hot thermoplastic material. The preform is cooled and then transferred to a blow molding position. The parison is then stretched, biaxially oriented, cooled and removed from the device. Nilsson, U.S. Pat. No. 4,381,277 teaches a method for manufacturing a thermoplastic container comprising a laminated thermoplastic film from a preform. The preform has a thermoplastic layer and a barrier layer which is sufficiently transformed from a preformed shape and formed into a container. Jakobsen et al., U.S. Pat. No. 4,374,878 teaches a tubular preform used to produce a container. The preform is converted into a bottle. Motill, U.S. Pat. No. 4,368,825; Howard Jr., U.S. Pat. No. 4,850,494; Chang, U.S. Pat. No. 4,342,398; Beck, U.S. Pat. No. 4,780,257; Krishnakumar et al., U.S. Pat. No. 4,334,627; Snyder et al., U.S. Pat. No. 4,318,489; and Krishnakumar et al., U.S. Pat. No. 4,108,324 each teach plastic containers or bottles having preferred shapes or self-supporting bottom configurations. Hirata, U.S. Pat. No. 4,370,368 teaches a plastic bottle comprising a thermoplastic comprising vinylidene chloride and an acrylic monomer and other vinyl monomers to obtain improved oxygen, moisture or water vapor barrier properties. The bottle can be made by casting an aqueous latex in a bottle mold, drying the cast latex or coating a preform with the aqueous latex prior to bottle formation. Kuhfuss et al., U.S. Pat. No. 4,459,400 teaches a poly(ester-amide) composition useful in a variety of applications including packaging materials. Maruhashi et al., U.S. Pat. No. 4,393,106 teaches laminated or plastic containers and methods for manufacturing the container. The laminate comprises a moldable plastic material in a coating layer. Smith et al., U.S. Pat. No. 4,482,586 teaches a multilayer polyester article having good oxygen and carbon dioxide barrier properties containing a polyisophthalate polymer. Walles, U.S. Pat. Nos.
3,740,258 and 4,615,914 teaches that plastic containers can be treated to improve barrier properties to the passage of organic materials and gases, such as oxygen, by sulfonation of the plastic. Rule et al., U.S. Pat. No. 6,274,212 teaches scavenging acetaldehyde using scavenging compounds having functional groups with adjacent heteroatoms that can form a five- or six-membered bridge through condensation with acetaldehyde. Al-Malaika PCT WO 2000/66659 and Weigner et al., PCT WO 2001/00724 teach the use of polyol materials as acetaldehyde scavengers. Wood et al., U.S. Pat. Nos. 5,837,339, 5,883,161 and 6,136,354, teach the use of substituted cyclodextrin in polyester for barrier properties. Further, we are aware that the polyester has been developed and formulated to have high burst resistance to resist pressure exerted on the walls of the container by carbonated beverages. Further, some substantial work has been done to improve the resistance of the polyester material to stress cracking during manufacturing, filling and storage. Beverage manufacturers have long searched for improved barrier material. In large part, this research effort was directed to carbon dioxide (CO2) barriers, oxygen (O2) barriers and water vapor (H2O) barriers. More recently, original bottle manufacturers have had a significant increase in sensitivity to the presence of beverage extractable or beverage soluble materials in the resin or container. This work has been to improve the bulk plastic with polymer coatings or polymer laminates of less permeable polymer to decrease permeability. However, we are unaware of any attempt at introducing, into the bulk polymer resin or polyester material of a beverage container, an active complexing compound that scavenges metal catalyst residues contained in the polyester resin during the preform manufacturing process, thereby reducing catalytically generated beverage extractable or beverage soluble material caused by catalyst residues in the resin or container. Even with this substantial body of technology, a substantial need has arisen to develop biaxially oriented thermoplastic polymer materials for beverage containers that can substantially reduce the elution of reactive organic materials into a food or beverage in the container or reduce the passage of permeants in the extractable materials that pass into beverages intended for human consumption. Stabilization of polyester resins and absorption of reactive organics such as acetaldehyde have drawn significant attention. Proposals for resolving the problem have been made. One proposal involves using active stabilizers including phosphorus compounds and nitrogen heterocycles as shown in WO 9744376, EP 26713, U.S. Pat. No. 5,874,517 and JP 57049620. Another proposal, which has obtained great attention, includes solid state polycondensation (SSP) processing. The materials after the second polymerization stage are treated with water or aliphatic alcohols to reduce residuals by decomposition. Lastly, acetaldehyde can be scavenged with reactive chemical materials including low molecular weight partially aromatic polyamides based on xylylene diamine materials and low molecular weight aliphatic polyamides. [See U.S. Pat. Nos. 5,258,233; 6,042,908 and European Patent No. 0 714 832; for commercial polyamides see WO9701427; for polyethylene imine see U.S. Pat. No. 5,362,784; for polyamides of terephthalic acid see WO9728218; and for the use of inorganic absorbents such as zeolites see U.S. Pat. No. 4,391,971.] Bagrodia, U.S. Pat. No.
6,042,908 uses polyester/polyamide blends to improve flavor of ozonated water. Hallock, U.S. Pat. No. 6,007,885 teaches oxygen-scavenging compositions in polymer materials. Ebner, U.S. Pat. No. 5,977,212 also teaches oxygen-scavenging materials in polymers. Rooney, U.S. Pat. No. 5,958,254 teaches oxygen scavengers without transition metal catalysts for polymer materials. Speer, U.S. Pat. No. 5,942,297 teaches broad product absorbance to be combined with oxygen scavengers in polymer systems. Palomo, U.S. Pat. No. 5,814,714 teaches blended mono-olefin/polyene interpolymers. Lastly, Visioli, U.S. Pat. No. 5,350,788 teaches methods for reducing odors in recycled plastics. In implementing the technologies using various scavenging materials in polyester beverage polymers, a significant need remains for technology that reduces the concentration of organic materials such as aldehydes, ketones and acids in polyester without the reduction of antimony to gray or black metallic residue. In particular, a reduction in acetaldehyde residues in polyester is required. Further, a need exists to obtain reduced acetaldehyde concentration in polyesters along with introducing barrier properties in the polyester material. We have found that polyester resin and polyester beverage containers can be made with an active component that can act to inhibit reactive organic chemical compound formation. The active components also offer an organic vapor barrier property to the container material. We have found that a small amount of a specific substituted cyclodextrin compound can be coated onto the polyester chip or pellet during bulk polyester resin manufacture. The polyester chip with the cyclodextrin compound can then be introduced into an extruder for the purpose of injection molding a polyester preform article or directly blowing the bottle. During extrusion, the cyclodextrin compound mixes with the melt polymer at high temperature during a set residence time. At the temperature of the melt extrusion, the cyclodextrin compound reacts with, complexes or associates with the metallic catalyst residues and prevents the production of catalytically generated reactive organic compounds, including aldehyde materials such as acetaldehyde. The cyclodextrin compound can also react with and scavenge volatile reactive materials such as acetaldehyde formed during the melt process. A preform or blow molding residence time is selected that results in effective aldehyde concentration reduction but without cyclodextrin or polymer degradation. Such a reduction in aldehyde concentration reduces or eliminates major off-odors and off-flavors in the thermoplastic polymer. We have found that a small, but critical, loading of a specific cyclodextrin material on the thermoplastic polymer obtains excellent scavenging and barrier properties. Preferably, the cyclodextrin is formed in a coating layer on the polyester chip or pellet. Such coatings are made by dispersing or dissolving the cyclodextrin compound in a solvent, preferably water, and dispersing or spraying the resulting aqueous solution onto the polymer chip or pellet following polycondensation and preferably after SSP. This amount of cyclodextrin is sufficient to provide such properties without unacceptable commercial discoloration of the polymer resin or any reduction in polymer clarity or physical properties. The cyclodextrin compound is typically incorporated with, dispersed into or suspended in the bulk polymer material used to make the beverage container.
We have also found that the purity of the cyclodextrin aqueous solution is important in achieving reduced aldehyde levels, reduced color formation and prevention of antimony reduction. Once formed, an aqueous cyclodextrin solution can be purified by contacting the solution with an activated charcoal absorbent, an ion exchange resin or a filtration apparatus including nanofiltration, reverse osmosis, etc. equipment. Preferably, the cyclodextrin compound utilized in the technology of the invention involves a substituted β- or α-cyclodextrin. Preferred cyclodextrin materials are substituted on at least one of the 6-OH groups of the glucose moieties in the cyclodextrin ring. β-Cyclodextrin materials comprise seven glucose moieties forming the cyclodextrin ring. Any of such hydroxyl groups can be substituted. The degree of substitution (D.S.) of the cyclodextrin material can range from about 0.3 to 1.8; preferably the degree of substitution can range from about 0.5 to 1.2. We found that for complexing metallic catalyst residues in the polymer material, a beta or alpha cyclodextrin is preferred. Further, the degree of substitution has an important role in ensuring that the cyclodextrin is compatible with the melt polymer, but is not so substituted that the cyclodextrin cannot participate in complexing catalyst residues. We have further found that the amount of the substituted cyclodextrin material useful in preventing the formation of aldehyde by complexing metallic catalyst residues is less than the amount of cyclodextrin active in barrier structures. The effective amount of a substituted cyclodextrin for aldehyde suppression ranges from about 100 ppm to 1,400 ppm based on the polymer composition as a whole, preferably 350 ppm to 900 ppm. The principal mechanistic action of the substituted cyclodextrin material is formation of a coordination complex of the metallic catalyst in which more than one metal ion is bound per cyclodextrin. Metallocyclodextrins are formed from cyclodextrins substituted at the 6-position hydroxyl and can consist of two cyclodextrins linked together through the secondary hydroxyl groups (3- and 2-positions) of the unmodified (native) cyclodextrin; a secondary hydroxyl losing a proton produces an alkoxide that coordinates a metal ion, forming the simplest type of metallocyclodextrin. Accordingly, a substantial and effective fraction of the cyclodextrin must be available for catalyst residue complexation to accomplish the goal of the invention. The compatible cyclodextrin compounds are introduced into the melt thermoplastic substantially free of an inclusion complex or inclusion compound. For this invention the term "substantially free of an inclusion complex" means that the quantity of dispersed cyclodextrin material in the coating on the polyester chip or pellet is free of a complex material or "guest compound" in the central pore of the cyclodextrin molecule. A first aspect of the invention comprises a thermoplastic pellet or chip having a major proportion of the thermoplastic polyester material used in making the preform or the beverage container. The pellet or chip comprises an exterior coating layer containing an effective metal catalyst scavenging and volatile organic barrier-providing amount of a cyclodextrin compound. Such an exterior coating of cyclodextrin can be made from an aqueous solution of the cyclodextrin material. The aqueous solution can be made by dissolving a cyclodextrin material in an aqueous medium to form a solution and purifying the solution.
A second aspect of the invention comprises a process of forming a purified cyclodextrin solution by contacting a cyclodextrin solution with an activated carbon absorbent, an ion exchange resin, or membrane filtration equipment. A third aspect of the invention comprises a thermoplastic preform having within the polymer matrix an effective amount of the cyclodextrin compound for reducing volatile organic materials such as acetaldehyde produced during injection molding and for introducing a barrier property into the thermoplastic polymer. A fourth aspect of the invention comprises a thermoplastic beverage container having the metal catalyst scavenger property and a volatile organic barrier property that results from the manufacture of the beverage container from the preform of the invention. Lastly, a fifth aspect of the invention comprises a method for manufacturing a polyester beverage container from the coated pellet or chip of the invention through a preform stage. In each of these aspects, the use of the purified cyclodextrin material results in a clear, substantially water-white polyester material having little or no organic material to produce off-odors or off-flavors in the food material within a polyester container. We have found that the packaging properties of polyester materials can be substantially improved using a substituted cyclodextrin material at a concentration that can prevent the formation of an organic material such as an aldehyde, or scavenge the formed organic material. We further found that using a purified cyclodextrin material is preferred for polyester processing. We further found that a preferred degree of substitution, concentration of substituted cyclodextrin and processing conditions produce a high-quality polyester material. We have found that combining a modified cyclodextrin material with the polymer obtains improved resistance to reactive organic compound formation and a reduced tendency to release polymer residues (e.g., acetaldehyde). Suitable polyesters are produced from the reaction of a diacid or diester component comprising at least 60 mole percent terephthalic acid (TA) or C1–C4 dialkyl terephthalate, preferably at least 75 mole percent, and more preferably at least 85 mole percent; and a diol component comprising at least 60 mole percent ethylene glycol (EG), preferably at least 75 mole percent, and more preferably at least 85 mole percent. It is also preferred that the diacid component be TA, or the dialkyl terephthalate component be dimethyl terephthalate (DMT), and the diol component be EG. The mole percentages for all the diacid/dialkyl terephthalate components total 100 mole percent, and the mole percentages of all diol components total 100 mole percent. Alternatively, suitable polyesters are produced from the reaction of a diacid or diester component comprising at least 60 mole percent 2,6-naphthalene dicarboxylic acid (NDA) or C1–C4 dialkyl naphthalate, preferably at least 75 mole percent, and more preferably at least 85 mole percent; and a diol component comprising at least 60 mole percent ethylene glycol (EG), preferably at least 75 mole percent, and more preferably at least 85 mole percent.
Where the polyester components are modified by one or more diol components other than EG, suitable diol components of the described polyester can be selected from 1,4-cyclohexanedimethanol; 1,2-propanediol; 1,3-propanediol; 1,4-butanediol; 2,2-dimethyl-1,3-propanediol; 1,6-hexanediol; 1,2-cyclohexanediol; 1,4-cyclohexanediol; 1,2-cyclohexanedimethanol; 1,3-cyclohexanedimethanol; and diols containing one or more oxygen atoms in the chain, for example diethylene glycol, triethylene glycol, dipropylene glycol, tripropylene glycol or mixtures of these and the like. In general, these diols contain 2 to 18, and preferably 2 to 8, carbon atoms. Cycloaliphatic diols can be employed in their cis or trans configuration or as mixtures of both forms. Where the polyester components are modified by one or more acid components other than TA, suitable acid components of the linear polyesters may be selected from the class of isophthalic acid; 1,4-cyclohexanedicarboxylic acid; 1,3-cyclohexanedicarboxylic acid; succinic acid; glutaric acid; adipic acid; sebacic acid; 1,12-dodecanedioic acid; 2,6-naphthalene dicarboxylic acid; 2,7-naphthalene dicarboxylic acid; t-stilbene dicarboxylic acid; 4,4′-bibenzoic acid; or mixtures of these or their anhydride equivalents, and the like. In the case of polyethylene naphthalate, 2,6-naphthalene dicarboxylic acid can be used in place of the terephthalic acid listed above. A typical PET based polymer for the beverage container industry has about 97 mole percent terephthalate and 3 mole percent isophthalate—thus it is the copolymer polyethylene terephthalate/isophthalate. In the polymer preparation, it is often preferred to use a functional acid derivative thereof, such as the dimethyl, diethyl or dipropyl ester of a dicarboxylic acid. The anhydrides or acid halides of these acids may also be employed where practical. These acid modifiers generally retard the crystallization rate compared to terephthalic acid. Conventional production of polyethylene terephthalate is well known in the art and comprises reacting terephthalic acid (TA) (or dimethyl terephthalate—DMT) with ethylene glycol (EG) at a temperature of approximately 200 to 250° C., forming monomer and water (monomer and methanol, when using DMT). Because the reaction is reversible, the water (or methanol) is continuously removed, thereby driving the reaction to the production of monomer. The monomer comprises primarily BHET (bishydroxyethylene terephthalate), some MHET (monohydroxyethylene terephthalate), and other oligomeric products and small amounts of unreacted raw materials. Subsequently, the BHET and MHET undergo a polycondensation reaction to form the polymer. During the reaction of the TA and EG it is not necessary to have a catalyst present. During the reaction of DMT and EG, an ester interchange catalyst is required. Suitable ester interchange catalysts include compounds containing cobalt (Co), zinc (Zn), manganese (Mn), and magnesium (Mg), to name a few. Generally, during the polycondensation reaction the preferred catalyst is antimony in the form of an antimony salt or compound. Often bottle grade PET resin, during manufacture, is heated under an inert atmosphere to promote further polymerization in the resin or processed as an SSP resin. Typically bottle grade PET resin has an intrinsic viscosity (IV) of about 0.70 to about 0.85 dL/g. Injection blow molding processes are used to produce polyester bottles. Two manufacturing techniques are typically used.
In one method, a preform is made by injection molding techniques in a preform shape having the neck and screw-cap portion of the bottle in approximately useful size but having the body of the preform in a closed tubular form substantially smaller than the final bottle shape. A single component or multi-layered preform can be used. The preform is then inserted into a blow-molding machine where it is heated enough to allow the preform to be inflated and blown into the appropriate shape. Alternatively, the resin can be injection blow molded over a steel-core rod. The neck of the bottle is formed with the proper shape to receive closures (caps), and resin is provided around the temperature-conditioned rod for the blowing step. The rod with the resin is indexed into the mold and the resin is blown away from the rod against the mold walls. The resin cools while in contact with the mold, forming the transparent bottle. The finished bottle is ejected and the rod is moved again to the injection molding station. This process is favored for single cylindrical bottles. The most common machine involves a four station apparatus that can inject resin, blow the resin into the appropriate shape, strip the formed container from the rod and recondition the core rod prior to the repeat of the process. Such containers are typically manufactured with the closure fitment portion comprising a threaded neck adapted to a metal screw cap. The bottle bottom typically has a lobed design such as a four-lobe or five-lobe design to permit the bottle to be placed in a stable upright position. The manufacturing equipment has been continually upgraded to add blowing stations and increase throughput. The thermoplastic materials of the invention contain a cyclodextrin compound that can comprise a cyclodextrin having one substituent group, preferably on a primary carbon atom. Such cyclodextrin materials have been shown to be compatible with thermoplastic polyester materials and to provide scavenging and barrier properties. The cyclodextrin material can be added to the thermoplastic and, during melt processing, provide scavenging properties and barrier properties in the preform and in the final beverage container. The cyclodextrin materials, under good manufacturing conditions of time and temperature, are compatible, do not burn, and do not result in the formation of haze or reduced structural properties or clarity in the appearance of the polymer in the final container. Cyclodextrin (CD) is a cyclic oligosaccharide consisting of at least five, preferably six, glucopyranose units joined by an α(1→4) linkage. Although cyclodextrins with up to twelve glucose residues are known, the three most common homologs (α-cyclodextrin, β-cyclodextrin and γ-cyclodextrin), having 6, 7 and 8 residues respectively, are useful in the invention. Cyclodextrins are produced by a highly selective enzymatic synthesis from starch or starch-like materials. They commonly consist of six, seven, or eight glucose monomers arranged in a donut-shaped ring, denoted α, β and γ cyclodextrin respectively. The preferred preparatory scheme for producing a derivatized cyclodextrin material having a functional group compatible with the thermoplastic polymer involves reactions at the primary hydroxyls with a minimum of the secondary hydroxyls of the cyclodextrin molecule being substituted. Coordination compounds or metal complexes in which the modified cyclodextrin acts as a ligand require the secondary hydroxyl groups to be free of a derivative.
A sufficient number of primary hydroxyls need to be modified to possess compatibility with the polymer and thermal stability in the process. Generally, we have found that a broad range of pendant substituent moieties can be used on the molecule. These derivatized cyclodextrin molecules can include acylated cyclodextrin, alkylated cyclodextrin, cyclodextrin esters such as tosylates, mesylates and other related sulfo derivatives, hydrocarbyl-amino cyclodextrin, alkyl phosphono and alkyl phosphato cyclodextrin, imidazolyl substituted cyclodextrin, pyridine substituted cyclodextrin, hydrocarbyl sulfur containing functional group cyclodextrin, silicon-containing functional group substituted cyclodextrin, carbonate and carbonate substituted cyclodextrin, carboxylic acid and related substituted cyclodextrin and others. The substituent moiety must include a region that provides compatibility to the derivatized material. Acyl groups that can be used as compatibilizing functional groups include acetyl, propionyl, butyryl, trifluoroacetyl, benzoyl, acryloyl and other well-known groups. The formation of such groups on either the primary or secondary ring hydroxyls of the cyclodextrin molecule involves well-known reactions. The acylation reaction can be conducted using the appropriate acid anhydride, acid chloride, and well-known synthetic protocols. Peracylated cyclodextrin can be made. Further, cyclodextrin having less than all of the available hydroxyls substituted with such groups can be made with one or more of the balance of the available hydroxyls substituted with other functional groups. Cyclodextrin materials can also be reacted with alkylating agents to produce an alkylated cyclodextrin, a cyclodextrin ether. Alkylating groups can be used to produce peralkylated cyclodextrin using sufficient reaction conditions to exhaustively react the available hydroxyl groups with the alkylating agent. Further, depending on the alkylating agent and the reaction conditions, the reaction can produce cyclodextrin substituted at less than all of the available hydroxyls. Typical examples of alkyl groups useful in forming the alkylated cyclodextrin include methyl, propyl, benzyl, isopropyl, tertiary butyl, allyl, trityl, alkyl-benzyl and other common alkyl groups. Such alkyl groups can be made using conventional preparatory methods, such as reacting the hydroxyl group under appropriate conditions with an alkyl halide, or with an alkylating alkyl sulfate reactant. The preferred cyclodextrin is a simple lower alkyl ether, such as methyl, ethyl, n-propyl, t-butyl, etc., and is not peralkylated but has a degree of substitution of about 0.3 to 1.8. Tosyl (4-methylbenzenesulfonyl), mesyl (methanesulfonyl) or other related alkyl or aryl sulfonyl forming reagents can be used in manufacturing compatibilized cyclodextrin molecules for use in thermoplastic resins. The primary —OH groups of the cyclodextrin molecules are more readily reacted than the secondary groups. However, the molecule can be substituted on virtually any position to form useful compositions. Such sulfonyl containing functional groups can be used to derivatize either of the secondary hydroxyl groups or the primary hydroxyl group of any of the glucose moieties in the cyclodextrin molecule. The reactions can be conducted using a sulfonyl chloride reactant that can effectively react with either primary or secondary hydroxyls.
The sulfonyl chloride is used at appropriate mole ratios depending on the number of target hydroxyl groups in the molecule requiring substitution. Either symmetrical (persubstituted compounds with a single sulfonyl moiety) or unsymmetrical (the primary and secondary hydroxyls substituted with a mixture of groups including sulfonyl derivatives) compounds can be prepared using known reaction conditions. Sulfonyl groups can be combined with acyl or alkyl groups generically as selected by the experimenter. Lastly, monosubstituted cyclodextrin can be made wherein a single glucose moiety in the ring contains between one and three sulfonyl substituents, with the balance of the cyclodextrin molecule remaining unreacted. Amino and other azido derivatives of cyclodextrin having pendent thermoplastic polymer containing moieties can be used in the sheet, film or container of the invention. The sulfonyl derivatized cyclodextrin molecule can be used to generate the amino derivative from the sulfonyl group substituted cyclodextrin molecule via nucleophilic displacement of the sulfonate group by an azide (N3−) ion. The azido derivatives are subsequently converted into substituted amino compounds by reduction. Large numbers of these azido or amino cyclodextrin derivatives have been manufactured. Such derivatives can be manufactured with symmetrically substituted amine groups (those derivatives with two or more amino or azido groups symmetrically disposed on the cyclodextrin skeleton) or as an unsymmetrically substituted amine or azide derivatized cyclodextrin molecule. Due to the nucleophilic displacement reaction that produces the nitrogen containing groups, the primary hydroxyl group at the 6-carbon atom is the most likely site for introduction of a nitrogen-containing group. Examples of nitrogen containing groups that can be useful in the invention include acetylamino groups (—NHAc), alkylamino including methylamino, ethylamino, butylamino, isobutylamino, isopropylamino, hexylamino, and other alkylamino substituents. The amino or alkylamino substituents can be further reacted with other compounds that react with the nitrogen atom to further derivatize the amine group. Other possible nitrogen containing substituents include dialkylamino such as dimethylamino, diethylamino, piperidino, piperazino, and quaternary substituted alkyl or aryl ammonium chloride substituents. Halogen derivatives of cyclodextrins can be manufactured as a feed stock for the manufacture of a cyclodextrin molecule substituted with a compatibilizing derivative. In such compounds, the primary or secondary hydroxyl groups are substituted with a halogen group such as fluoro, chloro, bromo, iodo or other substituents. The most likely position for halogen substitution is the primary hydroxyl at the 6-position. Hydrocarbyl substituted phosphono or hydrocarbyl substituted phosphato groups can be used to introduce compatible derivatives onto the cyclodextrin. At the primary hydroxyl, the cyclodextrin molecule can be substituted with alkyl phosphato or aryl phosphato groups. The 2- and 3-position secondary hydroxyls can be branched using an alkyl phosphato group. The cyclodextrin molecule can be substituted with heterocyclic nuclei including pendent imidazole groups, histidine, pyridino and substituted pyridino groups. Cyclodextrin derivatives can be modified with sulfur containing functional groups to introduce compatibilizing substituents onto the cyclodextrin.
Apart from the sulfonyl acylating groups discussed above, sulfur containing groups manufactured based on sulfhydryl chemistry can be used to derivatize cyclodextrin. Such sulfur containing groups include methylthio (—SMe), propylthio (—SPr), t-butylthio (—S—C(CH3)3), hydroxyethylthio (—S—CH2CH2OH), imidazolylmethylthio, phenylthio, substituted phenylthio, aminoalkylthio and others. Based on the ether or thioether chemistry set forth above, cyclodextrin having substituents ending with a hydroxyl, aldehyde, ketone or carboxylic acid functionality can be prepared. Such groups include hydroxyethyl, 3-hydroxypropyl, methyloxylethyl and corresponding oxime isomers, formyl methyl and its oxime isomers, carbylmethoxy (—O—CH2—CO2H) and carbylmethoxymethyl ester (—O—CH2CO2—CH3). Cyclodextrin derivatives with compatibilizing functional groups containing silicone can be prepared. Silicone groups generally refer to groups with a single substituted silicon atom or a repeating silicon-oxygen backbone with substituent groups. Typically, a significant proportion of silicon atoms in the silicone substituent bear hydrocarbyl (alkyl or aryl) substituents. Silicone substituted materials generally have increased thermal and oxidative stability and chemical inertness. Further, the silicone groups increase resistance to weathering, add dielectric strength and improve surface tension. The molecular structure of the silicone group can be varied because the silicone group can have a single silicon atom or two to twenty silicon atoms in the silicone moiety, can be linear or branched, can have a large number of repeating silicon-oxygen groups, and can be further substituted with a variety of functional groups. For the purposes of this invention, the simple silicone containing substituent moieties are preferred, including trimethylsilyl, mixed methyl-phenyl silyl groups, etc. We are aware that certain β-CD and acetylated and hydroxy alkyl derivatives are available commercially. Preferably, the cyclodextrin compound utilized in the technology of the invention involves a substituted β- or α-cyclodextrin. Preferred cyclodextrin materials are substituted substantially on the 6-OH of the glucose moiety in the cyclodextrin ring. The free hydroxyl groups at the 3- and 2-position of the glucose moieties in the cyclodextrin ring are important for metallic catalyst complex formation. The degree of substitution (D.S.) of the cyclodextrin material can range from about 0.3 to 1.8; preferably the degree of substitution can range from about 0.5 to 1.2. Further, the degree of substitution has an important role in ensuring that the cyclodextrin is compatible with the polymer melt, but is not so substituted that the cyclodextrin cannot participate in complexing catalyst residues. We have further found that the amount of substituted cyclodextrin material useful in preventing the formation of aldehyde by complexing metallic catalyst residues is less than the amount of cyclodextrin typically used in barrier structures for volatile organic compounds. The effective amount of a substituted cyclodextrin for aldehyde suppression ranges from about 100 ppm to 1400 ppm based on the polymer composition as a whole, preferably 350 ppm to 900 ppm. We believe the mechanistic action of the substituted cyclodextrin material is that one or more of the secondary hydroxyl groups forms a coordination complex with the catalyst residues, producing a metallocyclodextrin in which more than one metal ion is bound per cyclodextrin.
While the amounts of cyclodextrin useful in preventing formation of organic residuals during preform and bottle manufacture are less than those used in barrier applications, even at these reduced amounts the cyclodextrin materials can provide a degree of barrier properties. At the concentrations disclosed in this application, regenerated acetaldehyde formation is substantially reduced in the polyester and some degree of barrier property is achieved. To achieve these results, a substantial and effective fraction of the cyclodextrin must be available for catalyst residue complexation to accomplish the goal of the invention. The compatible cyclodextrin compounds are introduced into the melt thermoplastic substantially free of an inclusion complex or inclusion compound. For this invention the term "substantially free of an inclusion complex" means that the quantity of dispersed cyclodextrin material in the coating on the polyester chip or pellet is free of a complex material or "guest compound" in the central pore of the cyclodextrin molecule. Materials other than the catalyst residue can occupy the central pore or opening of the cyclodextrin molecule; however, sufficient unoccupied cyclodextrin must be available to remove the catalyst from its aldehyde-generating role. The raw material used in any of the thermoforming procedures is a chip-form or pelletized thermoplastic polyester. The thermoplastic polyester is made in the form of a melt and is converted to bulk polymer. The melt can be easily reduced to a useful pellet or other small diameter chip, flake or particulate. The pellet, chip, flake or particulate polyester can then be blended with the derivatized cyclodextrin material until uniform, dried to remove moisture, and then melt extruded under conditions that obtain a uniform dispersion or solution of the modified or derivatized cyclodextrin and polyester material. The resulting polyester pellet is typically substantially clear, uniform and of conventional dimensions. The pellet preferably contains about 0.01 to about 0.14 wt-% of the cyclodextrin compound, more preferably about 0.035 to about 0.09 wt-% of the cyclodextrin compound. The polyester pellet containing the modified cyclodextrin material can then be incorporated into the conventional preform or parison with injection molding techniques. The products of these techniques contain similar proportions of materials. The cyclodextrin compound can be incorporated onto the chip or pellet by coating the chip or pellet or similar structure with a liquid coating composition containing an effective amount of the cyclodextrin or substituted cyclodextrin. Such coating compositions are typically formed using a liquid medium. Liquid media can include aqueous media or organic solvent media. Aqueous media are typically formed by combining water with additives or other components to form coatable aqueous dispersions or solutions. Solvent based dispersions are based on organic solvents and can be made using known corresponding solvent based coating technology. The liquid coating compositions of the invention can be contacted with the polyester pellet, chip or flake using any common coating technology including flood coating, spray coating, fluidized bed coating, electrostatic coating or any other coating process that can load the pellet, chip or flake with sufficient cyclodextrin to act as a scavenger or barrier material in the final polyester bottle.
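The loading figures above are straightforward to reconcile. The short Python sketch below assumes the ppm values are expressed by weight of the total polymer composition (the reading used throughout this discussion) and simply converts between ppm, weight percent and grams of cyclodextrin per kilogram of pellet; it is an illustrative aid, not part of the examples.

```python
# Illustrative unit check, assuming weight-based ppm of the total polymer composition.
def ppm_to_wt_percent(ppm: float) -> float:
    """Convert a weight-based ppm loading to weight percent."""
    return ppm / 10_000.0

def grams_cd_per_kg_pellet(ppm: float) -> float:
    """Grams of cyclodextrin carried on 1 kg of coated pellet at a given ppm loading."""
    return ppm * 1e-6 * 1000.0

for loading_ppm in (100, 350, 900, 1400):
    print(f"{loading_ppm:>5} ppm = {ppm_to_wt_percent(loading_ppm):.3f} wt-% "
          f"= {grams_cd_per_kg_pellet(loading_ppm):.2f} g CD per kg pellet")
# 100 ppm corresponds to 0.010 wt-% and 1,400 ppm to 0.140 wt-%, matching the
# 0.01 to 0.14 wt-% pellet loading range given above.
```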
Careful control of the amount and thickness of the ultimate coating optimizes the scavenger and barrier properties without waste of material, maintains clarity and color in the thermoplastic bottle and optimizes polyester physical properties. The aqueous coating solutions can contain from about 1.0 to about 50 wt.-% of the cyclodextrin, preferably about 3.0 to 40 wt.-% of the cyclodextrin in the liquid material. The coatings are commonly applied to the pellet, chip or flake and the liquid carrier portion of the solution or dispersion is removed, typically by heating, leaving a dry coating on the polyester. When dry, substantially no solution or liquid medium is left on the pellet. Commonly, the coated polyester is dried in a desiccant-dryer to remove trace amounts of residual water before injection molding. Typically, the PET chips are dried to 50 ppm or less moisture. Sufficient cyclodextrin is added to the polyester chip, pellet or flake such that the final finished preform or parison and the ultimately blow molded polyester beverage container contain less than about 1,400 ppm of the cyclodextrin compound based on the total weight of the polyester. Amounts of the cyclodextrin compound in the polyester greater than this may adversely affect regenerated acetaldehyde reduction and clarity and may cause yellowing. Preferably, the amount of material in the polyester material ranges from about 350 ppm to 900 ppm of cyclodextrin compound in the polyester material. Care must be taken during the manufacture of the preform or parison and the final manufacture of the container. During the manufacture of the preform and later during the manufacture of the container, sufficient heat history must be achieved, in terms of maintaining the melt polymer at a set temperature for a sufficient amount of time, to obtain adequate scavenging and to thoroughly disperse the cyclodextrin material in the polymer matrix. However, the time and temperature of the steps should not be so great that the cyclodextrin material thermally decomposes (i.e., the cyclodextrin ring opens), resulting in a loss of scavenging capacity and barrier properties accompanied by polymer yellowing. Polymer haze can result during stretch blow molding unless a cyclodextrin derivative with a melting point below the preform reheat temperature is selected. Cyclodextrins with melting points greater than the preform reheat temperature will produce microvoids in the biaxially oriented bottle wall, giving a hazy appearance to the polymer. Accordingly, depending on the equipment involved, the thermoplastic polyester is maintained in a melt form at a temperature greater than about 260° C., preferably about 270° C. to 290° C., for a total residence time greater than about 90 seconds, preferably about 120±30 seconds, to ensure adequate metal residue complexation during injection molding while ensuring that the cyclodextrin material prevents acetaldehyde generation. The total residence time is determined from the cycle time of the injection molding machine. We have also found that the purity of the cyclodextrin material is important in achieving the goals of the invention. As discussed above, the cyclodextrin material is applied to the polyester pellet or chip in the form of an aqueous solution. Such solutions are made by dissolving or suspending the cyclodextrin material in an aqueous medium. The aqueous solution is prepared from cyclodextrin materials where the trace impurities have been removed.
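As a rough worked example of the coating step (a sketch only; the 5 wt.-% solution strength and the 900 ppm target are taken from the ranges above rather than from a specific example of the invention), the amount of aqueous cyclodextrin solution needed per kilogram of chip for a given dry loading, and the water that must then be dried off, can be estimated as follows.

```python
# Hypothetical coating calculation: grams of aqueous CD solution to apply per kg of
# chip for a target dry loading, and grams of carrier water to remove on drying.
def coating_solution_needed(target_ppm: float, solution_wt_pct: float,
                            chip_mass_kg: float = 1.0):
    """Return (grams of solution to apply, grams of water to dry off) per chip_mass_kg."""
    cd_needed_g = target_ppm * 1e-6 * chip_mass_kg * 1000.0   # dry CD required
    solution_g = cd_needed_g / (solution_wt_pct / 100.0)      # solution carrying that CD
    water_g = solution_g - cd_needed_g                        # carrier water to remove
    return solution_g, water_g

solution_g, water_g = coating_solution_needed(target_ppm=900, solution_wt_pct=5.0)
print(f"Apply about {solution_g:.1f} g of 5 wt.-% solution per kg of chip "
      f"and dry off about {water_g:.1f} g of water.")
# For a 900 ppm loading this is roughly 18 g of solution and 17 g of water per kg of chip.
```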
These impurities can arise from the enzymatic manufacture of the cyclodextrin material producing linear starches, saccharide and polysaccharide precursor materials or from the synthetic reaction between the cyclodextrin material and reactants used to form the derivatives. Materials that are present as impurities in the substituted cyclodextrin material that cause off-yellow color in injection molded PET include iron, sodium chloride, acetic acid, iron acetate, sodium acetate, furfurals, linear starches and sugars, dehydrated linear starches, levoglucosan, levoglucosenone and proteins. We have found that these cyclodextrin impurities can be effectively removed using purification techniques including contacting the aqueous cyclodextrin solution with activated charcoal or activated carbon absorbent, contacting the aqueous cyclodextrin solution with an ion exchange resin or contacting the aqueous solution with nanofiltration or reverse osmosis equipment. We found that using these techniques reduced the concentration of impurities in the aqueous cyclodextrin solutions to levels that do not contribute to color generation in the polyester material, form undesirable organic materials or reduce antimony. In such purification processes, the aqueous cyclodextrin solution is prepared at a concentration of about 3 to 50 wt. percent of the cyclodextrin in the aqueous solution. Such an aqueous solution can be contacted with the carbon absorbent or resin absorbent at a rate of about 10 to 350 liters of solution per kilogram of absorbent. The residence time of the solution in contact with the absorbent can be adjusted to obtain substantial impurities removal. The solution, however, is generally maintained in contact with the absorbent for a time period of about 0.5 to 24 hours. In nanofiltration or reverse osmosis processing, the aqueous cyclodextrin material is directed into the appropriate purification equipment and is maintained, at an appropriate pressure, for an appropriate period of time to ensure that a substantial proportion of the impurity in the cyclodextrin material passes through the filter or reverse osmosis membrane while the cyclodextrin material is retained in the reject aqueous solution. In this regard, about 700 to 1,200 liters of solution are passed through the equipment per square meter of filter or membrane at a rate of about 125 to 2,000 liters of solution per hour. The effluent passing through the filter or membrane comprises about 60 to 98% of the input stream. Typically, the nanofiltration or reverse osmosis equipment is operated at an internal pressure of about 125 to 600 psi. Decolorizing resins like Dowex SD-2 (a tertiary amine functionalized macroporous styrene divinylbenzene copolymer) are used to remove PET yellow-color causing materials from aqueous cyclodextrin solutions. Other resins like Dowex Monosphere 77 (a weak base anion resin), Dowex MAC-3 (a weak cation resin), and Dowex 88 (a strong acid cation resin) can also be used in combination with (placed in front of) Dowex SD-2. These resins can be operated at a flow of 2 to 25 liters per minute per square foot of resin. Outlined below is a method for evaluating dried cyclodextrin for thermal stability based upon the potential of generating off-color. This method mimics the processing of injection molding cyclodextrin coated PET chip. Approximately 2 mL of a 25 wt.-% cyclodextrin solution is placed into a 20 mL headspace vial (or equivalent).
Evaporate water from the solution by heating the vial using a laboratory hot plate (or equivalent) at a moderate temperature. The vial is periodically agitated during heating, and the interior of the vial is swabbed with a lint free wipe to remove condensate. When the residue becomes viscous and begins to bubble, the vial should be removed from the heat and gently rolled to coat the interior walls of the vial evenly. Place the coated vial into an oven at 60° C. for approximately 10 minutes to completely solidify the cyclodextrin residue by removing all remaining water. The clear CD residue may bubble and haze slightly when evaporation is complete. Remove the vial when dry and heat the oven to 280° C. Place the vial into the 280° C. oven for exactly 2 minutes (if the oven temperature drops when placing the vial into the oven, begin timing only when the oven temperature is >270° C.). Remove the vial and allow it to cool to room temperature. The cyclodextrin residue should remain colorless to just slightly off yellow. The foregoing discussion illustrates various embodiments of the application and the acetaldehyde reduction and the barrier and complexing properties of the materials of the invention. The following examples and data further exemplify the invention and contain a best mode. Intrinsic viscosity (IV) is determined by mixing 0.2 grams of typically amorphous polymer composition with 20 milliliters of dichloroacetic acid at a temperature of 25° C. using a Ubbelohde viscometer to determine the relative viscosity (RV). RV is converted to IV using the equation: IV=[(RV−1)×0.691]+0.063. The color of the polymer chips was determined by ASTM D 6290–98 using a Minolta Chroma-Meter CR-310 spectrophotometer, and reported as one or more of the CIE L*, a* and b* standard units. The haze of the preforms was also measured using this instrument. Acetaldehyde is a good model for the undesirable organic compound inhibiting properties of the invention. Table 1 contains analytical test results (Examples 1–21) showing acetaldehyde (AA) reductions in amorphous polycondensate polyethylene terephthalate. Various cyclodextrin compounds (unmodified and modified), manufactured by Wacker Biochem Corporation, were added to the molten polycondensate polyethylene terephthalate in the last two minutes before the molten PET was extruded from the batch reactor, quenched in cold water and chipped into pellets (also called chips). This test was done to evaluate various cyclodextrins for removing acetaldehyde. The acetaldehyde concentration equilibrium in this particular batch process prior to extruding the molten resin is around 60 ppm. The cyclodextrin compound is added during the last two minutes of the process where it is dispersed with the reactor mixer. After two minutes, the polyethylene terephthalate is extruded from the mixer. The stream of molten resin exiting the batch reactor into the quenching water is called a noodle. A number of minutes are required to drain the molten resin from the reactor. The noodle samples were cryogenically cooled, ground to 10 mesh or finer and placed into a glass sample jar, which is immediately sealed. A 0.25±0.002 g sample of granulated PET is placed into a 22-mL glass vial. The vial is immediately capped using a Teflon® faced butyl rubber septum and aluminum crimp top. Acetaldehyde is desorbed from the sample into the headspace by heating the vial at 160° C. for 90 minutes and then analyzed for acetaldehyde by static headspace gas chromatography using flame ionization detection.
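The relative-viscosity-to-intrinsic-viscosity conversion quoted above is a simple linear relation; a minimal sketch follows, with an illustrative RV input rather than a measured result.

```python
# Sketch of the RV-to-IV conversion quoted above: IV = [(RV - 1) x 0.691] + 0.063
def intrinsic_viscosity(relative_viscosity: float) -> float:
    """Convert relative viscosity (RV) to intrinsic viscosity (IV, dL/g)."""
    return (relative_viscosity - 1.0) * 0.691 + 0.063

# An RV of about 2.11 would correspond to an IV of about 0.83 dL/g, the bottle-grade
# value cited later for the KoSa 1101 resin (illustrative input, not measured data).
print(f"IV = {intrinsic_viscosity(2.11):.2f} dL/g")
```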
The materials with the 0.05 wt.-% and 0.10 wt.-% unmodified β-cyclodextrin compound were clear for the entire noodle extrudate. These data show that cyclodextrin material having a limited degree of substitution can contribute to reduced acetaldehyde formation and, in some examples, reduced color formation in the polyester, while maintaining useful polyester mechanical properties. The data in Table 2 suggest that low loading amounts of the cyclodextrin material, having a limited degree of substitution, can provide excellent acetaldehyde reduction. These data suggest that further experimentation with optimized substituted cyclodextrin materials at low concentrations can provide excellent results. Polyethylene terephthalate based polyester prepared by conventional continuous process polycondensation procedures, well known in the art, can be used in combination with a process including the late-addition of a substituted cyclodextrin. The cyclodextrin derivative material can be added in a late stage of polyester manufacture. For example, the cyclodextrin dispersed in a liquid carrier can be injected into the molten polyester after initial polymerization but before it exits from the polycondensation reactor, prior to formation into a pelletized or other shaped form. The invention is further illustrated using a continuous pilot line (40 kg/hour polyester output) process to manufacture a commercial grade copolymer packaging resin (KoSa 1102) with a nominal intrinsic viscosity of 0.84 dl/g, a diethylene glycol content of <2.0%, a density of 1.39 g/cc and a melting point of 244° C. The cyclodextrin powder is delivered into the polyester flow using a material comprising a pumpable slurry containing a 50/50 by weight mixture of triacetyl β-cyclodextrin and an oil carrier (Emery 3004). The triacetyl beta cyclodextrin (W7TA from Wacker Biochem Corporation) had a differential scanning calorimetry (DSC) melting point of 191° C., 1200 ppm of residual acetic acid by sodium hydroxide titration with phenolphthalein indicator, 400 ppm acetate by ion chromatography, and when analyzed by Matrix Assisted Laser Desorption Time of Flight Mass Spectrometry (MALDI-TOF/MS) was found to contain 96% peracetylated beta cyclodextrin with the remaining 4% comprising a cyclodextrin moiety having one free hydroxyl group. Before mixing the triacetyl β-cyclodextrin into the carrier, it was dried in a vacuum oven at 105° C. under 1 mm Hg for sixteen (16) hours. The acetyl cyclodextrin derivative was dispersed into the carrier oil using low shear mixing. The mixture had a density of 1.05 grams/cc. During the operation of the continuous process pilot line, the slurry was pumped into the molten polyester using a microprocessor controlled syringe pump (ISCO, 500D Syringe Pump) to precisely meter the viscous slurry. The cyclodextrin/oil carrier slurry was introduced into the polyester melt before an inline baffled mixing chamber used to thoroughly mix the slurry into the polyester just prior to exiting the reactor, quenching in water and chipping the polyester noodle. The residence time of the cyclodextrin in the 285° C. polyester flow before exiting the reactor was 1 to 2 minutes. Two cyclodextrin loadings (0.20% and 0.25% by weight) were produced by the late-addition process described above. The pump was programmed to meter 152 mL and 190 mL per hour for the 0.20% and 0.25% cyclodextrin loadings based on the polyester resin, respectively.
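The pump settings quoted above can be checked against the line throughput and slurry makeup; the short sketch below reproduces the arithmetic under the stated conditions (40 kg/hour polyester output, 50/50 by weight cyclodextrin/oil slurry, slurry density 1.05 g/cc) and is only a consistency check, not part of the example procedure.

```python
# Back-of-the-envelope check of the syringe-pump metering rates for the late-addition run.
def slurry_rate_ml_per_hr(polyester_kg_per_hr: float, cd_loading_wt_pct: float,
                          cd_fraction_in_slurry: float = 0.5,
                          slurry_density_g_per_cc: float = 1.05) -> float:
    cd_g_per_hr = polyester_kg_per_hr * 1000.0 * cd_loading_wt_pct / 100.0
    slurry_g_per_hr = cd_g_per_hr / cd_fraction_in_slurry
    return slurry_g_per_hr / slurry_density_g_per_cc

for loading in (0.20, 0.25):
    print(f"{loading:.2f}% CD loading -> {slurry_rate_ml_per_hr(40, loading):.0f} mL/h of slurry")
# Gives roughly 152 mL/h and 190 mL/h, consistent with the programmed pump rates above.
```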
The following amorphous polyester test results were obtained from the control polyester and two polyester samples containing cyclodextrin. This method is not desirable if the polyester/acetyl cyclodextrin derivative mixture will be subjected to solid state polymerization since undesirable color may develop during extended time at elevated temperature. The off-color produced in SSP processed polyester chip containing triacetyl β-cyclodextrin is caused by the degradation of the cyclodextrin molecule. Initially, the cyclodextrin structure ring-opens through heterolytic scission of the glucosidic linkage, analogous to acid hydrolysis, resulting in the formation of a polysaccharide with a unit of levoglucosan at one end. The presence of the polysaccharide leads to non-specific competing dehydration and deacetylation reactions that form highly colored materials. The elimination of water or acetic acid from the linear reduced oligosaccharide, with the formation of double bonds in one of the glucoside units followed by elimination of a molecule of hydroxyacetaldehyde, leads to the formation of a linear structure with conjugated double bonds. These colored compounds provide the off-yellow color (b*) in the SSP processed chip. The reactions of thermal degradation of cellulose triacetate, similar to that of cyclodextrin, are well known in the art. The presence of small amounts of acetic acid accelerates the degradation process described above. Based on the experimental data shown above, a second late-addition polyester batch was produced with acetyl beta cyclodextrin with low (60 ppm) residual acetic acid and the same degree of acetyl substitution. Two late-addition samples were produced—a control polyester and a polyester containing 0.25 wt.-% of acetyl beta cyclodextrin. The chip was exposed to SSP treatment in an identical manner as the earlier example and the materials were checked for IV, color and acetaldehyde content; the data are given below: Table 7 contains acetaldehyde (AA) reductions obtained on aqueous acetyl β-cyclodextrin coated commercial KoSa amorphous polycondensate PET pellets. Three acetyl β-cyclodextrin coating weights—0.10%, 0.15% and 0.20%—were used. The 0.20% acetyl β-cyclodextrin coating reduced AA by 52%; the other coating weights were less effective for reducing AA. Amorphous chip is coated with the aqueous cyclodextrin solutions to provide the cyclodextrin loading weight percent. An aqueous cyclodextrin coating solution (5 wt.-%) is prepared. An aliquot by weight of the coating solution is deposited into the center mass of resin chip contained in a glass jar (already having a tare weight). The amount of coating solution that is transferred is adjusted for the coating loss on the inside surface of the glass jar. The capped jar is rotated at approximately 30 rpm for 15 minutes on a jar roller mill to evenly distribute the cyclodextrin coating on the PET chips. Following coating, the chip is dried in a vacuum oven at 105° C. under 1 mm Hg for sixteen (16) hours. The dried chip is then molded and tested for acetaldehyde concentration in an Atlas mixer/molder at 270° C. for two minutes. The mixing is low shear and the melt is then injected from the mixing chamber after two minutes. A polyethylene terephthalate sample composite (three individual Atlas runs) was made from each sample, and then each sample was analyzed in triplicate. The resin mixing time in the molten state is important for optimum AA reduction.
This suggests that some minimum amount of mixing time will be required in the injection molder melt phase when the preform is molded. The mixing/molding cycle time for commercial injection molding machines is typically from 2 to 3 minutes depending on the number of preform cavities and the injection cycle time. Commercial PET bottle grade resin is SSP processed before it is used to injection mold preforms. The SSP process decreases AA and carboxyl end groups, and achieves the desired IV, thus improving the physical properties of the finished blown bottle. The PET pellets in Table 7 were dry-coated with acetyl β-cyclodextrin in a glass jar by tumbling to adhere the cyclodextrin powder to the pellets and then vacuum oven drying (105° C. @ 1 mm Hg pressure for 14 hours to eliminate residual PET moisture). Vacuum drying also lowers the pellet acetaldehyde concentration down to approximately 1 ppm. During vacuum drying, the high AA concentration in the non-SSP PET pellet diffuses out of the pellet and through the exterior CD coating. The dried, coated chip samples and control sample were run under identical drying conditions and then molded on an Atlas Molder Mixer for two (2) minutes at 270° C. The molded samples were collected cryogenically, cryogenically ground to 10 mesh or smaller and then analyzed by static headspace gas chromatography with flame ionization detection using a sample conditioning temperature of 150° C. for 90 minutes. This coating method demonstrates that commercial application of the technology is achievable when large concentrations of AA are in the chip prior to CD coating and drying. An acetaldehyde concentration of about 4.1 ppm, a reduction of more than 50%, was achieved using an acetyl substituted beta-cyclodextrin (DS=1.1). In Table 8, PET pellets were coated with an aqueous solution of acetyl β-cyclodextrin and hydroxypropyl β-cyclodextrin. Initially, PET chips were coated with an aqueous CD solution and then vacuum dried for 14 hours at 120° C. and 2 mm Hg pressure. Following drying, the PET chips were extruded in a Killion single screw moderate shear extruder (PET melt temp. 282° C.). The PET residence time in the Killion extruder was approximately 30 seconds. After the extruder reached equilibrium running each sample, the extrudate was collected by cryogenically cooling with liquid nitrogen, grinding to 10 mesh, and then analyzing by static headspace gas chromatography for acetaldehyde (solid graphed results). The single screw extrudate above was processed a second time in a laboratory-scale Atlas Mixing Molder. The single screw extrudate samples were prepared for molding on the Atlas by grinding to 10 mesh after being cryogenically cooled with liquid nitrogen, vacuum oven drying as described above to remove moisture and residual acetaldehyde, and then molded on an Atlas Molder Mixer for two (2) minutes at 270° C. to regenerate acetaldehyde. The molded samples were also collected cryogenically, cryogenically ground to 10 mesh and then analyzed by static headspace gas chromatography for acetaldehyde (pattern graphed results). All test samples were analyzed in triplicate. These data show that dispersing CD at moderate shear and short residence time (about 30 sec. on the Killion extruder) is less effective in lowering acetaldehyde levels as compared to control, while dispersing CD at low shear and longer residence time (120 seconds on the Atlas Molder Mixer) does substantially reduce acetaldehyde levels as compared to control.
Both hydroxypropyl substituted β-cyclodextrin and acetyl substituted β-cyclodextrin can achieve reduced acetaldehyde levels when cyclodextrin coated chip is processed with longer residence times and low shear. In particular, achieving 55% acetaldehyde reduction after Atlas processing (i.e., low shear longer residence time) illustrates that commercial injection molding machines are ideally suited to process CD coated PET chip. Using similar sample preparation techniques to those discussed above, additional experiments were conducted to evaluate AA reduction when the Atlas Molder Mixer, molding temperature and time are held constant but mixing speed was varied. Tables 9 and 10 show experimental data for two mixing speeds of 40 and 140 rpm. The best acetaldehyde reduction compared to the control in Table 9 reduced the acetaldehyde concentration from about 33 ppm to about 13 ppm at 40 rpm. In Table 10, at 140 RPM, substantial acetaldehyde reduction was also achieved. Holding molding temperature (275° C.) and time (2 min.) constant, then changing the low shear mixing speed (40 rpm vs. 140 rpm) does not significantly affect AA reductions obtained from various CD coated PET chips. In the following examples, two cyclodextrins (unmodified α-cyclodextrin and acetyl β-cyclodextrin DS=1.1) were coated onto a commercial resin, Polyclear 1101, obtained from KoSa. An aqueous cyclodextrin coating solution (5 wt.-%) is prepared. An aliquot of the coating solution (measured by weight) is deposited into the center mass of 2.5 Kg of resin chip in a 1 gallon glass jar (already having a tare weight). The amount of coating solution that is transferred is adjusted for the coating loss on the inside surface of the glass jar. The capped jar is rotated at approximately 30 rpm for 15 minutes on ajar roller mill to evenly distribute the cyclodextrin coating on the PET chips. After the jar roller coating procedure, the jar cap is removed and the glass jar is placed into a vacuum oven operated at 130° C. at about 2 mm Hg for sixteen (16) hours 20 to remove water from the coating procedure. The coated PET chip is removed and the jar is weighed; the exact chip coating weight is determined after determining the CD coating weight remaining on the jar's inside surface. The previously dried, coated chip samples and control were dried in an Arburg inline dryer at 175° C. for at least 4 hours. Each coated resin variable, along with a control, was injection molded (48 gram preforms) on the Arburg single-cavity injection-molding machine. Injection molding was carried out at 275° C. for all samples. Preform IV, color, and AA were measured in triplicate and the average value reported. Samples for AA analysis were removed from the center section of the perform. Preform samples were cryogenically ground to 10 mesh or smaller and then analyzed by static headspace gas chromatography with flame ionization detection using a sample conditioning temperature of 160° C. for 90 minutes. The preform data are summarized in Table 11. The higher yellow b* values obtained from the acetyl derivative were caused from residual acetic acid, acetate and iron. The yellow color can be reduced by treating an aqueous solution of acetyl β cyclodextrin with activated charcoal to reduce the acetic acid and acetate concentration. The acetate and iron contaminants can effectively be removed by reverse osmosis or nanofiltration. Residual acetic acid is the principal contaminant responsible for producing high b* values. 
Unmodified α-cyclodetxrin causes haze in the injection molded polyester preform due to its incompatibility with the resin. Acetyl β (DS=1.1) reduced regenerated acetaldehyde more effectively than unmodified α-cyclodextrin. A concentration of 350 ppm of acetyl β-cyclodextrin reduced regenerated acetaldehyde 30.4%. Based on the experimental data shown above, attention was focused on defining the preferred cyclodextrin substituent, the preferred concentration of substituted cyclodextrin in the polyester, and the preferred degree of substitution. A methyl ether substituent was selected as a model for other simple ether and ester substituents. Methylated beta cyclodextrin (Me β) materials were used in amounts of about 250 ppm, 500 ppm and 600 ppm. Aqueous solutions (4.8 wt-%) of Me β was coated onto KoSa 1101 chip with an IV of 0.83 dL/g to provide the appropriate CD coating weight. The coated chip was vacuum dried 14 hours at 140° C. Dried samples were then molded on an Atlas Molder Mixer for two (2) minutes at 275° C., 280° C. The molded samples were collected cryogenically, cryogenically ground to 10 mesh or smaller and then analyzed by static headspace gas chromatography with flame ionization detection using a sample conditioning temperature of 150° C. for 90 minutes. These experiments produced the following results shown below in Tables 12 and 13 and These data demonstrate that the use of a substituted cyclodextrin material with the correct degree of substitution, substituted substantially at the -6-OH position, used at an appropriate concentration can achieve residual acetaldehyde levels substantially less than control uncoated chip. The most common stoichiometric ratio for cyclodextrin complexes is 1:1 or 2:1 (guest-acetaldehyde:host-cyclodextrin). Using this basis to calculate the theoretical acetaldehyde concentration (parts per million) reduction as a function of weight-% cyclodextrin loading (i.e., as a complex ratio of 1:1) in PET, a linear relationship can be established for both methylated cyclodextrin substitutions (DS=0.6 and DS=1.8). The theoretical relationships mathematically show that a given coating weight of Me β (DS=0.6) is more effective than the same coating weight of Me β (DS=1.8) for acetaldehyde removal due to the difference in molecular weights. Working with the 275° C. molding temperature experimental data in Tables 12 and 13, a second relationship between cyclodextrin loading and acetaldehyde can be calculated. On an experimental test basis, after weight normalizing one Me β cyclodextrin substitution molecular weight to the other Me β cyclodextrin substitution molecular weight (Me β (DS=1.8) has a greater molecular weight), experimentally Me β (DS=0.6) is >40% more effective than Me β (DS=1.8). In particular, achieving residual acetaldehyde levels between 2 and 3 ppm are a surprising result. In the following examples, regenerated acetaldehyde concentrations were experimentally studied in two different bottle grade PET resins (KoSa Polyclear 1101 and 3301). PET resin 1101 is a higher molecular weight (IV of 0.83 dL/g ) resin than the 3301 (IV of 0.75 dL/g) resin. By wavelength dispersive x-ray fluorescence, 1101 and 3301 show antimony concentrations of 317 ppm and 264 ppm, respectively. In this experiment, the two bottle grade resins were aqueous coated with similar weights of two Me β cyclodextrins (DS=0.6 and DS=1.8) and molded at three different temperatures. Following coating, the chip was vacuum dried at 120° C. 
under 1 mm Hg for 14 hours resulting in 500 to 600 ppm cyclodextrin in the polyester. The dried coated chip samples and control sample run under identical drying conditions were molded on an Atlas Molder Mixer for two (2) minutes at 270° C., 275° C. and 280° C. The molded samples were collected cryogenically, cryogenically ground to 10 mesh or smaller and then analyzed by static headspace gas chromatography with flame ionization detection using a sample conditioning temperature of 150° C. for 90 minutes. Table 14 shows residual acetaldehyde concentration (average of three replicates) as a function of resin type, molding temperature and degree of substitution. Bottle resin 1101 and 3301 produce different concentrations of acetaldehyde (1101 is greater than 3301) at a given temperature, but achieve almost identical levels of regenerated acetaldehyde when coated with Me β (DS=0.6) and molded. The percent (%) acetaldehyde reduction by Me β DS=0.6 is dependent on the initially acetaldehyde concentration generated by the resin at a specific temperature. This is illustrated when comparing 1101 and 3301 resins with and without a CD coating run at 280° C. Resin 1101 generates greater acetaldehyde than 3301 when injection molded at a given temperature, but both resins coated with Me β DS=0.6 are reduced to the same acetaldehyde concentration. Higher injection molding temperatures impact acetaldehyde generation more in uncoated CD chip than coated chip. The percent (%) AA reduction is greater for a given resin at higher injection molding temperatures than at lower injection temperatures. In the following examples, three cyclodextrin derivatives were coated onto KoSa 3301 PET chip as described previously. Acetyl β and Me β (DS=1.8) were treated with activated charcoal to remove color-causing impurities, and ME β (DS=0.6) was treated with Dowex SD-2 to remove color-causing impurities. Each coated sample and control pair was dried in an Arburg inline dryer at 175° C. for at least 4 hours. Each coated resin variable along with a control was injection molded (48 gram preforms) on the Arburg single-cavity injection-molding machine. Injection molding was carried out at 270° C. for all samples. Preform IV, color, haze, and AA were measured in triplicate and the average value reported. Samples for AA analysis were removed from the center of the preform. Preform samples were cryogenically ground to 10 mesh or smaller and then analyzed by static headspace gas chromatography with flame ionization detection using a sample conditioning temperature of 160° C. for 90 minutes. The preform data are summarized in Table 15. In the following examples, two different degrees of methylated substituted cyclodextrins were coated onto KoSa 3301 PET chip as described previously. Me β (DS=0.6) and Me β (DS=1.8) were treated with activated charcoal to remove color-causing impurities. The control 3301 and cyclodextrin-coated samples were dried in a vacuum dryer at 140° C. for at 6 hours before injection molding. Each coated resin variable, along with the control, was injection molded (50.5 gram preforms) on Nissei ASB 250 injection-molding machine. The injection molder barrel zone temperature (setting and actual) profiles are provided in Table 16. Preform IV, b*, and AA were measured in triplicate and the average value reported. Samples for AA analysis were removed from the ring of the preform. 
Preform samples were cryogenically ground to 10 mesh or smaller and then analyzed by sample gas chromatography with flame ionization detection using a sample conditioning temperature of 160° C. for 90 minutes. The preform data are summarized in Table 16: The above explanation of the nature of the cyclodextrin compounds, the thermoplastic polyester material, the pellet or chip, the parison or preform, the beverage container and methods of making the beverage container provide sufficient manufacturing details to provide a basis for understanding the technology involving incorporating the cyclodextrin material in a polyester thermoplastic for the purpose of organic compound scavenging and barrier purposes. However, since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.
0.8577
FineWeb
2.140625
View attachment 29358 View attachment 29358 View attachment 29358 - Face-up artist(s): - Date of acquisition: - 01. Jan. 2008 - Yahoo! Japan - Reason for choice: - Love with the first eyecontact. I was just enjoying looking the dolls for a long time and never thought about having one, she was the reason why I made my first step in this Hobby. - Best Points: - She has an elegant and androgyn look in her original setting, this is not often by female-dolls (at that time at least) and it made me totally fall in love with her. - Worst Points: - Non. She's absolutley perfect. - Head Sculpt: - SDGr Girl To view comments, simply sign up and become a member!
0.8206
FineWeb
0.277344
Space Shuttle Challenger The Space Shuttle Challenger disintegrated due to O-ring failure 73 seconds after lift-off on January 28, 1986, killing all seven members on board. Those killed included the first private citizen to fly on a space shuttle, who was a high school teacher. Well, today we can say of the Challenger crew: Their dedication was, like Drake's, complete. The crew of the space shuttle Challenger honored us by the manner in which they lived their lives. Despite an "investigative" process (apparently designed to prevent discovery of the cause), physicist Richard Feynmann eventually learned that O-rings which provided crucial seals failed in the unusually cold weather of the launch. It was freezing cold that day (29 degrees F), whereas it had been at least 53 degrees on all previous launches. NASA had said the O-rings had a safety factor of 3, which Richard Feynman later exposed as untrue. - SRB engineers had previously warned about problems in the o-rings, but had been dismissed by NASA management. - For example. in determining if flight 51-L was safe to fly in the face of ring erosion in flight 51-C, it was noted that the erosion depth was only one-third of the radius. It had been noted in an experiment cutting the ring that cutting it as deep as one radius was necessary before the ring failed. Instead of being very concerned that variations of poorly understood conditions might reasonably create a deeper erosion this time, it was asserted, there was "a safety factor of three." This is a strange use of the engineer's term, "safety factor."
0.8696
FineWeb
3.609375
We are experiencing some wonderfully foggy conditions today. Fog is this librarian’s very favorite weather. I love the way it changes the look and feel of everything around you and the mystery that it holds. It’s somewhat like reading a good mystery novel, when you can’t quite make out what’s ahead. Fog is a weather condition that results in very low-lying clouds, made up of suspended water droplets. It forms when the difference between air temperature and dew point is less than 4 °F. An area the size of an Olympic sized swimming pool contains about 2 pints of water. Besides just looking really cool, all of this moisture in the air can be environmentally useful. The Redwood forests of California get around a third of their moisture from fog. In desert areas, fog can serve as a source of moisture when nets are used to collect the droplets from the fog. Here a few other interesting facts about fog: - The foggiest place on Earth is Grand Banks off of the coast of Newfoundland, where the cold Labrador Current mixes with the warmer Gulf Stream. The area sees over 200 foggy days per year. - Shadows that are cast through fog are three dimensional. - Foghorns use a low-pitched tone because low-pitched notes lose less energy to water droplets than high-pitched ones and thus travel further. - Fog can be both good and bad for your skin. The moisture in fog acts as a natural emollient for your skin. But, even though fog blocks visible light, you may not realize that it does let through the ultraviolet light that causes wrinkles and sunburn. - Fog affects our perception of speed. Because of reduced visibility we should drive more slowly and in fog we have to be extra careful to do so. Since the fog obscures our surroundings, our brains don’t perceive the contrasts in the objects around us as well and thus we think we are going slower than we actually are. - Be sure to use your low beam headlights when driving in foggy conditions. High beams will be reflected back at you in the fog, making it even harder to see. - Fog helped us win the Revolutionary War. At the Battle of Long Island, on August 27, 1776, George Washington and troops were beginning to be surrounded by British forces and needed to retreat. A heavy fog rolled in and provided cover for just long enough for the retreat of 9,000 Americans. When the fog lifted, the British moved into the area to find it empty.
0.7649
FineWeb
3.25
Clinical history: 50 year-old female with unilateral complex cystic mass in the right ovary measuring 25.0cm in maximum dimension. What is the most probable diagnosis? Borderline mucinous carcinoma with intraepithelial carcinoma Ovary mucinous tumours are the second most frequent epithelial tumor after serous representing ~15% of ovarian neoplasms. They usually present in middle aged adults, pre-menopausal. 80% are benign, 10% are borderline and 10% are carcinoma. The majority (77%) of ovarian mucinous carcinomas are metastases, so it is important to rule these out. Grossly they are usually unilateral, very cystic and present as very large masses (>10cm). They usually have a smooth outer surface with variable solid areas and are filled with sticky, gelatinous fluid rich in glycoproteins Histologically the mucin producing cells can resemble endocervical, gastric, or intestinal type epithelium. This case was a multilocular tumour that contained an atypical glandular cell proliferation with most areas lined with single layer of mucinous cells whereas the other areas show pseudostratified epithelium with frequent tufting. Approximately 30% of the tumour demonstrated a cribiform architecture with elongated fibrovascular cores along the stroma of the cysts consistent with the findings of intraepithelial carcinoma. According to Khunamornpong et al. (International J Gynaecol Pathol., 2011; 30 :218-230) Stage 1C and greater mucinous borderline tumours that harbour more than 10% intraepithelial carcinoma component have significantly higher recurrence rate (>10%). However, an increment up to 80% of tumour with intraepithelial carcinoma showed only a slight increase in recurrence. The entire appendix was submitted for microscopic examination and had no evidence of dysplasia or malignancy. No history of carcinoma elsewhere.
0.788
FineWeb
1.640625
Sample Type / Medical Specialty: Cardiovascular / Pulmonary Sample Name: ER Report - COPD Patient in ER complaining of shortness of breath (COPD) (Medical Transcription Sample Report) The patient is a 49-year-old Caucasian male transported to the emergency room by his wife, complaining of shortness of breath.HISTORY OF PRESENT ILLNESS: The patient is known by the nursing staff here to have a long history of chronic obstructive pulmonary disease and emphysema. He has made multiple visits in the past. Today, the patient presents himself in severe respiratory distress. His wife states that since his recent admission of three weeks ago for treatment of pneumonia, he has not seemed to be able to recuperate, and has persistent complaints of shortness of breath. Today, his symptoms worsened and she brought him to the emergency room. To the best of her knowledge, there has been no fever. He has persistent chronic cough, as always. More complete history cannot be taken because of the patientís acute respiratory decompensation.PAST MEDICAL HISTORY: Hypertension and emphysema.MEDICATIONS: Lotensin and some water pill as well as, presumably, an Atrovent inhaler.ALLERGIES: None are known.HABITS: The patient is unable to cooperate with the history.SOCIAL HISTORY: The patient lives in the local area with his wife.REVIEW OF BODY SYSTEMS: Unable, secondary to the patientís condition.PHYSICAL EXAMINATION: VITAL SIGNS: Temperature 96 degrees, axillary. Pulse 128. Respirations 48. Blood pressure 156/100. Initial oxygen saturations on room air are 80. GENERAL: Reveals a very anxious, haggard and exhausted-appearing male, tripoding, with labored breathing. HEENT: Head is normocephalic and atraumatic. NECK: The neck is supple without obvious jugular venous distention. LUNGS: Auscultation of the chest reveals very distant and faint breath sounds, bilaterally, without obvious rales. HEART: Cardiac examination reveals sinus tachycardia, without pronounced murmur. ABDOMEN: Soft to palpation. Extremities: Without edema.DIAGNOSTIC DATA: White blood count 25.5, hemoglobin 14, hematocrit 42.4, 89 polys, 1 band, 4 lymphocytes. Chemistry panel within normal limits, with the exception of sodium of 124, chloride 81, CO2 44, BUN 6, creatinine 0.7, glucose 182, albumin 3.3 and globulin 4.1. Troponin is 0.11. Urinalysis reveals yellow clear urine. Specific gravity greater than 1.030 with 2+ ketones, 1+ blood and 3+ protein. No white cells and 0-2 red cells. Chest x-ray suboptimal in quality, but without obvious infiltrates, consolidation or pneumothorax.CRITICAL CARE NOTE: Critical care one hour. Shortly after the patientís initial assessment, the patient apparently began to complain of chest pain and appeared to the nurse to have mounting exhaustion and respiratory distress. Although O2 had been placed, elevating his oxygen saturations to the mid to upper 90s, he continued to complain of symptoms, as noted above. He became progressively more rapidly obtunded. The patient did receive one gram of magnesium sulfate shortly after his arrival, and the BiPAP apparatus was being readied for his use. However, the patient, at this point, became unresponsive, unable to answer questions, and preparations were begun for intubation. The BiPAP apparatus was briefly placed while supplies and medications were assembled for intubation. It was noted that even with the BiPAP apparatus, in the duration of time which was required for transfer of oxygen tubing to the BiPAP mask, the patientís O2 saturations rapidly dropped to the upper 60 range. 
All preparations for intubation having been undertaken, Succinylcholine was ordered, but was apparently unavailable in the department. As the patient was quite obtunded, and while the Dacuronium was being sought, an initial trial of intubation was carried out using a straight blade and a cupped 7.9 endotracheal tube. However, the patient had enough residual muscle tension to make this impractical and further efforts were held pending administration of Dacuronium 10 mg. After approximately two minutes, another attempt at intubation was successful. The cords were noted to be covered with purulent exudates at the time of intubation. The endotracheal tube, having been placed atraumatically, the patient was initially then nebulated on 100% oxygen, and his O2 saturations rapidly rose to the 90-100% range. Chest x-ray demonstrated proper placement of the tube. The patient was given 1 mg of Versed, with decrease of his pulse from the 140-180 range to the 120 range, with satisfactory maintenance of his blood pressure. Because of a complaint of chest pain, which I myself did not hear, during the patientís initial triage elevation, a trial of Tridil was begun. As the patientís pressures held in the slightly elevated range, it was possible to push this to 30 mcg per minute. However, after administration of the Dacuronium and Versed, the patientís blood pressure fell somewhat, and this medication was discontinued when the systolic pressure briefly reached 98. Because of concern regarding pneumonia or sepsis, the patient received one gram of Rocephin intravenously shortly after the intubation. A nasogastric and Foley were placed, and an arterial blood gas was drawn by respiratory therapy. Dr. X was contacted at this point regarding further orders as the patient was transferred to the Intensive Care Unit to be placed on the ventilator there. The doctorís call was transferred to the Intensive Care Unit so he could leave appropriate orders for the patient in addition to my initial orders, which included Albuterol or Atrovent q. 2h. and Levaquin 500 mg IV, as well as Solu-Medrol. Critical care note terminates at this time.EMERGENCY DEPARTMENT COURSE: See the critical care note.MEDICAL DECISION MAKING (DIFFERENTIAL DIAGNOSIS): This patient has an acute severe decompensation with respiratory failure. Given the patientís white count and recent history of pneumonia, the possibility of recurrence of pneumonia is certainly there. Similarly, it would be difficult to rule out sepsis. Myocardial infarction cannot be excluded.COORDINATION OF CARE: Dr. X was contacted from the emergency room and asked to assume the patientís care in the Intensive Care Unit.FINAL DIAGNOSIS: Respiratory failure secondary to severe chronic obstructive pulmonary disease.DISCHARGE INSTRUCTIONS: The patient is to be transferred to the Intensive Care Unit for further management. cardiovascular / pulmonary, copd, emergency room, er, chronic obstructive pulmonary disease, emphysema, chest x-ray, critical care, intensive care unit, dacuronium, pneumonia, oxygen, respiratory, atrovent, intubation, transcribed medical transcription sample reports and examples are provided by various users and are for reference purpose only. MTHelpLine does not certify accuracy and quality of sample reports. These transcribed medical transcription sample reports may include some uncommon or unusual formats; this would be due to the preference of the dictating physician. All names and dates have been changed (or removed) to keep confidentiality. 
Any resemblance of any type of name or date or place or anything else to real world is purely incidental.
0.8742
FineWeb
1.765625
If We opened for them a gate in the sky and they were to continue ascending to it, - Why do they not acknowledge? - "If they see an object falling from the sky, they will say, "Piled clouds" - Disbelievers do not even believe in signs: - "If you are truthful, let the pieces from sky fall upon us." - Disbelievers will not believe even they see the miracle: - "As you claim, you should make the pieces from the sky to fall upon us."- Disbelievers will not believe even if they see the miracle: - As for the messengers, rejectors said “a magician”; as for the books “a magic book”: - If they see every sign they do not acknowledge it:
0.9998
FineWeb
2.21875
Enhance separation and lower your maintenance costs with General Kinematics’ turnkey recycling systems. Our seasoned experts design, build and install your system to provide a complete solution trusted to meet the toughest requirements. GK systems are dedicated to improving your material purity while providing profitable, worry-free results: - Enhanced Separation. The leader in vibratory screening and classification technology, GK’s jam-proof, dependable and high-capacity sorting solutions increase recoverable commodities’ profitability. - Lower Maintenance. GK’s recycling equipment is engineered to require less maintenance while maintaining a longer service life. - Turnkey solutions. GK systems are designed so that each component seamlessly integrates to create a proven process that will maximize your profitability and recovery. Get started on your system today! Contact GK.
0.7201
FineWeb
0.84375
Friday, October 19th is the day Mason will observe the National Day on Writing! The National Day on Writing celebrates the role of writing in our everyday lives. Whether it’s academic papers, email, texts, tweets, or graphic novels, we all write. This National Day on Writing, Mason’s Writing Across the Curriculum Program will be celebrating writing on the Fairfax campus. Join us in the Fenwick Library Atrium and the JC South Plaza to tell us your writing story! The WAC program will be at these locations from 10:00AM to 2:00PM to help faculty, staff, and students share why they write, what they write, where they write, when they write, and how they write. We will be sharing these writing stories across social media to join in the national celebration. So come celebrate the many forms that writing takes in our everyday lives! We are looking for volunteers to help us generate content for social media. This can be a fun, in-class writing activity. For more information about how you, your students, and your colleagues can participate, please click on this link.
0.6547
FineWeb
2.140625
Many people report using marijuana to cope with anxiety, especially those with social anxiety disorder. THC seems to lower anxiety with lower doses and increase anxiety with higher doses. CBD seems to lower anxiety at all doses that have been tested. Some studies show that cannabis use can cause anxiety symptoms. However, other research shows that cannabis, when used correctly, can be beneficial in treating anxiety symptoms. Marijuana, in particular CBD and low THC levels, show potential benefits for temporarily reducing anxiety symptoms. Marijuana is said to help facilitate relaxation, calm and improve sleep. It is sometimes used to treat social anxiety, agoraphobia, PTSD, panic disorders, and anxiety-related sleep disorders. However, marijuana has a complicated effect on anxiety and seems to help some and make others worse. However, marijuana-induced anxiety seems to be related to higher doses of THC and lower doses of CBD. Cannabis has been documented to relieve anxiety. However, research has also shown that it can cause feelings of anxiety, panic, paranoia and psychosis. In humans, Δ9-tetrahydrocannabinol (THC) has been associated with an anxiogenic response, while anxiolytic activity has been mainly attributed to cannabidiol (CBD). In animal studies, the effects of THC are highly dose-dependent and the biphasic effects of cannabinoids on anxiety-related responses have been extensively documented. A more precise evaluation of the anxiolytic and anxiogenic potentials of phytocannabinoids is required, with the aim of developing the “holy grail” in cannabis research, a formulation with medicinal activity that can aid in the treatment of anxiety or mood disorders without causing effects anxiogenic. While marijuana can cause anxiety in a small number of people, it can help treat the condition in others. Specifically, some components of the herb, such as cannabidiol, may be helpful in treating anxiety in some people. Marijuana, in general, can be beneficial in treating anxiety, especially in low doses. Cannabidiol (CBD) can help treat anxiety at all doses, while THC can help relieve anxiety at low doses but cause it at high doses. Experts believe that one of the brain's natural cannabinoid receptors, the cannabinoid receptor type 1 (CB1R), is closely related to why marijuana can help treat anxiety in some people.
0.5063
FineWeb
2.65625
Report 1: Indonesean Government Report On Sugarcane Cutters Sugarcane cutting, loading and the transport process of sugar cane (TMA) in the sugar industry in Indonesia and even the industry in the world often becomes a "bottle neck". There are so many problems that require attention and require the best solution to solve. In the harvesting process, the sugar cane is often not properly cut since the butts are left. There is cane left behind in the field and there is cane that falls off the trucks during transshipment and is wasted. It is very ironic that so much work is done to get the optimal growth of sugar cane by good land preparation, planting, care, nurturing but then these profits are lost in the harvesting and delivery. Not to mention the capacity of sugar factories that do not meet supply, will increase new problems, which in turn influence and increases the operational losses of sugar factories. In the 2010/2011 harvest, PTPN II plans to operate two sugar mills, with capacity of 3400 tons per day for each and need 6800 people per day to cut (the average performance of the cutters is 1 ton of cane per person per day). The management of the labour force is made even worse by the following: - The geographical condition of PTPN II, sugar cane plantation is in the vicinity of Medan (capital of province and the third largest city in Indonesia), this causes the people to rather get more respectable jobs then to cut sugar cane which is difficult physical work that is outside (hot, sunny ) and uncomfortable due to the leaves causing a rash. - There is lots of infrastructure development in Medan Binjai, Langkat and Deli Serdang that offer better jobs with higher wages. - The earning to cut sugar cane is not attractive enough and will rather do other work on the farm then cut sugarcane. This shows that there has to be a shift toward mechanical harvesting or cutting of sugarcane. Time and Place of Visit: Visit was conducted in PT Laju Perdana Indah ( Ogan Komering-Soutn Sumatera ) on 28-30 October 2010. The purpose of Visit: The purpose of visit to PT Laju Perdana Indah is as follows: - To have a look and evaluate the performance of cutting tools for sugar cane that operates semi mechanized cutting machinery at PT Laju Perdana Indah Palembang. - To study the feasibility of a mechanical cutting tool to be used in the sugar cane field at PT Perkebunan Nusantara II Benefit of the Visit: The benefit of this visit report are as follows: - As an input to select the alternative cane cutting machine in operation at the PT Perkebunan Nusantara II - For the authors, the benefit of the visit is to applying this knowledge in the application of agricultural mechanization of sugarcane cultivation - As reference for observers of the sugar industry
0.8979
FineWeb
2.1875
Springer Online Journal Archives 1860-2000 Chemistry and Pharmacology Mechanical Engineering, Materials Science, Production Engineering, Mining and Metallurgy, Traffic Engineering, Precision Mechanics Abstract A well-characterized flame-assisted plasma was developed to understand the role of flow nonuniformities and plasma/wall interactions in plasma devices for use in validation of laser-based Doppler shift spectroscopic methods. A hydrogen/oxygen capillary diffusion flame burner was used as a plasma source, with barium seeded into the reactants to provide a source of ions and electrons. For analysis the plasma was assumed to be a stationary, partially ionized, collision dominated, thermal plasma consisting of barium ions, electrons, and neutrals between two parallel-plate electrodes. The plasma was examined in terms of the continuum equations for ions and electrons, together with Poisson's equation to predict spatial profiles of electron and positive ion density and potential as functions of applied potential. First an analytic solution based on constant plasma properties and negligible difusion was introduced. The model was then extended by including effects of diffusion and variable plasma properties. Experimentally, current/voltage characteristics of the plasma were measured conventionally, relative ion concentration and temperature were measured with laser-induced fluorescence, and local potential distribution was measured using an electrostatic probe. The diffusionless theory predicted well the bulk behavior of the plasma, but not the correct spatial distributions of ion concentration and potential. The extended model produced a more satisfactory fit to the data. At conditions of 1.4 equivalence ratio, 70 torn pressure, 300 ppm seed concentration, and 100–400 V applied potentials, electric fields of the order of 102, 103 V/cm were observed near the powered electrode, and of few tens of V/cm in the hulk of tire plasma. The field strength in the sheath ensures the operation of the Doppler shift diagnostics, once the recommendations tor LIF signal detectability are fulfilled. Type of Medium:
0.9927
FineWeb
1.914063
-the ability to read and write. -synonyms: ability to read and write, reading/writing proficiency; -learning, book learning, education, scholarship, schooling “literacy and numeracy are the first goals of education” -competence or knowledge in a specified area. “wine literacy can’t be taught in three hours” Literacy, to me, has always been so intriguing. The thought that we can process our thoughts through writing, and not only the verbal. When the words seem to simply escape our minds before they exit from our mouths, writing can provide such a power and grace to the person trying to portray any sort of information. It gives time to process information, elaborate, and convey exactly how one is feeling in that moment. Grammar, sentence structure, and fluidity in any form of literature are my jam. There are few things I adore as much as a well written piece of work. I think my own passion for this area stems from my earlier days, through the work I was exposed to and experienced, in and through writing letters to pen pals. Up until this point, I had never really written or received a letter specifically addressed to me. The idea that communication can expand beyond friends that I see everyday at school, and family that I spend everyday with at home, was revolutionary to me. This was before any sort of instant messenger, and long before texting. We were able to get to know students of a similar age, in another province, by writing letters back and forth to one another. We were blessed with the opportunity to actually meet these friends in person a while later, after we had gotten to know each other over the course of the year. By this point in time, it already felt like we knew each other fairly well, as if we had been friends for ages. I am well aware that the concept of letter writing is not new or innovative, by any means, but to little me, it opened a new door to this entire world that I hadn’t a clue existed. From this moment onwards, my life was changed, and I really haven’t halted my passion ever since. I would much rather send a letter across the country that very well may take weeks to arrive, than send a message that will be received almost instantaneously. There’s more effort, thought, and love put into a hand written letter, and I thrive off of that. Literacy does not comprise solely of written works, it encompasses a great deal, which surely also includes literature. written works, especially those considered of superior or lasting artistic merit. “a great work of literature” While I am certain that there are some who may prefer reading over writing, or vice versa, these two really do go hand in hand. They work alongside each other, and arguably help better one another. If there is an adult or a child who struggles with writing, they can improve their skills in this area by grasping further knowledge of literature. This can expand vocabulary, improve spelling, and provide new ideas for the writer to explore. I had a desire for reading, for as long as I can remember. I would spend my recess in the library sorting, organizing, and reading books. I was most certainly “that library kid”, and I loved every minute of it. The library was a sort of saving grace for me, as I knew that I was always welcomed there, and it gave me a place to escape the chaotic and refocus. It brought comfort to me, and through this, my intrigue into the world of literature grew into a passion that I still hold dear to me to this day. 
This, in fact, sparked enough of an interest in me, that I am now pursuing a career with the end goal of becoming an elementary school librarian, so I can help create this safe place for students to come and explore their own paths.
0.5042
FineWeb
3.03125
“I think that cooking is one of the most gratifying chores in the house. I love it! If I’m in a bad mood or having a bad day, cooking always cheers me up,” Susana assures. Susana Saporiti is from Buenos Aires, Argentina. She graduated from the Industrial School of Applied Arts Fernando Fader as a Drawing, Engraving, and Metal Teacher. Throughout her career as a painter, Susana has experimented with many materials and artistic styles, such as charcoal, pastels, oil, collage, woodcut; portraits, nature, interior, figures, landscapes; realism and abstraction…Her intrinsic artistic sensitivity makes an appearance in many other aspects of her life. “When I cook,” she says, “I feel that the color combinations and the presentation of the food, in the menu and on the table, are essential.” Color is, indeed, a very important factor when pursuing a healthy and balanced diet. According to registered dietitian, author, and former Academy of Nutrition and Dietetics spokesperson, Karen Ansel, adding a splash of colorful seasonal foods to your plate makes for more than just a festive meal. “A rainbow of foods creates a palette of nutrients, each with a different bundle of potential benefits for a healthful eating plan,” she explains. These recipes, Roasted Bell Peppers and Eggplants in Vinegar, are a small sample of Susana’s palette. To learn more about her artistic career and follow her on Facebook, visit Susana Saporiti’s page. Roasted Bell Peppers ♥ Bell Peppers→ great source of antioxidants, vitamin C and A, carotene, folate, iron, phosphorus, magnesium, niacin, and riboflavin. A bell pepper’s nutritional content increases as it ripens, so allow your bell peppers to ripen outside of the fridge before you eat them. Also, their nutritional content varies with their color (green, yellow, orange, red, purple, and black). Since high temperatures can damage the vegetable’s phytonutrients, eating your bell peppers raw maximizes their health benefits. Red bell peppers have a higher nutritional value than green, yellow or orange bell peppers. Purple or black bell peppers, which are usually more difficult to find, also have a high nutritional content, so if you ever see them at your local farmers market, make sure you give them a try, or even better, grow them at home! - 6 bell peppers (two orange, two red, and two yellow) - 6 to 8 garlic cloves - Pepper corns - 3 or 4 bay leaves - A large glass jar - Vegetable or canola oil - Wash the bell peppers and place them on a cookie sheet. Preheat the oven to 375 degrees Fahrenheit. Place cookie sheet in oven and let the peppers bake for about an hour, or until they get wrinkly and brown. - When they are ready, take them out of the oven and wrap them in newspaper (or any paper you have) until they cool off. - Once they have cooled off, remove the paper, peel them, and remove the seeds and stem. Cut them into slices and put them in the glass jar. Try to mix the colors and add the garlic cloves, bay leaves and pepper corns in between layers of peppers. - Do not add salt. Salt can cause your peppers to spoil sooner. - Finally, fill the jar with oil, enough to cover all the peppers. Do not use olive oil because it will thicken once refrigerated. Eat your delicious roasted bell peppers in sandwiches or on crackers. Use them as a pizza toping or to decorate and enhance any dish! 
Note: “Once you ate all the peppers, you can recycle the oil and use it for salads or for cooking, or you can roast more bell peppers and refill the jar,” Susana says, and she adds, “These roasted bell peppers are great to accompany with a glass of wine!” According to Susana, these marinated eggplants are simply delicious when served with bread or crackers. They are also a great compliment to salads and sandwiches and can be served as a side to other dishes. ♥ Eggplant→ also known as aubergine, eggplant is a species of nightshade–members of the family Sonalaceae–which are usually grown for they economical importance (tomatoes, bell peppers, and potatoes are also part of this family). Nightshade plants are also known for being poisonous to humans, belladonna or “deadly nightshade” is an example of a toxic nightshade plant. Eggplants are rich in fiber and a great source of vitamin K, thiamin, vitamin B6, folate, potassium and manganese. They are also effective antioxidants and help lower LDL cholesterol levels. - 1 large or 2 small eggplants (if you want less seeds, avoid large eggplants, the bigger they are, the more seeds they will have) - 1 1/2 cup of white vinegar - 3 cups of water - 1/2 teaspoon of salt - 1-2 bay leaves - 2-3 cloves of garlic (crushed) - Pepper corns–or freshly ground pepper–and chili flakes - Chopped parsley - Canola oil - A large glass jar - Cut the eggplants into 1/2 inch slices. Then cut slices into quarters. - In a medium pot on high heat, boil the water and vinegar. Add salt, a crushed garlic and bay leaves. Once the water has begun to boil, add the eggplants to the pot. Allow eggplants to cook for 4-5 minutes or until they look cooked. - With a skimmer, remove the quartered eggplant from the pot and place them in a colander (save the vinegar-water in the pot). Allow eggplants to completely cool off. - Put eggplant quarters in the glass jar. Add chopped parsley, one or two crushed garlic cloves, chili flakes, pepper corns or ground pepper, and a generous squirt of oil. - Finally, fill the jar with the cooled vinegar-water, enough to cover all of the eggplants. Refrigerate and enjoy!
0.7199
FineWeb
2.15625
Yesterday I went to the opening of a new exhibit at the Peabody Museum. It was a neat topic - using old spy satellite data to see what landscapes used to look like, because apparently a lot of landscapes have lost important archaeological features in the meantime. Using those old pictures, you can see the things that have since disappeared, where you would otherwise have no way of studying them. It is also very cool that this exhibit was worked on by archaeology students, so two of my closest friends had been working on this for months. It was great to go see them present it. Here they are, with the wall they worked on.
0.5076
FineWeb
1.554688
What is The National Coat of Arms of Ireland? The Coat of Arms of Ireland is blazoned as Azure a harp Or, stringed Argent (a gold harp with silver strings on a blue background). These arms have long been Ireland‘s heraldic emblem. References to them as being the arms of the king of Ireland can be found as early as the 13th century. When the crowns of England, Scotland and Ireland were united in 1603, they were integrated into the unified royal coat of arms of kingdoms of England, Scotland, and Ireland. The harp was adopted as the emblem of the Irish Free State when it separated from the United Kingdom in 1922. They were registered as the arms of Ireland with the Chief Herald of Ireland on 9 November 1945.
0.9568
FineWeb
3.375
European Americans Arrive In the late 1700s, fur traders traveled the great tributary of the Missouri River, the Yellowstone, in search of Native Americans with whom to trade. They called the river by its French name, “Roche Jaune.” As far as we know, pre-1800 travelers did not observe the hydrothermal activity in this area but they probably learned of these features from Native American acquaintances. The Lewis and Clark Expedition (1804–1806), sent by President Thomas Jefferson to explore the newly acquired lands of the Louisiana Purchase, bypassed Yellowstone. They had heard descriptions of the region, but did not explore the Yellowstone River beyond what is now Livingston, Montana. A member of the Lewis and Clark Expedition, John Colter, left that group during its return journey to join trappers in the Yellowstone area. During his travels, Colter probably skirted the northwest shore of Yellowstone Lake and crossed the Yellowstone River near Tower Fall, where he noted the presence of “Hot Spring Brimstone.” Not long after Colter’s explorations, the United States became embroiled in the War of 1812, which drew men and money away from exploration of the Yellowstone region. The demand for furs resumed after the war and trappers returned to the Rocky Mountains in the 1820s. Among them was Daniel Potts, who also published the first account of Yellowstone’s wonders as a letter in a Philadelphia newspaper. Jim Bridger also explored Yellowstone during this time. Like many trappers, Bridger spun tall tales as a form of entertainment around the evening fire. His stories inspired future explorers to travel to see the real thing. Osborne Russell wrote a book about fur trapping in and around Yellowstone during the 1830s and early 1840s. As quickly as it started, the trapper era ended. By the mid-1840s, beaver became scarce and fashions changed. Trappers turned to guiding or other pursuits. Looking for Gold During 1863–1871, prospectors crisscrossed the Yellowstone Plateau every year and searched every crevice for gold and other precious minerals. Although gold was found nearby, no big strikes were made inside what is now Yellowstone National Park. Last updated: June 14, 2016
0.5961
FineWeb
4.03125
The move toward digitalization of substations brings with it several challenges when it comes to data capture and transmission. Fibre optic technology is one of the solutions possible since it is immune to EMI/RFI and has also achieved a high level of maturity that has helped drive down cost. Moreover, optical sensor technology does not need a power source at the measuring point and the light source can be hundreds of meters away and outside the substation itself. Installingfibre optic cable at a substation is another challenge since it must pass through the insulators. Fortunately, the manufacture of fibre optic post insulators has become possible in an industrial, cost-effective manner using isostatic technology. The fiber optic hole is located on the neutral axis and as such does not affect mechanical performance while making it possible to measure acoustic waves and vibrations inside the ceramic ost. The stress level of the insulator does not affect the acoustic signal frequency or propagation within the material itself. Data collected, once filtered and analyzed, can then be used for substation monitoring and in this sense is a first step toward an ‘intelligent’ insulator that senses its environment and provides important system data needed for substation management. This presentation explains the method of manufacturing ceramic insulators containing a fibre optic hole. It also explores potential future applications in monitoring forces, movements, and vibrations of ceramic insulators as part of the digital substation. Learn more in the study prepared for the INMR conference by Markku Ruokanen, Group R&D Director, PPC Insulators
0.7829
FineWeb
2.59375
Presentation on theme: "Normal Forms By Christopher Archibald October 16 th 2007."— Presentation transcript: Normal Forms By Christopher Archibald October 16 th 2007 Overview Database Normalization 1 st Normal Form 2 nd Normal Form 3 rd Normal Form Boyce- Codd Normal Form (BCNF) Lossless-Join Normalization Normalization is a technique for designing relational table to: Minimize duplication of information Minimize duplication of information Reduce the potential for data anomalies Reduce the potential for data anomalies Normal Form Normal forms provide a stepwise progression toward the goal of fully normalized relation schema that are free for data redundancies. First Normal Form (1NF) 1NF definition: A schema R is in 1NF only when the attributes comprising the schema are atomic and single-valued No Multi-valued attributes No Multi-valued attributes No composite attributes No composite attributes No repeating groups (2 columns can not store similar information) No repeating groups (2 columns can not store similar information) Can’t have a Null Attribute Can’t have a Null Attribute Must have a Primary Key Must have a Primary Key First Normal Form Example This is in 1NF (Has primary Key, no repeating group, No Null attributes and No multivariable What happens if James gets a Second Phone Number? First Normal Form Example No longer in 1NF because Telephone Number has a multivariable. Now we need to redesign our table First Normal Form Example Not in First Normal forum Tel. No. 3 is a null attribute Tel. No. 1-2 repeat similar information (Repeating group) First Normal Form This is in First Normal Form Telephone Number is no long a repeating group No Multivariable No Null Attributes Has a Primary Key Second Normal Form (2NF) 2NF Definition: A relation schema R is in 2NF if every non-prime attribute in R is fully functionally dependent on the primary key of R. Must be 1NF Must be 1NF An Attribute that is not part of the candidate key must be dependent on the candidate key and not a part of the candidate key An Attribute that is not part of the candidate key must be dependent on the candidate key and not a part of the candidate key Second Normal Form Example Only Candidate key is (Employee, Skill) Not in 2NF Current Work Location is dependent on Employee Can Cause an Anomaly Updating Jones Work location for Typing and Shorthand but not Whittling. Then asking “What is Jones current work location”, can cause a contradictory answer, because there are 2 different locations. Second Normal Form Example Both tables are in 2NF Meets 1NF requirements No non-primary key attribute is dependent on part of a key. 1NF and 2NF 1NF and 2NF remove most anomalies Following table is in 2NF There is redundancy under Winner/Winner DoB Al Fredrickson and Chip Masterson Al Fredrickson and Chip Masterson Can cause an anomaly Can cause an anomaly Third Normal Form (3NF) 3NF Definition: A relation schema R is in 3NF if no non-prime attribute is functionally dependent on another non- prime attribute in R Table must be in 2NF Table must be in 2NF Eliminate field that do not depended on the primary key by placing them in different tables Eliminate field that do not depended on the primary key by placing them in different tables Third Normal Form Example Table is in 2NF but fails to meet 3NF Winner Date of Birth is Dependent on Winner If Al Fredrickson Date of birth is update in the first row but not the second ask, “What Al Fredrickson Date of birth” will result in 2 different dates. 
If Al Fredrickson Date of birth is update in the first row but not the second ask, “What Al Fredrickson Date of birth” will result in 2 different dates. Third Normal Form Example Table is in 3NF Meets 1NF and 2NF No non-primary Key attribute is Dependent on another non- primary Key attribute Update Anomalies cannot occur in these tables Boyce-Codd Normal Form (BCNF) BCNF Definition: A relation Schema R is in BCNF if for every non-trivial functional dependency in R, the determinant is a superkey of R Does not allow Functional Dependency that is not part of a Candidate key Does not allow Functional Dependency that is not part of a Candidate key Most 3NF meet the requirement of a BCNF Boyce-Codd Normal Form Example Candidate key (Tutor ID, Student ID) And (SSN, Student ID) Table is in 3NF, but not BCNF SNN is dependent on Tutor ID but (Tutor id, SNN) is not a Candidate key Other Normal Forms There is also Fourth normal form Fourth normal form Fifth normal Form Fifth normal Form Domain/key Normal form Domain/key Normal form Sixth normal form Sixth normal form Which will be covered in chapter 9 Lossless-Join Decomposition The principle behind Lossless-Join decomposition is that the decomposition of a relation schema, R, should be strictly reversible, i.e. When we break tuples in to different tables for normalization we should be able to combined them and get what we started Lossless-Join Decomposition Flight # OriginDestinationMileage DL723Boston St. Louis 1214 DL577Denver Los Angeles 1100 DL5219Minneapolis St. Louis 580 DL357ChicagoDallas1058 DL555DenverHouston1100 DL5237Cleveland 580 Lossless-Join Decomposition OriginDestinationMileage Boston St. Louis 1214 Denver Los Angeles 1100 Minneapolis St. Louis 580 ChicagoDallas1058 DenverHouston1100 Cleveland 580 Flight # MileageDL7231214 DL5771100 DL5219580 DL3571058 DL5551100 DL5237580 Lossless-Join Decomposition Flight # OriginDestinationMileage DL723Boston St. Louis 1214 DL577Denver Los Angeles 1100 DL577DenverHouston1100 DL5219Minneapolis St. Louis 580 DL5219Cleveland 580 DL357ChicagoDallas1058 DL555Denver Lost Angeles 1100 DL555DenverHouston1100 DL5237Minneapolis St. Louis 580 DL5237Cleveland 580
0.7785
FineWeb
3.21875
Three Guidelines To Understanding The Delta Variant Delta is quickly becoming the dominant coronavirus variant in multiple countries. The variant has spread so fast because it is more contagious than the variants that came before it. At the same time, the U.S. is equipped with highly effective vaccines. Ed Yong, science writer for The Atlantic, talks with Maddie about the interaction between the variants and the vaccines and how that will be crucial in the months ahead.
0.7218
FineWeb
2.546875
From studies of other stars which astronomers can see in many different stages of their 'life cycle', it seems pretty convincing from the data that the sun must have started out as a large collapsing cloud of gas inside some ancient interstellar cloud. This cloud was 'polluted' by a supernova several million years before the collapse phase ended, because we see certain isotopes of aluminum which could not have been a part of this cloud for very long unless they had been implanted by such an event. The cloud collapsed for millions of years until it formed a rotating disk with a large central bulge. Out of the disk would eventually form the planets, and out of this central bulge where most of the mass wound up, formed the sun. We see such rotating disks of gas around many infant stars embedded in nebulae so this has confirmed this basic picture during the last 15 years or so. This isn't just 'theory' anymore. The central bulge continued to collapse under its own gravity until deep in its interior the temperatures got so high...several million degrees....that deuterium atoms began to fuse and give off thermonuclear energy. This slowed the collapse down a bit and eventually led to a second stage where hydrogen nuclei could fuse into helium, which then started the sun's current evolutionary phase. While all this was happening, the surface of the sun became very active and produced a powerful wind which blew out all of the remaining gas and dust in the surrounding disk of gas which had not settled into the bodies of the new planets that had formed. This 'T-Tauri wind' also scoured clean the atmospheres of the inner planets so that they were bare rock. Those that were volcanically active, however, were able to regenerate their atmospheres from the gases ejected by volcanic activity. From start to finish, it took something like 100 million years to form the sun and planets from a collapsing cloud of gas, and this is not very long at all!! All answers are provided by Dr. Sten Odenwald (Raytheon STX) NASA IMAGE/POETRY Education and Public Outreach program.
0.9442
FineWeb
3.96875
TMJ Treatment in Skokie, IL
Causes of TMJ Disorder
The primary cause of TMJ disorder is problems with the jaw muscles or the joint. Injuries to the jaw, neck, and head can lead to TMJ disorder. Common causes include whiplash, clenching or grinding of the teeth, arthritis pain, and stress that might cause clenching and tightening of the facial muscles or jaw.
Symptoms of TMJ Disorder
TMJ disorder can be quite painful and cause a great deal of discomfort. Common symptoms include difficulty opening the mouth, jaws that get stuck open or closed, a clicking or popping sound while moving the mouth or chewing, and tenderness and pain in the face, neck, jaw, and around the ear. Other symptoms that someone with TMJ disorder might experience are headaches, toothaches, dizziness, ringing in the ears, and upper shoulder pain.
TMJ Treatment in Skokie
It is essential to seek assistance from a dental expert to treat TMJ disorder. Here is how a dentist can help you:
- Over-the-counter drugs: The dentist will recommend medications to soothe the swelling and pain, and might also suggest relaxing the muscles by avoiding clenching or grinding of the teeth.
- Nightguard or splint: This is one of the treatments dentists recommend most often; a plastic mouthpiece is fitted over the lower and upper teeth so that they don't touch each other. It minimizes the effect of grinding or clenching and helps improve the bite.
- Dental work: The dentist might replace missing teeth or use bridges or crowns to balance the biting surface of the teeth and correct any bite problems. Only in severe cases is dental surgery suggested.
If you are experiencing TMJ problems, contact us and we will guide you through the treatment process and answer all your concerns.
0.7315
FineWeb
2.15625
Travel Management Information Service (MIS)
Notice: GSA Travel MIS is offering CO2 Scope 3 Business Travel Report Training! This training is open to government agencies, departments, and organizations. Topics include:
- Logging into the GSA Travel MIS reporting tool and navigating to the CO2 reports;
- Running the CO2 reports for the appropriate time periods;
- Reading and interpreting the reports;
- Pulling the relevant reporting information from the reports to populate the FEMP GHG Reporting Spreadsheet; and
- A basic overview of the data environment and methodology behind the emissions estimations.
What is the Travel Management Information Service (MIS)?
Data is critical for managing travel spend, optimizing travel programs, ensuring compliance with travel policy, and monitoring traveler care. Travel MIS supports effective travel data management and utilization. Travel MIS is a data management service that aims to identify, isolate, and mine multiple data sources to support agency decision making and travel data utilization across a variety of functions, including:
- Travel management;
- Policy;
- Conference planning sustainability;
- Strategic sourcing; and
- Much more!
Current aggregation efforts focus on ticketed and reservation data. To date, 103 federal organizations participate in MIS.
What are the benefits for federal agencies?
Travel MIS gives customers access to over 50 reports and dashboards. These tools empower federal organizations to:
- Utilize data to make decisions that impact travel;
- Report on travel activities and behaviors; and
- Meet several annual reporting requirements, including the Premium Class Travel Report and annual Scope 3 CO2 accounting exercises.
How can travel managers make it happen?
Agencies currently participating can access their data immediately through a secure, web-based portal. Agencies not currently participating can be brought into the system within days. GSA's Travel MIS welcomes you to sign up for email notifications about the program.
The shortcut to this page is www.gsa.gov/travelmis
0.6305
FineWeb
1.640625
Direct-Glue Horseshoes – Assuring Adhesive Bond Joint Quality
While virtually all polyurethanes can be painted and/or bonded, coatings do not adhere to every resin system in the same way. Therefore, before attempting to bond or glue any polyurethane surface, several steps must be taken to properly prepare the part for bonding. The quality and durability of an adhesive bond to a polyurethane surface is directly affected by at least three factors:
- Proper resin/hardener mix ratio and curing – Sound Horse controls this in factory production processes
- Cleanliness of part surfaces
- Abrasion or etching of bonding surfaces (Flexx shoes have factory-prepared bonding surfaces)
CLEANING: Remember: saving time via "shortcuts" is a fast way to produce reject parts and failed bonds! Part surfaces must be thoroughly prepared before adhesive bonding. Release agents found on a urethane part (horseshoe) are production lubricants that will contaminate any surface they touch during handling. SOUND HORSE has added a parts cleaning step to remove residual contaminants, including the release agents used on production tools when manufacturing our FLEXX horseshoes. For this cleaning, we use an aqueous process with appropriate detergents that are particularly effective at removing release agents, grease, and oils from polyurethane horseshoes.
0.8354
FineWeb
1.875
Create Fetish is part arcane knowledge and part crafting skill. It is the ability to craft a physical object tied to a specific spell, which focuses power from the spell's source. Arcanists, especially those of the Wisdom and Charisma persuasion (though it could be used for Int-based casters as well), often must appease, satisfy, or bargain with spirits and otherworldly powers. Crafting a fetish, which could be a holy symbol, an object dedicated to a demon, or an element, is a sort of ritual. First an arcane knowledge roll must be made to know what fetish will be appropriate. The correct items must then be assembled and crafted into the fetish. The fetish must be held by the arcanist for 24 hours to bond with it, or, for a quick bond, it must be held for at least 4 hours and then held in the caster's hand while the spell is cast. If the arcanist is ever without or separated from his fetish for more than 4 hours, the fetish becomes inert base materials. The fetish is tied to a specific spell. Holding a fetish while casting the corresponding spell makes that spell easier, safer, or greater in effect. For every level of the arcanist with Create Fetish, he can create a single +1 fetish. The +1 is added to spellcasting rolls, +1 to the effect (damage or other effect die rolls), or +1 spell protection (making a specific spell harder to cast on you). The total level of bonuses is limited to 3 for any one fetish. For example, a 5th-level arcanist (shaman) could have up to 5 levels in fetishes. (Yes, I know he could not have all these spells; it's just an example.)
First fetish: a stick man with ruby eyes and straw hair, vine wrapped around its body. Tied to Snare: +1 cast, +1 effect (in this case raising Snare's AC by 1, to 8).
Second fetish: a necklace with a ruby in it, inscribed with the arcane mark for flame. Tied to Flame Strike: +1 cast, +1 protection (casting this spell on the wearer adds +1 to the difficulty).
Third fetish: a bracelet of small shields, each shield inscribed with a different elemental rune mark. Tied to Shield: +1 cast.
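The bonus bookkeeping above can be made concrete with a small sketch. This is not part of the original rules text; the dictionary layout and function names are invented for illustration, and only the two caps come from the rules as written: at most 3 total bonus levels on any one fetish, and total bonus levels across all fetishes limited to the arcanist's level.

```python
# Hypothetical validator for the Create Fetish bonus caps described above.
MAX_LEVELS_PER_FETISH = 3  # no single fetish may carry more than 3 bonus levels

def fetish_levels(fetish):
    """Total bonus levels on one fetish: +cast, +effect, +protection."""
    return fetish.get("cast", 0) + fetish.get("effect", 0) + fetish.get("protection", 0)

def validate(arcanist_level, fetishes):
    """Return a list of rule violations for an arcanist's set of fetishes."""
    problems = []
    for f in fetishes:
        if fetish_levels(f) > MAX_LEVELS_PER_FETISH:
            problems.append(f"{f['name']}: more than {MAX_LEVELS_PER_FETISH} bonus levels")
    total = sum(fetish_levels(f) for f in fetishes)
    if total > arcanist_level:
        problems.append(f"total of {total} bonus levels exceeds caster level {arcanist_level}")
    return problems

# The worked example from the text: a 5th-level shaman with three fetishes (5 bonus levels in all).
shaman_fetishes = [
    {"name": "stick man (Snare)",            "cast": 1, "effect": 1},
    {"name": "ruby necklace (Flame Strike)", "cast": 1, "protection": 1},
    {"name": "shield bracelet (Shield)",     "cast": 1},
]
print(validate(5, shaman_fetishes) or "all fetishes legal")
```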
0.5116
FineWeb
1.335938
Who Is Dada | What Is Dada
What Is Dadaism | What Is a Dadaist
Dada or Dadaism was a form of artistic anarchy born out of disgust for the social, political and cultural values of the time. It embraced elements of art, music, poetry, theatre, dance and politics. Dada, in French, is a child's word for hobbyhorse. The name Dada, according to historic evidence, is said to have been randomly picked from a German-French dictionary by Tristan Tzara, a Romanian poet, essayist, and editor. The meeting took place in Hugo Ball's Cabaret Voltaire in Zurich, Switzerland in 1916, where a paper knife inserted into the dictionary pointed to the name "DADA". The group saw the word as appropriate for their anti-aesthetic creations and protest activities. Generally, it was an artistic movement (1916 to 1923) born in Europe. It first rose during the horrors of World War I (1914-18). The founders of the movement are said to be the German writer Hugo Ball, Tristan Tzara, Jean Arp, the Alsatian-born artist, and various other intellectuals from Zurich, Switzerland. At the time of formation, around 1916, it is said that it was the war that led to the rise of these artists, writers and intellectuals. History has it that they all converged in Zurich, Switzerland as refugees; at that time, Switzerland was a neutral nation. In addition, around the time the movement was established in Europe in 1916, it was simultaneously established in New York City and in Paris. In America, Marcel Duchamp, Man Ray, and Francis Picabia led it. The leaders are said to have been very young, in their early twenties. The nihilistic revolution mainly flourished in France, Germany, and Switzerland from 1916 to 1923. Its members deliberately embraced irrationality, anarchy and cynicism, rejecting the laws of beauty and social organization. The main goal was to undermine the rules and laws of the ruling establishment, which, to them, had allowed the war to happen. They were angry that modern European society and its ruling systems had allowed such a senseless war to happen. They (the writers and artists) used art to protest. They literally took center stage, using any public forum they could find to undermine rationalism, nationalism, materialism and anything else that, in their view, had led to and contributed to the war.
History of Dadaism
As mentioned earlier, the Dada movement rose during the barbaric World War I, and the movement's proponents were distributed throughout Europe and the United States at that time. The Dada revolution was not restricted to artists, writers, dancers, or musicians. In other words, a large number of its participants were involved in various art cultures and in destroying the boundaries that kept the arts distinct from one another. They were never content simply to create art; they wanted to affect all aspects of Western civilization, influencing it to take part in the revolutionary changes brought about by the First World War. The Dada movement was not devoted to writing books and painting portraits or pictures that society would admire; rather, its goal was to provoke society into reacting to its activities. Its members mainly took to public forums to showcase various activities. This had a major influence on the development of twentieth-century art.
Founded in 1916 in Zurich, Switzerland by a group of exiles from various countries touched by the conflict, Dada consisted of different kinds of people: pacifists, draft dodgers and others who found refuge in Switzerland, a neutral country in Europe. Outraged by the killings ongoing on both sides of the continent, in February of that same year Hugo Ball, Emmy Hennings, Tristan Tzara, and the rest formed the Cabaret Voltaire. According to the Dadaists, the Cabaret Voltaire was dedicated to presenting the ideals of arts and culture as a variety show. About two months later, the name Dada was coined as well. When the movement began, its members were dedicated to fighting the cultural values or morals that they believed had led to the war. They used the basic forms of modern art as the tool for attacking these values of society: for instance, abstraction, collage, chance, audience confrontation, sound and visual poetry, and eclectic typography, among others. The Cabaret and its prototypes held performances across Europe with various presentations of art, poetry, and drama for the different avant-gardes that took Europe by storm: Cubism, Futurism, and Expressionism. They recited poems simultaneously in German, French, and English. Hugo Ball was reported to have dressed in a bizarre cardboard costume as he chanted his sound poetry, while Richard Huelsenbeck flavored the meetings with continuous drumbeats. After the war was over, people were again free to travel to most parts of the continent, and the Dada members dispersed to different parts of Europe. Berlin and Paris became the major strongholds of the movement as most members left Switzerland and regrouped there. Richard Huelsenbeck was recorded to have grouped with a team of writers and artists who adopted the name and the legacy of Dada. During this time, a turn of events such as the collapse of the German Empire sent society into a state of disorder. This led the Dada movement to take a political stage. Its members were involved in disruptive activities across Berlin, including the violent disruption of services held at the Berlin cathedral, the distribution of flyers and leaflets with manifestos, and demonstrations at the National Assembly at Weimar. Other activities included theater and cabaret performances, exhibitions, and lecture tours, among others. Apart from Berlin, other parts of Germany such as Cologne and Hanover were also filled with Dada activities. Jean Arp and Max Ernst of the Cologne Dada movement (1919-1920) were less political than those in Berlin; however, they were strict and harsh on social and moral aesthetics. They carried out various activities such as distributing printed material and depicting the grotesque. Their major performance was in May 1920, an event suspected of showcasing pornographic material. In Paris, the movement under the leadership of Tristan Tzara also distributed its literature reviews and pamphlets, published from 1919 to 1924. The reviews contained remarks and writings by André Breton, Louis Aragon, Philippe Soupault, and Paul Éluard. Some of the early and famous Dada works include Picabia's stuffed-monkey Portrait of Cezanne, Renoir and Rembrandt (1920) and Duchamp's picture of Leonardo's portrait of the Mona Lisa, complete with beard and moustache, in 1919. In addition, there were works such as Schamberg's "God" in 1917 and Man Ray's "Gift" in 1921.
The Dadaists expressed their interests through publications such as The Blind Man, Rongwrong, and New York Dada. Their main aim was to fight the prevailing social aesthetics or values that people believed in, as they themselves did not believe in anything. Their activities mainly stretched between the US and Europe. Historic precursors of Dadaism include Ubu Roi (1896) by Alfred Jarry and the ballet Parade (1916-1917) by Erik Satie, which sensitized the public to the kind of activities Dadaism would pursue. Later, after the war was over, Cubism and the development of collage led most Dada groups to turn political, as opposed to their initial aim of fighting the values of society. After 1922, the movement faded away as many Dada members became interested in Surrealism. Nevertheless, many of today's innovations, creativity and developments in art are owed to Dadaism.
0.8344
FineWeb
2.6875
Could farmers be a solution to climate change?
Here's a bit of hope amongst the doom and gloom of climate change. Set aside for a moment the massive engineering feat that would be required to pump CO2 into underground storage tanks. That idea is going who knows where. A potentially viable solution for restoring carbon to the earth resides with simple changes in the way we farm. A piece in Discover magazine, Could Dirt Help Heal the Climate?, outlines the way agriculture is both a problem and a solution to climate change. Ohio State soil scientist Rattan Lal says in the piece that the world's soils could soak up 13 percent of the CO2 in the world today, the equivalent of removing every bit put into the atmosphere since 1980. Disruptive farming practices like slash and burn, fertilizing, ploughing, and overgrazing have sent massive amounts of carbon up into the atmosphere. Lal estimates that between 70 billion and 100 billion tons of carbon have been stripped from exposed soil since the advent of agriculture 10,000 years ago. One-third of greenhouse gas emissions come from land use changes like agriculture. It stands to reason, then, that changing farming practices could return much of that carbon to the soil where it belongs. Mixing compost into agricultural soils is one way; so is planting perennial grasses to keep the soil covered in vegetation. Perhaps most important is getting farmers to see value in banking carbon, rather than extracting as much productivity as possible from the land. With soil so valuable, farmers can be, it turns out, stewards of the climate.
0.9727
FineWeb
3.296875
Older people make up a significant portion of our population, and projections show the proportion of people over the age of 60 within the global population is set to rise even further over the coming years. ONS data shows by 2066 there will be a further 8.6 million projected UK residents aged 65 years and over, taking the total number in this group to 20.4 million and making up 26% of the total population. Supporting people to age well, and age healthily is something which both local and national policymakers will have to take account of in order to not only ensure good quality of life for their ageing populations but also ensure that services are not overwhelmed. Studies show the higher levels of deprivation people face in their earlier years, the more likely they are to enter older age in poor health and die younger compared with people who experience lower levels of deprivation. This highlights the need to tackle inequality across the life course, with the preventative action having a positive knock on impact on health inequalities in later life. Some of the main drivers of inequalities include: social exclusion and isolation; access to and awareness of health and other community services; financial difficulties including fuel poverty and housing issues; insecure or low paid employment, with reduced opportunity to save or enrol in a formal pension to prepare for retirement; a lack of transport and distance from services; low levels of physical activity; and mobility or existing poor health, often characterised by long term chronic health issues. These inequalities often combine and overlap to create even more challenging situations as people move into older life. More recent research has shown that the Covid-19 pandemic has only exacerbated these inequalities further. Tackling inequalities at the local level Alongside the national discussions around ageing, local demographic change has received comparatively less attention, despite place-based policies and concepts like “ageing well in place” being used in public health conversations for a number of years. Research from the Resolution Foundation explores the intersection between demography and place, and its implications for politics and policy while further research is looking increasingly at local level case studies to highlight pockets of best practice which could help to inform the national approach. A review from Public Health England looked at the specific experiences of older people in coastal and rural areas and the specific challenges they face in comparison to people living urban areas, exploring local level interventions and interventions which adopt a place- based approach, responding to the specific needs of people living in the area. Other research in this area stresses that councils have a clear leadership role in supporting an ageing society and that they are uniquely placed to create strategies which reflect the needs of their populations. Through local engagement of older people systematically and regularly, and through co-production and co-design in the production of local policies and services, councils are in a position to underpin a more positive outlook on ageing, ensuring that older people are regarded as full citizens, rather than objects of charity or pity. 
Approaches to poverty reduction in Greater Manchester
In Greater Manchester, healthy ageing and age inequalities have been made mayoral priorities, and the Greater Manchester Combined Authority set up the Greater Manchester Ageing Hub to respond to what policymakers there see as the opportunities and challenges of an ageing population. In 2018 the city published an "Age Friendly Strategy" to promote increased social inclusion within the city by trying to tackle the barriers to inclusion created by poverty and inequality, including creating age-friendly places which allow older people to participate within their local communities, and promoting healthy ageing through strategies like GM Active Ageing, a partnership with Sport England.
Creating a consensus on healthy ageing
The Centre for Ageing Better and Public Health England established 5 principles for healthy ageing which they are urging government and other policy actors to adopt to support future healthy ageing. The five principles are:
- Good homes and neighbourhoods
- Narrowing inequalities
- Tackling ageism
These principles can be used as building blocks to help organisations create strategies and policies which accurately reflect the core needs of people as they age. One thing which continues to be a challenge, however, is integrating intersectionality into both research and strategies or frameworks on ageing. Not treating "older people" as one homogenous group, but taking account of the individual experiences of specific groups and how this may impact their experience of inequalities, is something researchers are making efforts to resolve in their work, and while there are currently only limited studies which look specifically at BAME or LGBT groups, in the future taking account of intersectionality in ageing and inequalities will become more commonplace.
The future of ageing
We are living longer than ever before. Taking steps to reduce inequalities and support healthy ageing will ensure that those extra years are fulfilling, both for the individual and for society. Helping people to continue to contribute to society, and to really live, embrace and enjoy old age rather than just exist in it, should be a priority for everyone. Reducing inequalities to support people to age well will be a major contributor to ensuring this happens.
Follow us on Twitter to see which topic areas are interesting our research team.
0.5304
FineWeb
2.90625
Animals: The Consumers
The Desert Food Chain - Part 11
As the name "consumers" suggests, animals, unlike typical plants, eat other organisms to survive. Additionally, most animals, unlike plants, can move themselves from place to place. They can seek refuge from extreme environmental conditions such as the high heat and prolonged droughts of the desert. They have specialized tissues, including, as a few examples, muscles used for movement, a nervous system used for processing and sending signals, and internal chambers used for digesting food. Animal organisms (excepting those of animals such as sponges, jellyfish and barnacles) have a basically bilateral symmetry, or mirror-image left and right halves. By comparison, typical plants, the "producers," manufacture their own food, or carbohydrates, using the process of photosynthesis; that is, plants fabricate glucose, a major component in the food chain, using water and carbon dioxide as raw materials and sunlight as fuel. They remain anchored in place by root systems. Since they cannot take refuge from extreme environmental conditions, they rely on various adaptations to withstand desert heat and drought. They have no muscles, nervous systems or digestive chambers. Typically, a plant organism lacks bilateral symmetry, although some parts (for instance, the compound leaves of a mesquite tree) may have bilateral symmetry. Stems and flowers have other geometrical arrangements. The animals comprise a relatively small fraction, less than one-tenth, of the biomass (total living matter) of the earth; the plants, about nine-tenths. On the other hand, the animals account for a relatively large fraction, roughly three-quarters, of all of the 1.6 million named species on the earth; the plants, less than one-fifth, according to Michigan University's Global Change Internet site. (Bacteria, fungi, protozoa, algae and other life forms make up comparatively small percentages of the biomass and the species population.) Compared with the animal and plant communities of, say, a highly productive tropical rainforest, those of our deserts, faced with limited and highly variable seasonal rainfall, punishing summer temperatures and organically impoverished soils, produce a disproportionately small part of earth's total biomass and biodiversity. (The total biomass, scientists estimate, equals more than a trillion tons of dry, or water-free, organic matter. The total number of species of animals, plants and the other life forms may range anywhere from 10,000,000 to 30,000,000, including both the ones known and those unknown to science.) As our deserts have evolved following the end of the last Ice Age, some 8000 to 10,000 years ago, the animals, plants and environment have woven a tapestry of complex, uneasy and often contrary relationships. Of course, the animals, whether herbivores, carnivores or omnivores, depend totally on the plants, the foundation of the food chain, for survival. Simultaneously, the plants depend totally on the unpredictable desert environment: the availability and timeliness of moisture, the intensity of seasonal temperatures and the organic richness of the soil. Mother Nature, on the other hand, follows her own agenda, with utter disregard for the desert animals or plants. Capricious and whimsical, she produces an ever-changing mosaic of "micro-climates," or ephemeral localized climatic conditions spawned by irregular "pulses" of rainfall sometimes followed by high heat and winds.
Typically, she delivers most of her annual rains during the monsoonal seasons of late summer in the Chihuahuan Desert, late summer and winter in the eastern Sonoran Desert, and winter in the western Sonoran Desert and the Mojave Desert. In a late-summer thunderstorm in the Sonoran Desert, she may pound a talus slope along a mountain range with a torrential rain, producing a surge of water that rushes away before it can soak into the soil. At the same moment, she may leave a neighboring slope totally dry, teasing with towering cumulus clouds and a brilliant rainbow. She may bring a more gentle and soaking rain to a drainage basin, favoring established plants with radiating root systems but essentially ignoring seeds of species that are not prepared for germination. Other times, she brings no rain at all. In the hottest of our deserts, she routinely raises the midday temperature of the air in summer to more than 120 degrees Fahrenheit and soil temperature to 150 to 180 degrees. She may turn up the winds of spring to gale-force levels, raising dust clouds that envelop lower mountain ranges and accelerating already high water evaporation rates. By inhibiting the prosperity of the animals and plants, she limits the organic richness of the soils. Mother Nature makes the survival of the animals and plants of the desert a game of chance.
Adaptation and Survival
During periods of prolonged drought and heat, the animals, especially those with no access to free-standing water, can become severely tested. Herbivores and omnivores may have to depend heavily on plants for moisture, taken from the tissues, fruits and flowers. Carnivores and omnivores may depend on prey for moisture. Scavengers such as the Turkey Vulture may depend on carrion for moisture. Some, for instance, beetles, have a hard shell encasing their bodies, helping them preserve their store of moisture. As summer sets in, smaller animals look to the shade of plants or to the shelter of burrows to escape the desert heat. Some larger animals, for instance, mountain sheep, may turn to the coolness of natural caves. Other animals, for instance, the Black-tailed Jackrabbit with its strikingly large, heat-dissipating ears, rely on physiological adaptations to help cope. Highly mobile animals, including numerous birds and larger mammals, simply migrate to areas that promise more water and cooler temperatures. By comparison, the desert plants, immobile and fully exposed, have developed several basic strategies for survival. Some, for example, the cacti, yuccas and agaves, endure drought and heat by conserving and rationing water within spongy tissues encased in waxy coatings. Other plants, for instance, some of the shrubs, avoid drought and heat by shedding leaves and twigs so they can reduce their need for water, or they put down deep tap roots in a reach for ground water. Still other plants such as grasses and forbs (non-woody plants other than grasses) escape the drought and heat by racing, when Mother Nature does deliver timely and sufficient rainfall, to produce prolific crops of seeds, prudently banking them in the surrounding soil to await the next timely and sufficient rainfall, perhaps years later. From year to year in the desert, the animals depend on a variable and uncertain plant menu to survive, creating a dynamic and constantly changing food chain.
Our Deserts’ Animal Population Broadly, our deserts’ animal population, like all of earth’s animal population, falls into one of two main groups, the invertebrates those without backbones and the vertebrates those with backbones. Our desert invertebrates, stunningly complex in their diversity, include, for a few examples, the arthropods (insects, spiders, scorpions, centipedes, millipedes, desert shrimp and many others), the mollusks (snails) and annelids (segmented earthworms). Our desert vertebrates consist of representatives from all five of the best known categories: reptiles, amphibians, fish, birds and mammals. Our native invertebrates include perhaps 10,000 to 20,000 known species of arthropods, several dozen species of mollusks, and the communities of earthworms. The native vertebrate population comprises more than 100 species of reptiles, perhaps two dozen species of amphibians, several dozen species of freshwater fishes, over 500 species of birds and well over 100 species of large and small mammals. Some Invertebrates of the Desert The number of species of insects far exceeds the number of species of all other animal life in the desert combined. In a single example, the “University of Arizona insect collection has more than 13,000 identified species of only Arizona insects,” Floyd Werner and Carl Olson said in their 1994 book Insects of the Southwest. “There are many more that we have been unable to name or are waiting description.” The insects have forged a labyrinthine web in the food chain. Most herbivorous species feed on a few related plants throughout their lives. Others feed on a wide selection of plants. Carnivorous insects, including the predators, blood-suckers and parasites, feed on animal tissue. The spiders of the desert, eight-legged carnivorous arthropods that total roughly 1000 species, “can create fear and hysteria in movies and homes,” but they “really are gentle predators,” Werner and Olson said. The spiders do, however, have a strange way of expressing their gentleness. Most trap, ambush or attack insects or other spiders, injecting them with a venom that liquefies the insides, which become a nutritious cocktail for the predator. Tarantulas, the largest spiders in the desert, prey not only on insects, but also small reptiles (sometimes including even young poisonous snakes), amphibians and even mammals. In some species, female spiders, in an act of feminine cannibalism, prey gently on their males. The scorpions, with an ancestry dating back hundreds of millions of years, include “many species” in the Southwest, according to Werner and Olson. Most, according to the University of California at Berkeley Museum of Paleontology Internet site, “are nocturnal, hiding under rocks, in crevices, or within burrows during the day, and coming out after sunset” to hunt. Primarily, the scorpions eat insects, using powerful pincers to catch and crush their prey. Amazingly, the scorpions, supremely adapted to the desert environment, can survive by eating as little as one insect a year, according to Brian Handwerk, writing in National Geographic News, June 24, 2003. 
They have the uncanny ability, he said, to reduce their need for food by slowing “their metabolism to a third of the rate of another typical arthropod” The various species of centipedes and millipedes, with their segmented and elongated bodies and multiple legs, would seem to hold much in common, but they have fundamental differences, as Werner and Olson point out, and they play quite different roles in the desert food chain. The centipedes, swift carnivorous creatures typically three to six inches long, have fairly flat bodies with a single pair of legs on each segment. The larger species may have a pair of fanglike claws actually modified legs near their mouths that they use for injecting venom into their prey or, for that matter, into unwitting human beings. Nocturnal, the centipedes remain secreted under rocks or within burrows during the day, emerging to hunt at night, seeking out, for instance, beetles and other insects. By comparison, the millipedes, slow-moving herbivorous or scavenging animals typically three to six inches long, have fairly cylindrical bodies with two pairs of legs on each segment. They have no poisonous claws or fangs or stingers, but they do have orifices along the sides of their bodies that emit evil-smelling chemicals they use to repel predators. Normally secretive, millipedes feed on plants and organic material, but they come out after a rain to celebrate the event. Desert Shrimp, which live in ephemeral playas and water holes, rank as true crustaceans, like the shrimp, crabs and lobsters of the oceans. The Desert Shrimps’ eggs, provided they dry completely, hatch in vast numbers when rain brings water to their playas and water holes. Adults, depending on the species, range from a half inch to two inches in length. Omnivores, Desert Shrimp eat fungi, algae and microscopic organisms. Remarkably adapted to the desert, they produce eggs that may lie desiccated for years awaiting the hatching cues prompted by rainfall. Some species breathe through their feet, where gills are located. Their great numbers following a hatch attract large populations of waterbirds during migratory seasons. The shrimp die as their water evaporates. Snails, members of the mollusks, occupy widely diversified environments. They live in mountain ranges, rock slides, ephemeral water holes and the deserts’ few permanent springs. Ranging from a mere speck to thumbnail in size, they likely descended from species that covered wide areas of the Southwest during the Ice Ages. Constrained by limited mobility and sensory systems, they have, in many instances, evolved into species unique to their restricted individual habitats. “The average snail moves at a speed of 0.0000362005 miles per hour [roughly five feet a day],” according to the AmusingFacts.com Internet site. Desert snails survive the heat and drought by taking refuge in stony crevices or burrowing into mud, relying on their shells to preserve their moisture until the next rains bring more water. “They will withdraw into their shells, and hibernate or sleep, for as much as 2-3 years, until conditions improve,” says AmusingFacts. Snails feed on plants, fungi and plant detritus, and they serve as prey for several animals. 
“Worms,” said Charles Darwin in The Formation of Vegetable Mould, the last of his books, “have played a more important part in the history of the world than most persons would at first suppose.” The earthworms’ ancestors have been stirring the soil of the earth for perhaps 120 million years, according to the spring 2004 edition of the Utah Agriculture in the Classroom Bulletin. In the desert, the earthworms live, not in the organically poor desert sands, but primarily in the richer riverine floodplains where, daily, each worm can ingest its weight in decaying organic materials and minerals, converting them into nutrients, enriching the soil. Numbering as many as hundreds of thousands per acre, earthworms not only contribute in a major way to increasing the fertility of the soil, they also serve as an important food source for a diverse array of other animals, including the vertebrates. Some Vertebrates of the Desert Like all reptiles, those of our deserts, including snakes, lizards, turtles and tortoises, have thick scaly skins, an especially valuable feature for terrestrial species because it inhibits the loss of water. They eat less than comparable sized mammals because they have slower metabolic rates. The several dozen species of snakes, including at least 10 rattlesnakes and the Arizona Coral Snake, all feed on other animals. Their prey, depending on their species, ranges from small mammals to birds, reptiles, amphibians, insects and even centipedes. The various lizards, many of them active during the day even during the desert summer, eat a wide range of foods. Most prey on other animals, especially insects, although some eat other vertebrates. A few, for instance, the Chuckwalla, primarily eat plants. The fearsome-looking Gila Monster scavenges, feeding on the new-borns of small mammals, birds and reptiles. The some half-dozen turtles and a tortoise live in diverse environments. Some live in the few waterholes of the desert, feeding on animals such as snails, tadpoles, worms and aquatic insects. The Desert Box Turtle, an omnivore and scavenger, lives in the open grasslands, feeding on plants, insects, worms, reptile eggs and carrion. The endangered Desert Tortoise, 10 to 15 inches in length, leads an entirely terrestrial life, feeding on various cacti, herbs and grasses. The amphibians, which include a relatively few frog species and salamanders, inhabit the deserts’ occasional streams and ephemeral ponds, where they find the moisture they require for breeding. The frogs, primarily toads and spadefoots, have developed several distinctive adaptations for survival in the desert. For instance, during drought, the Couch’s Spadefoot may excavate a two-foot-deep burrow, where it can spend two or more years in a dormant state, according to James A. MacMahon in his book Desert. When rain finally comes, the spadefoot replenishes its need for moisture, takes a position at an ephemeral pond, issues a reverberating call for a mate, consummates one or two nights of romance, and quickly produces a new generation of tadpoles. The adults eat enough insects to meet their nutritional needs for another period of dormancy. The tadpoles eat plant and animal matter and even each other should resources be limited. The three- to six-inch-long Tiger Salamanders, the most common in our deserts, live on the desert floor, occupying their own burrows or appropriating other animals’ burrows. Cued by monsoonal rains, they head for the nearest water to breed. 
Voracious, night-feeding carnivores, they prey on insects, spiders, earthworms, other amphibians and small mammals. The several dozen native fishes of the desert Southwest live in the Colorado River drainage system, the Rio Grande drainage system or the rare permanent springs. "The fishes in these communities range from long-lived, large-bodied fishes found in large, highly variable rivers to small specialized fishes that have been isolated for thousands of years in relatively stable environments," according to the U.S. Geological Survey Internet site, Science for a Changing World. Like their terrestrial vertebrate brethren, they have had to develop adaptations for surviving in the desert environment. Desert fishes can, for instance, tolerate wide fluctuations of temperature, mineralization and oxygen content. In fact, says MacMahon, the desert pupfishes "have survived at the lowest oxygen concentration known for any fish…" The larger species may prey on smaller fish and aquatic insects, and the smaller, for instance, the pupfish, feed on algae, detritus and aquatic invertebrates. Unfortunately, the native fishes of our deserts rank among the most imperiled in the United States. Their range and water quality have been altered by dams in the Colorado and Rio Grande drainage basins. They suffer from predation and competition from introduced species. For a specific example, according to the Phoenix Zoo's Mike Demlong, Conservation Spotlight: Desert Fish, "the Bonytail Chub is the most endangered fish in the Colorado River Basin, perhaps in the entire United States." Across the Southwest, says the USGS, 85 percent of the fish fauna are threatened in Arizona; 72 percent, in California; 30 percent, in New Mexico, and 42 percent, in Utah. Our desert bird population, with maybe 500 species, mirrors the diverse, intersecting environments of the Southwestern landscape. They range in size from the Black-chinned Hummingbird, with a wingspan of perhaps three inches, to the Sandhill Crane, with a wingspan of perhaps four feet. They vary in color from the American Goldfinch, with a bright yellow body, to the Curve-billed Thrasher, with a dull grayish brown body. Some, for instance, the quails, stay close to home all their lives. Others, for instance, the Black-chinned Hummers and the Snow Geese, migrate hundreds to thousands of miles every year to spend a season in the desert. According to MacMahon, the birds of the desert cope with the heat and drought by capitalizing on physiological adaptations, feeding in the early mornings and late afternoons or (for the large soaring birds) flying at higher and cooler altitudes. They find water in plants or in drainages or in ponding areas. They feed on a range of foods as varied as their sizes, colors and behaviors. The hummers sip the nectar from the flowers of the desert blooming season. The herbivorous White-winged Doves, abundant across much of the desert brushlands, eat the seeds of the ephemeral plants and the fruits of prickly pear cacti. The carnivorous American Dippers, which may appear at streams issuing from the mountains into the desert during the winter months, feed on aquatic animal life at the bottom of the rushing waters. The carnivorous Roadrunner feeds on arthropods, reptiles, rodents and other bird species' nestlings. The carnivorous Golden Eagles feed on Black-tailed Jackrabbits and other large rodents. The opportunistic omnivore Common Raven, or Crow, feeds on seeds, insects, small rodents, garbage and carrion.
The scavenging Turkey Vulture, so elegant in its soaring flight, eats the rotting flesh of dead animals. While some stay active through the day, the mammals, the fur-bearing vertebrates that nurse their young, really take center stage in the desert during the cooler hours from late afternoon through the night into the early morning. Most turn to burrows and natural shade as shelter from the fierce midday summer heat. The smaller desert mammals, like the Black-tailed Jackrabbit with its large ears, rely heavily on physiological adaptations to cope with the desert. The Merriam's Kangaroo Rat, for another example, has kidneys designed to reabsorb water before urination, according to MacMahon. Many small mammals have slow metabolic rates, slowing the use of water. Under periods of high stress, the smallest rodents can go into an energy- and water-saving torpor. The larger mammals can follow a different strategy for desert survival. With much greater range than their smaller relatives, they can travel miles to reach streams and ponds to meet their water needs. Their greater mass tempers the rise and fall of body temperatures. Like the birds, mammals feed on a wide range of foods. Bats, for instance, depending on the species, feed on nectar and insects. The rodents, depending on the species, eat seeds, nuts, plant matter and arthropods (including scorpions). The nocturnal, carnivorous Ringtail, said MacMahon, "ambushes prey, then pounces, forcing the prey down with its paws and delivering a fatal bite to the neck. Its diet includes grasshoppers, crickets; small mammals, small birds; fruit, spiders, and frogs." The skunks, omnivores, eat vegetable matter, insects, bird eggs, amphibians, and small mammals. Badgers eat small mammals. The Raccoon "will eat almost anything." The Collared Peccary can lay waste to a stand of prickly pear cacti, thorns and all. Coyotes, like Raccoons, will eat almost anything. Pronghorns graze on grasses, forbs, cacti and, in winter, sagebrush. Mule Deer browse primarily on a wide range of woody plants. The diversity of the animal life in the punishing venue of our Southwestern deserts validates the resourcefulness of nature. As the eminent naturalist Roy Chapman Andrews said in his book Nature's Ways: How Nature Takes Care of Its Own, "one of the most fascinating aspects of nature is the way it equipped every creature, be it of high or low degree, to withstand enemies and to obtain the necessities of life. Some animals had to change their entire physiology or anatomy to enable them to meet competition and to survive; more often less drastic adaptations in skin, color, or habits made the difference between life and death of a species in the struggle for existence."
Part 1 Desert Food Chain - Introduction
Part 2 Desert Food Chain - The Producers
Part 3 Desert Food Chain - The Cacti: A Thorny Feast
Part 4 Desert Food Chain - The Yuccas
Part 5 Desert Food Chain - The Agave
Part 6 Desert Food Chain - Desert Grasslands
Part 7 Desert Food Chain - Desert Shrubs
Part 8 Desert Food Chain - The Annual Forbs
Part 9 Desert Food Chain - Mavericks of the Desert Plant
Part 10 Desert Food Chain - Outlaw Desert Plants
Part 11 Desert Food Chain - Animals: The Consumers
Part 12 Desert Food Chain - The Insects
Part 13 Desert Food Chain - The Ugly, the Uglier and the Ugliest
0.9554
FineWeb
3.921875
From Wikipedia, the free encyclopedia CCS may refer to: - Calculus of communicating systems, a process algebra (concurrency theory) - Coded Character Set, a function from a subset of non-negative integers to characters. - Code Composer Studio, an integrated development environment for embedded systems by Texas Instruments - Common-channel signaling, a type of communication signaling - Call Control Server, a server between terminals in telecommunications - Capsanthin/capsorubin synthase, an enzyme - Chinese Chemical Society (Taipei), a scholarly organization - Critical community size, parameter used in vaccination and eradication campaigns - Callahan's Crosstime Saloon, a series of novels and vignettes by author Spider Robinson. - Cardcaptor Sakura, a magical girl manga and anime series created by Clamp. - Collective Consciousness Society, a British popular music group led by blues guitarist Alexis Korner, more often known by its initials CCS. - College for Creative Studies, Detroit, Michigan - Caroline Chisholm School, England - Cowbridge Comprehensive School, Wales - Calvary Christian School - Cornway College a private, co-educational, day and boarding school in Zimbabwe - Coventry Christian Schools - Cases Computer Simulations, video game company specializing in strategy and war games - Community Combat System, a "meter" used to allow Second Life roleplay to behave more like traditional MMO and video games. - Candy Crush Saga, a video game that was released on April 12, 2012 for Facebook, and then released on November 14, 2012 for smartphones - Cellular confinement systems (geocells), a honeycombed geosythethetic matrix filled with granular material used for soil confinement, stabilization and reinforcement. - Cabinet Committee on Security, a committee of Cabinet of India which decides on important issues such as India's defence expenditure and matters of National Security. - Centre de Coordination et Surêté, Government of Québéc - Canadian Cardiovascular Society - Ceylon Civil Service - China Communications Services - China Classification Society - Circuit City Stores, USA and Puerto Rico - Combined Chiefs of Staff, supreme military command for the western Allies during World War II - Communist Corresponding Society, a Marxist discussion and policy-development group - Comparative Cognition Society, a scientific society for the study of animal cognition and comparative psychology - Computer Conservation Society - Consumers Cooperative Services, a network of New York City consumer cooperatives founded in the 1920s - Crown Commercial Service, an executive agency and trading fund of the Cabinet Office of the UK Government - Simón Bolívar International Airport in Caracas, Venezuela - IATA airport code - Canadian Cue Sport Association - CC Sabathia, baseball player - Cardiff City Stadium, Home of Cardiff City FC - Central Coast Section part of the California Interscholastic Federation - Championship Cup Series, Motorcycle Road Racing sanctioning body - Capital City Service, Football hooligan gang attached to Hibernian F.C. - Combined charging system - A plug standard for fast charging vehicles |This disambiguation page lists articles associated with the same title. If an internal link led you here, you may wish to change the link to point directly to the intended article.
0.6499
FineWeb
2.0625
Memory exists not only as a personal creation we form as the result of what we can call “real life experiences”, but exists as the very basis for knowledge itself as ideas that can be used to create realities. There’s a two-way flow between a reality becoming a memory, and a memory becoming a reality. We often form a memory from an event in our life that has a strong emotional impact, that we then use over and over throughout our life as a form of perceptual filter and template for creating more of the same ‘type’ of realities. The memory itself isn’t formed as an exact recollection of the factual details of what happened, but rather as the story we told ourselves about what happened and why, and what it meant about us and the world in general as a result. We make up stories about the events of our life as a means of trying to process and make sense of them, and the story we form as the meaning and significance of the event becomes a kind of life theme that the memory itself merely serves to represent. Once a memory is formed from an intense emotional experience, and is turned into a kind of template of meaning as a perceptual filter, and we begin referencing it on a regular basis as the means of providing us with an instant interpretation for all current events of a similar nature, it becomes an habitual perception as a life theme that we use to form our routine experiences of life. In a like manner, what we call ‘Universal knowledge” exists also as a form of memory in the Akashic or Unified field of infinite potential as universal archetypes or principles. Archetypes are a form of memory as ideas that are holistic in nature as a hologram that provides the perceptual lens and template for creating realities out of that provide us with certain types of life experiences. An archetype serves as a generic prototype that we absorb into our mental field and reform into a personalized version by how we adapt and integrate it into our mental paradigm as applied to our life conditions and circumstances. When ideas “come to us” out of nowhere, they rise up in our mind and play out in much the same way as a normal memory is recollected. They come to us as a form of living concept or scenario that’s a potential pattern for using as a theme to produce a similar form or reality out of. Archetypes as universal memory exist in the causal field as generic prototypes for utilizing at the personal level to create experiences that produce personal memory of the same kind. An archetype is an “idea” in its whole unified form that contains an overall pattern as a story complete with attributes and qualities in varying degrees that form characteristics that produce a form of personality with distinct behaviors. In beings bestowed with the capacity of higher conscious, an identity emerges out of the personality, that “creates” by telling a certain type of story as the expression of its character or inner nature within specific conditions, situations, and life circumstances. A personal memory that’s produced by either an actual life experience or by applying an archetypal memory to produce a life experience, that’s then used in a habitual manner as a theme for creating more experiences of the same nature, creates a ‘thought-form’. Thought-forms are produced by the modified adaptations of a universal theme as a personal and unique version, that’s imagined, concentrated on with intensity, and embodied through action that forms just like a universal archetype. 
Thoughts as ideas that are imagined as full sensory experiences with strong emotions (made real) produce an electromagnetic field that forms into a holographic form or reality that serves to populate the astral plane of the collective unconscious as a memory that can be accessed, absorbed, and utilized by others through morphic resonance. Those of a similar nature (vibratory frequency) are able to accommodate the same type of thoughts while mistaking them for their own. Anyone who is "tuned" to the same frequency can access, receive, and utilize the thought-form in the same way they utilize a universal archetype, or they perceive it as their own self-generated thoughts. All similar things influence each other through morphic resonance. Memory is not stored in the brain or body as many believe, but is stored in the soul as the morphic electromagnetic field that surrounds, permeates, and is bonded to the body, which is also directly connected to the greater mind-field of the astral plane of mass consciousness. Only those of the same species, kind, type, nature, and character as a cluster of accumulated memories are "tuned to the same frequency" and act as a "receiver" through resonance and sympathetic induction, which acquires the memory as a thought-form that it believes is its own or is self-produced. Thoughts come to us and play out in our mind in much the same way as our own memories do. It's an imaginary process that gives us a new pattern as an idea for acting out in our life in a congruent manner to create more memories of the same kind. The spirit, soul, and body are simultaneously unformed, evolutionary, and stabilized patterns. The spiritual plane, which is causal in nature, is unformed and exists as pure consciousness, raw information, archetypal ideas, and patterns in their free-flowing potential state. This is the unified field (Akasha) of pure information as archetypes and universal memory that are available for self-creating at the physical level of the soul and body. They can only create (become formed as something specific) in the material world through a physical channel, individual mind and will (life-force). The soul acts as a 'medium' between the spiritual and the physical, and is both of the spirit and form, whose constitution is comprised of accumulated memory that forms a unique variation as a multidimensional being with a three-fold nature. The soul walks between worlds, vibrating at the frequency of its integrated memory, where it draws on the archetypes of the spiritual realm, and acts as a channel and transducer for interpreting them through adaptation to its personal model, modifying them and creating a unique variation by how it builds them into its astral-etheric constitution to produce new forms of thought and behavior that modify and evolve its character to higher and more expanded levels of understanding. The entire physical world is created by a (the) soul. The soul in-forms and structures physical reality as a combination of archetypal themes that are correlated, coordinated, and congruent in nature. The soul provides the astral-etheric holographic blueprint that acts to fashion its own body, while tuning into and resonating with that same reality in the outer world. The body is grown, organized, and spatially structured through the etheric interface of the soul.
Its energetic memory as a series of dynamic interlaced patterns produces a morphogenetic electromagnetic field as a holographic, ghost-like image (invisible light) that provides the vibratory structure as a torsion field (life-force) that orchestrates the cellular regeneration, specialization, and spatial orientation in sync with the genetic code inherited from the parents, which is of the same memory-frequency of the soul utilizing it. They’re compatible in other words. The body as a cellular organism is given life, structured and produced, animated, and “held together” by the soul. Physical matter has no life or structure of its own without a soul. The soul is the morphic field of accumulated and integrated memory as a vibratory frequency that produces the “self-organizing mechanism” and tension-field for producing the body as its physical equivalent or twin. Matter is the passive and inactive (feminine) aspect of consciousness and only obtains the active and formative aspect (masculine) of consciousness by absorbing it and acting as a host for it, which is what serves to shape and animate it with natural behaviors. The subconscious mind of the body and DNA is passive and receptive in nature and is programmed with new ideas as patterns of formation and behavior by the higher, active mind of the soul. The soul exists in polar relationship with both the higher realm and the lower, and is receptive to the higher and active in the lower. The ideas that are “received” from the higher and transmitted to the lower, and then acted on repeatedly forms a memory that becomes habit. Once an idea is brought into practice consistently, and is “built into the muscle”, so to speak, it becomes natural and automatic. It becomes a permanent part of our nature as our soul’s essence. Humans are bestowed with higher capacities of the mind and have the ability to self-create by producing its own memory in an intentional and deliberate manner. The animal kingdom and the entire natural world operates strictly from memory as instinct, where all members of the same species and kind, behave in much the same way, and changes occur only on rare occasion where a new behavior is developed, that then makes that behavior available to all others of the same species, through what we call the collective unconscious, which is synonymous with the astral plane of the Earth, which is the “soul” of the Earth. Humans have the ability to produce thought-forms that serve to populate the astral plane with new memory that acts in much the same manner as universal archetypes do. These thought-forms are transmitted fluently throughout the astral plane where they can be picked up on and received or absorbed by anyone who is of the same kind and “tuned to” the same type of memory. Likewise, whatever thoughts as memories we dwell in and constantly recollect and relive in our imagination, we tune ourselves to, resonate with, and act as a receiver for more of the same type of thoughts. Because the tuning, receiving, and transmitting of thought-forms are all of the same nature, we don’t realize we’re being “given them” and usually mistake them as our own, much like we do universal archetypes that usually come through moments of inspiration, insight into something, or a “ah-ha” moment as a sudden realization. When we live out of our unconscious, lower, non-creative nature, we fail in the most basic sense to evolve through the incorporation of new and diverse thought, and can live our entire life out of our past. 
We have a natural tendency to use past memories to create present experiences in a habitual fashion. This is because there’s no memories we resonate with more than our own memories of the past, so we tend to consistently think, feel, behave, and perceive more of the same type of experiences. We only see our past in everything else. This is a feature of our lower, animal, subconscious mind which operates in a purely automatic and habitual fashion to bring the memories of the past into the present, using them to perceive, interpret, and produce more of the same type of behavior and experiences. We simply repeat the same themes, story-lines, and behavioral dynamics over and over, without realizing that through choice and free will, we can choose to intentionally break old habits and create new ones by embodying new qualities and ways of thinking and being. By providing ourselves with new knowledge and information that we first imagine as a reality and practical application that we then act on repeatedly, we create new types of experience that produce new types of memory, which when made into a habit, change our vibratory constitution and evolve us at the soul level according to those memories. By applying knowledge to form new memories, we not only see the world differently, but we use those memories to create our experiences in the present, and continue to produce more of the same type of experiences, accumulating to the point where they modify our existing model. However we intentionally act to shape our mental paradigm becomes our soul’s constitution which is eternal, and carried forward as natural tendencies, temperament, and disposition. It literally changes our personality and identity, which is how we self-create by way of our own self-expression and the corresponding creation of our outer world. As we change internally, our outer world changes as a direct correspondence. We tune into a different reality, act to consistently perceive that reality, and receive, process, and transmit more of that same type of reality. To change others and the world around us, we have to change ourselves and our inner world which is what’s producing it through morphic resonance. When we change our “mind” we change how everything in the outer world appears. We are tuned to a new frequency, organize the outer world to that frequency by what stands out and is enhanced and embellished, and what doesn’t. We begin noticing and perceiving different things. We look at the same thing in a new light and see something different as a result. We act to attract new things and begin interacting with the outer world in a whole new way. Old relationships and situations die and fade away, and new ones are birthed and come to life. We act on ourselves to grow and evolve ourselves to new types of form and reality. A memory as an archetype (prototype) is a frequency that’s comprised of different types of information as patterns (dynamic form) that have a self-organizing mechanism as life-force energy. Information itself is unformed and passive until it’s energized (charged) by life-force and organized into a living biological (fully functioning) form that’s self-sustaining and self-perpetuating. Whatever form we take on as archetypal forces becomes the memory of our soul, substance, and behavior, and tunes us to the vibration of that memory, where we connect and resonate with more of the same memory in everything else, while acting as a receiver and channel for it. 
The archetypal forces that make up our soul's nature, represented by the zodiac in various degrees and angles, form our mind as our paradigm and perceptual lens that structures the entire outer world to be of the same nature and memory as we are. The soul not only fabricates the body as its physical equivalent (temple and vehicle), but also forms the outer reality as a larger, more diverse and complex version of the same idea that sets the stage for acting it out. We only see in others and everything else, what's in us as memory. Our soul's memory forms our personality and all of our natural inclinations, propensities, and behaviors, which spontaneously births our identity, based on what we naturally associate with and relate to. We attract (and are attracted to), engage in, and act to facilitate more of the same type of relationships and experiences that are inherent in our memories. Energetically we're an antenna that's tuned into a frequency as accumulated memory that acts to receive, adapt and modify, and produce more experiences of the same kind as a life theme. Our life theme is expressed by the story we're always in the process of telling ourselves about things that give them meaning. We naturally act to produce a congruent and consistent version of reality as a story-line that plays out as a dynamic interactive theme. Vibration produces a morphic field as an electromagnetic field (spiraling motion that both pulls together and pushes apart, forming a stable pattern) as a holographic 3-D virtual image that organizes and concentrates essence (light) into a biological form that functions in a way that's natural to that form. Vibration creates this same basic structure on multiple levels simultaneously as an image, scenario, and greater reality that "in-forms matter" as astral light that becomes a holographic-etheric blueprint for growing, organizing, and spatially forming the physical equivalent. As we think, feel, and imagine, we shape light into the reality of our thoughts, imprinting the ether around us with that impression as a vibratory frequency that becomes our "signature" for creating as our style and how we "handle our subject".
Coins are a source of information much used by historians. Elaborately detailed mining landscapes on 16th-century German coins in the National Museum, discovered by the curator of numismatics and brought to the author's attention, led to this study of early mine-pumping devices. (Images not included - please see Project Gutenberg for illustrations.) W. B. Parsons, Engineers and engineering in the Renaissance, Baltimore, 1939. Abraham Wolf, A history of science, technology, and philosophy in the 16th and 17th centuries, New York, 1935; and A history of science, technology and philosophy in the eighteenth century, London, 1938. C. M. Bromehead, "Mining and quarrying to the seventeenth century," in Charles Singer and others, A history of technology, vol. 2, Oxford, 1956. According to Parsons (op. cit., footnote 1, p. 629) the introduction of machinery worked by animals and falling water, "radical improvements" of the 15th century, fixed the development of the art "until the eighteenth, and, in some respects, even well into the nineteenth century." Wolf in his History of science ... in the eighteenth century (p. 629, see footnote 1) agrees, saying that "apart from [the steam engine] mining methods remained [during the 18th century] essentially similar to those described in Agricola'
A network is a set of artifacts (or simply procedures) that allows network terminals to share information with each other. So, a tin can phone can without doubt be considered a kind of network. In the Hyperuranium network model (usually Hyperuranium citizens are software people and CEOs), a network is a complete graph. It means that every network end is connected to all other ends. Please have a look at the following picture:
|This isn't my grandma's doily. This is a complete graph.|
A network designed this way is perfect. When the network end that we call Joe wants to talk with the network end that we call Mary, he simply picks up the phone, presses the Mary button and talks. No resource competition is expected in a doily (sorry, complete graph) network model. On the other hand, if Mary breaks up with Joe, after the last sad phone call the edge that connects Mary and Joe becomes useless. It still needs to be maintained, but no traffic is going to travel over it, at least until Mary decides to give Joe a second chance (if she does). In the real world, we can't waste money over Mary and Joe's affair by maintaining useless network connections. A good compromise between traffic congestion and communication availability must be identified.
An interesting high-level network concept is the Circuit Switched Network. In this model, we don't have a full connection between network ends. If Mary wants to call James (yes, she decided to find an alternative), she calls the operator and asks to be connected to James's terminal. The operator asks for a line from an operator in James's city, both operators link up phone plugs, and a physical connection is established between Mary's and James's phones. When they end their call, the operators are free to disconnect the lines, which can then be used for other communications. This is a good model. When Mary and James aren't connected, operators and lines can be used for other communications. For this reason, the circuit switching model doesn't need a complete graph to connect network ends. Unfortunately, there are two problems: if Mary and James base their relationship on long, meaningful and romantic silences, the circuit established remains unused but cannot be used for other, more talkative users. It stays allocated for as long as the connection between Mary and James lasts. This phenomenon can induce the second problem: network congestion. If all circuits are assigned, no further resources can be assigned to other users.
|A fully racked set of pre-Cisco era switches with a couple of walking routers managing network traffic. Please note that old network elements were prettier than today's.|
In the circuit switching model, the network has solid logic and the peripheral terminals can be dumb objects. If we refer to the old telephone network, a couple of microphones and speakers were "physically connected" to each other with long, long cables (and repeaters). From the physical point of view, not so different from a tin can phone. The logic of communication is fully placed in the network (the operators) and no smart terminals are required. On the other side, we can imagine having a smarter terminal and a somewhat less smart network. In this case, we want the network ends to take on part of the work, splitting their data stream into small pieces (packets) embedded in an envelope containing information about the packet's source, the packet's destination, some wake-up signals, and an error control code.
In this way, the data network becomes like a road network where packets travel from source to destination, filling all available resources. Sender and receiver are the smart guys on a packet switched network. They assemble/disassemble packets, they add/check some error correction code and, sometimes, they add/decode a sequence number to be sure that the information will be read back in the right order. The network, on its side, does its best (it works at best effort). Traffic lights and circulation rules will still be necessary, but the circuit established between the ends is only "virtual". It means that resources will be fully used by multiplexing data over the physical connections. The intelligence at the network's ends and the routing rules managed by network elements together create a fascinating set of technologies based on information exchange models, routing policies and network service definitions. This will be discussed in future posts.
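As a rough sketch of that envelope idea (the field names and layout here are invented for illustration and don't follow any particular protocol), the sender side could split a byte stream into numbered packets with a checksum, and the receiver side could verify and reorder them, roughly like this:

```python
import zlib
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # packet's source
    dst: str        # packet's destination
    seq: int        # sequence number, so the receiver can restore the order
    payload: bytes  # one small piece of the original data stream
    checksum: int   # error control code computed over the payload

def split_into_packets(src, dst, data, size=4):
    """Sender side: cut the stream into small pieces and wrap each in an envelope."""
    chunks = (data[i:i + size] for i in range(0, len(data), size))
    return [Packet(src, dst, seq, chunk, zlib.crc32(chunk))
            for seq, chunk in enumerate(chunks)]

def reassemble(packets):
    """Receiver side: check every checksum, then restore the original order."""
    for p in packets:
        if zlib.crc32(p.payload) != p.checksum:
            raise ValueError(f"packet {p.seq} was corrupted in transit")
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

# The best-effort network may deliver packets out of order; the smart ends cope.
packets = split_into_packets("Mary", "James", b"long meaningful silence")
assert reassemble(list(reversed(packets))) == b"long meaningful silence"
```

Real stacks obviously add much more (addressing schemes, retransmission, congestion control), but the division of labour is the same: a dumb-ish network in the middle, smart ends doing the bookkeeping.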
Health and Wellbeing St Maria Goretti Catholic Academy operates a healthy food policy. Pupils in Foundation Stage and Key Stage 1 are given fruit every day. Children in Key stage 2 may also bring fruit for playtime snacks. In addition through the DFE funded Magic Breakfast programme, the school also provides a nutritional breakfast for every pupil in the form of a bagel or cereal. The Magic Breakfast pupil leads ensure this is distributed equally across the school and in classes where pupils require an additional breakfast. Through our Health and Wellbeing week, each year the school focuses upon healthy lifestyle choices inclusive of a balanced diet and physical exercise. Diet and Nutrition According to advice from the Food Standards Agency, a healthy packed lunch should include: - Meat, fish or a dairy source of protein - Starchy carbohydrate, such as a wholegrain sandwich, to provide energy - At least one portion each of a fruit and vegetable or salad - Water or milk to drink, but diluted fruit juice and yogurt drinks or smoothies are acceptable The key foods to avoid are: - Sweets and chocolate - Snacks, like crisps, with added salt/sugar/fat - Sugary and fizzy drinks - Deep-fried foods and processed meats - White bread - if children won't eat brown, try wholemeal/white combined sliced bread Chocolate, fizzy drinks and sweets etc. are not allowed. The Children's Food Trust - an independent body set up to advise schools on healthy eating - says there are no plans to issue statutory guidance on packed lunches, but it has produced some sample lunchbox menus. Click on the link below to see ideas for healthy lunch boxes. Helpful website to provide cheap healthy meals for families. Health & Wellbeing Personal, Social and health Education helps to give pupils the knowledge, skills and understanding they need to lead confident, healthy, independent lives and to become informed, active responsible citizens. Through PHSE, we endeavour to foster the notions of responsibility and empowerment to promote a sense of achievement and to enhance self-confidence. All PSHE sessions throughout the school will promote positive mental health and wellbeing. PSHE education is guided by the values of: At St Maria Goretti Catholic Academy, we aim to promote positive mental health and wellbeing for our whole school community (Children, staff, parents and carers), and recognise how important mental health and emotional wellbeing is to our lives in just the same way as physical health. We recognise that children’s mental health is a crucial factor in their overall wellbeing and can affect their learning and achievement. All children go through ups and downs during their school career and some face significant life events. The UK had the least mentally healthy children in Europe (UNICEF). In 2017, about 1 in 10 children aged 5 to 16 have a diagnosable mental health need and these can have an enormous impact on quality of life, relationships and academic achievement. In many cases it is life-limiting. Half of mental illnesses in adults first occur before the age of 14 years. Mental health patterns are laid down in childhood and this will determine a child’s mental health throughout their life. The Department for Education (DfE) recognises that: “in order to help their children succeed; schools have a role to play in supporting them to be resilient and mentally healthy”. 
Schools can be a place for children and young people to experience a nurturing and supportive environment that has the potential to develop self-esteem and give positive experiences for overcoming adversity and building resilience. For some, school will be a place of respite from difficult home lives and offer positive role models and relationships, which are critical in promoting children's wellbeing and can help engender a sense of belonging and community. According to Barnardo's (2015), 7 in 10 children are not getting help when they need it. Our role in school is to ensure that children are able to manage times of change and stress, and that they are supported to reach their potential or access help when they need it. We also have a role to ensure that children learn about what they can do to maintain positive mental health, what affects their mental health, how they can help reduce the stigma surrounding mental health issues, and where they can go if they need help and support. Our aim is to help develop the protective factors which build resilience to mental health problems and to be a school where:
- All children are valued.
- Children have a sense of belonging and feel safe.
- Children feel able to talk openly with trusted adults about their problems without feeling any stigma.
- Positive mental health is promoted and valued.
- Bullying is not tolerated.
- Children who require support to manage their mental health will be identified and supported quickly.
- We promote the importance of staff mental health and wellbeing.
Editor's importunate role towards medical journalism in 2016 - way to go! "Man needs his difficulties because they are necessary to enjoy success" – APJ Abdul Kalam. "As an editor you know your heart and soul are stapled to that manuscript but what we see are the words on the paper" – all needing to be given a framework. A medical journal provides a gist of recent advances of medical science in that branch of medicine. Through its contents, it provides a status of local health needs, problems and issues; it encourages research; it creates a platform for dialogue and prepares future researchers, editors and academicians. These duties of a journal are given shape by the editor. How does he perform so formidable a task? Salient Features of an Editor's Responsibility: The most daunting task is to create an informative and attractive website along with a foolproof, smooth system of online submissions and their management. The next is creating a robust editorial board with faculty who have more than a decade of editing experience! This is the one task of the editor in which friends become foes and vice versa. It is imperative at all times to keep a double-blind peer review system in place. Rejected articles from other journals, taken on by your journal, are not a bane – they arrive after revision against the comments that led to rejection and undergo a double-blind peer review, which accepts them. Unfortunately, though we have high-end plagiarism software, we do not yet have a tool for detecting an article rejected elsewhere being submitted to your journal! Is it a boon or a bane? Only time will tell!! As all eyes remain glued to your performance, an editor is at his wits' end to keep contents ahead of print, have submissions sent with revisions to the publisher on time, and have hard-hitting editorials on board in their journal. Read full article at: Annals of Cardiac Anesthesia
Understanding the phylogeography and genetic structure of populations, and the processes responsible for the patterns therein, is crucial for evaluating the vulnerability of marine species and developing management strategies. In this study, we explore how past climatic events and ongoing oceanographic and demographic processes have shaped the genetic structure and diversity of the Atlanto-Mediterranean red starfish Echinaster sepositus. The species is relatively abundant in some areas of the Mediterranean Sea, but some populations have dramatically decreased over recent years due to direct extraction for the ornamental aquarium and souvenir industries. Analyses across most of the distribution range of the species, based on the mitochondrial cytochrome c oxidase subunit I gene and eight microsatellite loci, revealed very low intraspecific genetic diversity. The species showed a weak genetic structure within marine basins despite the a priori low dispersal potential of its lecithotrophic larva. Our results also revealed a very recent demographic expansion across the distribution range of the species. The genetic data presented here indicate that the species might be highly vulnerable due to its low intraspecific genetic diversity.
Expand LTE Network Spectrum with Cognitive Radios -- From Research to Implementation (c) 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. Wireless data traffic is growing extraordinarily, with new wireless devices such as smartphones and bandwidth-demanding wireless applications such as video streaming becoming ever more popular and widely adopted. Correspondingly, we have also witnessed the phenomenal evolution of wireless technology to support higher system capacities from generation to generation. Long Term Evolution (LTE) has been developed as a 4G wireless technology that can support next-generation multimedia applications with high capacity and high mobility needs. However, the peak data rate from 3G UMTS to 4G LTE-Advanced increases only 55% annually, while global mobile traffic grew 66 times, an annual growth rate of 131%, between 2008 and 2013. Clearly there is a huge gap between the growth rate of the new air interface and the growth rate of consumers' needs. A promising way to alleviate the contention between the actual traffic demands and the actual system capacity growth is to exploit more available spectrum resources. Recently, cognitive radio (CR) technology has been under extensive research and study. It aims to provide abundant new spectrum opportunities by exploiting under-utilized or un-utilized spectrum opportunistically. In this paper, we discuss technical solutions to expand LTE spectrum with CR technology (LTE-CR) and survey the advances in LTE-CR from both research and implementation aspects. We present detailed key technologies that enable LTE-CR in TV white space (TVWS); we have conducted extensive system-level simulations and also developed an LTE-CR prototype. Both simulation and lab testing results show that applying LTE-CR in TVWS can achieve satisfactory performance.
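A quick back-of-the-envelope check of those growth figures (my own arithmetic, not from the paper): 131% annual growth compounded over the five years from 2008 to 2013 is indeed roughly a 66-fold increase, while 55% annual growth in peak data rate compounds to only about a 9-fold increase over the same span, which is the gap the authors point to.

```python
# Compounded growth over the (assumed) five-year span 2008-2013.
years = 5
traffic_growth = (1 + 1.31) ** years     # 131% per year for mobile traffic
peak_rate_growth = (1 + 0.55) ** years   # 55% per year for the peak data rate

print(f"mobile traffic: ~{traffic_growth:.0f}x")    # ~66x, matching the quoted figure
print(f"peak data rate: ~{peak_rate_growth:.0f}x")  # ~9x over the same period
```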
This week we are gearing up for the start of school next week. We will be meeting L's teacher tonight for the first time so we made her a handmade gift. This toilet paper tube pencil has a chocolate bar hidden inside. You never know when the teacher might have a bad day and need a bit of chocolate!
How To | Toilet Paper Tube Pencil
- Toilet Paper Tube
- Yellow Paint
- Paint Brush
- Clear Tape
- Masking Tape
- Black Book Binding Tape or Electrical Tape
- 4″ Square of Pink Tissue Paper
- Letter Stickers
- Download: Pencil Tip Template

- Paint toilet paper tube yellow and let dry. (Pic. 1)
- Download and print Pencil Tip Template. Cut pencil tip from paper (I used recycled newsprint to get the right color). Shade the tip with a pencil to make it look like a real pencil tip.
- Form a cone shape with the paper pencil tip. Use clear tape to hold it together and secure it to one end of the toilet paper tube.
- Cut a 6″ piece of masking tape. Cut the long edge of the masking tape into a zig-zag pattern. (Pic. 2)
- Wrap masking tape around the toilet paper tube where the pencil tip and the tube meet. (Pic. 3)
- Use a pencil to draw lines down the side of the toilet paper tube. (Pic. 4)
- Cut out a 4″ square of pink tissue paper. To form the eraser tip, wrap the tissue paper over the open end of the toilet paper tube. Wrap a piece of masking tape around it about 1.5″ from the top. (Pic. 5)
- Remove the tissue paper eraser from the end of the tube. Trim off the extra tissue paper that is below the masking tape. (Pic. 6)
- Add candy or a chocolate bar to the inside of the pencil tube. (Pic. 7)
- Slide the tissue paper eraser back over the end of the pencil tube. (Pic. 8)
- Secure the eraser to the pencil tube using black tape.
- Add the teacher's name using letter stickers.
- Add a tag or note that reads, "If you have a stressful day, break this open for a chocolate treat."
Plastics: Microstructure and Engineering Applications
Compact discs, cycle helmets and gas pipelines - these are diverse applications of plastic materials, each involving a unique engineering analysis. 'Plastics' highlights these examples, and its thorough and balanced treatment will provide students, engineers and scientists with a solid foundation for the analysis and development of many other plastic products. In this edition, the material has been reordered to emphasize the structural divisions of topic areas and to support teaching on practically oriented courses. Major new sections on materials selection and on the three design case-studies have been added and each chapter includes questions to test the reader's understanding.
a genus of perennial bulbous plants of the family Amaryllidaceae. The leaves are wide and oblong. The flowers are usually brightly colored and gathered in umbellate inflorescences. More than 50 species are found in tropical and southern Africa. Some species are grown in hothouses and rooms. The most popular species are H. coccineus with red flowers, H. katherinae with bright orange flowers, and H. albiflos with plain whitish flowers and pulpy strap-shaped leaves.
Arginine and vasodilation For example, arginine is the rate-limiting amino acid in the synthesis of nitric oxide (NO), a gaseous substance that causes blood vessels to dilate. It’s well known that increasing the bioavailability of NO improves vasodilation and blood pressure, but emerging evidence suggests exercise efficiency and performance may benefit as well. In one study, healthy men performed two separate exercise cycle tests one hour after consuming either 6 grams of arginine or a placebo. Arginine supplementation reduced the amount of oxygen required to perform exercise (i.e. increased exercise efficiency). This means that individuals accomplished the same exercise load but with less energy expended. Arginine supplementation also increased time to exhaustion by 26% during high intensity cycling. Arginine and growth hormone Other work has shown that arginine supplementation may increase growth hormone (GH). When arginine is infused directly into a vein, GH levels increase dramatically. In fact, arginine infusion is used clinically as a diagnostic test when GH deficiency is suspected. The dose of oral arginine needed to increase GH levels appears to be at least 5 grams, with larger responses shown with 9 grams. Arginine and muscle gains In a different study, a combination formula consisting of 7 grams of arginine, 1.5 grams of HMB, 7 grams of glutamine and 3 grams of taurine resulted in striking improvements in body composition. Compared to a control group who received a placebo, healthy young men who took these supplements during 12 weeks of heavy resistance training showed a 10-pound greater increase in lean body mass.
Volume 577, May 2015 | Number of pages: 10 | Section: Galactic structure, stellar clusters and populations | Published online: 19 May 2015
The role of neutron star mergers in the chemical evolution of the Galactic halo
Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16
2 INAF, Osservatorio Astronomico di Bologna, via Ranzani 1, 40127 Bologna, Italy
3 Dipartimento di Fisica, Sezione di Astronomia, Università di Trieste, via G. B. Tiepolo 11, 34143 Trieste, Italy
4 INAF, Osservatorio Astronomico di Trieste, via G. B. Tiepolo 11, 34143 Trieste, Italy
5 INFN, Sezione di Trieste, via A. Valerio 2, 34127 Trieste, Italy
6 Astrophysics Group, Keele University, Keele ST5 5BG, UK
7 Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo, 277-8583 Kashiwa, Japan
Received: 20 January 2015 / Accepted: 6 March 2015
Context. The dominant astrophysical production site of the r-process elements has not yet been unambiguously identified. The suggested main r-process sites are core-collapse supernovae and merging neutron stars.
Aims. We explore the problem of the production site of Eu. We also use the information present in the observed spread in the Eu abundances in the early Galaxy, and not only its average trend. Moreover, we extend our investigations to other heavy elements (Ba, Sr, Rb, Zr) to provide additional constraints on our results.
Methods. We adopt a stochastic chemical evolution model that takes inhomogeneous mixing into account. The adopted yields of Eu from merging neutron stars and from core-collapse supernovae are those that are able to explain the average [Eu/Fe]–[Fe/H] trend observed for solar neighbourhood stars, the solar abundance of Eu, and the present-day abundance gradient of Eu along the Galactic disc in the framework of a well-tested homogeneous model for the chemical evolution of the Milky Way. Rb, Sr, Zr, and Ba are produced by both the s- and r-processes. The r-process yields were obtained by scaling the Eu yields described above according to the abundance ratios observed in r-process rich stars. The s-process contribution by spinstars is the same as in our previous papers.
Results. Neutron star binaries that merge in less than 10 Myr or neutron star mergers combined with a source of r-process generated by massive stars can explain the spread of [Eu/Fe] in the Galactic halo. The combination of r-process production by neutron star mergers and s-process production by spinstars is able to reproduce the available observational data for Sr, Zr, and Ba. We also show the first predictions for Rb in the Galactic halo.
Conclusions. We confirm previous results that either neutron star mergers on a very short timescale or both neutron star mergers and at least a fraction of Type II supernovae have contributed to the synthesis of Eu in the Galaxy. The r-process production of Sr, Zr, and Ba by neutron star mergers – complemented by an s-process production by spinstars – provides results that are compatible with our previous findings based on other r-process sites. We critically discuss the weak and strong points of both neutron star merging and supernova scenarios for producing Eu and eventually suggest that the best solution is probably a mixed one in which both sources produce Eu. In fact, this scenario reproduces the scatter observed in all the studied elements better.
Key words: Galaxy: evolution / Galaxy: halo / stars: abundances / nuclear reactions, nucleosynthesis, abundances / stars: neutron / stars: rotation
© ESO, 2015
There are concerns about the impact that global warming will have on our environment, which will inevitably result in expanding deserts and rising water levels. AUVs (autonomous underwater vehicles) were considered and chosen as the most suitable tool for conducting surveys concerning these global environmental problems. JAMSTEC has started to build a long-range cruising AUV. An AUV named "URASHIMA" was built in 1999, and sea trials have been held since 2000. At the end of February 2005, the vehicle was able to cruise autonomously and continuously for 317 km. Recently the vehicle has begun to undertake cruises for scientific applications. These applications require precise maneuvering of the vehicle for detailed investigations. For high-performance maneuvering of the vehicle, it is necessary to design a control system based on a mathematical model of the vehicle. Since the vehicle was built, PI control has been adopted. In order to improve control performance, the motion controller can be designed by means of a model-based approach (e.g. LQI control). This paper describes the mathematical model of the vehicle, experimental results of maneuverability obtained with the AUV "URASHIMA" during the sea trials, and the design method of the motion controller, and shows calculated results.
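For readers unfamiliar with the baseline controller mentioned above, a discrete-time PI loop for a single channel (depth or heading, say) has the general shape sketched below. This is a generic textbook illustration with made-up gains and variable names, not the URASHIMA implementation; a model-based LQI design would instead compute the feedback gains from the vehicle's state-space model.

```python
def pi_controller(kp, ki, dt, u_min, u_max):
    """Return a stateful PI control step: (setpoint, measurement) -> actuator command."""
    integral = 0.0

    def step(setpoint, measurement):
        nonlocal integral
        error = setpoint - measurement
        integral += error * dt                # accumulate error over time (the "I" term)
        u = kp * error + ki * integral        # proportional + integral action
        return max(u_min, min(u_max, u))      # saturate to the actuator's limits

    return step

# Example: regulate depth toward 100 m with illustrative gains (not tuned for any real vehicle).
control = pi_controller(kp=0.8, ki=0.05, dt=0.1, u_min=-20.0, u_max=20.0)
command = control(setpoint=100.0, measurement=92.5)  # e.g. degrees of stern-plane deflection
```

A real implementation would also need anti-windup on the integral term; the LQI approach referred to in the abstract folds the same integral action into an optimal state-feedback gain derived from the vehicle model.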