Science Misconceptions – How Should Teachers Deal With Them?
We’ve all heard or expressed the common teacher refrain or some variation of “I taught it to them so many times and in so many different ways and yet they still got it wrong on the exam!” It’s frustrating and hard to comprehend how something which may have been thoroughly and skillfully taught, and by all indications well understood by the students, just doesn’t take hold. Perhaps what is happening is that we are trying to teach something that contradicts the students’ existing erroneous conceptions on the subject. Unfortunately such existing misconceptions have more “sticking” power and often remain as the student’s dominant explanation.
For example, if you ask your Secondary 2 students to explain why summers are warmer than winters, you may often get the explanation that in summer the Sun is closer to the Earth than in winter. Many teachers have found that even if you take them through a teaching unit which explains the seasons as the result of the tilt of the earth’s axis, students will often remain faithful to their original misconception that seasons are a result of the earth’s relative proximity to (or of a possible variation of the intensity of) the sun.
Dr. Patrice Potvin, a science education professor at the Université du Québec à Montréal (UQAM), has done considerable research into student misconceptions in science (more correctly referred to as alternate conceptions!). He has studied the nature of these conceptions with an eye to helping teachers help their students deal with them and direct them to more acceptable scientific understandings. But he has discovered, as so many science teachers have too, that student misconceptions can be very tenacious. Dr. Potvin notes that "a growing number of studies have argued that many frequent non-scientific conceptions (sometimes designated as 'misconceptions') will not vanish or be recycled during learning, but will on the contrary survive or persist in learners' minds even though these learners eventually become able to produce scientifically correct answers" (Potvin et al., 2015).
What then can teachers do in the classroom to mitigate the learning obstacles presented by these misconceptions? Dr. Potvin has recently done research in which he has exposed students in different science disciplines and of different ages to "treatments". In all cases students were given a pre-test, then exposed to a "treatment", i.e. a teaching situation designed to teach the correct concept, and then a post-test to see if the initial misconception had changed for the better. In one study of Grade 5 and 6 students, for example, he tackled the factors which influence an object's buoyancy in water – trying to steer them away from the erroneous idea that size or weight alone determine buoyancy. In another study, of physics students, he worked to correct erroneous notions of electric currents – that a single wire can light a bulb or that a bulb consumes current, for example. Both of these studies involved large numbers of students, rigorous experimental methodology and sophisticated statistical analysis to determine whether or not the results were significant. The results, which were written up in peer-reviewed journals, confirmed the tenacity of student misconceptions.
Dr. Potvin’s research makes a couple of suggestions to teachers:
- Be aware that initial misconceptions may persist and so teach with durability in mind.
- Provoke "conceptual conflicts" with examples that dramatically illustrate the differences between the correct and the erroneous conceptions. For example, when trying to dispel the idea that the weight of an object is the main factor in its buoyancy, he suggests that "comparing the buoyancy of a giant tanker boat (that floats even though it weighs thousands of tons) to that of a sewing needle would provoke a stronger conceptual conflict than, say, comparing a wooden ball with a slightly bigger lead ball" (Potvin, 2015).
This is just a brief glimpse of the research being carried out in this complex area of science education, both locally at UQAM and internationally, and reported in many academic journals of the field.
With this in mind, an interesting project is being undertaken at McGill University to help teachers tackle science misconceptions that their students bring to the class. As a joint bilingual undertaking of McGill and UQAM, its aim is to help teachers of Cycle 1 secondary Science and Technology (S&T) diagnose and hopefully correct their students’ alternate conceptions in as many of the 85 concepts of the MELS S&T program as possible. Teachers from 3 school boards (two English and one French) have been working hard to develop diagnostic questions for the concepts – questions whose incorrect answers help identify misconceptions their students have. Corrective measures are also being developed to help teachers guide their students. LEARN Quebec is a partner in the project and will be the online distributor to teachers across the province once the question bank has been completed. Hopefully, along with the current research being done, this will help advance our students’ understanding of the science concepts needed to make them scientifically literate members of society.
Potvin, P., Mercier, J., Charland, P., & Riopel, M. (2012). Does classroom explicitation of initial conceptions favour conceptual change or is it counter-productive. Research in Science Education, 42(3), 401–414.
Potvin, P., Sauriol, É., & Riopel, M. (2015). Experimental evidence of the superiority of the prevalence model of conceptual change over the classical models and repetition. Journal of Research in Science Teaching, 52, 1082–1108. doi:10.1002/tea.21235
Free Shipping: 1-2 Business Days
Alera Interval Task Chair
Compact Design, Tilt Controls, Green Fabric, Black Frame
Item #: ALEIN4871
FREE Shipping on this item
Description: The Alera Interval Series Task Chair is ideal for all-day seating in tight spaces.
- Designed to fit in tight workspaces.
- Molded plastic shell resists impact.
- Waterfall seat edge helps relieve pressure points on the underside of legs.
- Five-star base with casters for easy mobility.
- Optional Arms sold separately.
- Supports up to 250 lbs.
- 360 Degree Swivel: Chair rotates a full 360 degrees in either direction for ease of motion.
- Back Height Adjustment: Simple lift motion positions lumbar support within a fixed range to alleviate back stress.
- Pneumatic Seat Height Adjustment: Quick and easy adjustment regulates height of chair relative to floor.
- Tilt: Pivot point located directly above center of chair base.
- Tilt Lock: Locks out tilt function when chair is in upright position.
- Tilt Tension: Controls the rate and ease with which the chair reclines, accommodating users of different weights and strengths.
- Seat: 19-1/2"W x 17-3/4"D
- Back: 16-1/2"W x 15-1/4"H
- Seat Height Range: 18-3/4" to 23-1/2"
- Overall Height: 34" to 39"
Some Assembly Required
General Office & Task
Five 2" hooded casters.
Supports up to 250 lbs.
Alera Interval Series
For Use With:
Alera Fixed Height T-Arms, Alera Optional Height-Adjustable T-Arms
Meets or exceeds ANSI/BIFMA Standards
Casters supplied with this chair are not suitable for all floor types. Optional Arms sold and shipped separately.
An Ethical Framework for Global Vaccine Allocation
Emanuel, E., Persad, G., Kern, A., et al. (2020). An Ethical Framework for Global Vaccine Allocation. (Added 12/28/2020.) Science, 369(6509), 1309–1312.
The authors of this article describe a three-phased Fair Priority Model for distribution of COVID-19 vaccine that prioritizes preventing urgent harms earlier. Phase 1 addresses premature deaths and other irreversible health effects, phase 2 addresses other enduring health harms and economic and social deprivations, and phase 3 addresses community transmission.
International Journal of Mathematics and Mathematical Sciences
Volume 20 (1997), Issue 1, Pages 19-32
Generalized transforms and convolutions
1Department of Mathematics, Northwestern College, Orange City, IA 51041, USA
2Department of Mathematics and Statistics, Miami University, Oxford, OH 45056, USA
3Department of Mathematics and Statistics, University of Nebraska, Lincoln, NE 68588, USA
Received 27 June 1995; Revised 8 August 1995
Copyright © 1997 Timothy Huffman et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In this paper, using the concept of a generalized Feynman integral, we define a generalized Fourier-Feynman transform and a generalized convolution product. Then for two classes of functionals on Wiener space we obtain several results involving and relating these generalized transforms and convolutions. In particular we show that the generalized transform of the convolution product is a product of transforms. In addition we establish a Parseval's identity for functionals in each of these classes.
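As a rough guide to the shape of the transform-of-convolution result (suppressing the parameters and hypotheses of the generalized setting treated in the paper), the classical analytic Fourier-Feynman transform satisfies an identity of the form

$$T_q\big((F * G)_q\big)(y) \;=\; T_q F\!\left(\frac{y}{\sqrt{2}}\right)\, T_q G\!\left(\frac{y}{\sqrt{2}}\right),$$

so the transform of a convolution product factors as a product of transforms evaluated at scaled arguments; the paper establishes the analogous statement for the generalized transform and convolution product.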
Peg + Cat
This new animated preschool series follows Peg and her sidekick Cat as they embark on adventures and learn foundational math concepts and skills.
In each episode, Peg and Cat encounter an unexpected challenge that requires them to use math and problem-solving skills in order to save the day. Their adventures take viewers from a farm to a distant planet, from a pirate island to a prehistoric valley, from Romeo and Juliet’s Verona to Cleopatra’s Egypt to New York’s Radio City Music Hall. While teaching specific math lessons, the series displays the value of resilience and perseverance in problem-solving.
The program’s curriculum is grounded in principles and standards for school mathematics as established by the National Council of Teachers of Mathematics and the Common Core State Standards for Mathematics for kindergarten and first grade.
Peg and Cat’s website provides viewers with interactive games, videos, apps and more.
- Go on a treasure hunt
- Journey on a math adventure
- Play dozens of games like Knights of the Round Table
- Watch videos
Airs weekdays at 10:30 a.m. and Saturdays at 7:30 a.m. on WCNY.
By packaging Leukemia Inhibitory Factor (LIF) inside biodegradable nanoparticles, scientists developed a nanoparticle-based system to deliver growth factors to stem cells in culture, resulting in cell colony growth with a 10,000 fold lower dose of LIF when using the nanoparticle-based delivery system compared to traditional methods using soluble LIF in a growth medium.
Stem cells – unspecialized cells that have the potential to develop into different types of cells – play an important role in medical research. In the embryonic stage of an organism's growth, stem cells develop into specialized heart, lung, and skin cells, among others; in adults, they can act as repairmen, replacing cells that have been damaged by injury, disease, or simply by age. Given their enormous potential in future treatments against disease, the study and growth of stem cells in the lab is widespread and critical. But growing the cells in culture offers numerous challenges, including the constant need to replenish a culture medium to support the desired cell growth.
Tarek Fahmy, Associate Professor of Biomedical Engineering & Chemical & Environmental Engineering, and colleagues have developed a nanoparticle-based system to deliver growth factors to stem cells in culture. These growth factors, which directly affect the growth of stem cells and their differentiation into specific cell types, are ordinarily supplied in a medium that is exchanged every day. Using the researchers’ new approach, this would no longer be necessary.
“Irrespective of their scale or nature, all cell culture systems currently in practice conventionally supply exogenous bioactive factors by direct addition to the culture medium,” says Paul de Sousa, a University of Edinburgh researcher and co-principal investigator on the paper. With that approach, he explains, “Cost is one issue, especially during prolonged culture and when there is a requirement for complex cocktails of factors to expand or direct differentiation of cells to a specific endpoint.”
A second issue, says de Sousa, is specificity: growth factors supplied by direct addition to the culture medium can lead to the growth of undesired cell populations, which can end up competing with the growth of the desired cell types.
“A relatively unexplored strategy to improve the efficiency of stem cell culture is to affinity-target critical bioactive factors sequestered in biodegradable micro or nanoparticles to cell types of interest,” explains de Sousa, “thereby achieving a spatially and temporally controlled local ‘paracrine’ stimulation of cells.”
Fahmy and his colleagues packaged leukemia inhibitory factor, which supports stem cell growth and viability, inside biodegradable nanoparticles. The nanoparticles were “targeted” by attaching an antibody – one specific to an antigen on the surface of mouse embryonic stem cells being grown in culture. As a result, the nanoparticles target and attach themselves to the stem cells, ensuring direct delivery of the bioactive factors packaged inside.
The researchers have previously demonstrated the potential uses of this approach in drug delivery and vaccination, including targeted delivery of leukemia inhibitory factor (LIF), which prevents certain types of white blood cells from migrating, in order to regulate immune responses. In stem cell cultures, LIF is also the key factor required to keep the cells alive and let them retain their ability to develop into specialized types of cells.
In this research, Fahmy and his colleagues packed LIF into the biodegradable nanoparticles for slow-release delivery to the stem cells in culture. Their results showed cell colony growth with a 10,000 fold lower dose of LIF when using the nanoparticle-based delivery system compared to traditional methods using soluble LIF in a growth medium. While a stem cell culture sustained using a traditional method of exchanging growth medium consumes as much as 25 nanograms of LIF in a day – about 875 nanograms after five weeks of culture – only 0.05 total nanograms of LIF would be required to achieve the same level of growth using the nanoparticle delivery system, a remarkable reduction in the required materials.
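The cumulative-dose comparison above is simple arithmetic; a short sanity check in Python (the 35-day span is our assumption of "five weeks" of once-daily medium exchange, as described):

```python
# Cumulative LIF consumed by conventional culture with daily medium
# exchange, versus the nanoparticle-based delivery system.
# Figures are taken from the article above.
daily_soluble_lif_ng = 25      # soluble LIF per day (ng)
days = 5 * 7                   # five weeks of culture (assumed 35 days)
total_soluble_ng = daily_soluble_lif_ng * days
total_nanoparticle_ng = 0.05   # total LIF with targeted nanoparticles

print(total_soluble_ng)        # 875, matching the article's figure in ng
# Fold reduction in total material consumed (the article separately
# reports a 10,000-fold lower effective dose for colony growth):
print(round(total_soluble_ng / total_nanoparticle_ng))  # 17500
```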
The next step is to use these systems with human cells to direct their differentiation into hematopoietic cells—blood products. Clinical and industrial translation of this ability requires efficient and cost effective strategies for cell manufacturing. In principle, this method offers a means to produce standardized or individually tailored cells to overcome challenges associated with donated blood products.
Reference: "Paracrine signalling events in embryonic stem cell renewal mediated by affinity targeted nanoparticles" by Bruna Corradetti, Paz Freile, Steve Pells, Pierre Bagnaninchi, Jason Park, Tarek M. Fahmy and Paul A. de Sousa, 30 June 2012, Biomaterials.
The 5G Evolution; An Advancement in Technology; How Can it Affect Us?
According to industry proponents, 5G technology is considered a necessary evolution in wireless transmission to accommodate the increasing number of wireless devices, such as mobile phones, internet transmitting devices and many cutting edge technologies, such as robotics.
The technological advancement to 5G allows more devices to communicate and more data to be transmitted, more rapidly. The high-frequency microwaves required will necessitate more 5G networks, and thus more cell phone towers, to accommodate the increased speed and capacity of 5G.
With cell phone towers in closer proximity, such as in our neighborhoods, it can become more difficult to minimize the amount of radiation we are exposed to.
Research conducted by University of Washington professor Dr. Henry Lai demonstrated that brain cells are clearly damaged by microwave levels far below the US government’s safety guidelines. Dr. Lai notes that even minimal doses of radio frequency can cumulate over time and lead to harmful effects.
What is our solution to the potentially harmful side effects of an expeditiously expanding wireless network industry?
Our proven patented and proprietary products function to help neutralize the adverse effects of the increasing daily exposures to harmful radiation.
Implement The Cell Phone Chip Store`s full-scale product line of radiation guards as a front line of defense against the long term, cumulative effects of harmful radiation! | 0.8792 | FineWeb |
The present study was conducted to determine the effect of raw anchovy (Engraulis encrasicolus L.) as wet feed on the growth performance and production cost of rainbow trout (Oncorhynchus mykiss W.) reared in net pens during the winter season in the Black Sea. The fish, with an initial body weight of 100 g, were hand-fed to apparent satiation with raw anchovy only, pellet only, or an anchovy/pellet combination over 58 days. Final mean body weight of the groups fed anchovy and anchovy/pellet was significantly higher (P<0.05) than that of the group fed only pellet. However, no difference was found between the groups fed anchovy and the anchovy/pellet combination. Raw anchovy was better accepted than the pellet by the fish during the period of low water temperature. The use of raw anchovy as wet feed had a positive effect on the production cost. In conclusion, by-catch anchovy should be considered as a supplemental diet to the pellet for rainbow trout, especially over periods of low water temperature in the Black Sea.
Keywords: rainbow trout (Oncorhynchus mykiss W.), wet feed, anchovy (Engraulis encrasicolus L.), growth, fisheries feed, aquatic (both freshwater and marine) systems, aquatic health management, ciguatera fish poisoning
What does it mean to be a hero? In The Heroic Heart, Tod Lindberg traces the quality of heroic greatness from its most distant origin in human prehistory to the present day. The designation of “hero” once conjured mainly the prowess of conquerors and kings slaying their enemies on the battlefield. Heroes in the modern world come in many varieties, from teachers and mentors making a lasting impression on others by giving of themselves, to firefighters no less willing than their ancient counterparts to risk life and limb. They don’t do so to assert a claim of superiority over others, however. Rather, the modern heroic heart acts to serve others and save others. The spirit of modern heroism is generosity, what Lindberg calls “the caring will,” a primal human trait that has flourished alongside the spread of freedom and equality.
Through its intimate portraits of historical and literary figures and its subtle depiction of the most difficult problems of politics, The Heroic Heart offers a startlingly original account of the passage from the ancient to the modern world and the part the heroic type has played in it. Lindberg deftly combines social criticism and moral philosophy in a work that ranks with such classics as Thomas Carlyle’s nineteenth-century On Heroes, Hero-Worship and the Heroic in History and Joseph Campbell’s twentieth-century The Hero with a Thousand Faces. | 0.8493 | FineWeb |
Introduced in the 1950's, skinny jeans were first worn by film stars like Roy Rogers, the Lone Ranger, the Cisco Kid, Zorro, Gene Autry, Marilyn Monroe, and Sandra Dee. Known for their thigh-hugging, form-fitting silhouette, skinny jeans taper at the ankle and are widely recognized for a slim cut that exudes sex appeal. By the 1960's, women began pushing gender roles by widely adopting slim-cut denim and other male-dominated fashion statements. Skinny jeans served as a means of communicating gender empowerment and equality by channeling female sexuality while drawing attention to feminine curves. Skinny jeans epitomized both sexuality and sex appeal when Elvis, the king of rock n' roll, began wearing them during his tantalizing performances in the late 1950's and early 1960's. During the 1970's, skinny jeans became synonymous with a 'bad boy' rock n' roll image and served as a uniform staple for fashion-forward rockers in the alternative music industry.
Rock band legends like Mick Jagger from The Rolling Stones and The Beatles also helped paved the way for the skinny jean phenomenon through fusing fashion with performance/entertainment.
The 1970's set the British punk rock movement in motion, where self-proclaimed 'scenester' bands like The Clash, The Sex Pistols and The Ramones put a notorious punk spin on the growing skinny jean trend. Through incorporating dark color palettes, leather and zipper embellishments, the punk rock movement was the first fashion wave to truly individualize and stylize slim-cut denim. In 1971, fashion designer Vivienne Westwood opened SEX (Boutique), one of the first stores to ever specialize in punk and fetish-inspired clothing. Never before had a retail boutique been solely dedicated to selling skinny jeans and other British 'scenester' attire, bringing slim-cut denim to the masses. Tight-fitted clothing like skinny jeans functioned as a form of rebellion for fashion-conscious nonconformists in the 1970's. Skinny jeans' fashion uprising was sustained well into the 1980's with the origination of movements surrounding heavy metal and glam metal. Bands like Poison, Mötley Crüe, Bon Jovi, Guns N' Roses and Kiss were prominent in the 1980's and all donned skinny jeans along with other form-fitting bottoms such as spandex during their concert performances.
The skinny jeans trend went into steep decline in the 1990's with the advancement of hip hop and grunge music. Both grunge and hip hop dictated a uniform consisting of baggy jeans, flannel shirts and oversized outerwear, starkly contrasting with the considerably contoured fashion trends of the 1970's and 1980's. In 2000, skinny jeans made a comeback thanks to fashion icon Kate Moss, garage rock and the formation of indie rock in popular music culture. Moss, who once dated Peter Doherty of The Libertines, was photographed with Doherty dressed in skinny jeans and boots, letting fashionistas around the world know that wearing skinny jeans was once again appropriate for daily attire. The overlap of trends between the fashion and music industries is undeniable, and the history of skinny jeans greatly exemplifies this widespread notion. Today, the appeal of skinny jeans has reached other industries that have little to do with fashion and music. While many find skinny jeans rather restricting, professional skateboarders and BMX bike riders prefer sporting skinny jeans due to their stretchy material that accommodates movement and flexibility.
People at Google must be aficionados of the Spanish painter Diego Velázquez [1599-1660], because they've celebrated his birthday by creating a graphic Google banner based upon the famous painting called Las Meninas [Maids of Honor].
Here's a fragment of the original Velázquez masterpiece:
The intriguing nature of this painting was first brought to my attention back in 1966 when I read a popular work of modern philosophy, Les mots et les choses by Michel Foucault [translated into English as The Order of Things], which starts with an in-depth analysis of the Velázquez painting. Foucault suggests that this painting demonstrates, or at least symbolizes, the existence of an invisible emptiness at the heart of the world that we attempt vainly to circumscribe... not by images, but by language. So, let us see rapidly what is so upsetting about this painting.
At first sight, one has the impression that the subject of the painting is the blonde child between the two maids. Her name is Margarita, and she's the eldest daughter of the Spanish queen. When we examine the individuals more closely, however, we find that the artist Velázquez himself is present, standing behind the left-hand maid, and that he is looking directly, not at the little princess, but at us, the viewers. Then a blurry mirror on the rear wall, just to the right of the painter's head (as we see things), reveals the true subject of the painter's work: the barely-recognizable king and queen of Spain, Philip IV and Marianna.
The painting is inverted in such a way that we see, not the true subject, but rather the regard of those who can see this subject. In the antipodean sense that I evoke often in this blog, the painter has turned his world upside-down and inside-out. At a visual level, the two most prominent subjects in the foreground of the painting, from our viewpoint, are a bulky pet dog and a plump male dwarf in female attire (said to be an Italian jester). Meanwhile, supposedly major individuals such as the royal couple and a noble man are seen as mere images on rear-wall mirrors, suggesting that Velázquez himself was not overly preoccupied with the task of reproducing their image on his canvas.
This complex work of art (designated by many admirers as the greatest painting ever made) is an excellent symbol for Google. We throng to Google in the hope of receiving profound knowledge about our world... whereas Google, in reality, is simply throwing back at us, through its endless lists of websites of all kinds, our own imperfect image. Maybe a vast but essentially empty image. | 0.5895 | FineWeb |
- Education’s Woes and Pros: A new study conducted by UNESCO reveals that less than 30% of schools have access to electricity and only half of them have toilets for girls. In order to address such woeful capacity, the Rajasthan’s state government has signed a public private partnership with UNICEF to expand education across the state — the program will particularly focus on educating young girls.
- Healthcare's Woes and Pros: A new report by the UN reveals that India suffers from the highest absenteeism rate with regard to healthcare workers, and that these no-shows will likely result in India failing to meet the Millennium Development Goals. However, a more positive story is that a new HIV test can be administered rapidly to pregnant women in rural areas, enabling doctors to administer the necessary treatment to prevent transmission to the baby.
- Mobile Technology: With the advent of 3G coming to India soon, Bharat Sanchar Nigam (BSNL) is looking for new ways to use the increased speeds to connect to the rural poor of India.
- Energy: In Jharkhand, the government looks to wind to help power the future of that region. | 0.5287 | FineWeb |
Bullying is a serious workplace issue.
According to the Canadian Center for Occupational Health and Safety, workplace bullying generally involves repeated incidents intended to “intimidate, offend, degrade or humiliate a particular person or group of people.” CCOHS notes that although a fine line exists between strong management and constructive criticism and bullying, workplace bullying exists and can lead to a number of issues.
The agency provides a number of examples of workplace bullying. Those include:
- Spreading malicious, untrue rumors
- Socially isolating someone
- Purposefully hindering someone’s work
- Physically injuring someone or threatening abuse
- Taking away a worker’s responsibility without justification
- Yelling or swearing
- Not assigning enough work or assigning an unreasonable amount of work
- Setting impossible-to-meet deadlines in an effort to make the worker fail
- Blocking a worker’s request for leave, training or a promotion
Bullying can have serious repercussions. Victims of bullying may feel angry or helpless and experience a loss of confidence. Additionally, bullying can cause physical side effects, including an inability to sleep, loss of appetite, headaches, or panic attacks. According to CCOHS, organizations with a culture of bullying may experience many unfavorable side effects, including increased turnover and absenteeism, increased stress among workers, and decreased morale.
CCOHS states that the most important thing management can do to express a commitment to preventing workplace bullying is to have a comprehensive written policy. The agency provides the following advice for creating a policy:
- Involve both management and employees in the creation of the policy.
- Be very clear in your definition of workplace bullying. Provide examples of what is and is not acceptable behavior.
- Clearly state the consequences of bullying.
- Encourage workers to report bullying behavior by making the reporting process completely confidential. Let workers know they will not be punished in any way for reporting bullying.
- If your workplace has an Employee Assistance Program, encourage workers experiencing problems to use it.
- Regularly review the policy and update it as needed. | 0.9921 | FineWeb |
Amy's husband deploys for months at a time, so she discusses the 5 things you should never say to a military spouse.
1. "You must be used to this"
2. "Do you worry about his safety?"
3. "My spouse travels for work too. I totally know what you're going thru"
4. "Wow, you must miss him"
5. "How do you go such a long time without ... (being physical)?"
When Amy's husband gets deployed he sends her flowers and Bobby a stick. | 0.6357 | FineWeb |
Use these 6 and 7 times table worksheets to evaluate your kid's multiplication skills. It may sound basic, but it can prove useful for you or your child to memorize the times tables. Once your child has learned the full set of 6 and 7 times tables, they need to practice so that the facts become automatic in their times table drills. Ensure that your child learns the standard methods of multiplication using these worksheets, for better evaluation and assessment.
Good times-tables knowledge is vital for quick mental multiplication. If a child knows that 6 x 3 = 18, they will be able to comprehend that 6 x 30 = 180 or 60 x 3 = 180. Using these 6 and 7 times table worksheets will help develop a good understanding of the relationship between numbers in multiplication. Try the worksheet below for more practice with basic multiplication facts.
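For readers comfortable with a little code, the place-value relationship described above can be illustrated with a short Python sketch (the `times_table` helper is ours, purely for illustration):

```python
# Build the 6 and 7 times tables and check the scaling rule:
# if 6 x 3 = 18 is known, then 6 x 30 = 180 and 60 x 3 = 180
# follow by multiplying the product by 10.
def times_table(n, upto=12):
    """Return the multiplication table for n as (n, i, n*i) triples."""
    return [(n, i, n * i) for i in range(1, upto + 1)]

for n in (6, 7):
    for a, b, product in times_table(n):
        print(f"{a} x {b} = {product}")

assert 6 * 3 == 18
assert 6 * 30 == 10 * (6 * 3) == 180   # scale one factor by 10
assert 60 * 3 == 10 * (6 * 3) == 180
```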
A strong grasp of times tables helps increase enjoyment of the subject. The printable multiplication worksheets below will take your child through their multiplication learning step by step, so that they acquire the math skills needed to solve and master multiplication.
Help your students achieve the ability to rapidly recall their times table facts with these fabulous new times table worksheets that your students are going to love! These fun math worksheets are free to download and print for educational use. | 0.9997 | FineWeb |
Writing for the screen : creative and critical approaches
- Craig Batty and Zara Waldeback.
- Houndmills, Basingstoke, Hampshire [England] ; New York : Palgrave Macmillan, 2008.
- Physical description
- ix, 201 p. ; 22 cm.
- Approaches to writing.
- Includes filmography: p. 192-194.
- Includes bibliographical references (p. 189-191) and index.
- Acknowledgments Introduction PART I: FOUNDATIONS Establishing Practice Subject: Ideas into Character Structure and Narrative Visual Storytelling Dialogue and Voice The Cultures of Screenwriting Key Points and Foundations Exercises PART II: SPECULATIONS Exploring Possibilities Subjects: Ideas into Character Structures and Narratives Visual Storytelling Dialogues and Voices Further Cultures of Screenwriting Key Points and Speculations Exercises Notes Bibliography Index.
- (source: Nielsen Book Data)
- Publisher's Summary
- This book presents an innovative and fresh approach to the art and practice of screenwriting, developing creative and critical awareness for writers, students and critics. It includes contemporary case studies, in-depth analysis and unique writing exercises, and explores a wide variety of techniques, from detailed scene writing and non-linear structure to documentary drama and the short film. Incorporating both creativity and critical appraisal as essential methods in writing for the screen, it creates a wealth of ideas for those wishing to work in the industry or deepen their study of the practice.
- Motion picture authorship.
- 9780230550759 (pbk.)
- 0230550754 (pbk.)
Do you know how to translate the Chinese word 九?
The pronunciation in pinyin is written jiǔ or jiu3.
Here is the English translation of that Chinese word, with an audio file (mp3).
Example sentences in Chinese:
- Today is the 9th of September.
- At half past nine I go to sleep.
- In China, every child has to go to school for 9 years. (compulsory education)
- This school has 20 teachers, among them 9 Chinese teachers (from China).
- Today I learned one, two, three, four, five, six, seven, eight, nine, ten.
- There are 194 countries in the world.
In Myanmar, the number of medical professionals, including doctors and nurses, is remarkably low, and the training of medical assistance personnel known as caregivers will be key to the country's future medical development. We aim to develop a large number of professionals in Myanmar trained to a high global standard by providing educational programs that apply Japanese KAIGO know-how.
Why do we train medical personnel? Two social issues in Myanmar
In Myanmar's medical environment, the shortage of medical personnel has become a major issue. Nurses, in particular, are even fewer in number than doctors, so improving the human medical infrastructure by training nurses has become a major development task. One reason is the lack of public educational institutions: only three nursing colleges in the country grant officially recognized degrees (University of Nursing, Yangon / Mandalay Institute of Nursing / Defence Services Institute of Nursing and Paramedical Science).
In Myanmar, caregivers often carry out medical assistance in place of nurses, but there is no curriculum standard for the private educational institutions that train them. Nursing assistants who graduate from private institutions earn low salaries, and many either change jobs to industries outside medical care or go overseas through opaque intermediaries. By utilizing an internship program in Japan, we can address these social problems in Myanmar. We aim to greatly improve the quality of medical care and welfare in Myanmar.
Achieving advanced human resources by drawing on each country's strengths
We have cultivated more than 15,000 KAIGO professionals in Japan, and we aim to develop human resources that meet a global standard. Those who acquire Japan's advanced KAIGO techniques will be able to work in countries around the world. KAIGO training is conducted in Japan after Japanese language education is completed. We aim for the world's highest level of KAIGO education by harmonizing the "virtuous culture" rooted in Myanmar with the "hospitality heart" of Japan.
In collaboration with the Yangon Japanese language school "Better Life", we aim to bring students from N5 to N3 level in 6 months. Our own curriculum covers not only daily expressions but also Japanese phrases frequently used in nursing-care practice, so that interns can settle into work smoothly after arriving in Japan. Native Japanese speakers will also support your study, carefully answering your questions and concerns before you travel to Japan.
The Myanmar professionals we train aim to achieve skills that make them employable not only at home but also in Japan, Thailand and Singapore. For that reason, we teach not only nursing skills and language but also the business manners and communication that form the basis of work, thereby producing global-standard talent that can work internationally.
In this suggestive VIS image, taken by the NASA - Mars Odyssey Orbiter on December 29th, 2015, during its 62,289th orbit around the Red Planet, we can see a (truly) small portion of the Martian Region known as Nilus Chaos. Located to the North of the Kasei Valles System, this Chaotic Region formed (approximately) at the Elevation Boundary between the aforementioned Kasei Valles System and the surrounding (and relatively flat) Northern Plains.
Latitude (centered): 25,7934° North
Longitude (centered): 283,4270° East
This image (which is an Original Mars Odyssey Orbiter b/w and Map Projected frame published on the NASA - Planetary Photojournal with the ID n. PIA 20417) has been additionally processed, magnified to aid the visibility of the details, extra-contrast enhanced and sharpened, Gamma corrected and then colorized in Absolute Natural Colors (such as the colors that a normal human eye would actually perceive if someone were onboard the NASA - Mars Odyssey Orbiter and then looked down, towards the Surface of Mars), by using an original technique created - and, in time, dramatically improved - by the Lunar Explorer Italia Team. | 0.7976 | FineWeb |
At a glance
- Legitimate interests is the most flexible lawful basis for processing, but you cannot assume it will always be the most appropriate.
- It is likely to be most appropriate where you use people’s data in ways they would reasonably expect and which have a minimal privacy impact, or where there is a compelling justification for the processing.
- If you choose to rely on legitimate interests, you are taking on extra responsibility for considering and protecting people’s rights and interests.
- Public authorities can only rely on legitimate interests if they are processing for a legitimate reason other than performing their tasks as a public authority.
- There are three elements to the legitimate interests basis. It helps to think of this as a three-part test. You need to:
- identify a legitimate interest;
- show that the processing is necessary to achieve it; and
- balance it against the individual’s interests, rights and freedoms.
- The legitimate interests can be your own interests or the interests of third parties. They can include commercial interests, individual interests or broader societal benefits.
- The processing must be necessary. If you can reasonably achieve the same result in another less intrusive way, legitimate interests will not apply.
- You must balance your interests against the individual’s. If they would not reasonably expect the processing, or if it would cause unjustified harm, their interests are likely to override your legitimate interests.
- Keep a record of your legitimate interests assessment (LIA) to help you demonstrate compliance if required.
- You must include details of your legitimate interests in your privacy information.
Checklist
- We have checked that legitimate interests is the most appropriate basis.
- We understand our responsibility to protect the individual’s interests.
- We have conducted a legitimate interests assessment (LIA) and kept a record of it, to ensure that we can justify our decision.
- We have identified the relevant legitimate interests.
- We have checked that the processing is necessary and there is no less intrusive way to achieve the same result.
- We have done a balancing test, and are confident that the individual’s interests do not override those legitimate interests.
- We only use individuals’ data in ways they would reasonably expect, unless we have a very good reason.
- We are not using people’s data in ways they would find intrusive or which could cause them harm, unless we have a very good reason.
- If we process children’s data, we take extra care to make sure we protect their interests.
- We have considered safeguards to reduce the impact where possible.
- We have considered whether we can offer an opt out.
- If our LIA identifies a significant privacy impact, we have considered whether we also need to conduct a DPIA.
- We keep our LIA under review, and repeat it if circumstances change.
- We include information about our legitimate interests in our privacy information.
What’s new under the GDPR?
The concept of legitimate interests as a lawful basis for processing is essentially the same as the equivalent Schedule 2 condition in the 1998 Act, with some changes in detail.
You can now consider the legitimate interests of any third party, including wider benefits to society. And when weighing against the individual’s interests, the focus is wider than the emphasis on ‘unwarranted prejudice’ to the individual in the 1998 Act. For example, unexpected processing is likely to affect whether the individual’s interests override your legitimate interests, even without specific harm.
The GDPR is clearer that you must give particular weight to protecting children’s data.
Public authorities are more limited in their ability to rely on legitimate interests, and should consider the ‘public task’ basis instead for any processing they do to perform their tasks as a public authority. Legitimate interests may still be available for other legitimate processing outside of those tasks.
The biggest change is that you need to document your decisions on legitimate interests so that you can demonstrate compliance under the new GDPR accountability principle. You must also include more information in your privacy information.
In the run up to 25 May 2018, you need to review your existing processing to identify your lawful basis and document where you rely on legitimate interests, update your privacy information, and communicate it to individuals.
What is the ‘legitimate interests’ basis?
Article 6(1)(f) gives you a lawful basis for processing where: “processing is necessary for the purposes of the legitimate interests pursued by the controller or by a third party except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.”
This can be broken down into a three-part test:
- Purpose test: are you pursuing a legitimate interest?
- Necessity test: is the processing necessary for that purpose?
- Balancing test: do the individual’s interests override the legitimate interest?
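Since the three-part test is a gated sequence (each part must be satisfied before the next matters), its structure can be captured as a simple record. The sketch below is purely hypothetical — the GDPR prescribes no such automation, and a real LIA turns on human judgment — but it shows how the outcomes of the three tests might be recorded:

```python
from dataclasses import dataclass

@dataclass
class LIARecord:
    """Minimal record of a legitimate interests assessment (illustrative only)."""
    interest: str               # purpose test: the legitimate interest pursued
    is_necessary: bool          # necessity test: no less intrusive alternative exists
    individuals_override: bool  # balancing test: individual rights and interests prevail

    def basis_available(self) -> bool:
        # Article 6(1)(f) applies only if all three parts are satisfied
        return bool(self.interest) and self.is_necessary and not self.individuals_override

# Example: routine fraud-prevention processing customers would reasonably expect
lia = LIARecord(interest="fraud prevention",
                is_necessary=True,
                individuals_override=False)
print(lia.basis_available())  # True
```

Keeping such a record alongside the written reasoning supports the accountability obligations discussed later in this guidance.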
A wide range of interests may be legitimate interests. They can be your own interests or the interests of third parties, and commercial interests as well as wider societal benefits. They may be compelling or trivial, but trivial interests may be more easily overridden in the balancing test.
The GDPR specifically mentions use of client or employee data, marketing, fraud prevention, intra-group transfers, or IT security as potential legitimate interests, but this is not an exhaustive list. It also says that you have a legitimate interest in disclosing information about possible criminal acts or security threats to the authorities.
‘Necessary’ means that the processing must be a targeted and proportionate way of achieving your purpose. You cannot rely on legitimate interests if there is another reasonable and less intrusive way to achieve the same result.
You must balance your interests against the individual’s interests. In particular, if they would not reasonably expect you to use data in that way, or it would cause them unwarranted harm, their interests are likely to override yours. However, your interests do not always have to align with the individual’s interests. If there is a conflict, your interests can still prevail as long as there is a clear justification for the impact on the individual.
When can we rely on legitimate interests?
Legitimate interests is the most flexible lawful basis, but you cannot assume it will always be appropriate for all of your processing.
If you choose to rely on legitimate interests, you take on extra responsibility for ensuring people’s rights and interests are fully considered and protected.
Legitimate interests is most likely to be an appropriate basis where you use data in ways that people would reasonably expect and that have a minimal privacy impact. Where there is an impact on individuals, it may still apply if you can show there is an even more compelling benefit to the processing and the impact is justified.
You can rely on legitimate interests for marketing activities if you can show that how you use people’s data is proportionate, has a minimal privacy impact, and people would not be surprised or likely to object – but only if you don’t need consent under PECR. See ICO’s Guide to PECR for more on when you need consent for electronic marketing.
You can consider legitimate interests for processing children’s data, but you must take extra care to make sure their interests are protected. See our detailed guidance on children and the GDPR.
You may be able to rely on legitimate interests in order to lawfully disclose personal data to a third party. You should consider why they want the information, whether they actually need it, and what they will do with it. You need to demonstrate that the disclosure is justified, but it will be their responsibility to determine their lawful basis for their own processing.
You should avoid using legitimate interests if you are using personal data in ways people do not understand and would not reasonably expect, or if you think some people would object if you explained it to them. You should also avoid this basis for processing that could cause harm, unless you are confident there is nevertheless a compelling reason to go ahead which justifies the impact.
If you are a public authority, you cannot rely on legitimate interests for any processing you do to perform your tasks as a public authority. However, if you have other legitimate purposes outside the scope of your tasks as a public authority, you can consider legitimate interests where appropriate. This will be particularly relevant for public authorities with commercial interests.
See our guidance page on the lawful basis for more information on the alternatives to legitimate interests, and how to decide which basis to choose.
How can we apply legitimate interests in practice?
If you want to rely on legitimate interests, you can use the three-part test to assess whether it applies. We refer to this as a legitimate interests assessment (LIA) and you should do it before you start the processing.
An LIA is a type of light-touch risk assessment based on the specific context and circumstances. It will help you ensure that your processing is lawful. Recording your LIA will also help you demonstrate compliance in line with your accountability obligations under Articles 5(2) and 24. In some cases an LIA will be quite short, but in others there will be more to consider.
First, identify the legitimate interest(s). Consider:
- Why do you want to process the data – what are you trying to achieve?
- Who benefits from the processing? In what way?
- Are there any wider public benefits to the processing?
- How important are those benefits?
- What would the impact be if you couldn’t go ahead?
- Would your use of the data be unethical or unlawful in any way?
Second, apply the necessity test. Consider:
- Does this processing actually help to further that interest?
- Is it a reasonable way to go about it?
- Is there another less intrusive way to achieve the same result?
Third, do a balancing test. Consider the impact of your processing and whether this overrides the interest you have identified. You might find it helpful to think about the following:
- What is the nature of your relationship with the individual?
- Is any of the data particularly sensitive or private?
- Would people expect you to use their data in this way?
- Are you happy to explain it to them?
- Are some people likely to object or find it intrusive?
- What is the possible impact on the individual?
- How big an impact might it have on them?
- Are you processing children’s data?
- Are any of the individuals vulnerable in any other way?
- Can you adopt any safeguards to minimise the impact?
- Can you offer an opt-out?
You then need to make a decision about whether you still think legitimate interests is an appropriate basis. There’s no foolproof formula for the outcome of the balancing test – but you must be confident that your legitimate interests are not overridden by the risks you have identified.
Keep a record of your LIA and the outcome. There is no standard format for this, but it’s important to record your thinking to help show you have proper decision-making processes in place and to justify the outcome.
Keep your LIA under review and refresh it if there is a significant change in the purpose, nature or context of the processing.
If you are not sure about the outcome of the balancing test, it may be safer to look for another lawful basis. Legitimate interests will not often be the most appropriate basis for processing which is unexpected or high risk.
If your LIA identifies significant risks, consider whether you need to do a DPIA to assess the risk and potential mitigation in more detail. See our guidance on DPIAs for more on this.
What else do we need to consider?
You must tell people in your privacy information that you are relying on legitimate interests, and explain what these interests are.
If you want to process the personal data for a new purpose, you may be able to continue processing under legitimate interests as long as your new purpose is compatible with your original purpose. We would still recommend that you conduct a new LIA, as this will help you demonstrate compatibility.
If you rely on legitimate interests, the right to data portability does not apply.
If you are relying on legitimate interests for direct marketing, the right to object is absolute and you must stop processing when someone objects. For other purposes, you must stop unless you can show that your legitimate interests are compelling enough to override the individual’s rights. See our guidance on individual rights for more on this.
The Article 29 Working Party includes representatives from the data protection authorities of each EU member state. It adopts guidelines for complying with the requirements of the GDPR.
There are no immediate plans for Article 29 Working Party guidance on legitimate interests under the GDPR, but WP29 Opinion 06/2014 (9 April 2014) gives detailed guidance on the key elements of the similar legitimate interests provisions under the previous Data Protection Directive 95/46/EC.
Cumulative Impact Study
This study aims to examine the cumulative effects of activities and practices in the Lower Platte River Corridor over time and their impact on the terrestrial and aquatic habitats of the Platte River.
Scope development was completed in August 2005.
Data acquisition was the focus of Phase II. Compiling aerial photos and transect data for six time periods (1850, 1938, 1950s, 1970s, 1993, and 2003) with land-use classification led to a hydrologic study examining changes in the river over time and to the development of an online internet mapping service for accessing the GIS information. A final report on the Cumulative Impact Study (CIS), Phase II was completed in September 2008.
For access to the CIS interactive GIS program, click here.
Prediction Model Development: Meetings for the development of a Conceptual Ecological Model are held throughout Phase III to identify missing information needed to: determine the character of the river, assess threats to endangered and threatened species, identify the processes of concern, and prioritize research and management actions. A select group of representatives of UNL, USFWS, USACE, USGS, NGPC, and the NRDs continues to identify components of the conceptual model and the "knowns" and "gaps" in the research. In spring 2011, this group and the LPRCA made significant headway in articulating the basic components of the river's system and how they relate to one another.
Research: Phase III research has focused on water flow and how it affects sediment transfer. Using data tools from Phase II and Phase III, we can identify how water flow and sediment changes could affect the amount of habitat for threatened and endangered species. The USGS, in coordination with the Army Corps of Engineers, spent the summer of 2010 collecting sediment samples and GIS cross-sections of the river, and then conducted a sediment budget analysis. Draft results of their studies, entitled "Sediment Samples and Channel-Geometry Data, Lower Platte River Watershed in Nebraska, 2010" and "Geomorphic Classification and Evaluation of Channel Width and Emergent Sandbar Habitat Relationships on the Lower Platte River, Nebraska", are available and can be viewed via the link below. The sediment budget analysis was completed by USGS and the Corps of Engineers in 2014. The USGS report can be found below or in the Publications section of the website. A final full report of all 3 phases of the CIS is expected in 2015.
Future of the CIS: Items identified as priorities for the next phase of the CIS include: a 3-year Sandbar Monitoring study with USGS; a full reconnaissance study of bank stabilization along the Lower Platte; and continued development of the conceptual model. | 0.7664 | FineWeb |
Build from scratch an Automatic Speech Recognition system that can recognise spoken numerical digits from 0 to 9. We discuss how Convolutional Neural Networks, the current state of the art for image recognition systems, might just provide the perfect solution!
This is a beginner-level tutorial for practicing coding in Python. Prove a piece of trivia from the famous sitcom Friends using simple pattern recognition and basic scripting in Python. You will also get familiar with some built-in modules in Python.
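To see why CNNs transfer so naturally from images to audio: the waveform is first turned into a spectrogram (a time–frequency "image"), and a convolutional layer then slides a small filter across it, applying the same pattern detector at every position. The sketch below (NumPy, toy data — not the tutorial's actual model) implements that sliding operation directly:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D 'valid' convolution (really cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "spectrogram": 8 frequency bins x 10 time frames, with one bright ridge
spec = np.zeros((8, 10))
spec[3, 2:8] = 1.0

# A 3x3 filter that responds to horizontal ridges wherever they occur
kernel = np.array([[-1, -1, -1],
                   [ 2,  2,  2],
                   [-1, -1, -1]], dtype=float)

feature_map = conv2d_valid(spec, kernel)
print(feature_map.shape)  # (6, 8)
```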
All of the conflicts selected for inclusion in the Shenandoah Valley Study have been referred to by historians as battles, but the range of comparison among these battles is so large that use of the term ``battle'' to describe all equally could be questioned.
The nineteenth- and early twentieth-century archivists who compiled the Official Records, and other event lists and chronologies used a ranking system of ``battle,'' ``engagement,'' and ``action'' based on the command structure of the forces engaged (typically the Union forces engaged). Rather than providing guidance as to the size and intensity of an encounter, these terms tell us only that: a battle was directed by the ranking general of the military district and involved the bulk of the forces under his command; an engagement might be directed by a subordinate leader or involve only a portion of the armies in the field; an action was a conflict, typically limited in scope, that could not be easily labeled a battle or an engagement. This early ranking system was not designed to describe or interpret events but to award appropriate plaudits to the commanding officers and the units involved.
Figure 10 portrays a range of comparison among the battlefields selected for the Shenandoah Valley Study, ranking them according to the relative size of the forces engaged and indicating their traditional ranking of battle (B), engagement (E), or action (A). The figures provided are the best approximations that can be offered, considering the uneven reliability of the sources. Confederate strengths, in particular, are often only estimated since many Confederate records were lost. Also, the full forces of one army or the other were not always brought to the field and were not all engaged. The number of troops on the field and actively engaged must be estimated, and existing estimates often differ widely.
A second way to compare battles is to rank the number of fatalities incurred at each. More deaths in a conflict typically equated to determined, close-quarters fighting. Battles of maneuver and surprise, on the other hand, often resulted in lower numbers of fatalities and higher numbers of captured and missing. Figure 11 shows the Shenandoah Valley battlefields ranked according to the approximate number of fatalities.
A third way to compare the battles is to rank attrition (total killed, wounded, captured, and missing) of the forces engaged, a useful measure of a battle's influence on the progress of its campaign. High attrition rates incurred by one side or the other in a single battle might cripple its force and compel a retreat. In many cases, higher than average attrition rates resulted from a disastrous rout by one side or the other with large numbers of prisoners falling into enemy hands. Figure 12 provides a ranking by estimating the combined attrition of the forces engaged.
The battles of Opequon and Cedar Creek stand out in terms of size, fatalities, and attrition. Although the size of Confederate armies in the Valley remained surprisingly consistent from 1862 to 1864, averaging 16,000-24,000 men, the size of the Union armies increased dramatically under Sheridan's command in 1864, to nearly 40,000. At Opequon, Sheridan outnumbered Early 2.6 to 1, and both armies were fully engaged. Together, Opequon and Cedar Creek accounted for nearly 52 percent of the fatalities of the fifteen battles and 43 percent of the combined attrition. Considering that these two battles were fought only a month apart, the toll, in the context of Valley warfare, is staggering.
In the six representative battles of Jackson's 1862 Campaign, the Confederate army inflicted 393 fatalities at a cost of 367 dead (total 760). This ratio is near parity. Looking at attrition, the tally diverges more dramatically. The Union armies suffered about 6,400 casualties compared to Confederate losses of 2,745 (total 9,145). Many of the surplus Union casualties were prisoners taken at First Winchester and Front Royal.
In the six representative battles of the Early-Sheridan 1864 campaign, the Confederate army inflicted 1,587 fatalities at a cost of 776 dead (total 2,363), a two-to-one ratio. Overall, however, the Union armies closed the gap somewhat, suffering about 12,890 casualties compared to Confederate losses of 9,130 (total 22,020), a ratio of about three-to-two. These figures provide a useful comparison of scale between the 1862 and 1864 campaigns.
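The tallies and ratios quoted for the two campaigns can be checked directly from the stated figures; the short sketch below (plain Python) reproduces the arithmetic:

```python
# 1862 campaign (six representative battles): Union vs. Confederate
union_dead_1862, csa_dead_1862 = 393, 367
union_attrition_1862, csa_attrition_1862 = 6400, 2745
assert union_dead_1862 + csa_dead_1862 == 760          # near parity in fatalities
assert union_attrition_1862 + csa_attrition_1862 == 9145

# 1864 campaign (six representative battles)
union_dead_1864, csa_dead_1864 = 1587, 776
union_attrition_1864, csa_attrition_1864 = 12890, 9130
assert union_dead_1864 + csa_dead_1864 == 2363
assert union_attrition_1864 + csa_attrition_1864 == 22020

# Ratios quoted in the text
print(round(union_dead_1864 / csa_dead_1864, 2))            # ~2.05, "two-to-one"
print(round(union_attrition_1864 / csa_attrition_1864, 2))  # ~1.41, roughly "three-to-two"
```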
Numbers engaged, fatalities, and attrition rates are indicators of how intensely a battle was fought. Yet these indicators tend to obscure the strategic significance of some of the smaller conflicts. While it is true that the larger battles achieved significance by sheer firepower and weight of numbers, the significance of a battle is best determined by its campaign context, a context that must be carefully assessed as to its influence on regional and national events. Often it was the battle that was not fought or the conflict cheaply won, that determined the course of a campaign and the ultimate strategic and political outcome. Thus a battle, such as Front Royal, which was won at little cost to Stonewall Jackson, attains a heightened importance when examined in light of his strategy of flanking the main Union army at Strasburg. Jackson's tactical loss at First Kernstown, for example, achieved strategic success by diverting thousands of Union soldiers as reinforcements to the Valley. Future historians will continue to debate the relative significance of these events.
Written and illustrated by Honoria Tox
The moon flickers like a gaslight behind the torn, torrid clouds as I watch out the upper window, straining my ears for the sound of horse-hooves. The earth falls away from my home and down to the river, only one thin horse-trail separating its wildness from mine; and the darkness courses above us.
I sigh at the silence, leaving the window to move about the room: first to the stack of thick azure paper that sits on my work-bench. I cut the paper into cottony slices with my knife in strong, swooping gestures, like a factory-woman tossing the shuttle-cock back and forth across a loom. I fold the paper with quick, skilled strokes, my dainty fingers darting them into points and curves. Then I fit them with their mechanisms, small gears and springs thrust into their wings, and set them free: a hundred tiny blue-birds, my automata, winding their way through the air and into the night, flapping all their pretty wings against the moonlight as they go. | 0.7993 | FineWeb |
Bradenton Christian School was established in 1960 with the goal of providing an academically rich education built on the infallible Word of God. Over the years, the curriculum offerings have expanded and include both Christian and secular texts to provide the best possible educational tools. Yet each subject is taught from a Christian perspective to ensure each child understands how the Word of God applies.
Admission is offered to students with a broad range of academic abilities. Yet BCS students consistently score an average of one and a half to two years above their grade level on the Iowa Test of Basic Skills, an assessment tool given each year through grade 7.
The curriculum elements include:
- Social Studies
- Language Arts
- Resource Room
- Physical Education
- Bible / Spiritual Development
- Band / Strings / Music Appreciation (Grades 5-6)
Innovative activities inside and outside the classroom bring learning to life. | 0.9964 | FineWeb |
Excel always interprets the "." on my keyboard as ",". My regional settings are correct ("," is the decimal separator) and my keyboard layout is correct (FR-BE). I can't find how to change this behavior, which is specific to Excel (and PowerPoint) only. All the other applications use the actual keyboard key ("."), but Excel uses the decimal separator from the regional settings instead. In my case, the sign on the numeric pad of my keyboard is a dot, not a comma, so it is NOT a decimal separator. How can I change that behavior?
What is the operating system? Which version of Excel is installed on the computer?
You may try following these steps:
Excel 2000 – 2003: Tools Menu - Options - International Tab - Separators > check ‘Use System Separators’ > then click ok.
Excel 2007: Office button > Excel options > Advanced > Editing options >check the ‘Use System separators’ > click ok.
Click on Start > Control Panel > Clock, Language and Region (Regional and languages) > Change the date, time, or number format > Click on Additional settings button. This would display the Customize Format window where the Decimal Separator is defined.
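Excel's "Use System Separators" option mirrors a general OS-level mechanism: applications ask the regional settings which characters to use. Python's `locale` module exposes the same information, which the sketch below queries (illustrative only — it inspects Python's locale, not Excel's option):

```python
import locale

# Adopt the user's regional settings for number formatting
locale.setlocale(locale.LC_NUMERIC, "")

conv = locale.localeconv()
print("decimal separator:", repr(conv["decimal_point"]))
print("thousands separator:", repr(conv["thousands_sep"]))

# Formatting honours those separators, just as Excel does when
# "Use system separators" is enabled
print(locale.format_string("%.2f", 1234.56, grouping=True))
```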
Why should we care about lake mud? Part II
The Great Lakes hold 20% of the Earth’s surface fresh water, and are important natural and economic resources for the US and Canada. During the past several thousand years they have been strongly influenced by climate change and the evolving glacial landscape of the Great Lakes region. Lake Erie, the shallowest, has been very sensitive to environmental changes during its Holocene evolution and also to human influences during the modern era.
Lake level changes impact erosion rates; temperature shifts affect productivity and water chemistry; precipitation changes influence inflow from rivers and the Upper Great Lakes. Understanding these relationships and predicting future trends is important to maintaining these crucial natural resources. Also, understanding Lake Erie’s past climate is essential to predicting how the Great Lakes region will respond to both natural and human-induced climate change in the future.
My senior thesis project investigates Lake Erie’s history by analyzing sediments deposited in the lake’s eastern basin during the Holocene (to about 3,500 years ago). Since changes in the physical, biological, and chemical proxies found in these lake sediments can be influenced by a variety of factors, clear identification of the primary factor or factors acting at the time of deposition is not always possible. However, good interpretations can be made based on a critical analysis of the combined data. In these Lake Erie sediments, we use a variety of proxies, looking at relationships between them to better understand the lake’s paleo-depositional environment.
How do we know when proxy changes happen? (Or, how do we date mud?)
Cores can be correlated by matching magnetic susceptibility peaks that appear in sediment across the central and eastern basins. Radiocarbon dates from above the magnetic susceptibility shift are out of stratigraphic order, indicating contamination or sediment re-working. An approximate age of 2900 14C yrs BP for the shift in our Station 23 sediment was estimated from dates immediately above and below the shift.
CONCLUSIONS (so far)
Multi-proxy data from a Lake Erie sediment core indicate a warm climate event, peaking at about 2900 14C years BP, followed by a period of greater climate variability.
Lake Erie’s climate record differs from New York lake records, potentially indicating high regional variability.
Understanding the response of Lake Erie to climate change is crucial to predicting and preparing for future changes. Because it has such a shallow basin, Lake Erie’s water levels are particularly sensitive to climate. As we continue to see shifts in regional and global temperatures, and as human impact on the Great Lakes increases, we need to prepare for major environmental consequences.
Lake level fluctuations will impact coastal wetlands, commercial shipping, pleasure boating, and beach erosion. Temperature and water chemistry changes will impact primary productivity, fisheries, and invasive organisms. Further high-resolution paleo-climate work needs to be done in order to address these concerns. | 0.9945 | FineWeb |
Finally, you can slim and contour the area below and around your chin and neck – without surgery. Dr. Covey is proud to be one of the first physicians to offer the revolutionary new Kybella procedure: the first and only FDA-approved, non-surgical treatment to reduce submental fullness, more commonly known as "double chin." Submental fullness affects both men and women, and can be influenced by several factors such as aging, genetics and weight gain. Submental fullness is often resistant to diet and exercise and can detract from a balanced facial appearance, resulting in an older and heavier look. According to a 2014 survey conducted by the American Society for Dermatologic Surgery, 68 percent of people said they are bothered by their double chin. With Kybella, we can achieve surgical results without the pain and downtime typically associated with traditional surgery. These injections may even have the potential to replace liposuction.
Kybella is a series of injections in the chin and neck area to contour and improve the appearance of moderate to severe submental fullness due to submental fat. The active ingredient in Kybella is deoxycholic acid, a naturally-occurring molecule in the body that aids in the breakdown and absorption of dietary fat. When injected into the fat around the chin and neck, Kybella causes the destruction of fat cells. Once destroyed, the cells in the treated area can no longer store or accumulate fat so re-treatment is not expected. Kybella injections are tolerable, as topical anesthesia is used to numb the skin. Post treatment, you can immediately return to work or normal daily activities as Kybella treatments require no downtime.
Benefits of Kybella:
- Kybella is safe, effective and non-invasive
- Kybella requires no downtime – you can resume normal activities immediately
- Kybella treatments are performed in approximately 15-20 minutes
- Kybella is capable of providing permanent improvement to treated areas
Frequently Asked Questions
How many treatments are necessary?
Dr. Covey will provide a tailored treatment plan depending on your needs and aesthetic goals. A series of injections will be administered at each treatment session. Usually, two to four treatment sessions will achieve your desired look.
How often will I need to visit Dr. Covey for treatments?
Kybella treatments are usually spaced a month or more apart.
How long do the results last?
Kybella works by destroying the unwanted fat cells so they cannot store or accumulate future fat in the treated areas. So, your newly contoured look will last, and last. | 0.5524 | FineWeb |
Changes in literacy practices, created by rapidly evolving technologies, have had many implications for the teaching and learning of literacy. This synthesis will reflect on the ten annotated articles to highlight how literacy teaching and learning has changed, and how teachers can best assist students in their learning. Traditionally, literacy was taught via approaches such as "drilled in skills", or in immersion processes, drowning students in experiences of print and visuals prior to developing semantic, syntactic, or phonological skills (Henderson, R., slide 3). These pedagogies belonged to a time when texts were explicitly from a two-dimensional print-based world of books and images (NSW Department of Education and Training, p. 3).
Nowadays, the very concept of "text" encompasses print and digital modes through what Cope and Kalantzis (2009) define as Learning by Design. These designs set out how students make meaning in all modes of texts via the linguistic, visual, audio, gestural, spatial and multimodal aspects (New London Group, p. 78). Having the ability to comprehend or interpret the design modes in literacy ensures students become multiliterate in today's technological society. This requires not only cognitive practice but also an understanding, or an awareness, of the social concepts (Anstey and Bull, p.), aiming to empower students through literacy to read the "word and the world", encouraging them first to identify texts as social constructions and then to analyse their meanings (Freire & Macedo, 1987).
It is little wonder that students are more adept at newer technologies than teachers, as these technologies have embedded themselves into the culture of the students, taking on complex roles and new mindsets in regard to communication (Asselin and Moayeri, p. 1). Blogs, Skype and texting are just a snippet of the new forms of communication, transforming the very act of literacy learning, and progressing at such a rate that pedagogical practices are falling behind (Marsh, p. 13).
There is no one right way to teach literacy skills, but there are a number of pedagogical approaches that benefit the learning process; didactic teaching and discovery-based and exploratory approaches are just a few (National Curriculum Board, p. 16). These styles provide grounded experiences that are meaningful to students and relatable to their personal experiences both in and out of school. Structured dialogue is another approach teachers can take on board, as it builds robust learning environments and improves learning outcomes (Abbey, 2010). Dialogue, along with pedagogies and technologies, develops the cognitive mind as well as having social functions that enhance students' vocabulary.
There are also a number of frameworks that assist with both the teaching and learning of literacy, and the Four Resource Model is one framework that lends itself to all subject areas (Santoro, p. 52; Stewart-Dore, p. 6), seeing literacy taught across all domains of the curriculum, not just in English. Faced with a digitally driven and globalised world, teachers must adopt a pedagogy of multiliteracies and embed the new technologies into the learning frame in order to develop inclusivity, cultural knowledge and connectedness to the real world (Mills, p. 7).
Abbey is an Australian consultant and researcher with much experience in the government and community sectors. His article explores the benefits of structured dialogue and examines a four-dimensional model and a stage-by-stage process for teachers to implement in the learning environment. Through his research, he suggests that new pedagogies and technologies need to align in order to bring optimal performance to classrooms. Abbey argues that pedagogy and technology need to merge in order to transform classroom conversations into structured dialogue, developing cognitive as well as social functions. How students are read to is just as important as how often they are read to, as this will enhance their vocabulary.
Anstey and Bull's article explores the term multiliteracies and the skills required by students to be cognitively and socially literate with the technology used. The implication for pedagogy begins with examining what constitutes text in an age of multimedia. Previously, education worked within paper-based text, hence a linguistic semiotic system dominated literacy pedagogy; however, as texts are increasingly multimodal, the term "literate persons" requires knowledge of all five semiotic systems as well as an understanding of how they work together. This means that teachers need to help students explore the changing nature of texts as they develop understandings about them.
Asselin and Moayeri's article offers examples of classroom practices drawing on the social elements of "social webbing" (Web 2.0), which they believe are necessary in extending students' ideas of new literacies. Expanded literacies for learning with Web 2.0 include criticality, metacognition, reflection and skills, all needed for creating and publishing, yet schools continue to use Web 1.0 for games/activities and resources. The authors suggest social bookmarking sites as examples of collaborative cataloguing and indexing tools due to their collaborative nature of ranking information based on the number of people who have bookmarked it. The use of these technologies provides students with a collaborative environment, with them being active participants in the development of new social literacy practices.
Cope and Kalantzis refer to the New London Group's theory of multiliteracies pedagogy. They believe that because the world and its environment are changing, pedagogy needs to change also. Instead of the traditional basics of reading, they call for a transformative pedagogy, allowing the learner to actively analyse and apply meaning making in four major dimensions of teaching. This article suggests that empirical activities will aid in the development of strategies for diversity among students, enabling equity within the classroom and enabling students to be active participants in their learning, providing them with the framework to be literate participants in society.
Marsh's analysis suggests that schools take into account the ways in which students are engaged in "innovative literacy practices" in order to adopt productive pedagogies. Because of the range of learning opportunities afforded by digital technologies, new pedagogical approaches are required in schools if the content is to be engaging and appropriate, and if students are to become competent and effective analysers and producers of a range of multimodal texts. Marsh draws on Bernstein's (2000) Pedagogic Recontextualizing Field in relation to literacy learning and education to critique two different pedagogies (the National Literacy Strategy and Productive Pedagogies). Schools need to revisit how they teach literacy and information and communication technologies, and attempt to meld the two in order to achieve a more productive pedagogy.
Mills' research paper looks at the findings of research regarding the interactions between pedagogy and access to multiliteracies among culturally and linguistically diverse learners. Conducted in an upper primary classroom in a low socio-economic area, Ms Mills's research used multiliteracies pedagogy and a critical ethnographic methodology. Unfortunately, the observations made by Ms Mills showed the teacher's relapse into existing pedagogies and traditional text, thus prohibiting access to culturally diverse textual practices and multimodality. This article highlights the shortcomings of translating theories into classrooms as well as the importance for teachers of constantly re-evaluating their pedagogical beliefs and practices.
The New South Wales Government delves into how digital technologies affect learning environments via teacher pedagogy, the nature of the learner, and reading and writing. They acknowledge that although these are still central to being literate, globalisation has created new literacy needs, which should equip students to become critical creators and consumers of the information they encounter. They draw on a range of frameworks they believe are influential in determining curriculum content, yet applying these frameworks alone does not ensure success in literacy learning among students. Pedagogical beliefs and knowledge of technology are also important, ensuring teachers have understandings of what technology and media do. Educators need to adjust their literacy practices in order to stay at least on par with the changes occurring in literacies.
Santoro's perspective in this article is that literacy learning is a complex set of practices operating within a variety of texts and within certain sets of social situations. He contrasts this with teachers who believe that once students have learnt to read and write, they are able to do so in all contexts. Santoro quells these beliefs by pointing out that there are many distinctive school and social literacies characterised by written, oral, aural, visual, digital and multimodal texts. Santoro advocates the use of the four-resource model as a "valuable tool" for middle-years teachers and student teachers.
Students need to be strategic learners, acquiring a multitude of skills and strategies enabling them to gain, construct and communicate new knowledge whilst building higher-order thinking skills and experiences, according to Stewart-Dore. He examines the popular reading frameworks and touches on their shortcomings (linear, systematic progression, lacking in critical reflection regarding content and processes). In turn, Stewart-Dore proposes an alternative framework in the Practising Multiliteracies Learning Model, comprising four phases: accessing knowledge, interrogating meanings, selecting and organising information, and representing knowledge. This article suggests that teachers require some guidelines ensuring their teaching strategies are appropriate to literacy education.
The New London Group argues that the cultural and linguistic diversity occurring in society calls for more extensive views of literacy than the traditional language-based approaches. This article, written by ten academics, is concerned with the changes occurring in literacy due to globalisation, technology, and social and cultural diversity. It was through them that the term "multiliteracies" was coined, acknowledging the many diverse ways that literacy is used. This new approach to literacy pedagogy combats the "limitations of traditional" pedagogies, taking on a transformative approach by introducing the "what" and "how" of literacy pedagogy. This article has been very influential regarding literacy within the educational system.
I'm currently developing a game, and one of the features involves breeding the creatures you collect.
Since it's a game, a large amount of promiscuity is to be expected, and individuals could potentially have hundreds of siblings.
How exactly can I go about presenting this information?
My current setup is along these lines:
|Grandparent 1|Grandparent 2|Grandparent 3|Grandparent 4|
| Parent 1 | Parent 2 |
|Individual | Siblings (if any) in list form |
|Partner 1 | Children of Individual and Partner 1 |
|Partner 2 | Children of Individual and Partner 2 |
.....
It works, and by clicking on a relative you can make them the focus of the tree. But it just seems clunky and I don't think it's particularly user-friendly.
Can anyone suggest a suitable way to go about presenting this information? | 0.7314 | FineWeb |
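Whatever presentation you settle on, it helps to separate the data model from the view: store only the parent links, and derive siblings, partners, and per-partner children on demand when the focused individual changes. A minimal sketch in Python (the class and method names here are hypothetical, not from any particular engine):

```python
from collections import defaultdict

class Pedigree:
    """Stores parent links only; all other relationships are derived."""

    def __init__(self):
        self.parents = {}                   # child -> (parent_a, parent_b)
        self.children = defaultdict(list)   # parent -> [children]

    def record_birth(self, child, parent_a, parent_b):
        self.parents[child] = (parent_a, parent_b)
        self.children[parent_a].append(child)
        self.children[parent_b].append(child)

    def siblings(self, individual):
        """Full and half siblings: anyone sharing at least one parent."""
        if individual not in self.parents:
            return set()
        sibs = set()
        for parent in self.parents[individual]:
            sibs.update(self.children[parent])
        sibs.discard(individual)
        return sibs

    def partners(self, individual):
        """Everyone this individual has produced offspring with."""
        return {p for child in self.children[individual]
                for p in self.parents[child] if p != individual}

    def children_with(self, individual, partner):
        """Children shared with one specific partner (one view row)."""
        return [c for c in self.children[individual]
                if partner in self.parents[c]]
```

With promiscuous creatures and hundreds of siblings, this also lets the UI paginate or collapse each derived list instead of rendering everything at once.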
Mangle
Hide your e-mail address from spammers. This script changes a spam-proof e-mail address into a readable, mailto address link.
Simply click inside the window below, use your cursor to highlight the script, and copy (type Control-c or Apple-c) the script into a new file in your text editor (such as Note Pad or Simple Text) and save (Control-s or Command-s). The script is yours!!! | 0.8048 | FineWeb |
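The actual Mangle script is JavaScript pasted into your page, and its exact encoding scheme isn't shown here. Purely to illustrate the general idea, here is a sketch in Python using a made-up "[at]/[dot]" scheme (the function name and encoding are assumptions, not Mangle's real ones):

```python
def demangle(mangled):
    """Turn a spam-proof address like 'user [at] example [dot] com'
    back into a real address, then wrap it in a mailto link."""
    address = (mangled.replace(" [at] ", "@")
                      .replace(" [dot] ", "."))
    return f'<a href="mailto:{address}">{address}</a>'

print(demangle("user [at] example [dot] com"))
```

The point of any such scheme is that harvesters scanning raw HTML never see a literal address; only a visitor's browser (or here, the function) reassembles it.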
The impact plan sets out what the prospective impact is, and how the organisation proposes to generate it. The assessment of impact risk appraises the plan for its validity, and for the confidence it inspires that the organisation, through carrying out its activities and delivering its outputs, will achieve the intended outcomes, and generate real positive change.
- impact risk
Impact risk is a measure of the certainty that an organisation will deliver on its proposed impact, as detailed in the impact plan. The implied question is: how likely is the impact plan to work, and what is the risk that the impact won’t be generated?
An assessment of impact risk looks to the impact plan for six key qualities:
Is the impact plan explicit in all particulars?
The starting point for any structured and rational treatment of impact is being explicit. This involves ensuring that the impact plan displays:
The impact plan articulates clearly each of its components and the linkages between them. This includes setting out what will be done, what processes will be used, and how the activities — within the defined context, and in combination with other conditions — will bring about the desired change.
The impact plan is specific and concrete about what is to be used (resources, budget), who will be effected (target beneficiaries and their context), what is to be achieved (how much, how many), and the timelines involved (when will the activities be carried out, and the change happen). The impact plan is concrete also regarding the measurement system that will be used to track what is taking place.
The impact plan gives a fair, true and complete picture of the processes and changes it presents, including implicit claims and assumptions, and appropriate consideration of how the change relates to other factors and the surrounding environment (including impacts upon other stakeholders). These are covered in the conditions for change and context of change sections of the impact plan. An impact plan that covers only the organisation’s own processes, with no address of the context, is deemed to be incomplete.
A full address of the context, and all the ramifications of change (including deadweight, displacement, attribution, drop off, and unintended consequences), is likely to be beyond the scope of most impact plans, and the organisation must therefore make an assessment of materiality — i.e. a determination of the bounds of what is relevant and material to include in a true account of the impact. The impact plan is explicit as to where these bounds of materiality lie. The information that is deemed material is therefore provided, and gaps or holes in the information, or links that are unproven, are acknowledged and justified.
Does the impact plan present a compelling and well-reasoned theory of change?
Once the impact plan and its various components have been laid out explicitly, attention turns to how well reasoned an overall narrative or theory of change it presents. Pertinent questions include:
- Do the mission and activities express a coherent response to the context (i.e. the problem and the target beneficiaries)?
- Is the link between the proposed outputs and the anticipated outcomes thought-through and convincing? Do the outputs really drive the outcomes? Have the conditions for change been addressed, and their role in the change soundly reasoned?
- Is the address of the context of change credible and fair, with the bounds of materiality set at a sensible level?
A full address of the context of change can most likely only be achieved through conducting a control experiment (typically a randomised control trial, or RCT). However this is often impractical given the resources and the scale of operations. Under such circumstances, investors and organisations are often reliant upon a reasoned treatment of the counterfactual (a hypothetical scenario of “what would have happened anyway, what is happening elsewhere, and the role of other factors” that can be used to deal with questions of deadweight, displacement, and attribution).
There may be uncertainties, and therefore impact risk, around how the outcomes are really brought about, and how reliably they are a result of the organisation’s work. Most important to the impact is that the organisation can make a compelling case for how it plays a critical role in the desired change (i.e. without it the change wouldn’t have happened). Backwards-mapping can be a powerful tool for testing the reasoning involved throughout the impact plan.
Is the generation of impact integral to the organisation’s business and operations?
A form of impact risk may arise if there is a potential tension within the organisation between its impact-generating and revenue-generating activities. Where there is a clear financial motive for the organisation to pursue less impactful strategies, and the business and impact interests are in this sense not well-aligned, there is a risk that the operational needs of the business will threaten the impact.
This risk however is greatly reduced if the impact plan is integral to the organisation’s business strategy, operations, and revenue model. In this case, the business plan clearly supports the impact plan, with impact and operational sustainability going hand in hand.
Where there is tension and potential risk regarding the integration of impact into the business model, the investor may look to some form of mission lock or protection via the governance or legal structure of the organisation (e.g. governance obligations, incorporation as a registered charity or CIC).
Is the impact plan feasible?
The question of feasibility focuses mainly on the links in the impact plan between the organisation, its activities and its outputs. For the impact plan to be feasible, it must show:
- the organisation has the resources, capacity, skills and relevant experience to execute the plan
- the operational risks inherent in the plan are identified and addressed, with measures in place to mitigate them where appropriate
A significant aspect of the overall feasibility of the plan will relate to the financial and operational strength of the organisation. This however will generally fall within financial due diligence considerations, and typically go into a credit rating, and be given separate consideration. The question of feasibility, for impact risk therefore, focuses on those aspects not covered in the financial analysis — i.e. assuming credit-related issues are secure, is the impact plan feasible in other respects?
This may include attention to:
- key personnel
Does the organisation have the right people to carry out the plan with respect to impact, with the necessary skills and relevant experience, as well as the vision, leadership and drive?
- operational processes
Does the organisation have processes in place to manage activities, and ensure they are reaching the right beneficiaries, and having the desired effect? Are the activities an effective means to deliver the desired outputs?
Does the organisation have the staff, time, technology and facilities required to carry out activities?
- projections around other factors
Where the impact is reliant upon factors beyond the organisation’s direct control (e.g. conditions in the local economy, support or services to be delivered by other organisations, among the conditions for change), and assumptions are therefore made about them, are these assumptions feasible?
Is there evidence to support the impact plan’s approach to impact generation?
Evidence may include:
- track record
The organisation has carried out similar activities in the past, with robust impact measurement of past performance demonstrating the validity and effectiveness of the approach. For evaluating the track record, see quality of information and verification of results (in 4.2 Impact Reporting). To be considered as convincing evidence, a track record must demonstrate a change in the measured outcome (typically involving pre- and post-intervention measurements), and that, where used, samples are representative, and survey questions are neutral and non-leading. An independent evaluation of the activities and outputs of the organisation, where available, provides the best evidence on this front (and thereby lowest impact risk).
The track records of other organisations, working with similar methods and assumptions, and again appropriately evidenced by measurement, may be used to demonstrate the validity of the approach.
Studies or relevant expert knowledge may be used to back up the claims involved. Research can situate the organisation’s approach in the context of the problem and other relevant interventions, which it may align with or differ from according to the position taken. Research may in particular be used to support the assumptions implicit in the conditions for change, and the treatment of the counterfactual in the context of change. Where available, research on benchmarks can provide an anchor for the organisation’s past results and proposed future performance.
- control groups
The most conclusive evidence of the effectiveness of an intervention is to demonstrate through the use of a control group the difference between the outcomes achieved when the organisation is active, and when it is not. This, properly speaking, is the demonstrable impact: the real change brought about as a clear result of the organisation’s work. However, while randomised control trials (RCTs) represent the gold standard in evidence, they are expensive to carry out, and require specialised skills. It is also important to note that RCTs are significantly more practicable, and therefore favour, interventions of a very specific nature, with easily isolated, testable, and relatively short-term outcomes. Furthermore, RCTs are meaningful only when the sample sizes are large enough for other factors to cancel each other out, and therefore are often applicable only when the intervention is taking place at a relatively large scale. While all this means that it is unlikely there will be a widespread adoption of RCTs throughout the social-purpose sector anytime soon (and especially not at the early-stage end of the spectrum), the lesson is nevertheless a powerful one: that for an intervention to be truly valid, it must be able to outperform a control group. If a specific control group is not set up and monitored, then some evidence as to what such a control group might look like, typically based on research with comparable situations elsewhere, can serve to lower impact risk significantly on this front.
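Where a full RCT is out of reach, even a crude comparison against a reference group makes the logic concrete: the estimated impact is the change observed in the treated group minus the change in the comparison group (a simple difference-in-differences). A toy sketch, purely illustrative and resting on the strong assumption that the two groups would otherwise have moved in parallel:

```python
def mean(xs):
    return sum(xs) / len(xs)

def difference_in_differences(treated_before, treated_after,
                              control_before, control_after):
    """Estimated impact = (change in treated group) - (change in control group).
    Assumes parallel trends: absent the intervention, both groups
    would have changed by the same amount."""
    treated_change = mean(treated_after) - mean(treated_before)
    control_change = mean(control_after) - mean(control_before)
    return treated_change - control_change
```

In the vocabulary above, the control group's change stands in for the counterfactual; subtracting it strips out deadweight that a naive before/after comparison would claim as impact.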
The availability of a track record, precedents, extensive research, and control groups, will depend on a combination of the organisation’s stage of development, and the originality of its approach. Rarely will an organisation be able to provide an exhaustively evidenced treatment of the change, and its interplay with other factors, though it is important to look at what evidence there is, and to consider the impact risk it leaves. Evidence, in so far as it is available, should serve to promote confidence in the impact plan, and in particular in the relationship between the organisation’s proposed activities and outputs, and the outcomes and impact that it is hoped will follow.
For an organisation proposing a completely new idea, and therefore with little or no direct evidence of how well it works, there may still be relevant research it is responding to, and that has informed the development of the approach (i.e. less proving the approach than showing how different approaches have failed in the past, and how this one learns from them). However an organisation working with well-established methods will inevitably have more to draw upon regarding evidence.
As a result, excessive investor demand for high levels of evidence would lead to an inevitable bias toward mature organisations working with tried and tested methods, at the expense of investing in innovative, and in some cases possibly more effective, forms of intervention. The balance between conflicting desires for the impact plans to be, on the one hand evidenced, and on the other, to deliver something new, will depend upon an investor’s mission, strategy and appetite for impact risk. A less well-evidenced, and therefore riskier, approach may ultimately prove to be game-changing, and thereby high impact. These considerations will play into the investment decision when weighing impact risk against other criteria.
Where there is less evidence available, it becomes increasingly important, with regard to impact risk, for the impact plan to be convincingly reasoned (see 2.2.2 above), and evidenceable (see 2.2.6 below).
Will the impact be evidenced by carrying out the impact plan?
An evidenceable impact plan is one that incorporates processes to ensure that carrying out the plan will produce sufficient evidence to demonstrate the outcomes and impact, and prove the approach. This requires that:
- a robust impact measurement system is in place to track outputs and outcomes
- where a link, relationship, assumption or claim is unproven, it is identified, and checks are in place to validate it in the future
- measures will be taken to assess the other factors involved and the true role of the organisation’s outputs in the change (i.e. there is an anticipated address of the conditions for change and context of change — e.g. a reference is identified, or a control group set up, to establish a sense of what happens without the intervention, and to provide a degree of evidence in support of the hypothetical scenario of what would have happened anyway, what is happening elsewhere, and the role of other factors)
- the anticipated evidence is inclusive of the beneficiary perspective (evidence features feedback from beneficiaries, and is communicated to beneficiaries)
The impact plans of potential investee organisations are likely to present theories, links and impacts that are under-evidenced, and in some cases altogether untested. However these may still be testable, and the subject of planned tests. For the confidence of the investor to be gained, it is crucial that the organisation can show effective measures are in place to evidence its impact going into the future, especially when there is a lack of evidence currently.
The impact plan must be clear as to which parts are evidenced, which are unevidenced but will be evidenced by the activities and measurement system proposed, and which will remain essentially reasoned. The timeline for the evidence is also important: if an impact plan is full of unproven elements, the investor will want to know, if the investment is made, what evidence there will be to show whether or not the plan is working by year one, three, five etc..
As the organisation carries out its plan, over the course of operations, and the period of the investment, it is expected that more and more elements will become evidenced. Also, as the organisation matures and scales, its measurement system may be expected to grow in scope proportionally, thus expanding the range of evidenceable and subsequently evidenced aspects of the plan. This will correspond naturally with diminishing impact risk, as operations successfully manifest the impact.
Alternatively, if the approach is failing, the presence of evidence systems will be able to show this, giving the organisation and the investor the opportunity to change course. | 0.9856 | FineWeb |
Color is practically the “lifeblood” of good design, in this case —beaded jewelry designs. Color can work for or against your design. Color can set the mood of a jewelry piece —as an example: use a playful mix of bright colors to express a “happy” design. The key, really, is to combine colors harmoniously in such a way that they attract the eye rather than repelling it.
Here are some tidbits and practical tips on using colors that can work for your designs:
- Use color schemes to build your design ideas —I personally find it a lot easier to start working on a design idea using my favorite scheme (I usually go for a “monochromatic” look) as the framework. As a refresher, here are four of the most used color schemes every serious artist should know:
- Monochromatic—uses a key color (example: amethyst) in combination with its various tones, shades and tints (lightness and darkness) to achieve a balanced look
- Complementary—uses a dominant /base color (example: brown) in contrast with the color directly across it in the color wheel —use the complementary color as accent
- Analogous—the curious combination of colors right next to each other on the color wheel (example: red, red-orange and orange); may not be as vibrant as the complementary scheme though a lot richer than the monochromatic
- Split-complementary—this is a variation to the standard “complementary” scheme in that it uses a key color (example: violet) in combination with its complementary color’s two adjacent colors achieving a higher contrast
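The wheel arithmetic behind these schemes can be sketched in a few lines of Python. The hue offsets below follow the standard color-wheel definitions; the function itself is purely illustrative, not part of any design tool.

```python
def scheme_hues(base_hue_deg, scheme):
    """Hue angles (degrees on a 360-degree color wheel) for a scheme.

    Monochromatic stays on one hue and varies tint/shade instead of hue;
    the other schemes rotate around the wheel by fixed offsets.
    """
    offsets = {
        "monochromatic": [0],
        "complementary": [0, 180],            # base plus the hue directly across
        "analogous": [0, 30, 330],            # base plus its immediate neighbors
        "split-complementary": [0, 150, 210], # the two hues flanking the complement
    }
    return [(base_hue_deg + off) % 360 for off in offsets[scheme]]

# A violet base sits near 270 degrees on an RGB hue wheel:
print(scheme_hues(270, "complementary"))        # [270, 90]
print(scheme_hues(270, "split-complementary"))  # [270, 60, 120]
```

Mapping a hue angle back to an actual bead color (amethyst, brown, etc.) is of course a judgment call; the arithmetic only tells you where on the wheel to look.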
- Working around a theme will add character (personal signature) to your designs —with a color scheme in place, a chosen theme will guide the process of creating your design ideas. Choosing design themes can be really easy —it can be according to each season (example: winter), or style (example: classic), or culture (example: ethnic), or occasion (example: bridal)
- Give careful consideration to different color symbolism across cultures (that is, of course, if you plan to sell jewelry across the globe!) —Colors can convey different meanings as much as the written words. Here are a few samples of this cross-cultural color symbolism:
- Black—it symbolizes death, as well as style and elegance in most Western nations. It also implies trust and high quality in China.
- Red—expresses mourning for South Africans, but it signals good luck and fortune for the Chinese. It can also signify masculinity in some parts of Europe.
- Yellow—it distinguishes a feminine character in the US and many countries, but it can convey mourning in Mexico.
- Purple—it is a symbol of expense for most Asian nations, but it signifies mourning in Brazil. It also expresses freshness and good health in many Western nations.
- Green—it signifies high-tech in Japan, but it is a forbidden color in Indonesia. It can also mean luck for Middle East nations
- Blue —it symbolizes immortality in Iran
- Pink —it is the symbol of femininity in the US and most Asian nations
- White —it signifies mourning in Japan and other far eastern nations, but it also conveys purity and cleanliness in most Western nations
- Brown —it means disapproval for the Nicaraguans
The choice and combination of colors make up your color palette. Use your palettes to achieve a pleasant color harmony to make your jewelry designs stand out. | 0.9798 | FineWeb |
PROVIDING for relatives comes more naturally than reaching out to strangers. Nevertheless, it may be worth being kind to people outside the family as the favour might be reciprocated in future. But when it comes to anonymous benevolence, directed to causes that, unlike people, can give nothing in return, what could motivate a donor? The answer, according to neuroscience, is that it feels good.
Researchers at the National Institute of Neurological Disorders and Stroke in Bethesda, Maryland, wanted to find the neural basis for unselfish acts. They decided to peek into the brains of 19 volunteers who were choosing whether to give money to charity, or keep it for themselves. To do so, they used a standard technique called functional magnetic resonance imaging, which can map the activity of the various parts of the brain. The results were reported in this week's Proceedings of the National Academy of Sciences.
The subjects of the study were each given $128 and told that they could donate anonymously to any of a range of potentially controversial charities. These embraced a wide range of causes, including support for abortion, euthanasia and sex equality, and opposition to the death penalty, nuclear power and war. The experiment was set up so that the volunteers could choose to accept or reject choices such as: to give away money that cost them nothing; to give money that was subtracted from their pots; to oppose donation but not be penalised for it; or to oppose donation and have money taken from them. The instances where money was to be taken away were defined as “costly”. Such occasions set up a conflict between each volunteer's motivation to reward themselves by keeping the money and the desire to donate to or oppose a cause they felt strongly about.
Faced with such dilemmas in the minds of their subjects, the researchers were able to examine what went on inside each person's head as they made decisions based on moral beliefs. They found that the part of the brain that was active when a person donated happened to be the brain's reward centre—the mesolimbic pathway, to give it its proper name—responsible for doling out the dopamine-mediated euphoria associated with sex, money, food and drugs. Thus the warm glow that accompanies charitable giving has a physiological basis.
But it seems there is more to altruism. Donating also engaged the part of the brain that plays a role in the bonding behaviour between mother and child, and in romantic love. This involves oxytocin, a hormone that increases trust and co-operation. When subjects opposed a cause, the part of the brain right next to it was active. This area is thought to be responsible for decisions involving punishment. And a third part of the brain, an area called the anterior prefrontal cortex—which lies just behind the forehead, evolved relatively recently and is thought to be unique to humans—was involved in the complex, costly decisions when self-interest and moral beliefs were in conflict. Giving may make all sorts of animals feel good, but grappling with this particular sort of dilemma would appear to rely on a uniquely human part of the brain.
This article appeared in the Science and technology section of the print edition under the headline "The joy of giving" | 0.8689 | FineWeb |
Humans look to nature for inspiration. Fortunately, cancer researchers don’t have to look too hard. Elephants and naked mole rats do exceptionally well at resisting cancer, and we are starting to learn why.
Cancer is caused by mutations—a chance mistake in the genetic code. The greater the number of cells and the longer they live, the greater the chance of mutations.
Elephants don’t go through menopause
By that logic, however, elephants—which have perhaps 100 times as many cells as humans—should have gone extinct from the sheer number of cancers they would face. But only 5% of elephants die of cancer, compared to more than 20% of humans. So why the discrepancy?
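That "more cells, more mutations" logic can be made concrete with a back-of-envelope probability sketch. The per-cell probability below is invented purely for illustration; only the shape of the argument matters, not the numbers.

```python
def p_any_cancer(n_cells, p_per_cell):
    # Chance that at least one of n independent cells initiates a cancer
    # over a lifetime: the complement of "no cell does".
    return 1 - (1 - p_per_cell) ** n_cells

human_cells = 3e13   # rough order-of-magnitude cell count for a human
p = 1e-14            # made-up per-cell lifetime probability, for illustration

print(p_any_cancer(human_cells, p))        # roughly 0.26
print(p_any_cancer(100 * human_cells, p))  # effectively 1.0
```

With a hundred times the cells and the same per-cell risk, cancer would be near-certain, which is exactly why elephants' extra cancer defenses demand an explanation.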
A new study in the Journal of the American Medical Association has one answer. When researchers from a host of US universities studied the genome of elephants, they found 20 copies of the TP53 gene, which is known to help resist cancers by repairing damaged DNA. Humans have only one copy of that gene.
Although the human lifespan has doubled in the last few centuries, it is only because of help from modern medicine. Elephants, on the other hand, have longer lifespans naturally and could not have that without the evolutionary advantage endowed by TP53.
Also, unlike humans, as far as we know, elephants don’t go through menopause. So to ensure that elephant babies born to older females weren’t riddled with badly mutated genes, evolutionary pressure would have created resistance to DNA damage via more copies of TP53.
The enigma of naked mole rats
The story of naked mole rats is even more inspiring. These weird creatures live underground, survive on little oxygen and food, are nearly blind, and, as far as we know, never develop cancer, even when researchers try to induce it through artificial means.
Once the mutations set in, cancer cells proliferate by uncontrolled growth. This happens because the mechanisms inside the cell that usually regulate this process are broken. There are, however, external mechanisms that can help regulate this process. According to a 2013 study published in Nature, naked mole rats seem to exploit this mechanism to resist cancer.
The study found a polymer in between the cells of a naked mole rat, called hyaluronan, which was providing mechanical strength to the cells but also regulating cell growth. The thickness of the polymer determined whether cells grew or not.
When the researchers used an enzyme that degraded the polymer, they found that the rats’ cells started to grow in clusters, just like normal rats’ cells do when they form a tumor. Even better, when they knocked out the genes responsible for producing the polymer and then injected cancer-causing virus, the rats’ cells became cancerous.
We may not yet have a cure for cancer, but such exceptional cases give hope. As Rochelle Buffenstein, a physiologist at the University of Texas Health Science Center, once told me, “As we learn more about these cancer-resistant mechanisms that are effective and can be directly pertinent to humans, we may find new cancer prevention strategies.” | 0.9111 | FineWeb |
There is a right answer to that. But it may not be what you’ve been taught. So in this podcast, Bill and Bryan will review the difference and what they new model of selling requires from you.
Here is the secret: Better positioning leads to less need for persuasion. Listen in and learn!
Also mentioned in the podcast:
- Want a second opinion on your slide deck? Bill and Bryan offer to help 2 people out! Send them your slide deck to [email protected] and we’ll let you know if you’ve been chosen.
- The Golden Circle – How Great Leaders Inspire Action by Simon Sinek
- A clip from Vacation with Chevy Chase | 0.5647 | FineWeb |
Dr. Erich Jauch is a mathematics instructor at UW-Eau Claire. He currently teaches Algebra for Calculus for UW Independent Learning. He enjoys teaching introductory math courses and working with students at the beginning of their mathematical journey.
Recently, during a revision of Algebra for Calculus, Dr. Jauch added open educational resources to the course, removing the cost barrier of a textbook and online homework platform for students. He also added two types of activities to incorporate equity, diversity, and inclusion (EDI) principles and connect with his asynchronous, self-paced students. Moreover, the course is mindful of reducing math anxiety in students.
The first activity is a series of math chats. In every unit, students are given a space to ask or answer a question about the material covered or read and reflect on an article pertaining to a mathematical topic. Through the math chats, students are able to:
• Discuss diversity within the math community
• Highlight the work of underrepresented mathematicians
• See fun applications of math, like the “mathematically perfect” way to slice a pizza
✅ See an example of a math chat discussion
The second activity is a three-part math mystery. Students apply course concepts to a fictional story about an international criminal stealing precious artifacts. Before and after students solve problems related to the math mystery, they are asked to reflect in discussions:
• First, students discuss the concepts they might apply to the problem in an introductory discussion for each math mystery scenario.
• Second, after they’ve worked out the problem and seen the answer, students complete a reflection discussion on what they understood from the activity, what they struggled with, and how they might apply concepts in the future.
✅ See an example of an introductory discussion and a reflection discussion from the math mystery activity.
These discussions help reduce math anxiety and create equity by allowing students to see how others are thinking about and approaching the problems in the activity. The math mystery activities keep the focus on the learning process, not just the correct answer, by asking students to reflect on their solutions to the problems. In this spotlight, Dr. Jauch gives us more details about adding EDI to math courses and the benefits of these activities for students.
Often, math and science courses are perceived to be “difficult” to incorporate EDI principles into. What has helped you include more EDI in your courses?
While trying to source these principles from classic material is certainly more difficult, if we take the time to look we can find many opportunities to witness EDI topics in mathematics. Especially if we are willing to look into the applications of mathematics.
Can you give a brief description of how these strategies work in your course? Tell us what students are expected to do when they complete this activity. How are they evaluated and what kind of feedback do they get?
The math mysteries are a way for students to work through some problems that are interconnected and in a fun and playful way. Too often students are given math problems as busy work, so these were designed to be light-hearted but also an assessment of their abilities to that point in the class. Additionally, the types of problems were selected to best fit the written setting. The main process of the assignment is for students to first complete a pre-assessment of the topics and skills they may need for the assignment. Then they complete the worksheet by hand and upload their work to Canvas. Afterward, they are presented with partial answers and asked to reflect on the experience.
Can you talk a little more about developing and including these strategies in your course?
With the course being fully online, one benefit of the math chats is an opportunity for the students to interact with each other and see different perspectives about interesting current and EDI topics. This was important to me because student interactions are an important piece of a standard class and this brings it to an IL course. It was important however to not link the score [course grade] to the interactions as the number of students concurrently enrolled can vary greatly.
What advice would you have for other faculty who may want to try similar activities in their courses?
Be willing to look outside the normal topics covered in your course that are accessible to students. There are usually many modern topics that students have an interest in that you can make approachable to them.
Using OERs, adding opportunities to reflect and collaborate, and reducing student anxiety are effective ways to add more equity, diversity, and inclusion into a course. Our course reflection tool is also a helpful resource when considering EDI-related changes to your course. Reach out to your instructional designer if you want to learn more! | 0.9806 | FineWeb |
- Jul 23, 2020
This might make a solid link to share with some folks:
I've installed Kali Linux, or I'm trying to install it. Why is it so hard? Why doesn't it recognize my hardware? Why do I need to set up so many things manually? Why can't I install the application... | 1 | FineWeb |
It’s never too soon (or too late) to start saving for retirement. We’ll find a plan that works for you.
- Manage Cards
- No Setup or Maintenance Fees
- Tax Advantages*
- Competitive interest above standard savings rates
- Traditional and Roth IRA options
- No setup charges
- No monthly or annual maintenance fees
- $5,500 contribution limit per year
- Additional $1,000 "catch-up" contribution allowed for ages 50+
- Funds can be used to purchase CDs within IRA
- $500 minimum deposit to open
*Consult a tax advisor.
When do you want to enjoy your tax advantage? A traditional IRA provides potential tax relief today, while a Roth IRA has the potential for the most tax benefit at the time of retirement.
- No income limits to open
- No minimum contribution requirement
- Contributions are tax deductible on state and federal income tax*
- Earnings are tax deferred until withdrawal (when usually in lower tax bracket)
- Withdrawals can begin at age 59½
- Early withdrawals subject to penalty**
- Mandatory withdrawals at age 70½
- Income limits to be eligible to open Roth IRA***
- Contributions are NOT tax deductible
- Earnings are 100% tax free at withdrawal*
- Principal contributions can be withdrawn without penalty*
- Withdrawals on interest can begin at age 59½
- Early withdrawals on interest subject to penalty**
- No mandatory distribution age
- No age limit on making contributions as long as you have earned income
*Subject to some minimal conditions. Consult a tax advisor.
**Certain exceptions apply, such as healthcare, purchasing first home, etc.
***Consult a tax advisor.
- Set aside funds for your child's education
- No setup or annual fee
- Dividends grow tax free
- Withdrawals are tax free and penalty free when used for qualified education expenses*
- Designated beneficiary must be under 18 when contributions are made
- To contribute to an ESA, certain income limits apply**
- Contributions are not tax deductible
- $2,000 maximum annual contribution per child
- The money must be withdrawn by the time he or she turns 30***
- The ESA may be transferred without penalty to another member of the family
*Qualified expenses include tuition and fees, books, supplies, board, etc.
**Consult your tax advisor to determine your contribution limit.
***Those earnings are subject to income tax and a 10% penalty. | 0.8173 | FineWeb |
10.14 Young People, Risk and the Benefit of Saving Early
We know that teens take risks, but we may not know why, or that this tendency can continue even into our thirties. This is important for parents who waste much breath—and for growing money, too.
A recent New Yorker article presents the two dominant neuroscience theories for why teens embrace risk. Neurologist Frances Jensen asserts that the electric lines from all over the brain to the frontal lobe are not fully developed until our twenties or even thirties. The frontal lobe is the center of planning, self-awareness, and judgment, so if it doesn’t receive enough impulses, it can’t exercise those functions to override poor decisions. The young aren’t heedless; they simply lack proper wiring.
The second is Laurence Steinberg’s theory that the pleasure center, the nucleus accumbens, grows from childhood to its maximum size in our teens and declines thereafter. Therefore at puberty our dopamine receptors, which signal pleasure, multiply. He says this is why nothing ever feels as good again as when we are teens, whether listening to music, being with friends, or other things not printable in a family newspaper.
Steinberg maintains that teens are no “worse than their elders at assessing danger. It’s just that the potential rewards seem—and from a neurological standpoint, genuinely are—way greater.” Teen brains balance risk and reward and choose the greater risk for greater potential reward.
But if the young are wired to enjoy now, when the rewards seem greater, they may miss the key element of growing money: time. Money is wet snow rolling down a hill. The money snowball grows larger the longer the hill. Ergo, the younger you are, the longer your hill, the more money you will have.
Take Jo and Joe. Each of them saves the same amount but Jo starts at 21 and Joe at 40. Each increases savings each year at the same rate, earns the same return. Joe’s savings never catch up to Jo’s; they remain 19 years behind hers forever. In fact, Jo could stop saving and investing at age 40 and Joe wouldn’t have the same amount until he’s 59.
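A quick simulation shows the arithmetic behind the Jo-and-Joe story. The contribution amount, its annual growth, and the return below are invented for illustration, since the column gives no figures; the 19-year lag itself holds for any choice of rates.

```python
def future_balance(years, first_contrib=5_000.0,
                   contrib_growth=0.02, annual_return=0.06):
    """Balance after `years` of end-of-year contributions that grow
    each year, with returns compounding. All rates are assumptions."""
    bal, contrib = 0.0, first_contrib
    for _ in range(years):
        bal = bal * (1 + annual_return) + contrib
        contrib *= 1 + contrib_growth
    return bal

jo_at_40 = future_balance(19)   # Jo saves from age 21 through 39
joe_at_59 = future_balance(19)  # Joe saves from age 40 through 58

# Identical 19-year runs: Joe only matches Jo's age-40 total at 59.
print(round(jo_at_40) == round(joe_at_59))  # True

# Meanwhile, if Jo just lets her age-40 balance compound untouched:
jo_at_59 = jo_at_40 * (1 + 0.06) ** 19
print(round(jo_at_59 / joe_at_59, 1))  # 3.0, roughly triple Joe's total
```

The ratio at the end is simply (1.06)^19: the length of the hill, not the size of the snowball, does most of the work.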
This is an artificial example, of course. There are all sorts of rational reasons for a late start—earning an advanced degree, investing in children, starting a business—but they are ones formed by more developed brains. So because we can see the obvious benefits of time, we must be creative to counter the money decisions of higher risk-taking young brains.
My Dad required me to have a part-time job starting at 15, which eliminated after-school activities. However, he told me that so long as I saved x for college, the rest was mine. That got me over the resentment of missed activities and made me work longer hours, which not only produced spending money but also forced me to manage my scarce time better. Change “college” to any number of savings vehicles, and presto, you have an incentive plan.
Brain science tells us young people take risks because they can’t help it and may miss the benefits of saving and investing earlier. A win-win approach like my wily Dad’s can work wonders. | 0.6254 | FineWeb |
Across a range of industries, data analytics are helping businesses to become smarter, more productive, and better at making predictions.
Analysing data sets at a functional level constrains your thinking, but piecing them together opens new opportunities to extract potential value, such as understanding customer journeys, designing customer segmentation models and enhancing pricing.
We can support you with:
- General portfolio reporting – we can assess performance trends, identify risk segments and highlight future areas of focus for your portfolios.
- In-depth portfolio analysis (one-off or regular analysis of data provided for externally managed portfolios).
- Business forecasting – facilitating pro-active capacity planning.
- Data cuts and reporting – provision of reporting for third parties (e.g. credit reference agencies).
- Self-service capability – providing online and mobile solutions for receipt of arrears payments.
- Consultancy to identify and implement intelligent arrears management strategies for your portfolios.
- Identifying portfolio trends for early arrears forecasting.
- Pricing loan pools for servicing as a part of a bid process.
- Data scrubbing for accuracy and conformity to your specified criteria. | 0.8501 | FineWeb |
Occupational bifocals and trifocals are specialized multifocal lenses created for specific jobs, hobbies or tasks. They are designed for people – generally over 40 – who have developed presbyopia, a condition in which the lens of the eye weakens and it becomes difficult to see objects that are close up. They differ from regular multifocal lenses in that the magnified power areas to see close and intermediate objects are typically larger and positioned in a different area on the lens, according to needs of the designated task.
Occupational bifocal and trifocal lenses are intended for specific tasks and not for everyday use. Here are a few examples:
The most popular type of occupational lens is the Double-D lens. The lens is divided into three segments, with the top designed for intermediate vision, the bottom segment for near vision and the rest for distance. This design is ideal for people who need to see close both when looking down (to read something) and when looking overhead. Professionals that frequently use Double-D lenses are auto mechanics (who have to look overhead when under a car), librarians, clerks or office workers (who have to look at shelves overhead) or electricians (who are often involved in close work on a ceiling). They are called Double-D lenses because the intermediate and near segments of the lens are shaped like the letter “D”.
E-D Trifocal Lenses
As opposed to Double-D lenses which have the majority of the lens for distance vision, E-D lenses focus on intermediate vision with an area for distance on the top and for near vision on the bottom. These are ideal for individuals who are working at about an arm’s length away the majority of the time, such as on multiple computer or television screens, but frequently need to look up into the distance or down close to read something. The “E” in the name stands for “Executive Style” which represents the division between the top distance vision lens and the bottom intermediate vision lens which goes all the way across the lens. The “D” in the name is due to the fact that the near section in the bottom of the lens is shaped like a “D”.
Office or Computer Glasses
Multifocal lenses designed for office work provide a large intermediate segment for viewing the computer screen and a smaller area for limited distance vision. You can have progressive or trifocal lenses that incorporate near vision as well.
That’s right, there are even specialized lenses made for golfers! Golfers need to see a wide range of distances during their game: from their scorecard, to the ball on the tee, to the hole far away when lining up a drive. In these lenses, the close segment is small and placed on an outer corner of one lens, to allow for brief close vision without interfering with the distance game. Usually, right-handed golfers will have the segment on the right side and vice versa.
Standard multifocals can be redesigned to adapt to specific tasks or hobbies simply by changing the size, shape or location of the different segments. Many adults over 40 would benefit from having multiple pairs of multifocals to give optimal vision for different tasks or hobbies they enjoy. Note that occupational lenses are made specifically for the task they are designed for and should not be worn full-time, especially while driving. | 0.5821 | FineWeb |
Diabetes in Pregnancy
Gestational diabetes does not increase the risk of birth defects or the risk that the baby will be diabetic at birth.
Also called gestational diabetes mellitus (GDM), this type of diabetes affects between 3% and 20% of pregnant women. It presents with a rise in blood glucose (sugar) levels toward the end of the 2nd and 3rd trimester of pregnancy. In 90% if cases, it disappears after the birth, but the mother is at greater risk of developing type 2 diabetes in the future.
It occurs when cells become resistant to the action of insulin, which is naturally caused during pregnancy by the hormones of the placenta. In some women, the pancreas is not able to secrete enough insulin to counterbalance the effect of these hormones, causing hyperglycemia, then diabetes.
Pregnant women generally have no apparent diabetes symptoms. Sometimes, these symptoms occur:
- Unusual fatigue
- Excessive thirst
- Increase in the volume and frequency of urination
Importance of screening
These symptoms can go undetected because they are very common in pregnant women.
Women at risk
Several factors increase the risk of developing gestational diabetes:
- Being 35 years of age or older
- Being overweight
- Family members with type 2 diabetes
- Having previously given birth to a baby weighing more than 4 kg (9 lb)
- Gestational diabetes in a previous pregnancy
- Belonging to a high-risk ethnic group (Aboriginal, Latin American, Asian, Arab or African)
- Having had abnormally high blood glucose (sugar) levels in the past, whether a diagnosis of glucose intolerance or prediabetes
- Regular use of a corticosteroid medication
- Suffering from polycystic ovary syndrome (PCOS)
- Suffering from acanthosis nigricans, a discoloration of the skin, often darkened patches on the neck or under the arms
The Canadian Diabetes Association 2018 Clinical Practice Guidelines for the Prevention and Treatment of Diabetes in Canada recommends diabetes screening for all pregnant women, between the 24th and 28th week of pregnancy. Women with a higher risk of developing gestational diabetes should be tested earlier.
Two screening methods:
1. Most centres use a method done at two separate times. It begins with a blood test measuring blood glucose (sugar) levels 1 hour after drinking a sugary liquid containing 50 g of glucose, at any time of day. If the result is:
- Below 7.8 mmol/L, the test is normal.
- Above 11.0 mmol/L, it is gestational diabetes.
- If it is between 7.8 and 11.0 mmol/L, the attending physician will ask for a second blood test measuring fasting blood glucose (sugar) levels, then for blood tests taken 1 hour and 2 hours after drinking 75 g of glucose. This will confirm gestational diabetes if the values are equal to or greater than:
- 5.3 mmol/L fasting
- 10.6 mmol/L 1 hour after drinking the sugary liquid
- 9.0 mmol/L 2 hours after drinking the sugary liquid
2. The second method the oral glucose tolerance test (OGTT), with a sweetened liquid containing 75 g of glucose and three blood tests. A diagnosis is made if at least one of the three blood tests has values equal to or greater than:
- 5.1 mmol/L fasting
- 10 mmol/L 1 hour after drinking the sugary liquid
- 8.5 mmol/L 2 hours after drinking the sugary liquid
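The OGTT rule above is a simple threshold check, which can be sketched in a few lines. The thresholds are the values listed above; the function name is made up, and this is an illustration of the rule only, not a diagnostic tool — diagnosis is a clinical decision.

```python
# Diagnostic thresholds for the 75 g OGTT, as listed above (mmol/L).
OGTT_THRESHOLDS = {"fasting": 5.1, "1h": 10.0, "2h": 8.5}

def ogtt_suggests_gdm(readings):
    """True if at least one reading meets or exceeds its threshold.

    `readings` maps "fasting"/"1h"/"2h" to mmol/L values. Illustrative
    sketch only; not medical advice.
    """
    return any(readings[k] >= OGTT_THRESHOLDS[k] for k in OGTT_THRESHOLDS)

print(ogtt_suggests_gdm({"fasting": 4.8, "1h": 9.2, "2h": 8.6}))  # True (2h >= 8.5)
print(ogtt_suggests_gdm({"fasting": 5.0, "1h": 9.9, "2h": 8.4}))  # False
```

Note the "at least one" logic: a single value at or above threshold is enough for a diagnosis under this method.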
Risks and possible complications
There are numerous risks when gestational diabetes is not properly controlled and blood glucose (sugar) levels remain high.
For the mother:
- Excess amniotic fluid, which increases the risk of premature birth
- Risk of caesarean section or a more difficult vaginal birth (because of the baby’s weight, among other reasons)
- Gestational hypertension or preeclampsia (high blood pressure and swelling)
- Higher risk of staying diabetic after the birth or of developing type 2 diabetes in the future (a 20% to 50% risk within 5 to 10 years of the birth).
For the baby:
- Bigger than normal at birth (more than 4 kg or 9 lb)
- Hypoglycemia (drop in blood sugar levels) at birth
- Risk of the baby’s shoulders getting stuck in the birth canal during the birth
- Risk of obesity and glucose intolerance in early adulthood (especially if birth weight was above 4 kg or 9 lb)
Slight risk of:
- Jaundice, especially if the baby is premature
- Lack of calcium in the blood
- Breathing problems
Proper diabetes control considerably reduces the risks of complications.
When gestational diabetes is diagnosed, a personalized meal plan should be developed to control the mother’s glycemia.
Generally, a healthy diet with proper portion control and distribution of carbohydrates (sugars), as well as a healthy lifestyle (stress management, enough sleep and physical activity), are sufficient to control gestational diabetes.
If blood glucose (sugar) levels remain too high, the physician will prescribe insulin injections or, in some cases, oral antidiabetics.
Target blood glucose (sugar) levels for the majority of pregnant women:
- Fasting <5.3 mmol/L
- 1 hour after a meal <7.8 mmol/L
- 2 hours after a meal <6.7 mmol/L
The target values for controlling gestational diabetes differ from those of other types of diabetes.
Importance of a balanced diet
A balanced diet is essential for the control of blood glucose (sugar) levels and for a healthy pregnancy. When there is gestational diabetes, certain modifications need to be made to the mother’s diet, including to the amount of carbohydrates in each meal. A carbohydrate-controlled diet is the foundation of the treatment. It is essential not to eliminate carbohydrates completely but rather to distribute them throughout the day.
Your meal plan
A dietitian will help you establish or modify your meal plan based on your energy needs. The dietitian will also advise you about the important nutrients to incorporate in your diet during your pregnancy:
For more information about balanced meals, consult The Balanced Plate.
Importance of being physically active
Physical activity helps control diabetes during pregnancy and has numerous health benefits for pregnant women.
It is recommended that most pregnant women do a total of 150 minutes of physical activity per week, ideally in at least 3 to 5 sessions of 30 to 45 minutes each. If you weren’t active before your pregnancy, start gradually.
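The guideline above (150 minutes per week, ideally in at least 3 to 5 sessions of 30 to 45 minutes each) can be sketched as a simple check. The function name and the decision to require every session to fall inside the 30-45 minute window are assumptions made for illustration only:

```python
def meets_activity_guideline(session_minutes):
    """Check a week's sessions against the guideline above:
    at least 150 total minutes, in at least 3 sessions of
    30-45 minutes each.  Illustrative only, not medical advice."""
    enough_total = sum(session_minutes) >= 150
    enough_sessions = len(session_minutes) >= 3
    sensible_lengths = all(30 <= m <= 45 for m in session_minutes)
    return enough_total and enough_sessions and sensible_lengths
```

Four sessions of 40, 40, 40 and 35 minutes (155 minutes in total) would satisfy this check; three sessions of 30 minutes would not, since the total falls short of 150.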
Safe cardiovascular activities (done at light to moderate intensity) during pregnancy include:
- stationary exercise equipment
- cross-country skiing
Consult your doctor before starting these activities, and avoid physical activities where you risk falling, losing your balance or making sudden changes of direction (for example: soccer, badminton, etc.).
Stay well hydrated before, during and after exercise, and keep your blood glucose (sugar) meter and a source of rapidly absorbed carbohydrates with you at all times in case of hypoglycemia.
Before engaging in physical activity, your insulin dosage may have to be reduced to limit the risk of hypoglycemia. Your medical team will help you adjust your dosage as required.
During the birth
During the birth, the medical team regularly monitors the mother’s blood glucose (sugar) levels and adjusts treatment based on the readings. The baby’s blood glucose (sugar) levels are also monitored in the hours following the birth.
After the birth
In the majority of cases, the diabetes disappears after the birth. However, the risk of developing diabetes in the future increases, especially if you retain the excess weight. To avoid this situation, you should maintain a healthy weight, eat a balanced diet and exercise regularly.
Furthermore, it is recommended that you have a blood glucose (sugar) test between 6 weeks and 6 months after the birth to check whether your blood glucose (sugar) levels have returned to normal values. Before getting pregnant again, you should consult a doctor.
Breastfeeding is recommended for all women, diabetic or not. Mother’s milk is an excellent food for your infant. Breastfeeding not only helps the mother lose the weight gained during pregnancy, it also reduces blood pressure and helps control blood glucose (sugar) levels and thus prevent type 2 diabetes. It also reduces the risk of obesity and diabetes later on in the child. The nutritional needs of nursing mothers are essentially the same as in the last trimester of pregnancy.
It is recommended to start breastfeeding immediately after birth to prevent hypoglycemia in the newborn, and to continue for a minimum of 6 months.
See the list of high-risk pregnancy clinics (French only).
Research and text: Diabetes Québec Team of Health Care Professionals
Adapted from: Diabète Québec (2013), “Diabète et grossesse.”
June 2014 (updated in July 2018)
©All rights reserved Diabetes Quebec
Feig D, Berger H, Donovan L et al. Diabetes Canada 2018 Clinical Practice Guidelines for the Prevention and Management of Diabetes in Canada: Diabetes and Pregnancy. Can J Diabetes 2018; 42 (Suppl 1): S255-S282.
Canadian Paediatric Society (Feb 28 2018). Weaning from the breast [Online]. Found at https://www.cps.ca/fr/documents/position/sevrage-de-allaitement (Web page consulted on July 18, 2018).
Department of Archaeology Contributes to the Anthropocene Curriculum
The Mississippi: An Anthropocene River initiative seeks to explore the ecological, historical, and social interactions between humans and the environment across the Mississippi River Basin. Scholars from both sides of the Atlantic are working directly with local and international scientists, social theorists, artists, and activists with interests and backgrounds spanning the biological and social sciences as well as the humanities and visual arts.
The Department of Archaeology at the Max Planck Institute for the Science of Human History has teamed up with researchers from the Max Planck Institute for the History of Science and the Haus der Kulturen der Welt in Berlin, to better understand the extent of human impacts on the ecology of the Mississippi. The Anthropocene River initiative has proven to be an exemplary experiment in mingling intellectuals with diverse ways of thinking and ultimately producing a product greater than the sum of the constituent parts. Members of the Department of Archaeology have presented research at Anthropocene River conferences in Berlin and St. Louis, and three ongoing research sub-projects, directed by its scholars, fit into the broader mission of the Mississippi branch of the global Anthropocene Curriculum. The MPI for the Science of Human History researchers are probing the deeply intertwined history of humanity and the Muddy River, they are studying the legacy of human cultural practice in prehistory, and, ultimately, they are trying to understand the ways that ancient human populations shaped the course of the river and impacted the biotic communities that the river supports.
The Mississippi is the drainage for the largest watershed in North America; it fosters some of the greatest temperate biodiversity on the continent and facilitates the migration of millions of birds biannually. Additionally, it has been a life-supporting artery for humans for millennia, providing an abundance of aquatic and terrestrial resources, fresh water for irrigation, a rapid transit system, and a drainage route that removes toxins and excess nitrogen from the Midwest. The river today would be unrecognizable to Audubon or Lewis and Clark; however, it still provides a life-support system to millions of Americans. In the face of recent political unrest in historically segregated towns along the course of the river, the ongoing mass die-offs of birds and pollinating insects, hundreds of kilometers of dead zones in the Gulf of Mexico, and shrinking water tables, the research being conducted by the Mississippi initiative is more timely than ever. The projects spearheaded by the MPI for the Science of Human History seek to explain the processes that led to the Anthropocene. These scholars are studying the ripple effects that human impacts of the past have had on the landscape of today.
Prior to the mid-1800s, herds of wild bison still existed across parts of the North American Midwest. These herds represented some of the last mass-herds of megafaunal grazers of the temperate north. Prior to the post-Pleistocene extinctions, dense populations of these megafaunal mammals would have had significant ecological impacts. While scholars have speculated about the role that these extinct animals played in ecological services, such as nitrogen cycling, woody vegetation suppression, and seed dispersal, there have been few attempts to systematically test these theories. Bison provide a unique case study in this regard as they: 1) were still providing these ecological services until two centuries ago and the impacts are still visible; and 2) are not extinct (unlike glyptodons or mastodons), and can, therefore, still be studied. Drs. Spengler and Mueller have theorized in their study Grazing Animals Drove Domestication of Grain Crops that the seed dispersal processes of the massive herds of bison shaped the ecology of the Midwest in a way that allowed the earliest farmers of this region to target certain plants that are rare on the landscape today. The bison herds may have concentrated these plants into easily harvestable wild fields, supporting the domestication of what some scholars refer to as the North American Lost Crops (Figure 2). To better understand the evolutionary links between the ancient bison herds and the progenitors of these Lost Crops, watch the short lectures presented by Dr. Spengler (Plant Domestication and Dispersal) and Dr. Mueller (Understanding the North American Lost Crops) at the Anthropocene conference held at Cahokia Mounds (Figure 1), outside St. Louis, last autumn. A more detailed discussion of the project is available to read on the Anthropocene Curriculum website.
The introduction of domesticated megafaunal animals, such as cattle and horses, has also provided clues to a greater understanding of the role of the extinct megafaunal herds of North America. European horses provide seed-dispersal services for several large-fruiting North American tree species, such as the Osage Orange, which may have been formerly dispersed by now-extinct Pleistocene horses. However, we know little about when horses were reintroduced into various parts of North America after European contact. Dr. Taylor, of the MPI for the Science of Human History as part of the Anthropocene River Project, is studying archaeological remains of horses across the American Midwest in an attempt to understand the rates of dispersal and how rapidly these animals were adopted by Plains Indian groups for transportation, bison hunting, and warfare. Dr. Taylor is also using ZooMS and ancient DNA to distinguish different kinds of equid (donkeys, horses, and mules) among bones recovered from archaeological sites, and to understand how each of these domesticates was used by early indigenous societies in the Mississippi Region. To read more about Dr. Taylor’s contributions to the initiative, see the summary page or read about the Horses and Donkeys project.
Ultimately, the majority of the anthropogenic ecological reshaping of the American Midwest has centered around farming, especially through intensive forms of maize cultivation. Modern varieties of maize grown in the Midwest are mostly GMO and hybrid, requiring heavy water, fertilizer, herbicide, and pesticide inputs. However, these are not the first varieties of maize grown in North America. The topics of 1) when maize arrived in different regions and 2) how long after its introduction it took people to start cultivating it intensively have received considerable scholarly attention. The switch to intensive maize cultivation eventually facilitated the loss of the Lost Crops and may have fueled demographic and social expansions, as seen in archaeological remains at sites across the Midwest, such as Cahokia (Figure 1). Dr. Fernandes is running the IsoMaize project as a partner project of his IsoMemo initiative. Dr. Fernandes is collaborating with other scholars at the MPI for the Science of Human History and at universities in America to isotopically trace the ancient spread and intensification of maize cultivation across North America. A more detailed discussion of IsoMaize can be found here.
60% of global land and species loss is down to meat-based diets
Climate change and conflict leads to a rise in global hunger
Fruit, nuts and vegetables need bees, and the bees are dying
There is concern around how food companies are treating farm animals
Farm Animal Welfare: A benchmark measuring companies' performance.
Save the bees: A Greenpeace campaign.
Conflict, climate change and hunger: See how climate change leads to serious conflicts and hunger.
Vast animal-feed crops to satisfy our meat needs are destroying planet: Overproduction of meat and animal feed is destroying whole species of animal and the environment.
10 things you need to know about sustainable agriculture - The Guardian: An article summarising ten important points about sustainable agriculture from a panel of industry experts.
Feast or Famine: Business and Insurance Implications of Food Safety and Security - Lloyd's: A report on food insecurity and the role that insurance can play in risk mitigation and management by Lloyd’s, an international specialist insurance marketplace.
Sustainable Agriculture in the UK - Farming and Countryside Education: A review of sustainable agriculture in the UK by Farming and Countryside Education, a charity providing education to children and young people about sustainable farming.
The Living Planet Report 2014 – World Wide Fund for Nature (WWF): A report analysing the impact of human activity on the health of the planet by the WWF, an international non-governmental organisation working to conserve, research and restore the environment. | 0.6248 | FineWeb |
How to Find and Understand Your Animal Spirit Guides
From the introduction:
“Wild animals are reaching out to connect with us all of the time. You can benefit personally from exploring a relationship with spirit animals in a multitude of ways. Learning your spirit animal can change the way you look at yourself by bringing you a great sense of confidence and empowerment. You may have already started to notice that certain animals keep reappearing in your life. This is called synchronicity and this means that the spirits are talking to you. Congratulations! Now let’s learn what’s next!”
Interested in finding the answer to the question, “What is my spirit animal?”
In this free and comprehensive guide not only will you be given instructions on how to find your spirit animal, you will also learn the following:
- The definition of a spirit animal
- Common myths and fears surrounding spirit animals
- How to find spirit animal meanings and interpretations
- Where to look up a lot more information about your spirit animals online
- Tips for how to create your own spirit animal reading
Throughout the book are also a collection of meanings of different animals such as rabbits, magpies, ravens, and more.
Instead of having an online quiz tell you what your spirit animal is, how about using your own intuition and know-how to discover your spirit animal yourself?
This guide will lead the way. | 0.8915 | FineWeb |
Two recent articles have got my blood racing and the excitement has led to this post. Both relate to citizen science, a concept that involves the common man in science and making big science accessible to everyone.
And how does citizen science work?
One example is the Ardusat satellites (and others like them), which are tiny satellites on which time can be hired by high school students, amateur astronomers and laypeople alike! These carry simple equipment like temperature sensors, Geiger counters, digital cameras and so on. For costs of $35-45 per day, people can hire time (in blocks of a few days) on these to perform experiments in space, take photographs of Earth or celestial objects from space, and much more. Firms such as NanoSatisfi mostly raised money through crowdfunding but have made it possible for everyone to work with a slice of space!
The second example is just as exciting because the possibilities are endless! The GalaxyZoo project, launched in 2007 by Chris Lintott and Kevin Schawinski, asked volunteers to classify galaxies as spiral or other shapes. By their estimation, the data they had collected from the Sloan Digital Sky Survey (about a million images of galaxies) would take years to sort through. Machine algorithms are still not as efficient as humans at recognising shapes. They reckoned that they would get 50-odd volunteers and finish the work in a year and a half. Instead, thousands of volunteers from all over the world trawled through the data in 3 weeks!
The amazing thing about this project is that it allows the average Joe and Jill to do big science, to connect with big projects and be part of the romance of Science even if they are not professional scientists. It cuts across age, race, culture, gender, profession… it brings people together in their love and wonder of the natural world.
So now the question is: can we design other studies and experiments using this concept of citizen science and solve big problems, not just in astronomy but in all disciplines (projects that take the Zooniverse principle forward)? Imagine the power of harnessing the talent and effort of hundreds of thousands of people who enjoy the discipline (even if they are amateurs). This diversity in thought and experience enriches the work so much, while sparking interest and a common sense of purpose amongst so many people.
Countries like China, India and Brazil (and others) would do especially well to engage the millions of people who could contribute, and maybe this can even help bring down the costs of certain kinds of research!
This much I know: I am itching to create a project like this of my own! | 0.6553 | FineWeb |
Understanding Performance in Human Development
A Cross-National Study
This paper introduces a new and comprehensive Human Development Index (HDI) trends dataset for 135 countries and 40 years of annual data. We apply this dataset to answer several empirical questions related to the evolution of human development over the last 40 years. The data reveal overall global improvements, yet significant variability across all regions. While we confirm the existence of continued divergence in per capita income, we find the inverse for HDI. We find no statistically significant correlation between growth and non-income HDI improvements over a forty year period. We also examine some basic correlates that are associated with countries’ performance in HDI.
Round each value to 1 significant figure then perform the calculation (without a calculator).
Now, (correct to one significant figure); and (again, one significant figure).
Applets: Begin by rounding to one significant figure. Then adjust your answer. Aim for less than 10% error (good), or less than 5% error (excellent!)
Up to 100 multiplied by 100
A little more difficult
Done here – go back to the menu for Unit 1. | 0.913 | FineWeb |
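The estimation strategy described above (round each value to 1 significant figure, then calculate and compare the error) can be sketched in a few lines; the helper names are made up for the example:

```python
import math

def round_1sf(x):
    """Round x to one significant figure."""
    if x == 0:
        return 0
    exp = math.floor(math.log10(abs(x)))
    return round(x, -exp)

def estimate_product(a, b):
    """Estimate a * b by rounding each factor to 1 s.f. first,
    then report the percentage error against the exact answer."""
    estimate = round_1sf(a) * round_1sf(b)
    exact = a * b
    pct_error = abs(estimate - exact) / abs(exact) * 100
    return estimate, pct_error
```

For example, 87 × 42 is estimated as 90 × 40 = 3600 against an exact answer of 3654, comfortably inside the 5% target described above.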
This section focuses on the medicinal chemistry of beta-lactam antibiotics; the second part of our series on the medicinal chemistry of antibacterial compounds.
Penicillin derivatives, cephalosporins, monobactams and carbapenems all belong to this popular class of drugs. A four-membered lactam ring, known as a β-lactam ring, is a common structural feature of this class (see below). To this day, the pharmacology of beta-lactam antibiotics has clearly borne out an excellent safety and efficacy profile. Most of these medicines work by interfering with bacterial cell wall synthesis; the cell wall is an optimum drug target because it is something that bacterial cells possess, but not human cells.
Penicillin and its Derivatives
Penicillin consists of a fused β-lactam ring and a thiazolidine ring; the β-lactam ring forms part of this bicyclic heterocyclic system. The bicyclic system confers greater ring strain on the β-lactam ring, an aspect important for activity. An amide and a carboxylic acid group are also present. The carboxylic acid group is a possible site of modification to make prodrugs. Note also the stereochemistry of the acylamino side chain with respect to the 4-membered ring and the cis stereochemistry of the hydrogen atoms highlighted in green. The key structural features of penicillins can be summarised as follows:
- Fused β-lactam and thiazolidine ring forming a bicyclic system (Penam)
- Free carboxylic acid
- Acylamino side chain
- Cis stereochemistry for the hydrogen
Texts describing penicillins may appear to have conflicting numbering systems; as there are two different, widely used numbering systems. The USP assigns the nitrogen atom at number 1 and the sulfur atom at number 4. In contrast, the Chemical Abstracts system assigns sulfur as number 1 and the nitrogen as number 4. Keep these differing numbering systems in mind when reviewing texts on beta-lactam medicinal chemistry.
Chemical Properties & Reactions
Penicillin’s overall shape is similar to a half-open book. As we discussed earlier, the bicyclic ring system has large torsional and angle strain. Unlike typical tertiary amides, the carbonyl group of the strained four-membered ring is very reactive and susceptible to nucleophilic attack. Think about amide resonance from introductory organic chemistry and its effect on amide reactivity. In the case of the β-lactam ring, amide resonance is diminished, because for steric reasons the bonds to the nitrogen cannot be planar; opening of the four-membered ring relieves this strain.
Penicillins can react with amines to form inactive amides. This has implications on co-administration and formulation.
Activity – Can you draw a reaction mechanism for the scheme shown below?
Penicillins are also generally susceptible to hydrolysis under alkaline conditions. Alkaline hydrolysis can be catalysed by the presence of metal ions such as Cu2+.The resulting hydrolysis products do not possess antibacterial activity; this is valuable knowledge for the storage, analysis, and processing of these medicinal chemistry compounds.
Penicillins also tend to be sensitive to acids (see reaction scheme below). Penillic acid is the major product of acid degradation. In the stomach where conditions are acidic, the drug breaks down. Acidic conditions must also be avoided during production and analysis. Penicillins, such as phenoxymethylpenicillin (Penicillin V), have enhanced acid stability as they have electron withdrawing R groups.
Like many other drugs, penicillins face enzyme-catalysed degradation in vivo. Amidases catalyse the conversion of the C6 amide to an amine. Amidases are useful in industry for the production of 6-aminopenicillanic acid (6-APA). This compound is used as a precursor for many semisynthetic penicillins.
Drug resistance is also a growing problem.β-lactamases (beta lactamases) are mainly responsible for this.
β-lactamases are serine protease enzymes that act against the β-lactam drugs through a similar mechanism as the transpeptidase enzyme; the bacterial enzyme targeted by penicillin. The mechanism of action will be shown later.
β-lactamase inhibitors such as clavulanic acid are given to patients in combination with penicillins such as amoxicillin. The use of clavulanic acid in penicillin formulations allows for a reduction in dosage. Furthermore, the spectrum of activity is also improved. Note that there is no single β-lactamase enzyme. Clavulanic acid does not inhibit all β-lactamase enzymes.
- Augmentin®: Amoxicillin + clavulanic acid
- Timentin®: Ticarcillin + clavulanic acid
Tazobactam and sulbactam are examples of β-lactamase inhibitors that contain the β-lactam ring. Avibactam is an example of a β-lactamase inhibitor that does not have the β-lactam ring in its structure.
Now lets turn our attention to the synthesis of semisynthetic derivatives.
Synthesis of Semisynthetic Derivatives
We briefly mentioned 6-APA earlier. 6-APA is acquired through enzymatic hydrolysis of Penicillin G or Penicillin V, or through traditional organic chemistry. Reaction with an acid chloride at the C6 primary amine allows the synthesis of many semisynthetic penicillin derivatives (try to draw the reaction mechanism as practice!). The free carboxylic acid is, though, a possible site of modification. Ester prodrugs such as pivampicillin were developed to improve the pharmacokinetic properties of their parent drug. Pivampicillin is a pivaloyloxymethyl ester prodrug of ampicillin.
We emphasized the cis stereochemistry of the H5-H6 protons at the start of this beta-lactam review. Generally speaking, the H5-H6 coupling constant (1H NMR) is in the range of 4-5 Hz. 13C NMR studies and DEPT experiments would help distinguish between CH, CH2 and CH3. When studying the IR spectra of penicillins, one must be on the lookout for the carbonyl stretches, particularly the characteristic β-lactam carbonyl stretch at around 1770-1790 cm-1.
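As a small illustrative utility (the function names are invented for the sketch), the two diagnostic numbers above, the 4-5 Hz H5-H6 coupling constant and the 1770-1790 cm-1 β-lactam carbonyl stretch, can be turned into quick checks. The conversion uses the standard relationship that a separation in Hz divided by the spectrometer's 1H frequency in MHz gives the separation in ppm:

```python
def j_to_ppm(j_hz, spectrometer_mhz):
    """Separation in ppm corresponding to a coupling constant J (Hz)
    on a spectrometer of the given 1H frequency (MHz)."""
    return j_hz / spectrometer_mhz

def in_beta_lactam_co_window(wavenumber_cm1):
    """True if an IR C=O stretch falls inside the ~1770-1790 cm^-1
    window quoted above for the strained beta-lactam carbonyl."""
    return 1770 <= wavenumber_cm1 <= 1790
```

On a 400 MHz instrument, a 4.5 Hz coupling corresponds to a separation of only about 0.011 ppm, which is why the coupling constant, not the shift separation, is the robust diagnostic.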
Depending on which penicillin is being studied, the side chain would give characteristic NMR and IR signals. The predicted 1H NMR spectrum of ampicillin is shown below. Note that the spectrum below is merely an estimate. Can you explain why the benzylic protons appear further downfield than expected?
Mechanism of Action
Recall the structure of bacteria from microbiology. The bacterial cell wall is needed by most bacteria in order to survive. Gram-positive bacteria possess a thick peptidoglycan layer in the cell wall and an inner cell membrane. Gram-negative bacteria, on the other hand, possess an outer membrane and an inner cell membrane. Porins are present in the outer membrane of Gram-negative bacteria. The much thinner peptidoglycan layer of Gram-negative bacteria is found between these two membranes. The significance of this outer membrane in drug design of broad-spectrum penicillins is assessed later.
Peptidoglycan (or murein) consists of sugar and amino acid units. This mesh-like polymeric layer outside the inner cell membrane forms the cell wall of bacteria. The sugars N-acetylglucosamine (GlcNAc or NAG) and N-acetylmuramic acid (MurNAc or NAM) alternate, and are connected through a β-(1,4)-glycosidic bond. An amino acid chain is found in each NAM. D-amino acids are also found in these amino acid chains. The sugars are cross-linked via these peptides; cross-linking adding structural integrity to the bacterial cell wall.
Transpeptidase enzymes are bacterial enzymes responsible for the formation of these crosslinks. Penicillins act by interfering with the cross-linking of peptidoglycan by inhibiting the transpeptidase enzyme. Without an intact cell wall, bacteria are generally unable to survive. Thus, penicillins are bactericidal in effect.
Through the synthesis and studies of many semisynthetic penicillins, the following conclusions were reached.
- Cis stereochemistry of H5 and H6 essential
- The bicyclic ring is very important
- The free carboxylate is essential
- The acylamino side chain is necessary
Variation is mostly limited to the R group of the amide, and, as mentioned earlier, prodrugs have been developed by modifying the carboxylate group. So far, we’ve reviewed the following structural modifications:
- Enhancing acid stability by using electron-withdrawing R groups
- Converting the carboxylate functional group to an ester to give a prodrug
We will now consider several other structural modifications.
Other Structural modifications
Gram-negative bacteria possess an outer lipopolysaccharide membrane which surrounds the thinner cell wall. The outer membrane serves as a protective layer against compounds that may pose harm to the bacteria. This partly explains why Gram-negative bacteria are generally resistant to antibacterial compounds. As examined earlier, porins are proteins present in the outer membrane. Water and essential nutrients can pass through these proteins. Small drugs can also pass through porins. The ability of drugs to pass through porins is dependent on their size, structure and charge.
Generally speaking, molecules that are large, anionic and hydrophobic are unable to pass through porins. On the other hand, molecules that are small, zwitterionic and hydrophilic tend to pass through easily. During the search for broad-spectrum penicillins through variation of the acylamino side chain, the following conclusions were reached:
- Hydrophobic groups tend to enhance ability against Gram-positive bacteria
- Hydrophilic groups generally increase ability against Gram-negative bacteria
- Attachment of a hydrophilic group at Cα appear to improve activity against Gram-negative bacteria
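The permeability trends above can be caricatured as a toy classifier; the two-of-three rule below is an assumption made purely for the sketch, not a published criterion:

```python
def likely_porin_permeable(small, hydrophilic, zwitterionic):
    """Toy heuristic for the trends above: count favourable traits.
    The two-of-three threshold is an illustrative assumption only."""
    favourable = sum([small, hydrophilic, zwitterionic])
    return favourable >= 2
```

A small, hydrophilic zwitterion such as ampicillin would score three favourable traits; a large, anionic, hydrophobic molecule scores none and is predicted to be excluded.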
Broad-spectrum antibiotics such as amoxicillin and ampicillin both have -NH2 groups attached to Cα (as shown below); both compounds being orally active. As well as this, the presence of the electron-withdrawing amino group increases acid-stability. Ampicillin is poorly absorbed by the gut due to ionisation of both the amino and carboxylic groups. Oral absorption of amoxicillin is, in contrast, much higher.
Modifications, such as changing the carboxylic acid to esters, were made at the carboxylic group to alleviate this problem. Pivampicillin and bacampicillin are examples of ester prodrugs of ampicillin. The prodrugs undergo metabolism in the body to give ampicillin. Can you remember the names of the enzymes involved in the ester hydrolysis of prodrugs?
The ureidopenicillins are broad-spectrum penicillins typically used parenterally, and are active against Pseudomonas aeruginosa. As the name suggests, a urea group is present in the molecule. The urea group is situated at Cα. Azlocillin and piperacillin are examples of ureidopenicillins.
β-lactamases gave bacteria resistance to the traditional penicillins, driving the need for β-lactamase-resistant penicillins. Steric shields may also be used. This strategy involves designing penicillins to resist β-lactamases. By placing a bulky group on the acylamino side chain, degradation of the drug by β-lactamases is minimized. However, if a steric shield is too bulky, the penicillin is not able to bind to transpeptidase.
Methicillin is an example of a penicillin with a bulky group. This semi-synthetic penicillin possesses a dimethoxybenzene R group. Both the methoxy groups of the benzene are at the ortho position. Nafcillin possesses a naphthalene ring in its acylamino side chain which acts as a steric shield. Flucloxacillin contains a bulky and electron-withdrawing heterocyclic acylamino side chain. Thus, flucloxacillin is an acid-resistant, narrow-spectrum, β-lactamase-resistant penicillin.
- Penicillins are bactericidal beta-lactam antibiotics
- Penicillin’s core structure consists of a fused β-lactam ring and a thiazolidine ring
- The bicyclic system is highly strained
- Modifications can also be made at the acylamino side chain
- Cis stereochemistry of H5 and H6 is essential
- 6-Aminopenicillanic acid (6-APA) is mainly used as a precursor for semisynthetic penicillin drugs
- The carboxylic acid group can be modified to give ester prodrugs
- Attach electron-withdrawing groups at the amide to enhance acid stability
- Hydrophilic groups at Cα at the acylamino side chain improves spectrum of activity
- Steric shields at the acylamino side chain generally improves resistance to β-lactamase enzymes
Cephalosporin C was the first cephalosporin discovered from a fungus obtained from Sardinian sewer waters during the mid-1940s. Just like penicillins, Cephalosporin C has a bicyclic system made up of a β-lactam ring fused with a sulfur heterocycle, which, in this case, is the dihydrothiazine ring.The side chain is referred to as the aminoadipic side chain. The acetoxy group of Cephalosporin C is a key feature; a feature examined in more detail later.
Cephalosporin C is less potent compared to penicillins, but this compound has many advantageous properties. Cephalosporin C is more acid-resistant and has a better spectrum of activity, for example. Moreover, the likelihood of causing allergic reactions is considerably less. As a result, cephalosporin C became a useful lead compound for the development of better, more clinically robust antibiotics.
Chemical Properties & Reactivity
Compared to the penicillins, the ring system strain is not as great, but, like the β-lactam carbonyl of penicillins, the β-lactam carbonyl group of Cephalosporin C is also reactive for similar reasons. Diminished amide resonance and ring strain confers reactivity. The four-membered ring is also susceptible to nucleophilic attack. Cephalosporin C inhibits the transpeptidase enzyme through the mechanism shown below. A serine residue is involved. As shown below, the acetoxy group serves as a leaving group.
Synthesis of Semisynthetic Derivatives
7-aminocephalosporanic acid (7-ACA) is used as the precursor of many cephalosporins. Unlike 6-APA, which can be acquired from enzymatic hydrolysis of certain penicillins, 7-ACA cannot be acquired by enzymatic hydrolysis of Cephalosporin C. 7-ACA is produced by chemical hydrolysis of Cephalosporin C. Due to the presence of the reactive β-lactam ring, a special method of chemical hydrolysis was devised. Can you draw a mechanism for the formation of the imino chloride?
Cephalosporin analogues may be formed by reacting 7-ACA with acid chlorides.
- The β-lactam ring is crucial for activity
- Bicyclic ring system important in increasing ring strain
- The cis-stereochemistry at the positions highlighted in green is important
- Other groups may be substituted for the acetoxy group which may or may not serve as good leaving groups. The nature of the leaving group is important for activity. Better leaving groups tend to give cephalosporin C analogues with better activity.
- The acylamino side chain may be altered
- Sites of possible modifications are highlighted in red boxes.
First Generation Cephalosporins
Cefalexin, cephalothin (cefalotin), cephaloglycin, and cephaloridine are examples of first-generation cephalosporins. The methyl group in cefalexin is a poor leaving group, which is detrimental to activity; however, it appears to improve absorption. Cefalexin may be synthesized through an acid-catalysed ring expansion of a penicillin. Cephalothin has an acetoxy leaving group and a thiophen-2-ylacetyl group as its acylamino side chain. Despite being a good leaving group, the acetoxy moiety is susceptible to enzyme-catalysed hydrolysis. Cephaloglycin's acylamino side chain is the same as that of ampicillin. Cephaloridine has pyridinium as a leaving group, which departs as pyridine; unlike the acetoxy group, the pyridinium group is stable to metabolism.
First-generation cephalosporins share the following features:
- In general, their activity is lower than that of the penicillins, but they possess a broader spectrum of activity
- Apart from the methyl substituted cephalosporins, gut wall absorption is poor
- Most of the first-generation cephalosporins are administered by injection
- Activity against Gram-positive bacteria is greater than Gram-negative bacteria
Second Generation Cephalosporins
Cefamandole, cefaclor, and cefuroxime are examples of second-generation cephalosporins. The second generation has increased activity against Gram-negative species of bacteria such as Neisseria gonorrhoeae, while some members have decreased Gram-positive activity. Many second-generation cephalosporins are also able to cross the blood-brain barrier. Cefuroxime is also an example of an oximinocephalosporin; the presence of the methoxyimino group appears to increase stability against certain β-lactamases.
Third Generation Cephalosporins
Cefdinir and ceftriaxone belong to the third generation of cephalosporins. In 2008, cefdinir was one of the highest-selling cephalosporins. Ceftriaxone is marketed by Hoffmann-La Roche under the trade name Rocephin® (known for its painful administration). Overall, third-generation cephalosporins are more stable to β-lactamase degradation and have even greater anti-Gram-negative activity. Like the previous generation, the third generation can cross the blood-brain barrier, making these compounds useful against meningococci.
Fourth Generation Cephalosporins
The fourth-generation compounds are zwitterionic. They are not only better at traversing the outer membrane of Gram-negative bacteria, but they also have Gram-positive activity similar to that of the first-generation cephalosporins. Moreover, their β-lactamase resistance is greater. Like the second and third generations, many fourth-generation compounds can cross the blood-brain barrier. They are also used against Pseudomonas aeruginosa.
Fifth Generation Cephalosporins
The scientific community has not yet reached agreement on the use of the term 'fifth-generation cephalosporins'. Fifth-generation compounds have demonstrable activity against MRSA. Ceftobiprole, which possesses good anti-Pseudomonal activity, is often described as a fifth-generation cephalosporin. Ceftaroline fosamil is another example of a cephalosporin described as fifth-generation.
Recall that one of the hydrogens emphasised earlier is a possible site of modification. The methoxy-substituted versions are referred to as cephamycins. A compound called cephamycin C can be isolated from Streptomyces clavuligerus. A urethane group is present instead of the acetoxy group, enhancing the compound's metabolic stability. Can you explain why?
Derivatives may be synthesised from the methoxy-substituted analogue of 7-ACA, or through reactions with cephalosporins. Note that some refer to cephamycins as a separate class of antibacterial compounds altogether. The cephamycins appear to have greater resistance against β-lactamases. Cephalosporins and cephamycins are sometimes collectively referred to as cephems.
- Fused β-lactam and dihydrothiazine ring form a bicyclic system.
- 7-Aminocephalosporanic acid (7-ACA) is used as a precursor for many semisynthetic cephalosporins
- Structure-activity relationships are similar to penicillins
- Nature of leaving group is important to activity
Generations of cephalosporins:
- 1st: Activity is generally lower than that of penicillins, but they possess a broader spectrum of activity. Greater activity against Gram-positive organisms than Gram-negative organisms.
- 2nd: Have increased activity against Gram-negative bacteria but some have a concomitant reduction of Gram-positive activity. Many can cross the blood-brain barrier.
- 3rd: Even better activity against Gram-negative bacteria. Some compounds have the same problem of decreased Gram-positive activity as with the previous generation. They are also associated with improved β-lactamase resistance and many can also cross the blood-brain barrier.
- 4th: Better Gram-negative bacteria activity, β-lactamase resistance, and many can also cross the blood-brain barrier. Gram-positive activity similar to the 1st generation.
- 5th: Currently no universal agreement on its definition; some drugs classified as 'fourth-generation' are classified as 'fifth-generation' and vice versa. Examples often include ceftobiprole, ceftaroline and ceftolozane. Ceftobiprole has potent anti-Pseudomonal activity and appears to select for little resistance. Fifth-generation drugs also show activity against MRSA.
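The generational trends summarised above can be condensed into a simple lookup structure. The sketch below is illustrative only: the property fields are simplified restatements of the text, `None` marks traits the text does not state, and cefepime is my assumed representative of the fourth generation (the text names none).

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Generation:
    """Simplified traits of one cephalosporin generation (illustrative only)."""
    examples: Tuple[str, ...]
    gram_negative: str              # rough trend described in the text
    many_cross_bbb: Optional[bool]  # None where the text does not say
    mrsa_active: bool

CEPHALOSPORIN_GENERATIONS = {
    1: Generation(("cefalexin", "cephalothin"), "lower than Gram-positive", None, False),
    2: Generation(("cefamandole", "cefaclor", "cefuroxime"), "increased", True, False),
    3: Generation(("cefdinir", "ceftriaxone"), "further increased", True, False),
    4: Generation(("cefepime",), "better still", True, False),  # cefepime: assumed example
    5: Generation(("ceftobiprole", "ceftaroline"), "broad", None, True),
}

def mrsa_active_generations() -> list:
    """Generations whose members show MRSA activity, per the summary above."""
    return [g for g, info in CEPHALOSPORIN_GENERATIONS.items() if info.mrsa_active]

print(mrsa_active_generations())  # [5]
```

A structure like this is handy for quick quiz-style lookups, but it deliberately flattens nuance (individual drugs within a generation vary considerably).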
As the name suggests, the monobactams, such as aztreonam, are β-lactam compounds that are not fused to another ring. Monobactams exhibit moderate activity in vitro against certain Gram-negative bacteria, including Neisseria and Pseudomonas.
The carbapenem class of β-lactam antibiotics has broad-spectrum activity and exhibits resistance to many β-lactamases. Carbapenems are used as antibiotics of last resort for infections by bacteria such as Escherichia coli and Klebsiella pneumoniae. Thienamycin is a carbapenem first discovered and isolated from Streptomyces cattleya in 1976. Thienamycin exhibits excellent activity against Gram-positive and Gram-negative bacteria and displays resistance to many β-lactamases. Meropenem and doripenem are analogues of thienamycin and are examples of carbapenems currently in clinical use. Both compounds have been described as ultra-broad-spectrum.
From the structure of the compounds shown, it is easy to see that the carbapenems have some structural features that penicillins do not. The double-bond on the five-membered ring leads to high ring strain. A sulfur atom is also missing in the five-membered ring. The acylamino side chain is absent. Also note the trans stereochemistry of the hydrogens.
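The structural contrasts just listed can be tabulated directly. The dictionary below is a plain restatement of those points (with the cis penicillin/cephalosporin stereochemistry noted earlier), not an exhaustive comparison:

```python
# Structural comparison drawn from the discussion above (illustrative summary).
PENICILLIN_VS_CARBAPENEM = {
    "double bond in five-membered ring": {"penicillins": False, "carbapenems": True},
    "sulfur in five-membered ring": {"penicillins": True, "carbapenems": False},
    "acylamino side chain": {"penicillins": True, "carbapenems": False},
    "ring-junction H stereochemistry": {"penicillins": "cis", "carbapenems": "trans"},
}

def differs(feature: str) -> bool:
    """True when the two classes differ on the given feature."""
    entry = PENICILLIN_VS_CARBAPENEM[feature]
    return entry["penicillins"] != entry["carbapenems"]

# Every listed feature distinguishes the two classes
print(all(differs(f) for f in PENICILLIN_VS_CARBAPENEM))  # True
```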
This concludes our review of the medicinal chemistry of β-lactam antibiotics, a subject often featured in pharmacy, pharmaceutical chemistry and other medicinal courses. It informs the reader of structure-activity relationships, the development of synthetic derivatives, and how medicinal chemistry relates to broader concepts such as formulation and dosage. The growing problem of resistance is an ever-present challenge for medicinal chemists; a biological mechanism that spurs on the engine of research in this field.
Total synthesis of thienamycin:
- J. Org. Chem., 1990, 55 (10), pp 3098–3103
Antibacterial resistance worldwide: causes, challenges and responses
- Nature Medicine., 2004, 10, S122 – S129 | 0.9718 | FineWeb |
* '''Published:''' 1st August 2005
* '''Publisher:''' Wizards of the Coast
* '''Author:''' David Noonan, Rich Burlew
* '''Format:''' 160 page hardback
* '''Rules:''' D&D 3.5 Edition
* '''Product:'''
* [[wp>Explorer's Handbook|Wikipedia]]
The ultimate sourcebook for players wishing to explore the world of Eberron.
The Explorer’s Handbook showcases the multi-continental aspect of the Eberron setting. The chapter on travel discusses instantaneous and played out travel and provides deck plans for airships, the lightning rail, and galleons, plus other methods of conveyance. A chapter on Explorer’s Essentials offers information on travel papers, pre-assembled equipment kits, how to join the Wayfarers’ Foundation, and more. This handbook encourages players to explore the entire world rather than remain fixed in one region. | 0.7968 | FineWeb |
Historian Amanda Foreman, author of the bestselling Georgiana, Duchess of Devonshire, has written a new book, A World on Fire: Britain’s Crucial Role in the American Civil War. In an article for the Wall Street Journal‘s “Word Craft” column about her creative process, Foreman provided a valuable lesson for presenters:
The fruit of my 11 years of research meant that I had more than 400 characters scattered over four regions … This vast mass of material was so unwieldy that I could hardly work my way through the first day of the conflict, let alone all four years.
While few presenters spend 11 years developing their stories about their businesses, they, like Foreman, have a vast mass of unwieldy material that they have to communicate to various audiences. Unfortunately, most presenters then proceed to deliver that mass to their audiences as is, inflicting the dreaded effect known as MEGO, “My Eyes Glaze Over.”
Although Foreman is a respected scholar with a doctorate in history from Oxford University, she has storytelling in her DNA. Her father was Carl Foreman, an Oscar-winning screenwriter who wrote the classic The Bridge on the River Kwai. At the end of her research, Amanda Foreman realized that, even for a story as immense and complex as the Civil War, she had too much information for both writer and reader to process. Her solution:
I plotted the time lines of my 400 characters and identified and discarded people who, no matter how interesting their stories, had no connection to anyone else in the book. This winnowed my cast down to 197 characters, all bound to one another by acquaintance or one degree of separation.
Foreman was tapping into a practice — well-known among professional writers — called “kill your darlings.” In fact, a community of writers in Atlanta has adopted that name for its website. The phrase is often attributed to novelist William Faulkner, but it was actually coined by Sir Arthur Quiller-Couch, a British writer and critic who, in his 1916 publication, On the Art of Writing, said:
Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it — whole-heartedly — and delete it before sending your manuscript to press. Murder your darlings.
The sentiment was echoed by Christopher Markus and Stephen McFeely, the screenwriters of Captain America, the current Hollywood action film based on the 70-year-old comic strip character. In another Wall Street Journal “Word Craft” article, the team wrote:
Adapting an existing work for film is usually a process of reduction. Whether it’s a novel or a short story, a true-crime tale or 70 years’ worth of comic books, the first job is distillation. If this means losing someone’s favorite character, so be it. The simple fact is that we can’t put everything on the screen. Darlings must die.
The phrase rings true because writers, who labor over their ideas and words like expectant mothers, invariably fall in love with their offspring and are reluctant to find fault, and even more reluctant to part with them. In the same manner, presenters who live, breathe, walk, and talk their businesses want to share every last detail about them with their prospective audiences. But audiences do not share their interest, and so presenters, like writers, must kill their darlings.
In presentations, the process begins by assembling all your story elements. A chef prepares for a meal by gathering all the ingredients, seasonings, and utensils, but doesn’t use every last one of them. Once you have assembled all your presentation ingredients, assess every item for its relevance and importance to your audience — not to you. Your audience cannot possibly know your subject as well as you do, and so they do not need to know all that you do. Tell them the time, not how to build a clock.
Delete, discard, omit, slice, dice, or whatever surgical method you choose to eliminate excess baggage. Be merciless. Retain only what your audience needs to know.
Once you have made that first cut, make another pass, and then another. Each time you do, you will see your draft with fresh eyes and find another candidate for your scalpel. Follow the advice of the classic Strunk and White’s The Elements of Style: “It is always a good idea to reread your writing later and ruthlessly delete the excess.”
Bestselling horror novelist Stephen King — who knows a thing or two about ruthless killing — follows a similar practice. In his 2000 book On Writing, he shared a note his editor once sent to him:
You need to revise for length. Formula: 2nd Draft = 1st Draft – 10%.
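The editor's formula is mechanical, which is part of its appeal. Here is a minimal sketch; the repeated-pass helper extrapolates from the "make another pass" advice above rather than from King's note itself:

```python
def second_draft(first_draft_words: int, cut: float = 0.10) -> int:
    """Apply the editor's formula: 2nd draft = 1st draft - 10%."""
    return round(first_draft_words * (1 - cut))

def after_passes(words: int, passes: int, cut: float = 0.10) -> int:
    """Repeatedly apply the cut, one editing pass at a time."""
    for _ in range(passes):
        words = second_draft(words, cut)
    return words

print(second_draft(10_000))     # 9000
print(after_passes(10_000, 3))  # 7290
```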
Deal with your vast mass of unwieldy material in your preparation, not in your presentation; behind the scenes, not in front of the room. A gentler way of saying “kill your darlings” is, “when in doubt, leave it out.”
A footnote: Amazon lists Amanda Foreman’s new book at 1,008 pages. Imagine how many more pages it would have run had she not killed those 203 characters. | 0.536 | FineWeb |
Sampling Code
```python
from __future__ import annotations

from typing import List

from datasets import Dataset, load_dataset
from tqdm.auto import tqdm

TARGET_CHARS = 400_000_000  # ≈100 M tokens (≈4 chars/token)
BUFFER = 10_000             # streaming shuffle buffer
SEED = 42
HF_REPO = "sumuks/Ultra-FineWeb-100M"


def sample_ultrafineweb(
    target_chars: int = TARGET_CHARS,
    buffer_size: int = BUFFER,
    seed: int = SEED,
) -> Dataset:
    """
    Stream the Ultra-FineWeb English split and return a random sample
    whose total `content` length is at least `target_chars` characters
    (≈100 M tokens).
    """
    stream = load_dataset("openbmb/Ultra-FineWeb", split="en", streaming=True)
    stream = stream.shuffle(seed=seed, buffer_size=buffer_size)

    picked: List[dict] = []
    char_count = 0
    for row in tqdm(stream, desc="sampling"):
        text = row["content"]
        char_count += len(text)
        picked.append(row)
        if char_count >= target_chars:
            break

    return Dataset.from_list(picked)


def main() -> None:
    ds = sample_ultrafineweb()
    total_chars = sum(len(r["content"]) for r in ds)
    print(f"Sampled {len(ds):,} documents, {total_chars:,} characters")
    ds.push_to_hub(HF_REPO, private=False)


if __name__ == "__main__":
    main()
```
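The `TARGET_CHARS` constant leans on the rough heuristic of about 4 characters per English token. The helper below makes that back-of-envelope conversion explicit; the 4:1 ratio is the script's stated assumption, not an exact tokenizer measurement:

```python
CHARS_PER_TOKEN = 4  # rough heuristic for English text, as assumed in the script

def estimated_tokens(n_chars: int, chars_per_token: int = CHARS_PER_TOKEN) -> int:
    """Convert a character budget into an approximate token count."""
    return n_chars // chars_per_token

# 400 M characters ≈ 100 M tokens under the 4-chars/token assumption
print(estimated_tokens(400_000_000))  # 100000000
```

For a tighter budget you would measure the ratio with the actual tokenizer on a sample of documents instead of relying on the constant.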