docid: string (length 8 to 16)
text: string (length 3 to 29.7k)
randval: float64 (range 0 to 1)
before: int64 (range 0 to 99)
science1694051
The innate immune system has many different components. If we consider a natural virus, like the base adenovirus used as the vector in the Oxford/AstraZeneca vaccine, then we can assume that the virus already has sufficiently effective strategies in place to deal with them. In particular: Physical and chemical barriers are irrelevant, given an injected vaccine. Interferon and MHC pathways must already be either ineffective or slow enough to be secondary in the response against the adenovirus's foreign proteins, such that NK cells shouldn't be expected to change their efficacy. The vaccine protein isn't expected to damage the cell membrane, so that pathway won't change either. Inflammation and antibacterial pathways aren't triggered by presentation of an extra protein either. Activation of the adaptive immune system through antigen presentation is exactly the sort of response that is desired.
0.136611
99
science1694052
why doesn't the bacteriophage kill the mammalian cells? Phage are molecularly equipped to infect only their bacterial hosts. Consider the barriers that a wild-type phage needs to overcome in order to replicate and lyse a eukaryotic cell: The phage needs to get its genetic material into the cell, whose membrane has a vastly different lipid and protein composition compared to a bacterial cell. The viral genetic material (assuming DNA) needs to translocate to the nucleus. There's no cytoplasmic RNA polymerase to carry out transcription. The viral DNA needs to be bound by nuclear RNA polymerase. Bacterial/phage promoters have different motifs than eukaryotic promoters and will likely not be bound by the necessary transcription factors. The viral transcripts need to be translated, and the viral proteins need to self-assemble to produce new viral particles. Under normal conditions, phage λ can carry out none of these steps in mammalian cells, save instances of cytosolic import via energy-independent endocytic mechanisms which may incidentally overcome barrier #1.1 In the linked paper,2 highly modified versions of phage λ are used to deliver a construct encoding firefly luciferase (luc) into mouse cells in vivo and human K562-αvβ3 cells in vitro. These λ variants contain recombinant components from mammalian viruses that help them to perform the desired functions in mammalian cells: A high-affinity αvβ3 integrin binding protein (3JCLI4), engineered from human FNfn10 for enhanced binding and endocytosis into αvβ3-positive cells.3 This addresses barrier #1. A domain from the HIV-1 Tat protein fused to λ capsid protein gpD enhances mammalian cell entry 4 and permits nuclear translocation.5 This addresses barriers #1 and #2. A reporter gene cassette for luc expression driven by a human cytomegalovirus promoter (CMV) that can serve as a site of transcription initiation by mammalian RNA polymerase II. This addresses barrier #3. Further, this construct is made such that only genes directly regulated by the CMV promoter are transcribed when "infecting" the mammalian host,4 so only luc is expressed and not the full repertoire of phage genes. It's not entirely clear from the text of the linked study whether this is true of their λ variants, but the original paper describing the λ lysogen they utilize 4 includes gpD constructs fused with integrin-binding peptide RDG, the heparin-binding domain of vitronectin, and the nuclear localization signal of simian virus 40 T antigen, each of which serve to enhance λ entry into mammalian cells and/or nuclear translocation. Moreover, WT λ produces holins and endolysins to permeabilize the host cell wall in the final stage of lytic infection.6 Even if they were expressed in mammalian cells, these proteins are evolved to degrade bacterial cell walls, not eukaryotic cell membranes. All of this to say that bacteriophage lack the molecular machinery to infect and replicate in mammalian cells, and that even phage that have been engineered to enter mammalian cells lack the components to engage in productive infection. References Huh H, Wong S, St Jean J, Slavcev R. Bacteriophage interactions with mammalian tissue: Therapeutic applications. Adv Drug Deliv Rev. 2019 May;145:4-17. Lankes HA, Zanghi CN, Santos K, Capella C, Duke CM, Dewhurst S. In vivo gene delivery and expression by bacteriophage lambda vectors. J Appl Microbiol. 2007 May;102(5):1337-49. Richards J, Miller M, Abend J, Koide A, Koide S, Dewhurst S. 
Engineered fibronectin type III domain with a RGDWXE sequence binds with enhanced affinity and specificity to human alphavbeta3 integrin. J Mol Biol. 2003 Mar 7;326(5):1475-88. Eguchi A, Akuta T, Okuyama H, Senda T, Yokoi H, Inokuchi H, Fujita S, Hayakawa T, Takeda K, Hasegawa M, Nakanishi M. Protein transduction domain of HIV-1 Tat protein promotes efficient delivery of DNA into mammalian cells. J Biol Chem. 2001 Jul 13;276(28):26204-10. Vivès E, Brodin P, Lebleu B. A truncated HIV-1 Tat protein basic domain rapidly translocates through the plasma membrane and accumulates in the cell nucleus. J Biol Chem. 1997 Jun 20;272(25):16010-7. Young I, Wang I, Roof WD. Phages will out: strategies of host cell lysis. Trends Microbiol. 2000 Mar;8(3):120-8.
0.614121
99
science1694054
As explained by user GrumpyMammoth, you can do serial dilutions until the colonies are few enough to be counted and then calculate the CFUs using the dilution factor. If you do not want to count the colonies yourself, you can use ImageJ's "Analyze particles" function, which will give you both the number of colonies and the size of each colony in pixels. The size can be converted into millimeters in ImageJ by opening the Image>Properties tab and changing the pixel width parameter.
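As a minimal illustration of the dilution arithmetic (the colony count, dilution factor and plated volume below are made-up values, not from the question), the back-calculation to CFU/mL looks like this:

```python
# Hypothetical numbers: back-calculating CFU/mL from a countable plate.
colonies = 87            # colonies counted on the plate
dilution_factor = 1e-5   # e.g. five successive 1:10 dilutions
volume_plated_ml = 0.1   # volume spread on the plate, in mL

cfu_per_ml = colonies / (dilution_factor * volume_plated_ml)
print(f"{cfu_per_ml:.2e} CFU/mL")   # 8.70e+07 CFU/mL
```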
0.250519
99
science1694055
These chimpanzee adenoviral vectors do not bypass the innate immune system at all. In fact, they are used precisely because they activate the innate immune system, which then results in greater immunity, because activation of the innate immune system leads to antigen presentation and then activation of the adaptive immune system. The reason this works is that these viruses have essentially no seroprevalence in humans, so there is no pre-existing adaptive immunity against the vector and the innate immune response can proceed, with antigen presentation as the end result. In contrast, human adenovirus vectors generally have a high seroprevalence (because we've all had colds caused by adenoviruses), so they activate the adaptive immune response against the viral particle (vector), clearing it before presentation of the novel antigen (in this case the CoV spike protein) can occur. There's a nice commentary on the different mechanisms by which the ChAdOx1 (AstraZeneca/Oxford) and the mRNA (Moderna/Pfizer) vaccines work at Nature Reviews Immunology. To quote the relevant paragraph (emphasis mine): The AdV vaccines also contain inherent adjuvant properties, although these reside with the virus particle that encases the DNA encoding the immunogen. Following injection, AdV particles target innate immune cells like DCs and macrophages and stimulate innate immune responses by engaging multiple pattern-recognition receptors including those that bind dsDNA — in particular TLR9 — to induce type I interferon secretion8. Unlike AdV vectors, mRNA vaccines do not engage TLR9, but both vaccine formulations converge on the production of type I interferon (Fig. 1). Type I interferon-producing DCs and other cells that have taken up the vaccine-derived nucleic acids encoding the S protein can deliver both an antigenic and inflammatory signal to T cells in LNs draining the injection site. This activates S protein-specific T cells and mobilizes adaptive immunity against SARS-CoV-2. One of the problems with using a simian adenovirus vector like the ChAdOx1 vector is that now a large portion of the population will have some antibody response to the vector (as well as the introduced spike), so future uses of this same vector might not result in as high an immune response as we have seen this time around, when the population is almost entirely naive.
0.703544
99
science1694057
To add to @user47696's answer, heterologous expression systems also enable large-scale production of recombinant proteins, particularly in the E. coli and yeast systems. Foreign hosts may also provide a simpler system than the natural source for studying a protein's function.
0.287024
99
science1694059
Bacterial spores in most contexts are properly called endospores; they are formed within the bacterial cell wall and are a survival mechanism, creating resistance to desiccation, heat and cold, with tolerances of up to 150 °C (300 °F). The vast majority of bacterial spores studied are from Gram-positive species that are medically relevant. These include organisms such as Bacillus anthracis (anthrax) and Clostridium species (colitis (C. difficile), food poisoning (C. perfringens), and tetanus (C. tetani)), which are significant pathogens of humans. The reason that they are studied is that sterility of equipment and items used in medical settings, as well as in the food industry, is a great way to prevent illnesses and to avoid patients getting sick in the hospital from nosocomial infections. In the context of autoclaving, the temperatures, times and pressures have been worked out to inactivate the bacterial spores from these medically relevant bacteria, and are suitable for the vast majority of bacteria that people encounter in their everyday lives. These conditions work even for things like Mycobacterium tuberculosis, which is incredibly resistant to many of the ordinary disinfectants used in health-care settings and in the food industry. However, there are whole classes of bacteria that survive under unusual conditions of high heat and/or high acidity or alkalinity. These organisms are known as extremophiles. It turns out that some of these are quite resistant to autoclaving, as one might expect. It seems that they are resistant to up to 3 standard autoclave cycles and some other pretty extreme conditions: Cultures of Desulfotomaculum sp. C1A60, D. kuznetsovii and D. geothermicum B2 survived triple autoclaving while other related Desulfotomaculum spp. did not, although they did survive pasteurisation. Desulfotomaculum sp. C1A60 and D. kuznetsovii cultures also survived more extreme autoclaving (C1A60, 130 °C for 15 min; D. kuznetsovii, 135 °C for 15 min, maximum of 154 °C reached) and high-temperature conditions in an oil bath (C1A60, 130 °C for 30 min; D. kuznetsovii, 140 °C for 15 min). While this is interesting, and a real problem for those working with them, none of these sorts of species are medically relevant (as far as I know, I would love to be proved wrong!), so the fact that these are resistant to autoclaving is not a problem for the vast majority of us. As a note, language is inherently imprecise, and to be scientifically certain that something is universally true is almost impossible. This is particularly so with bacterial species, where it is estimated that there are about a trillion species and we can culture only about 3250 of them, so for the rest we know next to nothing about how they grow, their physiology or anything beyond a genome. This means that we have no way of determining if autoclave-resistant bacteria are actually found almost everywhere or if they are as rare as the proverbial hen's teeth. Basically what I am saying is that the Wikipedia article is incorrect and imprecise in its use of language. If it had said something along the lines of "inactivates all medically relevant spore-forming bacteria", then it would have been correct and much more precise.
0.403756
99
science1694060
According to "Plant Identification Terminology: An Illustrated Glossary" (Harris & Harris, 2001): Tree: a large woody plant, usually with a single main stem or trunk. An emphasis here should be on the woody characteristic. I'm not sure many botanists (if any) would define 'trees' without woodiness being part of the definition. As such, botanically speaking, the growth habit of neither bananas nor bamboo would be considered a tree. The variation between tree and shrub is much less well-defined, and no "standard" definition exists. Often times, the number of stems coming from the base is used to differentiate trees (one or few stems) from bushes (many stems), but this is not universally applied or useful. Emphasis here is "usefulness". Differentiating shrubs from trees is honestly arbitrary and typically dictated by the application to which the definitions are being applied. In many cases, simply differentiating by maximum or average growth size can serve as the differing characteristic as size is going to often be most relevant for landscaping (i.e., trying to achieve desired sizes) and ecological applications (i.e., which part of the community strata the tree reaches in terms of light availability). Often, botanical studies will explicitly indicate their definitions of tree vs shrub by indicating a cutoff or transition height tat they use to differentiate the two. However, in many studies, both would be considered relevant and a "cutoff" would be unnecessary. In other words, number of stems and maximum growth can and are often used to differentiate trees from shrubs, but no standard number of stems or specific growth size are universally used to differentiate both woody growth habits.
0.628936
99
science1694061
tl;dr there is no 'depression' cell line. Cell lines are suited for studying pharmacology and cellular or molecular mechanisms, none of which are depression. Depression also has no good correlates at the molecular level which could be used as a proxy in in vitro cell lines. For instance, you can use cell lines to study amyloid protein biosynthesis or clearance to better understand the proteostasis that occurs in neurodegenerative diseases, which are thought to arise through aberrant proteostasis. But depression itself is poorly defined and has no such characteristic signature at the molecular level, and I certainly don't know of any cell lines that can be used to study "differential expression of genes in depressed folks". You'd be better prepared by taking brain biopsies, and even that would give a very incomplete picture to begin a study with. Hi Sam. Really interesting idea, and you certainly pose the question diligently. Do not be discouraged from thinking about such things after reading through my response. But... I think many first impressions here would include critical thoughts about how you formulate and think about the experiment. Thus, I would challenge your question. First, depression is a syndrome, and is not very clearly defined, certainly not at the molecular level. If you want to reduce depression to a mechanism that can be studied in cell lines, you first need to justifiably operationalize it somehow. This, I think, is very problematic already. Additionally, depression does not occur at all in cell lines, or even between neurons in synapses, but rather (loosely speaking...) in vast, interconnected circuits, as its readout (symptomatology) is observable only as the output of entire nervous systems, in organisms. Depression is not caused by faulty enzymes resulting from single mutations; it is very highly multi-faceted. As such, cell lines cannot be used as proxies. What cell lines can be used for is to study specific molecular interactions, say, how pharmaceuticals are metabolized, or which compounds may impact proliferation. However, here I quote a few things from one of your sources that jump out even during a quick skim read: On the basis of the neurotrophic hypothesis of antidepressant’s action, effects of antidepressant drugs on proliferation may serve as tentative individual markers for treatment efficacy Peripheral proliferation is unsuitable as surrogate marker for antidepressant response Although the in vitro treatment of patient-derived LCLs with fluoxetine presents high inter-individual variability regarding the LCL proliferation behavior, this phenomenon has—according to our data—no association with the patient’s clinical outcome. The paper merely screened for differential expression of genes in white blood cells following fluoxetine exposure, and found only two candidate genes, SULT4A1 and WNT2B, which they have some reason to believe may be involved in drug metabolism, and which were correlated with donor clinical response and remission, respectively. This is a paper which suggests potential candidates for further study, but does not at all demonstrate any connection between depression, the efficacy of the drug, and the genes involved. It is meager in light of your question! The key here is that they compare cells before and after fluoxetine exposure, not depressed versus non-depressed neural tissue. They tried to use remission to correlate things, but it just didn't work well. Which I think is expected!
0.439538
99
science1694062
Interesting question! Yes, the hydrophobic amino acids are very important; they facilitate interaction with the hydrophobic (inner) portion of the lipid bilayer. A useful review focusing on the bilayer side that I'll reference throughout this answer is here: Mechanics of membrane fusion, doi: 10.1038/nsmb.1455. Panel (a) of its figures shows a model of two lipid bilayers fusing (minus protein contributions). Essentially, the spike makes this process thermodynamically favorable. The hydrophobic residues are key because they are what enables the other end of the spike protein to interact with and deform the cell membrane. The same review also presents a model of protein-mediated fusion (they emphasize that the deformation is key). The cleavage of spike is crucial to initiate this process because the hydrophobic residues are not accessible/cannot interact with the cell membrane unless the spike has been cleaved. This works to the advantage of the virus, since if the spike is close enough to be cleaved by TMPRSS2, it's close enough to stab the membrane of the correct cell. A virus with hydrophobic residues that are always ready-to-go, in comparison, would likely fuse with a lot of random/off-target membranes of cells or vesicles that would not be permissive to viral replication. Mechanisms of membrane fusion: disparate players and common principles is another review paper. A detailed discussion of the physics of this process is beyond the scope of this site, but here are some potential starting points (disclaimer: I am not a physicist): Mechanics of membrane fusion/pore formation; Membrane tension and membrane fusion; The hydrophobic force: measurements and methods; Reconciling the understanding of ‘hydrophobicity’ with physics-based models of proteins
0.377185
99
science1694063
There are many ways to do this. User Grb's answer is good, but that method requires access to equipment. If this is a high-school-level project, you could make two nutrient agar petri plates. Spread 1 milliliter of unfiltered water on one and 1 milliliter of filtered water on the other. Make sure this is done in a sterile environment (using a Bunsen burner, for example). Growth of colonies on your filtered plate will tell you there are bacteria in the filtered water. You can also count the number of colonies on both plates. If there are fewer colonies on the plate with filtered water, your filter works to a certain extent. If the counts are identical, your filter does not work. If you use colony counts, make sure you repeat the experiment multiple times for a more rigorous result.
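To make the comparison a little more rigorous, you could summarise the replicate counts; a tiny sketch with made-up colony counts (not real data) might look like this:

```python
# Made-up replicate colony counts for illustration only.
from statistics import mean, stdev

unfiltered = [152, 160, 147]   # colonies per plate
filtered = [38, 45, 41]

print(f"unfiltered: {mean(unfiltered):.0f} +/- {stdev(unfiltered):.0f} colonies")
print(f"filtered:   {mean(filtered):.0f} +/- {stdev(filtered):.0f} colonies")
# A clearly lower mean on the filtered plates suggests the filter removes
# some, but not all, of the culturable bacteria.
```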
0.197758
99
science1694065
They cool it fast to maintain the taste and quality and to avoid the food danger zone, which is 5 to 60 °C, because typical pasteurization is only 99.999% effective. A fast transition through 30-40 °C improves the quality a lot compared to keeping the product at 30-40 °C. If you had limited means, you could use extra pasteurization at the expense of the taste and achieve a 99.99999% reduction in bacteria, and the slower cooling would then let the number of surviving bacteria rise again by, say, an order of magnitude. See the conclusions here: https://onlinelibrary.wiley.com/doi/full/10.1111/1541-4337.12357 To know which process best suits your equipment and product, check the tables in that paper for the required reduction and follow similar times.
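As a rough illustration of what those percentages mean in cell numbers (the starting count and the growth comment are assumptions, not figures from the cited review):

```python
# Converting "percent killed" into log reductions and surviving cells.
import math

initial_cells = 1e6            # cells per mL before pasteurization (assumed)
reduction_pct = 99.999         # a "5-log" pasteurization
surviving_fraction = 1 - reduction_pct / 100
survivors = initial_cells * surviving_fraction
log_reduction = -math.log10(surviving_fraction)

print(f"{survivors:.0f} survivors/mL, {log_reduction:.1f}-log reduction")
# Slow cooling through the growth-friendly 30-40 degree range gives these
# survivors time to multiply again, eroding part of that reduction.
```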
0.243479
99
science1694067
This is actually a very difficult question to answer - because how far back do you go. The clade Avialae which includes the true birds is fairly recent and contains a lot of new dinosaurs(! -debate over feathers), but the clade Avialae is included in the theropod (Theropoda) group of dinosaurs, which includes such things as Tyrannosaurus rex, Spinosaurus, and Giganotosaurus, all of which are fairly large. However, these are almost certainly sister clades to that of the birds rather than actual ancestors. However, Avialae is a made up clade with no actual fossil representing it; it is purely hypothetical as an ancestral marker for birds. It seems that there is quite a bit of debate among paleo ornithologists as to which groups of dinosaurs are most closely related to birds. It is thought that things like Archaeopteryx are part of the bird ancestral clade, but even this much is uncertain, as there are a number of other feathered dinosaurs within the larger clade, that definitely did not evolve into birds. You might consider Neotheropoda a suitable cut-off, as this contains the true birds, but is still quite diverse. You could also cut off at Averostra, which includes Aves, and Ceratosauria, a group of quite large feathered dinosaurs. Having said all that, we are very unlikely to even know which species are directly ancestors of birds, it seems that there are a lot of species that are very similar and all closely related, but making the choice of which is down to experts and a paucity of specimens. See this bit on Birds for some of the difficulty in defining them. If I had to make a choice, I would probably go with Ichthyornis as a definite ancestor, but I am no expert (I'm a virologist...).
0.109695
99
science1694069
Re: In particular, if the underlying DNA structure is changing, then wouldn’t we expect the progeny to inherit these epigenetic changes? Why is it so remarkable? Given the context of the quote, the "variegation" part of the phrase refers to gene expression sometimes being turned off by influence of newly-nearby heterochromatin. The extent to which this suppression of expression extends into the euchromatic region from the adjacent heterochromatin varies from cell to cell, but once established is stably inherited through further divisions of that cell. It is unknown what molecular mechanisms initiate and maintain the suppression of expression in the formerly euchromatic genes. Since we do not fully understand the molecular nature of what is required to initiate and maintain heterochromatin, especially in a section of DNA that was formerly euchromatin and so presumably does not contain any global "make me heterochromatin" sequence signals, the "remarkable" likely means "we don't yet understand the details" -- it's like magic.
0.50194
99
science1694070
Possibly. The currently favoured view is that anatomically modern humans didn't evolve in one location, but evolved in a structured population that spanned Africa (and perhaps the Middle East): We challenge the view that our species, Homo sapiens, evolved within a single population and/or region of Africa. The chronology and physical diversity of Pleistocene human fossils suggest that morphologically varied populations pertaining to the H. sapiens clade lived throughout Africa. Similarly, the African archaeological record demonstrates the polycentric origin and persistence of regionally distinct Pleistocene material culture in a variety of paleoecological settings. Genetic studies also indicate that present-day population structure within Africa extends to deep times, paralleling a paleoenvironmental record of shifting and fractured habitable zones. We argue that these fields support an emerging view of a highly structured African prehistory that should be considered in human evolutionary inferences, prompting new interpretations, questions, and interdisciplinary research directions. Scerri, Eleanor M. L., et al. "Did our species evolve in subdivided populations across Africa, and why does it matter?" Trends in Ecology & Evolution 33.8 (2018): 582-594. For a criticism of the mtDNA Southern Africa origins paper, see this paper.
0.071867
99
science1694072
Based on the location, size, behavior, and shape (particularly of the pectoral fins), this appears to be a flying (or helmet) gurnard (Dactylopterus volitans) with its "wings" folded. The range of these fish includes the Caribbean, and they are known to use the forepart of their pectoral fins (i.e. the free extra 'fin' near their head) for "exploring the bottom" 1-3, which fits the behavior you described. The size is also reasonable since they are said to reach up to 50 cm in length. Image below for comparison; note these fish are described as being highly variable in coloration. <img src="https://i.stack.imgur.com/BUTd6.jpg" alt="Photo 108397502, (c) slebris, some rights reserved (CC BY-NC) " /> Image © slebris from iNaturalist, some rights reserved. References: https://fishbase.mnhn.fr/summary/1021 Roux C; Dactylopteridae. In: Miller, P. J., Whitehead, P. J. P., Bauchot, M. L., Hureau, J. C., Nielsen, J., & Tortonese, E. (1986). Fishes of the North-eastern Atlantic and the Mediterranean. Vol. III. Richard Clay Ltd, Bungay, United Kingdom. Davenport, J., & Wirtz, P. (2019). Digging with ‘hands’: observations of food capture in the flying gurnard Dactylopterus volitans (Linnaeus, 1758). Journal of Natural History, 53(41-42), 2489-2501.
0.141451
99
science1694073
I agree with Luigi - that is a Brown Recluse Spider (if you lighten the image, you can clearly see the diagnostic violin marking; that, plus the Recluse shape, size, legs and habits make me certain that this is what you have, even without seeing the eyes clearly). By the way, this is an adult male, which is why you noticed it - right now, these guys are wandering around looking for love instead of hiding in some quiet, undisturbed area snacking on cockroaches and other delicious insects. While these spiders are generally inoffensive, the venom is unpleasant in effect, and may rarely cause significant health effects. In this case, I would consider calling in a pest control service, although you're in an area where Recluses are pretty common. You might also want to get used to taking some precautions like shaking out your shoes in the morning, laying down sticky traps around the bedroom walls and under the bed frame legs, etc. I know that many people live their lives in close proximity to Recluses without problems, but once in a while unfortunate things can happen.
0.435124
99
science1694076
As you specifically state: Would it be possible to systematically deduce what this organism looks like and behaves like without reference to anything else (ex: a repository of genomes of known living organisms)? No, no chance at all; we certainly wouldn't even be able to determine a single gene function, other than that it codes for a series of amino acids. After all, even a simple gene of, say, 100 bases of DNA still encodes 33 amino acids, and most people don't remember that sort of information without cause; the only way we can acquire that information easily is through repositories of information. The only way we could do something approaching working this out would be to take portions of the genome that we could identify as genes (start/stop codons), then express them one by one and empirically work out the function through classic biochemical methods. This is very laborious and time-consuming: it would take many, many person-hours per gene and a LOT of resources. However, this is how the functions of novel genes and their related proteins were and still are worked out. With a reference, perhaps, at least partially. From the sequence, and with access to a repository such as GenBank, we could deduce which type of organism it was by comparing it to known genomes from other organisms in a process known as phylogenetic analysis. You could certainly tell which of the kingdoms it came from. In each organism there are conserved bits of the DNA (or RNA) that tell you which groups they belong to. For example, for all DNA-based bacteria and archaea (that I know of) you can use the 16S ribosomal RNA sequence, which will identify bacteria and archaea down to the genus and species level. For instance, you might have a genome that tells you it is a bacterium of the Staphylococcus group, meaning you can deduce that it is very, very likely to be a small, ball-shaped bacterium that will stain positive with a Gram stain. You could also likely identify some genes and their function based on homology. You can apply this process to other genomes; for instance, a mammalian one will tell you that the organism had fur, produced milk, was warm-blooded, was a quadruped, had a spinal column, etc.
0.510284
99
science1694077
The (food) energy in 1 kg of lettuce or 1 kg of beef depends on each one's composition, not on its trophic level. Normally we can digest fats, protein and carbohydrates, but other organisms like certain bacteria and fungi can even digest components like fiber and cellulose that we can't. Water doesn't provide any energy, but is still part of an item's weight, so food scientists normally calculate available energy for people based on the dry weight of fat, protein and carbohydrates, using average calories per gram for each of the 3 classes of components. The trophic levels/food chain 90/10 rule refers to how much biomass is produced in an ecosystem, not the energy content of a certain piece of that biomass.
0.155181
99
science1694079
The so-called '10% law' is a common, albeit very rudimentary, rule-of-thumb in foodweb analysis. It's commonly attributed to Raymond Lindeman, though he cited a wide range of ratios in natural systems. The point it's trying to make relates to the transfer of energy between trophic levels, not about the energy density of plants or animals in each trophic level. The '10% law' would generally be parsed as 'it takes 10 kilojoules of energy stored in grass to make one kilojoule of energy stored in a herbivore, and 10 kilojoules of herbivore to make one kilojoule of a carnivore'. Energy density of various foods is routinely measured. The US Department of Agriculture estimates the energy density of beef (15% fat, broiled) at 10470 kilojoules per kilogram, and the energy density of lettuce (Romaine) at 720 kilojoules per kilogram. One kilogram of lettuce clearly provides less food energy than one kilogram of beef.
0.638597
99
science1694080
1 kg of beef has more energy than 1 kg of lettuce, but it isn't directly related to the trophic level energy loss. "Given that each level of the food chain has a decrease of 10% of available energy": you're all mixed up here. What the rule is saying is that if you start out with n units of solar energy, you lose 90% of it for every trophic level it passes through; only 10% passes through as stuff that "stays around" materially (i.e. used to build the structure of the organism rather than being burned away as fuel to keep it alive). That means that it takes about 10x more energy to produce the same amount of edible calories in beef than it does in vegetables and grains, if they are separated by one trophic level. It's not strictly talking about the energy density in food, but instead about how wasteful it is to produce the food containing those edible calories from the point of view of the total solar energy invested. For the same solar energy, you can feed a lot more people calorie-wise with vegetables and grains than beef. It's kind of like thinking about how many vegetables a rabbit eats over the couple of years of its life, from birth to when the rabbit becomes food for you. How long could you feed yourself on those same vegetables? Weeks maybe? How long could you feed yourself if you ate the rabbit? A few days at most. You can imagine all those vegetables weigh more than the rabbit and supply more energy than the rabbit meat, while the same mass of rabbit flesh has more calories than the same mass of vegetable. The rule isn't talking about calorie density of food. Obviously, there are complicating factors, such as the actual composition of the food, since that determines calories and nutrition (which isn't related to solar energy). The reason the same mass of beef has more energy is that the 10% that sticks around keeps on accumulating. This is also the reason toxins accumulate in animals higher up the food chain. A little plankton might have a tiny bit of mercury in it, but a small fish might eat billions of plankton. And then a large fish might eat a thousand small fish. And you might eat one large fish. If that mercury stuck around in everything that ate it, all that accumulated mercury would end up in you.
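A toy illustration of that 10% rule (the starting energy and the 10% transfer efficiency are just the rule-of-thumb assumptions, not ecosystem data):

```python
# Energy retained as biomass at each trophic level under the 10% rule.
solar_energy_fixed = 100_000      # arbitrary units fixed by plants
transfer_efficiency = 0.10

energy = solar_energy_fixed
for level in ["plants", "herbivore", "carnivore", "top predator"]:
    print(f"{level:>12}: {energy:>8.0f} units")
    energy *= transfer_efficiency
# Each step keeps only ~10% as biomass; the other ~90% is respired away,
# which is why beef calories "cost" roughly 10x the plant calories.
```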
0.266215
99
science1694082
Echoing other answers, our ability to predict the function of individual genes is entirely dependent on either A) physical experiments (making the products of those genes in a test tube or performing genetic experiments) or B) inferring their function from their similarity to other genes that have been studied by physical experiments. In other words, we can't currently predict individual gene functions with computers unless we have some similar outside reference to compare to. Without the ability to do that for a single gene, there is no possibility of doing this for a whole genome. Forget about predicting how all of these genes would be expressed or interact to create a complex living organism. If you could compare to other sequenced genomes, phylogenetic analysis would quickly tell you what species your unknown genome was most similar to. I thought I might offer an alternate answer about what one might be able to do with some general knowledge of genomes but not the ability to actually compare your unknown genome to specific genomes / known DNA sequence. With the right software, you could roughly predict where the genes were in your unknown genome. This could tell you how many genes this organism has, how many of those genes are unique genes or repeats, and how much of the genome is made of non-genic sequence. You could also look at the overall structure of your genome. These factors could give you a rough guess about what kind of organism you have. For example, if your genome is circular (which you could tell from the sequence), you have a bacterium or an archaeon. If there are very few genes, you could infer that this organism lived a parasitic lifestyle, since the smallest known genomes are from parasitic species that no longer need to encode all of the functions of a fully independent lifestyle. A large genome, linear chromosomes, or multiple chromosomes would indicate a eukaryote. If you see evidence of polyploidy, your organism could be a plant. Through similar methods, you could probably tell a eukaryote's nuclear genome apart from mitochondrial and chloroplast genomes and guess whether it was a plant based on how many non-nuclear chromosomes it has (two distinct non-nuclear chromosomes would indicate a chloroplast genome is present). Just a little thought experiment on what one could do with an unknown genome sequence without GenBank or a reference sequence, but with current computational tools and some knowledge about genomes.
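To give a flavour of the "roughly predict where the genes are" step, here is a deliberately naive sketch (a toy ORF scan on one strand; real gene predictors are far more sophisticated, and everything below is an illustrative assumption):

```python
# Toy ORF finder: flag ATG...stop stretches of a minimum length, per frame.
STOPS = {"TAA", "TAG", "TGA"}

def toy_orfs(seq, min_len_codons=30):
    """Yield (start, end) of naive ORFs on the forward strand only."""
    seq = seq.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i
            elif start is not None and codon in STOPS:
                if (i + 3 - start) // 3 >= min_len_codons:
                    yield (start, i + 3)
                start = None

fake_genome = "ATG" + "GCT" * 40 + "TAA"   # a made-up 42-codon "gene"
print(list(toy_orfs(fake_genome)))          # [(0, 126)]
```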
0.668494
99
science1694083
There is no higher order meaning. Ribbon diagrams are a cartoon representation of portions of a polypeptide chain that are engaged in what is termed the secondary structure of proteins — the regular areas of protein structure formed by hydrogen bonding of peptide bonds. These are either individual helices (α-helices) or adjacent flat pasta strips (β-sheets). An experienced scientist in the field of protein structure can look at the combination of such features and perhaps recognize combinations that recur in proteins of similar function, but there is no — heaven preserve us — code. Ribbon diagrams are just one device for simplifying a complex object. There are other types of representation that allow appreciation of surface charge, hydrophobicity or individual side chains. An author representing proteins in a paper will choose one (or a combination) that best allows the reader to understand the information he is trying to convey. Ironically, in the diagram presented in the question they merely serve as references to indicate the positions in the protein where mutations have occurred.
0.475064
99
science1694085
Echidnas evolved from platypus-like ancestors about 14-20 million years ago, so they count, if the platypus counts, as having returned to the water, although not for 100% of their life cycle.
0.425591
99
science1694086
According to this article, nacre is composed of aragonite ($\mathrm{CaCO}_3$) and organic matrix (chitin, proteins ...) with a mass ratio $95 \% : 5 \%$. In the supplementary file, they provide densities for each component, namely the density of aragonite $\rho_a=2.95 \; \mathrm{g/cm}^3$ and the density of a generic protein $\rho_p=1.35 \; \mathrm{g/cm}^3$. From this data, we can estimate the density of nacre: $$\rho_n \approx \frac{1}{\frac{0.95}{\rho_a} + \frac{0.05}{\rho_p}} = 2.78\; \mathrm{g/cm}^3.$$
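A quick numerical check of that estimate (same numbers as above):

```python
# Mass-fraction ("harmonic mean") mixture density for nacre.
w_aragonite, w_organic = 0.95, 0.05        # mass fractions
rho_aragonite, rho_organic = 2.95, 1.35    # g/cm^3

rho_nacre = 1 / (w_aragonite / rho_aragonite + w_organic / rho_organic)
print(f"{rho_nacre:.2f} g/cm^3")   # 2.78
```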
0.463322
99
science1694088
You have correctly identified that the trait is autosomal recessive. Now for the probabilities. We know that: II-1 is $Aa$ and II-2 is $Aa$ (because they don't show the trait, but their son does), III-1 is $Aa$ (because he is a carrier), III-2 is either $Aa$ or $AA$ (because she doesn't show the trait). IV-1 will show the trait if it is a recessive homozygote, $aa$. Because we are not sure about the genotype of III-2, we have two distinct scenarios: Scenario 1: III-1 is $Aa$ and III-2 is $AA$. Scenario 2: III-1 is $Aa$ and III-2 is $Aa$. Let's first calculate the probability of each scenario. Here, we have to be careful not to fall into the trap of ignoring the conditional probability. The probability of III-2 being $Aa$ is not $1/2$ as we might wrongfully assume from simply drawing a Punnett square of her parents ($Aa \times Aa$). Because we know that III-2 doesn't show the trait, she cannot be $aa$. Therefore, we have to eliminate this possibility from the Punnett square, and we are left with the probabilities $1/3$ for her being $AA$ (scenario 1) and $2/3$ for being $Aa$ (scenario 2). Scenario 1 (probability 1/3) III-1 is $Aa$ and III-2 is $AA$. Their child cannot show the trait. Scenario 2 (probability 2/3) III-1 is $Aa$ and III-2 is $Aa$. Their child will show the trait with probability $1/4$. Conclusion The probability of IV-1 showing the trait is the product of the probability that scenario 2 is true and the probability that their child shows the trait. Therefore, $$\text{probability for IV-1 to be } aa = 2/3 \times 1/4 = 1/6.$$ As you see, not being careful about the conditional probability would lead you to the wrong result: $1/2\times1/4 = 1/8$.
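If you want to convince yourself numerically, here is a small brute-force check of the same conditional-probability argument (the `cross` function is just a throwaway helper written for this example):

```python
# Brute-force check of P(IV-1 affected) by enumerating genotypes.
from itertools import product
from fractions import Fraction

def cross(p1, p2):
    """All equally likely child genotypes from two parents, e.g. cross('Aa', 'Aa')."""
    return ["".join(sorted(a + b)) for a, b in product(p1, p2)]

# III-2 is a child of Aa x Aa, conditioned on NOT being aa (she is unaffected):
iii2_possible = [g for g in cross("Aa", "Aa") if g != "aa"]   # ['AA', 'Aa', 'Aa']

# For each equally likely III-2 genotype, cross with III-1 (Aa) and count aa children.
p_affected = sum(
    Fraction(cross("Aa", g).count("aa"), 4) for g in iii2_possible
) / len(iii2_possible)
print(p_affected)   # 1/6
```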
0.465559
99
science1694089
Going extinct hardly counts as being successful. Let us take, e.g., Ebola: a virus that is efficiently transmitted and replicates quickly, but ends up killing most of its hosts. It quickly goes extinct, because there are no hosts left for its replication. If Ebola still survives, it is because it infects animals other than humans, for whom it is less lethal. A similar example is smallpox, which was less lethal than Ebola and therefore continued to circulate in human populations for thousands of years... but became extinct once vaccines were developed. On the other hand, flu and the common cold have kept circulating in human populations since time immemorial, and it is not likely that they will be going extinct any time soon, because they have developed a way of co-existing with their hosts.
0.550624
99
science1694091
Photosynthesis. Early photosynthesizers, which would have been adapted for a reducing atmosphere, drove themselves extinct as they dumped oxygen into the atmosphere as a waste product. They were incredibly successful because they could live off little more than three of the most common materials on the planet. Eventually the oxygen built up to the point that mostly only oxygen-tolerant or, later, oxygen-using organisms survived. https://pubmed.ncbi.nlm.nih.gov/20731852/
0.401786
99
science1694092
Yes. According to this article, the compressive strength of nacre is $300{-}500\;\mathrm{MPa}$, whereas the compressive strength of human bones ranges up to around $200\;\mathrm{MPa}$.
0.257782
99
science1694093
There is not enough information in the question to solve it. The answer key from the original question makes a logical error: Viscosity is directly proportional to resistance. This is true. An increase in viscosity increases resistance. Flow and pressure do not matter for this statement to be true. You are correct to assume an increase in resistance. Blood flow is inversely proportional to resistance. This is true, but it's missing a qualifying statement: blood flow is inversely proportional to resistance for a given pressure drop. You can only assume blood flow decreases when resistance increases if you also assume pressure stays the same. Maybe this is a reasonable assumption, though I'll note it is not one that you seem expected to make, since all the answers involve changes in pressure. Blood pressure is directly proportional to flow Again, this is true, but is missing an even more important qualifying statement than the previous one. Blood pressure is directly proportional to flow for a given resistance. Importantly, you know this does not apply because you know resistance changed. If resistance is different between Situation A and Situation B, as in your problem, you cannot assume that flow and pressure change in the same direction from A to B. What you do know is that $$R=\frac{\Delta P}{Q}$$ holds. You know that R increases. So you know that the flow decreases if the pressure drop is constant, and you know that the pressure drop must increase if the flow is to remain the same. More generally, you can say "there will be a greater ratio of pressure to flow rate". You cannot solve the problem prompted by the question, which asks you to know the direction of flow and pressure change, without additional information about one or the other. The only way you would get a decrease in both flow and pressure is if the drop in flow is proportionately greater than the drop in pressure differential; nothing in the question suggests this is an assumption you should make. I would rewrite the solution as: If you have increased viscosity, you'd have increased resistance, resulting in a greater ratio of pressure to flow rate. You can only make statements about the ratio with the information given.
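A worked numerical toy example of those two limiting cases (the numbers are arbitrary, chosen only to show the logic):

```python
# R = dP / Q, with resistance doubled by a viscosity increase.
R1, R2 = 1.0, 2.0            # baseline and increased resistance (arbitrary units)
dP1, Q1 = 100.0, 100.0       # baseline pressure drop and flow, dP1 = R1 * Q1

Q2_if_dP_constant = dP1 / R2    # flow falls to 50 if the pressure drop is unchanged
dP2_if_Q_constant = R2 * Q1     # pressure drop rises to 200 if flow is unchanged

print(Q2_if_dP_constant, dP2_if_Q_constant)
# Without knowing which quantity (if either) is held constant, all you can
# say is that the ratio dP/Q has increased.
```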
0.711877
99
science1694094
Yes. If duplication of the human gene happened after the speciation event, we have multiple orthologues. This is shown in the following diagram from Ensembl as one-to-many orthologue (ortholog_one2many). In the example from the figure, one human gene has two mouse orthologues, but it could be vice-versa as in your case. This is an answer to your general question about orthologues. I haven't looked into details about your particular gene (ARFGAP1).
0.134579
99
science1694095
only about 10 percent of energy stored as biomass in a trophic level is passed from one level to the next. "Biomass in a trophic level is passed from one level to the next" is a lot of words to say that the cow eats the lettuce. This conversion from lettuce to beef is a lossy process. Similarly, if you hypothetically continue to farm up the trophic pyramid, growing bears and feeding them your cows, it's likely to take about 10 kg (likely more) of beef to produce 1 kg of bear meat. A 300 kg cow will give about 180 kg of beef. But growing that animal for approximately two years is likely to require about 1,300 kg of grain and 7,200 kg of silage/roughage. The "90%" number is a rough approximation; 180 kg is about 13-14% of 1,300 kg, and the silage or pasture grazing counts for something and brings it reasonably close to the 10% rule of thumb. Intentionally bred animals like some fish or poultry can be more efficient than beef. The important thing to consider is that when you drive by unimaginably huge fields of grain being farmed 40 feet at a pass by massive combines and wonder how people could ever grow hungry, be aware that we're wasting a lot of those calories by turning them into beef. As a side note, the cow will (generously) average about 40 liters (40 kg, 10 gallons) of water per day for those 2 years, for a total of (730 days * 40 kg water/day) / (180 kg beef) = about 160 kg of water actually drunk by the cow per kg of beef produced. That's a lot, but that's not counting all the water used to grow that grain and roughage (and lettuce)... generous numbers for that growth are how you end up with insane quantities like 10,000 liters of water per kg of beef. If you assume that the water in the cow's urine is recycled to the water table, and that the water that drains into the soil goes into the same aquifer or transpires from the grain back into the water cycle, the numbers are much more manageable, but it's still a lot of water. If you substitute a veggie burger at dinner, feel free to leave the water running while you brush your teeth; you've saved more water than a week's brushing by not consuming that meat. As a bonus side note: At a high level, both protein and carbohydrates have an energy density of 4 (kilo)calories per gram. Fat has an energy density of 9 calories per gram. However, 1 kg of raw beef is not 1000 grams of metabolically available protein; it's actually a combination of water, collagen, elastin, about 200 g of protein, and about 50 g (depending on the cut and cooking process) of fat. Similarly, 1 kg of lettuce is mostly water, quite a bit of indigestible cellulose, and about 30 g of actual caloric carbohydrates. So your question could also be interpreted as "Does something containing 30 g of carbohydrates have more energy than something containing 200 g of protein", and the answer to that is a definitive "no": the former has about 120 calories and the latter about 800 calories from the protein alone (before even counting the fat).
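Reproducing the rough arithmetic above (all inputs are the answer's own generous approximations, not measured data):

```python
# Back-of-the-envelope feed and water numbers for one beef animal.
beef_kg = 180              # beef from a ~300 kg animal
grain_kg = 1300            # grain fed over ~2 years
days = 730                 # ~2 years
water_per_day_kg = 40      # drinking water per day

feed_conversion = beef_kg / grain_kg                     # roughly 13-14%
water_per_kg_beef = days * water_per_day_kg / beef_kg    # ~162 kg per kg of beef

print(f"grain -> beef efficiency: {feed_conversion:.1%}")
print(f"drinking water per kg of beef: {water_per_kg_beef:.0f} kg")
```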
0.149782
99
science1694096
Unfortunately your assumptions are almost completely incorrect. The reasons some get sick and die and others don't are multi(multi)factorial. There will be some genetic component, but things like prior exposure to similar disease(s), socio-economic status, health status, and age (this is a big one for COVID-19) all play into it as well. The 1918 H1N1 influenza outbreak had a large number of confounding factors: it was in the middle of a global war, with large numbers of people (especially troops) crowding together (e.g. transport boats, barracks), often in terrible conditions (trenches), and large portions of the world (esp. western Europe, China, Korea, parts of Africa) were refugees with poor nutrition, stress, etc. In addition, it was at the very beginnings of modern medicine: there were no antibiotics to take care of the bacterial infections that followed, no supportive medicine, no respirators, etc. The people largely killed by the 1918 H1N1 were young men in their prime (20-40 y), who succumbed to something known as a cytokine storm, where their immune response was actually super strong (too strong) and they died from the resulting inflammation. There is a fairly large school of thought that says that the same is happening with COVID-19. It is thought that the cytokine storm against H1N1 was actually a heightened immune response because of prior exposure to another influenza virus. So, these people actually had "good" immune genes, but were the ones killed off by the virus. Secondly, regarding "just search for poor genes": you are advocating for something for which a) we don't yet know all the genes involved in immune responses, and which b) amounts to segregation of the population based on genetic characteristics (now where have I heard of that happening before... here, here, here), which might have some sort of impingement on things like human rights, especially when you consider that age is one of the largest factors for COVID-19 death. Now, those three comparisons above are admittedly harsh, because those were horrific things done with no just cause, and you could make an argument that those more at risk of COVID-19 should take more precautions (as many do), but how do you enact that into law? How do you enforce it? Can you tell if someone walking down the street has diabetes or asthma and is thus more at risk?
0.63752
99
science1694097
This is Phidippus audax, a jumping spider commonly known as the daring or bold jumping spider.
0.345224
99
science1694099
The International Society of Genetic Genealogy Wiki provides the following situations in which siblings may be considered $3/4$ siblings (a coefficient of relationship of 37.5%): (1) a man has children with each of two sisters (the children are related as half-siblings and first cousins); (2) a woman has children by each of two brothers (the children are related as half-siblings and first cousins); (3) a woman has children with both a man and his father (the children are related as half-siblings and half-aunt or half-uncle and half-niece or half-nephew); (4) a man has children with both a woman and her daughter (the children are related as half-siblings and half-aunt or half-uncle and half-niece or half-nephew). Note that the above situations assume that all grandparents are unrelated. If you consider a situation where either or both sets of grandparents are full siblings, then you have a coefficient of relationship in the sibling grandchildren that is intermediate between 37.5% and 50%.
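For the first scenario, the 37.5% figure comes from adding the two relationships the children share; a quick check (standard expected-sharing values, not taken from the wiki text itself):

```python
# "3/4 siblings": half-siblings (one shared parent) who are also first cousins
# (their other parents are full siblings).
half_sibling_share = 1 / 4    # expected shared autosomal DNA via the common parent
first_cousin_share = 1 / 8    # expected shared DNA via the sibling parents

r = half_sibling_share + first_cousin_share
print(r)   # 0.375, i.e. 37.5%
```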
0.292824
99
science1694100
Typical family sizes (especially in developed countries) are much smaller than they used to be, because cultural factors influence behaviour (do you want to have 10-15 kids if you can't afford to give them a good education?); see the charts in Kopf and Livni (2017). More fundamentally, maximizing the number of offspring does not necessarily maximize reproductive fitness, because the amount of resources/care you give to each individual offspring matters; if you have too many offspring, maybe none of them will grow up to be high-quality/reproductively successful. This trade-off affects non-human organisms as well; there is an entire literature in evolutionary biology on optimal reproductive tactics that includes this issue (Pianka 2008). Kopf, Ephrat, and Dan Livni. “The Decline of the Large US Family, in Charts.” Quartz, 2017. https://qz.com/1099800/average-size-of-a-us-family-from-1850-to-the-present/. Pianka, E. R. “Optimal Reproductive Tactics.” In Encyclopedia of Ecology, edited by Sven Erik Jørgensen and Brian D. Fath, 2567–72. Oxford: Academic Press, 2008. https://doi.org/10.1016/B978-008045405-4.00841-7.
0.125223
99
science1694101
Snail feeding. It is really obvious if you have seen it before. It is the track left by a feeding snail. Here is the grazing pattern of the common snail. https://commons.wikimedia.org/wiki/File:Land_Snail_radula_tracks.jpg https://alexhyde.photoshelter.com/image/I0000oboo93ZzSEI
0.256499
99
science1694103
Clarification of the Question It appears to me that the question posed in the title of this question, “What is sigma factor PvdS?” is answered by the poster — it is a sigma factor that is specific for, and allows transcription of, genes involved in pyoverdin synthesis in Pseudomonas aeruginosa. I assume that the poster knows that a sigma factor is a subunit of bacterial RNA polymerase that must associate with it before it is capable of initiating transcription. (The information is readily available on Wikipedia or in Berg et al.) The bulk of the actual question is devoted to describing the results of computer analysis of the genome of an unspecified archaeon. Towards the end the poster poses a question: …can I say that my genes could be upregulated or downregulated by this PvdS sigma factor… [The sentence continues for several lines. The emphasis is mine.] I therefore interpret the question as asking What conclusions can be drawn from computer analysis of the type performed, with particular reference to the biological system described? Answer It is necessary to emphasize at the outset that the conclusions that can be derived from studies performed in silico are severely constrained. Sequence similarity allows you to suggest that an experimentally uncharacterised gene might encode a protein with a similar function to a related gene. No more. Only experimental analysis of the protein allows you to state what its function actually is. The usual type of annotation of such proteins (and by extension their genes) is one of: hypothetical xxxxxx protein putative xxxxxx protein possible xxxxxx protein xxxxxx-like protein xxxxxx-domain protein Likewise, the occurrence of a motif that allows pyoverdin genes to be recognized by a transcription factor in P. aeruginosa can only suggest that a functionally uncharacterized gene from a different organism might be similarly recognized and its transcription regulated in a similar manner. In the example in question the most it is possible to say is that a system of gene regulation similar to that in P. aeruginosa may exist in the archaeon. To go any further one would need to identify (the gene for) for the putative sigma factor, show that it allowed the RNA polymerase to bind to the genes, and either show this affected transcription directly or perform genetic experiments that would allow the same conclusion. Furthermore, I would be wary of describing the putative sigma factor as ‘PvdS’ unless the proteins predicted to be encoded by genes on which this motif is found bear a similarity to the genes for synthesis of pyoverdin, from which its gene name, pvdS, is derived. (A BLAST search should, of course, be done on the genes to determine this — probably blastx.)
0.681855
99
science1694105
This looks to me like Thiania bhamoensis (Metallic Blue Jumper), whose species distribution includes India. It has a notably pointy abdomen, as you noted, and the listed size range appears to be consistent as well. I do not believe this is Phidippus audax due to the colour inconsistency (only the chelicerae are blue) as well as the fact that the known species distribution lies entirely within North and South America.
0.63257
99
science1694106
So it turns out that your estimate is not bad; by a simple velocity calculation, I get it to be a little over 50% out (see below), but as you say, there is some error in your measurement. However, it turns out the pulse wave velocity is much more complicated than that, and subject to some debate in the literature. The speed at which the impulse propagates is not the speed of sound, but rather the speed of the pressure wave from your heart, and it varies according to arterial stiffness (the wave travels faster in narrower, stiffer arteries and slower in larger ones, as they are more floppy). The equations for calculating it are similar to those used to calculate the velocity of sound in a medium. There are two of these, where $P$ is pressure, $V$ is volume and $\rho$ is the density of the blood: The Frank/Bramwell-Hill equation $$ PWV = \sqrt{\frac{V.dP}{\rho.dV}} $$ And the Moens-Korteweg equation ($E_{inc}$ = vessel wall elasticity, $h$ = wall thickness, and $r$ = radius): $$ PWV = \sqrt{\frac{E_{inc}.h}{2.r.\rho}} $$ Now these are super complicated for the average person to measure and seem a bit difficult to work with, so it can be simplified to the classic $velocity = distance/time$. If you look at the structure of the arteries in the body, all the major arteries to distal portions of the body feed off (unsurprisingly) the same source, the aorta. Now to estimate the difference in time you need to work out the time it would take to propagate to each point. I'm not too far off you in size (~10 cm taller), and did some very rough estimates of how far my elbow is from my heart and about how far my heart is from my ankle, and I got ~40 cm for the elbow and ~135 cm for the ankle. You gave a propagation speed of 1100 cm/s, so substituting that into the simple equation: $$ \text{time} = \text{distance}/\text{velocity} $$ So 40 cm distance: $$ \text{time} = 40/1100 $$ $$ \text{time} = 0.0364 \text{ seconds} $$ and 135 cm distance $$ \text{time} = 135/1100 $$ $$ \text{time} = 0.1227 \text{ seconds} $$ $$ \text{Difference in time} = 0.1227-0.0364 $$ $$ \text{Difference} = 0.0863 \text{ seconds} $$
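The same back-of-the-envelope calculation in a couple of lines (the distances are the rough body measurements above, not clinical values):

```python
# Pulse arrival delay, ankle vs elbow, using velocity = distance / time.
pwv_cm_per_s = 1100.0   # pulse wave velocity from the question
d_elbow_cm = 40.0       # rough heart-to-elbow distance
d_ankle_cm = 135.0      # rough heart-to-ankle distance

delay_s = d_ankle_cm / pwv_cm_per_s - d_elbow_cm / pwv_cm_per_s
print(f"{delay_s:.4f} s")   # ~0.0864 s
```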
0.171658
99
science1694107
Why revive a four-year old question? Although I do not consider nomenclature of this type terribly important, and the high-scoring answer from @VonBeche is reasonable, I decided to add my own ‘answer’ for several reasons. First this is a highly active question, probably because students are required to make this sort of distinction, second because there have been several recent incorrect answers, and third because none of the answers are supported by an authoritative source (I do not regard Wikipedia as necessarily authoritative). However, perhaps the most important reason is to emphasize that, although there has been a recent attempt to produce a standard nomenclature, in actual practice there is no generally agreed terminology for cofactors. Authority used in this answer Biochemical nomenclature arises in a haphazard manner from new research discoveries, and only when the dust has settled, so to speak, do committees try to standardize it. I expected to find something in the International Union of Biochemistry and Molecular Biology Recommendations on Biochemical & Organic Nomenclature but those seem confined to enzyme classification. I have therefore used as a starting point ChEBI — Chemical Entities of Biological Interest. This is a “freely available dictionary of molecular entities focused on ‘small’ chemical compounds” published on the EMBL–EBI website, and is part of the ELIXIR Core Data Resources “a set of European data resources of fundamental importance to the wider life-science community and the long-term preservation of biological data”. ChEBI Definitions of Cofactor, Coenzyme and Prosthetic Group URL: https://www.ebi.ac.uk/chebi/searchId.do?chebiId=23357 ChEBI Name: cofactor ChEBI ID: CHEBI:23357 Definition: An organic molecule or ion (usually a metal ion) that is required by an enzyme for its activity. It may be attached either loosely (coenzyme) or tightly (prosthetic group). ChEBI Ontology: cofactor (CHEBI:23357) is a biochemical role (CHEBI:52206) A number of compounds are listed as having the role of cofactor, including metal ions such as chloride and organic molecules such as NAD (sub-classified as a coenzyme, below) and FAD (subclassified as a prosthetic group, below) URL: https://www.ebi.ac.uk/chebi/searchId.do?chebiId=CHEBI:23354 ChEBI Name: coenzyme ChEBI ID: CHEBI:23354 Definition: A low-molecular-weight, non-protein organic compound participating in enzymatic reactions as dissociable acceptor or donor of chemical groups or electrons. ChEBI Ontology: coenzyme (CHEBI:23354) is a cofactor (CHEBI:23357) A number of compounds are listed as having the role of coenzyme (ascorbic acid, coenzyme A, NAD etc.) all of which are organic compounds. URL: https://www.ebi.ac.uk/chebi/searchId.do?chebiId=CHEBI:26348 ChEBI Name: prosthetic group ChEBI ID: CHEBI:26348 Definition: A tightly bound, specific nonpolypeptide unit in a protein determining and involved in its biological activity. ChEBI Ontology: prosthetic group (CHEBI:26348) is a cofactor (CHEBI:23357) A number of compounds are listed as having the role of prosthetic groups (e.g. FAD, haem lipoic acid), but none of them are simple metal ions although they are not exclusively organic (e.g. metal-sulphur clusters). 
Ontological interpretation and comparison with Wikipedia Entry The ChEBI entry on ‘Cofactor’ reproduces the opening paragraph of the Wikipedia page on the subject, relevant sections of which I quote below: A cofactor is a non-protein chemical compound or metallic ion that is required for an enzyme's activity as a catalyst. Cofactors can be divided into two types: inorganic ions and complex organic molecules called coenzymes. Coenzymes are further divided into two types. The first is called a “prosthetic group”, which consists of a coenzyme that is tightly or even covalently, and permanently bound to a protein. The second type of coenzymes are called “cosubstrates”, and are transiently bound to the protein. Cosubstrates may be released from a protein at some point, and then rebind later. However, the nature of Wikipedia (anyone can edit it) is such that further down the page under the section on Classification, the following appears: Cofactors can be divided into two major groups: organic cofactors, such as flavin or haem; and inorganic cofactors, such as the metal ions Mg2+, Cu+, Mn2+ and iron-sulphur clusters. Organic cofactors are sometimes further divided into coenzymes and prosthetic groups. The section then goes on to say: [Bryce 1979] noted the confusion in the literature and the essentially arbitrary distinction made between prosthetic groups and coenzymes group and proposed… …cofactors were defined as an additional substance apart from protein and substrate that is required for enzyme activity and a prosthetic group as a substance that undergoes its whole catalytic cycle attached to a single enzyme molecule. However, the author could not arrive at a single all-encompassing definition of a “coenzyme” and proposed that this term be dropped from use in the literature. Thus, it would seem that there are (at least) four different ways of classifying cofactors. That from ChEBI does not differ greatly from the second Wikipedia scheme (rather than the one it quotes), but the first Wikipedia scheme — distinguishing between coenzymes and prosthetic groups — certainly does. Bryce’s suggestion to discard the term coenzyme has not been adopted, but his critique of the general nomenclature is a warning against dogmatism in this area. “Fools rush in where angels fear to tread.”
0.132807
99
science1694110
I think your explanation is correct. The expected value of the exponential distribution is: $$t \sim \lambda\,\text{e}^{-\lambda t} \implies \langle t \rangle = \int_0^\infty t \,\lambda\, \text{e}^{-\lambda t} \; \text{d}t = 1/\lambda.$$ For the exponential survival function, we have to identify the parameter $S$. Since $S$ is the fraction of individuals surviving after one year, we derive: $$S = \text{e}^{-\lambda} \implies \lambda = - \ln S.$$ Therefore, the life expectancy is simply: $$\langle t \rangle = -\frac{1}{\ln S}.$$ Because the probability of breeding in each year is independent, we can multiply the life expectancy by the probability that breeding happens in a one-year period. This gives us the final result: $$\langle \# \text{ breedings} \rangle = \langle t \rangle \cdot P(\text{breeding}) = -\frac{1}{\ln S}\ P(\text{breeding}). $$
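As a sanity check, here is a short sketch of the same calculation with a Monte Carlo comparison. The values of S and the breeding probability are made-up examples, and breeding is treated as occurring at a constant rate per year of life, consistent with the continuous-time approximation above.

```python
import numpy as np

S = 0.8          # assumed annual survival probability (example value)
p_breed = 0.5    # assumed probability of breeding per year (example value)

lam = -np.log(S)                      # from S = exp(-lambda)
life_expectancy = 1.0 / lam           # <t> = -1 / ln(S)
print(life_expectancy * p_breed)      # analytic expected number of breedings

# Monte Carlo check: exponential lifetimes, breedings at rate p_breed per year
rng = np.random.default_rng(0)
lifetimes = rng.exponential(scale=life_expectancy, size=200_000)
breedings = rng.poisson(lam=p_breed * lifetimes)
print(breedings.mean())               # should agree with the analytic value
```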
0.301144
99
science1694111
As you have correctly stated, the DNA cannot directly react with diphenylamine. First, it has to be cleaved so that the deoxyribose is exposed. In acidic solution and at high temperature, a depurination occurs several orders of magnitude faster than depyrimidination, because N-7 of the purine can be protonated and assist the cleavage. See this mechanism of the acid-catalyzed depurination:
0.272581
99
science1694112
Sometimes the probabilities of events are so low that we can neglect them for all practical purposes. A classical example in thermodynamics: if we have a gas in a container, there is a non-zero probability that all the molecules assemble in one half of this container, leaving vacuum in the other half. It never happens in practice - the probability of such an event is $\propto 2^{-N_A}$, and anyone observing a container would have to wait far longer than the age of the universe (which is itself much longer than our lifespan, the existence of humankind, and the existence of life) to see it. Thus, we can confidently treat it as a law, and claim that the sea never opened in front of Moses. The same goes for the acquisition of the cellular membrane: the event might have happened only once in a few billion years, and under very special conditions, which are not at all common nowadays. It is negligible for all practical purposes, to the precision that makes it possible to consider it a law.
0.534961
99
science1694113
There's a group of moths known as the "underwing moths" for their brilliantly coloured hindwings. I think these are used to startle predators when they get too close. They have brilliant red/orange hindwings with a black border. If you could open the wings fully, it would help with ID significantly. The genus is Catocala, which translates roughly to "beautiful lower one". For some moths of Austria, you can browse images here, which is how I found the genus name.
0.646977
99
science1694120
As a first approximation, I think your explanation makes sense. You can imagine the system having a spring, with the hydrolysis of ATP providing the energy to load the spring (store the energy in the myosin). Afterwards, this energy is released as mechanical work in the power stroke(s). However, this analogy will not take you far, because things get complicated, especially if you want to study this process from a physical perspective. In biological textbooks (such as the one you provided), you will find the whole cycle portrayed as going in one direction only (clockwise red arrows in your image). But because this is happening in the molecular world, most of the reactions/processes are actually reversible. In fact, in molecular motors (such as your example with myosin), thermal motion frequently leads to backward steps, especially when the muscle is under high load. This can nowadays be directly measured with various single-molecule techniques. The simplified biophysical models for molecular motors are ratchets that exploit thermal motion. They are in a way analogous to Maxwell's demon. But just as Maxwell's demon is not feasible because it violates the second law of thermodynamics, a molecular (Feynman–Smoluchowski) ratchet cannot generate macroscopic work solely by extracting thermal motion. And that is where the energy of ATP comes into play: the energy is used to keep the system out of equilibrium and to break the symmetry of motion in space and time (steps forward become more frequent than steps backward). My answer is far from self-sufficient or self-explanatory, but I hope it provides at least some starting keywords for your further research on the topic.
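To make the broken-symmetry point concrete, here is a toy stepping-motor simulation. It is not a model of actual myosin kinetics; the step size and probabilities are purely illustrative. Without a bias the motor diffuses with no net progress; with an "ATP-driven" bias, forward steps outnumber backward ones and the motor moves forward on average while still occasionally stepping back.

```python
import numpy as np

rng = np.random.default_rng(1)

def net_displacement(p_forward, n_steps=10_000, step_nm=8.0):
    """Sum of +/- step_nm steps; p_forward is the chance of a forward step."""
    steps = rng.choice([step_nm, -step_nm], size=n_steps,
                       p=[p_forward, 1.0 - p_forward])
    return steps.sum()

print(net_displacement(p_forward=0.5))  # pure thermal motion: drifts around zero
print(net_displacement(p_forward=0.7))  # biased stepping: steady net forward motion
```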
0.498893
99
science1694125
Cell lines can do wacky things Have a look at Zhou, 2019, which discusses the genome of HepG2 cells. These typically have 49 to 52 chromosomes... and many other interesting aberrations. Awortwe, 2014 commented on the irreproducibility of drug-herb interaction studies in cell lines; they note Caco-2 cells could vary 100-fold in their ability to transport mannitol. Cell lines are frequently subject to genetic instability associated with their immortalization from a tumor, followed by in vitro evolution that changes their behavior. (Fusenig, 2017) The old traditional cell lines are useful in some contexts - if you would like to find a protein that binds something you are interested in, for example - but they can't be counted on to react in a human way.
0.233233
99
science1694126
Yes is the short answer. These are known as silencing RNAs or interfering RNAs. The longer answer is that you would need much more RNA than you could easily administer to the epithelial cells of the mucosa and the other tissues that viruses like SARS-CoV-2 infect, so as to cover each individual cell. In addition, you would need to be able to deliver it to those tissues in a timely manner (inhalation is possible; several drugs are administered in that manner), have it enter the cells (essentially what the Pfizer/Moderna vaccine does) and not have it be degraded. These are not insurmountable problems - the Moderna team managed to do it for their much longer vaccine mRNA - but I don't know if the same base modifications that work for their vaccine will still work for interference/silencing.
0.187131
99
science1694127
Argentine ants. https://en.wikipedia.org/wiki/Argentine_ant Global "mega-colony": The absence of aggression within Argentine ant colonies was first reported in 1913 by Newell & Barber, who noted “…there is no apparent antagonism between separate colonies of its own kind”. [8] Later studies showed that these “supercolonies” extend across hundreds or thousands of kilometers in different parts of the introduced range, first reported in California in 2000,[9] then in Europe in 2002,[10] Japan in 2009,[11] and Australia in 2010.[12] Several subsequent studies used genetic, behavioral, and chemical analyses to show that introduced supercolonies on separate continents actually represent a single global supercolony.[11][13] The researchers stated that the "enormous extent of this population is paralleled only by human society", and had probably been spread and maintained by human travel. This is true only in the introduced range. I read that in Argentina, this ant species is like any other and different colonies compete. A mutation led to the strain of ant that has colonized the world because of its cooperativity. I was interested to read in the comment by @Polypipe Wrangler that fire ants from different colonies in Australia do not fight each other. Different fire ant colonies definitely fight each other in Florida.
0.358548
99
science1694128
The claim that "all flowers have a number of petals that is related to the Fibonacci sequence" is simply false. Many flowers have other numbers of petals. Consider, for example, dogwoods, which have a very clear four-petal form. Lots of other flowers have four or six. A nice catalog of some common examples can be found on this "Ontario wildflowers" site, which lists by petal count.
0.086145
99
science1694129
Another way to ask this question: How old is the most recent common ancestor of Pogona lizards and their closest relatives? Diporiphora is a sister clade of Pogona, as stated in Hugall et al. 2008.1 Figure 4 of the same paper suggests that Diporiphora and Pogona diverged around 12 million years ago (annotations mine): An ultrametric chronogram generated from the Bayesian combined data phylogeny (Fig. 2C), under penalized likelihood rate smoothing (PLRS; optimal smoothing factor 80). The Riversleigh Physignathus calibration discussed in the text is used (indicated on figure as equal to 21 Mya). Tree pruned to show Australasian group only. Markov chain Monte Carlo (MCMC) sampling 95% confidence interval shown for selected nodes. So, a reasonable estimate for the age of the Pogona genus is 12 million years. References Hugall AF, Foster R, Hutchinson M, Lee MSY. Phylogeny of Australasian agamid lizards based on nuclear and mitochondrial genes: implications for morphological evolution and biogeography, Biological Journal of the Linnean Society, Volume 93, Issue 2, February 2008, Pages 343–358.
0.057741
99
science1694130
This appears to be an oleander hawk-moth caterpillar (Daphnis nerii) shortly before pupating. Otherwise the caterpillar would be bright green in colour: Newly hatched oleander hawk-moth larvae are three to four millimeters in length, bright yellow, and have a black, elongated "horn" on the rear of the body. … As they get older, the larvae become green to brown with a large blue-and-white eyespot near the head and a yellow "horn" on the rear. […] Just before it pupates, the oleander hawk-moth larva becomes browner in colour.
0.672357
99
science1694132
The cytoplasm is like the ocean. When you talk about the ocean, do you include the fish? What about islands? Sometimes yes, sometimes no. You could use a cytosol/cytoplasm distinction, where cytosol is "just the liquid part outside the organelles" and cytoplasm includes all the fish, but context matters and I don't think it's actually necessary to have a distinction. If someone is talking about the "pH of the cytoplasm", you can assume they mean the liquid part, not that they're taking some weighted average of pH over all the different organelles plus the space outside of them. Same thing for ion concentrations. If someone is talking about the "cytoplasmic face" of a membrane, it's clear they mean the side that faces the liquid stuff in cells, even if you're talking about vesicles and organelles where all the membrane faces are "inside the cytoplasm" since the whole organelle is. However, if one were to "remove all the cytoplasm from a cell" you'd expect the organelles (minus the nucleus) to come along; after all, they are in the cytoplasm so they go where it goes. If you need to memorize a definition for a class, use what the teacher gives you. Otherwise, think critically about the context in which the word is used. I like your teacher's definition better, not because the two options are "liquid only" and "liquid plus organelles", but because organelles are in the cytoplasm, so sometimes when people say cytoplasm they mean the whole ocean including the contents. Your teacher's definition allows for both uses of the term; yours explicitly does not.
0.034149
99
science1694133
It's going to depend on what definition is useful in what context. If I was talking about an mRNA being translated in the cytoplasm, I would be saying it is absolutely not translated in the ER or the Golgi apparatus. (Most proteins translated in those places are on their way to being transported somewhere else; they will not end up free-floating in the cytosol the way proteins translated in the cytosol usually are.)
0.652656
99
science1694134
It depends on what is implied by over time in the question. A virus may become more deadly (or otherwise harmful) simply due to a random mutation. However, a virus does not specifically aim at harming the host - rather, the negative consequences for the host's health are a byproduct of the virus hijacking and killing the host cells. This can be, e.g., due to toxins generated during viral replication, due to new virions bursting out of a cell and thus destroying it, or because of an overreaction of the host's immune system. E.g., HIV preys on the immune CD4+ cells, whose count eventually drops below a critical level, making the host susceptible to opportunistic diseases. Thus, in the short run a virus may become more harmful/lethal as a byproduct of being more successful at replicating and propagating itself. However, in the long run such success harms the virus, since, as an obligate parasite, it cannot exist without a host. A reduced number of hosts means reduced possibilities for the virus to replicate. A virus that kills all of its hosts goes extinct. The viruses that continue to exist do so either because they have reached an endemic equilibrium with their host or because they can survive in a host of a different species, occasionally spilling into the human population (like Ebola).
0.365987
99
science1694135
We are dealing here with quantities differing by (at least) two orders of magnitude: the energy corresponding to 260 nm radiation is about 110 kcal/mol (here is a converter), while the stacking energies of the DNA double helix are of the order of 1 kcal/mol. The change in the absorption energy due to electron delocalization upon unstacking is thus smaller than 1 kcal/mol, and constitutes a negligible correction of less than 1% to the absorption energy. Update It is necessary to point out that: The absorption at 260 nm does not correspond to a specific electronic transition, but rather to the average of the transition frequencies of the different bases (240-270 nm). Thus, the actual position of the absorption peak is not necessarily at this wavelength, but varies depending on the DNA base composition. Moreover, the width of the peak is larger than any shift that the stacking energy could produce. The cited Wikipedia article (more precisely, its English version) is the only source I have seen so far where it is claimed that the position of the absorption peak does not change. In fact, 260 nm is not the position of the peak, but the standard wavelength at which the absorption is measured and calibrated. Indeed, the change in the absorption is so big that a small shift of the peak position does not matter.
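The 110 kcal/mol figure is just the photon energy at 260 nm expressed per mole; here is a quick sketch of the conversion using standard physical constants (nothing specific to DNA).

```python
# Photon energy at 260 nm, per mole, in kcal/mol
h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s
N_A = 6.022e23       # Avogadro's number, 1/mol
wavelength = 260e-9  # m

E_kcal_per_mol = N_A * h * c / wavelength / 4184.0
print(round(E_kcal_per_mol))  # ~110 kcal/mol, vs ~1 kcal/mol stacking energies
```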
0.712952
99
science1694136
"Arc" is the Activity-regulated cytoskeleton-associated protein. It's one of the "immediate early genes", which are genes whose expression changes very quickly in response to certain neuronal activity. They are thought to be important in mediating the changes in expression of other proteins that accompany learning and memory in neurons. This is not talking about any other "intelligence" or memory besides what happens in the nervous system. It's about the mechanisms by which the nervous system operates. The paper (Pastuzyn et al 2018) suggests that this protein originally came to the animal genome from a virus rather than evolving from other animal genes, and that it behaves in a virus-like fashion to transfer genetic material from one cell to a neighboring cell. That's definitely interesting from an evolutionary and molecular biology perspective, but it does not mean the body "uses viral proteins for intelligence". In an animal, these aren't "viral proteins" any more, they are animal proteins, built from DNA in animal cells and evolved alongside those animal cells. The paper is about Arc in fruit flies, though Arc is also found in vertebrates but seemingly from a different lineage; therefore the authors suggest that this has happened at least twice. It's certainly fascinating that such a rare event would happen more than once; I'm not sure if there is additional literature on this because it seems hard to believe, but I suppose it's one more element to how all the genetic variation in modern species evolved. Pastuzyn, E. D., Day, C. E., Kearns, R. B., Kyrke-Smith, M., Taibi, A. V., McCormick, J., ... & Shepherd, J. D. (2018). The neuronal gene arc encodes a repurposed retrotransposon gag protein that mediates intercellular RNA transfer. Cell, 172(1-2), 275-288.
0.314901
99
science1694137
Let me first point out what is harmful about electric current: alternating (ac) current in the range of 20-200 Hz is dangerous because (even at milliampere values) it may interfere with the oscillations of the heart, causing fibrillation and subsequent death if normal blood circulation is not restored within a short period of time. This is typically the cause of death in domestic accidents associated with electricity (leaving aside falling from a ladder when getting shocked). Direct (dc) current is harmful only at high values, such as 1 ampere and greater. These are rare in domestic conditions, but not uncommon in industrial settings. In this case the current flowing through the body simply paralyzes the muscles, leaving the person conscious but unable to release the wire. The harm results from the tissues being heated by the current, resulting in lethal burns. Touching an electric power line certainly involves the ac effect, and in some cases may involve the dc effect as well. If the animal simply died from heart fibrillation, the damage to its tissues due to the current is virtually non-existent, and it will decay normally. If the current was high enough to cause burns, then the current stops once the tissues along its path carbonize and stop conducting. Burned tissue is largely inorganic matter, which is of little interest to micro-organisms. However, anything that was not burned will decay normally.
0.625061
99
science1694138
Sinks and sources just refer to the sign of the local field potential measured with extracellular electrodes. Excitation involves positive charges entering cells, depolarizing them. When positive charges move into a cell, there is less positive charge outside the cell where the electrode is, so it becomes more negative. This is called a "sink" because the electrode records a negative deflection. Inhibition is a bit more complicated because shunting inhibition does not involve much net movement of ions, rather it involves opening chloride channels with reversal potential near the resting membrane potential, which prevents cation channels from contributing to depolarization (sodium coming in the cation channels is balanced by chloride coming in the inhibitory channels). Charge flows in circuits, so when you get negativity in one place you get positivity elsewhere and vice versa. If you stimulate a cell near its soma such that positive charge enters the cell there, the positive charge will flow into the rest of the cell but also attract negative charges towards it. The result is that membranes out on distal portions of the cell become slightly (because this is over a wide space) more positive inside, which in turn repels positive charge outside the membrane so an electrode near the fringes of a cell but not near the stimulation site will actually record a source. Generally in CSD recordings these types of sources dominate over inhibition, it's very difficult to record a "current source" and attribute it to inhibition. I attached a few references below; I think the Olejniczak paper is a good place to start. It's focused on EEG but will help understand how what goes on inside neurons affects what we measure outside of them. Nicholson & Freeman talk about the theory behind current source density and is probably a straightforward read if you're familiar with the physics/math of electricity. Mitzdorf is a longer review article that talks more about the information you can get from a CSD and talks about the difficulty in interpreting sources that I mention. Mitzdorf, U. (1985). Current source-density method and application in cat cerebral cortex: investigation of evoked potentials and EEG phenomena. Physiological reviews, 65(1), 37-100. Nicholson, C., & Freeman, J. A. (1975). Theory of current source-density analysis and determination of conductivity tensor for anuran cerebellum. Journal of neurophysiology, 38(2), 356-368. Olejniczak, P. (2006). Neurophysiologic basis of EEG. Journal of clinical neurophysiology, 23(3), 186-189.
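As a rough illustration of how sinks and sources are estimated in practice, here is a minimal one-dimensional CSD sketch: the CSD is taken as proportional to the negative second spatial derivative of the extracellular potential along a laminar probe. The LFP profile and conductivity value below are synthetic placeholders, not real data, and sign conventions vary between papers.

```python
import numpy as np

z = np.linspace(0.0, 1.5, 16)                   # electrode depths (mm)
lfp = -np.exp(-((z - 0.75) ** 2) / 0.02)        # synthetic negative deflection mid-probe

dz = z[1] - z[0]
sigma = 0.3                                     # assumed tissue conductivity (placeholder)
csd = -sigma * np.gradient(np.gradient(lfp, dz), dz)

# With this convention, negative CSD ~ current sink (e.g. excitatory input),
# positive CSD ~ return-current sources flanking it.
for depth, value in zip(z, csd):
    print(f"{depth:.2f} mm  CSD = {value: .2f}")
```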
0.604142
99
science1694139
Brains are not computers, and do not process information like computers. The trees from CS you are talking about are ways to run linear information stores through a central processing unit. Even with machines that can sort of do these operations in parallel, they are still very much series computations. Brains, on the other hand, are massively parallel, and information is encoded in a very high dimensional space, not linearly. There are no "bits" in a sequence to sort through. Information does not just move from "storage" to some "processing" part of the brain, it is constantly being processed at all stages in all places. When a memory is active, that information is linked to all the other related information physically because neurons are connected to each other. That is, when you think of a "cat", you might retrieve a memory of a specific cat you saw in a specific place because there is a physical connection between some of the same neurons that are active when you think of "cat" and the neurons encoding that engram of the specific cat memory. No "search" is necessary; every extremely high-dimensional brain state is followed by another extremely high-dimensional brain state. When you "search" your memory for cats, you are just biasing these future states to be those which involve neurons that were involved in the past during experiences of cats. I think the most attractive models for understanding how brains index information are attractor networks. There are in silico models and implementations of these sorts of networks, but without the parallel hardware of a biological brain they aren't particularly efficient.
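As a concrete toy version of the attractor-network idea, here is a minimal Hopfield-style sketch: a few random patterns are stored in Hebbian weights, and a degraded cue relaxes back to the nearest stored pattern with no explicit search. The sizes, patterns, and update scheme are arbitrary illustrative choices, not a model of any real brain circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
patterns = rng.choice([-1, 1], size=(3, n))   # three stored "memories"

# Hebbian weights: units active together in a pattern become coupled
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

# Cue: memory 0 with a quarter of its units flipped (a degraded trace)
cue = patterns[0].copy()
flipped = rng.choice(n, size=25, replace=False)
cue[flipped] *= -1

state = cue.copy()
for _ in range(10):                           # repeated network updates
    state = np.sign(W @ state)
    state[state == 0] = 1

print(np.mean(state == patterns[0]))          # typically ~1.0: the cue settles into memory 0
```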
0.139565
99
science1694140
Theoretically anything can happen, but you also have to take into account, firstly, the probability of such an incident (which is particularly low in this case) and, secondly, whether the creature can survive until or even after birth, so as to be counted as an organism with that kind of anomaly. For instance, the odds of trisomy 21 occurring in humans are not much greater than those of trisomy 11, but almost all fetuses with trisomy 11 are aborted and never make it into this world.
0.162716
99
science1694141
The only "complete" monosomy that is survivable in humans is Turner syndrome. So your source is a bit silly. http://en.wikipedia.org/wiki/Monosomy
0.631711
99
science1694142
Short answer - no, they don't grow once they are released from the cell. I like to think of it as a factory production process. The widgets coming out of the factory assembly line don't grow, they just get sent out. Sometimes the assembly line makes mistakes and a virus might be missing something (or most things) or might even be just an empty shell and so be smaller. Some viruses are contained in "bags" (enveloped viruses), so sometimes the bag might be defective instead, or even empty. Some viruses (like TMV) "self-assemble" with their protein coat automatically "growing/assembling" around their nucleic acid (RNA in this case), but that's a one-time process in the cell and doesn't continue once the virion is released.
0.188062
99
science1694143
Viruses do not grow. Instead, their components self-assemble inside of a cell to form complete new viruses, which are then released to go and infect more cells. Here is a fairly accessible review on the subject: Mechanisms of Virus Assembly. Basically, when the virus reproduces, it manufactures a bunch of parts. A typical virus may be thought of as having three types of part: The genome (RNA or DNA) that encodes for the virus Structural proteins ("SPs"), which form the packaging for the viral genome Non-structural proteins ("NSPs"), which handle operations within the cell such as replication, packaging, and suppression of host cell anti-viral responses. In the replication process, the virus makes a bunch of copies of each of these independently. The SPs and genome come together to form new viral particles, and the NSPs get left behind. Consider for example, this Sindbis virus genome: Bases 79..7635 encode a "non-structural polyprotein": a collection of four protein elements that assemble in various different configurations to build a "viral factory" out of membranes within the cell and then copy the genome. Bases 7681..11418 encode a "structural polyprotein" collecting together the five elements that form the capsid that self-assembles around the genome to make a new particle. Now, this is biology, so there are exceptions and blurry boundaries, but this covers the typical cases pretty well, and even in the non-typical cases I have never heard of a virus that "grew" in a meaningful way.
0.556593
99
science1694145
I agree with the answers given by @jakebeal and @Armand. However, the analogy between a newborn virus and a newborn baby, suggested by the OP, is not without its merits. Indeed, the virions released by a cell in which the virus has replicated are not always fully ready viruses: they still need to undergo a process called maturation. E.g., see here and here.
0.277777
99
science1694146
Both Prader-Willi and Angelman syndrome can also occur as a result of having both members of the chromosome 15 pair derived from 1 parent, a condition known as uniparental disomy. Both can also result from a structural abnormality of the imprinting center, known as an imprinting mutation. In addition, Angelman syndrome can be caused by a mutation in the gene that causes it; a comparable cause is not present in Prader-Willi syndrome since it results from abnormality in more than 1 gene. See details: https://thehealthbd.com/prader-willi-syndrome/
0.608473
99
science1694147
Hybrid corn IS a "Mendelian selection scheme". The issue here is more predictability and uniformity of crop growth and yield. Normally, plants are genetically varied and having different versions of each gene from mother and father usually increases the fitness of the plant (this is sometimes called "hybrid vigor"). When farmers replant with seeds from only their "best" plants, over the generations the plants become more uniform and predictable in that (good) aspect, but also become more and more inbred (having both copies of a gene be exactly the same instead of slightly different). This inbreeding reduces plant fitness (e.g. yield or disease resistance). With the discovery of genetics in the early 20th century, farmers realized they could get the benefits of hybrid vigor without the drawback of inbreeding. It's a two step process: separately breed two different varieties of a plant, each highly inbred, then produce seeds in a "seed factory" that have one variety as the "mother" and one variety as the "father". Each inbred parent passes on only the one version it has of each gene, but each (offspring) seed ends up with both different versions of each gene, one from each parent. The resulting hybrid seeds called "F1" are both genetically uniform AND show "hybrid vigor" because each has two different versions of every gene. They just don't breed true, as the different versions of each gene get mixed up again ("reassorted") in further generations. However, as long as the two original inbred parent varieties are propagated (remember, they remain genetically identical as they are inbred), the F1 hybrid offspring seeds can be recreated whenever desired. This naturally leads to a larger organization (like a company) maintaining the two parent varieties and continually producing reproducibly uniform F1 hybrid seeds for farmers to actually use.
0.303536
99
science1694148
It's purely and simply that there is no single answer. As in the linked paper, there's no "gay gene"; there is a group of identified genes that contribute, but not all of the variance seen in the population can be attributed to those genes. That is, you can have some or none of those genes and still be gay, or indeed all of the genes and not be gay. This could mean that there are other genes still to be identified as playing a role, or it could mean that environment, epigenetic factors (e.g. methylation - some of these seem to be heritable too!), expression levels etc. play roles in this trait, but as it is multi-factorial, we don't know the answer(s) yet. It may well be that we will never know, or that there is more than one answer for the trait.
0.459189
99
science1694149
I'd like to add to Armand's answer about the delivery method of DI RNA. The author has stated that in additional, as-yet-unpublished work, the team has now used nanoparticles as a [delivery vehicle][1] and observed that the virus declines by more than 95% in 12 hours. If the delivery efficiency of the DI RNA is guaranteed, the fraction of coinfected cells should not be a concern anymore. Resources [1]: https://www.sciencedaily.com/releases/2021/07/210706153039.htm
0.139591
99
science1694150
General remark One does not necessarily need to use a subtle statistical method, but one does need a good understanding of experimental design and statistical analysis in order to draw reliable conclusions from the data (or to know when not to draw such conclusions). It is for a good reason that statistics is a field of its own (just like biology) and that there exists a dedicated Stack Exchange community for it (by far more active than the biology one). PubMed is also full of articles explaining why this or that approach needs to be applied carefully - just try to search for spurious correlations and see how many articles come up. Correlations and non-correlations Closer to the point: the model in the OP assumes that a certain trait can be a consequence of the location (or another non-genetic factor, not necessarily hereditary) and of the genotype. The co-occurrence of certain genotypes and locations confounds the problem. Moreover, it is possible that this co-occurrence actually leads to real correlations between these two factors. One thing to look for is appropriate sampling procedures, especially the sample size. As an extreme example, let us consider the preference for wearing warm clothes in winter - is it a function of location (Moscow vs. Miami) or a trait coupled to the Y chromosome? We could do the analysis of variance proposed in the OP and easily show that there is no correlation with the presence of a Y chromosome... unless most of the individuals sampled in Moscow were men and most of those sampled in Miami were women, in which case we might erroneously attribute the preference for warm clothing to possessing a Y chromosome. It is clear what has gone wrong in the example above: the experimental design was not balanced, and the statistical analysis was not corrected for this lack of balance. One can thus expect improvements along these two axes: by designing experiments that allow disentangling undesired correlations, and by employing appropriate methods of analysis. Let me however add a few more remarks: not all correlations can be disentangled - sometimes creating an appropriate design is difficult or even impossible, especially when we are talking about complex genetic traits; and there may be a genuine correlation between the traits - e.g., the trait of interest and settling at a certain location may both be functions of genotype. In regard to the correlations that may arise after several generations, as the OP suggests, it is worth keeping in mind that such correlations require evolutionary timescales - they are a real issue when comparing Native Americans and Chinese, but less of a problem when comparing populations in New York and Detroit. Suggested Reading I suggest starting with the Wikipedia article on experimental design or an equivalent chapter in a biostatistics textbook. The statistics community Cross Validated is rather welcoming to biostatistics questions. Finally, there are many good statistics and biostatistics textbooks - the obstacle is usually not the availability of materials, but the level of math and abstraction.
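As a toy illustration of the Moscow/Miami example, here is a small simulation in which warm-clothing preference depends only on location, yet an unbalanced sample makes it look associated with carrying a Y chromosome; a balanced design removes the spurious correlation. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n_moscow_men, n_moscow_women, n_miami_men, n_miami_women):
    """Return rows of (has_Y_chromosome, wears_warm_clothes)."""
    rows = []
    for city, has_y, n in [("Moscow", 1, n_moscow_men), ("Moscow", 0, n_moscow_women),
                           ("Miami", 1, n_miami_men),   ("Miami", 0, n_miami_women)]:
        warm = rng.random(n) < (0.95 if city == "Moscow" else 0.05)  # location-only effect
        rows.append(np.column_stack([np.full(n, has_y), warm.astype(int)]))
    return np.concatenate(rows)

def correlation(data):
    return np.corrcoef(data[:, 0], data[:, 1])[0, 1]

print(correlation(sample(90, 10, 10, 90)))   # unbalanced sampling: large spurious "Y effect"
print(correlation(sample(50, 50, 50, 50)))   # balanced design: correlation near zero
```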
0.044745
99
science1694152
That is almost certainly the larva of a carpet beetle, likely the Furniture Carpet Beetle (Anthrenus flavipes). They are distinctively hairy and striped. (image from wikimedia) Carpet beetles are found world-wide and are a pest of homes, workplaces, museums - basically anywhere that textiles, paper, or foods are kept. They will happily eat wool, silk, fur, hair, bone and shell (e.g. tortoise shell, not sea-shell). Getting rid of them is difficult - you need to clean out any areas where they can live; this includes all your clothes (hot tumble-dry for 1 h, or keep in sealed bags for 2+ months), any natural-fiber carpets, cupboards, furniture - literally anywhere dust can accumulate. It is often easiest to get a pest-control expert in to deal with them.
0.057968
99
science1694153
A recombination map of the E. coli genome was recently published (several years after the question was asked).
0.298618
99
science1694154
Some papers that should help and provide further info in their refs: An oldy but goody: "Genetic dissection of complex traits: guidelines for interpreting and reporting linkage results" (1995) Eric Lander & Leonid Kruglyak Here's a discussion of techniques involved, with an emphasis on linkage analysis but discussing association studies as well: "Genetic linkage analysis in the age of whole-genome sequencing" (2015) Jurg Ott, Jing Wang, and Suzanne M. Leal A more recent example: Höglund, J., Rafati, N., Rask-Andersen, M. et al. Improved power and precision with whole genome sequencing data in genome-wide association studies of inflammatory biomarkers. Sci Rep 9, 16844 (2019).
0.215623
99
science1694155
For fruit flies one usually wants pure lineages for specific alleles, corresponding to distinct phenotypes. One cannot arrive at such a state by merely keeping the flies enclosed for many generations - in that case one quickly ends up in the situation of Hardy-Weinberg equilibrium, where all the alleles are present in constant proportions that do not change with time. Of course, in a finite population one expects that eventually one allele becomes fixed, but that may take a rather long time in the absence of selection. Thus, in practice one separates the desired phenotypes in every generation and makes them breed among themselves. In fact, this is pretty much what Mendel did with peas to obtain pure lineages - it is worth reviewing this chapter. Now, there is always a risk that the population is not 100% pure, just as there is always a possibility that other genes contribute to the trait of interest. Here is where the (bio)statistical analysis comes in: testing for significance of the effect and filtering out random effects.
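Here is a toy Wright–Fisher sketch of the point about drift versus deliberate selection; the population size, starting frequency, and selection strength are arbitrary illustrative values, not fly-specific parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def generations_until_fixed_or_lost(n=200, p=0.5, s=0.0):
    """Simulate allele frequency p in a population of n diploids; s is the selective advantage."""
    g = 0
    while 0.0 < p < 1.0:
        p_sel = p * (1 + s) / (1 + p * s)          # frequency after (genic) selection
        p = rng.binomial(2 * n, p_sel) / (2 * n)   # random sampling of 2n gametes (drift)
        g += 1
    return g, ("fixed" if p == 1.0 else "lost")

print(generations_until_fixed_or_lost(s=0.0))   # drift alone: typically hundreds of generations
print(generations_until_fixed_or_lost(s=0.5))   # strong selection each generation: much faster
```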
0.592327
99
science1694156
People have already done the inbreeding over sufficient generations for plenty of fast-breeding model organisms, and you just buy your initial stock from a company, with the understanding that some small amount of genetic drift may occur over time.
0.601111
99
science1694157
It seems they were attempting to correct for other behaviors/ecological niches that were correlated with brain size, but were not related to "urban tolerance". Their work includes much data on niches, behaviors, breeding strategies, etc, not just "urban tolerance". Presumably, related species would share similar brain-size effects from these confounders, so residual differences could be correlated more specifically with "urban tolerance". The authors comment: Because brain size correlates with these traits, failure to properly account for such additional drivers may mask the effect of brain size on urban tolerance. For example, if a large brain affects tolerance to urbanization by facilitating broader niches (Ducatez et al., 2015; Sol et al., 2016), including a measure of niche generalism in the model can block the effect of the brain on urban tolerance. They are trying to correlate a quite crude measure (brain size) with a specific behavior ("urban tolerance"), which as you suggest raises all sorts of methodological questions.
0.470502
99
science1694159
tl;dr I think you're right and your textbook is wrong. It would be interesting to know (a) what textbook this is (maybe it's the same as the one I quote below?) and (b) what your teacher says if you ask them this question. In general the energy content of a trophic level is roughly proportional to its biomass (although in going from terrestrial plants to animals we might expect the energy/gram to increase, since terrestrial plants contain a lot of energy-poor structural material). Both energy and biomass would also be proportional to numbers if individuals in each trophic level were approximately the same size (this can go in either direction; whales are much bigger than the plankton they eat, beavers are much smaller than the trees they eat ...) For oceanic phytoplankton and zooplankton respectively, some reasonable mass/energy conversions are phytoplankton: 2-3 calories per milligram (cal/mg) dry weight (Platt and Irwin 1973); zooplankton: 3-9 cal/mg dry weight (Davis 1993). (ZP are about 3x more energy-rich than PP.) In my view energy pyramids as well as number pyramids can be inverted, despite what this open-source biology textbook (ref below) says: Pyramids of energy are always upright, since energy is lost at each trophic level; an ecosystem without sufficient primary productivity cannot be supported. As detailed below, I think this statement is wrong. The energy loss tends to make energy pyramids a little bit less top-heavy than biomass pyramids, but they can be inverted for exactly the same reason that biomass pyramids can. The book shows an inverted biomass pyramid: And explains it as follows: However, the phytoplankton in the English Channel example make up less biomass than the primary consumers, the zooplankton. As with inverted pyramids of numbers, the inverted biomass pyramid is not due to a lack of productivity from the primary producers, but results from the high turnover rate of the phytoplankton. The phytoplankton are consumed rapidly by the primary consumers, which minimizes their biomass at any particular point in time. However, since phytoplankton reproduce quickly, they are able to support the rest of the ecosystem. This argument should apply equally well to energy. In other words, the amount of biomass or energy present in the lower trophic level (phytoplankton) at any moment in time (also called the stock or standing stock) is less than the energy present in the upper trophic level (zooplankton), but the flow of energy is constant (minus losses due to conversion inefficiency). In particular, if we take the English Channel values (4 g dry mass/m^2 PP; 21 g/m^2 ZP) and convert (g/m^2 * (1000*cal/mg) = cal/m^2) we get at most 3*4*1000 = 12000 cal/m^2 for PP and at least 3*21*1000 = 63000 cal/m^2 for ZP, so the pyramid is still inverted. (In fact, since ZP are generally more energy-rich than PP we would expect the energy pyramid to be more top-heavy than the biomass PP; here I have used the most conservative numbers possible, the top end of the range for PP content and the bottom for ZP content, which happen to be the same.) For example, suppose individual phytoplankters (yes, that's the word for an individual organism in the phytoplankton) and zooplankters are the same size/contain the same amount of energy (not true, but an OK simplification here), and that there are 10,000 phytoplankters and 90,000 zooplankters per cubic meter of water. The phytoplankters reproduce so fast that each ZP can eat 10 PP over the course of its lifetime without depleting the PP population. 
In one ZP generation (= 10 PP generations), 900,000 units of energy enter the PP (through photosynthesis) and are eaten by the ZP; with the usual ~10% trophic efficiency (90% lost), that's enough to maintain 90,000 zooplankters. Ecosystems. (2021, March 6). Retrieved September 3, 2021, from https://bio.libretexts.org/@go/page/12640 Davis, Nancy D. “Caloric Content of Oceanic Zooplankton and Fishes for Studies of Salmonid Food Habits and Their Ecologically Related Species.” Fisheries Research Institute, University of Washington, 1993. https://digital.lib.washington.edu/researchworks/bitstream/handle/1773/4192/9312.pdf?sequence=1. Platt, Trevor, and Brian Irwin. “Caloric Content of Phytoplankton.” Limnology and Oceanography 18, no. 2 (1973): 306–10. https://doi.org/10.4319/lo.1973.18.2.0306.
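For completeness, here is a short sketch of the English Channel standing-stock conversion done above; the 3 cal/mg figure is the deliberately conservative value used in the text.

```python
# Standing-stock energy: dry biomass (g/m^2) x 1000 mg/g x energy density (cal/mg)
pp_biomass_g_m2 = 4.0     # phytoplankton standing stock
zp_biomass_g_m2 = 21.0    # zooplankton standing stock
cal_per_mg = 3.0          # conservative energy density applied to both levels

pp_energy = pp_biomass_g_m2 * 1000 * cal_per_mg   # 12,000 cal/m^2
zp_energy = zp_biomass_g_m2 * 1000 * cal_per_mg   # 63,000 cal/m^2
print(pp_energy, zp_energy)   # the standing-stock energy pyramid is still inverted
```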
0.215545
99
science1694160
Argiope is a cosmopolitan genus (the link is for the genus, not the individual species you linked). The Argiope species A. bruennichi and A. lobata are found in Europe in particular. Based on your photos I think this is likely to be A. bruennichi rather than A. lobata, as it has strong yellow and black bands and lacks the lobed edge of the abdomen of A. lobata.
0.341576
99
science1694161
From the Google search I did, I guess they are green leafhoppers. To find out more about the leafhopper in the second half, this link should help: go to the site and search for green leafhoppers in the search bar. There are a large number of leafhoppers in the family Cicadellidae, and I believe the ones in the pictures differ. If my findings are correct, Cicadella viridis is the scientific name of the leafhopper in the first half, and the second one comes under Nephotettix sp. Hope this helps!
0.333261
99
science1694162
I think I got it. If I may, I would like to answer this question, since it could be helpful to anyone interested in this field of research. Regarding the first question: "how any of those arguments can justify the evolution of aging" Weismann presented a notion of group selection as a possible answer to biological aging, proposing that such a mechanism becomes advantageous for the species by removing the most geriatric individuals. How those arguments can justify the evolution of aging: it rests on the hypothesis that if all individuals were immortal, they would be competing with the younger ones for the available resources. if individuals did not die they would soon multiply inordinately and would interfere with each other's healthy existence. (Weismann, 1891) Thus, by natural selection the somatic cells of the organism would have come to lose their capacity for unlimited survival, and ageing of the organism as a whole would have appeared. (Kirkwood and Cremer, 1982) Regarding the second question: "How is the concept of panmixia related to the evolution of aging?" Essentially, it is used to justify the points above concerning somatic immortality, that is, the cause of cellular aging. When the concept of panmixia is applied to the post-reproductive period, aging can arise, as this is a period that does not contribute to the evolution of the species. It can thus be considered an inert physiological interval, such that the potential for somatic immortality starts to disappear. [Weismann speaking about reproduction] As soon as the individual has performed its share in this work of compensation, it ceases to be of any value to the species, it has fulfilled its duty and may die. (Weismann, 1891; p. 10) As soon as natural selection ceases to operate upon any character, structural or functional, it begins to disappear. As soon, therefore, as the immortality of somatic cells became useless they would begin to lose this attribute. (Weismann, 1891; p. 141) It is also important to mention, as a side note, that for Weismann natural death arises from the absence of cell renewal, which causes wear of the histological tissue; the concept of panmixia thus justifies the limited number of cell divisions. References Kirkwood, T. B. L., & Cremer, T. (1982). Cytogerontology since 1881: A reappraisal of August Weismann and a review of modern progress. Human Genetics, 60(2), pp. 101–121. https://doi.org/10.1007/BF00569695 Weismann, A. (1891). Essays upon heredity and kindred biological problems. Volume I. Oxford: Clarendon Press.
0.150382
99
science1694163
I think that "morph" is a sufficiently generic term that it would cover environmental as well as genetic variation. A cursory Google search suggests that this is common practice in zoology: a paper on lizards, one on ants, another on ants. This paper discusses the phenomenon in general using the term "polyphenism", in the context of the phenotypic plasticity literature. That last link is to a review paper titled "Polyphenism in insects" (Simpson et al. 2011), which uses the term "morph" extensively. This suggests to me that this is a well-known concept in entomology and no one will be angry/surprised when you use this term.
0.399567
99