Hello! I have been using Salmon to quantify RNA-seq of tumoral tissues using a quasi index of 161 transcripts, part of them being endogenous retrovirus sequences. Here is the command info for the quantification of a sample:

    {
        "salmon_version": "0.10.2",
        "index": "indexes/DB7",
        "libType": "A",
        "mates1": "output/trimmed/3963279a-4960-49a2-936a-a13bb4a7dde5/trimmed1.fastq",
        "mates2": "output/trimmed/3963279a-4960-49a2-936a-a13bb4a7dde5/trimmed2.fastq",
        "threads": "8",
        "numBootstraps": "100",
        "seqBias": [],
        "gcBias": [],
        "writeMappings": "bam/3963279a-4960-49a2-936a-a13bb4a7dde5",
        "output": "output/HSNC/salmon/3963279a-4960-49a2-936a-a13bb4a7dde5",
        "auxDir": "aux_info"
    }

I've also used the "--writeMappings" argument for some samples and, with a combination of samtools view, sort, and index, created sorted BAM files in order to visualize them in IGView (I also created with IGView a pseudo-genome based on my set of transcripts). However, for some transcripts there are big differences between the observed coverage and the actual quantification computed by Salmon. I've read the [documentation][1] about that and understand that the mappings are computed before the quantification, so they're bound to be different. However, for some transcripts, particularly a family of retroviral envelopes (the K family), there are sometimes huge gaps. For some transcripts, the quantification gives a NumReads of 0 while hundreds or thousands of reads can be observed in IGView. The documentation says that the reads in the mapping can be incompatible with the inferred library type, since the mapping is done before filtering; however, my lib_format_counts.json file shows a compatible fragment ratio of 100%, so I don't think that's the problem:

    {
        "read_files": "( output/trimmed/3963279a-4960-49a2-936a-a13bb4a7dde5/trimmed1.fastq, output/trimmed/3963279a-4960-49a2-936a-a13bb4a7dde5/trimmed2.fastq )",
        "expected_format": "IU",
        "compatible_fragment_ratio": 1.0,
        "num_compatible_fragments": 2737879,
        "num_assigned_fragments": 2737879,
        "num_frags_with_consistent_mappings": 1714617,
        "num_frags_with_inconsistent_or_orphan_mappings": 1023262,
        "strand_mapping_bias": 0.4979922629951762,
        "MSF": 0,
        "OSF": 0,
        "ISF": 860751,
        "MSR": 0,
        "OSR": 0,
        "ISR": 853866,
        "SF": 526731,
        "SR": 496531,
        "MU": 0,
        "OU": 0,
        "IU": 0,
        "U": 0
    }

I've run RSEM on these samples for comparison purposes and the RSEM quantification roughly matches the one done by Salmon, so I don't think the quantification is at fault here. I thought that maybe, since this affects mostly an entire family of retroviruses with often homologous sequences, during the mapping phase the reads from one sequence are distributed to all similar sequences, and then this is corrected afterwards? **My question is**, how can I validate or invalidate this hypothesis? Or is there another reason that could explain this incoherence between the mappings and the quantification? Thanks in advance!

[1]: https://salmon.readthedocs.io/en/latest/salmon.html
Hi [KDA](https://www.biostars.org/u/47732/), Actually, there isn't much to add to [h.mon](https://www.biostars.org/u/6093/)'s comment --- so I'd suggest that maybe he move it from a comment to an answer. Basically, having transcripts with many mapping reads but large gaps in coverage be assigned abundance close to 0, especially when there are other transcripts _without_ gaps in coverage explaining the same reads, is exactly what you would expect. Because of shared sequence, many reads will multi-map to transcripts that turn out to be poor explanations for these reads in light of other evidence. This is exactly what the statistical inference procedure in salmon (and RSEM etc.) is meant to resolve. You could try and track down some of the specific cases. For example, pick a read that maps to one of these transcripts (the ones with many multi-mapping reads, but with gaps in coverage), and then look at the other transcripts where it maps. Find one that has high abundance. Does that transcript look to have a more well-behaved and expected coverage profile? Another quick thing you could try is to find these transcripts that get assigned 0 or near-0 abundance, and remove them from the reference. If you re-quantify without them, do you explain approximately the same number of reads? If so, that means that the same number of sequenced fragments could be explained without these transcripts, and so the assignment of a near-0 abundance to them may be reasonable.
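One way to track a specific case at the command line, sketched below: `mappings.bam` stands for a sorted BAM made from the `--writeMappings` output, `READNAME` is a placeholder for the read you picked, and `quant.sf` is salmon's standard per-transcript output in the quant directory:

    # list every transcript this read aligns to (column 3 of the SAM records)
    samtools view mappings.bam | awk '$1 == "READNAME" {print $3}' | sort -u > hits.txt

    # pull salmon's estimated abundance for each of those transcripts
    grep -wF -f hits.txt quant.sf

If one of the hits has high NumReads and a well-behaved coverage profile while the others sit near 0, that is the multi-mapping resolution described above in action.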
biostars
{"uid": 325916, "view_count": 3976, "vote_count": 2}
Hello everyone, I'm comparing 3 sets of variants from 3 different NGS pipeline runs, where all 3 operate on a common set of 150 samples. I see a lot of difference in the variants found in each run. The three runs are structured as follows: 1. The 150 samples are run as a single batch 2. The 150 samples are run as part of a larger cohort (total cohort size = 250 samples, say) 3. The 150 samples are split into 5 batches of 30 samples each For #2, GATK SelectVariants is used to extract variants found in the 150 samples. For #3, CombineVariants is used to combine the VCF files. When I look at a venn diagram of these 3 sets, I see only around a 40% overlap in variants. For reference, 100% is the set of all unique variants discovered across all 3 runs. To exclude pipeline quirks (AKA "this is the default behavior"), I compared 2 runs that were run 6 months apart on the exact same 25+ samples, and the variants discovered were identical. So, we can confidently say that only the cohort size difference could have caused this gap in variant discovery. Can we discuss what could be the case here please? I wish to understand why I see 3/5 of the dataset not being called in at least one of the runs. EDIT: These 3 runs are only computational NGS pipeline runs, they're not sequencing runs. In other words, I'm using the same BAM files across the board.
Correct me if I misunderstood your question, but it seems like you are splitting or combining samples into various sized groups and then running those through GATK. From the supplemental material of GATK on haplotype calling: > Whereas the initial phase of the algorithm is run per sample, the second stage combines the genotype likelihoods over all samples in order to determine the most likely alternate allele frequency in the cohort. The likelihood for a given set of genotype assignments at a given frequency is simply the product of the genotype likelihoods for each sample given that sample’s assigned genotype (Box 1). We then apply a population genetic prior to the allele frequency likelihoods based on θ, the population specific heterozygosity, and choose the most likely allele frequency and associated genotype assignments. The variant quality score for a polymorphic call is given as -10*log10(probability that the site is actually monomorphic). Most variant callers use some kind of Bayesian modeling for within-group variation, since being able to model within-group variation significantly increases the likelihood that the remaining variation is due to inter-group variation (usually treatment vs control, tumor vs normal in most experimental designs). Splitting the samples into artificial cohorts of sizes that are not true to the actual data will affect the results of this calculation substantially.
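If the goal is to make calls comparable across differently sized batches, one common remedy is GATK's GVCF workflow: call each sample independently, then joint-genotype the whole fixed cohort once, so the cohort-level allele-frequency step always sees the same set of samples. A sketch with placeholder file names (not necessarily what the pipeline in question used):

    # per-sample calling; batch composition no longer matters at this stage
    java -jar GenomeAnalysisTK.jar -T HaplotypeCaller -R ref.fasta \
        -I sample1.bam --emitRefConfidence GVCF -o sample1.g.vcf

    # a single joint-genotyping run over the full, fixed cohort
    java -jar GenomeAnalysisTK.jar -T GenotypeGVCFs -R ref.fasta \
        -V sample1.g.vcf -V sample2.g.vcf -o cohort.vcf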
biostars
{"uid": 230719, "view_count": 1568, "vote_count": 1}
Hi guys, I am new to the forum, so I'll come directly to my question/problem. I want to write a program that counts patterns like this. I have these sequences as an example:

> - A: AAAAAAAAA
> - B: AAAAACAAG
> - C: GAAAACGAA
> - D: AAAAAAAAT

A pattern is a group of four letters, one from each of the four sequences. As an example, the first pattern would be "AAGA". Now I wonder how many times that occurs across my entire set of sequences. So far I have only found programs that look for a given pattern. Does anyone have an idea how to program this, or do you already know where this has been done? I'm very thankful for your help! Best regards
What you are looking for is 'kmer frequency'. There are lots of pre-existing tools to calculate this. Our very own Alex Reynolds has this github repository, for instance: https://github.com/alexpreynolds/kmer-counter Kmer frequency will give you the number of occurrences of ALL strings `k` letters long. If you're only interested in particular kmer occurrences, you can filter the data you get back. You may even be able to specify which kmers you want counted specifically. The above is not a python solution, but there are plenty out there that are, which you should be able to find easily now that you have the key terminology. [Here's][1] a tutorial you could follow if you wanted to code something yourself, though. [1]: http://claresloggett.github.io/python_workshops/kmer_counting.html
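For example, with jellyfish (one of those pre-existing counters), counting all 4-mers in a FASTA file called `seqs.fa` looks roughly like this; note that `-C` counts a k-mer together with its reverse complement, so drop it if strand matters:

    # count 4-mers into a binary hash
    jellyfish count -m 4 -s 100M -C -o mers.jf seqs.fa

    # dump "kmer count" pairs and show the most frequent ones
    jellyfish dump -c mers.jf | sort -k2,2nr | head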
biostars
{"uid": 337379, "view_count": 4765, "vote_count": 2}
Dear all, referring to batch correction methods for scRNA-seq, would you have any preferences and/or comments on the possible choices? Among them: -- MNNCorrect, as outlined in the simpleSingleCell workflows: https://bioconductor.org/packages/release/workflows/html/simpleSingleCell.html -- ZINB-WaVE: https://bioconductor.org/packages/release/bioc/html/zinbwave.html -- HARMONY: https://www.biorxiv.org/content/10.1101/461954v2 -- SCTransform: https://satijalab.org/seurat/v3.0/integration.html thanks a lot, bogdan
From experience, SCTransform does not perform well unless the majority of the cells are of the same type. It will force true unique populations together with a heavy hand, whereas MNN is much more orthogonal in its changes. Seurat even has a wrapper around [fastMNN](https://htmlpreview.github.io/?https://github.com/satijalab/seurat.wrappers/blob/master/docs/fast_mnn.html). Haven't tried the other options though, so can't speak to them.
biostars
{"uid": 401404, "view_count": 4644, "vote_count": 3}
Hi, I have a data frame with multiple columns indicating SNP ID, chromosome number, and position of the SNP (BP):

    df1
          SNP CHR BP
    2   2  rs9391724    6  31320795   5.41323   2  3.83E-07  6.417103
    3   3  rs147949474  4  100738958  5.38602   3  5.74E-07  6.241012
    4   4  rs3819285    6  31322742   5.32917   4  7.65E-07  6.116073
    5   5  rs116548975  6  31320833   5.29763   5  9.57E-07  6.019163
    6   6  rs138441284  4  100338276  5.26365   6  1.15E-06  5.939982
    7   7  rs9391848    6  31320145   5.24588   7  1.34E-06  5.873035
    8   8  rs181964803  6  31319900   5.1553    8  1.53E-06  5.815043
    9   9  rs1377457    11 61144652   5.06155   9  1.72E-06  5.763891
    10  10 rs9266070    6  31319618   5.03007   10 1.91E-06  5.718133
    11  11 rs28835675   19 6807762    4.99524   11 2.11E-06  5.676741
    12  12 rs9266066    6  31319525   4.98046   12 2.30E-06  5.638952
    13  13 rs79215153   4  100398008  4.97751   13 2.49E-06  5.60419
    14  14 rs368752130  6  31320017   4.90517   14 2.68E-06  5.572005
    15  15 rs28873729   19 6807814    4.86929   15 2.87E-06  5.542042
    16  16 rs5771860    22 49203073   4.8598    16 3.06E-06  5.514013
    17  17 6:32451822   6  32451822   4.85825   17 3.25E-06  5.487684
    18  18 rs12791961   11 61152028   4.80564   18 3.44E-06  5.462861
    19  19 rs35957957   6  31320064   4.77551   19 3.64E-06  5.43938
    20  20 rs10897155   11 61141164   4.77384   20 3.83E-06  5.417103
    ........

Then I have another data frame with various gene names, with the start and end of each gene indicated:

    df2
        gene    CHR. start     end
    2   ADCY9   16   4003388   4166186
    3   ADORA2B 17   15848231  15879060
    4   ATP2B4  1    203595689 203713209
    5   C6      5    41142336  41261540
    6   CD36    7    79998891  80308593
    7   CD40LG  X    135730352 135742549
    8   CDH13   16   82660408  83830204
    9   CFTR    7    117105838 117356025
    10  CR1     1    207669492 207813992
    .......

I want to subset the first data frame and keep only the SNPs that fall within the gene coordinates given by the rows of the second data frame (this has to be done for each gene). I wrote something like this, but the process is taking a long time, so I suppose there is something wrong:

    for(i in 1:nrow(df1))
      for(i in 1:nrow(df2)){
        subset(df1, df1$BP > df2$start_position & df1$BP < df2$end_position)
      }

Any help highly appreciated, thanks
We can use ***data.table::foverlaps***, see example: # example input snp <- read.table(text = "snp chr bp snp1 10 111 snp2 11 222 snp3 12 333 snp4 Y 444 ", header = TRUE, stringsAsFactors = FALSE) gene <- read.table(text = "gene chr start end gene1 1 10 200 gene2 11 100 300 gene3 12 300 350 gene4 X 1 100 ", header = TRUE, stringsAsFactors = FALSE) library(data.table) # both genes and snps must have the same keys. Adding bp as start and end columns. snp$start <- snp$end <- snp$bp # now convert to data.table object with the same keys. setDT(snp, key = c("chr", "start", "end")) setDT(gene, key = c("chr", "start", "end")) # and merge on overlap foverlaps(snp, gene, type = "within", nomatch = 0L) # chr gene start end snp bp i.end i.start # 1: 11 gene2 100 300 snp2 222 222 222 # 2: 12 gene3 300 350 snp3 333 333 333
biostars
{"uid": 339679, "view_count": 1870, "vote_count": 3}
Hi all, For GWAS and eQTL analyses, many covariates are typically included: usually known covariates like gender, age, etc., and also unknown covariates like PCA dimensions. The problem is that some of these covariates might not be independent and can have a relatively high correlation. Does anybody know what a common cut-off value for correlation is, or what to do with non-independent covariates like age and PCA dimensions?
I prefer to avoid using highly correlated covariates as they can often be redundant. My approach is to use the following procedure: 1. Select the (non-PCA) covariates that you are certainly including in the analysis. 2. Perform PCA analysis on the data with the covariates selected in step 1 regressed out. 3. Note that the top PCs from step 2 are guaranteed to be orthogonal to the covariates selected in step 1. 4. Run the eQTL analysis with the covariates from step 1 and a few top PCs from step 2.
biostars
{"uid": 181838, "view_count": 1408, "vote_count": 1}
Hi, to evaluate genome assemblers I would like to simulate PacBio HiFi reads. However, there seems to be no read simulator available that has a HiFi mode (at least I did not find anything via Google; if someone can point me to something, it would be amazing). So I would just write my own simple read simulator for starters, and I was thinking about just adding random indels and substitutions at a 1% rate. This would roughly match the minimum Q20 that PacBio defines HiFi reads by. Is this a viable approach, or would I be missing some important characteristic?
You could try something like I did here: https://github.com/chhylp123/hifiasm/issues/33
biostars
{"uid": 491415, "view_count": 879, "vote_count": 1}
Hi, I have output from snp-dists (https://github.com/tseemann/snp-dists) in molten format, e.g.: seq1 seq2 1 seq1 seq3 2 seq2 seq1 1 seq2 seq3 3 seq3 seq1 2 seq3 seq2 3 The third column gives the number of SNPs between the pair of sequences given in columns 1 and 2. As you can see, these values are duplicated, as it shows both the combination seq1 seq2 and seq2 seq1. How can I (in R or bash preferably) remove the duplicate values?
Using **awk**: $ awk '!(seen[$1,$2]++ || seen[$2,$1]++)' test.txt seq1 seq2 1 seq1 seq3 2 seq2 seq3 3 Using **R**: # example data x <- read.table(text = "seq1 seq2 1 seq1 seq3 2 seq2 seq1 1 seq2 seq3 3 seq3 seq1 2 seq3 seq2 3") # sort column values, then get unique unique(data.frame(c1 = pmin(x$V1, x$V2), c2 = pmax(x$V1, x$V2), value = x$V3)) # c1 c2 value # 1 seq1 seq2 1 # 2 seq1 seq3 2 # 4 seq2 seq3 3 Using **R** again, a bit simpler and scales better when we have more than 2 columns, ([Related StackOverflow post](https://stackoverflow.com/q/9028369/680068)): x[ !duplicated(apply(x[, 1:2], 1, sort), MARGIN = 2), ]
biostars
{"uid": 468079, "view_count": 722, "vote_count": 1}
Hi, I am trying to do differential expression based on RNA-seq data. I only have one variable, which is the timepoint. I want to analyze gene expression relative to the 0 h time point. I already set up the contrasts. Now, I am trying to set up the design matrix. What is the difference between design <- model.matrix(~timepoint) and design <- model.matrix(~0 + timepoint). There is a big difference in the results.
The clue here is to look at the coefficients, or columns, derived in the design matrix. Your intercept model (`~timepoint`) will create a coefficient that describes the difference between the two levels of your `timepoint` variable. Your non-intercept model (`~0 + timepoint`) will create a coefficient for each level. Here's an example using your non-intercept model:

    library(tidyverse)
    library(limma)

    # Make a Phenotype Table
    foo <- data.frame(SampleType = paste0(rep(c("A","B"), each = 3)),
                      Reps = rep(1:3, 2)) %>%
      mutate(Names = paste0(SampleType,Reps))

    # Make some fake gene expression data
    genes.foo <- matrix(rnorm(6*100),ncol = 6) %>%
      `rownames<-`(paste0("Gene_",1:100)) %>%
      `colnames<-`(foo$Names)

    # non-intercept model
    model.foo <- model.matrix(~0 + SampleType, data = foo) %>%
      `colnames<-`(gsub("SampleType","",colnames(.)))

    model.foo
      A B
    1 1 0
    2 1 0
    3 1 0
    4 0 1
    5 0 1
    6 0 1

    conts.foo <- c("A_Vs_B" = "A-B")

    contmap.foo <- makeContrasts(contrasts = conts.foo, levels = colnames(model.foo)) %>%
      `colnames<-`(names(conts.foo))

    fit.foo <- lmFit(genes.foo, model.foo) %>%
      contrasts.fit(contmap.foo) %>%
      eBayes

    > fit.foo$coefficients %>% head
           Contrasts
                A_Vs_B
    Gene_1 -0.6722254
    Gene_2 -1.1571439
    Gene_3 -0.1021150
    Gene_4  0.8687436
    Gene_5 -0.9600460
    Gene_6  1.3139228

    topTable(fit.foo, coef = "A_Vs_B", number = Inf, p.value = 0.05, lfc = log2(1.5))

Alternatively, here's your intercept model:

    model.foo <- model.matrix(~SampleType, data = foo)

    fit.foo <- lmFit(genes.foo, model.foo) %>%
      eBayes

    > model.foo
      (Intercept) SampleTypeB
    1           1           0
    2           1           0
    3           1           0
    4           1           1
    5           1           1
    6           1           1

    # Here SampleTypeB is actually the difference relative to the first level (SampleTypeA)
    > fit.foo$coefficients %>% head
           (Intercept) SampleTypeB
    Gene_1  -0.2920748   0.6722254
    Gene_2  -1.0356549   1.1571439
    Gene_3   0.2477702   0.1021150
    Gene_4   0.5016585  -0.8687436
    Gene_5   0.3153554   0.9600460
    Gene_6   0.2836491  -1.3139228

    topTable(fit.foo, coef = "SampleTypeB", number = Inf, p.value = 0.05, lfc = log2(1.5))

edit: Changed the data frame initialisation.
biostars
{"uid": 320808, "view_count": 5441, "vote_count": 2}
Hello, Does cellranger count require index fastqs? The doc specifies the output directory of cellranger mkfastq (or csv options); however, I have worked with the R1/R2 fastqs generated by another lab without issue. Thanks. -Todd
You have likely [seen this page][1] on 10x support site. Sample index files are optional but you do need to have your datafiles named in a specific way (`bcl2fastq`). See section on `My FASTQs are not named like any of the above examples` at end of the page linked. [1]: https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/using/fastq-input
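For example, a hypothetical rename of lab-provided pairs into the `[Sample]_S1_L00X_R[12]_001.fastq.gz` pattern that `cellranger count` recognizes (sample name and paths are placeholders):

    mv mysample_R1.fastq.gz fastqs/mysample_S1_L001_R1_001.fastq.gz
    mv mysample_R2.fastq.gz fastqs/mysample_S1_L001_R2_001.fastq.gz

    cellranger count --id=run1 --sample=mysample \
        --fastqs=fastqs --transcriptome=/path/to/refdata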
biostars
{"uid": 9467686, "view_count": 1769, "vote_count": 1}
I did a reciprocal BLAST search to identify homologous genes (orthologs and paralogs) from 2 different species (A, B) in the genome of species C. What I'm trying to find out is whether e-values of homologs from species A are significantly higher than those from species B, and whether A is more likely to have orthologs/paralogs in C. Is that even possible, or does it make any sense? How do I calculate that? Another problem is that I also have E-values of 0 (< 2.225074E-308, I believe) and I don't know how to deal with them in a statistical evaluation (they are obviously the best hits). I am rather new to bioinformatics ... and please excuse my English. Thanks
As Jean-Karim pointed out, e-values are not comparable between BLAST searches of two different databases. Bitscore won't help you either, because this score also depends on the size of the database. **Solution 1 (more accurate):** Instead of playing with BLAST scores by yourself, just use the [InParanoid][1] software, which finds orthologs and paralogs between two FASTA files (A and C). InParanoid is based on Reciprocal Best BLAST Hit, but it does a lot of things for you. As a result, you get a list of all orthologs and paralogs and, most importantly, a confidence value for each orthologous/paralogous pair. In this way, you can apply a t-test (if values are normally distributed) or Mann-Whitney test (if they are not). **Solution 2 (primitive, questionable):** If you don't want to use InParanoid, just simple RBH, then you need to tell BLAST that the search space is equal among BLASTs of the same query to multiple databases. Then both E-values and bitscores will be comparable among all results because the search space will be consistent. You do this by simply using the `-dbsize` flag when running BLAST locally. In my opinion, you can use the average of the sequences in A, B and C. For example, if A has 4000 sequences, B has 3000 and C has 5000 sequences, I would set the -dbsize flag to 4000. For example: blastp -query A.fasta -db C.fasta -dbsize 4000 [1]: http://software.sbc.su.se/cgi-bin/request.cgi?project=inparanoid
biostars
{"uid": 209606, "view_count": 2271, "vote_count": 1}
Hi! I have many .bam files for which I want to create .bai indexes using samtools in the terminal. I tried the following command: samtools index *.bam However, I did not get any .bai files. Regards
using GNU parallel: parallel samtools index ::: *.bam
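Two alternatives in case GNU parallel is not installed - a plain shell loop, and (assuming samtools >= 1.16) the built-in multi-file mode:

    # one file at a time
    for f in *.bam; do samtools index "$f"; done

    # newer samtools can index several files in one call
    samtools index -M *.bam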
biostars
{"uid": 114921, "view_count": 104475, "vote_count": 14}
Hi, running this

    ensembl = useMart(biomart="ENSEMBL_MART_ENSEMBL", dataset="hsapiens_gene_ensembl", host="may2017.archive.ensembl.org")

or this:

    ensembl = useMart(biomart="ENSEMBL_MART_ENSEMBL", dataset="hsapiens_gene_ensembl", host="http://dec2017.archive.ensembl.org")

used to work in the past, but now it gives the error:

> Request to BioMart web service failed. Verify if you are still connected to the internet. Alternatively the BioMart web service is temporarily down. Check http://www.biomart.org and verify if this website is available. Error: XML content does not seem to be XML:

The site is up, and so is the internet connection. The site says this:

> If you are a user of biomaRt (a part of the Bioconductor library) change the host from 'www.biomart.org' to 'www.ensembl.org'

But in the commands I used to run there's no 'www.biomart.org' to replace... Do you know what the correct command is now? I also tried:

    ensembl = useMart(biomart="ENSEMBL_MART_ENSEMBL", dataset="hsapiens_gene_ensembl", host="www.ensembl.org")

and it didn't work.
As Emily has said, parts of Ensembl are down right now. This includes the archive sites and the BioMart interface at the main site. You can check this just by visiting in a browser (http://www.ensembl.org/biomart/martview?redirect=no) - at the moment you get an error page. However you can use one of the mirror sites e.g. http://uswest.ensembl.org/biomart/martview?redirect=no To do this using the **biomaRt** package you would do something like: ensembl = useMart(biomart="ENSEMBL_MART_ENSEMBL", dataset="hsapiens_gene_ensembl", host="uswest.ensembl.org", ensemblRedirect = FALSE) I would also suggest updating your version of **biomaRt** (and maybe also R). The error message about visiting www.biomart.org is misleading and is no longer produced in more recent versions of **biomaRt**. You now get a more appropriate URL to try, that would have taken you to the error page at Ensembl.
biostars
{"uid": 315520, "view_count": 10458, "vote_count": 2}
Hi, I aligned a few samples using STAR to the genome provided in the Illumina iGenomes UCSC hg19 bundle ([here][1]) -- I used the provided gene feature (gtf2) file as is. Now, my goal is to calculate the gene and isoform expression levels using bedtools multicov (at the same time). Use of the gtf2 file produces a file containing read counts per exon. I wish to compute gene and isoform read counts too, so I converted the gtf2 file to a gff3 file using the gtf2gff3 script from SO/GAL ([here][2]). My first question is: is it OK if the alignment is performed with the gtf2 file but the reads are counted using the gff3 file, keeping in mind that the gff3 file was converted from the gtf2 file? My second question follows. I have read both these resources ([here][3] and [here][4]) but do not understand the differences between:

- exon vs CDS
- transcript vs mRNA

I know that with the process I described, it is possible to retrieve the gene read count by selecting only the lines where feature=gene from the bedtools multicov output. What must I do for isoforms? I am confused by the semantics. Thanks ahead of time, and let me know if my post was not clear enough.

[1]: http://cufflinks.cbcb.umd.edu/igenomes.html
[2]: http://www.sequenceontology.org/software/GAL.html
[3]: http://mblab.wustl.edu/GTF2.html
[4]: http://www.sequenceontology.org/gff3.shtml
Regarding mRNA vs. transcript, since Istvan already mentioned exons and coding sequences (CDS), mRNAs are a subset of transcripts that encode proteins. There are many other types of transcripts (tRNAs, miRNAs, lncRNAs, etc.) that are not protein-coding. Depending on the type of library that was sequenced, you may expect to find mostly mRNAs or mostly other forms.
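One quick way to see how these feature types are laid out in your converted file (assuming it is named `genes.gff3`) is to tabulate column 3; the isoform-level records you want to count against are the `mRNA`/transcript lines, while the `gene` lines give you the gene-level counts:

    # inventory the feature types present in the GFF3
    grep -v '^#' genes.gff3 | cut -f3 | sort | uniq -c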
biostars
{"uid": 98080, "view_count": 4228, "vote_count": 2}
Hi, I have one .vcf file from whole-genome sequencing of tumour vs normal samples of 21 patients. I need a data frame like this as input for a tool for finding driver genes: > head(mutations) sampleID chr pos ref mut 1 Sample_1 1 871244 G C 2 Sample_1 1 6648841 C G 3 Sample_1 1 17557072 G A 4 Sample_1 1 22838492 G C 5 Sample_1 1 27097733 G A 6 Sample_1 1 27333206 G A In separate .vcf files for each patient I have start, end, chromosome, ref, and variant allele. However, I am not sure how to get such a data frame from this big vcf. Any help please? Thank you
Using *bcftools*: bcftools query -f '[%SAMPLE %CHROM %POS %REF %ALT %GT\n]' myFile.vcf > myFileLong.txt
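To get exactly the five columns of the desired data frame, with a header, a small variation on the same command (the output file name is arbitrary):

    ( echo -e "sampleID\tchr\tpos\tref\tmut"
      bcftools query -f '[%SAMPLE\t%CHROM\t%POS\t%REF\t%ALT\n]' myFile.vcf
    ) > mutations.txt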
biostars
{"uid": 363250, "view_count": 2682, "vote_count": 1}
I've got a list of rs SNPs I'd like to enter into the DAVID functional annotation tool. Since it does not support rsids, I need to first get the RefSeq gene ID (or something similar) of the genes these SNPs overlap with. Is there a simple way of getting a gene ID for a SNP? A solution in BioPython or R is fine. Since I have 22k SNPs I need an automatic way. And if a text file that maps these values exists, a link to it would be enough. Ps. I am preferably looking for a solution that does not use the position of the SNPs; I would be able to solve the problem that way myself.
It looks like DAVID can convert from Ensembl Gene IDs, so you can get to those from rs IDs using [R/biomaRt][1], like this: ``` library(biomaRt) mart.snp <- useMart("snp", "hsapiens_snp") getENSG <- function(rs = "rs3043732", mart = mart.snp) { results <- getBM(attributes = c("refsnp_id", "ensembl_gene_stable_id"), filters = "snp_filter", values = rs, mart = mart) return(results) } # default parameters getENSG() refsnp_id ensembl_gene_stable_id 1 rs3043732 ENSG00000175445 # or supply rs ID getENSG(rs = "rs224550") refsnp_id ensembl_gene_stable_id 1 rs224550 ENSG00000262304 2 rs224550 ENSG00000196689 ``` [1]: http://www.bioconductor.org/packages/release/bioc/html/biomaRt.html
biostars
{"uid": 111225, "view_count": 23509, "vote_count": 5}
Hello, I would like to ask you to suggest a literature-based interactomics tool. Thanks in advance.
<p><a href="http://www.ihop-net.org/UniPub/iHOP/">iHop</a> is a mature system that uses gene name recognition text-mining on MEDLINE abstracts and allows you to build a gene/protein network model by clicking and adding sentences with multiple gene names. From the iHop help pages:</p> <blockquote> <p>Nodes in this graph represent genes. Edges correspond to sentences that associate two genes with each other...The model graph makes it possible to analyze your collection of sentences in an interactive manner and to get familiar with the newly aquainted knowledge. By clicking on a gene, all its synonyms will be highlighted in red and the corresponding sentences will be ranked to the top. By clicking on an edge, those sentences containing the connected genes are shown at the top. Furthermore, a line separates these highlighted sentences from all others...By clicking on a gene in a sentence, the corresponding information page for the gene will be opened and additional information can be added to the Gene Model.</p> </blockquote> <p>Another option that has been around for some time is <a href="http://www.pubgene.org/">PubGene</a>:</p> <blockquote> <p>PubGene mines the abstract texts of 25 Million PubMed articles for co-citation of multiple genes or proteins and displays them as "Literature Networks", where nodes represent each gene or protein and the connecting lines represents the number of articles, in which each gene or protein pair is co-cited....When a gene or protein is studied, there is a good chance its name (or a synonym for that name) will appear in articles together with other gene or protein names. One can visualize how most genes that have been studied will be connected either directly or indirectly to each other in a Literature Network. Connections in the literature are a strong indicator of biological interaction.</p> </blockquote>
biostars
{"uid": 8135, "view_count": 3092, "vote_count": 4}
I have a fasta file named `119XCA.fasta`, as shown below: >cellulase ATGCTA >gyrase TGATGCT >16s TAGTATG I need to remove all the fasta headers, keep the sequences one after another, and write the file name as a single fasta header. The expected outcome is shown below: >119XCA ATGCTA TGATGCT TAGTATG I have used the following script `sed '/^>/d' foo.fa > out.fa`, which removes the fasta headers, but I do not know how to write the file name as a header. Therefore, please help me to do the same.
try this:

    $ cat test.fa
    >cellulase
    ATGCTA
    >gyrase
    TGATGCT
    >16s
    TAGTATG

    $ awk 'BEGIN {print ">"ARGV[1]};!/^>/{print}' test.fa
    >test.fa
    ATGCTA
    TGATGCT
    TAGTATG

    $ cat <(echo ">$(basename test.fa .fa)") <(grep -v ">" test.fa)
    >test
    ATGCTA
    TGATGCT
    TAGTATG
biostars
{"uid": 466652, "view_count": 1053, "vote_count": 1}
Project PRJEB99111 has 147 samples. I want to download the metadata (age, sex, disease status, etc.) of each sample, not fastq. The only way I can download the metadata is by downloading the xml file of each sample accession one by one - is there a way to bulk download all 147 metadata files? I can work with xml files if I have to. You can view the metadata for a specific sample accession by clicking on the "attributes" tab. Here is an example for one sample: [https://www.ebi.ac.uk/ena/data/view/SAMEA104228123][1] [1]: https://www.ebi.ac.uk/ena/data/view/SAMEA104228123
Using NCBI eUtils:

    esearch -db bioproject -query "PRJEB99111" | elink -target biosample | \
      efetch -format docsum | \
      xtract -pattern DocumentSummary -block Attribute -element Attribute

produces one tab-separated row of attribute values per sample; a truncated sample (the full rows also carry disease status, location, study title, etc.):

    2017-08-28  2017-08-26  ERS1887138  female  44 years  UBERON:feces  ...
    2017-08-28  2017-08-26  ERS1887137  male    61 years  UBERON:feces  ...
biostars
{"uid": 279582, "view_count": 7957, "vote_count": 2}
Hi, I am new to bioinformatics. I am trying to use the GATK tool for finding SNPs and indels, but the problem is that the documentation seems complex for a beginner like me, with so many tools to start with, including IndelRealigner, MuTect, HaplotypeCaller... The more I read, the more confused I get. I also tried using IndelRealigner, which is the first step in the process of identifying indels (correct me if wrong), but I got an error saying that I don't have the intervals file for targetIntervals in the following command.

    java -jar GenomeAnalysisTK.jar \
        -T IndelRealigner \
        -R reference.fasta \
        -I input.bam \
        -known indels.vcf \
        -targetIntervals intervalListFromRTC.intervals \
        -o realignedBam.bam

The error message is: Could not read file /home/aditya/Bioinfotools/gatk/intervalListFromRTC.intervals because The interval file does not exist. I don't know where to get the intervals file from. But please guide me overall in understanding GATK.
There is a step by step guide: GATK best practices https://www.broadinstitute.org/gatk/guide/best-practices.php
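Regarding the specific error: in the GATK3 workflow, the intervals file passed to `-targetIntervals` is produced by a preceding RealignerTargetCreator step, roughly like this (file names follow the question's command):

    java -jar GenomeAnalysisTK.jar \
        -T RealignerTargetCreator \
        -R reference.fasta \
        -I input.bam \
        -known indels.vcf \
        -o intervalListFromRTC.intervals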
biostars
{"uid": 183289, "view_count": 2064, "vote_count": 1}
Hi, everybody. I have FASTQ headers of the form @FCC3KD2ACXX:6:1101:1545:2184#ATCACGATC/1 The "ATCACGATC" portion of these older-style headers is supposed to be the "index sequence", or the molecular barcode of a multiplexed sample, according to the Wikipedia article. But I know what the barcodes are, and that isn't one of them. All the barcodes for this project are 6-8bp, not 9, and there are "only" about 300 legitimate barcodes in the lane. Overall, there are over 255,000 different individual ones of these index sequences on various different reads, out of about 178M reads in the file. Some of them even contain Ns, but they're all exactly 9bp long. And this particular sequence (ATCACGATC) is by far the most prevalent -- it's on 90% of the reads, so it can't be succeeding at separating anything out very specifically. I'm coming in late to a project, and all I have to go on is the FASTQ files, the list of barcodes, and some wet-lab protocol docs that I'm not particularly qualified to interpret. Any idea what this odd extraneous-looking sequence is? If so, thanks in advance!
Your sequence string 'ATCACGATC' is a perfect match to the TruSeq adapter index 1. The length (9bp) is atypical but not unheard-of. The length of the index read is often index-length-plus-one (i.e., seven cycles for a 6mer), so it appears that the sequencer was programmed for nine index cycles (8mer+1). The last nucleotide is typically trimmed from the data by default (--use-bases-mask = I8n), but the full length can be specified by the user (I9). As @igor indicates, the amount of data suggests that the file was not demultiplexed. However, given that 90% of your data are a perfect match, it's also unlikely to contain multiple libraries. Best guess is that some other lanes of this flow cell did contain 8mer indices, and CASAVA doesn't allow lane-specific BCL-to-FASTQ conversion (so the data in all lanes have identical formats).
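For reference, a demultiplexing sketch that reads 8 index bases and skips the extra ninth cycle; this assumes bcl2fastq2, and the run folder and sample sheet paths are placeholders:

    bcl2fastq --runfolder-dir /path/to/runfolder \
        --output-dir fastq \
        --sample-sheet SampleSheet.csv \
        --use-bases-mask 'Y*,I8n,Y*'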
biostars
{"uid": 198459, "view_count": 9415, "vote_count": 4}
Hi all, in relation to a mail from January this year in the r-help community, I followed Simon's advice to do my analyses in DESeq2 instead of DESeq. I am working on RNA-seq data from C. elegans. I have mapped the data with the Ensembl genome build WBcel215. I ran tophat2 to map and featureCounts to count the reads (both with the default parameters). I have two conditions, control and a knock-out, each with three replicates. Now I am trying to find differentially regulated genes between the two conditions using DESeq2. This is the script I am using to read my raw count table into DESeq2:

    featureCountTable <- read.table("featureCountTable_RawCounts.txt", sep="\t", quote=F)
    colData <- data.frame(row.names=names(featureCountTable),
                          condition = c(rep("wt",3), rep("cpb3", 3)))
    cds <- DESeqDataSetFromMatrix ( countData = featureCountTable,
                                    colData = colData,
                                    design = ~ condition )
    fit = DESeq(cds)
    res = results(fit)

But I am getting the same problem with DESeq2 as I got with DESeq. When I run the DESeq command I get a warning:

    Warning messages:
    1: In log(ifelse(y == 0, 1, y/mu)) : NaNs produced
    2: step size truncated due to divergence

So again I tried to change the fitType:

    fit = DESeq(cds, fitType="local")

which then came back without any warnings. Apparently this time both fitTypes are almost similar (at least to my inexperienced eyes). I attach both dispersion plots. The red line goes through the point cloud in both cases (as Simon defined a good fit in the last communication; I wish it had been that easy :-). With the local fit type there are more outliers and the right end of the slope goes up again. I am not sure whether or not this is a good thing. So, my question is - which of the two options is better? I understand that in general the parametric (default) option is better, but here it gives me a warning, meaning that something in the fit calculations is not good. How can I interpret these plots? Thanks for the help, Assa

default/parametric fit

![DESeq2 parametric fit](http://s23.postimg.org/uk4pitj47/DESesq2_parametric.png)

local fit

![DESeq2 local fit](http://s23.postimg.org/pvopnmtxj/DESesq2_local.png)

P.S. I also tried to post this on the BioC help site, but got no responses, so I am trying it here.
This warning message can be ignored. It is coming from a call to R's glm() function in capturing the (dispersion ~ mean) trend. And the trend is fit iteratively until convergence, so though glm() complained at some step, it produced a final fit without error. If the parametric trend does not converge, local fit is substituted. What version of DESeq2 are you using? I thought I had worked on more comprehensible warning reporting in this function. I would go with the parametric to avoid the curve at the right side, although it shouldn't matter much.
biostars
{"uid": 105192, "view_count": 6951, "vote_count": 4}
I have been working with TCGA cancer data to examine expression (RNAseqV2) and methylation (Illumina 450k) data. I want to look at sequencing data, but I'm a bit lost as to what sort of information is available through TCGA. I want to examine whether there are nonsense mutations between positions 2000-2500 across all cancer types available in TCGA. What sort of resources/workflow should I expect?
If you are asking whether you can search for nonsense mutations between amino acids 2000-2500 of a particular gene in the TCGA sets, I would suggest using the Mutation Annotation Files (MAFs). These files have already been run through somatic variant calling, so you won't have to deal with the sequencing data directly. You can filter out your gene of interest across all the MAFs in perl, awk, grep, etc. Then in the `amino_acid_change` column you can search for integer values between p.2000-p.2500, and `trv_type` "Non-sense mutation". Hope this helps. Nick
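A rough awk sketch of that filter, assuming a tab-separated MAF whose header row contains `Hugo_Symbol`, `trv_type` and `amino_acid_change` columns (column names vary between MAF flavours, and `MYGENE` is a placeholder):

    awk -F'\t' '
      NR==1 { for (i=1; i<=NF; i++) col[$i]=i; next }
      $col["Hugo_Symbol"] == "MYGENE" && $col["trv_type"] ~ /[Nn]on.?sense/ {
        aa = $col["amino_acid_change"]   # e.g. p.R2010*
        gsub(/[^0-9]/, "", aa)           # keep only the position digits
        if (aa+0 >= 2000 && aa+0 <= 2500) print
      }' sample.maf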
biostars
{"uid": 129630, "view_count": 2779, "vote_count": 1}
I'm trying to get a regex to work with rename; I've tried the approach of similar answered questions here but couldn't get the results I wanted. The files are named as such: SR1_S90_L001_R1_001.fastq.gz SR1_S90_L001_R2_001.fastq.gz Rinc_S96_L001_R1_001.fastq.gz Rinc_S96_L001_R2_001.fastq.gz And I would like to retain only the information prior to the first underscore and the _R1_ or _R2_ tags, like this: SR1_R1_.fastq.gz SR1_R2_.fastq.gz Rinc_R1_.fastq.gz Rinc_R2_.fastq.gz Thanks in advance!
Try safe-batch-rename tool `brename` ( https://github.com/shenwei356/brename ) brename -p '^(\w+?)_.+_(R[12])_.+' -r '${1}_$2.fq.gz' # updated # original answer # brename -p '^(\w+)_.+_(R[12])_.+' -r '${1}_$2.fq.gz' # if you have ran this, you can run 'brename -u' to undo.
biostars
{"uid": 323148, "view_count": 3697, "vote_count": 1}
I want to determine the substitution error rate and the indel error rate for a given BAM file. I've been reading over the following article: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3278762/#B1 First off, I just want to confirm that my definitions of a substitution error and an indel error are correct. A substitution error would be when the sequencer substitutes a different base than the actual base in the sequence being sequenced. So if the actual sequence was ..ATGG.. but the sequencer read ..ACGG.., then the reading of C instead of a T would be a substitution error, since one of the reads has a C which shouldn't be there. An indel error would be when the sequencer deletes or inserts a base relative to the actual sequence being looked at. So if the actual sequence was ..GATG.. and the sequencer read ..GTG.., then the A being deleted would be an indel error, since the read is missing an A that should be there. Assuming those are right, I am also confused about how to determine them from a BAM file. If a tool exists for this, that would be great. For substitution I know it would involve the quality score for a given base and the number of mismatches. By mismatches I just mean that if a given base has a coverage of 100X and 98 reads show T but the other two don't, then there are 2 mismatches for that position. I'm just not sure how exactly I would combine these to find a rate. For indel errors I know it would involve homopolymers, as mentioned in the paper, but I have no idea how to find the rate.
BBMap gives very complete statistics for substitution, insertion, deletion, and N (no-call) rates; they get printed to the screen after mapping. Also, it can generate a histogram with the `mhist` flag, showing the match/sub/ins/del/N rate by position in the read; `ehist` shows how many reads have a specific number of substitutions; and `indelhist` gives the counts of indels of specific lengths. bbmap.sh ref=reference.fa in=reads.fq out=mapped.sam mhist=mhist.txt ehist=ehist.txt indelhist=indelhist.txt These histograms can also be generated by Reformat from a sam or bam file, but only if the cigar strings are in sam 1.4+ format (with X and = in the cigar strings instead of M). Most aligners do not generate that kind of cigar string, but you can easily check by looking at one of the mapped reads in the bam file. If you want to exclude actual polymorphisms in the genome that do not come from sequencing, it's probably best to call variations and mutate the reference according to the calls, then map again. Alternately, for SNPs at least, you can get an estimate of the number of substitutions by running an error-correction program which will generally tell you how many errors it finds and corrects.
biostars
{"uid": 149962, "view_count": 5418, "vote_count": 2}
I checked the inbreeding coefficient (F) on my samples (around 200) using plink

    plink --file mydata --het

and found the distribution of F pretty symmetric:

![F distribution](http://snag.gy/QXhqn.jpg)

and the short summary:

    > summary(het$F)
        Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
    -0.35990 -0.07199  0.01568  0.02755  0.13460  0.40790

The F values are quite big (either positive or negative), which made me worry about sample quality. Presumably they are all unrelated. Can anyone tell me whether this distribution of F indicates some problems and what they are? Or, is there a rule of thumb about what F value is considered normal for unrelated, cleaned samples? Thanks!
For relatedness try IBS/IBD estimation: - plink 1.9 https://www.cog-genomics.org/plink/1.9/ibd - old website: http://zzz.bwh.harvard.edu/plink/ibdibs.shtml PI_HAT: - Identical twins, and duplicates, are 100%identical by descent (Pihat 1.0) - First-degree relatives are 50% IBD (Pihat 0.5) - Second-degree relatives are 25% IBD (Pihat 0.25) - Third-degree relatives are 12.5% equal IBD (Pihat 0.125).
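A sketch with the same input as the question; the 0.185 PI_HAT cutoff is a common rule of thumb for flagging second-degree or closer relatives (roughly halfway between 0.25 and 0.125):

    plink --file mydata --genome --out ibd

    # keep the header plus pairs with elevated relatedness (PI_HAT is column 10)
    awk 'NR==1 || $10 > 0.185' ibd.genome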
biostars
{"uid": 58663, "view_count": 10602, "vote_count": 3}
Hi. I am wondering about a term, the SIFT score. I think that SIFT refers to some measurement of SNPs, and while reading the Annovar paper, I saw the following sentence: > Finally, Annovar can filter specific variants such as SNPs with >1% frequency in the 1000 Genomes Projects, or non-synonymous SNPs with SIFT scores > 0.05. Regarding the above sentence, I have two questions. 1. I think that 1% frequency is a fairly low allele frequency. Does it have any effect in filtering out irrelevant SNP variants? I don't think so... 2. The SIFT-score threshold is 0.05, as shown in the above sentence. What does SIFT mean, and what effect does a threshold of 0.05 have on filtering variants?
To answer your first question, 1% is the standard cutoff used to describe the difference between "common" and "rare" variants. Depending on your study, you might want to change that. For example, in a GWAS for a common trait, you might be interested only in variants that are above a certain frequency in the population, whereas if you're looking at rare Mendelian traits you might only want very low frequency variants. You may also want to narrow this down to a specific population, e.g. for a GWAS in African Americans, you would be interested in variants common in African populations. Steve mentioned the VEP, which allows you to filter variants by frequency, choosing your own frequency, > or <, and picking a population of interest.
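To make those thresholds concrete at the command line, here is a hedged sketch filtering an ANNOVAR-style table; the column names `1000g2015aug_all` and `SIFT_score` are assumptions that depend on which annotation protocols were run, and the file name is a placeholder:

    awk -F'\t' '
      NR==1 { for (i=1; i<=NF; i++) c[$i]=i; print; next }
      {
        af   = $c["1000g2015aug_all"]   # "." = not observed in 1000 Genomes
        sift = $c["SIFT_score"]
        # keep rare (<1%) variants predicted deleterious by SIFT (<=0.05)
        if ((af == "." || af+0 < 0.01) && sift != "." && sift+0 <= 0.05) print
      }' out.hg19_multianno.txt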
biostars
{"uid": 121218, "view_count": 24783, "vote_count": 2}
Hi friends, I installed Tophat2 with all dependencies (bowtie2, Boost, Samtools), but when I want to run tophat, it shows this error:

    File "/home/tophat-2.0.14/bin/tophat", line 1003
        except getopt.error, msg:
                           ^
    SyntaxError: invalid syntax

My python version is 3.4. How can I fix this problem? Thanks
Try installing python2.
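The failing line (`except getopt.error, msg:`) is Python 2 syntax that Python 3 cannot parse. A sketch of the fix; the install command assumes a Debian/Ubuntu-style system, and the tophat path follows the question:

    # install a python2 interpreter
    sudo apt-get install python2.7

    # run tophat through it explicitly
    python2.7 /home/tophat-2.0.14/bin/tophat --version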
biostars
{"uid": 142769, "view_count": 5060, "vote_count": 1}
Trying to use [tbl2asn](http://www.ncbi.nlm.nih.gov/genbank/tbl2asn2/). The directions are [here](http://www.ncbi.nlm.nih.gov/books/NBK53709/#gbankquickstart.Submission_using_tbl2asn). It starts on Step C, where you download the tbl2asn file and rename it. It says "set permissions as required for your operating system," but I have no idea how to do that. I moved it into a new folder in my Applications directory and then ran:

    export PATH=/Applications/Command_line_apps:$PATH

in the terminal. I tried to call it using tbl2asn at the command prompt, but I got back:

    bash: /Applications/Command_line_apps/tbl2asn: Permission denied

How do I fix the permissions?
Hello jolespin - this post does not fit the main topic of this site, as it is not a bioinformatics question. I'd suggest that you do some basic reading on file permissions in UNIX. FWIW, `chmod +x yourfile` should make it executable, which may solve your problem.
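Concretely, with the path from the question (the chmod line is the whole fix):

    chmod +x /Applications/Command_line_apps/tbl2asn
    which tbl2asn    # confirm the shell now finds the executable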
biostars
{"uid": 99222, "view_count": 3807, "vote_count": 1}
Hi, Is there a tool out there which will allow me, in any way, to determine how similar two fastq file sets (paired-end) are? It could be any metric like number of identical reads etc. or any other metric which can be relevant in this case. I need this to diagnose the reason behind low agreement of variant calls between two identical runs: if the fastqs are quite similar to each other, then it was the variant-calling pipeline and not the upstream bench-work. Thanks!
Maybe you could try [commet][1]? It was designed for metagenomics, but it allows you to compute a distance between two fastq files. [1]: https://colibread.inria.fr/software/commet/
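If you just want a quick, crude number before reaching for a dedicated tool, you can count the read sequences shared between the two runs directly in bash (hypothetical file names; this ignores base qualities and read order, and counts each distinct sequence once):

    # sequences present in both R1 files (repeat for R2)
    comm -12 <(awk 'NR%4==2' run1_R1.fastq | sort -u) \
             <(awk 'NR%4==2' run2_R1.fastq | sort -u) | wc -l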
biostars
{"uid": 182163, "view_count": 3387, "vote_count": 1}
Dear All, I would like to know about a few freely available docking programs. Thanks
Honestly, it depends on which *docking* you are talking about. The mechanics of the software (i.e. the algorithms, scoring functions) depend on its purpose. If you want to do protein-protein docking, Autodock might not be the best option. For protein-ligand docking, it might. You also have protein-nucleic acid, protein-glycan, multiple proteins, etc. Autodock is a good option for protein-ligand docking. If you want an overview for protein-protein/DNA/etc. docking, have a look at these reviews: [1][1], [2][2]. Nearly all that software is free for academic use. [1]: http://www.ncbi.nlm.nih.gov/pubmed/19462412 [2]: http://onlinelibrary.wiley.com/doi/10.1111/febs.12771/abstract
biostars
{"uid": 115663, "view_count": 4981, "vote_count": 1}
Hi! I have assembled my PacBio FASTQ reads with Canu. Now I would like to polish/correct the assembly by mapping these PacBio FASTQ reads onto the assembly itself. I have heard about Quiver in the SMRTanalysis pack, but I'm wondering if there is any alternative software. Thank you in advance for your help!
The tools [Pilon][1] and [iCORN2][2] perform assembly polishing with Illumina data. I am not sure if iCORN2 also takes PacBio reads for correction. Running additional rounds of Quiver may be helpful to use the PacBio reads efficiently. [1]: http://software.broadinstitute.org/software/pilon/ [2]: http://icorn.sourceforge.net/
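For instance, a Pilon round looks roughly like this; a sketch assuming you also have Illumina reads aligned to the Canu assembly as a sorted, indexed BAM, with memory and file names as placeholders:

    java -Xmx16G -jar pilon.jar \
        --genome canu_assembly.fasta \
        --frags illumina_sorted.bam \
        --output polished --outdir pilon_out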
biostars
{"uid": 258209, "view_count": 2724, "vote_count": 2}
I have Illumina paired-end whole-genome sequencing reads which I have to map to around 400 reference plastid genomes. After getting the mapped reads, I have to assemble a de novo plastid genome.

1. Do I have to map reads to individual reference genomes one by one, or can I download all genomes in one go and index them as one reference genome? Do bwa or bowtie have enough memory to index 400 genomes as one reference index?
2. Which do you think is the best method: mapping to individual genomes, or to all genomes indexed as one?
3. If I have to map individually, can I combine all the BAM files together, and can I convert them to a fastq file using a bam2fastq tool (in picard) for de novo assembly?
~~I don't fully understand your introductory sentence as stated, but I can answer your questions to the best of my understanding:~~

1. You can concatenate all of your genomes into one large file and then index that composite genome. Just make sure your naming convention for each component in the FASTA is logical so that you can understand your results downstream. bwa and bowtie don't "have their own memory," but if you're using a 64-bit system you shouldn't run into any size limitation troubles, especially with plastid sequences.
2. **EDIT:** I say map to a single composite reference. You can pass parameters into bowtie to limit your mappings on the front end, thus saving computational time and making it easier to isolate the most accurate mappings for each read.
3. You can merge BAM files using [samtools merge](http://samtools.sourceforge.net/samtools.shtml), but of course you wouldn't need to if you proceed according to my recommendation. There are a number of tools for converting from BAM back to FASTQ, and the Picard tool should work just fine. It does have trouble with paired-end mappings in certain circumstances though, and if you run into trouble using Picard I'd suggest [bedtools bamtofastq](http://bedtools.readthedocs.org/en/latest/content/tools/bamtofastq.html).

~~Is the idea here that you're going to map to a bunch of plastid reference sequences from various organisms, and then convert the aggregate mappings back to FASTQ and perform an assembly from them? If so, I say concatenate the reference sequences into one file and map against that. That way it will be easier to retain the best mappings up front, especially if you don't care about *which* reference you're mapping to.~~
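A sketch of the composite-reference route described above; paths and thread counts are placeholders, and `-F 12` keeps only pairs where both mates mapped:

    cat plastid_genomes/*.fasta > all_plastids.fa
    bwa index all_plastids.fa

    bwa mem -t 8 all_plastids.fa reads_1.fq reads_2.fq \
        | samtools view -b -F 12 - \
        | samtools sort -n -o mapped.bam -    # name-sorted for bamtofastq

    bedtools bamtofastq -i mapped.bam -fq out_1.fq -fq2 out_2.fq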
biostars
{"uid": 98432, "view_count": 6865, "vote_count": 2}
Hello all, `bcftools` has these logical operators that can be used in filtering expressions: && (same as &), ||, | What's the difference between `||` and `|`? Can someone provide an example and/or use case for clarification? The [manual][1] has this example: QUAL>10 | FMT/GQ>10 .. true for sites with QUAL>10 or a sample with GQ>10, but selects only samples with GQ>10 QUAL>10 || FMT/GQ>10 .. true for sites with QUAL>10 or a sample with GQ>10, plus selects all samples at such sites But this is not clear to me. fin swimmer [1]: http://www.htslib.org/doc/bcftools.html#expressions
Say your VCF contains the per-sample depth and genotype quality annotations and you want to include only sites where one or more samples have big enough coverage (`DP>10`) and genotype quality (`GQ>20`). The expression `-i 'FMT/DP>10 & FMT/GQ>20'` selects sites where the conditions are satisfied within the same sample: bcftools query -i'FMT/DP>10 & FMT/GQ>20' -f'%POS[\t%SAMPLE:DP=%DP GQ=%GQ]\n' file.bcf 49979 SampleA:DP=10 GQ=50 SampleB:DP=20 GQ=40 On the other hand, if you need to include sites where both conditions met but not necessarily in the same sample, use the && operator rather than &: bcftools query -i'FMT/DP>10 && FMT/GQ>20' -f'%POS[\t%SAMPLE:DP=%DP GQ=%GQ]\n' file.bcf 31771 SampleA:DP=10 GQ=50 SampleB:DP=40 GQ=20 49979 SampleA:DP=10 GQ=50 SampleB:DP=20 GQ=40 This example is taken from http://samtools.github.io/bcftools/howtos/filtering.html --- **EDIT: (inserted by a mod)** *Answer given on [github][1]:* Well, sorry to demonstrate the difference on `&` and `&&` instead of `|` of `||`, but it's the same priniciple. The manual page says it all: QUAL>10 | FMT/GQ>10 .. true for sites with QUAL>10 or a sample with GQ>10, but selects only samples with GQ>10 QUAL>10 || FMT/GQ>10 .. true for sites with QUAL>10 or a sample with GQ>10, plus selects all samples at such sites Or you can try to run yourself: $ bcftools query -f'[%POS %SAMPLE %DP\n]\n' -i'FMT/DP=19 | FMT/DP="."' test/view.filter.vcf 3162006 A 19 3162007 A . 3162007 B . $ bcftools query -f'[%POS %SAMPLE %DP\n]\n' -i'FMT/DP=19 || FMT/DP="."' test/view.filter.vcf 3162006 A 19 3162006 B 1 3162007 A . 3162007 B . [1]: https://github.com/samtools/bcftools/issues/856#issuecomment-416140784
biostars
{"uid": 331106, "view_count": 3063, "vote_count": 3}
I've downloaded a fastq file from SRA (http://trace.ncbi.nlm.nih.gov/Traces/sra/) containing reads from a paired-end Illumina 101 bp RNAseq experiment. The only problem is, it contains both read pairs in a single file, whereas I need separate files with all the _1.fq reads in one and the _2.fq reads in another. Can anybody help? I'm aware of the fastq-dump tool within the SRA Toolkit, but I couldn't get it to work when I was originally downloading the data. Many thanks in advance. My fastq file looks like this:

    $ head sra_data.fastq
    @SRR1659960.1.1 1 length=101
    NAGAAATGAATGAGCCTACAGATGATAGGATGTTTCATGTGGTGTATGCATCGGGGTAGTCCGAGTAACGTCGGGGCATTCCGGATAGGCCGAGAAAGTGT
    +SRR1659960.1.1 1 length=101
    #1=BDDDDDHFBFIEHHHHAG<HE@HGGE@HHFGHGGHHFHIHG@FFGGGHIIIIIFAC=F@GEGEECCDCECCBBBBCCCD>9599>C:@>5@9>?CCCD
    @SRR1659960.1.2 1 length=101
    CCCACTTCCACTATGTCCTATCAATAGGAGCTGTATTTGCCATCATAGGAGGCTTCATTCACTGATTTCCCCTATTCTCAGGCTACACCCTAGACCAAACC
    +SRR1659960.1.2 1 length=101
    <7?BD?DD<DFFABBEHEEFHII>C:BCDD?<C?FFC4E>@DEF>?FGHDFBBCG8??DGGIII:BF@C=FFC;C=D;@?EA76?DDBEC?>>ACCCABBB
    @SRR1659960.2.1 2 length=101
    NATAAAGTGTATGACAAATATACAAGGCTCCTAATATTGGTTTAACTTGGAGAAGTAGGTAAAGGAAGAAGGGNAAAGGAAATAGACAAAAAGACTACAGT
Use Reformat from the BBMap package: reformat.sh in=sra_data.fastq out1=r1.fq out2=r2.fq interleaved
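For completeness, the SRA Toolkit route mentioned in the question would have produced split files at download time (assuming the toolkit works in your hands):

    fastq-dump --split-files SRR1659960
    # writes SRR1659960_1.fastq and SRR1659960_2.fastq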
biostars
{"uid": 162142, "view_count": 3005, "vote_count": 2}
Dear all, has anyone done any benchmarking on speeding up long-read alignment algorithms? I mainly use minimap2, but its runtime varies by a factor of 10 across our cluster. I've been trying mm2-fast https://github.com/bwa-mem2/mm2-fast, the partially accelerated version, but without much success so far. Is, for example, PAF output faster than SAM? Have others worked out how to scale minimap2 for PromethION-scale datasets? I expect LRA https://github.com/ChaissonLab/LRA has a similar runtime, judging from their presented results, and others seem slower still (ngmlr etc.). Thanks
1. Yes, when Minimap2 makes paf files it is faster than when it makes sam files.
2. For a speedup at the cost of accuracy you can increase the minimizer length ("-k") and window length ("-w").
3. You can increase "-I". If the reference is larger than 4 Gbp, this will accelerate Minimap2 at the cost of increased RAM consumption.

Also, see https://github.com/lh3/minimap2/issues/322
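Putting those knobs together, a sketch for a nanopore dataset; PAF is the default output when `-a` is omitted, and the larger `-k`/`-w` values here are illustrative trade-offs rather than recommendations:

    minimap2 -x map-ont -t 32 -k 19 -w 19 -I 16G --secondary=no \
        ref.fa promethion_reads.fastq.gz > aln.paf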
biostars
{"uid": 9543993, "view_count": 688, "vote_count": 1}
I am interested in assessing the clonality of a tumor sample I have, which is also paired with a normal tissue control from the same patient (~30x coverage). Specifically, I would like to know if there are any significant sub-clones present within the tumor sample. I have come across numerous tools which look like they may be able to help me, including: [cloneHD][1], [SubcloneSeeker][2], [ABSOLUTE][3], [CITUP][4], [THetA2][5], [PhyloWGS][6] .. others? Does anybody have experience using any of these tools, or any thoughts on which one I should try out? [1]: http://www.sciencedirect.com/science/article/pii/S2211124714003738 [2]: http://www.genomebiology.com/2014/15/8/443 [3]: http://www.nature.com/nbt/journal/v30/n5/full/nbt.2203.html [4]: http://bioinformatics.oxfordjournals.org/content/31/9/1349.full [5]: http://bioinformatics.oxfordjournals.org/content/30/24/3532.full#ref-6 [6]: http://www.genomebiology.com/2015/16/1/35
We've been working on integrating clonality and heterogeneity estimation tools into bcbio (https://github.com/chapmanb/bcbio-nextgen). It's still a work in progress but we've been evaluating against internal datasets where we have external predictions of normal contamination. We've had the most success with: - Battenberg from Sanger (https://github.com/cancerit/cgpBattenberg). This calls CNVs and provides estimates of normal contamination. It's also a required input for PhyloWGS. - PhyloWGS. We don't currently have a way to validate heterogeneity but get a useful set of trees to estimate how much of a mix is in a tumor sample. - BubbleTree from MedImmune (http://www.bioconductor.org/packages/release/bioc/html/BubbleTree.html) which provides estimates of normal contamination and also clonality. We haven't dug much into the heterogeneity estimates of this yet. We also looked at THetA2, but didn't have good luck with it, so spent more time with the above three tools. Hope this helps provide some useful directions.
biostars
{"uid": 148085, "view_count": 6591, "vote_count": 5}
Hello, I am trying to use bedtools subtract in the following way:

    $ bedtools subtract -a A.bed -b B.bed

but I get the following error:

    ERROR: file A.bed has non positional records, which are only valid for the groupBy tool

When I tried to Google the answer, none of the solutions provided seemed to help. Here is the head of A.bed:

    $ head A.bed
    chr1    1   249250621
    chr2    1   243199373
    chr3    1   198022430
    chr4    1   191154276
    chr5    1   180915260
    chr6    1   171115067
    chr7    1   159138663
    chrX    1   155270560
    chr8    1   146364022
    chr9    1   141213431

I cross-checked that the columns are tab delimited and the three important columns are in the right order. Additionally, there are no invisible characters after the final column that might throw off bedtools. Any insight into the problem would be greatly appreciated!
I found the solution. I made A.bed in Windows and B.bed in Linux, so the two files had conflicting line endings (CRLF vs LF). When I converted the Windows line endings into Linux line endings, bedtools started working properly again.
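For reference, two common ways to do that conversion on the Windows-made file (the first assumes the dos2unix tool is installed):

    dos2unix A.bed
    # or, with sed, stripping the trailing carriage returns in place:
    sed -i 's/\r$//' A.bed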
biostars
{"uid": 330003, "view_count": 5401, "vote_count": 3}
I have a data set comparing two conditions, HSC and CMP, and I want to do a scatter plot along with the regression value. Meanwhile, when I try to label the samples they are all labelled with the same color; I want a different color for the labelling so that the points can be distinguished.

Here is my sample data set

    gene                  HSC        CMP
    ENSG00000158292.6     1.8102636  2.456869
    ENSG00000162496.6     2.6796705  6.203838
    ENSG00000117115.10    3.4509115  5.555739
    ENSG00000159423.14    3.6809277  5.063446
    ENSG00000053372.4     5.7089974  6.851090
    ENSG00000127423.8     4.4894292  5.996304
    ENSG00000242125.3     10.6258802 11.715932

Here is my code

    library("ggpubr")
    ggscatter(data1, x = "HSC", y = "CMP",
        add = "reg.line", conf.int = FALSE,
        cor.coef = TRUE, cor.method = "pearson",
        color = "black", size=1)

Any suggestion or help would be highly appreciated
If I understand correctly, you want a scatterplot with a linear regression line?

> Edit to take your comment into account, i.e. colouring the points by a grouping column (here called `variable`), using ggplot. If data1 is your dataframe and it contains a `variable` column to colour by:

    ggplot(data1, aes(x = HSC, y = CMP, col = variable)) +
      geom_point() +
      geom_smooth(method = "lm", se = TRUE)
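Since the example data only has gene, HSC and CMP columns, one way to obtain a colour grouping is to derive it yourself; for instance (a sketch, the grouping rule here is arbitrary):

    library(ggplot2)
    # hypothetical grouping: colour each gene by which condition is higher
    data1$variable <- ifelse(data1$CMP > data1$HSC, "higher in CMP", "higher in HSC")
    ggplot(data1, aes(x = HSC, y = CMP, col = variable)) +
      geom_point() +
      geom_smooth(method = "lm", se = TRUE)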
biostars
{"uid": 284277, "view_count": 2747, "vote_count": 1}
Hello, I have a WIG file and need to find genes regulated by the protein which is alternatively spliced (to be precise, the type of alternative splicing should be A5SS, alternative donor sites). Please advise which programs I should use for this purpose.
[MATS][1] is my favorite tool for splicing event analysis

[1]: http://rnaseq-mats.sourceforge.net/

As mentioned in the previous comment, MISO is another option. In both cases, you need `.bam` files for RNA-Seq data (neither uses ChIP-Seq data, which is probably not reliable for analyzing splicing events).
biostars
{"uid": 105037, "view_count": 4371, "vote_count": 1}
Hello. I am trying to run a blast against NR locally, and I am looking for a way to run it as fast as I can. I recalled that I used LC_ALL=C fgrep -wf- for a "quicker" search from one file against another, so I'm wondering if I can do something similar with the blast. In the same vein as with fgrep, I tried

    blastp -query FILE -db LC_ALL=C /home/Protein_DataBases/nr/nr.fasta -evalue 1e-5 -outfmt 6 -num_threads 25

but it doesn't work. Does anyone know if I can do this? Thank you in advance!
Don't put `LC_ALL=C` as a parameter to blast! That won't work.

You can set LC_ALL=C before a command:

    LC_ALL=FOO; echo $LC_ALL

prints:

    FOO

and it gets applied to all commands run in that shell. The best recommendation would be to export the variable in bash (preferably upon initialization, e.g. in your `~/.bashrc`) so that it is always applied:

    export LC_ALL=C

You may run into various problems otherwise when it comes to bioinformatics tools and processes. Specifically, sorting will be byte-wise when LC_ALL is set to `C` and alphabetical otherwise.
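So the command from the question would become something like the following (note that the locale mainly affects text tools such as sort and grep; don't expect it to speed BLAST itself up):

    export LC_ALL=C
    blastp -query FILE -db /home/Protein_DataBases/nr/nr.fasta -evalue 1e-5 -outfmt 6 -num_threads 25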
biostars
{"uid": 415951, "view_count": 854, "vote_count": 1}
How can I best plot a histogram for billions of genotype quality values? I have a simple one-column file with billions of genotype quality values. The file is several GB uncompressed. Is there a statistics library in Python or R that can build up a histogram by streaming through the data, instead of loading everything into memory and then creating the histogram? I prefer using all of the data versus sampling it. Or do I have to write a script first to collect the counts per bin and then give those counts per bin to R for plotting? This functionality feels like it should already exist in a stats library somewhere. I know the min and max of the values and would be able to specify a bin size.
Meet the selling point of Datashader: http://datashader.readthedocs.org/en/latest (even though I tend to filter and subset the data whenever possible).
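If you do end up pre-binning the counts yourself, a minimal streaming sketch in Python (one value per line assumed; the file name and bin range are hypothetical, since you said you already know your min/max):

    import numpy as np

    bins = np.linspace(0, 100, 101)                 # hypothetical min/max -> 100 bins
    counts = np.zeros(len(bins) - 1, dtype=np.int64)

    chunk = []
    with open("gq_values.txt") as fh:               # hypothetical file name
        for line in fh:
            chunk.append(float(line))
            if len(chunk) == 1_000_000:             # accumulate per-chunk histograms
                counts += np.histogram(chunk, bins=bins)[0]
                chunk = []
        if chunk:                                   # flush the last partial chunk
            counts += np.histogram(chunk, bins=bins)[0]

    # 'counts' now holds the per-bin totals and can be plotted or handed to R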
biostars
{"uid": 180857, "view_count": 2333, "vote_count": 1}
Hi! I started using snakemake to replace my bash scripts in order to have "prettier" code, but I have some problems, especially with parallelizing my jobs. I have this code:

    FILES = [ os.path.basename(x) for x in glob.glob("Experience/*") ]
    SAMPLES = list(set([ "_".join(x.split("_")[:2]) for x in FILES]))
    CONDITIONS = list(set(x.split("_")[0] for x in SAMPLES))

    for path in DIRS:
        if not os.path.exists(path):
            os.mkdir(path)

    rule all:
        input:
            expand('Trimming/{sample}_R1.trim.fastq', sample=SAMPLES)

    rule trimming:
        input:
            adapters = ADAPTERS,
            r1 = 'Experience/{sample}_R1.fastq.gz',
            r2 = 'Experience/{sample}_R2.fastq.gz'
        output:
            r1 = 'Trimming/{sample}_R1.trim.fastq',
            r2 = 'Trimming/{sample}_R2.trim.fastq'
        message: ''' --- Trimming --- '''
        shell:
            ' bbduk.sh in1="{input.r1}" in2="{input.r2}" out1="{output.r1}" out2="{output.r2}" \
            ref="{input.adapters}" minlen='+str(minlen)+' ktrim='+ktrim+' k='+str(k)+' qtrim='+qtrim+' trimq='+str(trimq)+' hdist='+str(hdist)+' tpe tbo '

I have 5 samples; having used the wildcard "sample", I was expecting my 5 trimming jobs to start at the same time, but they start one after the other. What's wrong with my code? Thank you in advance
How did you start the pipeline? Did you use the `-j` option of snakemake to allow multiple jobs?
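For example, to allow up to five jobs (or cores) to be used concurrently:

    snakemake -j 5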
biostars
{"uid": 364708, "view_count": 4201, "vote_count": 2}
Kind of a basic awk question. I have a bed file where I have been using the following awk script to limit interval sizes to those under 1000bp; how would I go about doing the same thing but limiting the output to those between 150-200bp only? Thanks!

    awk '{if($3-$2 <= 1000) print}' test.bedpe > test_under1000.bedpe
Use awk `AND` operator: awk '{if (CONDITION && CONDITION) print}' test.bedpe > test_between_150-200.bedpe
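Filled in for the 150-200 bp window from the question, that would be something like:

    awk '{if ($3-$2 >= 150 && $3-$2 <= 200) print}' test.bedpe > test_between_150-200.bedpe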
biostars
{"uid": 401031, "view_count": 1075, "vote_count": 1}
Hi everyone, I am trying to make a consensus from an aligned bam file. This is my workflow:

    bcftools mpileup -Ou -f reference_HG19.fa aligned.bam | bcftools call -mv -Ob -o calls.bcf
    bcftools index calls.bcf
    cat reference_HG19.fa | bcftools consensus calls.bcf > consensus.fa

However, the created consensus.fa file turned out to be identical to the reference_HG19.fa file. I looked at the calls.bcf file with bcftools view, and it seems the following columns are all empty:

    #CHROM  POS  ID  REF  ALT  QUAL  FILTER  INFO  FORMAT  aligned.bam

When I look at calls.bcf with samtools view I get the error "Aborted". Does anyone have an idea how to solve this issue? Thank you very much!

Robert
We finally found the solution: The downloaded `bam` was generated by paired end sequencing. For some unknown reason `bcftools` considers these read pairs to be "anomalous". By using the `-A` input option one prevents `bcftools` from skipping anomalous read pairs. Now it's working as it is supposed to.
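In other words, the first step of the pipeline from the question becomes (the rest of the commands stay the same):

    bcftools mpileup -A -Ou -f reference_HG19.fa aligned.bam | bcftools call -mv -Ob -o calls.bcf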
biostars
{"uid": 353381, "view_count": 3741, "vote_count": 4}
Dear all, referring to batch correction methods for scRNA-seq, would you have any preference and/or comments among the possible choices below?

-- MNNCorrect, as outlined in the SimpleSingleCell workflows: https://bioconductor.org/packages/release/workflows/html/simpleSingleCell.html

-- ZINB-WaVE: https://bioconductor.org/packages/release/bioc/html/zinbwave.html

-- Harmony: https://www.biorxiv.org/content/10.1101/461954v2

-- SCTransform: https://satijalab.org/seurat/v3.0/integration.html

thanks a lot, bogdan
So far, MNN is the best (but still very limited) algorithm for general batch effect correction. But based on a recent paper (https://www.nature.com/articles/s41587-019-0113-3), in some situations it exhibits only minor improvement over doing nothing. It all depends on how good your data are.
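If you want to try the MNN route, a minimal sketch with Bioconductor's batchelor package (object names are hypothetical; each argument is one batch):

    library(batchelor)
    # sce1, sce2: SingleCellExperiment objects (or log-expression matrices), one per batch
    corrected <- fastMNN(sce1, sce2)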
biostars
{"uid": 401404, "view_count": 4644, "vote_count": 3}
***EDIT SKB 17MAR16***

Hi Group, I figured out that I was not calling the correct variable in my loop. I made a file handle outside of the loop to append to, but then was using the SeqIO.write function without the handle, so it was writing over the same file with just the last record. Please find the corrected script below and thanks for the help!

    # this script is used to convert fastq files to fasta files
    # then to rename the fasta ID with the sample ID from the lab
    from Bio import SeqIO
    import sys

    # grabbing the file and the name
    seq_file = sys.argv[1]
    labels = seq_file.split(".")

    # converting the file from fastq to fasta
    SeqIO.convert(seq_file,"fastq",labels[0]+".fasta","fasta")

    # taking the converted file and then changing the fasta headers;
    # collect the renamed records first, then rewrite the file
    # (a handle opened in append mode cannot be read from)
    records = []
    for seq_record in SeqIO.parse(labels[0]+".fasta","fasta"):
        old_header = seq_record.id
        new_header = labels[0]
        seq_record.id = new_header + "_" + old_header # renaming the pseudogene with
                                                      # the lab id and the reference
                                                      # used
        seq_record.description = "" # this strips the old header out
        records.append(seq_record)

    with open(labels[0]+".fasta","w") as handle:
        SeqIO.write(records, handle, "fasta")

***/EDIT***

Hi group, I wrote a script that will take the fastq output from samtools mpileup > vcf2fq and create a fasta file that includes the drops in coverage. When I use this script for a bacterial genome it works as expected; however, when I use it on a segmented genome it only outputs the last segment in the file. I'm using python3.4 and biopython1.66. Please see my code and troubleshooting below.

    # this script is used to convert fastq files to fasta files
    # then to rename the fasta ID with the sample ID from the lab
    from Bio import SeqIO
    import sys

    # grabbing the file and the name
    seq_file = sys.argv[1]
    labels = seq_file.split(".")

    # converting the file from fastq to fasta
    SeqIO.convert(seq_file,"fastq",labels[0]+".fasta","fasta")

    # taking the converted file and then changing the fasta header
    handle = open(labels[0]+".fasta","a+")
    for seq_record in SeqIO.parse(handle,"fasta"):
        old_header = seq_record.id
        new_header = labels[0]
        seq_record.id = new_header + "_" + old_header # renaming the pseudogene with
                                                      # the lab id and the reference
                                                      # used
        seq_record.description = "" # this strips the old header out
        SeqIO.write(seq_record, labels[0]+".fasta","fasta")
    handle.close()

I think I have an error when writing the new record. I have been playing with this line:

    handle = open(labels[0]+".fasta","a+")

When I have it written like this, it runs without error and I get a multi-fasta output without the header change; however, when I have the code written like this:

    handle = open(labels[0]+".fasta","rU")

like in the biopython tutorial, I get the output header I want, but only the last record in the file. I'm not sure why, with the file open in "append" mode, it just skips the next part of the script. Any input would be appreciated.
See original thread for answer
biostars
{"uid": 181428, "view_count": 4829, "vote_count": 1}
Hello, I am trying to extract some transcript sequences from a StringTie merged gtf using gffread and am getting the following errors:

    Error (GFaSeqGet): subsequence cannot be larger than 227
    Error getting subseq for MSTRG.17.1 (1..229)!

This error happens for many of the gene entries; some but not all transcripts end up getting extracted. The gtf file was created in StringTie using the same reference genome file as everything else (where my bam files are from, the reference annotation gtf file). This annotation is from running StringTie first and merging the gtf annotation files. The main problem seems to be that the end coordinates exceed the sequence length when extracting transcripts, even though I am sure that they were made correctly. If there is a way to just skip that particular gene, that could work as well. This is the command I used:

    gffread /Users/Desktop/StringTie_comp_annotation_new.gtf -g /Users/Desktop/Xla.v91.repeatMasked.fa -w transcripts.fasta
Hi, jjrin. I'm having a similar situation here. I have a BAM file from Bowtie2 and a BAM file from BWA. The Bowtie2 BAM file works fine, but when I run gffread with the BWA BAM file I get the same error. I found in other threads that it may be related to soft-clipped reads extending beyond the contig. Anyway, I got a workaround by just replacing the end position in the GTF file with the sequence size from my reference FASTA file. If this solution is a valid one for you, I'll leave the script here: [gtf_fixer_togffread.py][1]

[1]: https://github.com/VictorGambarini/gtf_fixer_to_gffread
biostars
{"uid": 264727, "view_count": 3910, "vote_count": 1}
Hi All

I have some samples with low mapping rates in Salmon (40% and less) that have higher alignment rates in TopHat, and I am trying to troubleshoot. I picked some of the unmapped reads (from Salmon's --writeUnmappedNames parameter) and BLATed them against human. Some have 2 or more matches with identity 99% to 100%, and some have many, many matches (I need to scroll the page down a lot). Many of these matches are 100% and some range between 85% and 100% identity.

I also looked into the "ambig_info.tsv", and found some records with 0 unique mappings and more than 100 ambiguous mappings, but couldn't relate them to those unmapped reads.

This is how one match of one of them looks in one mate and the other:

    ACTIONS          QUERY    SCORE  START  END  QSIZE  IDENTITY  CHRO  STRAND  START     END       SPAN
    browser details  YourSeq  22     100    122  151    100.0%    10    +       37378275  37378303  29
    browser details  YourSeq  22     52     74   151    100.0%    10    -       37378275  37378303  29

So why is this not counted as mapped, for example? Any hint, clue?

Thanks
Salmon maps reads to the transcriptome while TopHat aligns to the genome (assuming that you are using TopHat in the "normal" way). I suspect that if you examine the reads that map with TopHat but not with Salmon, some (perhaps large) proportion of them is mapping to intergenic and/or intronic regions, so those reads will not be mapped by Salmon. Taking a look in a browser at the BAM file from TopHat or using a tool like RNA-SeQC can help quantify the intergenic/intronic reads.
biostars
{"uid": 284505, "view_count": 2977, "vote_count": 3}
Hello world! I am working on microarray data (HuGene 2.0 ST from Affymetrix) and I wanted to get probe-level information. For one probeset (fsetid), I need all the different probes (fid) and their chromosome, start, stop, sequence etc. I know I can get this on NetAffx, but as I have to check multiple probesets, I wanted to use the library *oligo* in association with *pd.hugene.2.0.st*.

Here is my code, tested on a 'test-list' of 6 .CEL files:

    ## Library loading
    library(oligo)
    library(dplyr)
    library(pd.hugene.2.0.st)

    # .CEL importation and read
    celFiles <- list.celfiles('./', full.names=TRUE)
    batch = read.celfiles(celFiles)

    # Expression matrix tmp
    tmp=batch@assayData$exprs
    dim(tmp)

    # Connection pd.hugene.2.0.st@getdb() and normalization
    conn = pd.hugene.2.0.st@getdb()
    norm = rma(batch, background=TRUE, normalize=TRUE, target="core")

    # Data importation
    tableNames=dbListTables(conn)
    tablesList=list()
    for (tableName in tableNames){
        tablesList[[tableName]]=dbGetQuery(conn,paste('SELECT * FROM',tableName))
        print(tableName)
        str(tablesList[[tableName]])
    }

    # Simplification of the tablesList dataframe
    testList <- tablesList$featureSet
    testList2 <- tablesList$pmfeature
    testListComplet <- dplyr::full_join(testList, testList2, by = "fsetid")

    # Test on one probeset randomly selected : probeset 16730541
    ind = testListComplet$fid[which(testListComplet$fsetid == 16730541)]
    tmp[ind,]
    probeset_test <- subset(testListComplet, fsetid == 16730541)
    probeset_test

Here is the probeset_test result I get:

    fid     fsetid   strand start     stop      transcript_cluster_id exon_id crosshyb_type level chrom type fid     atom   x   y
    153940  16730541 0      102218019 102218094 16730540              5033332 1             NA    11    1    944681  153940 48  586
    153941  16730541 0      102218019 102218094 16730540              5033332 1             NA    11    1    2165196 153941 279 1343

So, according to pd.hugene.2.0.st, for probeset **16730541**, I have 2 probes: **153940 & 153941**, with a common start (102218019) and stop (102218094). But when I check on the NetAffx website ( https://www.affymetrix.com/analysis/netaffx/exon/wtgene_probe_set.affx?pk=712:16730541 ), here are the two probes I get:

    atcagcggcgccgacaaggagatac    chr11:102218019-102218043 (+)
    cagcaaacacggaagctgcgcggct    chr11:102218070-102218094 (+)

Did I do something wrong? Did I misunderstand something? Could you please enlighten me?! I am sorry if this is a stupid question but I could not figure it out by myself :( Thank you so much in advance.

Here is my R info:

    platform       x86_64-apple-darwin15.6.0
    arch           x86_64
    os             darwin15.6.0
    system         x86_64, darwin15.6.0
    status
    major          3
    minor          4.0
    year           2017
    month          04
    day            21
    svn rev        72570
    language       R
    version.string R version 3.4.0 (2017-04-21)
    nickname       You Stupid Darkness
You are right (almost!). If you look further in the NetAffx page that you linked:

**Probe Set Location:** The genomic location of the probe set for the genome assembly used at array design time (See Genome Source). These coordinates ***begin at the first base of the first probe sequence and end at the last probe of the probe set.***
biostars
{"uid": 263082, "view_count": 2674, "vote_count": 2}
Hi

I know that with

    sp_url <- paste("https://cancer.sanger.ac.uk/cancergenome/assets/",
                    "signatures_probabilities.txt", sep = "")

as implemented in the `MutationalPatterns` R package, I can download the mutational signatures from the COSMIC website, but only version 2 (30 signatures).

Does anyone know where I can obtain these probabilities for version 3 (60 signatures)?

Thanks in advance
[Here][1] you can download the SBS, DBS and ID probabilities for the PCAWG reference signatures. Make sure you know which one you are using: there is one set from SigProfiler and another from SignatureAnalyzer, the two methods used in the original publication.

[1]: https://www.synapse.org/#!Synapse:syn12009743
biostars
{"uid": 434874, "view_count": 1147, "vote_count": 2}
Hi Guys,

I have a BAM file, and a big read list. What I want to do is to remove the reads in the read list from the BAM file. I can transform BAM to SAM, use a Python script to remove unwanted reads, and then transform SAM back to BAM again. But I am wondering if there is a more efficient way, by which I mean faster, easier, and memory-efficient, to achieve this goal?

Any advice is appreciated!

Tao
[picard FilterSamReads](http://broadinstitute.github.io/picard/command-line-overview.html#FilterSamReads) > **READ_LIST_FILE (File)** Read List File containing reads that will be included or excluded from the OUTPUT SAM or BAM file. Default value: null.
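A sketch of the invocation (file names are hypothetical; `FILTER=excludeReadList` drops the listed reads, while `includeReadList` would keep only them):

    java -jar picard.jar FilterSamReads \
        I=input.bam \
        O=filtered.bam \
        READ_LIST_FILE=reads_to_remove.txt \
        FILTER=excludeReadList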
biostars
{"uid": 172737, "view_count": 13019, "vote_count": 3}
Hi. I am posting a follow-up question. This question is about somatic versus germline. Please see the paragraph below with the bold text.

At every position where one or both samples had a variant, VarScan performs a direct comparison between tumor and normal genotype and supporting read counts to determine the somatic status. (I understand up to here, but the next sentence confuses me.)

**Variants present in both samples are classified as somatic , variants heterozygous in the normal but homozygous in the tumor are classified as LOH(loss of heterozygosity) , and variant shared between samples are classified as germline**.

Regarding the second sentence above: what is the difference between somatic and germline? The phrase "variants present in both samples" confuses me. I knew that a somatic mutation is a variant that exists only in the tumor sample, but reading and thinking about that sentence again, I can't connect it to the concept of somatic I already knew.

I also posted a question before, but want to ask again about homozygous and heterozygous: how can we define homo- or heterozygous considering that only one pair of chromosomes is sequenced? Am I misunderstanding the concept?

I hope you understand my fuzzy questions. I am looking forward to your answer. Thank you!
A somatic variant (= mutation) is a mutation that happens at some point in your life. This variant differs from what is hardcoded in your genome at birth. In cancer genomics, e.g. if you want to find somatic mutations, which means acquired mutations that are unique to the cancer cells (either as a cause or a consequence of the cells transforming into tumor cells), you usually compare the tumor genome against the wildtype (germline) genome of the same individual (e.g. PBMCs, healthy tissue). If you see a variant in the tumor but not in the germline, this could be a hint for a somatic mutation.

Here is a simplified example explaining hetero- and homozygosity:

Let's assume we have a diploid cell (which is not always the case, e.g. for tumor cells, which can have copy number variations). If variant A (e.g. a SNV) is homozygous, you expect to see 100% of the reads from your exome-seq or WGS (let's assume NGS doesn't produce errors) containing this variant. If this variant is heterozygous (so one chromosome has the variant A and the other has the wildtype B), you expect 50% of the reads to contain A and 50% of the reads to contain B.

Note that this explanation is for DNA-Seq, not for RNA-Seq.

Maybe this publication is also useful for understanding the very interesting field of allele frequencies:

http://www.nature.com/srep/2014/140422/srep04743/full/srep04743.html
biostars
{"uid": 108058, "view_count": 6935, "vote_count": 3}
After reading about the discrepancies in the RPKM method and TPM as a solution, this question came to my mind: would TPM be a good tag normalization method for ChIP-seq? During ChIP normalization I am always worried about the ChIP-enrichment strength when comparing ChIP-seq in two different conditions after normalizing with tags per million. **Is TPM a good method for ChIP-seq normalization?** All suggestions are helpful.

----------

Thanks
Not sure it answers your question... One strategy I used to quantify "peak strength" is to count reads in the target region (e.g. from peak callers or some other regions of interest) and compare this count to the count in the flanking regions, left and right of the target. By contrasting target vs flanking you don't have to worry about library size and you capture local biases that, supposedly, are conserved between libraries. You can get the script from here: wget 'https://github.com/dariober/bioinformatics-cafe/blob/master/localEnrichmentBed.py?raw=true' -O localEnrichmentBed.py This is from the help: localEnrichmentBed.py -h usage: localEnrichmentBed.py [-h] --target TARGET --bam BAM --genome GENOME [--slop SLOP] [--blacklist BLACKLIST] [--tmpdir TMPDIR] [--keeptmp] [--verbose] [--version] DESCRIPTION Compute the read enrichment in target intervals relative to local background. Typical use case: A ChIP-Seq experiment on a sample returns a number of regions of enrichment. We want to know how enriched these regions are in a *different* sample. Note that enrichment is quantified relative to the local background not relative to an input control. See also localEnrichmentScore.R to combine replicates and compare treatment vs control. OUTPUT: bed file with header and columns: 1. chrom 2. start 3. end 4. targetID 5. flank_cnt 6. target_cnt 7. flank_len 8. target_len 9. log10_pval 10. log2fc EXAMPLE localEnrichmentBed.py -b rhh047.bam -t rhh047.macs_peaks.bed -g genome.fa.fai -bl blacklist.bed > out.bed Useful tip: Get genome file from bam file: samtools view -H rhh047.bam \ | grep -P "@SQ\tSN:" \ | sed 's/@SQ\tSN://' \ | sed 's/\tLN:/\t/' > genome.txt REQUIRES: - bedtools 2.25+ - numpy, scipy NOTES: For PE reads, the second read in pair is excluded (by samtools view -F 128) since coverageBed double counts pairs. optional arguments: -h, --help show this help message and exit --target TARGET, -t TARGET Target bed file where enrichment is to be computed. Use - to read from stdin. --bam BAM, -b BAM Bam file of the library for which enrichment is to be computed. --genome GENOME, -g GENOME A genome file giving the length of the chromosomes. A tab separated file with columns <chrom> <chrom lenght>. NB: It can be created from the header of the bam file (see tip above). --slop SLOP, -S SLOP Option passed to slopBed to define the flanking region (aka background). If `int` each target will be extended left and right this many bases. If `float` each target is extended left and right this many times its size. E.g. 5.0 (default) extends each target regions 5 times its length left and right. --blacklist BLACKLIST, -bl BLACKLIST An optional bed file of regions to ignore to compute the local background. These might be unmappable regions with 0-counts which would inflate the target enrichment. --tmpdir TMPDIR Temp dir to use for intermediate files. If not set python will get one. A subdir will be created here. --keeptmp If set, the tmp dir is not deleted at the end of the job (useful for debugging). --verbose, -V Print to stderr the commands that are executed. --version show program's version number and exit
biostars
{"uid": 195689, "view_count": 7693, "vote_count": 5}
How can I create a tabix .tbi index file for a .vcf file using Java (htsjdk)?
Use a [TabixIndexCreator][1] [1]: http://samtools.github.io/htsjdk/javadoc/htsjdk/index.html?htsjdk/tribble/index/tabix/TabixIndexCreator.html
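For the whole-file case, htsjdk's IndexFactory wraps TabixIndexCreator; a minimal sketch (untested, API names from memory, so treat it as an assumption to verify against the htsjdk javadoc; the input must be bgzip-compressed):

    import java.io.File;
    import java.io.IOException;
    import htsjdk.tribble.index.Index;
    import htsjdk.tribble.index.IndexFactory;
    import htsjdk.tribble.index.tabix.TabixFormat;
    import htsjdk.variant.vcf.VCFCodec;

    public class MakeTbi {
        public static void main(String[] args) throws IOException {
            File vcf = new File("input.vcf.gz"); // bgzipped VCF
            // build a tabix index using the VCF preset
            Index index = IndexFactory.createTabixIndex(vcf, new VCFCodec(), TabixFormat.VCF, null);
            // writes input.vcf.gz.tbi next to the input file
            index.writeBasedOnFeatureFile(vcf);
        }
    }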
biostars
{"uid": 136561, "view_count": 4282, "vote_count": 3}
Hello, I would like to convert blast tabular format to bed format. I am interested in the subject ID, start and end, so columns 2, 9, 10. Hence it seemed to me this simple awk would do:

    awk '{print($2"\t"$9-1"\t"$10)}' file.blastn > file.bed

However, when attempting a bedtools intersect

    bedtools intersect -a file.annotation.gff -b file.bed

    Error: unable to open file or unable to determine types for file file.bed

    - Please ensure that your file is TAB delimited (e.g., cat -t FILE).
    - Also ensure that your file has integer chromosome coordinates in the
      expected columns (e.g., cols 2 and 3 for BED).

I did check that the file is indeed TAB delimited. However, I realised that the difference between columns 10 and 9 was sometimes negative. Example:

    awk '{print $3 -$2}' AZF180.bed | head
    -200
    -214
    -253

It's not clear how I should deal with that. Simply swapping the columns so that rows with negative differences become positive? Thanks for the insight
I'll assume you have tabular blast output (if not, you should re-run and ask for tabular output from blast).

In the tabular output, if the alignment is forward-against-reverse, the subject coordinates will indeed run from large to small (instead of small to large). To get correct bed format you will need to switch those coordinates around. This can be achieved in the same awk command line you already used, expanding it a bit (tip: if-condition, as sketched below).
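Something like this (outfmt 6 assumed, so column 9 = sstart and column 10 = send; BED is 0-based half-open, hence the -1 on the start coordinate):

    awk 'BEGIN{OFS="\t"} {if ($9 <= $10) print $2, $9-1, $10; else print $2, $10-1, $9}' file.blastn > file.bed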
biostars
{"uid": 9470584, "view_count": 878, "vote_count": 1}
I'm having trouble with Bedtools Intersect. It is not reporting overlap when all of feature A is within feature B. Command: bedtools intersect -a File1.bed -b File2.gff -wao -f 0.90 -sorted > Out.file Out.file: VMNF01000005.1 484754 484980 . . + . . . -1 -1 . . . . 0 VMNF01000005.1 484754 484980 . . + . . . -1 -1 . . . . 0 VMNF01000005.1 484754 484980 . . + . . . -1 -1 . . . . 0 VMNF01000005.1 484754 484980 . . - . . . -1 -1 . . . . 0 VMNF01000007.1 6294425 6294650 . . - . . . -1 -1 . . . . 0 VMNF01000007.1 6294425 6294650 . . - . . . -1 -1 . . . . 0 VMNF01000007.1 6294425 6294650 . . - . . . -1 -1 . . . . 0 VMNF01000007.1 6294425 6294650 . . + . . . -1 -1 . . . . 0 VMNF01000008.1 1441418 1441616 . . - . . . -1 -1 . . . . 0 When I search manually, all of feature A sits within feature B. ![Image of overlap when viewed in Artemis. Feature A (pink and yellow highlighted region within circle) sits within feature B (red highlighted region).][1] Can anybody please explain where I have gone wrong? Bedtools version: bedtools v2.29.2 [1]: https://i.ibb.co/gy5x7V5/Artemis-VMNF-Promoter-in-red-mimp-in-pink.png
I will leave this for those who may be interested - the issue was that the scaffolds/contigs had a slight difference in the names between the .bed file and the .gff file. The .bed file included the suffix ".1". After adding ".1" to the scaffold/contig names in File2.gff, both bedtools intersect and bedmap worked. E.g.

    ==> File1.bed <==
    VMNF01000005.1  482754  486980  .       .       +
    VMNF01000005.1  482754  486980  .       .       +

    ==> File2.gff <==
    VMNF01000002    GenBank PROMOTER        1       34      .       +       1       ID=FocTR4_00017231;Name=FocTR4_00017231
    VMNF01000002    GenBank PROMOTER        4227    5039    .       -       1       ID=FocTR4_00017232;Name=FocTR4_00017232
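For anyone needing the same fix, one way to append the suffix to the first column of the GFF (a sketch; adjust to your own contig naming, and note it skips comment lines):

    awk 'BEGIN{FS=OFS="\t"} /^#/{print; next} {$1=$1".1"; print}' File2.gff > File2.fixed.gff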
biostars
{"uid": 442527, "view_count": 1380, "vote_count": 1}
How should we select the correct k-mer size for a KmerGenie analysis? I have read in the wiki page that it is the 'largest k-mer size to consider'. In that case I am looking for valuable comments from you people.
Smaller than the length of your reads, larger than 20. Attempt many different values and merge the assemblies in the end.
biostars
{"uid": 110453, "view_count": 4314, "vote_count": 2}
Does anyone know where I could find a comprehensive list of human CNV regions? I am working with 1000G variants and want to filter out variants within CNV regions, to see whether some patterns change or not.

Thanks, Federico
The 1000 Genomes Consortium recently published an integrated SV/CNV map for the 1000 Genomes phase 3 data: http://www.nature.com/nature/journal/v526/n7571/full/nature15394.html As a VCF file: ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/integrated_sv_map/ALL.wgs.integrated_sv_map_v2.20130502.svs.genotypes.vcf.gz Alternatively, Decipher also has a population CNV map in tab-delimited format: https://decipher.sanger.ac.uk/files/downloads/population_cnv.txt.gz
biostars
{"uid": 174343, "view_count": 5099, "vote_count": 1}
I have a design of 6 conditions with three replicates each, with expression levels for ~51000 transcription products. I'm using Limma's voom transformation to make my data approximately Gaussian, which by ocular pat-down seems to be the case:

![distribution of voom-transformed data](http://i.imgur.com/euA1mso.png)

The mean-variance estimation is visible here - kinda overdispersed in the mid range but it doesn't look critically bad to me:

![voom mean-variance trend](http://i.imgur.com/2Gxw28k.png)

I fit the linear model thusly:

```
> labels
 [1] A A A C C C G G G L L L T T T U U U
Levels: A C G L T U

design <- model.matrix(~labels)
fit2 <- eBayes( lmFit(voomCounts, design) )
```

However, this yields the following p-value distribution from the pairwise t-tests (looks like a misspecified model):

![p-value distribution from pairwise t-tests](http://i.imgur.com/yyOsYWN.png)

The `fit2$F.p.vals` has only 10-15 genes *above* 0.01, the rest are absolutely tiny; and when I plot some of the genes called for differential expression they look very unremarkable. What am I doing wrong?

Thanks so much :)
I do not know why you are doing voom counts, but if you want differentially expressed genes from 6 samples, let's say 3 replicates each of cases and controls:

`df <-` "is your dataframe containing 6 columns (3 for each) of normalized expression data"

then just go for

```
library(limma)
groups <- as.factor(c(rep("Cases",3), rep("Control",3)))
design <- model.matrix(~0+groups)
colnames(design) = levels(groups)
fit <- lmFit(df, design)
cont.matrix <- makeContrasts(Cases-Control, levels=design)
fit2 <- contrasts.fit(fit, cont.matrix)
ebfit <- eBayes(fit2)
topTable(ebfit, coef=1)
topTable(ebfit, number=Inf, p=0.05, adjust.method="none", coef=1)
## topTable would be your list of DEG
```
biostars
{"uid": 114905, "view_count": 5074, "vote_count": 1}
Hiho

I want to create a rarefaction curve using R, i.e. vegan {rarecurve/specaccum}. This worked out quite well (please see here): ![][1]

Anyway, as you can see in the figures the curves don't reach saturation. So I know that I have to somehow fit or simulate the missing data, but this is exactly the problem. How can I produce a fit or something similar that shows me how large my sample has to be to reach saturation in discovered species?

Thanks for your time and help!

ps. code:

    (raremax <- min(rowSums(t(species))))
    Srare <- rarefy(t(species), raremax)
    plot(specnumber(t(species)), Srare, xlab = "Observed No. of Species", ylab = "Rarefied No. of Species")
    abline(0, 1)
    rarecurve(t(species), step = 20, sample = raremax, col = "blue", cex = 0.6)

[1]: https://i.imgur.com/7gPo7gv.png
My quite naive suggestion is to estimate the richness with some estimator like Chao1 (should be in Vegan package), then extrapolate your richness curve to get 90% of estimated diversity and check for the sample size. See [iNEXT R package][1] for details on extrapolating rarefaction curves. [1]: http://chao.stat.nthu.edu.tw/blog/software-download/inext-r-package/
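A sketch of the iNEXT route (assuming an abundance matrix with species as rows and samples as columns; check the orientation of your `species` object and the package vignette before trusting this):

    library(iNEXT)
    out <- iNEXT(species, q = 0, datatype = "abundance")
    # rarefaction/extrapolation curves with confidence bands
    ggiNEXT(out)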
biostars
{"uid": 106561, "view_count": 19276, "vote_count": 2}
Hi all! I'm trying to do a heatmap with `plink`'s output for the IBD calculation (`.genome` file). This file has several columns, but only three are important to me, so my parsed input format looks like this:

    IID1 IID2 PI_HAT
    ID1 ID2 0.0163
    ID1 ID3 0
    ID1 ID4 0.0155
    ID2 ID1 0.0096
    ID2 ID3 0.0125
    ID2 ID4 0.475
    ...

I would like to do a heatmap with the PI_HAT values for all my IDs (I have hundreds). To my understanding, `R` asks for a matrix as input for the plot, but I'm not able to parse this input into the correct format (I actually get the heatmap, but all the values are wrong). Could someone please give me advice, or if there's another way to do the plot, I'd be glad to try it.

Thank you very much in advance!
Input data for `heatmap` should be a numeric matrix; your data is in long format, so we need to reshape it to wide, then convert it to a matrix for plotting. See the example:

    library(reshape2)

    # example data
    df1 <- read.table(text = "IID1 IID2 PI_HAT
    ID1 ID2 0.0163
    ID1 ID3 0
    ID1 ID4 0.0155
    ID2 ID1 0.0096
    ID2 ID3 0.0125
    ID2 ID4 0.475", header = TRUE, stringsAsFactors = FALSE)

    #convert long-to-wide
    x <- dcast(df1, IID1 ~ IID2, value.var = "PI_HAT")

    # convert to matrix with column AND rownames
    myM <- as.matrix(x[ , -1 ])
    row.names(myM) <- x$IID1

    # I am converting all NAs to 0, reconsider if this is suitable in your case.
    myM[ is.na(myM) ] <- 0

    #then plot
    heatmap(myM)

![heatmap of the PI_HAT values](https://i.imgur.com/jvpHnRq.jpg)
biostars
{"uid": 379352, "view_count": 1875, "vote_count": 1}
Hi, I have normalized microarray data and matched the probe identifiers to gene symbols, but now for some genes I have several probes. For example, for gene A I have several matched probes, so I have repeated rows for gene A. How can I take the mean over the expression of the repeated genes, so that each gene has a unique value?

This is my expression matrix

    > head(array[,1:10,1:5])
                       GSM482796 GSM482797 GSM482798 GSM482799
    1 OR2T6               0.0171   -0.1100   -0.0394   -0.0141
    2 EBF1                0.1890    0.0222    0.0832    0.0459
    3 DKFZp686D0972       1.9400    0.2530    0.3770    0.8310
    4 ATP8B4             -0.1490    0.0690   -0.0637   -0.0527
    5 NOTCH2NL            0.1540   -0.3880    0.2160   -0.0812
    6 SPIRE1              0.2920    0.1690    0.5500    0.1430
> limma::avereps(array,ID=rownames(array))
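Spelled out a little (assuming `array` is, or can be coerced to, a numeric matrix whose rownames are the gene symbols; if the symbols currently sit in a column, move them into the rownames first):

    library(limma)
    # average the rows that share the same gene symbol
    avg <- avereps(array, ID = rownames(array))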
biostars
{"uid": 403594, "view_count": 580, "vote_count": 1}
Hi all, I have bulk RNA-seq data with 12 samples - WT (x4), 'A' KO (x4), and 'B' KO (x4). I want to generate a 2D PCA plot (biplot) like the figure below to look at the relationship between the samples.

![a 2D PCA plot][1]

I have tried an R package, 'PCAtools', but it does not seem to work correctly, as below.

![the PCA plot generated by the code below][2]

I have pasted my code and data below. I will very much appreciate it if you share any advice or suggestions. Thanks in advance!

Joshua

    library(PCAtools)
    data <- read.csv("C:/.../all,gene_log2cpm_revised.csv", fileEncoding = 'UTF-8-BOM')
    groups = c(rep("WT", 4), rep("A-KO", 4), rep("B-KO", 4))
    cols = c('red', 'green', 'blue')[factor(groups)]
    data$gene_name = as.numeric(as.factor(data$gene_name))
    pca = prcomp(data)
    pca$x
    pca$sdev
    biplot(pca, cex=0.7, scale=T, xlim=c(-0.6,+0.6))

Data file format:

![Data file format top genes][3]

[1]: /media/images/7893d708-b043-4fc5-a698-665c99df
[2]: /media/images/428ae629-d824-4fc0-a2a2-9c61f108
[3]: /media/images/a9d95265-adc6-444c-9aca-17b400a3
Hi Joshua, Your second plot is not produced by my package, *PCAtools*. It is produced via base R functions. Can you take a look through my vignette, please? - https://bioconductor.org/packages/release/bioc/vignettes/PCAtools/inst/doc/PCAtools.html#quick-start-deseq2 You probably need something like: library(PCAtools) data <- read.csv( 'C:/.../all,gene_log2cpm_revised.csv', fileEncoding = 'UTF-8-BOM', row.names = 1) groups = c(rep("WT", 4), rep("A-KO", 4), rep("B-KO", 4)) cols = c('red', 'green', 'blue')[factor(groups)] metadata <- data.frame(groups = groups) rownames(metadata) <- colnames(data) p <- pca(data, metadata = metadata) biplot(p, colby = 'groups') Kevin
biostars
{"uid": 9495706, "view_count": 1415, "vote_count": 1}
I want to recall SNPs from Illumina HumanOmni2.5-4v1. I have the raw data (`Grn.idat` and `Red.idat`) files, and also a matching `FinalReport.csv` which include the next columns: SNP Name Sample Name GC Score Allele1- Forward Allele2- Forward Allele -Top Allele2-Top Allele1-Design Allele2-Design Allele1-AB Allele2-AB Theta R X Y X Raw Y Raw B Allele Freq, Log R Ratio And I have thousands of such files. How can I get the number of calls for a specific SNP? Should I use the raw idat files, or CSVs? And if the answer is the CSVs, then which column, and how to interpret it? Thanks
I finally ended up using the R package crlmm. It has a function named genotype.Illumina that does the recalling given the raw idat files.
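For the record, a rough sketch of a call (argument names as I recall them from the crlmm vignette, so verify against `?genotype.Illumina`; the cdfName for this chip is an assumption - check the package's list of valid chip names):

    library(crlmm)
    res <- genotype.Illumina(sampleSheet = samplesheet,   # data.frame describing the samples
                             arrayNames = arrayNames,     # basenames of the *_Grn/_Red.idat pairs
                             path = "idat_dir",
                             cdfName = "humanomni25quadv1b")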
biostars
{"uid": 177351, "view_count": 3393, "vote_count": 2}
Hi everyone. Many bioinformatics courses usually cover the same items, like matching alignments, genome assembly, sequence alignment, variant calling and so on. But I am interested in the algorithms on which, for example, BLAST or Artemis work (like choosing primers, finding splice sites, finding secondary structures of a family of RNA molecules etc). Could you provide me with some books or other sources where I can read about these non-obvious algorithms? Sorry if my question is silly; I know that I could google every algorithm I want to understand, but it would be better to have 2-3 sources with a good explanation. Thanks a lot.
Basically, the majority of the algorithms used in bioinformatics can be found in the following resources:

1. http://bioinformaticsalgorithms.com/
2. http://www.math-info.univ-paris5.fr/~lomn/Cours/BC/Publis/Complements/introductiontoBioinformaticsAlgorithms.pdf
3. https://mitpress.mit.edu/books/introduction-bioinformatics-algorithms
4. http://www.amazon.com/Introduction-Bioinformatics-Algorithms-Computational-Molecular/dp/0262101068
5. http://bix.ucsd.edu/bioalgorithms/
6. http://www.comp.nus.edu.sg/~ksung/algo_in_bioinfo/
biostars
{"uid": 164475, "view_count": 3766, "vote_count": 2}
I am trying to analyze the GSE30321 dataset in R. However, it has 295 samples separated into two files. I use

    gset <- getGEO('GSE30321', GSEMatrix=TRUE)

In this way, gset has two elements. Does anyone know how to merge the two files into one, so that gset has only one element with all 295 samples? Thank you.
    gselist = getGEO("GSE30321")
    eset = combine(gselist[[1]], gselist[[2]])

This is usually all that is needed.
biostars
{"uid": 73003, "view_count": 4901, "vote_count": 2}
Hello, I tried to download the library ERX5671923 from SRA using fastq-dump with the --split-files option. It is a library from the Fly Cell Atlas (experiment ERP129698 in SRA). I retrieved only one file per run (e.g. ERR6032593). As it is a paired-end 10X v3 library, I was expecting to retrieve 2 or 3 files (Read 1, Read 2 and potentially Index 1), but it contains only one single file with 91bp reads. Do you have any idea if it is possible to use this file and, if yes, how to use it? I want to generate abundance matrices using kallisto/bustools.

Best Wishes, Julien
It happens quite often that R1 is missing, don't ask me why. The good thing is that submitters often provide BAM files, allowing reconstruction of the fastq files from there. That is the case [here][1] for all four accession numbers. See for example the bottom of [here][2]. You can conveniently get BAM files with `prefetch` from the sra-toolkit:

```bash
mamba install -c bioconda sra-tools

prefetch --type bam --max-size 9999999999 -O ./ ERR6032593
```

Sometimes in `Type` (see below) it doesn't say `bam` but something like `10X Genomics bam file`, for example [here][3]. Then you can use `--type TenX` with `prefetch` afaik.

![screenshot of the SRA data-access table showing the file Type column][4]

Once you have the BAM files **and it is the BAM file from CellRanger**, use the bamtofastq utility from 10x to convert the bam back to fastq: https://support.10xgenomics.com/docs/bamtofastq

If the BAM was made by an alternative pipeline you will probably need to do custom parsing to recreate the R1 file, as technically scRNA-seq (10x) is single-end sequencing using R2, while R1 is not used for the actual alignment but the CB/UMI are processed differently. You probably need to access the tags that store the CB and UMI sequences and recreate R1 accordingly, putting these sequences into the read positions where either CellRanger or your processing pipelines expect them. For example, 10x Chromium 3' v3 has the CB at R1 positions 1-16 and the UMI at 17-28, so that is relatively easy to parse from the BAM tags (I guess, untested, never done manually myself). But then again there are probably corner cases, so be careful.

See also: https://bioinformatics.stackexchange.com/a/15523

[1]: https://www.ncbi.nlm.nih.gov/Traces/study/?acc=ERX5671923&o=acc_s%3Aa
[2]: https://trace.ncbi.nlm.nih.gov/Traces/index.html?view=run_browser&acc=ERR6032593&display=data-access
[3]: https://trace.ncbi.nlm.nih.gov/Traces/index.html?view=run_browser&acc=SRR5167880&display=data-access
[4]: /media/images/6b04d15d-d005-4699-a986-3ce7c53f
biostars
{"uid": 9556827, "view_count": 307, "vote_count": 1}
Hello! I don't know what the difference between "ENSG00000002586.1" and "ENSG00000002586.19_PAR_Y" in Ensembl gene/transcript IDs means. How do you deal with the IDs with "_PAR_Y" in the RNA-seq mapping process?

Thanks,
Just found out that these suffixes are added in the GENCODE annotation files. We need to discuss how we deal with these because we can't be putting out IDs with one hand and not providing tools that can interpret them with the other hand. I'm sorry for any confusion.
biostars
{"uid": 398174, "view_count": 3171, "vote_count": 1}
Hi! I am currently working on analyzing alpha and beta diversity with microbiome data from 48 samples (animal stool samples). I finished the beta diversity analysis and got the significance data (weighted UniFrac, Bray-Curtis). However, I received this message and need some help from you. The message is as below (from a reviewer):

------

I highly recommend the use of **linear modeling (LM) or generalized linear modeling (GLM), which is** commonly used in microbiome studies, **rather than a Wilcoxon rank-sum test**. This will allow you to better control for things that may impact your results, such as the age of the animals. It is also important to control for site, as this is known to significantly affect the gut microbiome and could be included as a random variable in a generalized linear mixed model (GLMM).

----------

I found that negative binomial regression or Poisson regression are the most commonly used GLMs for microbial analysis, and I am trying to use these models to compare the alpha and beta diversity differences in the 16S rRNA data. However, my question is: which package in R or Python is recommended for this kind of microbial analysis? (Perhaps many researchers in the metabarcoding field use some common statistical tools.)

I would also like to know how I can modify the data file (adding a dependent variable, etc.) for use in the GLM. (Which variables should be added to compare alpha and beta diversity, and how can I calculate them?)

My metadata file is as below

![Metadata file to be used in the GLM][1]

[1]: /media/images/478208ba-66c0-4776-a09d-8228c249

Any recommendation about a package, a specific GLM model, or the way to modify the data would be a big help for me to solve the problem. Thank you
In R, linear models are available with the function lm() and generalized linear models with glm(), both from package stats. For mixed effects, you can use the [lme4 package][1] (functions lmer() and glmer()). There's also the function glmnet() from the [glmnet package][2] if you want LASSO or elastic net regularization. [1]: https://cran.r-project.org/package=lme4 [2]: https://cran.r-project.org/package=glmnet
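To make that concrete, a sketch of a GLMM along the lines the reviewer suggests (column names are hypothetical; here Shannon diversity is modeled with group and age as fixed effects and site as a random intercept):

    library(lme4)
    fit <- lmer(shannon ~ group + age + (1 | site), data = metadata)
    summary(fit)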
biostars
{"uid": 9543405, "view_count": 786, "vote_count": 2}
I have a list of genomic ranges mapped to hg19. My data is in matrix format; let's call it `ranges`, which has 600,000 rows and 4 columns. Here are a few rows of my data

    head(ranges)
         chr    start    end      strand
    [1,] "chr1" "10025"  "10525"  "."
    [2,] "chr1" "13252"  "13752"  "."
    [3,] "chr1" "16019"  "16519"  "."
    [4,] "chr1" "96376"  "96876"  "."
    [5,] "chr1" "115440" "115940" "."
    [6,] "chr1" "235393" "235893" "."

Is there a function that gets the sequences and calculates the GC content for each row (each range)? I would prefer the output to be in vector format. I would really appreciate your help
Hi, You can use the `getSeq` and `alphabetFrequency` functions from Bioconductor's [Biostrings](https://bioconductor.org/packages/Biostrings/) package to obtain the sequence and compute the frequency of each nucleotide, and then, just divide the sum of G+C by the total number of nucleotides. To use `getSeq` the easiest way is to create a GRanges object from your matrix. I use `toGRanges` from [regioneR]( https://bioconductor.org/packages/regioneR/) package, but there are other ways to build it. library(regioneR) library(BSgenome.Hsapiens.UCSC.hg19) ranges <- matrix(c("chr1", "10025", "10525", ".", "chr1", "13252", "13752", "."),ncol = 4, byrow = TRUE) colnames(ranges) <- c("chr", "start", "end", "strand") #Build a GRanges from your matrix ranges <- toGRanges(data.frame(ranges)) #Get the sequences and compute the GC content freqs <- alphabetFrequency(getSeq(BSgenome.Hsapiens.UCSC.hg19, ranges)) gc <- (freqs[,'C'] + freqs[,'G'])/rowSums(freqs) Here `gc` is a vector with the GC frequency of each range in the original matrix. Hope this helps Bernat
biostars
{"uid": 478444, "view_count": 2573, "vote_count": 2}
The site that normally hosts the gmap/gsnap source code has gone away: http://research-pub.gene.com Does anyone know if this intentional or when it will be back?
This has now been resolved and http://research-pub.gene.com/gmap/src/ is available again. For future reference raising an issue via this URL under the "Website Usability" seemed to get this to the correct people. https://www.gene.com/contact-us/email-us
biostars
{"uid": 391989, "view_count": 719, "vote_count": 3}
Hi all,

Our lab has sequenced 20 different strains of the same bacterial species. I now want to analyse these data, i.e., link genomic data to virulence (some of the strains are virulent, others aren't) and other phenotype differences.

I started with de novo assemblies and mapping to an annotated reference genome. However, it is unclear to me how to proceed now in a good way. Most of the information for comparative genomics I find on the web applies only to the human genome or one-to-one comparisons of bacterial genomes.

Is there a standard workflow for this kind of analysis? Does anybody know of any good tutorials/references/articles? How would you proceed?
I don't think there is a single best standard approach to this, but [Bakker et al.][1] did a comparative genomics study on Listeria with the goal of identifying virulence genes. You could simply try to replicate their methods on your data. The main focus seems to be on the gene level: identifying a subset of virulence-related candidate genes, and then looking for whole-gene deletions. Thereby I would also include biological knowledge on bacteria in general and knowledge about this specific species, as to what are good candidate sets of genes (e.g. Type III secretion system, etc.). The aim is to identify if there are differences in the inventory of genes. In addition, in case the whole-gene deletion/insertion approach does not yield good candidate genes, I would run the whole raw data through a variant calling pipeline and look at smaller variation like small insertions or deletions, frame-shifts, and mutations in CDS and promoter sequences. To accomplish this a standard pipeline (e.g. [BWA][2] into [samtools][3]) could be used. You can then check variants common to certain phenotypes and check their effect on the protein level. Further, looking at non-coding elements might be interesting too.

[1]: http://www.biomedcentral.com/1471-2164/11/688/
[2]: http://bio-bwa.sourceforge.net/
[3]: http://samtools.sourceforge.net/
biostars
{"uid": 49604, "view_count": 4847, "vote_count": 8}
Hi, I have data from a specific cell type from mice fed a certain diet. I integrated 4 datasets that were measured at four different time points for the integrated single-cell RNA-seq analysis. I have been referring to the Seurat vignette: https://satijalab.org/seurat/v3.1/immune_alignment.html. I am using SingleR to identify the cell type for each cluster, and I am wondering if I need to set `DefaultAssay` to "RNA" or "integrated". I tried both, but they gave me slightly different results for the cell type identification. Should I keep `DefaultAssay` as "RNA" or "integrated"? Any thoughts and advice are greatly appreciated. Thank you.
The assay should be RNA, since SingleR expects expression values.
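i.e., before extracting the data for SingleR (assuming your object is called `seurat_obj`):

    DefaultAssay(seurat_obj) <- "RNA"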
biostars
{"uid": 442514, "view_count": 2755, "vote_count": 3}
Dear Friends, I have a matrix like this:

    Gene    BRCA    THYM    TGHJ
    ACC     23      21      7
    XTG     12      13      9
    CFG     45      4       8

The numbers are the SNP counts from a VCF file. I want to plot this in the form of a matrix with the numbers colored based on their count; for example, the highest number is colored "Red" and then the intensity of the color gradually decreases with decreasing counts, so in this case 45 is colored "Red" and 4 is colored with a very light color. Please let me know if I am clear. I am looking to plot this matrix using ggplot2, but other ways in R are also very welcome.

A matrix like this:

![enter image description here][1]

[1]: https://s8.postimg.cc/65y00ijs5/Screen_Shot_2018-08-02_at_1.05.39_AM.png
I remember seeing a package for this exact task, but can't find it. So here are the starting steps; it will require more work to make it as pretty as your example plot:

    # example input
    df1 <- read.table(text = "
    Gene    BRCA    THYM    TGHJ
    ACC     23      21      7
    XTG     12      13      9
    CFG     45      4       8", header = TRUE)

    library(ggplot2)
    library(dplyr)
    library(tidyr)

    plotDat <- gather(df1, key = "Gene2", value = "value", -Gene)

    ggplot(plotDat, aes(Gene, Gene2, col = value, fill = value, label = value)) +
      geom_tile() +
      geom_text(col = "black") +
      theme_minimal() +
      scale_fill_gradient2(low = "white", mid = "yellow", high = "red") +
      scale_color_gradient2(low = "white", mid = "yellow", high = "red")

![enter image description here][1]

[1]: https://i.imgur.com/cIDojU0.jpg
biostars
{"uid": 330285, "view_count": 9599, "vote_count": 2}
I'm interested in comparing the genes and genes sets regulated by two different drugs. I performed a microarray on Drug A vs control, and Drug B vs control. Using Limma, I calculated the differential expression for all 20,000 genes, and performed a pre-ranked GSEA. I now have ~700 gene sets that are depleted by Drug A, and ~800 gene sets that are depleted by Drug B (FDR<0.05). There are ~280 gene sets that are depleted by both Drug A and Drug B. **Taking the pre-ranked GSEA output, how can I meaningfully compare these 280 gene sets depleted by both treatments to characterize similarities and differences between these treatments**. If I am interested in PI3K signaling, could I compare the Normalized Enrichment Scores (NES) for all of the gene sets involved in PI3K signaling and perform a Wilcoxon test to determine if there is different enrichment between the two treatments? Can I take the number of genes within a gene set regulated by each drug and meaningfully compare those gene sets? Thank you for the help, a few pointers or a publication will set me on the right track. I've searched and can't find analyses like this.
I believe that [gene set variation analysis (GSVA)](https://bioconductor.org/packages/release/bioc/html/GSVA.html) will work here. This will allow you to compare two sets of enriched terms and pathways. It may take a day to learn the methods, but the [tutorial](https://bioconductor.org/packages/release/bioc/vignettes/GSVA/inst/doc/GSVA.pdf) is a good starting point.

Other than that, just a manual 'human' comparison by looking over the top enriched terms should suffice. At the end of the day, we have to learn to disengage the computer and start to interpret results ourselves.
biostars
{"uid": 285119, "view_count": 6578, "vote_count": 2}
Hi all, reading carefully the documentation of the VarScan CNA pipeline, I noticed in step 4 the following suggestion:

> If all of the data and segments are consistently above or below the neutral value (0.0), you can re-center the data points with VarScan copyCaller.

My data seem to belong in this category after plotting in R using the DNAcopy package; all segments are consistently below 0.0. So, my question is: how should I know by how much I need to re-center my data? I mean, I can estimate this by eye, but how accurate can that be? Is there a proper way to calculate it? Thank you in advance.
Here's an old script I have laying around that does the recentering. Run it on the varscan copyCaller output, then repeat segmentation on the new recentered file. https://gist.github.com/chrisamiller/f2c9e8bd565d500a8e8166fbd5eba2b1
biostars
{"uid": 194056, "view_count": 2800, "vote_count": 2}
I'm trying to achieve what this post was looking for: https://www.biostars.org/p/77012/

Currently this is my command:

    bcftools mpileup -Ou --max-depth 8000 --min-MQ 30 --min-BQ 30 -f reference.fasta sample1.sorted.bam | bcftools call --ploidy 1 -Ou -mv | bcftools filter -s LowQual -e '%QUAL<20' > sample1.flt.vcf

    #CHROM  POS  ID  REF  ALT  QUAL  FILTER  INFO  FORMAT  sample1.sorted.bam
    Imtechella_halotolerans_length_3113269  4051  .  C  T  41.4148  PASS  DP=2;VDB=0.02;SGB=-0.453602;MQ0F=0;AC=1;AN=1;DP4=0,0,0,2;MQ=42  GT:PL  1:71,0
    Imtechella_halotolerans_length_3113269  4081  .  C  T  45.4146  PASS  DP=2;VDB=0.02;SGB=-0.453602;MQ0F=0;AC=1;AN=1;DP4=0,0,0,2;MQ=42  GT:PL  1:75,0

As far as I can understand, the DP specified in INFO is the depth of coverage across all samples. How can I get the per-sample depth in the FORMAT/genotype field? Any help appreciated!
Hi, you can specify this via the `mpileup` command. Please take a look at this snippet from my historical pipeline: - <a href="https://github.com/kevinblighe/ClinicalGradeDNAseq/blob/master/AnalysisMasterVersion1.sh#L300-L316">ClinicalGradeDNAseq#L300-L316</a> That is: bcftools mpileup \ --redo-BAQ \ --min-BQ 30 \ --per-sample-mF \ --annotate FORMAT/AD,FORMAT/ADF,FORMAT/ADR,FORMAT/DP,FORMAT/SP,INFO/AD,INFO/ADF,INFO/ADR \ -f "${Ref_FASTA}" \ Aligned_Sorted_PCRDuped_FiltMAPQ.bam |\ bcftools call \ --multiallelic-caller \ --variants-only \ -Ob > Aligned_Sorted_PCRDuped_FiltMAPQ.bcf ; Kevin
biostars
{"uid": 9481584, "view_count": 3057, "vote_count": 2}
Is there a software package for GWASs that can include an interaction effect (SNP x predictor) in the regression using dosage data?
Can you use [PLINK2](https://www.cog-genomics.org/plink2/assoc#linear)? That provides an interaction term for the linear regression... In terms of the dosage data, do you have to use the probabilistic data, or would best-guess hard-calling work?
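For what it's worth, a hedged sketch with current PLINK 2.x syntax — check the `--glm` documentation for your build, and treat the file names and the covariate name `ENV` as placeholders:

    # import dosages (DS field of a VCF) and fit an additive model with a
    # SNP x covariate interaction term for each variant
    plink2 --vcf data.vcf.gz dosage=DS \
           --glm interaction \
           --covar covariates.txt --covar-name ENV \
           --pheno pheno.txt --out gxe_results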
biostars
{"uid": 101581, "view_count": 4797, "vote_count": 1}
Hi everyone,

I was wondering if someone can point me to a good explanation where I can better understand how the reads in my fastq files are created. In the fastqc results from the data set I got, we assume that a lot of the adapter was sequenced (image [2]). I'm not talking about read-through; the adapter itself somehow was sequenced. At least we think this is what we have. I have tried to understand in which direction the reads in my fastq files are read (see the overrepresented-sequences example, image [1]).

What I don't get is on which side I would find my adapters, if any are left. Are they at the beginning of my read, so I should crop the head of the read, or should I trim the end of the read because the adapters are there? Is there a good explanation for that somewhere?

thanks
george

![1]
![2]

[1]: /media/images/dd7a724b-980d-4bd9-8291-5ee57fc3
[2]: https://i.postimg.cc/9MZhcLMj/per-base-sequence-content.png
In Illumina sequencing, adapters are always going to be present at the 3' end of the read (unless you are using some modification of the standard procedure). You can also have adapter dimers (i.e. no insert). That said, what kind of data is this? It looks like you can almost read the sequence off the plot, so amplicons perhaps?
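So trimming should target the 3' end. A sketch with cutadapt, assuming the standard TruSeq adapters — substitute the sequences for whatever kit your library was actually made with:

    cutadapt -a AGATCGGAAGAGCACACGTCTGAACTCCAGTCA \
             -A AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGT \
             -o trimmed_R1.fastq.gz -p trimmed_R2.fastq.gz \
             reads_R1.fastq.gz reads_R2.fastq.gz

`-a`/`-A` trim a 3' adapter from read 1 and read 2 respectively, which is exactly the read-into-adapter situation described above.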
biostars
{"uid": 9505653, "view_count": 642, "vote_count": 1}
Hello everyone! This is my first time posting here so I hope I am doing this the right way.

I'm a student in bioinformatics in Québec, Canada and I just started a summer project with my teacher. I have to predict disordered proteins from a database containing a lot of sequences. I don't know if any of you is familiar with this process but I'll ask anyway.

I'll use DISOPRED3 for the prediction. The problem is that the results I get on my personal computer are not the same as those I get from the server itself. Before using my scripts to run predictions on all the files I have, I would like to get the same results as the server. I use UniRef90 (like they do) and I installed the latest version of the program. What could be different and cause the small differences? I already sent an email asking what they are using but they haven't responded yet.

Thank you, any help will be appreciated!
If you read the FAQ at the psipred download page (http://bioinfadmin.cs.ucl.ac.uk/downloads/psipred/), you get:

> "2. The next most common question we get regards getting different predictions from our PSIPRED web server from those you get on your own system. There are many reasons why this might occur, but the most obvious one is that we are using a different sequence data bank to the one you are using. Because modern secondary structure prediction methods are based on analysing multiple sequence alignments, if your data bank includes some extra sequences or misses out some sequences compared to our local data bank then you might get a slightly different prediction. Hopefully the differences will be small, but they can be quite significant if the alignment only includes a few sequences. So, if our server can find say just 5 homologous sequences and your system finds 20 homologous sequences to align, then the predictions may be very different indeed. That's just the reality of analysing evolutionary information. If the alignments are different, then the predicted secondary structure will probably be different. It's also possible that we are running a slightly older version of PSIPRED. Oddly enough we don't update our servers immediately after releasing a new version of PSIPRED as we need to do internal testing first. So, check that you are running the exact same version of PSIPRED as our server is currently running. Even so, it's likely that the real problem is going to be down to differences in the data banks and alignments that are produced."

Also, I am no expert with this software, but some structure modelling software is not always deterministic and has a constant seed parameter for debugging purposes. Others (like Rosetta Antibody) are wonderful but will never run or even compile correctly on your laptop.
biostars
{"uid": 141216, "view_count": 2354, "vote_count": 1}
Dear all,

I want to do GATK variant calling. I have several raw paired-end-read fastq files for one pig individual, and I have finished mapping using bwa mem. The GATK best practices (for calling germline SNPs + indels) say that after mapping, I need to merge bam files, and they recommend `picard MergeBamAlignment`.

What's the difference between `picard MergeBamAlignment` and `samtools merge`, please? I found that the input files they need are different: the former needs an unmapped bam file as well as the reference genome file. And in my understanding, each `MergeBamAlignment` command is only for one fastq (pair) file, not for multiple fastq files. Is there any other difference, please? In particular, are their aims and their output results any different?

I feel that `MergeBamAlignment` is for combining the information from both the mapped file and the unmapped file, while `samtools merge` is for combining different mapped files into one big mapped file. Or can I ignore GATK's advice and simply use `samtools merge` instead?

Thank you.
Yingzi
Hi Yingzi, so you are here! I haven't used `MergeBamAlignment` (Picard), but I think the best way to learn about a piece of software is to read the original documentation. As for your case, we can first find the associated description in the documentation of [MergeBamAlignment (Picard)](https://software.broadinstitute.org/gatk/documentation/tooldocs/4.0.0.0/picard_sam_MergeBamAlignment.php), which reads:

>A command-line tool for merging BAM/SAM alignment info from a third-party aligner with the data in an unmapped BAM file, producing a third BAM file that has alignment data (from the aligner) and all the remaining data from the unmapped BAM. Quick note: this is not a tool for taking multiple sam files and creating a bigger file by merging them. For that use-case, see {@link MergeSamFiles}.

Secondly, in the documentation of [samtools](http://www.htslib.org/doc/samtools.html) we can find the function of merge:

>Merge multiple sorted alignment files, producing a single sorted output file that contains all the input records and maintains the existing sort order.

As you can see, the two tools have different functions, so you need to use both of them. What's more, you can also merge multiple bam files using [MergeSamFiles (Picard)](https://software.broadinstitute.org/gatk/documentation/tooldocs/4.0.0.0/picard_sam_MergeSamFiles.php) instead of `samtools`. The description of `MergeSamFiles` (Picard) reads:

>Merges multiple SAM and/or BAM files into a single file. This tool is used for combining SAM and/or BAM files from different runs or read groups into a single file, similarly to the "merge" function of Samtools (http://www.htslib.org/doc/samtools.html).

So I can confidently say that what you have understood is right.

Yours :)
Duo
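For reference, merging the per-lane BAMs of one individual would look something like this (file names are placeholders):

    # samtools: inputs must already be sorted the same way
    samtools merge merged.bam lane1.sorted.bam lane2.sorted.bam lane3.sorted.bam

    # Picard equivalent
    java -jar picard.jar MergeSamFiles I=lane1.sorted.bam I=lane2.sorted.bam O=merged.bam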
biostars
{"uid": 344609, "view_count": 3669, "vote_count": 1}
Hey, I'm new here, and pretty new to R as well. I am trying to perform a pairwise alignment on 25 protein sequences from Uniprot. I have 25 Fasta files in this format:

    >tr|A0A287AI92|A0A287AI92_PIG Carbonic anhydrase 1 OS=Sus scrofa OX=9823 GN=CA1 PE=1 SV=1
    MTSPAWGYDGEYGPEHWSKVYPIANGNNQSPIDIKTSETKHDTSLKPISV.....

I loaded the Fasta files into R using:

    ProtSeq_1 <- read.fasta("C:/Users/tiriy/Documents/A0A287AI92.fasta")

Now I'm trying to arrange the Fasta files into a data frame - which I miserably failed at doing - and loop the pairwise alignment with a for loop like so:

    comb <- combn(10,2)
    MT1 <- a_data_frame
    for (i in 1:ncol(comb)) {
      x <- pairwiseAlignment(
        toString(MT1[comb[1,i],1]),
        toString(MT1[comb[2,i],1]),
        substitutionMatrix = "BLOSUM100",
        gapOpening = -2,
        gapExtension = -8,
        scoreOnly = FALSE)
      fileA <- paste0("C:/Users/uri/Desktop/", i, "-", "blosum", ".txt")
      writePairwiseAlignments(x, file=fileA)
    }

- How can I put all the sequences into a data frame?
- Or otherwise, any suggestion on looping the pairwise alignment over these fasta files?

Many thanks in advance! I hope I'll be able to give my aid back in the future.
So since I was able to solve all my problems alone, I thought it would be helpful if I posted my script here for future reference. (Note: the final product is a data frame of all pairwise-alignment combinations of the sequences, with the sequence descriptions for both subject and pattern and their scores.)

    # Installing and loading packages, pairwise alignment, BLOSUM50 and BLOSUM100 ----

    # Installing the seqinr R package:
    install.packages("seqinr")
    # seqinr end

    # Installing Bioconductor - Biostrings
    chooseCRANmirror()                    # Getting the list of CRAN mirrors
    1                                     # Choosing the first option
    install.packages("BiocManager")       # Installing BiocManager in order to install Biostrings
    BiocManager::install("Biostrings")    # Installing Biostrings via BiocManager
    # End of Biostrings installation: Biostrings, BiocManager

    # seqinr:
    require(seqinr)
    # seqinr end

    # Biostrings:
    require(Biostrings)                   # Loading Biostrings into the library
    # Biostrings end

    # Step 1 - Listing all fasta files ----
    all_fasta <- list.files("C:/Users/tiriy/Desktop/TheProject/ProjectR/fastas")
    # Making a vector to refer to all fasta files in a certain folder
    # End of step 1

    # Step 2 - Making a data frame out of the files ----
    # using the Biostrings function readAAStringSet
    # and converting the result into a data frame:
    A <- readAAStringSet(all_fasta)       # Reading all fasta files at once {Biostrings}
    A1 <- names(A)                        # Vector holding all the names from the fasta
                                          # files retrieved in vector A
    A2 <- paste(A)                        # Vector A2 with the pasted sequences from vector A
    ProtDF <- data.frame(A1, A2)          # Using A1 as the first column and A2 as the second column
    # End of step 2

    # Step 3 - Generating all combinations for the pairwise alignment ----
    D <- combn(25, 2)
    # End of easy peasy step 3

    # Step 4 - Loading data ----
    data("BLOSUM50")
    data("BLOSUM100")
    # End of step 4

    # Step 5 - Assigning empty vectors for the for loop ----
    ScoreX <- c()
    nameSeq1 <- c()
    nameSeq2 <- c()
    # End of step 5

    # Step 6 - THE LOOP ----
    for (i in 1:300) {
      dumpme <- pairwiseAlignment(toString(ProtDF[D[1,i],2]),  # combining all pairwise alignments possible:
                toString(ProtDF[D[2,i],2]),                    # for 25 sequences = 300 combinations
                substitutionMatrix = "BLOSUM50",               # pairwise alignment settings
                gapOpening = -2,
                gapExtension = -8,
                scoreOnly = FALSE)
      # filling the vectors
      ScoreX[i] <- c(dumpme@score)        # score
      nameSeq1[i] <- c(as.character(ProtDF[D[1,i],1]))
      nameSeq2[i] <- c(as.character(ProtDF[D[2,i],1]))
    }
    # End of step 6

    # Step 7 - Creating a data frame ----
    BLOSUM50DF <- data.frame(nameSeq1, nameSeq2, ScoreX)            # DF columns: sequence 1 and 2 names as written in the fasta,
    colnames(BLOSUM50DF) <- c("Sequence 1", "Sequence 2", "Score")  # plus the score achieved in the pairwise alignment
    write.csv(BLOSUM50DF, file="C:/Users/tiriy/Desktop/TheProject/BLOSUM50DF.csv")  # writing to file (csv)
    # End of step 7
biostars
{"uid": 373531, "view_count": 1675, "vote_count": 1}
Hi everyone, I have exome data of 25 unrelated patients and 40 unrelated control samples. I'm looking for rare variants associated with the disease. I've checked some obvious things: one variant present in cases not in controls. One gene enriched for rare variants in cases compared to controls. However, now I want to do some statistical testing to find association of a variant/gene with the disease. Since this is just a small-scale study I don't think I can work with methods used in GWAS. Can someone point me to some papers/methods/ideas that could be of interest for my specific situation? Thanks in advance
As Sam pointed out, SKAT is a good place to start but unless there is a **very** strong signal from one gene/variant it's unlikely that you will get a statistically significant association from 25 cases. [This paper][1] from Lander & co has some calculations to estimate minimum sample size needed in these types of studies to get good results, and even under the best conditions there generally needs to be hundreds of cases. But it's worth a shot and even if nothing comes up with a low enough p-value the top hits still might be of interest. If you haven't already, try answering questions such as: "what rare variants show up in *at least two* cases, but no controls", "what genes contain novel variants in *at least two* cases, but no controls" Have you done any filtering by types of variants (eg, removing all synonymous variants from your analysis, or only looking at stop loss/gain variants)? [1]: http://www.pnas.org/content/early/2014/01/16/1322563111
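If you do go the SKAT route, the basic call is short. A sketch — `y` and `Z` are placeholders for your phenotype vector and a per-gene genotype matrix:

    library(SKAT)

    # y: binary phenotype (1 = case, 0 = control), length = number of samples
    # Z: genotype matrix (samples x variants) for one gene/region
    obj <- SKAT_Null_Model(y ~ 1, out_type = "D")  # "D" = dichotomous outcome
    SKAT(Z, obj)$p.value                           # region-level association p-value

You would run this per gene (or per region), using only the rare variants that survive your functional filtering, and then correct for the number of genes tested.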
biostars
{"uid": 104253, "view_count": 3594, "vote_count": 3}
Hi all,

When I'm using `FeaturePlot()` for one gene, my graph has the color gradient guide, but when I plot multiple genes, the color gradient guide is missing.

    FeaturePlot(seurat.HSC, features = 'Ecm1', min.cutoff = 'q10')

![enter image description here][1]

vs

    FeaturePlot(seurat.HSC,
                features = c('Ecm1','Gucy1b1','Acta2'),
                min.cutoff = 'q10',
                split.by = 'sample')

![enter image description here][2]

How do I solve this? Thanks for any help.

[1]: /media/images/9bbc9437-c384-4bc8-a874-3ee0b01f
[2]: /media/images/a0ae54a0-658a-4dc7-bfb7-da0079e4
Hello, I think the problem is related to *split.by*, try to use this: FeaturePlot(seurat.HSC, features = c('Ecm1','Gucy1b1','Acta2'), min.cutoff = 'q10', split.by = "sample") & theme(legend.position = "right") Best
biostars
{"uid": 9529373, "view_count": 1125, "vote_count": 1}
This is a long post, detailing my observations with questions at the end.

I have (foolishly) volunteered to look at some proteomics data in a proprietary format. My aim is to convert it to mzXML, then generate "annotated PMF spectra" - that is, a plot of intensity v. m/z ratio where peaks are labelled with mass and perhaps peptide and positions in the protein.

The data are from a Voyager DE-STR MALDI-TOF instrument; the file suffix is ".dat". It seems from [this information](http://tools.proteomecenter.org/wiki/index.php?title=Formats:mzXML) that there are few options. One is to install [PyMsXML](http://edwardslab.bmcb.georgetown.edu/software/PyMsXML.html) on a Windows machine which also has the proprietary Data Explorer software. PyMsXML appears not to have been updated since 2007. Another possibility might be the executables from [ProteoWizard](http://proteowizard.sourceforge.net/), again requiring Data Explorer.

This is tremendously painful for me, since I never use Windows. However, I do have a version of WinXP installed as a virtual machine using VirtualBox (Ubuntu). I have worked through the PyMsXML installation guide, with the following results:

**1. Download and install ActivePython**

The latest version from ActiveState is 2.7.0.2 (or for Python3, 3.1.2.4). However, the 2.7 version does not appear to include the "COM Makepy utility" referred to in the PyMsXML instructions. I downloaded and installed the earliest available free version, 2.5.5.7, which does include the utility.

**2. Install Data Explorer**

I have been sent a zip archive. Confusingly it is named "DataExplorer5.1.zip", but the actual version seems to be 4.0.0.0. Anyway, it seems to install and run OK.

**3. Install COM library interfaces**

The instructions are to open the COM Makepy utility and look for "ExploreDataObjects 1.0 Type Library (1.0)" and "IDAExplorer 1.0 Type Library (1.0)" - the latter is for .dat files. Neither of these exists. However, there is a library named "Data Explorer 4.2 Type Library (4.2)". The interface to this appears to install correctly.

The PyMsXML instructions then refer to a couple of tests to check installation. The test for Analyst files fails, but the one for Data Explorer appears to pass.

**4. Download, install and edit the PyMsXML scripts**

This step is fine. Next - run on a test file. I run:

    pymsxml -R voyager -o myfile.mzXML myfile.dat

And I get the error:

    Traceback (most recent call last):
      File "C:\bin\pymsxml.py", line 1796, in <module>
        x.write(debug=opts.debug)
      File "C:\bin\pymsxml.py", line 83, in write
        self.write_scans(tmpFile,debug)
      File "C:\bin\pymsxml.py", line 300, in write_scans
        for (s,d) in self.reader.spectra():
      File "C:\bin\pymsxml.py", line 1528, in spectra
        (tf,fixedMass) = doc.InstrumentSettings.GetSetting(self.delib.constants.dePreCursorIon,i-1,None)
    AttributeError: class constants has no attribute 'dePreCursorIon'

I have much less to say about the ProteoWizard executables: they all fail to run with the message "The system cannot execute the specified program." I briefly attempted to build from source under Cygwin, but gave that up as a waste of time.

So my questions are:

- Has anyone got PyMsXML to run using ActivePython > 2.4?
- Any idea what the PyMsXML error message means?
- Any tips at all for getting PyMsXML, ProteoWizard or any other tool to convert Voyager .dat files to mzXML?
This one is tough to debug fully without Windows and the Data Explorer software. The problem is that the Data Explorer library, which is loaded via COM, is missing a constant that PyMsXML is expecting. Here are the relevant lines of code plucked from the source: from win32com.client import Dispatch, gencache self.delib = gencache.EnsureModule('{06972F50-13F6-11D3-A5CB-0060971CB54B}', 0,4,2) (tf,fixedMass) = doc.InstrumentSettings.GetSetting( self.delib.constants.dePreCursorIon,i-1,None) Since PyMsXML is from 2007, the best guess is that something has changed in the Data Explorer API since then that is breaking it. One thing you could do is put a: print dir(self.delib.constants) In front of the error line and see what the available constants are; if you're lucky maybe the constant will have changed names to something you can recognize and you'll be able to update the code and get it running.
biostars
{"uid": 3403, "view_count": 3664, "vote_count": 2}
I am attempting to use Salmon (version 0.12.0) to quantify transcript counts from RNAseq data (unstranded paired-end reads). The existing pipeline in my lab is to trim/qc fastq files, align them with STAR (2.5.2a), merge sample BAMs together (if they were spread across lanes), sort them with samtools, and then feed them to Salmon in alignment-based mode. STAR was run with the `--quantMode TranscriptomeSAM` option and the `genomeDir` pointed to a genome generated using STAR's `genomeGenerate` function with the `--sjdbGTFfile` option pointing to a GTF file. Here's the Salmon run line: salmon quant -t /hg38_salmon_transcriptome.fa -l IU -p 16 -a some.transcriptome.sorted.bam -o ./ This produces a quant.sf file that seems to make sense, although it also produces a massive (15+ GB) error file that seems really upset about suspicious pairs (see here for someone else with the same issue: https://www.biostars.org/p/164823/ ) WARNING: Detected suspicious pair --- The names are different: read1 : K00274:68:HGYCHBBXX:5:1119:13311:37220 read2 : K00274:68:HGYCHBBXX:3:2103:3325:33598 I was a bit sick of this behavior and decided to give Salmon the fastq files directly in alignment-free mode. This pipeline was to trim/qc my fastqs, combine the files from different lanes and then gives the reads to salmon. The program runs fine and produces no errors. Here's that Salmon run line: salmon quant -i /hg38_salmon_transcriptome_index -l IU -1 samp_R1.fq.gz -2 samp_R2.fq.gz -o ./ The big issue is that the quant.sf files look really different between these two pipelines. Like the transcript ENST00000361739 (this is the MT-CO1 gene) has a TPM of 40735 in alignment-free mode, but a TPM of 105 in alignment-mode. Further, the range of TPMs (and counts) varies widely between the two modes. In alignment-free mode, I'm getting TPMs in the thousands, but in alignment mode the highest is 300 and most values are under 100. So, my questions are: 1. Does anyone know what's causes the "suspicious pair" error when I'm running in alignment-based mode? 2. **Why am I getting such huge differences in the outputs of these two strategies?**
The command line you show for running using the STAR alignments: ``` salmon quant -t /hg38_salmon_transcriptome.fa -l IU -p 16 -a some.transcriptome.sorted.bam -o ./ ``` suggests that the bam file provided to salmon was (coordinate?) sorted; is this the case? In alignment mode, salmon, like e.g. RSEM, expects all of the alignments for a read to be adjacent in the BAM file, and for the mates of a pair to follow each other. STAR will output the alignments like this by default, as long as you don't ask it to sort the output. Could you verify if this is the case and, if so, see how things look if you don't coordinate sort the output bam?
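For reference, a hedged sketch of the unsorted route (flags as in STAR 2.5.x / salmon 0.x; paths and file names are placeholders). The `Aligned.toTranscriptome.out.bam` that STAR writes with `--quantMode TranscriptomeSAM` keeps all of a read's alignments adjacent, which is what salmon's alignment mode expects:

    STAR --runMode alignReads \
         --genomeDir star_index \
         --readFilesIn samp_R1.fq.gz samp_R2.fq.gz \
         --readFilesCommand zcat \
         --quantMode TranscriptomeSAM

    salmon quant -t hg38_salmon_transcriptome.fa -l IU -p 16 \
         -a Aligned.toTranscriptome.out.bam -o salmon_out

The key point is simply not to run `samtools sort` (by coordinate) on the transcriptome BAM before quantification.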
biostars
{"uid": 363830, "view_count": 3531, "vote_count": 2}
Hi guys,

I want to concatenate all the values in these columns, each enclosed in double quotes and separated by commas, in row order as shown below. How can I get this done in R?

`df1`

```
chr   start    end      strand
chr4  443333   232444   +
chr5  4455332  4433323  -
chr5  4443333  4433355  +
```

I want this result, exactly as shown below:

    "chr4","443333","232444","+","chr5","4455332","4433323","-","chr5","4443333","4433355","+"
    # convert to character first so as.matrix() doesn't space-pad the numeric columns
    df1[] <- lapply(df1, as.character)
    # quote each field, join the fields within a row, then join the rows with commas
    all <- paste(apply(df1, 1, function(row) paste0('"', row, '"', collapse = ",")), collapse = ",")
biostars
{"uid": 144157, "view_count": 13900, "vote_count": 4}
Hello everyone,

The raw data come from https://satijalab.org/seurat/v3.1/pbmc3k_tutorial.html (pbmc3k_filtered_gene_bc_matrices.tar.gz). They are the single-cell files barcodes.tsv, genes.tsv and matrix.mtx. I want to convert the matrix.mtx file to a CSV file. I tried several Python mat2csv scripts; none of them worked.

Thanks in advance for any help!

Best,
Yue
R

    library(Matrix)
    matrix_dir = "/home/li/"
    barcode.path <- paste0(matrix_dir, "barcodes.tsv")
    features.path <- paste0(matrix_dir, "features.tsv")  # the pbmc3k (CellRanger v2) output names this file genes.tsv - adjust accordingly
    matrix.path <- paste0(matrix_dir, "matrix.mtx")
    mat <- readMM(file = matrix.path)  # sparse genes x cells matrix
    feature.names = read.delim(features.path, header = FALSE, stringsAsFactors = FALSE)
    barcode.names = read.delim(barcode.path, header = FALSE, stringsAsFactors = FALSE)
    colnames(mat) = barcode.names$V1
    rownames(mat) = feature.names$V1

Python

    import csv
    import os
    import scipy.io

    matrix_dir = "/home/li/"
    mat = scipy.io.mmread(os.path.join(matrix_dir, "matrix.mtx"))  # sparse COO matrix

    features_path = os.path.join(matrix_dir, "features.tsv")  # genes.tsv for CellRanger v2 output
    feature_ids = [row[0] for row in csv.reader(open(features_path), delimiter="\t")]
    gene_names = [row[1] for row in csv.reader(open(features_path), delimiter="\t")]
    # CellRanger v3 features.tsv has a third column with the feature type;
    # v2 genes.tsv has only two columns, so skip this line for v2 data
    feature_types = [row[2] for row in csv.reader(open(features_path), delimiter="\t")]

    barcodes_path = os.path.join(matrix_dir, "barcodes.tsv")
    barcodes = [row[0] for row in csv.reader(open(barcodes_path), delimiter="\t")]
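To get from the labelled sparse matrix to an actual CSV, one option in R is simply to densify and write it out (fine for the ~2,700 pbmc3k cells; a dense table may not fit in memory for much larger datasets):

    # densify the labelled sparse matrix from above and write it as CSV
    dense <- as.matrix(mat)
    write.csv(dense, file = "matrix.csv", quote = FALSE)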
biostars
{"uid": 408927, "view_count": 6964, "vote_count": 1}
Hi,

The SAM file specification has fields/tags such as alignment score (AS:i:<N> in bowtie2) and edit distance (NM:i:<N>) for each end of a pair. I am working with amplicon paired-end data (thus overlap is not uncommon) and would like to know if there is a "combined" paired-end score for the whole mate pair (taking the overlap into account), e.g. for alignment score, edit distance, etc.; this would be useful for later filtering.

I'm currently using bowtie2, and have just noticed "The alignment score for a paired-end alignment equals the sum of the alignment scores of the individual mates." written in the manual. Is that indeed the case, especially when there is an overlap? How are the scores calculated for each end (0.5 for each?)

I'm currently using my own implementation as a post-processing step, which is inefficient.

many thanks
Shim
Overlaps are not in any way taken into account when calculating the `AS` tag in bowtie2. There is no score reported that's a direct sum of the mate's `AS` score, though one can sort of use the MAPQ as an ersatz value for this, since MAPQ is dependent on the summed `AS` from the mates (as well as their summed `XS` values). But really if you want to compute a joint `AS` and such while accounting for overlaps then post-processing is your only option.
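If it helps, here is a rough sketch of that kind of post-processing with pysam — summing the per-mate AS and NM tags over a name-sorted BAM (`samtools sort -n`); correcting for the overlapped bases would still be up to you:

    import pysam

    # pair up mates from a name-sorted BAM and report summed AS/NM per fragment
    bam = pysam.AlignmentFile("name_sorted.bam", "rb")
    pending = {}
    for read in bam:
        if read.is_secondary or read.is_supplementary or not read.is_proper_pair:
            continue
        mate = pending.pop(read.query_name, None)
        if mate is None:
            pending[read.query_name] = read  # wait for the other mate
        else:
            pair_as = read.get_tag("AS") + mate.get_tag("AS")
            pair_nm = read.get_tag("NM") + mate.get_tag("NM")
            print(read.query_name, pair_as, pair_nm, sep="\t")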
biostars
{"uid": 302582, "view_count": 1348, "vote_count": 1}
Hello, I have a list of human genes and I'd like to retrieve the physical coordinates (GRCh37/hg19 assembly) of their 3'UTRs. Are you aware of any software that can do that? Thanks !
For RefSeq annotation, you can use the `add_utrs_to_gff` python script to first add 5' and 3' UTR features and then use unix `grep` to extract the genes of your interest. The latest RefSeq annotation of the GRCh37 assembly is here: https://ftp.ncbi.nlm.nih.gov/genomes/all/annotation_releases/9606/105.20190906/ ## download annotation in GFF3 format $ curl -O https://ftp.ncbi.nlm.nih.gov/genomes/all/annotation_releases/9606/105.20190906/GCF_000001405.25_GRCh37.p13/GCF_000001405.25_GRCh37.p13_genomic.gff.gz ## download the add_utrs_to_gff3 python script $ curl -O https://ftp.ncbi.nlm.nih.gov/genomes/TOOLS/add_utrs_to_gff/add_utrs_to_gff.py ## add utr features to the gff3 file $ python3 add_utrs_to_gff.py GCF_000001405.25_GRCh37.p13_genomic.gff.gz > GRCh37_with_utrs.gff3 ## extract 5' UTR for GeneID:5768 $ grep 'five_prime_UTR' GRCh37_with_utrs.gff3 | grep -w 'GeneID:5768' NC_000001.10 BestRefSeq five_prime_UTR 180123968 180124042 . + . ID=utr00100412821;Parent=rna-NM_001004128.2;transcript_id=NM_001004128.2;Dbxref=GeneID:5768,Genbank:NM_001004128.2,HGNC:HGNC:9756,MIM:603120 NC_000001.10 BestRefSeq five_prime_UTR 180124004 180124042 . + . ID=utr00282651;Parent=rna-NM_002826.5;transcript_id=NM_002826.5;Dbxref=GeneID:5768,Genbank:NM_002826.5,HGNC:HGNC:9756,MIM:603120
biostars
{"uid": 413395, "view_count": 2073, "vote_count": 1}
Hello,

I am trying to analyze the public dataset https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE126030

I've downloaded the fastq files onto my cluster and would like to proceed with cellranger count. I am in a test folder whose only contents are SRR8526547_1.fastq and refdata-cellranger-GRCh38-1.2.0.

    cellranger count --id=cellranger \
        --transcriptome=/home/jl2/scratch60/refdata-cellranger-GRCh38-1.2.0/ \
        --fastqs=. \
        --sample=SRR8526547_1.fastq

I keep getting the error:

    Invalid path/prefix combination: /gpfs/ycga/scratch60/k/jl2/test, ['SRR8526547_1.fastq']
    No input FASTQs were found for the requested parameters.

Can't seem to figure out what's wrong. Does it need fastq.gz instead of fastq?
Hello,

I have been troubleshooting

> error: No input FASTQs were found for the requested parameters.

for several hours now. In my case the file names, the file path and the command were all fine. Finally, what solved the issue for me was to move the fastq.gz files into a separate folder that only contained fastq.gz files. The original folder had some other files in it (md5, fastqc output, etc.). Not sure why this was a problem for the pipeline, but make sure to give this a try if you run into similar trouble.

Best,
Max
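In the case above there is likely a second issue: cellranger looks for bcl2fastq-style file names, so SRA-style names like SRR8526547_1.fastq won't be picked up either. A hedged sketch of the usual workaround, assuming _1/_2 really are R1/R2 of a single sample and lane:

    mkdir fastqs
    gzip SRR8526547_*.fastq   # cellranger also accepts plain .fastq, but .gz is customary
    mv SRR8526547_1.fastq.gz fastqs/SRR8526547_S1_L001_R1_001.fastq.gz
    mv SRR8526547_2.fastq.gz fastqs/SRR8526547_S1_L001_R2_001.fastq.gz

    cellranger count --id=run1 \
        --transcriptome=/path/to/refdata-cellranger-GRCh38-1.2.0 \
        --fastqs=fastqs \
        --sample=SRR8526547   # sample prefix, not a file name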
biostars
{"uid": 427428, "view_count": 8375, "vote_count": 2}
Hello,

I'd like to study how NCBI's non-redundant protein database (nr) has developed over the years. However, I'm yet to find a way to download anything but the latest release from the NCBI ftp. Are those old versions lost for good from the public domain?
As far as I am aware NCBI do not provide archived versions of the 'nr' database, although they might be available upon request. However, since most of the sequences in 'nr' come from the protein translations in GenBank, and UniProt provides archived releases for UniProtKB (which includes translations from EMBL-Bank), the UniProt releases would probably cover what you need. See ftp://ftp.uniprot.org/pub/databases/uniprot/previous_releases/.

Alternatively, UniProt's [UniParc database](http://www.uniprot.org/help/uniparc/) is equivalent to the NCBI's 'nr' database, and provides additional date information which would allow you to create subsets based on the database at a particular date. For the XML version of the UniParc database, which contains the additional information, see ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/uniparc/

Please note: the NCBI 'nr' database and the UniParc database are sets of non-identical sequences (i.e. the databases contain one sequence for each unique sequence, with meta-data providing details of all the source entries containing the sequence). Non-redundant sequence databases such as [UniRef](http://www.uniprot.org/help/uniref) or those generated with [CD-HIT](http://cd-hit.org/) are different, and merge subsequences such as those from sequencing fragments into either the longest or a representative sequence. To generate your own 'nr'-like database(s) use the 'nrdb' program (http://blast.advbiocomp.com/pub/nrdb/) on your collection of sequences.
biostars
{"uid": 97867, "view_count": 6353, "vote_count": 3}
Why do we remove duplicates from BAM files while using Samtools? When we have paired-end data, we can remove duplicates as a fragment OR as a pair. How do these two methods differ?
I would personally recommend using Picard for marking or removing duplicates. If you have paired data, then both reads of a pair will be used to select duplicates. In this case, if there is another pair that has both of its reads aligning at exactly the same locations as this pair, then one of the pairs would be marked as duplicate. For fragment reads, the location of only one read is used to mark duplicates.
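For reference, a typical Picard invocation (file names are placeholders):

    java -jar picard.jar MarkDuplicates \
        I=input.sorted.bam \
        O=marked.bam \
        M=duplicate_metrics.txt \
        REMOVE_DUPLICATES=false

With `REMOVE_DUPLICATES=false` the duplicates are only flagged (bit 0x400), which most downstream tools respect; set it to true to drop them from the output entirely.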
biostars
{"uid": 66901, "view_count": 15103, "vote_count": 2}
Hello!

I don't know what the difference between "ENSG00000002586.1" and "ENSG00000002586.19_PAR_Y" in Ensembl gene/transcript IDs means. How do you deal with these "_PAR_Y" IDs in the RNA-seq mapping process?

Thanks,
It means the gene has multiple copies in the [pseudoautosomal regions][1]. The '.1' part is the version number. How you deal with them most likely depends on whether you care about these regions. [1]: https://www.ensembl.org/info/genome/genebuild/human_PARS.html
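If you decide you don't care about them (e.g. to avoid reads being split between two identical PAR copies when quantifying against a GENCODE transcriptome), one hedged option is to drop the duplicates before building your index, for instance with seqkit:

    # remove entries whose FASTA header contains "_PAR_Y"
    # (-n matches the full name, -r enables regex, -v inverts the match)
    seqkit grep -n -r -v -p "_PAR_Y" gencode.transcripts.fa > transcripts.noPARY.fa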
biostars
{"uid": 398174, "view_count": 3171, "vote_count": 1}
Hi,

I have two lists of genes.

    mycounts <- read.csv("geneID and length.csv", header = T, sep = "\t", stringsAsFactors = FALSE)

```
> colnames(mycounts)
[1] "genesID.geneslength"
> head(mycounts[1:4,])
[1] "R0010W,1272" "R0020C,1122" "R0030W,546"  "R0040C,891"
> dim(mycounts)
[1] 7130    1
```

    mycounts1 <- read.table("read.txt", header = T, sep = "\t", stringsAsFactors = FALSE)

```
> dim(mycounts1)
[1] 5961    1
> colnames(mycounts1)
[1] "Freq"
```

How can I keep, in my genes file, only the genes that are present in my reads file? I mean, the genes file has 7130 genes but I only need 5961 of them. Could you help me, please?

Thank you
Something is weird here, since you have the gene names joined to their lengths and separated by a comma. Am I right? Since you end up with only one column, as I can see from the dim(), it is likely that you need to do the read.csv differently to separate the two values, e.g. with a different sep value. I would need to know the format of the original file to suggest the right one. Also note that subsetting or merging on the gene names only works if the names are shared between the two files, and a merge additionally requires the length values to agree.
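Assuming the genes file really is comma-separated (so `read.csv` with its default `sep=","` splits it into two columns) and the gene IDs of the reads file are its row names — both assumptions, so adjust to your actual files — a sketch:

    genes <- read.csv("geneID and length.csv", header = TRUE, stringsAsFactors = FALSE)
    # expected: colnames(genes) == c("genesID", "geneslength")

    reads <- read.table("read.txt", header = TRUE, sep = "\t", stringsAsFactors = FALSE)

    # keep only the genes that occur in the reads file
    genes_kept <- genes[genes$genesID %in% rownames(reads), ]
    dim(genes_kept)   # should be 5961 x 2 if every read gene is present in the genes file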
biostars
{"uid": 171078, "view_count": 2678, "vote_count": 1}
I have a `VCF` file (containing diallelic variants) and the reference genome in `Fasta` of some non-model plant. I'd like to extract the SNPs' flanking sequences. I've found that `bedtools` and `samtools faidx` could be useful to some extent, but apparently they don't solve the issue. I need output (a `fasta` or tabular file) with the following SNP representation:

    TCTCTGCCAATCACTAGAGGCCGCTTTCGCTTTTA[A/G]TTTGTGTGTGGTCAGAGTTCTTCCGGACTTT
Here is a little python solution:

    import pysam

    # open vcf file
    vcf = pysam.VariantFile("input.vcf")

    # open fasta file
    genome = pysam.FastaFile("genome.fa")

    # define by how many bases the variant should be flanked
    flank = 50

    # iterate over each variant
    for record in vcf:
        # extract sequence
        #
        # The start position is calculated by subtracting the number of bases
        # given by 'flank' from the variant position. The position in the vcf file
        # is 1-based; pysam's fetch() expects 0-based coordinates. That's why we
        # need to subtract one more base.
        #
        # The end position is calculated by adding the number of bases
        # given by 'flank' to the variant position. We also need to add the length
        # of the REF value and subtract 1 again due to the 0-based/1-based difference.
        #
        # Now we have the complete sequence like this:
        # [number of bases given by flank]+REF+[number of bases given by flank]
        seq = genome.fetch(record.chrom, record.pos-1-flank, record.pos-1+len(record.ref)+flank)

        # print out tab-separated columns:
        # CHROM, POS, REF, ALT, flanking sequence with the variant given in the format '[REF/ALT]'
        print(
            record.chrom,
            record.pos,
            record.ref,
            record.alts[0],
            '{}[{}/{}]{}'.format(seq[:flank], record.ref, record.alts[0], seq[flank+len(record.ref):]),
            sep="\t"
        )

fin swimmer

**EDIT**: I rewrote the code. The version before had an off-by-one error.
biostars
{"uid": 334253, "view_count": 5637, "vote_count": 2}
I'd like to get my hands on a microarray dataset with as many conditions as possible. One thought I had was to simply concatenate many individual microarray datasets together; however, it seems that they often don't have the same probes, and are sometimes in different units (fold change, intensity, etc.).

Is there a way I can download all the datasets of GEO, but only include datasets that have the same probes and are normalized in the same way?
Did you have a look at the [Expression Atlas](https://www.ebi.ac.uk/gxa/home)? It integrates many gene expression experiments to make them comparable.
biostars
{"uid": 174720, "view_count": 1341, "vote_count": 1}
Hi,

I'm a medical student with a genetics focus, and I have come to understand that some knowledge of bioinformatics would help me get a research position in the field. I've been browsing the internet for a course, but none of the ones I've found so far suit my needs. Does anyone know of an online bioinformatics course that:

1. is affordable
2. is open to an undergraduate medical student
3. provides a certificate
4. teaches you the most important things for genetic, genomic and molecular biology research
The [Data Analysis for Life Sciences series from EdX][1] meets all of your criteria. I've been too busy to enroll, but I did get my hands on some of the notes and lectures and they're top-notch. [1]: https://www.edx.org/course/data-analysis-life-sciences-1-statistics-harvardx-ph525-1x
biostars
{"uid": 180100, "view_count": 2972, "vote_count": 2}
Hi,

I am trying to plot a grid plot using the `ggplot2` library; however, I am getting the error given below. Please assist me with this.

    Error in `[.data.frame`(Group_plot, , virus_strain) :
      undefined columns selected

Thank you,
Toufiq

EDIT by @RamRS:

---

OP edited their question after it was answered. This was the content before they edited it:

----

Hi,

I am trying to plot a grid plot using the ggplot2 library; however, I am getting the error given below. Please assist me with this.

    > dput(head(res.mods.group))
    structure(list(none = c(0, 0, 0, 0, 0, 0), Pandemic.influenza..A.H1N1.new.subtype. = c(-30.1204819277108, 34.1935483870968, 41.7910447761194, -33.3333333333333, 17.0731707317073, 0)), row.names = c("M16.14", "M16.2", "M16.26", "M15.73", "M15.21", "M14.72"), class = "data.frame")

    > dput(head(Gen3_ann))
    structure(list(Module = c("M4.1", "M5.1", "M6.2", "M7.3", "M8.1", "M9.9"), Cluster = c("A29", "A29", "A34", "A28", "A2", "A37"), Cluster_location = c(1L, 4L, 4L, 6L, 1L, 1L), Function_New = c("Bio", "TBD", "Cell", "Pathogenesis", "Toxicity", "Erythrocytes"), position = c("A29.1", "A29.4", "A34.4", "A28.6", "A2.1", "A37.1")), row.names = c("M4.1", "M5.1", "M6.2", "M7.3", "M8.1", "M9.9"), class = "data.frame")

    library(reshape2)
    library(ggplot2)

    # Set parameters
    GSE_ID = "GSE21802"
    platform = "GPL6102"

    ## prepare cluster position
    Group_plot = res.mods.group
    Group_plot <- Group_plot[rownames(Gen3_ann),]
    rownames(Group_plot)==rownames(Gen3_ann) # check if rownames are the same
    rownames(Group_plot) <- Gen3_ann$position
    Group_plot <- as.data.frame(Group_plot)
    head(Group_plot)

    # create new grid with all filtered clusters ##
    mod.group1 <- matrix(nrow=38,ncol=42)
    rownames (mod.group1) <- paste0("A",c(1:38))
    colnames (mod.group1) <- paste0("",c(1:42))
    ##
    virus_strain = colnames(Group_plot)
    N.virus_strain = length(virus_strain)
    i=1
    for (i in 1:N.virus_strain){
      virus_strain = virus_strain[i]
      for (i in 1 : nrow(Group_plot)){
        Mx <- as.numeric(gsub(x = strsplit (rownames(Group_plot)[i],"\\.")[[1]][[1]],pattern = "A",replacement = ""))
        My <- as.numeric(strsplit (rownames(Group_plot)[i],"\\.")[[1]][[2]])
        mod.group1[Mx,My] <- Group_plot[,virus_strain][i]
      }
      mod.group <- mod.group1[-c(9:14,19:23),]
      melt_test <- melt(mod.group,id.var=c("row.names"))
      colnames(melt_test) = c("Aggregate","Sub_aggregate","%Response")
      pdf(paste0(GSE_ID, "_", platform, "_Group_comparison_to_control", virus_strain, "_Grid.pdf"), height = 5.5, width = 8.5)
      plot = ggplot(melt_test, aes(Aggregate, as.factor(Sub_aggregate))) +
        geom_tile(color="#E6E6E6" , size = 0.2, fill=color )+
        geom_point(aes(colour=`%Response`),size=4.5)+
        ylab("") + xlab("") +
        labs(title= "Pandemic Influenza vs Control")+
        theme(axis.text.x = element_text(angle = -90, hjust = 0))+
        scale_color_gradient2(low = "blue", mid="white", high = "red",limits=c(-100,100), na.value = "#E6E6E6", guide = "colourbar")+
        theme_light() +
        theme(panel.grid.minor = element_line(colour="black", size=0.9))+
        coord_flip() +
        scale_x_discrete(limits = rev(levels(melt_test$Aggregate))) +
        theme(panel.border = element_rect(color = "black",size = 0.5),
              axis.text.x = element_text(colour="black",size=9,angle=0,hjust=0.5,vjust=2,face="plain"),
              axis.text.y = element_text(colour="black",size=9,angle=0,hjust=0.5,vjust=0.5,face="plain"))
      plot(plot)
      dev.off()
    }

    Error in `[.data.frame`(Group_plot, , virus_strain) :
      undefined columns selected

Thank you,
Toufiq

----
No - then your code won't work any more, as you reference `virus_strain` two more times. Rename the single element, and all its later uses, to something like `this_strain`:

    this_strain = virus_strain[i]
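Put together, a sketch of the fix (note the posted code also reuses `i` for both the outer and inner loops, so the inner loop gets its own index here):

    virus_strain <- colnames(Group_plot)
    for (i in seq_along(virus_strain)) {
      this_strain <- virus_strain[i]          # single strain for this iteration
      for (j in 1:nrow(Group_plot)) {
        # ...same body as before, indexing with j and this_strain:
        # mod.group1[Mx, My] <- Group_plot[, this_strain][j]
      }
      # ...plotting code unchanged, but use this_strain in the pdf file name
    }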
biostars
{"uid": 431636, "view_count": 699, "vote_count": 1}