I have multiple VCFs for single cell RNA seq data and I want to get peptide sequences from these files. I have searched and found out that we can annotate the VCF with Ensembl's VEP to get the amino acids/protein information. However, I am looking to get a fasta file as an output which can be used for downstream analysis. I found out that GATK's FastaAlternateReferenceMaker can be used to get a fasta file from VCF. Can I use the output VCF from VEP as an input to the FastaAlternateReferenceMaker to get the required fasta? I am not sure if I should pass the entire VCF as an input or just the protein information. Please help me get a better understanding of this procedure.
I found out that this can be done in the following way:

1. Find the proteins affected by the mutations using Ensembl's VEP, sending the VCF files as input. The output contains several pieces of information, but we are interested in Ensembl's VEP protein ID, which starts with **ENSP** (VEP output format described here: [https://m.ensembl.org/info/docs/tools/vep/vep_formats.html#vcfout][1]).
2. Once we have the protein IDs, we can use the REST API provided by Ensembl and pass each ID to get the fasta: [https://rest.ensembl.org/sequence/id/ENSP00000404426?content-type=text/x-fasta;type=protein][2]

[1]: https://m.ensembl.org/info/docs/tools/vep/vep_formats.html#vcfout
[2]: https://rest.ensembl.org/sequence/id/ENSP00000404426?content-type=text/x-fasta;type=protein
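For step 2, fetching each sequence can be scripted. A minimal sketch with `curl`, assuming your ENSP IDs sit one per line in a file `ensp_ids.txt` (the file name is illustrative):

```bash
# Query the Ensembl REST sequence endpoint for each protein ID
while read -r ensp; do
    curl -s "https://rest.ensembl.org/sequence/id/${ensp}?content-type=text/x-fasta;type=protein"
done < ensp_ids.txt > proteins.fasta
```

Note that this endpoint returns the *reference* protein sequence for each ID; if you need the mutated peptide itself, the variant still has to be applied to that sequence downstream.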
biostars
{"uid": 9506771, "view_count": 1032, "vote_count": 1}
I want to extract a list of sequences from NCBI. I am using the esearch command mentioned [here][1]. For one gene symbol, I could do it like this:

```
esearch -db nuccore -q 'SS1G_01676[gene]' | efilter -source refseq -molecule genomic | efetch -format gene_fasta | awk -v RS='(^|\n)>' '/SS1G_01676/{print RT $0}'
```

I want to use a bash loop to extract a list of sequences, and below is what I have tried, but it wouldn't yield any results. What am I missing here?

```
declare -a arr=("SS1G_03709" "SS1G_07286" "SS1G_04907")
for i in "${arr[@]}"
do
    myquery="'${i}[gene]'"
    echo "myid :" ${i}
    echo "my query :" ${myquery}
    esearch -db nuccore -q ${myquery} | efilter -source refseq -molecule genomic | efetch -format gene_fasta | awk -v RS='(^|\n)>' '/${i}/{print RT $0}' >> text.fasta
done
```

[1]: https://www.biostars.org/p/345772/#345918
You are right, it stops after the first gene. I am not sure why. (EDIT: see explanation below.) This works though:

```
for line in `cat temp.txt`; do esearch -db nuccore -q $line | efilter -source refseq -molecule genomic | efetch -format gene_fasta | awk -v r="$line" 'BEGIN {RS="(^|\n)>"} $0 ~ r {print ">" $0}'; done
```

Apparently, for while loops involving `esearch` one should add `< /dev/null` to prevent esearch from "draining" the remaining lines from stdin. See the documentation here: https://www.ncbi.nlm.nih.gov/books/NBK179288/#chapter6.While_Loop
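For reference, a minimal sketch of that `< /dev/null` workaround in a while loop (assuming gene IDs live one per line in `temp.txt`):

```bash
while read -r gene; do
    # Redirect esearch's stdin from /dev/null so it cannot
    # swallow the remaining lines of temp.txt
    esearch -db nuccore -q "${gene}[gene]" < /dev/null |
        efilter -source refseq -molecule genomic |
        efetch -format gene_fasta >> sequences.fasta
done < temp.txt
```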
biostars
{"uid": 346343, "view_count": 2299, "vote_count": 2}
I know a SNP is a change at a single position in a genetic sequence, like A to G or C to T, in GWAS studies. My basic question is how this type of data is represented. I got a [SNP dataset][1] here but am having a hard time understanding what it is. I have also seen the VCF file format, and it contains a lot of information like LD, MAF, etc. According to my understanding it should be discrete data. Also, how do we calculate a Z-score for such discrete data? I have seen lots of papers filter SNPs based on their low Z values.

**My understanding**

One can obviously make a 2x3 contingency table where rows represent cases and controls and columns represent the genotypes AA, Aa and aa, count the numbers in those cells from the given data, then apply a chi-square test and calculate a p-value. But how would a Z-score be calculated for such data?

So I am having two issues: one is understanding SNP datasets, and the other is how Z-scores are calculated.

[1]: https://github.com/Gregor-Mendel-Institute/atpolydb/blob/master/250k_snp_data/call_method_32.tar.gz
Your questions are not a nuisance, so do not feel bad for asking.

In association studies, the usual focus at each SNP position is the minor allele, i.e., the SNP allele that has the lowest frequency in the samples being studied in your dataset - I am assuming that you know this? At some genotyped sites, the minor allele may have a frequency (i.e. minor allele frequency - MAF) of 49% compared to 51% for the major allele, which is less interesting because, with a frequency of 49%, it is seen as a '*common*' variant. At others, however, the minor allele may have a MAF of just 1%, which classes it as a '*very rare*' variant (MAF 5% is usually the cut-off between rare and non-rare). It is important to note, however, that both common and rare variants can be functional and have roles in disease. For further reading: <a href="https://www.ncbi.nlm.nih.gov/pubmed/22251874">Rare and common variants: twenty arguments.</a>

In any case, if we just take the most basic type of association test and tabulate the number of minor and major alleles in our cases and controls, we can get an example 2x2 contingency table like this:

```
contingency.table

              Cases Controls
Minor allele     27        6
Major allele     73       94
```

You can see that the minor allele is more frequent in the cases for this particular SNP. We can easily derive a 1-degree-of-freedom Chi-square p-value for this in R:

```
chisq.test(contingency.table)

    Pearson's Chi-squared test with Yates' continuity correction
data:  contingency.table
X-squared = 14.516, df = 1, p-value = 0.0001389
```

Not genome-wide significance at all, but this is only a 100-sample dataset used as an example.

We can then derive an odds ratio (OR) for the minor allele:

```
(27/6) / (73/94)
[1] 5.794521
```

Standard error of the log OR:

```
sqrt((1/27) + (1/6) + (1/73) + (1/94))
[1] 0.477536
```

Upper 95% confidence interval (CI) of the OR:

```
5.794521 * exp(1.96 * 0.477536)
[1] 14.77421
```

Lower 95% CI of the OR:

```
5.794521 * exp(- 1.96 * 0.477536)
[1] 2.27264
```

With all of this useful information, we can then also calculate the Z-score. The Z-score is the log of the OR (log.OR) divided by the standard error of log.OR (SE.log.OR). The SE.log.OR calculation involves both the OR and the lower CI of the OR:

```
log.OR <- log(5.794521)
lower95.log.OR <- log(2.27264)
SE.log.OR <- (log.OR - lower95.log.OR) / 1.96
```

Then calculate Z:

```
log.OR / SE.log.OR
[1] 3.679121
```

Another way to calculate p-values, ORs, and Z-scores in association studies is through logistic regression analysis. In regression, one can encode the genotypes as categorical variables or, usually, as numerical variables in 'additive' models. In these cases, one has the following:

- homozygous minor allele = 2
- heterozygous minor allele = 1
- homozygous major allele = 0

One can also adjust for covariates in these models, such as smoking status, BMI, ethnicity and/or PCA eigenvectors, etc. From regression, the OR is the exponent of the *estimate*, and the Z-score (if not explicitly given) can be calculated in the same way as above.
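As a quick illustration of that regression route, in R it would look something like this (a sketch; the variable names are placeholders):

```r
# geno coded 0/1/2 copies of the minor allele; pheno coded 0 = control, 1 = case
fit <- glm(pheno ~ geno + smoking + bmi, family = binomial, data = dat)
summary(fit)            # the "z value" column is the Z-score for each term
exp(coef(fit)["geno"])  # OR per copy of the minor allele
```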
I built a pipeline for a complex type of trios family analysis using these types of metrics and conditional logistic regression (where cases and controls are matched into strata): <a href="https://github.com/kevinblighe/GwasTriosCLogit">GwasTriosCLogit</a>

If you are wondering from where I magically got the 1.96 used in the calculations, then look <a href="https://en.wikipedia.org/wiki/1.96">HERE</a>.

This example is just to give you a fundamental understanding of what is going on 'behind the scenes' in association studies. Obviously there are many dozens of types of analyses that involve different statistical tests, and programs like PLINK, etc., are undoubtedly doing further adjustments to the data than I have shown here.

Kevin
biostars
{"uid": 310373, "view_count": 10780, "vote_count": 4}
Hi,

I have a list of kmers, between 8-12 nt in length, and I would like to align these to a larger sequence, returning all ungapped matches with at most 2 mismatches. I would like the search to be exhaustive, i.e. I do not want to miss anything. I wrote a python script to compute the Hamming distance of all substrings of my reference to the query, but it is too slow for many (1000s of) queries on a reference of ~100,000 nt.

What program would you recommend that does this and runs rather quickly? I have looked into Bowtie2, but I am unsure if it was designed to work with such short query sequences.

Thanks for the feedback.
Ended up creating a custom python script that:

1. Broke up the larger sequence into kmers of the required sizes, stored in a set.
2. For each query kmer, computed all possible 2-mismatch kmers into a set.
3. Intersected the set from 1 with the set from 2.

Works extremely quickly as my kmers are quite small (8-12 nt) and the search target is also relatively small (tens of kb).
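For anyone curious, a minimal Python sketch of that idea (names are illustrative, and it assumes all queries share one length k; in practice you would group queries by length):

```python
from itertools import combinations, product

def neighbors_2mm(kmer, alphabet="ACGT"):
    """All sequences within Hamming distance <= 2 of kmer."""
    result = {kmer}
    for i, j in combinations(range(len(kmer)), 2):
        for a, b in product(alphabet, repeat=2):
            s = list(kmer)
            s[i], s[j] = a, b  # same-base substitutions also cover distances 0 and 1
            result.add("".join(s))
    return result

def search(reference, queries, k):
    # Step 1: all k-mers present in the reference
    ref_kmers = {reference[i:i + k] for i in range(len(reference) - k + 1)}
    # Steps 2-3: intersect each query's 2-mismatch neighborhood with the reference set
    return {q: neighbors_2mm(q) & ref_kmers for q in queries}

print(search("ACGTACGTAC", ["ACGTACGT", "ACGAACGA"], k=8))
```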
biostars
{"uid": 175216, "view_count": 3995, "vote_count": 3}
Hi all, There is a great preprint for the "complete" human genome T2T-CHM13 on bioRxiv. I'm not able to find a gff/gtf/bed annotation file for this and I was hoping someone might share a link if one exists. [https://www.biorxiv.org/content/10.1101/2021.05.26.445798v1.full][1] [https://github.com/marbl/CHM13][2] Thanks! [1]: https://www.biorxiv.org/content/10.1101/2021.05.26.445798v1.full [2]: https://github.com/marbl/CHM13
Hey, it looks like the correct link is now on Github. See https://github.com/marbl/CHM13#downloads.
biostars
{"uid": 9473226, "view_count": 4206, "vote_count": 2}
Hi Everyone, I am a newbie in processing scRNA-Seq data and I am trying to understand a Seurat object. Here's a single cell dataset deposited on NCBI GEO: GSE158130

```
str(obj)
Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
  ..@ i       : int [1:2110714] 25 26 28 30 32 112 113 114 124 127 ...
  ..@ p       : int [1:1177] 0 2014 4094 6011 8780 11407 14821 17657 19653 20675 ...
  ..@ Dim     : int [1:2] 27044 1176
  ..@ Dimnames:List of 2
  .. ..$ : chr [1:27044] "1/2-SBSRNA4" "5S_rRNA" "5_8S_rRNA" "7M1-2;OR2F1" ...
  .. ..$ : chr [1:1176] "WSJ0005001" "WSJ0005002" "WSJ0005003" "WSJ0005004" ...
  ..@ x       : num [1:2110714] 1 1 2 1 1 1 1 1 1 1 ...
  ..@ factors : list()
```

What I understand from this object:

- @Dim shows the dimensionality of this dataset: 27,044 (genes) x 1,176 (cells)
- The @Dimnames slot provides names for genes and cells.

I have the following questions:

1. What data do the slots @i, @p and @x show?
2. Could we determine whether QC, normalization and dimensionality reduction methods have been applied to the data by looking at the object?
3. This dataset also provides 2 additional files - `GSE158130_cellID_barcode_map.txt` and `GSE158130_SK-N-SH_counts.txt`. I am not sure how to use these files and what my next steps should be.

Any help is appreciated here. Thanks.
This isn't a Seurat object, but rather a sparse matrix from the `Matrix` library. `@x` are all of the non-zero counts. `@i` are the index positions column-wise of non-zero values. `@p` is a cumulative sum of the number of non-zero values in each column. `GSE158130_SK-N-SH_counts.txt` is the same matrix but in csv format. `GSE158130_cellID_barcode_map.txt` has some metadata about each cell, such as the assigned cell-id, corresponding cell barcode, patient ID, etc. From this data alone you can't tell how the data was processed. You'll need to look at the methods from the paper.
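To make the slot layout concrete, here is a tiny R illustration with a toy sparse matrix (a sketch, not the GEO object itself):

```r
library(Matrix)
m <- Matrix(c(0, 2, 0,
              1, 0, 3), nrow = 2, byrow = TRUE, sparse = TRUE)
str(m)
# @x: the non-zero values, stored column by column  -> 1 2 3
# @i: zero-based row indices of those values        -> 1 0 1
# @p: cumulative count of non-zeros per column      -> 0 1 2 3
# Column j therefore holds values x[(p[j]+1):p[j+1]]
# at rows i[(p[j]+1):p[j+1]] + 1 (converting to 1-based)
```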
biostars
{"uid": 9462003, "view_count": 1958, "vote_count": 2}
Hi everyone. I am having trouble understanding whether the Pfam-A HMMs provided by the Pfam database are the ones obtained from the SEED or from the FULL alignment of the family. In the step-by-step description on Pfam's website (say: http://pfam.xfam.org/family/PF00004#tabview=tab6 ) they provide the command used to obtain the HMM from the seed, and then the command to run the 'search' performed on all of pfamseq to obtain the "full" MSA. They then provide a "raw HMM" to download, but it is not clear to me whether this HMM is the one they obtained from the seed with the given command OR another one obtained from the full alignment; nor do I understand whether it is supposed to be the same HMM I can find on the FTP site in the file PFAM-A.hmm.

In the Pfam papers I cannot find any paragraph citing a "full-MSA" HMM, but I've heard about it from many people. Maybe they are wrong, or I am not good at reading Pfam's papers and documentation, but I have been struggling with this for days and I still cannot find a clear definition of all the databases and the HMMs used to build them.
I was once confused by this. It seems definitions are not as clear in bioinformatics as they are in mathematics and misunderstanding is common. If you click on the 'download' link once you follow your link above you will see the NSEQ field which has the value 207. This seed alignment defines the family. Sometimes the HMM model is called the alignment, sometimes all the sequences lined up together is called the alignment. The full alignment is what results when executing an HMM search based on the seed alignment. The full alignment does not define a family since it changes as new sequence data is ingested.
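In command-line terms, the relationship described above looks roughly like this (a sketch of the Pfam-style HMMER workflow, not Pfam's exact build commands):

```bash
# Build the profile HMM from the curated SEED alignment;
# this HMM is what defines the family
hmmbuild PF00004.hmm PF00004_seed.sto

# Search the HMM against the sequence database; the aligned hits
# (-A) constitute the FULL alignment, which grows as pfamseq grows
hmmsearch -A PF00004_full.sto PF00004.hmm pfamseq.fasta
```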
biostars
{"uid": 338373, "view_count": 2742, "vote_count": 1}
Hi folks. I need to run a de novo short-read genome assembler (on a paired-end/mate-pair library) that prefers outputting shorter but error-free contigs rather than longer contigs/scaffolds which may be mis-assembled. What assembler, or what specific setting in an assembler of choice, do you recommend to yield such contigs (as error-free as possible and with no overlapping contigs)?
I think getting error-free contigs depends on the quality of your data too, and on contamination, if any. It also depends on the repetitiveness of the genome, the level of polymorphism (in order to know the correctness of contigs) and the heterozygosity of the individual. SOAP contigs are short as they start from K+1 of your kmer. By increasing the `min_abundance` parameter in de novo assemblers, you can get more accurate contigs. Minia is definitely one of the ones to try out. If you have a smaller number of error-free reads, go for an overlap assembler such as CAP3. This wouldn't work for a large number of reads due to memory constraints.
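For example, in Minia the abundance cut-off is exposed roughly like this (flag names taken from recent Minia releases; check `minia --help` for your build):

```bash
# A higher -abundance-min discards low-coverage k-mers, trading
# contig length for fewer error-driven joins
minia -in reads.fastq -kmer-size 31 -abundance-min 5 -out assembly_k31
```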
biostars
{"uid": 96350, "view_count": 2302, "vote_count": 1}
I have multiple files like below in linux:

```
Aardvark_GENES_D.fa.1
Aardvark_GENES_D.fa.2
Aardvark_GENES_D.fa.3
Aardvark_GENES_D.fa.4
```

I want to rename them by removing the last extension and editing the string, like below:

```
Aardvark_ACMSD_D.fa
Aardvark_ARID1B_D.fa
Aardvark_CRYM_D.fa
Aardvark_SMO_D.fa
```

Kindly help me figure out how to do so?
Assuming list.txt contains the names of the genes in the correct order:

```
paste <(ls *.fa.* | sort -t '.' -k3,3n ) list.txt |
awk '{split($1,a,/_/); printf("mv %s %s_%s_%s\n",$1,a[1],$2,a[3]);}' |
sed 's/\.[0-9]*$//'
```

which prints:

```
mv Aardvark_GENES_D.fa.1 Aardvark_ACMSD_D.fa
mv Aardvark_GENES_D.fa.2 Aardvark_ARID1B_D.fa
mv Aardvark_GENES_D.fa.3 Aardvark_CRYM_D.fa
mv Aardvark_GENES_D.fa.4 Aardvark_SMO_D.fa
```

When you're happy with the result, pipe it into `bash`.
biostars
{"uid": 266139, "view_count": 1675, "vote_count": 1}
Hi,

I am working on variant calling on fungal genomes. I have Illumina HiSeq reads. I am new to this and am following this workflow:

- Step 1: QC of raw reads, performed using the FastQC tool
- Step 2: Preprocessing of raw reads, performed using Trimmomatic-v0.36
- Step 3: QC of clean reads, performed using FastQC
- Step 4: Alignment to the reference genome using bowtie2-v2.2.6
- Step 5: SAM to BAM (alignment files) conversion using samtools-v1.3.1
- Step 6: Duplicate removal using the sambamba-0.6.6 tool
- Step 7: Coordinate sorting of bam files with samtools
- Step 8: Variant calling performed using samtools/bcftools
- Step 9: Variant filtering with bcftools

In this post https://www.biostars.org/p/8237/ and other variant calling related resources I found 2 additional steps before variant calling, i.e. **local realignment** and **base quality recalibration**.

1. Are these steps essential?
2. I found that these options are not available in samtools but in GATK. How can I perform these steps for my data?
Local realignment is not needed anymore with the latest GATK pipeline using HaplotypeCaller. You may just forget it, as its utility is only in calling SNPs near indels, and there is no definite guideline on how much it helps when used with tools other than GATK. https://software.broadinstitute.org/gatk/blog?id=7847

Base quality recalibration can still give you some advantages, as it claims to rectify the base call probabilities, which are used by almost all variant callers to give more confidence to the called variants. It is easy to do, provided you have the GATK pipeline set up at your end. http://gatkforums.broadinstitute.org/gatk/discussion/44/base-quality-score-recalibration-bqsr (TL;DR: see "Creating a recalibrated BAM" in the above doc)

PS: Your reads must have RG tags in the bam files to use most of the GATK protocols/tools. You may search my earlier post regarding that.

PPS: If it looks too troublesome, you may ignore it too, as this is not an obligatory step.
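For orientation, the BQSR step in GATK4 looks roughly like this (a sketch; the reference, BAM and known-sites files are placeholders, and for a fungal genome you may first need to bootstrap a known-sites VCF from a strict initial round of calling):

```bash
# 1. Model systematic base-quality errors, masking known variant sites
gatk BaseRecalibrator -R ref.fa -I sample.dedup.sorted.bam \
    --known-sites known_sites.vcf -O recal.table

# 2. Apply the model to write a recalibrated BAM
gatk ApplyBQSR -R ref.fa -I sample.dedup.sorted.bam \
    --bqsr-recal-file recal.table -O sample.recal.bam
```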
biostars
{"uid": 247990, "view_count": 3180, "vote_count": 2}
According to [wikipedia][1], **sensitivity** and **specificity** are defined as:

    sensitivity = TP / (TP + FN)
    specificity = TN / (TN + FP)

However, in [cuffcompare][4], the values are calculated as:

```
sp=(100.0*(double)ps->exonTP)/(ps->exonTP+ps->exonFP);
sn=(100.0*(double)ps->exonTP)/(ps->exonTP+ps->exonFN);
```

`sn` is sensitivity and `sp` is specificity. The formulas don't match; in particular, **true positives (exonTP)** are now used in both formulas. Does anybody know why?

[1]: https://en.wikipedia.org/wiki/Sensitivity_and_specificity
[4]: https://github.com/cole-trapnell-lab/cufflinks/blob/master/src/cuffcompare.cpp
That was explained in [their paper][1]. In this context, we are measuring accuracy at the base level. At the base level, we will have significantly more noncoding bases than coding bases, so TN will be much larger than FP. Therefore SPC (specificity) will tend to be close to one and is not very suitable as a metric. To fix the issue, cufflinks prefers another formula:

```
sp=(100.0*(double)ps->exonTP)/(ps->exonTP+ps->exonFP);
```

It is important to note that cufflinks is not calculating specificity anymore. Instead, TP/(TP+FP) is the **precision** (positive predictive value) of the prediction. It is a technical mistake that cufflinks should fix: they should never claim it's specificity when it's not.

[1]: http://www.sciencedirect.com/science/article/pii/S0888754396902980
biostars
{"uid": 138438, "view_count": 1998, "vote_count": 2}
Hello everyone!

I need some advice regarding the final step of my *D. suzukii* assembly using long PacBio reads: the polishing step.

First, let me explain how I obtained the file I am working on. I made two different assemblies using different algorithms: Falcon and Canu. I assessed and compared these assemblies using QUAST for the classic assembly metrics, and BUSCO v2 to assess gene content (using the Arthropoda and Diptera sets). I also evaluated gene content using some handmade scripts that look for particular genes of interest. The two assemblies were really different, in terms of metrics and gene content, and I couldn't be happy with one or the other. I then used Mahul Chakraborty's tool, called quickmerge (see here on GitHub: https://github.com/mahulchak/quickmerge ). This tool created a merged assembly combining the advantages of each input assembly. My BUSCO results were really nice compared to the previous ones. The assembly was also more contiguous, with a greater N50 and far fewer contigs.

As a reminder, for those who don't know BUSCO: it is a tool that looks for genes in your assembly that are shared by different species (for Arthropoda, genes that are orthologous across the Arthropoda clade, and so on) and always present in single copy. The genes are then categorized as follows: S: Single, D: Duplicated, F: Fragmented, M: Missing. There are 800 genes assessed for Arthropoda and 2800 genes assessed for Diptera. My data come from a very polymorphic species, and I always tend to have high Duplicated scores; I'm not really scared by that. What I absolutely want to reduce are the numbers of fragmented and missing genes.

Then, using BUSCO, I tried different rounds of polishing using different coverages: 40X and 80X. The results are kind of confusing for me, and I need the advice of an expert eye. Here are my BUSCO results depending on coverage:

**Non-polished assembly:**

*Arthropoda*: S: 91%, D: 5.9%, F: 2.2%, M: 0.9%
*Diptera*: S: 87.1%, D: 5.1%, F: 5.1%, M: 2.7%

**40X polished assembly:**

*Arthropoda*: S: 89.1%, D: 9.4%, F: 0.9%, M: 0.9%
*Diptera*: S: 86.9%, D: 8.4%, F: 2.8%, M: 1.9%

**80X polished assembly:**

*Arthropoda*: S: 86.5%, D: 11.4%, F: 0.9%, M: 1.2%
*Diptera*: S: 84.6%, D: 11%, F: 2.6%, M: 1.8%

So, I am not that surprised that the more we polish, the more duplicated genes we get. My final assembly size is 280 Mb, but the estimated size of the genome, using flow cytometry, is 250 Mb. So I was expecting duplicates of some polymorphic regions. What surprises me, and what I don't understand, is the variation in fragmented and missing genes. I was expecting that the more reads I used, the fewer fragmented and missing genes I would get. It works for the Diptera set, but not for Arthropoda: doubling the coverage increased these numbers a little for Arthropoda but not for Diptera, while dramatically increasing the duplicated genes in both sets. I am confused now, because I find the BUSCO results from 40X polishing better for Arthropoda, but 80X better for Diptera.

My interpretation is that the polishing kind of "revealed" our true level of duplication, which reflects a high level of polymorphism. I think the fact that we lose a few genes in the Arthropoda set is because the sequences have maybe evolved a lot, and BUSCO can't recognize some of the genes anymore.

I know it is a bit long to read, but I really need an outside point of view.
Has anyone already experienced assembly of a highly polymorphic species? Should I keep the 40X polishing or the 80X polishing? Or maybe continue polishing with an even higher coverage? Any recommendations or critiques of the pipeline I used (merging two different assemblies, for example)?

Thanks for reading me this far!

Cheers,
Roxane
After a few months working on my dataset, I think I finally managed to give myself an answer. I'm reporting my thoughts here so they can benefit anyone in need.

My question was: **Which polishing coverage should I use to reduce fragmented and missing genes?**

In the past, I reported results for 2 polishing coverages: 40x and 80x. I also tested 160x (I had enough coverage). Because the Diptera set is closer to Drosophila, I chose to take only the Diptera score into account here.

*160x polished assembly*

Diptera: S: 84.3%, D: 11.3%, F: 2.8%, M: 1.6%

As you can see, doubling the polishing coverage (from 80x to 160x) did not decrease the fragmented and missing genes as much as I expected; it even decreased my single-copy score a bit. I had a talk with someone experienced in PacBio assembly who told me that a polishing coverage higher than 100x doesn't improve the assembly much, and can even make it a bit worse.

Considering that, I chose to keep the **80x polishing**. Compared to 40x, it has a better indel rate, and the polishing step's main goal is to decrease this indel rate, the signature error of PacBio assemblies.

I'm open to any discussion regarding these results!

Cheers,
Roxane
biostars
{"uid": 246505, "view_count": 3783, "vote_count": 2}
In the BWA man page, it is said that BWA should output a certain number of optional fields like X0, X1, XO, etc. However, when I run BWA mem, none of them are produced (some of them would be extremely useful for the project I'm dealing with right now). Could someone shed light on this? Thanks
<a href="http://sourceforge.net/p/bio-bwa/mailman/message/31998697/">This thread</a> on the bio-bwa mailing list addresses your question. The summary is that `bwa mem` does not output those tags. Heng Li, the author, states:

> Bwa-mem is unable to compute accurate X0 due to algorithmic restrictions. You can derive XM from CIGAR and NM. XT:A:M is not applicable to bwa-mem. XT:A:U and XT:A:R can be derived from mapping quality.
biostars
{"uid": 110504, "view_count": 2370, "vote_count": 1}
Dear BioStars, I have a genomic ranges (GR) list object called `lgr.test` and would like to simply add several columns to its metadata. How would I do that? I have a list of numeric vectors `lvec.test` of the same lengths as the GRs in `lgr.test`. Thank you! > summary(lvec.test) Length Class Mode chr1 589 -none- numeric chr2 790 -none- numeric chr3 482 -none- numeric chr4 681 -none- numeric chr5 698 -none- numeric chr6 492 -none- numeric chr7 713 -none- numeric chr8 489 -none- numeric chr9 590 -none- numeric chr10 521 -none- numeric chr11 853 -none- numeric chr12 395 -none- numeric chr13 373 -none- numeric chr14 358 -none- numeric chr15 388 -none- numeric chr16 340 -none- numeric chr17 465 -none- numeric chr18 287 -none- numeric chr19 357 -none- numeric > summary(lgr.test) Length Class Mode chr1 589 GRanges S4 chr2 790 GRanges S4 chr3 482 GRanges S4 chr4 681 GRanges S4 chr5 698 GRanges S4 chr6 492 GRanges S4 chr7 713 GRanges S4 chr8 489 GRanges S4 chr9 590 GRanges S4 chr10 521 GRanges S4 chr11 853 GRanges S4 chr12 395 GRanges S4 chr13 373 GRanges S4 chr14 358 GRanges S4 chr15 388 GRanges S4 chr16 340 GRanges S4 chr17 465 GRanges S4 chr18 287 GRanges S4 chr19 357 GRanges S4
Ok, a `for` loop works (see below), but if some GR gurus see this, please save me from my ignorance and tell me how to do this using standard GR functions (`Map`, `endoapply`, `mendoapply`, `Reduce`). The `for` solution is fast enough but doesn't use the names of the objects in the lists, so it depends entirely on their order.

Cheers!

`for` solution:

```
for (i in 1:length(lgr.test)) {
    values(lgr.test[[i]])$NEW <- lvec.test[[i]]
}
```
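For what it's worth, a name-aware `Map` version would look roughly like this (a sketch, assuming both lists carry matching chromosome names; untested against the original objects):

```r
# Pair elements by name rather than by position, so list order no longer matters
lgr.test <- Map(function(gr, v) {
    mcols(gr)$NEW <- v   # same effect as values(gr)$NEW <- v
    gr
}, lgr.test[names(lvec.test)], lvec.test)
```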
biostars
{"uid": 101192, "view_count": 3798, "vote_count": 1}
Hey, I'm studying Bio.Entrez to retrieve information from NCBI. I have already made basic scripts to retrieve sequences based on protein or nucleotide IDs, but I'm wondering if I can retrieve all proteins for a specific taxonomy ID.

So I have a 3-column csv file, like this:

```
Reoviridae,Cardoreovirus,Eriocheir sinensis reovirus
Reoviridae,Mimoreovirus,Micromonas pusilla reovirus
Reoviridae,Orbivirus,African horse sickness virus
Reoviridae,Orbivirus,Bluetongue virus
```

And I wrote, at the moment, this:

```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
from Bio import Entrez
import argparse, csv
import xml.etree.ElementTree as ET

parser = argparse.ArgumentParser(description='This script takes a csv file and returns protein information by viral family.')
parser.add_argument("-in", "--input", help="CSV file with 3 columns", required=True)
args = parser.parse_args()
input_file = args.input

with open(input_file, 'r') as in_file:
    reader_in_file = csv.reader(in_file, delimiter=',')
    viral_family_lst = []
    for line in reader_in_file:
        viral_family = line[2].rstrip('\n')
        viral_family_lst.append(viral_family)

for viral_family in viral_family_lst:
    handle_id_var = Entrez.esearch(db="Taxonomy", term=viral_family, retmode='xml')
    tree = ET.parse(handle_id_var)
    root = tree.getroot()
    for app in root.findall('IdList'):
        for l in app.findall('Id'):
            id = l.text
            print(id)
```

So, at the moment, this script returns the taxonomy ID for each "viral species", and I don't know how I can use these IDs to retrieve all proteins for each virus...
Something like this? #!/usr/bin/env python3 # -*- coding: utf-8 -*- import json import pandas as pd from Bio import Entrez Entrez.email = "[email protected]" import io input = io.StringIO(""" family,genus,species Reoviridae,Cardoreovirus,Eriocheir sinensis reovirus Reoviridae,Mimoreovirus,Micromonas pusilla reovirus Reoviridae,Orbivirus,African horse sickness virus Reoviridae,Orbivirus,Bluetongue virus """) df = pd.read_csv(input, sep=',') # To load from file, do (check if has column names (header) or not): # df = pd.read_csv(filename, sep=',', header=None) print("List of species:", list(df.species)) # Entrez esearch result limit RETMAX = 33 def get_ids(response) -> list: j = json.loads(response.read()) return list(j['esearchresult']['idlist']) for species in df.species: txids = get_ids(Entrez.esearch(db="Taxonomy", term=species, retmode="json")) for txid in txids: prids = get_ids(Entrez.esearch(db="Protein", term=F"txid{txid}[Organism:noexp]", retmax=RETMAX, retmode="json")) print(F"Species {species} ({txid}), protein IDs: {prids}") for prid in prids: # print(json.loads(Entrez.esummary(db="Protein", id=prid, retmode="json").read())['result'][prid]) fasta = Entrez.efetch(db="Protein", id=prid, rettype="fasta", retmode="text").read() print(fasta)
biostars
{"uid": 450524, "view_count": 854, "vote_count": 2}
Hi - I am obtaining the BCF output from samtools mpileup on an RNA-seq bam file, in order to corroborate variants seen in the exome. Thus, for corroborating one single SNP, my command looks like this:

```
samtools mpileup -ug -t DP -t DV -t DP4 --min-MQ 40 --min-BQ 35 -f GRCh37.fa --region "22:29907105-29907105" RNAsample.bam | bcftools view --min-alleles 3

#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT RNAsample
22 29907105 . G A,<X> 0 . DP=47;I16=14,12,6,7,974,36674,484,18098,520,10400,260,5200,488,11274,261,5927;QS=0.666667,0.333333,0;VDB=0.280567;SGB=-0.683931;RPB=0.814533;MQB=1;MQSB=1;BQB=0.956592;MQ0F=0 PL:DP:DV:DP4 82,0,143,160,182,243:39:13:14,12,6,7
```

Because samtools is always outputting "*an non-ref base 'X' represents an base has not been seen from the alignment data*", I need to filter for a minimum of 3 alleles (one ref, one alt and the X). I find this all counter-intuitive and would like to omit the non-ref base 'X' from the samtools mpileup output, as in this case it will not be needed for anything. Does anybody know how to do this properly (excluding sed, awk and the like ;))

EDIT: The pileup itself for this position looks as follows:

```
$ samtools mpileup -t DP -t DV -t DP4 --min-MQ 40 --min-BQ 35 -f /home/mpschr/bin/bcbionextgen/data/genomes/Hsapiens/GRCh37/seq/GRCh37.fa --region "22:29907105-29907105" RNAsample.bam
22 29907105 G 39 .,...AAA..A,,a,a..a,a,,A..aA,,a,.,..a,. DDDDDDDDDDDDJFJJEGHIIJJIJJEJDEDDJDJHDDF
```
Ok, after a lot of experimenting I have found a solution with which I am satisfied - I only use the `bcftools view`, since I already did the variant calling, and then pipe it to `bcftools norm`, where norm shortens the representation of the indels and with the option '-m-both' splits up multiple alleles to a line each. Thus the command looks like this samtools mpileup -ug -t DP -t DV -t DP4 --min-MQ 40 --min-BQ 35 -f GRCh37.fa --region "22:29907105-29907105" sample.bam | bcftools view | bcftools norm -m-both -f GRCh37.fa and gives this output: ``` #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT Sample.bam 22 29907105 . G A 0 . DP=95;I16=35,6,18,2,2110,121546,1123,68521,2460,147600,1200,72000,869,19919,428,10158;QS=0.651934,0.348066;VDB=0.12166;SGB=-0.692067;RPB=0.998114;MQB=1;MQSB=1;BQB=0.426273;MQ0F=0 PL:DP:DV:DP4 255,0,255:61:20:35,6,18,2 22 29907105 . G <X> 0 . DP=95;I16=35,6,18,2,2110,121546,1123,68521,2460,147600,1200,72000,869,19919,428,10158;QS=0.651934,0;VDB=0.12166;SGB=-0.692067;RPB=0.998114;MQB=1;MQSB=1;BQB=0.426273;MQ0F=0 PL:DP:DV:DP4 255,255,255:61:20:35,6,18,2 Lines total/modified/skipped: 1/0/0 ``` By piping it through `grep -v '<X>'` the non reference bases are removed from the output. In any case, thanks for all the help Devon
biostars
{"uid": 161981, "view_count": 5563, "vote_count": 1}
From what I can find in papers, heatmaps using RNA seq data are created in several ways: using log-fold changes, z-scores, etc. The edgeR vignette states: > Inputing RNA-seq counts to clustering or heatmap routines designed for microarray data is not straight-forward, and the best way to do this is still a matter of research. To draw a heatmap of individual RNA-seq samples, we suggest using moderated log-counts-per-million. This can be calculated by cpm with positive values for prior.count, for example : > logcpm <- cpm(y, log=TRUE) Just out of curiosity, I was wondering, how would it differ from calculating z-scores using the fitted.values (derived from the glmQLFit step) in the RNA seq analysis pipeline. Would the heat maps created using z-scores calculated from fitted.values turn out all that different?
The purpose of making heatmap of logCPMs is to display sample to sample variability, which allows you to see variability both between groups and between replicates. Plotting fitted values instead would be pointless because fitted values do not show variability between replicates, and also incorrect because fitted values are not normalized by library size.
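A minimal sketch of the vignette's suggestion (assuming `y` is a `DGEList`; `top_genes` is a placeholder vector of the genes you want to display):

```r
library(edgeR)
# Moderated log2-CPM; a positive prior.count damps the noisy
# log-fold variability of very low counts
logcpm <- cpm(y, log = TRUE, prior.count = 2)

# Row-wise z-scores of logCPM are fine for *display*, since they
# preserve the between-sample variability the answer refers to
heatmap(t(scale(t(logcpm[top_genes, ]))))
```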
biostars
{"uid": 409305, "view_count": 1543, "vote_count": 1}
As part of a larger project I've implemented GSEA in Matlab. I want to test my code by comparing my output - p-values for KEGG pathways say, with another implementation. In my implementation the GSEA algorithm starts from a ranked list of gene ids with real valued weights. So I want to upload the same list to the tool used for comparison. However, as I look round tools for GSEA most of them seem to start at an earlier point, requiring the original gene data, which would mean that there would be scope for differences in preprocessing to affect the output, whereas I want a pure comparison of the GSEA part. What would you say is the simplest way to perform GSEA (not another enrichment algorithm) on a weighted list of entrez-gene ids? I'm fine if it involves some coding, but don't want to get bogged down editing large amounts of other people's code to perform what should be a straightforward test to check that my results are comparable with the expected results.
I believe that http://www.broadinstitute.org/cancer/software/genepattern/modules/docs/GSEAPreranked/1 should work for you.
biostars
{"uid": 201245, "view_count": 5282, "vote_count": 1}
Hi there, I know it can be considered naive, but I am trying to plot normalised counts for selected genes output from my DESeq2 analysis, which I have further confirmed with RT-qPCR experiments (the idea is just to confirm that these genes follow the same, expected trend). Since these genes are supposed to be significantly regulated upon my treatment, I was wondering whether a statistical test should be performed on those plots, or whether this should be considered a 'wrong step'.
How are you plotting the counts? Side by side with treated vs untreated? If so, why not just use the p-value from DESeq2? If not, you need to clarify what/how exactly you're plotting.
biostars
{"uid": 426988, "view_count": 640, "vote_count": 1}
Hi everybody,

I'm using SPAdes to try to assemble a trypanosomatid genome. I just have around 150 MB of reads from a Nanopore sequencer; the smallest read is about 150 bp, and the longest about 84 kb. I know that I don't have enough reads for a genome assembly, but I am trying to do it anyway. That's my command line:

```
spades.py --careful --nanopore --only-assembler -o out_spades -s file.fastq
```

It does not work. Has somebody used it before? Thanks!
Ok, actually it will only use Nanopore as part of a hybrid assembly. From the docs: "SPAdes should not be used if only PacBio CLR, Oxford Nanopore, Sanger reads or additional contigs are available." So it's expecting the nanopore reads filename after the --nanopore argument (which explains your error) but it's also expecting another file to do a hybrid assembly. Do you have Illumina or other data?
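If you do have Illumina reads, the hybrid invocation would look roughly like this (a sketch; the file names are placeholders):

```bash
# Illumina paired-end reads carry the assembly; the Nanopore reads
# supplement them for gap closure and repeat resolution
spades.py --careful -o out_hybrid \
    -1 illumina_R1.fastq -2 illumina_R2.fastq \
    --nanopore nanopore_reads.fastq
```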
biostars
{"uid": 191600, "view_count": 3908, "vote_count": 1}
I am trying to run the command readVcf in R, but it says the function is not found. I have already downloaded the package "VariantAnnotation" - I don't know if that helps. Does anyone have any idea?
Just run these 3 command lines in R: ``` source("http://bioconductor.org/biocLite.R") biocLite("VariantAnnotation") #install the package library("VariantAnnotation") #load the package example("readVcf") #optional, test the function by running example codes ```
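Note that on current R/Bioconductor releases `biocLite` has been retired; the modern equivalent of the first two lines is:

```r
install.packages("BiocManager")
BiocManager::install("VariantAnnotation")
library(VariantAnnotation)
```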
biostars
{"uid": 64380, "view_count": 24806, "vote_count": 2}
Dear all, I am trying to calculate differential gene expression in DESeq2 for a simple two condition experiment with three replicates for each condition. After loading the DESeq2 library I load my count table using: countData <- as.matrix(read.table("combined.counts.CvsT.txt", header = T, row.names = 1)) > head(countData) Col0C1 Col0C2 Col0C3 Col0T1 Col0T2 Col0T3 AT1G01010 756 225 331 445 941 676 AT1G01020 346 207 256 516 474 264 AT1G01030 45 36 23 32 67 63 AT1G01040 1675 1163 1671 2914 3335 2065 AT1G01046 10 6 6 17 32 18 AT1G01050 2035 1541 1946 2833 3320 2012 In order to create the experimental design/meta table I do: colData <- data.frame(condition=ifelse(grepl("Col0C", colnames(countData)), "control", "triggered")) rownames(colData) <- colnames(countData) colData condition Col0C1 control Col0C2 control Col0C3 control Col0T1 triggered Col0T2 triggered Col0T3 triggered When I then try to calculate differential expression, I get the following error: > dds <- DESeqDataSetFromMatrix(countData, colData, formula(~ condition)) Error in DESeqDataSetFromMatrix(countData, colData, formula(~condition)) : could not find function "DESeqDataSetFromMatrix" I am using R Studio, but I don't think this is a problem of DESeq2 (it also happens when I run the script in the console version of R) but rather of the script I use to generate the metatable. Does anyone have an idea what's going wrong? My session info is: R version 3.4.2 (2017-09-28) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 7 x64 (build 7601) Service Pack 1 Matrix products: default locale: [1] LC_COLLATE=German_Germany.1252 LC_CTYPE=German_Germany.1252 LC_MONETARY=German_Germany.1252 [4] LC_NUMERIC=C LC_TIME=German_Germany.1252 attached base packages: [1] parallel stats4 stats graphics grDevices utils datasets methods base other attached packages: [1] SummarizedExperiment_1.8.1 DelayedArray_0.4.1 matrixStats_0.53.1 [4] Biobase_2.38.0 GenomicRanges_1.30.0 GenomeInfoDb_1.14.0 [7] IRanges_2.12.0 S4Vectors_0.16.0 BiocGenerics_0.24.0 loaded via a namespace (and not attached): [1] Rcpp_0.12.16 compiler_3.4.2 pillar_1.2.1 RColorBrewer_1.1-2 [5] plyr_1.8.4 XVector_0.18.0 bitops_1.0-6 base64enc_0.1-3 [9] tools_3.4.2 zlibbioc_1.24.0 rpart_4.1-13 tibble_1.4.2 [13] gtable_0.2.0 lattice_0.20-35 rlang_0.2.0 Matrix_1.2-12 [17] GenomeInfoDbData_1.0.0 cluster_2.0.6 nnet_7.3-12 grid_3.4.2 [21] survival_2.41-3 BiocParallel_1.12.0 foreign_0.8-69 latticeExtra_0.6-28 [25] Formula_1.2-2 ggplot2_2.2.1 scales_0.5.0 splines_3.4.2 [29] colorspace_1.3-2 acepack_1.4.1 RCurl_1.95-4.10 lazyeval_0.2.1 [33] munsell_0.4.3 Thanks al lot. Ricky
I don't see DESeq2 in your sessionInfo(), how did you load it?
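For completeness, the usual fix (assuming DESeq2 is installed) is simply to attach the package before calling its functions:

```r
library(DESeq2)  # makes DESeqDataSetFromMatrix visible
dds <- DESeqDataSetFromMatrix(countData, colData, design = ~ condition)
```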
biostars
{"uid": 304670, "view_count": 14864, "vote_count": 3}
``` gi|110640213|ref|NC_008253.1|_4832863_4833322_0:0:0_0:0:0_11 163 gi|110640213|ref|NC_008253.1| 4832863 23 70M = 4833253 460 TACCGCAATGTGCTTATTGAAGATGACCAGGGAACGCATTTCCGGCTGGTTATCCGCAATGCCGGAGGGC 2222222222222222222222222222222222222222222222222222222222222222222222 XT:A:U NM:i:0 SM:i:23 AM:i:0 X0:i:1 X1:i:1 XM:i:0 XO:i:0 XG:i:0 MD:Z:70 XA:Z:gi|110640213|ref|NC_008253.1|,+4019608,70M,1; ``` Suppose I have the following alignment (illustrated above). As you can see there are multiple alignments in the form (rname, pos, cigar, NM), where NM defines edit distance. I have noticed that the pos field can either be positive or negative. Where do positional values beginning with a positive or negative sign start in the sequence specified by rname?
Following the advice given from Brian Bushnell and Istvan Albert, I was able to find the block of code that produce the multiple hits. Under `bwa/bwase.c`, beginning at line 467, we can see that, in fact, the `+` and `-` signs refer to the positive and negative strands.
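So in an `XA:Z:` entry such as `gi|110640213|ref|NC_008253.1|,+4019608,70M,1;`, the leading `+` or `-` on the position field gives the strand of the alternative hit. A small parsing sketch (illustrative, not part of bwa):

```python
def parse_xa(xa_value):
    """Split a bwa XA:Z tag into (rname, strand, pos, cigar, nm) tuples."""
    hits = []
    for hit in xa_value.rstrip(";").split(";"):
        # rsplit from the right in case the reference name contains commas
        rname, signed_pos, cigar, nm = hit.rsplit(",", 3)
        strand = signed_pos[0]      # '+' = forward strand, '-' = reverse strand
        pos = int(signed_pos[1:])   # 1-based leftmost mapping position
        hits.append((rname, strand, pos, cigar, int(nm)))
    return hits

print(parse_xa("gi|110640213|ref|NC_008253.1|,+4019608,70M,1;"))
```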
biostars
{"uid": 168615, "view_count": 2487, "vote_count": 1}
I know for the reverse procedure there are lots of methods available, but how can I change my fasta file with sequences like:

```
>gi|189094002
MDDAELNAIRQARLAELQRNAAGGGSSTNPSSSSSGGAQDSAQENMTITILNRVLTNEARERLSRQQTKITFNRKNIAADDDEDDDDFFD
>gi|68485955
MDDAELNAIRQARLAELQRNAAGGGSSTNPSSSSSGGAQDSAQENMTITILNRVLTNEARERLSRVKIVRRDSQQKQQTKITFNRKNIAGDDEDDDDFFD
```

to:

```
>gi|189094002
MDDAELNAIRQARLAELQRNAAGGGSSTNPSSSSSGGAQDSAQENMTITILNRVLTNEARERLSR
QQTKITFNRKNIAADDDEDDDDFFD
>gi|68485955
MDDAELNAIRQARLAELQRNAAGGGSSTNPSSSSSGGAQDSAQENMTITILNRVLTNEARERLSR
VKIVRRDSQQKQQTKITFNRKNIAGDDEDDDDFFD
```

I know it's just about inserting line breaks in each sequence, but is there any way I can check whether the fasta file is single-line fasta or not and, if it is, add line breaks to each sequence?

PS: I am using Windows.

Thanks for your consideration.
From https://github.com/jimhester/fasta_utilities you can use wrap.pl, which "limits FASTA lines to 80 characters".

Or, from the FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/commandline.html#fastx_clipper_usage), you can use fasta_formatter:

```
fasta_formatter -i yourfile.fasta -w 80 -o yourout.fasta
```

> [-w N] = max. sequence line width for output FASTA file.
> When ZERO (the default), sequence lines will NOT be wrapped -
> all nucleotides of each sequences will appear on a single
> line (good for scripting).

###**Advice**: *if you will work in bioinformatics, try any Linux flavor.*
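Since you are on Windows, a standard-library Python alternative may also help (a minimal sketch; the file names are placeholders):

```python
def wrap(seq, width=80):
    """Break a sequence string into fixed-width lines."""
    return "\n".join(seq[i:i + width] for i in range(0, len(seq), width))

with open("in.fasta") as fin, open("out.fasta", "w") as fout:
    seq_parts = []
    for line in fin:
        line = line.rstrip("\n")
        if line.startswith(">"):            # header: flush the previous record
            if seq_parts:
                fout.write(wrap("".join(seq_parts)) + "\n")
                seq_parts = []
            fout.write(line + "\n")
        elif line:
            seq_parts.append(line)
    if seq_parts:                           # flush the last record
        fout.write(wrap("".join(seq_parts)) + "\n")
```

This also handles the "check" implicitly: it re-wraps correctly whether the input was single-line or already wrapped.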
biostars
{"uid": 205745, "view_count": 19307, "vote_count": 1}
Part of a paper I am writing involves comparing different human genome assemblies. I would like to have some kind of citation for the assemblies hg18, hg19, and hg38. It seems like many other papers do not cite them, for example http://nar.oxfordjournals.org/content/early/2010/10/18/nar.gkq963.full. However, I noticed some conflicting info in various database entries for the genomes and would like to know which information to use. For example, the release dates for hg19 differ between the NCBI assembly database and the Genome Reference Consortium page: February 27, 2009 vs March 3, 2009.

http://www.ncbi.nlm.nih.gov/assembly/GCF_000001405.13/
http://www.ncbi.nlm.nih.gov/projects/genome/assembly/grc/human/index.shtml (click GRCh37)
Citing versions of any particular bioinformatics/genomics resources can get tricky because there is often no formal publication for every release of a given dataset. Further complicating the situation is the fact that you will often come across different dates (and even names) for the same resource. E.g. the latest cow genome assembly generated by the University of Maryland is known as 'UMD 3.1.1'. However, the UCSC genome browser uses their own internal IDs for all cow genome assemblies and refers to this as 'bosTau8'. Someone new to the field might see the UCSC version and not know about the original UMD name. Sometimes you can use dates of files on FTP sites to approximately date sequence files, but these can sometimes change (sometimes files accidentally get removed and replaced from backups, which can change their date). The key thing to aim for is to provide suitable information so that someone can reproduce your work. In my mind, this requires 2-3 pieces of information: 1. The name or release number of the dataset you are downloading (provide alternate names when known) 2. The specific URL for the website or FTP site that you used to download the data 3. The date on which you downloaded the data E.g. The UMD 3.1.1 version of the cow genome assembly (also known as bosTau8) was downloaded from the UCSC Genome FTP site (ftp://hgdownload.cse.ucsc.edu//apache/htdocs/goldenPath/bosTau8/bigZips/bosTau8.fa.gz). When no version number is available - it is very unhelpful not to provide version numbers of sequence resources: they can, and *will* change - I always refer to the date that I downloaded it instead.
biostars
{"uid": 136527, "view_count": 7935, "vote_count": 1}
Hello, I have the following experimental design for an experiment in which we conducted RNA sequencing: 2 treatment groups and 2 batches, but one of the batches is exclusively one treatment.

```
sample  treatment  batch
3       control    A
4       control    A
5       control    B
6       control    B
7       control    B
13      control    B
14      control    B
15      BD         A
16      BD         A
17      BD         A
18      BD         A
18      BD         A
```

It is a poor experimental design, but unfortunately it is the data that I currently must work with. To account for the potential batch effect of batch B, I am using the following design formula:

```r
dds <- DESeqDataSetFromMatrix(countData = countData, colData = colData, design = ~ batch + treatment)
dds$treatment <- factor(dds$treatment, levels=c("control", "BD"))
dds$batch <- factor(dds$batch, levels=c("A", "B"))
dds <- DESeq(dds, full=design(dds), reduced = ~ batch)
```

The results give me many fewer DE genes than if I simply ignored the batches and used only `~ treatment`. This makes sense, because according to PCA and clustering there is a batch effect in my samples. I've read the DESeq2 manual and many posts, but am not a statistician and would love to hear feedback on whether the design I'm using here makes sense, given the lack of representation of both treatment groups in the batch I intend to correct for. Thank you!
Your design is correct. It's not an issue that the batch is just within one treatment, since that treatment is itself present in both batch A and B. You only have a big problem when a whole group is present in its own batch.
biostars
{"uid": 159072, "view_count": 10627, "vote_count": 10}
Hi All,

I have a question regarding the adapter trimming process for small RNA-seq data. The library for this dataset was prepared using the NEBNext multiplex small RNA sample prep set for Illumina (E7300S/L: https://www.neb.com/-/media/catalog/datacards-or-manuals/manuale7300.pdf). So I used `bbduk.sh` from BBTools (https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/bbduk-guide/) with the following command:

```
bbduk.sh -Xmx1g in=Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq out=/media/owner/7ef86942-96a5-48a7-a325-6c5e1aec7408/trimmed_files/bbmap_trimmed/clean_Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq ref=NEB-SE_5_and_3_Prime.fa ktrim=r k=23 mink=11 hdist=1 tpe tbo
```

The adapter file `NEB-SE_5_and_3_Prime.fa` contains both 5' and 3' adapters:

```
>NEB_sRNA_read_1
AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC
>NEB_sRNA_read_2
AGATCGGAA
```

So the problem I have is with the trimmed file. The trimmed file has got rid of the first adapter:

```
cat clean_Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq | head -n 20000 | grep AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC
```

(no hits), but it still shows the second adapter:

```
cat clean_Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq | head -n 1000 | grep AGATCGGAA
TTTCTCTGAGCACTCCTTAGTACAAGATCGGAAGAGCACACGTCGAACTC
AAATGTTCTGAGGACTGGTTCTAGATCGGAAGAGCACCGTCTGAACTCCA
GATGGGCCCCGGGTTCGATTCCCGGCGAACGCACCAGATCGGAAGAGCCA
TTGGACGTGTTATTTTCAGACAAGATCGGAAGAAGCACACGTCTGAACTC
```

Can someone please help me understand if I need to remove both of these adapters in order to perform downstream/expression analysis? I have been using btrim to trim adapters from RNA-seq data (in that case I never had to provide an adapter infile), but this is the first time I am doing it with bbduk (and also with Trimmomatic) for small RNA-seq data. In the case of small RNA-seq data, do we normally trim both 5' and 3' adapters and include both adapter sequences in the infile for trimming? Can someone please help me understand this process? Thank you for your help in advance.
The smallest adapter sequence (`NEB_sRNA_read_2`) is just 9 bp; with your current settings of `k=23 mink=11` it is not being used at all. Try using `k=9 mink=6 hdist=0`. The flags `tbo` and `tpe` have no effect here, as you have single-end data.
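Concretely, that suggestion amounts to re-running your command with only the k-mer parameters changed (the output path is shortened here for readability):

```bash
bbduk.sh -Xmx1g in=Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq \
    out=clean_Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq \
    ref=NEB-SE_5_and_3_Prime.fa \
    ktrim=r k=9 mink=6 hdist=0
```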
biostars
{"uid": 327802, "view_count": 4500, "vote_count": 2}
Hi all, I aligned my RNA-seq reads against the reference genome using TopHat, with the default aligner bowtie2 and the default parameters:

```
tophat -p 8 -G $annotation -o out $database L1_1.fq.gz L1_2.fq.gz
```

After getting the results, I found that in the unmapped.bam file some reads have exactly the same sequence as the reference. The following is one line from the unmapped.sam file:

```
DGZN8DQ1:360:H9RN8ADXX:1:1101:4791:1895 69 * 0 255 * * 0 0 TTTTGCTTTCTGACTCTGTGCTTGTGCCTTCAAGACTTTCACAACGATTTTCTGCTCCTCAATAAGGAAAGCCCGAGATCGGAAGAGCACACGTCTGAAC CCCFFFFFHHHHHJJJJJJJIJJJHIJJJJJIJJJIJJJJIJJJJJIJJJJJJJJJJJJIJIJJJJJIJJJJJJHHFFDEDDDDDDDDDDDDDDDDDCCD
```

Does anyone know why bowtie2 doesn't treat those reads as mapped? Thanks
Dirty little secret: bowtie2 doesn't always find exact matches. If you change the order of reads in a file you'll sometimes get different alignment results for them. I've never bothered to find the reason, since this ends up affecting very few reads.
biostars
{"uid": 106942, "view_count": 2996, "vote_count": 2}
Hi,

Imagine a Fastq file generated from a Roche 454 platform. You have no information whatsoever about the protocol that was used. The headers of the reads give no specific information, just random alphanumeric characters. Each read starts with a 30 bp sequence and ends with a 15 bp sequence that looks to me like an adapter (?).

How can I be sure whether the reads are single-end or paired-end? Is there any way to know that just on the basis of the sequence information?

Thanks ;)
For 454 FLX:

```
grep 'GTTGGAACCGAAAGGGTTTGAATTCAAACCCTTTCGGTTCCAAC' 454Reads.fastq | wc -l
```

You should see a big number for 454 'paired-end' data, or 0 for single-end data. The built-in linker sequences are:

1. -linker flx -- GTTGGAACCGAAAGGGTTTGAATTCAAACCCTTTCGGTTCCAAC, a palindrome, equal to its own reverse complement.
2. -linker titanium -- TCGTATAACTTCGTATAATGTATGCTATACGAAGTTATTACG and the reverse complement CGTAATAACTTCGTATAGCATACATTATACGAAGTTATACGA.

For more, see <a href="http://wgs-assembler.sourceforge.net/wiki/index.php/SffToCA">http://wgs-assembler.sourceforge.net/wiki/index.php/SffToCA</a>
biostars
{"uid": 111047, "view_count": 7534, "vote_count": 1}
I uploaded a single FASTA file with multiple gene clusters from different organisms to an online program called antiSMASH (fungiSMASH in my case). Most clusters have a ketosynthase (KS) gene in them. antiSMASH identified the putative KS genes and provided me with an output in a text file. Can anyone assist me in extracting (parsing may be the correct term?) the genes of interest (nucleotide sequences) with the associated accession number and definition, or just the taxon name from the definition?

I believe all of the genes of interest will say:

    /aSDomain="PKS_KS"

and if this is present then I will want the range of nucleotides indicated adjacent to the heading aSDomain, for example:

    aSDomain        2527..3816

However, sometimes the adjacent numbers will say something like:

    aSDomain        join(1610..1702,1756..2109,2165..3010)

in which case I believe I would want to concatenate each range indicated. Or:

    aSDomain        complement(join(10640..10648,10717..11439))

in which case I believe I would want to concatenate all ranges and then take the complementary sequence.

I would like to do an alignment and then make a phylogenetic tree based on the extracted KS genes. I believe FASTA format would be a good output to have my KS genes in, but I can convert if necessary. This is my first time using antiSMASH and I'm new to coding, so I apologize for any obvious blunders; I would have preferred to attach a file of my output data but I didn't see that as an option!

Thanks in advance for any help! Here's a link to all of the output that antiSMASH generated (including the text file of all annotated genes): https://drive.google.com/open?id=1KWbh3D7jY7u5AytlGCLxC65MR_KKUY7X

If someone has a better way of attaching a large text file (~1.7 million characters), I'm all ears.
Hi [mac03pat][1], This should work: ``` #!/usr/bin/env python from Bio import SeqIO from Bio.SeqFeature import SeqFeature gbk = "antiSMASH_processed_geneclusters.txt" fa = "antiSMASH_processed_geneclusters.fa" input_handle = open(gbk, "r") output_handle = open(fa, "w") for record in SeqIO.parse(input_handle, "genbank"): features = [feature for feature in record.features if feature.type == "aSDomain"] for feature in features: if feature.qualifiers["aSDomain"][0] == "PKS_KS": output_handle.write(">%s %s\n%s\n" % ( record.id, feature.location, SeqFeature(feature.location).extract(record.seq))) output_handle.close() input_handle.close() ``` [1]: https://www.biostars.org/u/55218/
biostars
{"uid": 381183, "view_count": 1719, "vote_count": 2}
I'm using Tophat to align RNA-Seq reads to a genome. I want to know how many reads aligned, but I didn't know about the -g option when I ran Tophat, so the normal commands like `samtools view -c accepted_hits.bam` aren't giving me the number of unique reads in the file; they're giving me the total number of alignments. Is there any way I can get this information without re-running Tophat?
```
samtools view -c -F 256 accepted_hits.bam
```

The `-F 256` excludes records with the "secondary alignment" flag set, so each read is counted once rather than once per reported alignment.
biostars
{"uid": 147596, "view_count": 2848, "vote_count": 3}
Hello, I'm currently analyzing an RNA-seq experiment consisting of clinical patient samples pre- and post-treatment, for individuals that had no response (NR, n=6), partial response (PR, n=4), or complete response (CR, n=2) to our compound. Unfortunately, no replicates were collected for each individual patient, but we're doing the best we can with these samples. The goal is hypothesis generation for downstream validation. Our main questions are: 1) Which genes consistently change expression after treatment? 2) Which genes change specifically in CR/PR patients and are unchanged in NR patients after treatment? I'm trying to determine the best way to analyze these data with these limited resources. I've analyzed the pre- and post-treatment samples with CuffDiff and DESeq2, and have markedly different results. I'm currently trying to analyze them with IsoEM2/IsoDE2 as these perform bootstrapping to report confidence intervals and were designed for an experiment without replicates. Do you have any insight on which of these programs (or a different one) that would be best suited for an experiment without replicates? There doesn't seem to be any consensus in the literature, so I was hoping for any input. Ultimately, I plan on calling differentially expressed genes by pooling the two CR, four PR, and six NR patients as "biological replicates" to determine genes that change within each group, then looking at the fold change of these genes within each individual patient. Does this sound like a reasonable approach? I've been wondering if there is a reasonable way to analyze each of these patients individually, then find which genes are consistently differentially expressed. I'm hesitant to put any faith into the reported p-values from DE programs, as there are no replicates. Would it be reasonable to use expression (minimum FPKM cutoff) and log2-fold change to call "putative differentially expressed genes" in each patient, then examine the overlap? Or am I opening a can of worms with this line of thinking? Thank you very much for the help, this is a wonderful community!
Do not analyse the patients separately; in that design you have no replicates. And even if you could analyse them without replicates, looking for overlaps is a terrible way of finding which effects are significant - it relies on the arbitrary thresholds you have chosen to call significance having some non-arbitrary meaning.

But if you analyse them together, you do have biological (but not technical) replicates. There is absolutely nothing wrong with this design. It's not a trick or a fudge; it's the correct design for the experiment. See my answer at https://www.biostars.org/p/292316/#292420 for more discussion of what is a biological and what is a technical replicate.

As mentioned by Friederike, analysis of an experiment with almost exactly this design is explained in section 3.5 of the edgeR user manual. Definitely use edgeR, DESeq2 or limma-voom to do the analysis (they all use approximately the same algorithm). I generally prepare counts using salmon and then tximport to import the data into R. Don't use cuffdiff - it cannot do these kinds of complex experimental designs.
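For orientation, a paired pre/post analysis in the style of edgeR manual section 3.5 looks roughly like this (a sketch; `counts` and the sample ordering are placeholders for your own data):

```r
library(edgeR)

# 12 patients, each with a pre- and a post-treatment sample
patient <- factor(rep(1:12, each = 2))
time    <- factor(rep(c("pre", "post"), times = 12), levels = c("pre", "post"))

# Blocking on patient makes the treatment comparison within-patient
design <- model.matrix(~ patient + time)

y   <- DGEList(counts = counts)
y   <- calcNormFactors(y)
y   <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
qlf <- glmQLFTest(fit, coef = "timepost")  # post-vs-pre effect
topTags(qlf)
```

Your second question (genes changing in CR/PR but not NR) would then be a treatment-by-response interaction, which the same design-matrix machinery can express.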
biostars
{"uid": 298462, "view_count": 8880, "vote_count": 4}
I have a set of protein accession numbers and I want to retrieve their sequences programmatically. Is there any way to do that from UniProt? There is a documentation page on UniProt itself, but to be honest I could not understand it. Any comment would be appreciated. Thanks
You can get it from UniProt directly using `curl` as follows: $ cat uniprot_ids.txt P00750 P00751 P00752 $ for acc in `cat uniprot_ids.txt` ; do curl -s "https://www.uniprot.org/uniprot/$acc.fasta" ; done > uniprot_seqs.fasta But if you choose to go with Entrez Direct, then I suggest the following command: $ cat uniprot_ids.txt | epost -db protein | efetch -db protein -format fasta > uniprot_seqs.fasta
biostars
{"uid": 354555, "view_count": 3024, "vote_count": 1}
I have a set of paired-end and single-end MiSeq Illumina reads: `Sample_1.fastq, Sample_2.fastq, Sample_s.fastq`

If I wanted to assemble this with ABySS it would be:

    abyss-pe K=Kmer name=Sample_Kmer in='Sample_1.fastq Sample_2.fastq' se='Sample_s.fastq'

Now I want to perform a hybrid assembly with PacBio reads thrown into the mix. I have 3 PacBio subread files: `Sample.1.subreads.fastq, Sample.2.subreads.fastq, Sample.3.subreads.fastq` (all ~2 gigabytes).

My question is: how do I include all of these read files in one assembly command? I was able to do this in SPAdes/SOAPdenovo so far.
I think ABySS will not use long reads in the assembly step; it only uses long reads for scaffolding. Check its [manual](https://github.com/bcgsc/abyss#rescaffolding-with-long-sequences). [Cerulean](http://sourceforge.net/projects/ceruleanassembler/) performs hybrid assembly starting from an ABySS assembly, but it is designed for small genomes.
biostars
{"uid": 148363, "view_count": 2921, "vote_count": 1}
Hi all,

I am working on microarray expression data. I am new to this, so pardon me if I provide incomplete details.

I have downloaded the raw data from GEO (Agilent-014850 Whole Human Genome Microarray 4x44K G4112F) and we want to look at expression levels of genes with different datasets added on top (expression levels of genes with and without epigenetic marks, transcription factors).

When I was extracting the information, I realised that there are multiple probes for certain genes. At first, I took the highest expression value for each gene; later I compared it with the average, and the difference was notable.

I was wondering if there is a method to identify which probes are present in all isoforms of a gene and which are present in only a few.

Thanks for the help.
If you get the probe sequences, you should be able to map them back to the genome (a useful exercise anyway: sometimes the genome sequence has changed and the probes, for example, no longer map uniquely), and then use the latest gene annotations to figure out which specific exon they hit. From there, if you sort by transcript ID, you should be able to get an idea of which probes cover which isoforms. There also seem to be some alternative splicing databases in existence, but I've never used them, so I can't tell you anything about them: http://www.eurasnet.info/tools/asdatabases
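If you go the probe-remapping route, here is a minimal sketch with Bioconductor's Biostrings (assuming fixed-width probe sequences, e.g. Agilent 60-mers, and FASTA files whose names are placeholders):

```r
library(Biostrings)

probes <- readDNAStringSet("probes.fa")       # constant-width probe sequences
txs    <- readDNAStringSet("transcripts.fa")  # transcript sequences of your genes

pd   <- PDict(probes)         # PDict requires patterns of equal width
hits <- vcountPDict(pd, txs)  # probes x transcripts matrix of match counts

# Number of transcripts each probe hits; probes hitting all isoforms of a
# gene can then be separated from probes hitting only some of them
rowSums(hits > 0)
```

Note that probes designed antisense to the transcript would need `reverseComplement()` before matching.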
biostars
{"uid": 74687, "view_count": 8161, "vote_count": 3}
Hi friends, I was trying to do adapter trimming with Trimmomatic, but I got an error:

```
[izadi@lbox161 ~]$ cd /usr/data/nfs6/izadi/Trimmomatic-0.33/
[izadi@lbox161 Trimmomatic-0.33]$ java -jar trimmomatic-0.30.jar SE -phred33 SRR1944913.fastq output.fq ILLUMINACLIP:TruSeq3-SE:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36
Error: Unable to access jarfile trimmomatic-0.30.jar
```

Do you know what happened? Thank you
Going to guess that since you are in the directory for Trimmomatic-0.33, you should be using `java -jar trimmomatic-0.33.jar` instead of `trimmomatic-0.30.jar`: the jar name in your command doesn't match the version you actually have installed.
biostars
{"uid": 160739, "view_count": 5580, "vote_count": 1}
Is there a way to filter out broad peaks / overlaps from a narrowPeak file? Example: I am looking at Pol II and have very nice sharp peaks at the TSS of genes, and these are the kind of peaks I'm interested in identifying inside exons / introns. I'm looking for overlaps of a TF at exons, but I am only interested in narrow, sharp, clean peaks and not large islands with multiple overlapping peaks.

I am mainly using bedtools as I have no programming experience, but I don't mind learning another tool if needed.
Adding my comment as an answer. Not sure how "broad" these peaks are, given it's a narrowPeak file, but can't you use a size-based filter? In particular, if you are not interested in regions with overlapping peaks, you can first merge them using bedtools merge, and then filter out large/merged peaks using awk as follows:

**Size-based method**: Assuming you want to merge all peaks which are less than 50bp apart, and remove all peaks that are larger than 300bp:

    $ bedtools merge -d 50 -i [input file] > merged.bed
    $ awk '{if ($3-$2 <= 300) print $0;}' merged.bed > filtered.bed

**Overlap-based method:** Assuming you want to merge only overlapping peaks, and remove all merged features where more than two peaks were merged:

    $ bedtools merge -c 1 -o count -i [input file] > merged.bed
    $ awk '{if ($4 <=2) print $0;}' merged.bed > filtered.bed

Note that bedtools merge removes all except the first three columns in the output unless the extra columns are explicitly retained.
biostars
{"uid": 167381, "view_count": 2088, "vote_count": 1}
I have run the mothur and UPARSE pipelines and they were very straightforward. Now I am giving QIIME a try, but it is a headache: so unorganized and unlike what I have experienced so far. For example, I have to make a mapping file, but it seems I never point to the fastq files in the mapping file?! In my data the reverse read quality is very bad, so I am going to use only the forward reads. So my question is how to make a mapping file for such a setup, and it would be great if someone could give a hint on how to "Pick OTUs through OTU table"!

A QIIME mapping file example looks like:

    #SampleID BarcodeSequence LinkerPrimerSequence Treatment DOB Description

I wonder which column is for the path to the fastq files? Also, I have already removed barcodes and primers with other tools. Thank you
QIIME is very helpful but can have its painful moments. Here is the pipeline for Illumina data: http://qiime.org/tutorials/processing_illumina_data.html

I make a fake mapping file because I already have fastq files without any adapters. Just make sure the sample names are correct; the barcode sequences can be fake. Convert fastq to fasta and use the pipeline.
biostars
{"uid": 118019, "view_count": 5397, "vote_count": 1}
Hi. Like other identifiers, Entrez IDs change as time goes by. I used two libraries, 'org.Hs.eg.db' and 'annotate', to convert Entrez IDs into gene symbols, but some Entrez IDs failed to convert because of such updates. Take Entrez ID 164022 as an example:

    library(org.Hs.eg.db); library(annotate)
    getSYMBOL('164022', data = 'org.Hs.eg')
    # 164022
    #     NA

If you search for this ID at NCBI, it says that 164022 was replaced with 653505 (see http://www.ncbi.nlm.nih.gov/gene/?term=164022). Therefore, you should use the newest Entrez ID to get the gene symbol:

    getSYMBOL('653505', data = 'org.Hs.eg')
    #    653505
    # "PPIAL4A"

There are about two hundred Entrez IDs whose symbol matching failed, and manual searching takes a huge amount of time, so I need a solution. How can I update old Entrez IDs to the newest Entrez IDs? Is there a function or library for this?
This is a very good question. You may try the [mygene.info][1] service:

    $ curl "mygene.info/v2/gene/164022?fields=entrezgene"
    {
      _id: "653505",
      entrezgene: 653505
    }

Remove the `fields` parameter to get more information. For more documentation on mygene.info, check http://mygene.info/v2/api#MyGene.info-gene-annotation-services-GET-Gene-annotation-service

[1]: http://mygene.info/
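The same service also has a Bioconductor client, mygene, if you prefer to stay in R. A minimal sketch mirroring the REST call above; I would expect the retired ID to be resolved to its replacement in the same way:

```r
library(mygene)

# Same lookup as /v2/gene/164022 above
res <- getGene("164022", fields = "symbol")
res  # expected to report _id 653505 with symbol PPIAL4A
```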
biostars
{"uid": 201622, "view_count": 3039, "vote_count": 5}
I want to get expression data from TCGA for the cancer of my interest; around half of the data are RNASeqV2 and the rest RNASeq. This is from TCGA:

> RNASeq Version 2 is similar to [RNASeq][1] in that it uses sequencing data to determine gene expression levels. RNASeq Version 2 uses a different set of algorithms to determine the expression levels, and the results are presented in a slightly different set of files.

> There are two analysis pipelines used to create Level 3 expression data from RNA Sequence data. The first approach used at TCGA relies on the [RPKM][2] method, while the second method uses MapSplice to do the alignment and RSEM to perform the quantitation.

I want to use these data to build a regulatory network. My question is: should I use just RNASeq or just RNASeqV2, or can I mix them all in my model? What's the problem with, and the disadvantage of, using both of them (some samples from RNASeqV2 and others from RNASeq)?

[1]: https://wiki.nci.nih.gov/x/TghhAg
[2]: https://wiki.nci.nih.gov/x/VxNCB
I would use the dataset that maximizes the sample size (which I would guess to be V2). The isoform expression levels will vary if you use a different tool for mRNA quantification. The gene-level quantification should be more similar (and is what I would recommend using anyway), but it is best to avoid potential sources of bias if you can.

I would expect all old samples to have been re-run with the latest pipeline. To verify, check the publication data site to see what data is listed; for example, I only see V2 quantification for the latest publication: https://tcga-data.nci.nih.gov/docs/publications/
biostars
{"uid": 98547, "view_count": 1911, "vote_count": 2}
What are the advantages and disadvantages of mapping to the genome versus the transcriptome? Is there a good-quality transcriptome available for Macaca mulatta and human? The advantage of mapping to the transcriptome is definitely the time: it takes longer to map reads to the genome. I also heard that for some species a good-quality transcriptome is not available, so mapping to the genome is preferred, and vice versa. I could not find any comparison paper where the same reads were mapped to both genome and transcriptome. I found the same questions on SeqAnswers, though they are out of date (from 2010 and 2014).
It depends on what your interest is. If you are happy with the transcriptomes available then you can use them, with the following caveats (besides the points you already mention above):

A. You are not going to be able to identify new transcripts.
B. There is some chance that reads may align to regions they did not originate from.

On the plus side: you can use programs like `salmon` (which don't need to align the data) to speed the process up while requiring significantly fewer hardware resources. Using `salmon` with `genome decoys` will help avoid stray matches (ref: https://www.biostars.org/p/456231/#456366 ).

The human transcriptome is reasonably well characterized at this time. If you are not interested in alternatively spliced transcripts, then there are the `RefSeq Select` and `MANE` ([**LINK**][1]) datasets that you can use. I am not sure what the status is for Macaca.

[1]: https://www.ncbi.nlm.nih.gov/refseq/MANE/
biostars
{"uid": 9486348, "view_count": 1691, "vote_count": 1}
Hi, I'd like to ask for help with finding an efficient way of counting reads from a bam file that lie within an interval (from a bed file). The problem is I only want reads that lie **entirely** within a given interval (no matter how long they are, or what percentage of the given interval they cover). The intervals may be overlapping. I'm dealing with amplicon sequencing. Currently, the only way I am able to do this is by separately intersecting (bedtools) the bam with each region in my bed file and then using `samtools -c`. This approach however takes too long. To me this seems like a very basic problem which I believe must have been solved but I'm unable to google the right solution. Thanks for any suggestions.
Using my tool samjs https://github.com/lindenb/jvarkit/wiki/SamJS ``` samtools view -bu -F 4 input.bam seq2:250-300 |\ java -jar jvarkit-git/dist-1.133/samjs.jar -e 'record.alignmentStart >= 250 && record.alignmentEnd <= 300' |\ samtools view -Sc - ```
biostars
{"uid": 150530, "view_count": 4742, "vote_count": 2}
I have been using clusterProfiler, which is a very useful package for gene set analysis and visualisation. I would like to use the '`cnetplot`' function to plot a network of GO terms and the related genes. However for larger networks, the automatic display can be confusing and it would be helpful to be able to move nodes around. In the past I could do this with with `cnetplot(fixed=FALSE)` option, but after updating R and re-installing clusterProfiler, the output remains static. I am using R 3.5.3 with clusterProfiler v3.10.1 which I installed using Bioconductor 3.8. I have installed and loaded the 'igraph' package, and the following test code produces output in an interactive window, as desired: library(igraph) g <- make_ring(10) tkplot(g) Is there any way to make cnetplot output interactive, or is that functionality simply not available in the latest release? Any help would be greatly appreciated!
It is indeed not available in the latest release, as all the visualization methods were rewritten from scratch using ggplot2. However, if you want to use the old methods, you can use the [doseplot](https://github.com/GuangchuangYu/doseplot) package.
biostars
{"uid": 375555, "view_count": 4952, "vote_count": 1}
Hello, is it possible to easily get rs IDs from genomic locations? For example, from this:

    Chr1 158669597 158669597
    Chr11 72946311 72946311

to this:

    Chr1 158669597 158669597 rs141159720
    Chr11 72946311 72946311 rs145119561

My idea was to download all SNPs from UCSC or Ensembl and then compare the two files by genomic coordinates; in R it's simply:

    mydata$rs <- allsnps$rs[match(mydata$loc, allsnps$loc)]

However, the list of all human SNPs is huge; the reference file is about 20 GB. Do you know any easy solution to get a result like this? Thanks.
If you're worried about disk usage, [BEDOPS][1] does set operations on compressed files in a format called [Starch][2], which offers a higher compression ratio than bzip2 and, as mentioned, allows direct set operations without any extraction to an intermediate file. With BEDOPS installed, you could create a Starch file of SNPs like so:

    $ wget -qO- http://hgdownload.cse.ucsc.edu/goldenpath/hg19/database/snp147.txt.gz \
        | gunzip -c \
        | awk -v OFS="\t" '{ print $2,$3,($3+1),$5 }' \
        | sort-bed - \
        | starch - \
        > hg19.snp147.starch

(Adjust as needed for the SNPs you need and the reference genome you're working with.) An uncompressed raw BED file of SNPs is roughly 5.5 GB. Compressed as a Starch file, as shown above, it should be about 5-10% of that, around 250-500 MB. Given a sorted, tab-delimited genomic regions file `regions.bed` like this:

    chr1 158669597 158669598
    chr11 72946311 72946312

you could run a `bedmap` operation with the compressed SNP annotations like this:

    $ bedmap --echo --echo-map-id --delim '\t' regions.bed hg19.snp147.starch
    chr1 158669597 158669598
    chr11 72946311 72946312 rs765534001

In this example `chr1:158669597-158669598` does not associate with a SNP ID, while `chr11:72946311-72946312` associates with SNP `rs765534001`. So there are data hygiene issues with your input you should fix before using this toolkit:

1. Your regions should be half-open, zero-based. This means that the start and stop columns should not be equal. You can use `awk` to increment the stop field by 1, where start and stop are equal.

2. Your regions should be sorted with BEDOPS `sort-bed` for set operations to run quickly and correctly.

3. Your regions should start with `chr` and not `Chr`. This can be fixed with `awk`, as well.

[1]: http://bedops.readthedocs.io/en/latest/
[2]: http://bedops.readthedocs.io/en/latest/content/reference/file-management/compression/starch.html
biostars
{"uid": 272876, "view_count": 3680, "vote_count": 1}
Hello All,

Let's say that some whole-genome sample was sequenced at a coverage of 30x. As far as I'm aware, this means that, with respect to the reference genome's nucleotides, the data represent each nucleotide 30 times on average. Let's also say that the tissue sample was heterozygous at some loci, where the frequencies of the two alleles are both 0.5. Does this mean that the coverage of each allele at these locations is, in effect, 15x? I.e., if you aligned the data (and it aligned correctly), you would expect to see ~15 reads with allele 1 and ~15 with allele 2.

N.B. I ask because I am trying to make a simulated cancer genomics dataset. For this I am using ART, and have "mutated" the hg19.fa file by introducing some point mutations. This mutated file will represent one haploid set, whilst the non-mutated hg19.fa file will represent the other haploid set; this should add realistic point mutations, which are usually heterozygous in nature. I then plan to sequence at 30x, so I was going to run ART on each file at 15x and then combine to get 30x. Any thoughts?

Thanks, Izaak
Theoretically the coverage per allele would indeed be total coverage / 2 (for diploid genomes). However, more often than not it's not exactly 50%.
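As a quick illustration of how much that per-allele split fluctuates, here is a small simulation (numbers purely illustrative, and it assumes total depth is fixed at 30x, whereas in reality total depth varies too):

```r
set.seed(1)
# At a heterozygous site with total depth 30, the reads carrying allele 1
# follow Binomial(30, 0.5); the remainder carry allele 2.
allele1 <- rbinom(n = 10000, size = 30, prob = 0.5)
mean(allele1)                        # ~15, as expected
quantile(allele1, c(0.025, 0.975))   # roughly 10 to 20 reads at ~95% of sites
```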
biostars
{"uid": 224269, "view_count": 2462, "vote_count": 2}
Hi, I wonder why, for calculating gene lengths, they calculate the median of the transcript lengths in the goseq manual (it's under section 5.3)? I suppose it should be the sum rather than the median. The manual can be found here: http://www.bioconductor.org/packages/release/bioc/vignettes/goseq/inst/doc/goseq.pdf Cheers!
The goal is to determine if there's a bias by gene length. In order to do this, one needs to derive some sort of gene length measure. There are a couple of ways to do that:

1. Union gene model: the total non-redundant exonic length of a gene.
2. Estimated length: derived using expectation maximization, where you then have an estimate of the expected gene length within each sample.
3. Median transcript length: what's used here, which is just the median of the annotated transcript lengths.

If you actually summed the transcripts, as you suggest, then you'd get odd results for genes with many isoforms. The result also wouldn't match any biologically plausible length (e.g., if a gene has 20 isoforms, each ~1kb, then your method would yield a length of 20kb rather than a more reasonable ~1kb estimate).
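A toy example of the difference (the gene and its isoform lengths are made up):

```r
set.seed(42)
# 20 annotated isoforms of a hypothetical gene, each about 1 kb long
tx_len <- round(rnorm(20, mean = 1000, sd = 50))

median(tx_len)  # ~1000 bp: a biologically plausible length for the gene
sum(tx_len)     # ~20000 bp: no transcript of this gene is anywhere near that long
```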
biostars
{"uid": 140155, "view_count": 2070, "vote_count": 1}
Hi, I am trying to match positions in a list of sequences like this:

    library(seqinr)
    at <- ("ATATATAT")
    s1 <-ifelse(at[8]=="T"||"A" && at[7]=="A"||"T" && at[6]=="T"||"A",5,
         ifelse(at[2]=="T"||"A" && at[4]=="A"||"T" && at[1]=="T"||"A",'1','0' ))
    s1

It works fine only for one sequence. When I tried it in a for loop I got an error like

    invalid 'x' type in 'x && y'

Any help is much appreciated. Thanks
To simplify your example, take the condition: if the 2nd or 4th position of every sequence is A or T, then TRUE.

    # example data
    x <- c("AAGTA", "AAGTA", "AAGTA", "ACGAA")

    # in this example all TRUE
    all(substr(x, 2, 2) %in% c("A", "T") | substr(x, 4, 4) %in% c("A", "T"))
    # [1] TRUE

(The error in your code comes from applying `||`/`&&` to character values: `at[8]=="T"||"A"` does not mean "position 8 is T or A".)

If this is not the solution you are looking for, then please clearly provide example input and the expected output.
biostars
{"uid": 378692, "view_count": 904, "vote_count": 1}
I am trying to post query to a webserver : http://www.imtech.res.in/raghava/antibp/submit.html but I am getting an error Traceback (most recent call last): File "crawler.py", line 4, in <module> conn = httplib.HTTPConnection("http://www.imtech.res.in/raghava/antibp/submit.html") File "/usr/lib/python2.7/httplib.py", line 704, in __init__ self._set_hostport(host, port) File "/usr/lib/python2.7/httplib.py", line 732, in _set_hostport raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) httplib.InvalidURL: nonnumeric port: '//www.imtech.res.in/raghava/antibp/submit.html' The python script is shown below: import httplib, urllib params = urllib.urlencode({'seqname':'GICACRRRFCPNSERFSGYCRVNGARYVRCCSRR','format':'Amino acid sequence in single letter code', 'terminus':'N-terminus', 'method':'svm', 'svm_th':'0', 'type': 'Submit'}) headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"} conn = httplib.HTTPConnection("http://www.imtech.res.in/raghava/antibp/submit.html") conn.request("POST", "", params, headers) response = conn.getresponse() print response.status, response.reason data = response.read() conn.close() What could be the problem? Thank you.
That's how I would do it, with the disclaimer that I'm no expert in querying web pages and I don't know anything about the server in question:

```python
import mechanize

br = mechanize.Browser()
br.set_handle_robots(False)
br.open("http://www.imtech.res.in/raghava/antibp/submit.html")
br.select_form(nr = 0)

## See what is available on this web page:
for f in br.forms():
    print f
#<POST http://www.imtech.res.in/cgibin/antibp/antibp1.pl multipart/form-data
#  <TextControl(seqname=)>
#  <TextareaControl(seq=)>
#  <FileControl(file=<No files added>)>
#  <SelectControl(format=[*nformat, sformat])>
#  <RadioControl(terminus=[*1, 2, 3])>
#  <RadioControl(method=[*1, 2, 3])>
#  <TextControl(svm_th=0)>
#  <TextControl(ann_th=0.6)>
#  <TextControl(qm_th=-0.2)>
#  <SubmitControl(<None>=Submit) (readonly)>
#  <IgnoreControl(<None>=<None>)>>

## Input your sequence and parameters:
br['seqname'] = 'myseq'
br['seq'] = 'GICACRRRFCPNSERFSGYCRVNGARYVRCCSRR'
br['format'] = ['nformat']
br['terminus'] = ['1']
br['svm_th'] = '0'

## Submit and collect results:
res = br.submit()
html = res.read()
```

Now `html` is a string of HTML that you could parse with an HTML parser or something else. The relevant bit in `html` should look like:

```
<td><font size="4"><b>Antibacterial Activiy</b></font></td></tr><tr>
<td align="CENTER">GICACRRRFCPNSER</td><td align="CENTER">1</td><td align="CENTER">1.975</td><td align="CENTER">YES</td></tr><tr>
<td align="CENTER">GYCRVNGARYVRCCS</td><td align="CENTER">18</td><td align="CENTER">1.051</td><td align="CENTER">YES</td></tr><tr>
<td align="CENTER">ICACRRRFCPNSERF</td><td align="CENTER">2</td><td align="CENTER">1.001</td><td align="CENTER">YES</td></tr><tr>
...
```
biostars
{"uid": 144838, "view_count": 4806, "vote_count": 1}
Hi all,

I use awk to extract rows from a text file:

    awk 'NR==FNR{vals[$1];next} ($1) in vals' indiv.txt file1.txt > new_file1.txt

But how can I use the same code for multiple files? I can run:

    awk 'NR==FNR{vals[$1];next} ($1) in vals' indiv.txt file1.txt > new_file1.txt
    awk 'NR==FNR{vals[$1];next} ($1) in vals' indiv.txt file2.txt > new_file2.txt
    awk 'NR==FNR{vals[$1];next} ($1) in vals' indiv.txt file3.txt > new_file3.txt
    awk 'NR==FNR{vals[$1];next} ($1) in vals' indiv.txt file4.txt > new_file4.txt
    awk 'NR==FNR{vals[$1];next} ($1) in vals' indiv.txt file5.txt > new_file5.txt
    ...
    awk 'NR==FNR{vals[$1];next} ($1) in vals' indiv.txt file29.txt > new_file29.txt

But is there some way to do it automatically, for example a loop, or something like file*.txt > new_file*.txt?

Thanks! Abdel
There is a variable called `FILENAME` which provides the name of the file `awk` is currently reading. You can use it to construct the name of the file `awk` should write its output to:

    $ awk 'NR==FNR{vals[$1];next} ($1) in vals { print $0 >> "new_"FILENAME}' indiv.txt file1.txt file2.txt file3.txt

fin swimmer
biostars
{"uid": 357961, "view_count": 1172, "vote_count": 1}
Why is it so difficult to make things in ggplot2? I like the way it helps with customisation, but the learning curve is steep nevertheless.

Here is my sample dataframe:

    df <-
    gene HSC CMP
    ENSG00000158292.6 1.8102636 2.456869
    ENSG00000162496.6 2.6796705 6.203838
    ENSG00000117115.10 3.4509115 5.555739
    ENSG00000159423.14 3.6809277 5.063446
    ENSG00000053372.4 5.7089974 6.851090

If I plot a boxplot with base R I can simply write `boxplot(df[,-1],col=c("red","blue"))` and I get a boxplot, but when I try it with ggplot2 I'm having a difficult time:

    ex <- melt(df, id.vars=c("HSC", "CMP"))
    ggplot(data = ex, aes(x = CMP, y = HSC)) + geom_boxplot()

I get a single boxplot; what I want is a boxplot each for HSC and CMP, as I get with the simple base R boxplot. Any help or suggestions with my ggplot2 code would be highly appreciated.
    ex = melt(df, id.vars="gene")
    ggplot(ex, aes(x=variable, y=value)) + geom_boxplot()

Your `melt()` command produced nonsensical output: by passing the two measurement columns as `id.vars`, you told `melt()` they were identifiers, so the gene names ended up as the melted values. Melting with `id.vars = "gene"` instead stacks the HSC and CMP columns into a `variable`/`value` pair, which is what `geom_boxplot()` needs.
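For clarity, here is what the reshaped data look like with the example rows from the question; each row is one gene/sample measurement, and `geom_boxplot()` draws one box per level of `variable`:

```r
library(reshape2)  # melt() comes from here

ex <- melt(df, id.vars = "gene")
head(ex, 3)
# Each row is gene / variable (HSC or CMP) / value, e.g.
# ENSG00000158292.6  HSC  1.8102636
```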
biostars
{"uid": 284326, "view_count": 8510, "vote_count": 3}
I can easily view the **chromHMM tracks** from the **Roadmap project** but am not able to download the bed files from the UCSC Genome Browser. I want the **bed files** for **H3K27ac marks** for a couple of tissues like **Heart Left Ventricle**, **Fetal Heart**, etc. I am attaching a [screenshot][2] of the browser with the Heart Left Ventricle chromHMM track. Also, [this is the link][1] for the same at the UCSC Genome Browser, just in case the screenshot does not upload; the track is second from the top and is named "chromHMM tracks from Roadmap".

Does anyone know how to download these tracks as bed files?

[1]: https://genome.ucsc.edu/cgi-bin/hgTracks?db=hg19&position=chr10%3A180000-187400&hgsid=392137799_1PzPhmDS4ZXa40AFMunFuaQTgkjR
[2]: https://www.dropbox.com/s/bmjeuhp200vkklc/chromHMM_Roadmap_H3k27ac_HeartLeftVentricle.png?dl=0
I had emailed Ting Wang (of the **Roadmap Epigenomics Project**) and he promptly replied back addressing my question. Here is the answer to this question, in his own words: > All the original chromHMM tracks are here for you to download: https://sites.google.com/site/anshulkundaje/projects/epigenomeroadmap > > All Roadmap/ENCODE data are available through the WashU EpiGenome Browser (http://epigenomegateway.wustl.edu/). If you right-click on any track and select "information", you will usually see a url for data source.
biostars
{"uid": 115784, "view_count": 5798, "vote_count": 2}
Hi everyone,

I'm working on copy-number data from TCGA. I downloaded "Gene Level Copy Number Variation" using the TCGAbiolinks R package and the following code:

    library(TCGAbiolinks)
    query_cnv <- GDCquery(project = "TCGA-KICH",
               data.category = "Copy Number Variation",
               data.type = "Gene Level Copy Number Scores")
    GDCdownload(query_cnv)
    data <- GDCprepare(query_cnv)

Everything works great. I get a nice dataframe whose first three columns are "Gene.Symbol" / "Gene.ID" / "Cytoband". To facilitate the analysis and be able to merge data from other sources such as RNA-seq, I tried to convert the Ensembl gene IDs contained in Gene.Symbol into HUGO symbols using biomaRt:

    library(biomaRt)
    mart <- useDataset("hsapiens_gene_ensembl", useMart("ensembl"))
    genes <- gsub(".\\.","\\1",data$Gene.Symbol)
    geneIDs <- getBM(filters = "ensembl_gene_id", attributes = c("ensembl_gene_id","hgnc_symbol"), values = genes, mart = mart)

However, of 19729 different Ensembl IDs, I only get 3269 matches. What is surprising is that, according to the GDC docs (https://docs.gdc.cancer.gov/Data/Bioinformatics_Pipelines/CNV_Pipeline/), this dataset should contain the CNV associated with each gene, so I would expect more matches in coding regions. When I search for the descriptions of the Ensembl IDs not found by biomaRt, I get zero answers from both Ensembl and NCBI (example: "ENSG000000081221" "ENSG000000081314" "ENSG000000676014" "ENSG000000783616" "ENSG000000788015"). So, it's as if these IDs did not exist in any database. Where are they coming from? Did I miss something? Is it normal to have so few protein-coding genes in this kind of dataset? Should I process such data differently for the analysis?

Any suggestions or comments will be really helpful.
The IDs as you present them do not exist. The problem is this line of code, which is not doing what [I believe] you believe it's doing:

    genes <- gsub(".\\.","\\1",data$Gene.Symbol)

You need to remove the final digits from each Ensembl ID after the dot. This works:

    library(TCGAbiolinks)

    query_cnv <- GDCquery(project = "TCGA-KICH",
      data.category = "Copy Number Variation",
      data.type = "Gene Level Copy Number Scores")

    GDCdownload(query_cnv)
    data <- GDCprepare(query_cnv)
    data <- data.frame(data)
    ens <- sub('\\.[0-9]*$', '', data$Gene.Symbol)

    require(org.Hs.eg.db)
    ens_to_symbol <- mapIds(
      org.Hs.eg.db,
      keys = ens,
      column = 'SYMBOL',
      keytype = 'ENSEMBL')

    head(ens_to_symbol)
    ENSG00000008128 ENSG00000008130 ENSG00000067606 ENSG00000078369 ENSG00000078808
           "CDK11A"          "NADK"         "PRKCZ"          "GNB1"          "SDF4"
    ENSG00000107404
             "DVL1"

    library(biomaRt)
    mart <- useDataset('hsapiens_gene_ensembl', useMart('ensembl'))
    ens_to_symbol_biomart <- getBM(
      filters = 'ensembl_gene_id',
      attributes = c('ensembl_gene_id', 'hgnc_symbol'),
      values = ens,
      mart = mart)

    ens_to_symbol_biomart <- merge(
      x = as.data.frame(ens),
      y = ens_to_symbol_biomart ,
      by.y = 'ensembl_gene_id',
      all.x = TRUE,
      by.x = 'ens')

    head(ens_to_symbol_biomart)
                  ens hgnc_symbol
    1 ENSG00000000003      TSPAN6
    2 ENSG00000000005        TNMD
    3 ENSG00000000419        DPM1
    4 ENSG00000000457       SCYL3
    5 ENSG00000000460    C1orf112
    6 ENSG00000000938         FGR

Kevin
biostars
{"uid": 9463102, "view_count": 1195, "vote_count": 2}
Dear Biostars,

For those familiar with IBM LSF, I am wondering if someone knows how to configure a job, via the job itself or a config file, so as to control how stdout is handled when a job enters the SSUSP state. As far as I understand my issue, a job will restart from the beginning after entering SSUSP, but its stdout is appended to rather than overwritten. Below is an example of a submission. This happens for a variety of functions, regardless of output type (compressed or plain text file):

    bsub -R 'span[hosts=1]' -N -o stdout.file myFunction ...
In my specific case, passing `-oo` instead of `-o` was a solution, because `-oo` overwrites the output file instead of appending to it. However, I was not able to address the original issue; I will update if I find a work-around...

https://scicomp.ethz.ch/wiki/LSF_mini_reference
biostars
{"uid": 9541383, "view_count": 257, "vote_count": 2}
Hello everyone, I want to run a GSEA using the Broad Institute's online tool. For that I generally use an FPKM or RPKM matrix, and I was wondering if you think it's OK to use the normalized counts file that I get as output from DESeq2 instead. Thank you
If you have already run DESeq2, it would be easier for you to use GSEA in "preranked" mode, giving as input a per-gene metric based on the logFC and the p-values that you got as a result of your differential expression analysis. It is explained pretty well here: http://crazyhottommy.blogspot.com/2016/08/gene-set-enrichment-analysis-gsea.html
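For example, here is a minimal sketch of building a `.rnk` file for GSEAPreranked from a DESeq2 results table (column names follow DESeq2's `results()`; the signed -log10(p) metric is one common choice, not the only one):

```r
res <- as.data.frame(results(dds))  # 'dds' is your fitted DESeqDataSet
res <- res[!is.na(res$pvalue) & !is.na(res$log2FoldChange), ]

rnk <- data.frame(
  gene   = rownames(res),
  metric = sign(res$log2FoldChange) * -log10(res$pvalue)
)
rnk <- rnk[order(rnk$metric, decreasing = TRUE), ]

# GSEAPreranked expects a headerless, tab-separated gene/metric file
write.table(rnk, "deseq2.rnk", sep = "\t", quote = FALSE,
            row.names = FALSE, col.names = FALSE)
```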
biostars
{"uid": 404716, "view_count": 2987, "vote_count": 1}
Hello everyone,

I would like to download somatic SNP data from [TCGA](http://tcga-data.nci.nih.gov). But if I have a look at the data matrix right [here](https://tcga-data.nci.nih.gov/tcga/dataAccessMatrix.htm?mode=ApplyFilter&showMatrix=true&diseaseType=BRCA&tumorNormal=TN&tumorNormal=T&tumorNormal=NT), there are two color codes, "Tumor, matched normal" and "Normal, matched Tumor". I looked up the [online guide](https://wiki.nci.nih.gov/display/TCGA/Selecting+Data+Sets#SelectingDataSets-Color-CodingDataSets) and the [Getting Started with the Data Matrix](https://wiki.nci.nih.gov/display/TCGA/Getting+Started+with+the+Data+Matrix#GettingStartedwiththeDataMatrix-legend) guide.

They explain it like this:

- TN (Tumor, matched normal) – Data for a tumor tissue for which matched normal tissue exists.
- NT (Normal, matched tumor) – Data for normal tissue for which matched tumor tissue exists.

But where is the difference?

Maybe some of you are more experienced with TCGA than me.

All the best,

Mario
There is no difference between TN and NT for somatic mutations, because tumors and normals are paired up for somatic variant calling. It only makes sense for data generated separately for tumors/normals... like RNA-seq or methylation assays. The data matrix doesn't have a very intuitive interface because they tried to generalize the filtering UI for different data types. I'd recommend downloading the somatic mutations (MAF files) directly from the [TCGA DCC][1] or from [Firehose][2]. Read more about TCGA MAF files [here][3]. You can also use the [firehose_get][4] script to download latest available MAFs that are fed into Firehose. Here's how to do that: ``` wget http://gdac.broadinstitute.org/runs/code/firehose_get_latest.zip unzip firehose_get_latest.zip ./firehose_get -b -only Mutation_Packager_Calls data latest ``` To list the other kinds of data (like `Mutation_Packager_Calls` in the command above) look them up [here][5]. [1]: https://tcga-data.nci.nih.gov/tcgafiles/ftp_auth/distro_ftpusers/anonymous/tumor/ [2]: http://gdac.broadinstitute.org/runs/stddata__latest/samples_report/index.html [3]: https://www.biostars.org/p/69222/ [4]: https://confluence.broadinstitute.org/display/GDAC/Download [5]: http://gdac.broadinstitute.org/runs/stddata__latest/data/
biostars
{"uid": 86929, "view_count": 21687, "vote_count": 6}
Hi Guys,

I am using `pileup` from the Rsamtools R package. I need to get the total count for each position. For example, for chr11, position 1643082, there is a total of 1 count of A, 3 counts of C and 18 counts of T. I want the result shown in this order: `chr11:1643082 A:1, C:3, T:18`, and likewise for position chr11:47663948. There are thousands of different positions and I need to calculate the counts for each nucleotide. Thanks!

`table1`

```r
  seqnames      pos strand nucleotide count             which_label
1    chr11  1643082      +          A     1   chr11:1643082-1643082
2    chr11  1643082      -          C     3   chr11:1643082-1643082
3    chr11  1643082      +          T    15   chr11:1643082-1643082
4    chr11  1643082      -          T     3   chr11:1643082-1643082
5    chr11 47663948      +          C    16 chr11:47663948-47663948
6    chr11 47663948      -          C    11 chr11:47663948-47663948
7    chr11 47663948      +          T     2 chr11:47663948-47663948
```
You can do this easily with the `data.table` package. Something like this:

```r
library(data.table)
as.data.table(df)[, sum(count), by = list(seqnames, pos, nucleotide)]
```

This collapses the `+` and `-` strand rows, giving one total per position and nucleotide.
biostars
{"uid": 146405, "view_count": 2354, "vote_count": 3}
Hi, I am going to get some data from plasmid sequencing to identify SNPs on the plasmids. What is done in the lab is the following:

- The plasmids are purified by size.
- We amplify the plasmids using the phi29 polymerase. The polymerase will go through the plasmid multiple times, hence we get the same sequence concatenated multiple times. **My question is related to this step.**
- We sequence it using Oxford Nanopore.

My question is about step two, where I wrote that "we get the same sequence concatenated multiple times". For me, this is a potential source of information to correct the base calls prior to aligning them to the reference, since you have the same sequence multiple times (concatemers). What I would like to know is if there is a tool that uses the information in the concatemers to improve the base calls. I have tried to find some methods but found none. I know that PacBio has a similar flavour with its "circular consensus calling", but I have not found any methodological explanation.

thanks!
Thanks to this [answer][1] on Bioinformatics Stack Exchange, there is a tool that does exactly what you want. The tool is called [C3POa][2].

[1]: https://bioinformatics.stackexchange.com/questions/5397/error-correction-within-the-long-read/5406#5406
[2]: https://github.com/rvolden/C3POa
biostars
{"uid": 347757, "view_count": 1037, "vote_count": 2}
Hello everyone, I want to download all rbcL sequences for a species list. I don't have GenBank IDs; I just want to use a list of species names and then download all the rbcL genes that were deposited in GenBank. I would appreciate it if anyone could give a suggestion. Best, Lingyun
I resolved the problem: NCBIminer works pretty well.
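For anyone who prefers a scripted route, something along these lines with the rentrez package should also work (the species names and `retmax` are illustrative; very large result sets would need batching):

```r
library(rentrez)

species <- c("Quercus robur", "Fagus sylvatica")  # your species list

for (sp in species) {
  q   <- sprintf("%s[Organism] AND rbcL[Gene]", sp)
  hit <- entrez_search(db = "nuccore", term = q, retmax = 200)
  if (hit$count == 0) next
  fa  <- entrez_fetch(db = "nuccore", id = hit$ids, rettype = "fasta")
  cat(fa, file = "rbcL.fasta", append = TRUE)
}
```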
biostars
{"uid": 366748, "view_count": 905, "vote_count": 1}
Hello Biostars-Community, I would need your kind suggestions regarding an issue with the Gviz package from Bioconductor. I am attempting plot Enhancer-Promoter connections and tried to (mis)use the AlignmentsTrack() function and the "sashimi" plots display type for it. Typically, alignment tracks are generated from BAM files and if I do that, I can nicely display the sashimi plots as shown in the manual. However I would like to visualize enhancer-promoter connections and thus attempted to create a fake alignment file from simulated reads spanning the coordinates of the enhancers and promoters and connecting them with introns. Unfortunately for this fake file, I can't get the sashimi plots to work. It will display the fake reads in the "pileup" mode, but when I choose type="sashimi", the track remains blank and I get a strange error. I suppose the sashimi plot depends on additional columns like the mapping quality or such, that would automatically be read from a BAM file, but which I didn't provide with the simulated data. Any suggestions? Thanks a lot Matthias Minimal working (not working) example: library("Gviz") library("GenomicRanges") eppairs <- structure( list( ID_trscpt = c("Ppp3cb.NM_001310426", "Ppp3cb.NM_001310427"), ID_enh = c( "WTonly_chr14:21460154-21460564", "WTonly_chr14:21297822-21298296" ), Relevance_int = c(21.32, 10.66), chr = c("chr14", "chr14"), start = c(21460154L, 21297822L), stop = c(21460564L, 21298296L), chr_trscpt = c("chr14", "chr14"), start_trscpt = c(21365694L, 21365694L), stop_trscpt = c(21366295L, 21366295L), Gene = c("Ppp3cb", "Ppp3cb") ), .Names = c( "ID_trscpt", "ID_enh", "Relevance_int", "chr", "start", "stop", "chr_trscpt", "start_trscpt", "stop_trscpt", "Gene" ), row.names = c(7981L, 7984L), class = "data.frame" ) # construct a fake alignment file for sashimi plots. For each relevance point connection, add 10 fake aligned read pairs read_weights <- floor(eppairs[,"Relevance_int"]*10) sashimidata <- eppairs[,c("start","stop","start_trscpt","stop_trscpt")] sashimidata <- apply(sashimidata,1,function(x){ y <- sort(as.numeric(x)) data.frame("start"=y[1],"stop"=y[4],"cigar"=paste0(y[2]-y[1],"M",y[3]-y[2],"N",y[4]-y[3],"M")) }) sashimidata <- data.frame("chr"=rep(eppairs[,"chr"],read_weights),do.call("rbind",sashimidata[rep(names(sashimidata),read_weights)])) track_ep_pairs <- AlignmentsTrack(range=with(sashimidata,GRanges(chr,IRanges(start,stop),"*",cigar)),genome="mm9",isPaired=FALSE) # works, I see gapped reads plotTracks(list(track_ep_pairs),extend.left = 0.1, extend.right = 0.1) # Error plotTracks(list(track_ep_pairs),extend.left = 0.1, extend.right = 0.1,type="sashimi") #Error in GAlignments(seqnames = seqnames(range), pos = start(range), cigar = range$cigar, : # 'cigar' must be a character vector with no NAs # BUT: #table(sashimidata$cigar) #601M93859N410M 474M67398N601M #213 106 #any(is.na(sashimidata$cigar)) #[1] FALSE
Thanks to everybody who read this post and tried to help. I have now solved the problem. I was unfamiliar with the data format of Gviz tracks, but it turned out that the underlying data can be manipulated just like a regular GRanges object. After I finally found out that a simple ranges(track_ep_pairs) gives you access to the data used to generate a track, I could analyze it and track down the issue. While *mcols()* won't work on tracks, *ranges()* does. It turns out that a simple factor-to-character conversion of the cigar column will do:

    track_ep_pairs <- AlignmentsTrack(range=with(sashimidata,GRanges(chr,IRanges(start,stop),"*",cigar)),genome="mm9",isPaired=FALSE)
    ranges(track_ep_pairs)$cigar <- as.character(ranges(track_ep_pairs)$cigar)

Now the sashimi plots render like a charm. Yeah!
biostars
{"uid": 374775, "view_count": 1523, "vote_count": 1}
I have 600,000+ unique protein sequences that I want to blast against themselves. For example, I want to run sequence 1 against sequence 1, sequence 2 against sequence 2, ... sequence 3453 against sequence 3453, etc. I have local BLAST installed on my computer and a blastdb prepared for the protein sequences. The problem is, each sequence would have to be blasted against the whole database when I will only retain the hit of each sequence against itself. Needless to say, I need a much more efficient method! Any idea?
You don't need a database at all: linearize the FASTA and run `blastp` on each sequence by itself, using it as both query and subject. For example, with SwissProt:

```
curl -ks "ftp://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.fasta.gz" | gunzip -c |\
awk '/^>/{printf("\n%s\n",$0);next;} {printf("%s",$0);} END{printf("\n");}' | grep -v '^$' |\
while read T
do
echo "$T" > prot.fa
read S
echo "$S" >> prot.fa
blastp -query prot.fa -subject prot.fa
done
```
biostars
{"uid": 136289, "view_count": 2254, "vote_count": 2}
Hello, I've watched a YouTube video about OMIM and learned that OMIM already had more than 3000 disease-associated genes as of last year. Now I want to get all of those genes from OMIM to do some experiments. I think maybe I should use OMIM's "Gene Map", but I still don't know how to search it to get all disease-associated genes. Do you have any good advice or a guide? Thank you in advance, naulty
You can download MIM IDs and the corresponding gene IDs from [here][1]; the `mim2gene.txt` file maps MIM numbers to NCBI Gene IDs and gene symbols.

[1]: http://www.omim.org/downloads
biostars
{"uid": 118566, "view_count": 12846, "vote_count": 1}
I have the vector:

    stri = c("Stage II", "Stage IVA", "Stage IIIB", "ACB")

and I want to remove the A and B in "Stage IVA" and "Stage IIIB", but not in "ACB"; I want to get

    c("Stage II", "Stage IV", "Stage III", "ACB")

Is there a method using gsub?
Fortunately, I found a way to solve it. Here is the answer:

    gsub("(^Stage.*)(A|B)", "\\1", stri, perl = TRUE)

Anchoring the pattern at `^Stage` restricts the substitution to the stage strings, so "ACB" is left untouched, and the greedy `.*` ensures that `(A|B)` only matches the trailing letter.
biostars
{"uid": 9542094, "view_count": 369, "vote_count": 1}
Hi there! I have a list of paired fastq. I want to filtrate human read from each of these pairs using BBmap tool. here is a functionnal snakemake rule i wrote: #Removing human reads from fastq rule clean_fastq: message: "Removing human reads from fastq." input: unzip_fastq_R1 = rules.fastq_unzip.output.unzip_fastq_R1, unzip_fastq_R2 = rules.fastq_unzip.output.unzip_fastq_R2, BBMAP = rules.get_BBmap.output.BBMAP , index = rules.create_index.output.index output: R1_cleaned = result_repository + "FASTQ_CLEANED/{sample}_R1_cleaned.fastq", R2_cleaned = result_repository + "FASTQ_CLEANED/{sample}_R2_cleaned.fastq" params: path_human= result_repository + "FASTQ_CLEANED/" shell: """ {input.BBMAP} in1={input.unzip_fastq_R1} in2={input.unzip_fastq_R2} \ basename={rules.clean_fastq.params.path_human}{wildcards.sample}_%.fastq outu1={output.R1_cleaned} outu2={output.R2_cleaned} \ path=temp/ """ Each jobs have a cost of about 20Go RAM and the probleme is that I can only have 32 Go available. I dont know if this is possible for snakemake to execute all jobs from a same rule in a queue, to avoid this memory problem. If not, I probably should check an other tool to process these fastq. Any ideas? (except bmtagger, I had too much problem with it haha) What would you suggest? Thx, Hadrien
I think you can use the [resources](https://snakemake.readthedocs.io/en/stable/snakefiles/rules.html#resources) directive in combination with the `--resources` command line option. E.g., your rule `clean_fastq` could be:

    rule clean_fastq:
        resources:
            mem_gb= 20,
        input:
            ...

then:

    snakemake -j 10 --resources mem_gb=32 ...

This will run at most 10 jobs at a time while keeping the sum of the declared `mem_gb` values at or below 32. Since each `clean_fastq` job declares 20, only one of them can run at any given time, which avoids the memory problem.
biostars
{"uid": 424706, "view_count": 1081, "vote_count": 1}
Hi everybody ! I'm trying to use PCA over Rna-seq data just to understand PCA. I saw pcaExplorer and I did not quite understand what it actually does. I have a count matrix of 90 samples (healthy and cancer). I have then thousands of genes. I want to obtain the genes that differ my samples the most for further classification ( I know there are other techniques but I just wanted to exercise a bit, with PCA ). Thats what I have done : Let z be the transpose of a count matrix, normalized with method TMM = samples as rows and genes as columns z <- t(counts.tmm) pca <- prcomp(z, center = TRUE,scale. = TRUE) summary(pca) pca.class <- sample.cond$class # here sample.cond$class will be HC or Pancreas (Tumor) ggbiplot(pca,ellipse=TRUE,groups = pca.class,choices= c(1,2),var.axes=FALSE) dev.off() The result is the following : Importance of components: PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 PC9 PC10 PC11 PC12 Standard deviation 22.6979 8.01951 6.89380 5.85445 5.39476 5.30267 5.2139 5.00639 4.99198 4.96105 4.85406 4.7934 Proportion of Variance 0.2937 0.03667 0.02709 0.01954 0.01659 0.01603 0.0155 0.01429 0.01421 0.01403 0.01343 0.0131 Cumulative Proportion 0.2937 0.33039 0.35749 0.37703 0.39362 0.40965 0.4251 0.43944 0.45365 0.46768 0.48111 0.4942 PC13 PC14 PC15 PC16 PC17 PC18 PC19 PC20 PC21 PC22 PC23 PC24 Standard deviation 4.74638 4.69116 4.66121 4.62983 4.56994 4.55543 4.50351 4.46740 4.44247 4.36019 4.32935 4.3111 Proportion of Variance 0.01284 0.01255 0.01239 0.01222 0.01191 0.01183 0.01156 0.01138 0.01125 0.01084 0.01069 0.0106 Cumulative Proportion 0.50706 0.51960 0.53199 0.54421 0.55612 0.56795 0.57951 0.59089 0.60214 0.61298 0.62367 0.6343 PC25 PC26 PC27 PC28 PC29 PC30 PC31 PC32 PC33 PC34 PC35 PC36 Standard deviation 4.26250 4.21480 4.15619 4.14941 4.04881 4.01388 3.94854 3.93874 3.91634 3.86539 3.81318 3.79119 Proportion of Variance 0.01036 0.01013 0.00985 0.00982 0.00935 0.00919 0.00889 0.00884 0.00874 0.00852 0.00829 0.00819 Cumulative Proportion 0.64462 0.65475 0.66460 0.67441 0.68376 0.69295 0.70183 0.71068 0.71942 0.72794 0.73623 0.74443 PC37 PC38 PC39 PC40 PC41 PC42 PC43 PC44 PC45 PC46 PC47 PC48 Standard deviation 3.77294 3.73427 3.72646 3.62194 3.59297 3.56278 3.51904 3.42127 3.38418 3.37216 3.32199 3.30519 Proportion of Variance 0.00812 0.00795 0.00792 0.00748 0.00736 0.00724 0.00706 0.00667 0.00653 0.00648 0.00629 0.00623 Cumulative Proportion 0.75254 0.76049 0.76841 0.77589 0.78325 0.79049 0.79755 0.80422 0.81075 0.81723 0.82352 0.82975 PC49 PC50 PC51 PC52 PC53 PC54 PC55 PC56 PC57 PC58 PC59 PC60 Standard deviation 3.27842 3.2430 3.21268 3.18443 3.17152 3.09140 3.05199 3.04186 3.03361 2.99736 2.98039 2.94492 Proportion of Variance 0.00613 0.0060 0.00588 0.00578 0.00573 0.00545 0.00531 0.00528 0.00525 0.00512 0.00506 0.00494 Cumulative Proportion 0.83588 0.8419 0.84776 0.85354 0.85928 0.86472 0.87003 0.87531 0.88056 0.88568 0.89074 0.89569 PC61 PC62 PC63 PC64 PC65 PC66 PC67 PC68 PC69 PC70 PC71 PC72 Standard deviation 2.92100 2.90968 2.86032 2.8395 2.82278 2.79180 2.74172 2.73852 2.70586 2.68556 2.64009 2.60152 Proportion of Variance 0.00486 0.00483 0.00466 0.0046 0.00454 0.00444 0.00429 0.00428 0.00417 0.00411 0.00397 0.00386 Cumulative Proportion 0.90055 0.90538 0.91004 0.9146 0.91918 0.92363 0.92791 0.93219 0.93636 0.94047 0.94445 0.94831 PC73 PC74 PC75 PC76 PC77 PC78 PC79 PC80 PC81 PC82 PC83 PC84 Standard deviation 2.58597 2.57208 2.52419 2.49683 2.45577 2.43039 2.4041 2.36481 2.31208 2.28884 2.24954 2.20218 Proportion of Variance 0.00381 0.00377 0.00363 0.00355 0.00344 0.00337 0.0033 
0.00319 0.00305 0.00299 0.00289 0.00276 Cumulative Proportion 0.95212 0.95589 0.95952 0.96308 0.96652 0.96988 0.9732 0.97637 0.97941 0.98240 0.98529 0.98805 PC85 PC86 PC87 PC88 PC89 PC90 Standard deviation 2.18536 2.1373 2.06398 1.94600 1.88876 7.458e-15 Proportion of Variance 0.00272 0.0026 0.00243 0.00216 0.00203 0.000e+00 Cumulative Proportion 0.99077 0.9934 0.99581 0.99797 1.00000 1.000e+00 And the image looks like this : ![enter image description here][1] My Questions are : 1) Why the PC are the samples? Shouldn't they be the genes? If I do `var.axes=TRUE` it actually shows me the arrows that are the genes so is correct but I did not get this point. 2) How can I get the genes that make my samples differ by condition the best from the PCA that i have computed? If I do not transpose the matrix : z <- counts.tmm # samples are in the columns and genes as rows pca <- prcomp(z, center = TRUE,scale. = FALSE) summary(pca) ggbiplot(pca,ellipse=TRUE,choices= c(1,2),var.axes=FALSE) dev.off() thats what I get : Importance of components: PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 PC9 PC10 PC11 PC12 Standard deviation 13.0772 4.07904 2.49366 2.04542 1.82674 1.50391 1.46282 1.41308 1.39098 1.33582 1.32702 1.30732 Proportion of Variance 0.5983 0.05821 0.02175 0.01464 0.01167 0.00791 0.00749 0.00699 0.00677 0.00624 0.00616 0.00598 Cumulative Proportion 0.5983 0.65649 0.67825 0.69288 0.70456 0.71247 0.71996 0.72694 0.73371 0.73996 0.74612 0.75210 PC13 PC14 PC15 PC16 PC17 PC18 PC19 PC20 PC21 PC22 PC23 PC24 Standard deviation 1.29400 1.28199 1.26799 1.25235 1.24683 1.21349 1.19880 1.18708 1.18165 1.17765 1.16516 1.15687 Proportion of Variance 0.00586 0.00575 0.00562 0.00549 0.00544 0.00515 0.00503 0.00493 0.00488 0.00485 0.00475 0.00468 Cumulative Proportion 0.75795 0.76370 0.76933 0.77482 0.78025 0.78541 0.79043 0.79536 0.80025 0.80510 0.80985 0.81453 PC25 PC26 PC27 PC28 PC29 PC30 PC31 PC32 PC33 PC34 PC35 PC36 Standard deviation 1.15105 1.13993 1.12768 1.1218 1.11720 1.10388 1.09192 1.08421 1.07808 1.07271 1.06507 1.05828 Proportion of Variance 0.00464 0.00455 0.00445 0.0044 0.00437 0.00426 0.00417 0.00411 0.00407 0.00403 0.00397 0.00392 Cumulative Proportion 0.81917 0.82371 0.82816 0.8326 0.83693 0.84119 0.84537 0.84948 0.85354 0.85757 0.86154 0.86546 PC37 PC38 PC39 PC40 PC41 PC42 PC43 PC44 PC45 PC46 PC47 PC48 PC49 Standard deviation 1.05420 1.0415 1.03585 1.02542 1.0145 1.00792 0.99140 0.97666 0.97385 0.96267 0.9568 0.95232 0.94773 Proportion of Variance 0.00389 0.0038 0.00375 0.00368 0.0036 0.00355 0.00344 0.00334 0.00332 0.00324 0.0032 0.00317 0.00314 Cumulative Proportion 0.86935 0.8731 0.87689 0.88057 0.8842 0.88773 0.89117 0.89450 0.89782 0.90106 0.9043 0.90744 0.91058 PC50 PC51 PC52 PC53 PC54 PC55 PC56 PC57 PC58 PC59 PC60 PC61 Standard deviation 0.9416 0.93214 0.92504 0.91419 0.91151 0.89366 0.88537 0.88206 0.87722 0.86586 0.86438 0.85371 Proportion of Variance 0.0031 0.00304 0.00299 0.00292 0.00291 0.00279 0.00274 0.00272 0.00269 0.00262 0.00261 0.00255 Cumulative Proportion 0.9137 0.91672 0.91972 0.92264 0.92555 0.92834 0.93108 0.93381 0.93650 0.93912 0.94173 0.94428 PC62 PC63 PC64 PC65 PC66 PC67 PC68 PC69 PC70 PC71 PC72 PC73 Standard deviation 0.8449 0.84420 0.83856 0.83176 0.82325 0.80923 0.7924 0.78655 0.78025 0.77028 0.76573 0.75772 Proportion of Variance 0.0025 0.00249 0.00246 0.00242 0.00237 0.00229 0.0022 0.00216 0.00213 0.00208 0.00205 0.00201 Cumulative Proportion 0.9468 0.94928 0.95174 0.95416 0.95653 0.95882 0.9610 0.96318 0.96531 0.96738 0.96944 0.97144 PC74 PC75 PC76 PC77 PC78 PC79 PC80 PC81 
PC82 PC83 PC84 PC85
Standard deviation 0.75455 0.74868 0.74169 0.73250 0.73053 0.72453 0.71426 0.70595 0.6973 0.68962 0.6761 0.66666
Proportion of Variance 0.00199 0.00196 0.00192 0.00188 0.00187 0.00184 0.00178 0.00174 0.0017 0.00166 0.0016 0.00155
Cumulative Proportion 0.97344 0.97540 0.97732 0.97920 0.98107 0.98290 0.98469 0.98643 0.9881 0.98980 0.9914 0.99295
PC86 PC87 PC88 PC89 PC90
Standard deviation 0.66129 0.6555 0.64137 0.61824 0.59554
Proportion of Variance 0.00153 0.0015 0.00144 0.00134 0.00124
Cumulative Proportion 0.99448 0.9960 0.99742 0.99876 1.00000

![enter image description here][2]

But the list of PCs is always 90. Here it makes sense that there are 90 (they are the samples), but in the first case I don't get why there are 90 when there are thousands of genes.

3) Last question: for my purpose (obtaining the genes that best separate my samples by condition), which is more suitable, the first approach (PCs as genes) or the second (PCs as samples)? In pcaExplorer I have seen that they relate this last PCA (samples as columns and genes as rows) to the heatmap view, so I assume this last one is the one to use for a gene-level view? But I don't get why.

[1]: /media/images/cccd8f68-e7e2-4db2-9754-9c030b54
[2]: /media/images/a174713a-facf-4890-9899-3cd2f6aa
Not an expert by any means, but I have a couple ideas for you. About your data and the genes-vs-samples thing, I think you're right that you want counts per gene being the components, not counts per sample. But I think you already have that when your samples are on rows-- it's just hard to see in the output, especially because the number of components you see there is throwing you off. You can look at `pca$x` to see the actual transformed data. Here's a stupid example with ten samples and three "measurements" where there's really only one dimension. ``` > x <- matrix(1:30, ncol=3) > x [,1] [,2] [,3] [1,] 1 11 21 [2,] 2 12 22 [3,] 3 13 23 [4,] 4 14 24 [5,] 5 15 25 [6,] 6 16 26 [7,] 7 17 27 [8,] 8 18 28 [9,] 9 19 29 [10,] 10 20 30 > pca <- prcomp(x) > summary(pca) Importance of components: PC1 PC2 PC3 Standard deviation 5.244 3.55e-16 6.418e-32 Proportion of Variance 1.000 0.00e+00 0.000e+00 Cumulative Proportion 1.000 1.00e+00 1.000e+00 ``` It rotated it so PC1 has all the variation (and the others essentially none, since this only requires one dimension to fully explain). If you print the object you see that rotation: ``` > pca Standard deviations (1, .., p=3): [1] 5.244044e+00 3.549628e-16 6.417917e-32 Rotation (n x k) = (3 x 3): PC1 PC2 PC3 [1,] 0.5773503 -0.8164966 0.0000000 [2,] 0.5773503 0.4082483 -0.7071068 [3,] 0.5773503 0.4082483 0.7071068 ``` And then `pca$x` contains the same ten points, with the rotation applied, where they all lie along that line: ``` > pca$x PC1 PC2 PC3 [1,] -7.7942286 2.220446e-15 -4.440892e-16 [2,] -6.0621778 1.776357e-15 -4.440892e-16 [3,] -4.3301270 1.110223e-15 -2.220446e-16 [4,] -2.5980762 7.771561e-16 -2.220446e-16 [5,] -0.8660254 2.498002e-16 -5.551115e-17 [6,] 0.8660254 -2.498002e-16 5.551115e-17 [7,] 2.5980762 -7.771561e-16 2.220446e-16 [8,] 4.3301270 -1.110223e-15 2.220446e-16 [9,] 6.0621778 -1.776357e-15 4.440892e-16 [10,] 7.7942286 -2.220446e-15 4.440892e-16 ``` If you give input that's wider than it is long (more genes than samples), you should see that the number of rows of the output is still what you expect, and it's just your columns that are fewer than you might expect. ``` > x2 <- matrix(1:50, ncol=10) > x2 [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [1,] 1 6 11 16 21 26 31 36 41 46 [2,] 2 7 12 17 22 27 32 37 42 47 [3,] 3 8 13 18 23 28 33 38 43 48 [4,] 4 9 14 19 24 29 34 39 44 49 [5,] 5 10 15 20 25 30 35 40 45 50 > pca2 <- prcomp(x2) > pca2$x PC1 PC2 PC3 PC4 PC5 [1,] -6.324555 -2.498002e-15 4.440892e-16 4.440892e-16 4.440892e-16 [2,] -3.162278 -1.249001e-15 2.220446e-16 2.220446e-16 2.220446e-16 [3,] 0.000000 0.000000e+00 0.000000e+00 0.000000e+00 0.000000e+00 [4,] 3.162278 1.249001e-15 -2.220446e-16 -2.220446e-16 -2.220446e-16 [5,] 6.324555 2.498002e-15 -4.440892e-16 -4.440892e-16 -4.440892e-16 ``` Thinking it through with a very small number of samples and dimensions to make my brain hurt less, this makes sense. If you had just two points, you would never need more than two axes (actually, one, right?) to fully define the difference between them. So you'll never need more components in the output than you have samples going in. **So long story short, give your samples on rows and genes on columns.** P.S. Be careful juggling different matrices and data frames and such. `ggbiplot` would have complained about the transposed case, except that happens that the length of the `groups` happened to match up. 
(I usually aim for putting more into data frames wherever possible and then subsetting/slicing them as needed, just because of how often that sort of thing comes up in R.)
biostars
{"uid": 9532581, "view_count": 525, "vote_count": 2}
Hello, I am new to differential gene expression analysis. I was trying the tutorial on the Griffith test data provided in the Biostar Handbook. I have done the alignment using HISAT2, and when doing quantification with featureCounts I am getting the following error:

> GZIP ERROR:-2

The data are paired-end, so I am using the same command given in the manual of the subread package:

> featureCounts -p -a annotation.gtf -t exon -g gene_id -o counts.txt mapping_results_PE.bam

Also, the path and working directory are all fine; still, I don't know why this error is coming up. Please suggest something, as I am stuck at this step and not able to proceed. Thanks in advance!
It seems the error is caused by a BAM input file with some bad blocks. Did you check the validity of the BAM file, i.e. that it was generated completely and is not damaged/corrupted (e.g. with `samtools quickcheck`)?

I'd suggest trying a couple of things:

1. Could you try re-running the command with a SAM file as input to featureCounts?
2. Could you try running the command on a different BAM file (maybe from publicly available data)?

Have a look at a discussion forum thread regarding a similar error: https://groups.google.com/forum/#!topic/subread/S4smWRfBNPM

Hope this helps!
biostars
{"uid": 450567, "view_count": 1938, "vote_count": 1}
I have a multiple sequence alignment represented as a multi-FASTA file. I want to remove any column that contains a gap. I thought I would ask before writing my own tool.
Trimal is very versatile. To remove every column that contains a gap, use the `-nogaps` option (or, equivalently, `-gt 1`, which requires a column to be gap-free in 100% of the sequences): http://trimal.cgenomics.org
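If you'd rather stay in R, here is a minimal Biostrings sketch of the same idea (the file names are placeholders, and it assumes a DNA alignment with "-" as the gap character):

    library(Biostrings)
    aln <- readDNAMultipleAlignment("alignment.fasta", format = "fasta")
    m <- as.matrix(aln)                           # sequences x alignment columns
    keep <- colSums(m == "-") == 0                # columns without any gap
    out <- DNAStringSet(apply(m[, keep, drop = FALSE], 1, paste0, collapse = ""))
    writeXStringSet(out, "alignment.nogaps.fasta")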
biostars
{"uid": 233221, "view_count": 1695, "vote_count": 1}
Hello everyone, I have a list of 30 microRNAs and I want to build a network of interactions with their targets. I'm thinking about using these databases:

1. [TargetScan Release 7.2][1]
2. [miRTarBase v6.0][2]
3. [miRDB][3]
4. [microT-CDS 5][4]
5. [miRWalk 3][5]

And as a cut-off, a target has to be present in 3 of these databases and be targeted by at least two microRNAs.

**First point:** Does this approach look robust?

**Second point:** miRWalk 3 seems to use other databases (miRBase, TargetScan and miRDB) as resources for miRNA-target data: [http://mirwalk.umm.uni-heidelberg.de/resources/][6] Am I right? If so, it seems redundant to use it here in my approach.

If anyone can give me some advice on how to do this analysis I will be very grateful. I have read several articles and each author does this analysis in a different way.

[1]: http://www.targetscan.org/vert_72/
[2]: http://mirtarbase.mbc.nctu.edu.tw/php/index.php
[3]: http://www.mirdb.org/
[4]: http://diana.imis.athena-innovation.gr/DianaTools/index.php?r=MicroT_CDS/index
[5]: http://mirwalk.umm.uni-heidelberg.de/
[6]: http://mirwalk.umm.uni-heidelberg.de/resources/
Yes, that's the conclusion that I got, too, i.e., that each author does it a different way. You know what? - I also did it a different way, and here's what I did:

Given that most of these databases provide *in silico* predictions, and in some cases have come under scrutiny and criticism, the more corroborative evidence that you can accumulate for each mir, the better. There is an R package that does this for you, called [miRNAtap](https://bioconductor.org/packages/devel/bioc/vignettes/miRNAtap/inst/doc/miRNAtap.pdf). Whilst it doesn't have all of the databases that you mention, it does look at:

- DIANA (Vlachos et al., 2012)
- PicTar (Krek et al., 2015)
- TargetScan (Agarwal et al., 2015)
- miRanda (Betel et al., 2008)
- miRDB (Wong & Wang, 2015)

The good thing here is that you can automate it and say that you want evidence for an interaction in at least 3 of these databases, or even 5 (all). *miRNAtap* also lets you do gene enrichment of the genes targeted by each of your mirs of interest, whilst the tutorial to which I've linked also shows how you can perform KEGG pathway enrichment of these, too (using *KEGGprofile*).

---

miRWalk, I believe, was the database that came under some scrutiny in the past, but they then appeared to improve the data held in it. For one, they actually have validated mir-to-mir, mir-to-gene, etc. interactions, i.e., from functional studies. In my study (yet to be published), I used miRWalk as a sort of secondary validation step for anything of interest found in the first part using *miRNAtap*.

---

After you have identified some key mirs and gene targets, you can develop some nice figures like a *mirPrint* (a term that I chose) and a graph (yet unnamed) that shows the key mirs and their gene targets, where the layering of the graph indicates how many mirs target the same gene.

[![sss](https://image.ibb.co/mon7yy/sss.png)](https://imgbb.com/)

[![d](https://image.ibb.co/h36d5d/d.png)](https://imgbb.com/)

Just to give you some ideas.

Kevin
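For instance, a minimal sketch for a single mir (mir name format as in the miRNAtap vignette; `min_src = 3` implements your "present in at least 3 databases" cut-off):

    library(miRNAtap)
    library(miRNAtap.db)
    targets <- getPredictedTargets("miR-21", species = "hsa",
                                   method = "geom", min_src = 3)
    head(targets)   # Entrez gene IDs, ranked by the aggregated score

You would then loop this over your 30 mirs and intersect/stack the resulting target lists.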
biostars
{"uid": 315119, "view_count": 2556, "vote_count": 1}
I have a BLAST tabular output with millions of hits.Query is my sequence and subject is a protein hit. I am interested in finding the subjects corresponding to the same query that do not overlap. If I know the subject start and end sites it becomes possible to do; if S1 < E2 < S2 and E1 < S2 < E2 OR S2 - E1 > 0 Basically, since there are many hits and number of subjects vary, I may understand the algorithm, but find it difficult to implement in code. For example,my input file ``` query subject start end cont20 EMT34567 2 115 cont20 EMT28057 238 345 cont31 EMT45002 112 980 cont31 EMT45002 333 567 ``` Desired output (I want the program to print only the query and subject names that do not overlap) ``` cont20 EMT28057 cont20 EMT34567 ``` I have started the script using regex, but I am not sure how to continue or if this is a right way ```py import re output=open('result.txt','w') f=open('file.txt','r') lines=f.readlines() for line in lines: new_list=re.split(r'\t+',line.strip()) query=new_list[0] subject=new_list[1] s_start=new_list[8] s_end=new_list[9] ```
**UPDATED:**

Input: `file.txt`

    query subject start end
    contig1 EMT16196 481 931
    contig15 EMT15298 1 148
    contig18 EMT04099 1 290
    contig18 EMT20601 1 290
    contig18 EMT23062 1 290
    contig20 EMT14935 298 524
    contig20 EMT19916 415 434
    contig20 EMT19915 422 441
    contig20 EMT19914 298 317
    contig29 EMT30092 1 20
    contig30 EMT31940 61 795
    contig35 EMT03428 181 785
    contig37 EMT02979 364 1184
    contig42 EMT19888 449 657
    contig42 EMT19888 339 472
    contig43 EMT19888 339 657
    contig45 EMT27750 329 363
    contig45 EMT17965 889 908
    contig51 EMT32871 324 390
    contig51 EMT32871 203 241
    contig52 EMT15568 107 127
    contig56 EMT28040 811 939
    contig67 EMT32527 132 489
    contig69 EMT12559 38 226
    contig79 EMT05411 85 919
    contig95 EMT26862 138 327
    contig95 EMT10613 20 164
    contig107 EMT33347 1 243
    contig107 EMT33347 255 387
    contig107 EMT14531 135 385
    contig108 EMT33347 1 423
    contig108 EMT14531 135 565
    contig109 EMT07436 60 88
    contig149 EMT17561 119 219
    contig159 EMT28057 39 307
    contig176 EMT23021 359 379

Python:

    from itertools import groupby

    def nonoverlapping(hits):
        """Returns a list of non-overlapping hits."""
        nonover = list(hits)
        overst = False
        # Track the right-most end seen so far; comparing only adjacent
        # hits would miss overlaps with an earlier, longer hit.
        maxend = hits[0][3]
        for i in range(1, len(hits)):
            p, c = hits[i-1], hits[i]
            # Check whether the current hit overlaps anything seen so far.
            if c[2] <= maxend:
                if not overst:
                    nonover.remove(p)
                nonover.remove(c)
                overst = True
            else:
                overst = False
            maxend = max(maxend, c[3])
        return nonover

    fh = open('file.txt')
    oh = open('results.txt', 'w')
    next(fh)  # Ignore header line in BLAST output.
    # Loop over BLAST hits (grp) for each query (qid).
    for qid, grp in groupby(fh, lambda l: l.split()[0]):
        hits = []
        # I need to convert start and end positions
        # from strings into integers.
        for line in grp:
            hsp = line.split()
            hsp[2], hsp[3] = int(hsp[2]), int(hsp[3])
            hits.append(hsp)
        # Sort hits by start position.
        hits.sort(key=lambda x: x[2])
        for hit in nonoverlapping(hits):
            oh.write('\t'.join([str(f) for f in hit]) + '\n')
    oh.close()

Results: `results.txt`

    contig1 EMT16196 481 931
    contig15 EMT15298 1 148
    contig29 EMT30092 1 20
    contig30 EMT31940 61 795
    contig35 EMT03428 181 785
    contig37 EMT02979 364 1184
    contig43 EMT19888 339 657
    contig45 EMT27750 329 363
    contig45 EMT17965 889 908
    contig51 EMT32871 203 241
    contig51 EMT32871 324 390
    contig52 EMT15568 107 127
    contig56 EMT28040 811 939
    contig67 EMT32527 132 489
    contig69 EMT12559 38 226
    contig79 EMT05411 85 919
    contig109 EMT07436 60 88
    contig149 EMT17561 119 219
    contig159 EMT28057 39 307
    contig176 EMT23021 359 379
biostars
{"uid": 95294, "view_count": 5154, "vote_count": 2}
Dear All, am I right to use the following script to transform a fastq file (named test.fastq) to a fasta file? THANKS a lot!

    #!/usr/bin/perl
    use strict;
    use Bio::SeqIO;
    my $in=Bio::SeqIO->new(-file=>"test.fastq",-format=>'fastq');
    my $out=Bio::SeqIO->new(-file=>">test.fasta",-format=>'fasta');
    while(my $seq=$in->next_seq()){
        $out->write_seq($seq);
    }
The following list shows the time for converting 2 million 100bp sequences in fastq to fasta with different approaches (locale set to "C"):

    ================================================================================================
    Real(s)  CPU(s)  Command line
    ------------------------------------------------------------------------------------------------
       1.8     1.8   seqtk seq -A t.fq > /dev/null
       3.1     3.1   sed -n '1~4s/^@/>/p;2~4p' t.fq > /dev/null
       5.8    12.4   paste - - - - < t.fq | sed 's/^@/>/g'| cut -f1-2 | tr '\t' '\n' > /dev/null
       7.6     7.5   bioawk -c fastx '{print ">"$name"\n"$seq}' t.fq > /dev/null
      11.9    12.9   awk 'NR%4==1||NR%4==2' t.fq | tr "@" ">" > /dev/null
      22.2    22.2   seqret -sequence t.fq -out /dev/null   # 6.4.0
      26.5    25.4   fastq_to_fasta -Q32 -i t.fq -o /dev/null   # 0.0.13
    ================================================================================================

In the list, seqtk, bioawk and seqret work with multi-line fastq; the rest don't. If you just want to use the standard unix tools, rtliu's sed solution is preferred, both short and efficient. It should be noted that file `t.fq` is put in `/dev/shm` and the results are written to `/dev/null`. In real applications, I/O may take more wall-clock time than CPU. In addition, the sequence file is frequently gzip'd. For seqtk, decompression takes more CPU time than parsing fastq.

Additional comments:

- SES observed that seqret was faster than Irsan's command. In my hands, seqret is always slower than most. Is it because of version, locale or something else?
- I have not tried the native bioperl parser. Probably it is much slower. Using bioperl on large fastq is discouraged.
- I do agree 4-line fastq is much more convenient for many people. However, fastq was never "defined" to consist of 4 lines. When it was first introduced at the Sanger Institute for ~1kb capillary reads, it allowed multiple lines.
- Tools in many ancient unix distributions (e.g. older AIX) do not work with long lines. I was working with a couple of such unix systems even in 2005. I guess this is why older outputs/formats, such as fasta, blast and genbank/embl, used short lines. This is not a concern any more nowadays.
- To convert multi-line fastq to 4-line (or multi-line fasta to 2-line fasta): `seqtk seq -l0 multi-line.fq > 4-line.fq`
biostars
{"uid": 85929, "view_count": 116350, "vote_count": 16}
Hi, as per the title: where can I find the GPL15314 annotation file? The platform title is 'Arraystar Human LncRNA microarray V2.0 (Agilent_033010 Probe Name version)', so what is the meaning of "Probe Name version"? Does it mean there is only probe name information in this platform, without lncRNA names? Can I use these probe names as the final output, or do I have to annotate them? Any suggestion would be appreciated.
If you click on the `View Full table` button at the [bottom of the page for this platform][1] on NCBI GEO you will find the annotation table. [1]: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GPL15314
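If you'd rather pull the same table programmatically, here's a small GEOquery sketch (assuming the platform record downloads with its full annotation table):

    library(GEOquery)
    gpl <- getGEO("GPL15314")   # fetches the platform record from GEO
    annot <- Table(gpl)         # probe-to-annotation data.frame
    head(annot)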
biostars
{"uid": 331190, "view_count": 2320, "vote_count": 1}
Hi, I am trying to convert a BAM file to a FASTQ file. This is my first time analyzing a BAM file. I want to use Picard tools for the analysis. After reading some posts on Biostars and the Picard website, I was able to understand some of it. But I am getting the following error:

#### COMMAND

    $ java -Xmx6g -jar ../picard-tools-1.126/picard.jar SamToFastq I=file.bam F=file.fastq

#### ERROR

```
[Thu May 21 14:46:55 CDT 2015] picard.sam.SamToFastq INPUT=../../data/F14FTSUSAT1066_HUMsfcX/bam/1AA_rawlib.bam FASTQ=1AA_rawlib.fastq    OUTPUT_PER_RG=false RE_REVERSE=true INTERLEAVE=false INCLUDE_NON_PF_READS=false READ1_TRIM=0 READ2_TRIM=0 INCLUDE_NON_PRIMARY_ALIGNMENTS=false VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false
[Thu May 21 14:46:55 CDT 2015] Executing as deepak@tenor on Linux 3.10.0-229.1.2.el7.x86_64 amd64; OpenJDK 64-Bit Server VM 1.7.0_75-mockbuild_2015_03_26_13_15-b00; Picard version: 1.126(4691ee611ac205d4afe2a1b7a2ea975a6f997426_1417447214) IntelDeflater
[Thu May 21 14:46:57 CDT 2015] picard.sam.SamToFastq done. Elapsed time: 0.02 minutes.
Runtime.totalMemory()=2058354688
To get help, see http://broadinstitute.github.io/picard/index.html#GettingHelp
Exception in thread "main" htsjdk.samtools.FileTruncatedException: Premature end of file
    at htsjdk.samtools.util.BlockCompressedInputStream.readBlock(BlockCompressedInputStream.java:382)
    at htsjdk.samtools.util.BlockCompressedInputStream.available(BlockCompressedInputStream.java:127)
    at htsjdk.samtools.util.BlockCompressedInputStream.read(BlockCompressedInputStream.java:252)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at htsjdk.samtools.util.BinaryCodec.readBytesOrFewer(BinaryCodec.java:404)
    at htsjdk.samtools.util.BinaryCodec.readBytes(BinaryCodec.java:380)
    at htsjdk.samtools.util.BinaryCodec.readByteBuffer(BinaryCodec.java:490)
    at htsjdk.samtools.util.BinaryCodec.readInt(BinaryCodec.java:501)
    at htsjdk.samtools.BAMRecordCodec.decode(BAMRecordCodec.java:178)
    at htsjdk.samtools.BAMFileReader$BAMFileIterator.getNextRecord(BAMFileReader.java:660)
    at htsjdk.samtools.BAMFileReader$BAMFileIterator.advance(BAMFileReader.java:634)
    at htsjdk.samtools.BAMFileReader$BAMFileIterator.next(BAMFileReader.java:628)
    at htsjdk.samtools.BAMFileReader$BAMFileIterator.next(BAMFileReader.java:598)
    at htsjdk.samtools.SamReader$AssertingIterator.next(SamReader.java:514)
    at htsjdk.samtools.SamReader$AssertingIterator.next(SamReader.java:488)
    at picard.sam.SamToFastq.doWork(SamToFastq.java:153)
    at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:187)
    at picard.cmdline.PicardCommandLine.instanceMain(PicardCommandLine.java:89)
    at picard.cmdline.PicardCommandLine.main(PicardCommandLine.java:99)
```
There's something wrong with the bam. The `Premature end of file` exception means the BAM is truncated - typically the result of an incomplete download or copy, since a healthy BAM ends with a special BGZF EOF block. Please try `samtools view -c file.bam` (or `samtools quickcheck file.bam`) to test the bam, and re-transfer the file if the check fails.
biostars
{"uid": 143466, "view_count": 8602, "vote_count": 2}
Hello, I am not good with loops in R and have some challenging data to subset. The dimensions of the dataframe are 17 x 18000. The values of the first 200 columns are categorical binary, and the rest of the columns have positive numerical values. A representative dataframe is below;

    View(df)
             Drug_1 Drug_2 . . . Drug_200   Gene_1   Gene_2 . . . Gene_17800
    Cell_1        1      1 . . .        1 3.410109 2.698543 . . .   2.991730
    Cell_2        0      1 . . .        1 6.190569 2.785505 . . .   2.893962
    Cell_3        1      1 . . .        0 5.503953 2.614325 . . .   2.787185
    Cell_4        1      1 . . .        1 3.314800 2.685167 . . .   3.746460
    Cell_5        0      1 . . .        1 3.702378 2.663557 . . .   5.541395
    Cell_6        1      1 . . .        1 6.623338 2.623761 . . .   2.892601
    Cell_7        0      0 . . .        1 3.855267 2.685530 . . .   2.879253
    Cell_8        1      1 . . .        1 3.813186 2.741521 . . .   7.204914
    Cell_9        1      1 . . .        0 4.010305 2.619892 . . .   2.930020
    Cell_10       0      1 . . .        1 3.769854 2.831024 . . .   4.495060
    Cell_11       0      1 . . .        0 4.325175 2.795230 . . .   3.181098
    Cell_12       1      1 . . .        1 5.502184 2.691975 . . .   2.928878
    Cell_13       1      0 . . .        1 5.711048 2.649376 . . .   2.897740
    Cell_14       1      1 . . .        1 3.990681 2.719580 . . .   2.934628
    Cell_15       1      0 . . .        1 5.650302 2.843495 . . .   3.025947
    Cell_16       1      1 . . .        1 3.250378 2.498467 . . .   6.397197
    Cell_17       1      1 . . .        1 5.366431 2.853150 . . .   5.033118

I want to explain the drug responses of cells (1 or 0) for a drug with their respective gene expression levels (high or low) via logistic regression models. However, as a first step I have to select features (genes in my case). The structure of my case is quite complex for implementing common feature selection approaches. To manually pick contrast-response-inducing features for each of the 200 drugs, I planned to form a nested loop over the drugs and, for each drug, subset the genes which are differentially expressed compared to the cells giving the opposite response.

To illustrate: I want to subset the genes which have different values (higher or lower) in the cells giving a 0 response compared to the cells giving a 1 response. And I am aiming to do this for all 200 drugs in a loop.

I hope I have explained my problem clearly. Can you help me to establish a working loop, please?

Thanks in advance.
Here is the start, double loop example: # example data df <- mtcars[1:8] colnames(df)[1:4] <- paste0("drug_", 1:4) colnames(df)[5:8] <- paste0("gene_", 1:4) # double loop sapply(colnames(df)[1:4], function(drug){ sapply(colnames(df)[5:8], function(gene){ coef( lm(formula(paste(drug, gene, sep = "~")), data = df[, c(drug, gene)]) )[ 2 ] }) }) # output # drug_1 drug_2 drug_3 drug_4 # gene_1.gene_1 7.678233 -2.3379172 -164.62780 -57.54523 # gene_2.gene_2 -5.344472 1.4282442 112.47814 46.16005 # gene_3.gene_3 1.412125 -0.5909041 -30.08039 -27.17368 # gene_4.gene_4 7.940476 -2.8730159 -174.69286 -98.36508
biostars
{"uid": 391442, "view_count": 789, "vote_count": 1}
Hello, I'm wondering how to get the start and the end positions of all promoters in all human chromosomes? Thanks
At Ensembl we've annotated promoters as part of our regulatory build ([shiny new paper on it][1]). These are based on segmentation data from ENCODE and RoadMap Epigenomics, finding consensus regions of promoter activity between cell types. This will be further refined as we add more cell types to the analysis (e.g. more from ENCODE and RoadMap and add in Blueprint). You can access these annotations through the Ensembl Browser ([here][2] is one at the 5' end of a gene, where we expect it to be), BioMart (e.g. [this query][3] will get you all the predicted promoters on chromosome 21), the Ensembl APIs and [the Ensembl FTP site][4]. [1]: http://genomebiology.com/2015/16/1/56/abstract [2]: http://www.ensembl.org/Homo_sapiens/Location/View?db=core;g=ENSG00000164190;q=;r=5:36872531-36882821 [3]: http://www.ensembl.org/biomart/martview/eae58d86e73c969c2ae027c0b7329c8a?VIRTUALSCHEMANAME=default&ATTRIBUTES=hsapiens_regulatory_feature.default.regulatory_feature.chromosome_name|hsapiens_regulatory_feature.default.regulatory_feature.chromosome_start|hsapiens_regulatory_feature.default.regulatory_feature.chromosome_end|hsapiens_regulatory_feature.default.regulatory_feature.feature_type_name&FILTERS=hsapiens_regulatory_feature.default.filters.regulatory_feature_type_name.&quot;Promoter&quot;|hsapiens_regulatory_feature.default.filters.chromosome_name.&quot;21&quot;&VISIBLEPANEL=resultspanel [4]: ftp://ftp.ensembl.org/pub/current_regulation/homo_sapiens/
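If you want to script this from R, here's a hedged biomaRt sketch mirroring the BioMart query linked above (the mart, filter and attribute names are taken from that URL and may change between Ensembl releases):

    library(biomaRt)
    mart <- useEnsembl(biomart = "regulation",
                       dataset = "hsapiens_regulatory_feature")
    proms <- getBM(attributes = c("chromosome_name", "chromosome_start",
                                  "chromosome_end", "feature_type_name"),
                   filters = c("regulatory_feature_type_name", "chromosome_name"),
                   values = list("Promoter", "21"),
                   mart = mart)
    head(proms)   # start/end coordinates of predicted promoters on chr21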
biostars
{"uid": 136548, "view_count": 8085, "vote_count": 2}
Hi all,

I came across a heatmap of CNVs from Illumina Genome Studio which has four samples (see attachment). On the Y axis they have genomic coordinates and on the X axis the samples. Red denotes amplification and blue denotes deletion. I was wondering how we can make a similar heatmap in R for CNV or expression data? I know the basics of ggplot and ggbio, but I don't know how to make a heatmap with genomic coordinates on the Y axis. There is a fixed length for each chromosome, say chr1 is 250 million base pairs (units), chr2 is 240M, chr3 is 200M, etc. Now there are segments of copy-number changes with start and end positions for which the heatmap is to be made. For chr1, of size proportional to 250M, we need a red block at the location proportional to base pairs 23432-to-25925 and another red block at 34564-to-44572, etc... similarly for each chromosome.

CNV data is in segments:

    chr   start    end      copy-number-T1  copy-number-T2
    chr1  23432    25925    4               3
    chr1  34564    44572    5               5
    chr1  78463    85634    3               4
    chr2  1375364  1378364  1               2
    chr2  1463723  1469367  4               6
    chr2  1678573  1683642  2               5

Thanks in advance.

![CNV heatmap](http://iforce.co.nz/i/gojnqqm4.0pt.jpg)
I can help with something like this, but I don't know how to prettify the Y axis labels - all chromosomes in my example are of length 20, so labels are getting overcrowded and I disabled them.

**-- edit: fixed labels**

![enter image description here](http://s16.postimg.org/q4lglksjp/Rplot.png)

    require(ggplot2)
    require(reshape)

    data <- data.frame(matrix(rnorm(2*20*4)-0.5, nrow=2*20, byrow=F))
    names(data) <- paste("T",1:4,sep="")
    data$chromosome=factor(1:(2*20),1:(2*20),
        labels=paste(rep(paste("chr",1:2,sep=""),each=20),1:20,sep="."))
    data$labels=matrix(rbind(paste("chr",1:2,sep=""),
        matrix(rep("",19*2),nrow=19)),byrow=T,nrow=20*2)

    dm <- melt(data,id.var=c("chromosome","labels"))
    dm$fill <- rep("white",2*20*4)
    dm$fill[dm$value < -0.5] <- "red"
    dm$fill[dm$value > 0.5] <- "blue"

    p <- ggplot(dm, aes(variable, chromosome, fill=fill)) + geom_tile()
    p + scale_fill_identity(expand=c(0,0),guide = "legend",
            labels=c("deletion","amplification","none")) +
        scale_y_discrete(breaks=dm$chromosome,labels=dm$labels) +
        ggtitle("Example for 2 chromosomes")

**-- edit2, with your data, just changed two values to negatives:**

    require(ggplot2)
    require(reshape)

    # four samples, here genome is linear, 2M long, each bin is 1000bp
    # matrix rows = bins, matrix columns = samples
    data <- matrix(rep(rep(0,2000000/1000),4),ncol=4)

    # the intervals from your sample
    intervals=data.frame(chromosome=c("chr1","chr1","chr1","chr2","chr2","chr2"),
                         start=c(23432,34564,78463,1375364,1463723,1678573),
                         end=c(25925,44572,85634,1378364,1469367,1683642),
                         t1=c(4,5,3,-1,4,2),
                         t2=c(3,5,4,2,-6,5))

    # transfer the values from the data frame into the matrix, probably can code a single function
    mark_t1 <- function(a,b,c){ data[ (a/1000) : (b/1000), 1] <<- c}
    apply(intervals[,c('start','end','t1')], 1 , function(x) mark_t1(x[1],x[2],x[3]))
    mark_t2 <- function(a,b,c){ data[ (a/1000) : (b/1000), 2] <<- c}
    apply(intervals[,c('start','end','t2')], 1 , function(x) mark_t2(x[1],x[2],x[3]))

    # converting the matrix into a data frame
    data <- as.data.frame(data)
    names(data) <- paste("T",1:4,sep="")

    # add bins column
    data$bin=seq(1,2000000,by=1000)

    dm <- melt(data,id.var=c("bin"))

    # color the bins according to their values
    dm$fill <- "white"
    dm$fill[dm$value < -0.5] <- "red"
    dm$fill[dm$value > 0.5] <- "blue"

    # plot as red-white-blue rectangles, by using bins and continuous scale, it's easy to
    # place Chromosome or even Gene markers
    p <- ggplot(dm, aes(variable, bin, fill=fill)) + geom_tile()
    p + scale_fill_identity(expand=c(0,0),guide = "legend",
            labels=c("amplification","deletion","none")) +
        scale_y_continuous(breaks=c(0,1400000,2000000),labels=c("Chr 1","Chr 2","Chr 3")) +
        ggtitle("Example for 2 chromosomes") + theme_bw() +
        theme(panel.grid.major = element_blank(),panel.grid.minor = element_blank(),
              panel.border = element_blank(),panel.background = element_blank(),
              axis.title.x= element_blank(),axis.title.y = element_blank())

![enter image description here](http://s11.postimg.org/jg6r7a6dv/Rplot.png)

**or just like a heatmap plot:**

    # plot as white-to-blue heatmap
    p2 <- ggplot(dm, aes(variable, bin, fill=value)) + geom_tile()
    p2 + scale_y_continuous(breaks=c(0,1400000,2000000),labels=c("Chr 1","Chr 2","Chr 3")) +
        scale_fill_gradient(low = "white", high = "steelblue") + theme_bw() +
        ggtitle("Example for 2 chromosomes") +
        theme(panel.grid.major = element_blank(),panel.grid.minor = element_blank(),
              panel.border = element_blank(),panel.background = element_blank(),
              axis.title.x= element_blank(),axis.title.y = element_blank())

![enter image description here](http://s15.postimg.org/6hgkjgjtn/Rplot01.png)
biostars
{"uid": 89723, "view_count": 11842, "vote_count": 10}
Dear all, I have a GTF file and I want to remove a certain word from a certain column. In this case I want to remove "mRNA." from the second column and keep just CA01g00010 there. Could you help me with this?

    Pepper1.55ch01	mRNA.CA01g00010	63209	63880

I would like this output:

    Pepper1.55ch01	CA01g00010	63209	63880

Best
    awk -v OFS="\t" -v FS="\t" '{ $2=gensub(/^mRNA\./, "", 1, $2); print $0; }' your_file > your_modified_file

The dot is escaped so that only the literal "mRNA." prefix is matched. Note that `gensub` is a gawk extension; with plain awk you can use `sub(/^mRNA\./, "", $2)` instead.

Edit: removed tab-separated assumption, as I just saw it should treat a gtf, which we all know is tab-separated.
biostars
{"uid": 382607, "view_count": 1963, "vote_count": 1}
Hi all,

Is there any script available for computing the sequence length distribution of a fastq file?

Thanks, Deepthi
Another awk solution, very similar to Frédéric's:

    cat reads.fastq | awk '{if(NR%4==2) print length($1)}' | sort -n | uniq -c > read_length.txt

And to quickly obtain a graph in R:

    reads<-read.csv(file="read_length.txt", sep="", header=FALSE)
    plot (reads$V2,reads$V1,type="l",xlab="read length",ylab="occurrences",col="blue")
biostars
{"uid": 72433, "view_count": 74069, "vote_count": 16}
Hi, I have a somewhat high content of mitochondrial RNA in my RNA-seq experiment. Is there a way to use samtools to remove alignments to the 'MT' chromosome and keep all the rest? I'm considering using samtools view in combination with awk but perhaps there's a better/cleaner solution? Thanks!
I routinely cleanse my SAM files of chrM and the unassembled "random" contigs before running ChIP-seq analysis. I use 'sed' on the SAM file. Although you could be clever and do this via 'samtools view' without the need for creating an intermediate SAM file :)

    sed '/chrM/d;/random/d;/chrUn/d' < file.sam > file_filtered.sam

Note that this pattern-based approach also removes reads whose *mate* maps to chrM (the mate's chromosome appears in the RNEXT field), which is usually what you want. Alternatively, if your BAM is sorted and indexed, you can simply ask samtools for the chromosomes you want to keep, e.g. `samtools view -b in.bam chr1 chr2 ... chrX chrY > filtered.bam`.
biostars
{"uid": 128967, "view_count": 30339, "vote_count": 10}
I usually use `bedtools intersect` to find overlapping regions of bed files, but it seems like this tool can only output overlap between a pair of files. I need something that can do the following from many bed files and **only** report regions contained in all of the bed files. Example input from 4 separate bed files: ``` chr1 50 100 chr1 60 120 chr1 30 90 chr1 50 90 ``` Desired output: ``` chr1 60 90 ``` Any tools for this? Maybe I should just `cat` all the bed files together and merge them?
> Maybe I should just cat all the bed files together and merge them?

You can do that. But there is also `bedtools multiinter` (multiIntersect): it reports, for every sub-interval, how many and which of the input files cover it, so you can keep just the rows covered by all of your files. Check [here][1]

[1]: https://www.biostars.org/p/13516/
biostars
{"uid": 323450, "view_count": 5064, "vote_count": 3}
I have analyzed RNA-seq data with DESeq2 and am trying to plot a 3D PCA using rgl-plot3d. I was trying to output PC1, PC2, and PC3 and then plot them. However, I realized that I get different results for PC1 (and PC2) when I try plotPCA (used with DESeq2) and prcomp. What is the bug on my code? dds <- DESeqDataSetFromHTSeqCount( sampleTable = sampleTable, directory = directory, design= ~group) rld <- rlog(dds, blind=TRUE) **From DESeq2:** data <- plotPCA(rld, intgroup=c("treatment", "sex"), returnData=TRUE ) data$PC1 > [1] -1.9169863 -2.0420236 -1.9979900 -1.8891056 0.9242008 1.0638140 >[7] 0.6911183 1.0551864 0.9598643 -1.5947907 -1.5666862 -1.6694684 >[13] -1.2523658 -1.0785239 1.3005578 2.2913536 2.5381586 2.4287372 >[19] 1.7549495 **Using prcomp** mat <- assay(rld) pca<-prcomp(t(mat)) pca <- as.data.frame(pca$x) pca$PC1 >[1] -1.29133735 -2.96001734 -3.08855648 -3.51855030 -0.68814370 -0.01753268 >[7] -2.31119461 -0.10533404 -1.45742308 -1.30239486 -1.36344946 -1.93761580 >[13] 6.04484324 4.83113873 0.75050886 -0.14905189 2.70759465 3.43851631 >[19] 2.41799979
You got me intrigued so I looked at plotPCA code: function (object, intgroup = "condition", ntop = 500, returnData = FALSE) { rv <- rowVars(assay(object)) select <- order(rv, decreasing = TRUE)[seq_len(min(ntop, length(rv)))] pca <- prcomp(t(assay(object)[select, ])) So DESeq2 first sort the rows by variance, select the top 500 (by default) and only then call prcomp
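So, to make `prcomp` line up with `plotPCA` (and get a PC3 for your 3D plot), subset to the same top 500 most-variable genes first; a minimal sketch following the code above:

    library(matrixStats)
    mat <- assay(rld)
    rv <- rowVars(mat)
    select <- order(rv, decreasing = TRUE)[seq_len(min(500, length(rv)))]
    pca <- prcomp(t(mat[select, ]))
    head(pca$x[, 1:3])   # PC1/PC2 now match plotPCA; PC3 is yours for plot3d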
biostars
{"uid": 416573, "view_count": 3896, "vote_count": 2}
Hi, Could someone give me the definition of LOC# identifiers and where can I found it? I cannot find any place with the criteria to give to a gene an identifier such as LOC#. Thanks!
From the [NCBI book](http://www.ncbi.nlm.nih.gov/books/NBK3840/#genefaq.Conventions):

> **Symbols beginning with LOC.** When a published symbol is not available, and orthologs have not yet been determined, Gene will provide a symbol that is constructed as 'LOC' + the GeneID. This is not retained when a replacement symbol has been identified, although queries by the LOC term are still supported. In other words, a record with the symbol LOC12345 is equivalent to GeneID = 12345. So if the symbol changes, the record can still be retrieved on the web using LOC12345 as a query, or from any file using GeneID = 12345.
biostars
{"uid": 141793, "view_count": 4713, "vote_count": 1}
Hi All, We have a specific gene mutation and we would like to learn how it affects breast cancer. Using R, I got the mutation information from sequenced cases of TCGA Provisional and then stratified patients into two categories, Mutated and Wild Type. I downloaded the mRNA expression z-scores (RNA Seq V2 RSEM) from the cBioPortal website. I would like to look at the differentially expressed genes between these two groups, but I have several questions:

1. The RNA-seq data is RSEM-normalized; before I do any further analysis I transformed it with log2(rsem+1) - that is correct, right?
2. For differential gene expression analysis, what do you suggest I use? I cannot use DESeq2 or edgeR as they require raw counts as input.
3. I used the limma package, but I guess the plot shows my data has some problem. Does it look OK or should I do something else?

![Voom & Final model][1]

----------

    library(edgeR)
    library(limma)
    group = c( rep("Mut", 191), rep("WT", 660))
    design <- model.matrix(~ 0 + group)
    colnames(design) <- c("Mut", "WT")
    y = TCGA_comb
    par(mfrow=c(1,2))
    v <- voom(y,design,plot = TRUE)
    fit <- lmFit(v, design)
    cont.matrix <- makeContrasts(PIK3CA_mutVSwt=Mut - WT,levels=design)
    fit.cont <- contrasts.fit(fit, cont.matrix)
    fit.cont <- eBayes(fit.cont)
    plotSA(fit.cont)
    summa.fit <- decideTests(fit.cont)
    tab <- topTable(fit.cont, n=Inf, coef="PIK3CA_mutVSwt")

4. Would it be too superficial if I calculated the fold change, p-value and FDR on my own?

   a) Fold change: take the average of each gene per group and then log2(B)-log2(A)

   b) p-value: the t.test command of R

   c) FDR: p.adjust(pvalue, method="fdr")

Many many thanks, Gokce

[1]: https://i.imgur.com/D8jAseN.png
Hi Gokce

Answers below

1) Yes, you can use the log-transformation as you describe - but you could also use a smaller pseudo-count (say 0.01), which penalizes smaller changes less.

2) Raw count data from TCGA's RNA_Seq_V2 are available via the [GDC data portal][1] or most other APIs, such as [TCGAbiolinks][2].

3) Don't do that - limma uses normality as an assumption (just like a t-test), so if you don't trust the limma results you should not trust a t-test. Use voom+limma with raw counts obtained as in 2) instead (limma rather than edgeR, since they perform almost identically when you have many replicates - which you have, right? - except that limma is much, much faster).

Hope this helps

[1]: https://portal.gdc.cancer.gov/projects?filters=~%28op~%27and~content~%28~%28op~%27in~content~%28field~%27projects.program.name~value~%28~%27TCGA%29%29%29%29%29
[2]: http://bioconductor.org/packages/release/bioc/html/TCGAbiolinks.html
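For point 2, here's a hedged TCGAbiolinks sketch for pulling the raw counts (the GDC field names shift between data releases; newer releases use "STAR - Counts" instead of "HTSeq - Counts"):

    library(TCGAbiolinks)
    query <- GDCquery(project = "TCGA-BRCA",
                      data.category = "Transcriptome Profiling",
                      data.type = "Gene Expression Quantification",
                      workflow.type = "HTSeq - Counts")
    GDCdownload(query)
    se <- GDCprepare(query)   # SummarizedExperiment holding the raw counts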
biostars
{"uid": 270700, "view_count": 4185, "vote_count": 1}
I'm using this featureCounts command to extract read counts from my BAM files. Is the command correct, or is there some issue with it?

    featureCounts -T 40 -p -A -s -f -O -t exon -g gene_id -a /home/punit/ERCCgencode.v21.annotation.gtf -o /home/punit/FCOUNT/newhl60.txt ~/bamfiles/WT1.bam ~/bamfiles/WT2.bam ~/bamfiles/AT1.bam ~/bamfiles/AT2.bam ~/bamfiles/VD1.bam ~/bamfiles/VD2.bam
* `-A` takes a file with chromosome name aliases, which you're not providing and likely don't need.
* `-s` takes a number indicating what sort of strand specificity you want. You likely want `-s 2`.
* If you want to use these in edgeR/DESeq2/etc., don't use `-O`.
* You probably don't want `-f` (it counts at the feature/exon level rather than per gene).

Putting that together - assuming a reverse-stranded library - something like: `featureCounts -T 40 -p -s 2 -t exon -g gene_id -a /home/punit/ERCCgencode.v21.annotation.gtf -o /home/punit/FCOUNT/newhl60.txt ~/bamfiles/*.bam`

In general, run commands first and spot-check the results to see if anything went wrong. Also, start using new commands/tools by using the defaults.
biostars
{"uid": 253593, "view_count": 4319, "vote_count": 1}
I want to change the format of a FASTA file.

    >Name
    AAAAAAAAAAAAAAAAAAAAAAAAA
    >Fasta
    BBBBBBBBBBBBBBBBBBBBBBBBBB
    ·
    ·
    ·

The FASTA file has no line breaks except on the `>` header lines. I would like to make it tab-delimited, like this:

    #Name AAAAAAAAAAAAAAAAAAAAAAAAA
    #Fasta BBBBBBBBBBBBBBBBBBBBBBBBB
    #·
    #·
    #·

What kind of commands and scripts are there? Could you please tell me?
Sure, just use `awk`:

    $ awk 'BEGIN{RS=">"}{print "#"$1"\t"$2;}' in.fa | tail -n+2 > out.txt

The `tail -n+2` drops the empty record that `RS=">"` creates before the first header. Note this works because each sequence is on a single line (as in your example) and the headers contain no spaces; for multi-line FASTA you could first linearize it, or use something like `seqkit fx2tab`.
biostars
{"uid": 235052, "view_count": 12871, "vote_count": 2}
Hi there,

I know that this is probably a common and newbie question but I can't find the solution. I have strand-specific RNA-seq data, specifically first-strand. Which is the correct way to specify this in Trinity, RF or FR? I've read the manual but it is not clear to me...

Thanks in advance.
The Trinity documentation says:

If you have strand-specific data, specify the library type. There are four library types:

- Paired reads:
    - **RF**: first read (/1) of fragment pair is sequenced as anti-sense (reverse(**R**)), and second read (/2) is in the sense strand (forward(**F**)); typical of the dUTP/UDG sequencing method.
    - **FR**: first read (/1) of fragment pair is sequenced as sense (forward), and second read (/2) is in the antisense strand (reverse)
- Unpaired (single) reads:
    - **F**: the single read is in the sense (forward) orientation
    - **R**: the single read is in the antisense (reverse) orientation

The [TopHat manual](https://ccb.jhu.edu/software/tophat/manual.shtml) describes the matching terms:

- **fr-unstranded** (Standard Illumina): reads from the left-most end of the fragment (in transcript coordinates) map to the transcript strand, and the right-most end maps to the opposite strand.
- **fr-firststrand** (dUTP, NSR, NNSR): same as above, except we enforce the rule that the right-most end of the fragment (in transcript coordinates) is the first sequenced (or only sequenced for single-end reads). Equivalently, it is assumed that only the strand generated during first-strand synthesis is sequenced.
- **fr-secondstrand** (Ligation, Standard SOLiD): same as above, except we enforce the rule that the left-most end of the fragment (in transcript coordinates) is the first sequenced (or only sequenced for single-end reads). Equivalently, it is assumed that only the strand generated during second-strand synthesis is sequenced.

To connect the two: TopHat's `fr-firststrand` corresponds to Trinity's `RF`, and `fr-secondstrand` corresponds to `FR`. **So for your first-strand (dUTP-type) library, use `RF`.**

You can have a look here: http://rnaseq.uoregon.edu. They explain well the difference between first-strand and second-strand synthesis. You can also have a look at this publication: Sequencing technologies - the next generation. Nature Reviews Genetics. 2010. doi:10.1038/nrg2626

Briefly, if you know the technology used for the sequencing, you should be able to guess.
biostars
{"uid": 169942, "view_count": 13025, "vote_count": 2}
Hi, I am trying to compare a few models for evolutionary distance calculation. I want to compare just the basic ones - Jukes-Cantor, K2P, Tamura, Tamura-Nei, HKY (Hasegawa-Kishino-Yano), GTR (general time reversible = Tavaré) and Felsenstein 81. I have already found the equations for evolutionary distance calculation for the first 4 models mentioned above. I am still looking for HKY, GTR and F81. Can you help me? I mean, I know how to calculate substitution rates (for transitions and transversions), but please help me with how to calculate the evolutionary distances from them. Thanks Sam :)
R packages such as ape and phangorn allow you to compute evolutionary distance measures. Both packages, and other related phylogenetics packages, are written - at least in part - by the same author: Paradis. The function `dist.dna`, for example, accepts as input an aligned set of DNA sequences and has an optional parameter "model", which lets you choose among many evolutionary models, including most of the ones you list: JC69 (Jukes-Cantor), K80 (K2P), T92 (Tamura), TN93 (Tamura-Nei) and F81 (Felsenstein 1981). See http://www.inside-r.org/packages/cran/ape/docs/dist.dna. Note that HKY85 and GTR have no closed-form pairwise distance formula, which is why `dist.dna` does not offer them directly; F84 (which `dist.dna` does offer) is closely related to HKY85, and maximum-likelihood distances under richer models can be obtained with phangorn. If you'd like to see how that function implements the various models you could take a look at the code - although some of it is likely written in C as well as R.

If you read a book like the Phylogenetic Handbook [here][1], where they discuss the mathematical derivations of distance measures - in fact there is a chapter dedicated to it - you will learn that the classic models are in fact specific cases of the GTR model.

[1]: http://www.cambridge.org/gb/academic/subjects/life-sciences/genomics-bioinformatics-and-systems-biology/phylogenetic-handbook-practical-approach-phylogenetic-analysis-and-hypothesis-testing-2nd-edition
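For example, a minimal ape sketch ("alignment.fasta" is a placeholder for your aligned sequences; model names as in `?dist.dna`):

    library(ape)
    aln <- read.dna("alignment.fasta", format = "fasta")  # aligned DNA
    d_jc69 <- dist.dna(aln, model = "JC69")  # Jukes-Cantor
    d_k2p  <- dist.dna(aln, model = "K80")   # Kimura 2-parameter
    d_t92  <- dist.dna(aln, model = "T92")   # Tamura
    d_tn93 <- dist.dna(aln, model = "TN93")  # Tamura-Nei
    d_f81  <- dist.dna(aln, model = "F81")   # Felsenstein 1981

Each call returns a pairwise distance matrix you can feed straight into tree-building functions such as `nj()`.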
biostars
{"uid": 136434, "view_count": 4204, "vote_count": 2}
Hi, I am working on expression datasets that have multiple variables. I am using the `FactoMineR` and `pca3d` libraries for this purpose, and I was able to distinguish two factor levels belonging to one variable column. However, I am not able to perform the PCA on two variable columns that have different factor levels. Please let me know how I can do the PCA with multiple variables. Below is the code I ran with one variable column;

    Neg_Dct

    ## Columns [1-4 and 269-272] of the data set contain variables/categorical data ##
    df = Neg_Dct[,-c(1:4, 269:272)]

    library(FactoMineR)
    nb_1 = estim_ncpPCA(df,ncp.max=5)
    res.comp_1 = imputePCA(df,ncp=2)
    res.pca_1 = PCA(res.comp_1$completeObs)

    library(pca3d)
    Node <- res.comp_1$completeObs
    pca <- prcomp(Node, scale.=TRUE)
    gr <- factor(Neg_Dct[,272])
    summary(gr)

    B_1 B_2
    31 132

    #3D plot##
    pca3d(pca, group=gr)

    #2D plot##
    pca2d(pca, group=gr)

Thank you, Toufiq
Use PCAtools (I am the developer), where you can have as many levels as you want, and represent them as both different shapes and / or colours:

[4.2.3 Change shape based on tumour grade, remove connectors, and add titles](https://bioconductor.org/packages/devel/bioc/vignettes/PCAtools/inst/doc/PCAtools.html#change-shape-based-on-tumour-grade-remove-connectors-and-add-titles)

[![kkkk](https://i.ibb.co/7jCk9jv/kkkk.png)](https://ibb.co/5G4K7GT)
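For your case, a minimal sketch, assuming `res.comp_1$completeObs` has samples in rows (as in your `prcomp` call) and that columns 269-272 of `Neg_Dct` are the sample annotations:

    library(PCAtools)
    mat <- t(res.comp_1$completeObs)      # PCAtools wants variables x samples
    meta <- Neg_Dct[, 269:272]            # your categorical columns
    rownames(meta) <- colnames(mat)       # metadata rows must match samples
    p <- pca(mat, metadata = meta, scale = TRUE)
    biplot(p, colby = colnames(meta)[4], shape = colnames(meta)[1],
           legendPosition = "right")

With `colby` and `shape` you can encode two categorical variables at once on the same biplot.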
biostars
{"uid": 410205, "view_count": 3102, "vote_count": 2}
Hi, I am trying to convert a canine gene annotation (GTF) file downloaded from Ensembl to a BED file using the gtf2bed tool within the BEDOPS application. Using this command gives an error:

    $ gtf2bed < Canis_familiaris.CanFam3.1.85_noheader.gtf > Canis_familiaris.CanFam3.1.85_noheader.bed
    Error: Potentially missing gene or transcript ID from GTF attributes (malformed GTF at line [1]?)

I checked the first few lines of the GTF file and they seem to match the required format:

    $ head Canis_familiaris.CanFam3.1.85_noheader.gtf
    X	ensembl	gene	1575	5716	.	+	.	gene_id "ENSCAFG00000010935"; gene_version "3"; gene_source "ensembl"; gene_biotype "protein_coding";
    X	ensembl	transcript	1575	5716	.	+	.	gene_id "ENSCAFG00000010935"; gene_version "3"; transcript_id "ENSCAFT00000017396"; transcript_version "3"; gene_source "ensembl"; gene_biotype "protein_coding"; transcript_source "ensembl"; transcript_biotype "protein_coding";
    ...

I looked at the source code on GitHub for this tool and can see that it checks for a gene or transcript ID and, if one is not present, gives this error. But the gene_id is present here in the first line, so I am not sure how it is reaching the error condition.

I would appreciate any help with troubleshooting this error. Thank you,

- Pankaj
**Note: This should no longer be an error with *gtf2bed* v2.4.40 and on.** ---------- I added more stringent GTF format validation to BEDOPS v2.4.20. The error suggests that the first line is missing the `transcript_id` field. It has a `gene_id` field, as you note, but no `transcript_id` field. The GTF 2.2 specification indicates that this field is mandatory, though its value can be an empty string. There are a couple solutions: 1. Use an older version of `gtf2bed` that doesn't apply this validation check (e.g., 2.4.19 or earlier) 2. Or, modify the GTF and add a placeholder field where none exists I suggest the second solution. You could do the following: $ awk '{ if ($0 ~ "transcript_id") print $0; else print $0" transcript_id \"\";"; }' input.gtf | gtf2bed - > output.bed This adds `transcript_id "";` to lines in the GTF that do not contain that field, and leaves other lines unchanged. The GTF that comes out of this `awk` statement is more valid, enough to get through the conversion step, and so it can be piped to `gtf2bed` to get BED as output.
biostars
{"uid": 206342, "view_count": 15164, "vote_count": 3}
Hi Biostars, Is it legitimate to sum up raw read counts from technical replicates of RNAseq, and use these summed counts for DE analysis. Would appreciate detailed and justified answers. Thanks
There is a theoretical justification for summing and not averaging. Read counts follow a Poisson distribution so averaging them results in data that is not Poisson, but summing is still Poisson. See Mike Love's explanation [here][1] [1]: http://seqanswers.com/forums/showthread.php?t=60996#4
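If you are using DESeq2, it ships a helper that does exactly this; a minimal sketch, assuming your `colData` has a `sample` column (biological sample) and a `run` column (technical replicate):

    library(DESeq2)
    # dds: one column per technical replicate (e.g. per lane/run)
    ddsColl <- collapseReplicates(dds, groupby = dds$sample, run = dds$run)
    # counts are now summed within each biological sample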
biostars
{"uid": 273421, "view_count": 9430, "vote_count": 14}
Dear everyone,

I'm confused about how to build a risk-score formula for survival analysis. I have seen risk-score formula building involved in many articles, but I have spent a day trying to solve it and failed. Could someone take an example and show how to build it?

[Here are the article's details][1]:

> Based on the expression level of five lncRNAs, we designed a risk-score formula for ccRCC patients' survival prediction. The risk score formula is as follows: Risk score = (1.43 × expression level of AC069513.4) + (0.81 × expression level of AC003092.1) + (1.64 × expression level of RP11-507K2.3) + (-6.56 × expression level of CTC-205M6.2) + (-1.72 × expression level of U91328.21)

but I don't know how to calculate the risk-score formula.

Thank you very much! Alex

[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5601685/
As zx8754 alluded to, it is simply a matter of multiplying the expression values by a weight and then summing these up. The weights that they use are the beta estimates (coefficients) from the fitted Cox proportional hazards model. The exponential function applied to these coefficients gives the hazard ratios. This, and variations of it, represents a very common way to calculate 'risk scores'.

Hopefully that helps.

Kevin
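If you want to build such a formula for your own data, here's a minimal sketch with the survival package; `dat`, `time`, `status` and the gene columns are hypothetical names:

    library(survival)
    genes <- c("g1", "g2", "g3", "g4", "g5")            # your five lncRNAs
    fit <- coxph(Surv(time, status) ~ g1 + g2 + g3 + g4 + g5, data = dat)
    coef(fit)                                            # these are the weights
    risk <- as.matrix(dat[, genes]) %*% coef(fit)        # weighted sum per patient
    # equivalently: predict(fit, type = "lp") gives the (centered) linear predictor

Patients are then commonly split at the median risk score into high- and low-risk groups for Kaplan-Meier curves.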
biostars
{"uid": 338216, "view_count": 3093, "vote_count": 3}
Simple question - I need to create a GTF file to use in HTSeq-count that contains gene regions plus 3kb upstream. (Background: doing a MeDIP-seq experiment, want to look for differential methylation in genic and 3kb promoter regions using a count-based method like edgeR/DESeq).

I was planning on making one myself from the UCSC hg19 refFlat table. The refFlat table has gene coordinates, but I need to extend this 3kb upstream to capture promoter regions.

Column 3 contains the strand (+/-) and columns 4-5 contain the transcription start (txStart) and end (txEnd) positions.

If I want to capture 3kb upstream of the TSS, I was planning on adding 3000 to txStart, but that is only for genes on the + strand, correct? If I want 3kb upstream of the TSS for genes on the - strand, should I add this 3000 to txEnd?

E.g. the [DENND1B gene](http://genome.ucsc.edu/cgi-bin/hgTracks?hgsid=299873535) is on the - strand at chr1:197,473,879-197,744,623. However, [looking at it in the browser](http://genome.ucsc.edu/cgi-bin/hgTracks?hgsid=299873535), it's transcribed "right to left", so I presume I would want to add 3000 to the txEnd number, 197,744,623, even though this is really where transcription starts.

Am I thinking about this correctly?
> "If I want 3kb upstream of the TSS for genes on the - strand, should I add this 3000 to txEnd?" yes. the position are always ordered from the 5' of the chromosome to the 3' and named xxStart and xxEnd (with xxStart<=xxEnd) whatever the value of "strand". $ mysql --user=genome --host=genome-mysql.cse.ucsc.edu -A -D hg19 -e 'select count(*) from refFlat where txStart>txEnd or cdsStart>cdsEnd' +----------+ | count(*) | +----------+ | 0 | +----------+ $ mysql --user=genome --host=genome-mysql.cse.ucsc.edu -A -D hg19 -e 'select txStart,txEnd from refFlat where strand="-" limit 4' +-----------+-----------+ | txStart | txEnd | +-----------+-----------+ | 62929370 | 62937380 | | 34610 | 36081 | | 153270337 | 153283194 | | 113235158 | 114449242 | +-----------+-----------+ So you'll only have to : newStart=(start<3000?0:start-3000); newEnd=end+3000; **EDIT:** I was too fast. My pseudo-code extends the sequence on both side. If you want to extend **upstream** sequence : newStart=(strand=='+'?start<3000?0:start-3000:start); newEnd=(strand=='+'?end:end+3000); to fetch the (positive strand) sequence; See [How to get the sequence of a genomic region from UCSC?][1] and [Extract sequence from the genome?][2] [1]: /p/56/ [2]: /p/3137
biostars
{"uid": 53655, "view_count": 8923, "vote_count": 5}
Hello all, I don't know if this is the appropriate place to ask this question, but I am currently analyzing a single-cell RNA-seq dataset with 2 conditions and 10 samples each (a total of 20 samples). The samples are not technical replicates; they each come from a different human. The files, once downloaded, are in .h5 format. After scouring the web, I still cannot find a pipeline that works for me. Should I analyze each of the samples individually or merge them into one in some way and proceed? I am trying to follow the Seurat pipeline, so any help on that note would be helpful. I have written this code:

    h5_files <- list.files(pattern ="*.h5")
    h5_read <- lapply(h5_files, Read10X_h5)
    h5_seurat <- lapply(h5_read, CreateSeuratObject, min.cells=5, min.features=250, project="ccc")

Here, h5_seurat in the last line of code is a list with 20 Seurat objects (from the original .h5 files). I want to follow the rest of the Seurat pipeline, but I don't know if I should do it individually for each of the 20 samples or merge them somehow?

Thanks
Hello, You can merge all the files into a Seurat object and continue the analysis with it. Here's an example similar to what you're looking for [scRNA-seq example][1]. Best, Rafael [1]: https://nbisweden.github.io/workshop-scRNAseq/labs/compiled/seurat/seurat_01_qc.html
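To merge your list into a single object, here's a minimal sketch (the cell-ID prefixes are hypothetical; use your real sample names):

    merged <- merge(x = h5_seurat[[1]],
                    y = h5_seurat[2:length(h5_seurat)],
                    add.cell.ids = paste0("sample", seq_along(h5_seurat)),
                    project = "ccc")
    # recover the sample of origin from the cell-ID prefix
    merged$sample <- sub("_.*", "", colnames(merged))

From there you can run the usual QC/normalization steps on the merged object; if you later see strong sample-specific clustering, consider Seurat's integration workflow across the 20 samples.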
biostars
{"uid": 9536950, "view_count": 489, "vote_count": 2}
Hi guys ! From a bam alignment file, I want to compute the ratio between the number of reads terminating and overlapping at all genomic positions (called *psi-ratio* in this figure). ![image description][1] For the number of reads overlapping, it's easy (its basically the coverage, given by `samtools depth` for instance), but for the reads terminating I'm a bit lost. I guess this has to do with the POS field of the sam/bam alignment (1-based leftmost mapping POSition) but I can't go much further than that... Any help or resource will be appreciated ! Subsidiary question : what if the input data is paired-end and I want the ratio between the **FRAGMENTS** terminating and overlapping ? For those interested in why I want to do that, the idea is very similar to the recently described [psi-seq][2] : I want to detect positions on RNAs that blocks the reaction of reverse transcription during the library preparation. Thanks a lot ! [1]: http://i.imgur.com/WlQko3x.png [2]: http://www.ncbi.nlm.nih.gov/pubmed/25219674
> for the reads terminating I'm a bit lost. using **bioalcidae** : https://github.com/lindenb/jvarkit/wiki/BioAlcidae ``` $ java -jar dist/bioalcidae.jar input.bam \ -e 'while(iter.hasNext()) { var rec = iter.next(); if(rec.getReadUnmappedFlag()) continue; out.println(rec.getContig()+"\t"+rec.getAlignmentEnd()); }' | LC_ALL=C sort -k1,1 -k2,2n | LC_ALL=C uniq -c 1 rotavirus 52 1 rotavirus 56 1 rotavirus 64 1 rotavirus 66 1 rotavirus 67 2 rotavirus 68 1 rotavirus 69 6 rotavirus 70 8 rotavirus 71 5 rotavirus 72 5 rotavirus 73 7 rotavirus 74 4 rotavirus 75 6 rotavirus 76 7 rotavirus 77 5 rotavirus 78 8 rotavirus 79 3 rotavirus 80 10 rotavirus 81 4 rotavirus 82 6 rotavirus 83 7 rotavirus 84 4 rotavirus 85 3 rotavirus 86 10 rotavirus 87 5 rotavirus 88 6 rotavirus 89 4 rotavirus 90 9 rotavirus 91 7 rotavirus 92 4 rotavirus 93 1 rotavirus 94 5 rotavirus 95 5 rotavirus 96 6 rotavirus 97 7 rotavirus 98 3 rotavirus 99 4 rotavirus 100 (...) ``` you might use getUnclippedEnd instead of getAlignmentEnd if you want the unclipped alignments.
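If you prefer to stay in R, here is a rough GenomicAlignments sketch of the same idea (alignment end taken as the "termination", mirroring the snippet above; strandedness and the paired-end/fragment variant are left as adjustments, e.g. via `readGAlignmentPairs()` plus `granges()`):

    library(GenomicAlignments)
    gal <- readGAlignments("input.bam")   # placeholder file name
    cov <- coverage(gal)                  # overlapping reads per position
    psi <- lapply(names(cov), function(chr) {
        g <- gal[as.logical(seqnames(gal) == chr)]
        term <- tabulate(end(g), nbins = length(cov[[chr]]))  # terminations
        term / pmax(as.numeric(cov[[chr]]), 1)                # avoid 0/0
    })
    names(psi) <- names(cov)   # per-contig psi-ratio vectors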
biostars
{"uid": 187494, "view_count": 1676, "vote_count": 1}
Hi pals, I've got some problems here. I have an Excel file with several genomic coordinates in the format "strand:start:end", and I also have access to my genome of interest in different formats (fna, gbk...). What I want to do is extract and obtain a file with all the sequences corresponding to my coordinates. I don't mind if the sequences have a header or a custom ID; I just want the sequences in the same order as I have them in my Excel file. Is there any tool that can do this? It would be much appreciated, since I have to extract more than 2000 sequences, and doing this manually is humanly impossible.

The next step is to perform a BLAST against another genome, also accessible in several formats. I'm using the NCBI blastn program and this command: `-task blastn -query "query" -subject "subject" -outfmt 6 > name`. How can I make the output file contain only the most relevant matches (e.g. only the matches that have an E-value lower than 0.0001)?

Thanks beforehand!
If you're using Windows, save your Excel file as tab-delimited text, with the first column the chromosome name, the second the start and the third the end (this is BED format; note that BED start coordinates are 0-based). Then install Python 3 and open `cmd.exe`:

```
C:\Python34\Scripts\pip.exe install pyfaidx
C:\Python34\Scripts\faidx.exe genome.fna --bed regions.bed > out.fasta
```

For the BLAST step, you can filter by significance directly with the `-evalue` option, e.g. add `-evalue 0.0001` to your blastn command so that only matches below that E-value are reported.
biostars
{"uid": 128224, "view_count": 9010, "vote_count": 4}
I used makeblastdb to search many short fasta sequences in a known organism. I successfully completed this step and got the mappings. My question is How can I get the start and stop coordinate? for eg Query 1 AATATAGGTGGTACCACGGAATATCCGTCCTATTTGTATATAGGATGGATAtttttattt 60 |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| Sbjct 1765181 AATATAGGTGGTACCACGGAATATCCGTCCTATTTGTATATAGGATGGATATTTTTATTT 1765122 Query 61 ttttAGGAGGTATAGCAAATGG 82 |||||||||||||||||||||| Sbjct 1765121 TTTTAGGAGGTATAGCAAATGG 1765100 How to get 1765100 and 1765181 from a text file with many mappings like this. I would also like to count the number of mappings or query sequences?
In tabular output (`-outfmt 6`), columns 9 and 10 are the subject start and end, so:

    blastn -query some.fa -db some.fa -outfmt 6 | awk '{print $9 "\t" $10}'

Note that for hits on the minus strand, sstart > send, so swap them if you need min/max coordinates. To count the mappings, pipe the output to `wc -l`; to count distinct query sequences, use `cut -f1 | sort -u | wc -l` instead.
biostars
{"uid": 102189, "view_count": 5145, "vote_count": 1}
I have a list of genomic ranges mapped to hg19. My data is in matrix format; let's call it `ranges`. It has 600,000 rows and 4 columns. Here are a few rows of my data:

    head(ranges)
         chr    start    end      strand
    [1,] "chr1" "10025"  "10525"  "."
    [2,] "chr1" "13252"  "13752"  "."
    [3,] "chr1" "16019"  "16519"  "."
    [4,] "chr1" "96376"  "96876"  "."
    [5,] "chr1" "115440" "115940" "."
    [6,] "chr1" "235393" "235893" "."

Is there a function that gets the sequences and calculates the GC content for each row (each range)? I would prefer the output to be in vector format. I would really appreciate your help.
A solution using GenomicRanges and BSgenome from Bioconductor. You will need to transform your data to a GRanges object for this. library(GenomicRanges) library(BSgenome.Hsapiens.UCSC.hg19) library(BSgenome) gr <- GenomicRanges::GRanges(seqnames = c("chr1", "chr2"), ranges = IRanges(start = c(123456,123490), end = c(500020, 600020))) GetGC <- function(bsgenome, gr){ seqs <- BSgenome::getSeq(bsgenome, gr) return(as.numeric(Biostrings::letterFrequency(x = seqs, letters = "GC", as.prob = TRUE))) } > gr GRanges object with 2 ranges and 0 metadata columns: seqnames ranges strand <Rle> <IRanges> <Rle> [1] chr1 123456-500020 * [2] chr2 123490-600020 * ------- seqinfo: 2 sequences from an unspecified genome; no seqlengths > GetGC(bsgenome = BSgenome.Hsapiens.UCSC.hg19, gr = gr) [1] 0.2823072 0.4355687
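To get from your character matrix to that GRanges input, something like the following should work (note GRanges uses "*" rather than "." for unstranded ranges):

    df <- as.data.frame(ranges, stringsAsFactors = FALSE)
    df$start <- as.numeric(df$start)
    df$end <- as.numeric(df$end)
    df$strand[df$strand == "."] <- "*"   # GRanges has no "." strand level
    gr <- GenomicRanges::makeGRangesFromDataFrame(df)
    gc <- GetGC(BSgenome.Hsapiens.UCSC.hg19, gr)   # one value per input row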
biostars
{"uid": 478444, "view_count": 2573, "vote_count": 2}
I am currently tinkering with an analysis Bioconductor package found here: "An end to end workflow for differential gene expression using Affymetrix microarrays" https://bioconductor.org/packages/devel/workflows/vignettes/maEndToEnd/inst/doc/MA-Workflow.html

Under section 4.5 you perform the task of running a generalized linear model where sva surrogate variables are modeled as a function of your covariates. The following does this by pulling from an ExpressionSet object created in the first part of the pipeline:

    glm.sv1 <-glm(pData(inpData_sv)[,"sv1"]~pData(inpData_sv)[,"Batch"]+pData(inpData_sv)[,"Sex"])

Unfortunately, I am getting an error telling me:

    Error in pData(inpData_sv) : could not find function "pData"

I have tried getting the pData function by downloading ballgown and phylobase, the only two packages where I have seen pData(). Neither of these has provided me with the pData function required here. I'm pretty new to R, so could anyone recommend a substitute function for this R command?
From the linked manual: > The *pData* function of the Biobase package... Do you have Biobase package installed/loaded?
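In other words, installing and attaching Biobase should fix it; for example:

    if (!requireNamespace("BiocManager", quietly = TRUE))
        install.packages("BiocManager")
    BiocManager::install("Biobase")   # once
    library(Biobase)                  # in every session before calling pData()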
biostars
{"uid": 379925, "view_count": 2434, "vote_count": 1}
How do I calculate the effect size between two samples for a gene set, in the case where I have the effect size of each of the two samples? For example:

    s.dist   sample1    sample2
    0.4730   -0.05460   -0.47000

How has s.dist been calculated? Thanks
> How s,dist has been calculated? Since you have *euclidean* among your keywords, I suspect the answer is the euclidean distance between the two samples: ``` sqrt((-0.05460)^2 + (-0.47000)^2) 0.4731608 ``` perhaps the discrepancy with your 0.4730 is due to some rounding error in reporting.
biostars
{"uid": 9540984, "view_count": 335, "vote_count": 1}
I've a table of blast results with ~5 hits/query protein:

```
Protein Class
ProtA 1
ProtA 1
ProtA 1
ProtA 0
ProtA 1
ProtB 1
ProtB 1
ProtB 0
ProtB 0
ProtB 1
```

I would like to convert this into a feature vector matrix like this:

```
Protein Class1 Class2 Class3 Class4 Class5
ProtA 1 1 1 0 1
ProtB 1 1 0 0 1
```

Can someone suggest an efficient way to do this, since I've ~2300k hits in the file.
For any number of hits per protein:

```
> tmp
  protein class
  ProtA   1
  ProtA   1
  ProtA   1
  ProtA   0
  ProtA   1
  ProtB   1
  ProtB   1
  ProtB   0
  ProtB   0
  ProtC   1
  ProtD   1
  ProtD   0

# install & load packages
library(reshape2)
library(plyr)
library(doMC)
registerDoMC(8) # assign 8 cores for ddply to use

# define function myfunc: label each hit Class 1..n, then cast to wide format
myfunc <- function(x) {
    x$name = paste("Class", 1:nrow(x))
    dcast(x, protein ~ name, value.var = "class")
}

# call myfunc in ddply. run ddply in parallel mode to use >1 cores
res = ddply(.data = tmp, .variables = "protein", .fun = myfunc, .parallel = T)

# output
> res
  protein Class 1 Class 2 Class 3 Class 4 Class 5
  ProtA         1       1       1       0       1
  ProtB         1       1       0       0      NA
  ProtC         1      NA      NA      NA      NA
  ProtD         1       0      NA      NA      NA
```
biostars
{"uid": 121126, "view_count": 4296, "vote_count": 2}
For each of several genomes, apart from the already available FASTA sequence and associated GFF3 annotation file, I have also generated 5 additional GFF files with start-stop coordinates of 5 additional types of genomic features. My goal is four-fold in the context of these 6 GFF files and 1 genomic DNA sequence:

1. **Explore** where two or more of these genomic features overlap / intersect / co-localize - I am doing this via text manipulation, using bedtools overlap or bedtools intersect. But I want to visualize 2 or more tracks, and a text-based calculation of intersection alone is not satisfying during the exploration phase. I want to visualize it.
2. Generate **high-detail images** (with flexibility of color / shapes like that in IGB or gff2ps) for **small intervals**, looking at specific and most interesting cases.
3. Generate **overview images** for **entire** chromosomes or even a **genome**, showing overlap across these 6 types of genomic features, without confusing or overwhelming the reader.
4. Finally, to request advice on which tool and/or which **statistical test** to use for verifying whether the observed physical **co-localization** of any 2 of the 6 types of genomic features is random or **non-random**. And if the latter is true, are there more sophisticated tests to examine the physical distribution of genomic loci types relative to one another? And are there tests that can examine more than 2 types of genomic features at a time?

The rather old thread at https://www.biostars.org/p/363/ discusses answers to questions 1-3 above, but I am curious to know if there are better / updated tools for my goals than the ones I mentioned above or at the link (bedtools or BEDOPS, IGB, GBrowse, GFF2PS).

Thanks!
Hi,

If you can use R you should be able to create these plots with [karyoploteR](http://bioconductor.org/packages/karyoploteR/). You would need to load the data into R (probably using [rtracklayer](http://bioconductor.org/packages/rtracklayer/)'s `import` function) and then plot it using `kpPlotGenes` for genes and `kpPlotRegions` for everything else. You can find more information and various examples on how to use it at the [karyoploteR tutorial page](https://bernatgel.github.io/karyoploter_tutorial/).

As for point 4, you can use the Bioconductor package [regioneR](http://bioconductor.org/packages/release/bioc/html/regioneR.html). If you load the data into R you can use the function `overlapPermTest` to perform a permutation strategy to test whether two sets of genomic regions overlap more (or less) than expected by chance.

Hope this helps

Bernat
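To make that concrete, here is a rough sketch of both pieces; `featA`/`featB` are placeholder GRanges imported from two of your GFF files, and the chromosome lengths are made up:

    library(regioneR)
    library(karyoploteR)

    # custom genome: a table of chromosome lengths
    genome <- toGRanges(data.frame(chr = c("chr1", "chr2"),
                                   start = 1, end = c(2.5e8, 2.4e8)))

    # overview plot: one track per feature type (goals 1-3)
    kp <- plotKaryotype(genome = genome)
    kpPlotRegions(kp, data = featA, col = "#FF000088", r0 = 0,    r1 = 0.45)
    kpPlotRegions(kp, data = featB, col = "#0000FF88", r0 = 0.55, r1 = 1)

    # permutation test for non-random overlap (goal 4)
    pt <- overlapPermTest(A = featA, B = featB, ntimes = 1000, genome = genome)
    pt   # p-value and z-score for observed vs permuted overlaps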
biostars
{"uid": 332643, "view_count": 2985, "vote_count": 3}
I started with a 9GB file of a human Xth chromosome in .bam format, and I am trying to get a specific genic region in classic .fasta format (you know, a one-header file, starting with ">", with the description in the header, and a single line of continuous tandem single nucleotides). I have been able to retrieve the reads of the genic region I want in BAM format. I share the path I have followed:

```
samtools sort OriginalFile.bam -o OriginalFile.sort.bam
samtools index OriginalFile.sort.bam
samtools view -b OriginalFile.sort.bam RegName:XX-XX+1 > GeneName.bam
```

After this I got the GeneName.bam file, which contains all the reads I am interested in, but I can't run the phylogenetic tool I am trying to pipe the GeneName.bam sequence to... In order to follow my workpath, I need to transform this GeneName.bam file into the classic .fasta format I described above. I tried a couple of pipelines to achieve this, but what I got as a retrieved .fasta file is a human-readable file with all the single fastq reads, with headers and everything, stacked on top of each other. I need to get the consensus of this, with one header and only contiguous nucleotides. I got the GeneName.fasta file running:

    samtools bam2fq GeneName.bam > GeneName.fasta

or

```
samtools bam2fq GeneName.bam > GeneName.fastq
seqtk seq -A GeneName.fastq > GeneName.fasta
# (I installed bowtie2, boost, tophat2, seqtk, and bcftools, as they seem to complement each other's work at times)
```

and I got the same, analogous result from both. With this line of thinking, I thought I might have to run mpileup on the GeneName.bam file before transforming to FASTA, but mpileup gives a .bcf file as a result... still, this last assumption is just a way of discussing my problem. Has anyone out there found the solution for this - can a .bam file be converted to classic .fasta format - or can you help in any way? Greetings
Alright, I solved my problem, so I'm sharing what I found with the community so you can solve similar problems.

After retrieving a .bam file for the desired region with

    samtools view -b OriginalFile.sort.bam ChrName:XX-XX+1 > Outputname.bam

you need to feed this output to the samtools mpileup command in order to get a .bcf file:

    samtools mpileup -g -f refgenome.fa filename.bam > filename.bcf

(check the samtools tutorials to change or check/confirm the desired flag options). Then you need to transform the .bcf to .vcf, calling:

    bcftools call -m filename.bcf > filename.vcf

(you may prefer `bcftools call -mv` to keep only the variant sites). This last .vcf file needs to be compressed and indexed, like so:

    bgzip -c filename.vcf > filename.vcf.gz
    tabix -p vcf filename.vcf.gz

After having successfully run these commands, you can get to the final step, using samtools again:

    samtools faidx refgenome.fa ChrName:XX-XX+1 | bcftools consensus filename.vcf.gz > filename.fa

I hope this pipeline is useful for you, and I thank the community and the people who helped along the way. Greetings!
biostars
{"uid": 284674, "view_count": 2497, "vote_count": 2}