Hi, how can I create my own annotation GTF for miRNA DE analysis with miRDeep2? Thank you.
Congratulations Angel on getting the rarest badge on all of Biostars! :) ![http://i.imgur.com/HuL4eGU.png][1] [1]: http://i.imgur.com/HuL4eGU.png
biostars
{"uid": 212093, "view_count": 2115, "vote_count": 1}
I have two sets of data: 4 samples with 2 replicates each from batch 1, and another 4 samples with 2 replicates each from batch 2. I would like to remove batch effects from these samples and compare the different methods together. I ran the commands below but get an error:

    design
               samples method batch
    L4_rep1         L4      L    b1
    L4_rep2         L4      L    b1
    L6_L8_rep1   L6_L8      L    b1
    L6_L8_rep2   L6_L8      L    b1
    Q5_Q7_rep1   Q5_Q7      Q    b1
    Q5_Q7_rep2   Q5_Q7      Q    b1
    Q3_rep1         Q3      Q    b1
    Q3_rep2         Q3      Q    b1
    co_40d_A    co_40d co_40d    b2
    co_40d_B    co_40d co_40d    b2
    co_60d_A    co_60d co_60d    b2
    co_60d_B    co_60d co_60d    b2
    EB_A            EB     EB    b2
    EB_B            EB     EB    b2
    H9_A            H9     H9    b2
    H9_B            H9     H9    b2

    design$samples <- factor(design$samples, levels = c("L4", "L6_L8", "Q3", "Q5_Q7", "co_40d", "co_60d", "EB", "H9"))
    design$method  <- factor(design$method,  levels = c("L", "Q", "co_40d", "co_60d", "EB", "H9"))
    design$batch   <- factor(design$batch,   levels = c("b1", "b2"))

    design.matrix <- model.matrix(~0+batch+method, design)
    design.matrix
               batchb1 batchb2 methodQ methodco_40d methodco_60d methodEB methodH9
    L4_rep1          1       0       0            0            0        0        0
    L4_rep2          1       0       0            0            0        0        0
    L6_L8_rep1       1       0       0            0            0        0        0
    L6_L8_rep2       1       0       0            0            0        0        0
    Q5_Q7_rep1       1       0       1            0            0        0        0
    Q5_Q7_rep2       1       0       1            0            0        0        0
    Q3_rep1          1       0       1            0            0        0        0
    Q3_rep2          1       0       1            0            0        0        0
    co_40d_A         0       1       0            1            0        0        0
    co_40d_B         0       1       0            1            0        0        0
    co_60d_A         0       1       0            0            1        0        0
    co_60d_B         0       1       0            0            1        0        0
    EB_A             0       1       0            0            0        1        0
    EB_B             0       1       0            0            0        1        0
    H9_A             0       1       0            0            0        0        1
    H9_B             0       1       0            0            0        0        1

    library(edgeR)
    data_filter <- # count table
    edgeR.dgelist = DGEList(data_filter)
    edgeR.dgelist_normal = calcNormFactors(edgeR.dgelist)
    CommonDisp <- estimateGLMCommonDisp(edgeR.dgelist_normal, design.matrix)

    Error in glmFit.default(y, design = design, dispersion = dispersion, offset = offset, :
      Design matrix not of full rank.  The following coefficients not estimable: methodH9

I would like to know whether my design matrix is correct. Also, I would like to compare the Q method against the other methods; would you please help me construct the contrast?
Your **method** and **batch** are confounded ( b1 -> L, Q ) and ( b2 -> co_40d, co_60d, EB, H9 ), and the same holds for **samples** and **batch**. Also, for b2, **method** and **samples** are identical, and it is not possible to estimate effects for redundant variables. Are **samples** technical replicates? You could drop **samples** and keep only **method**, but in any case **batch** is still confounded - meaning you can't independently estimate the **batch** and **method** effects. Try searching for `Design matrix not of full rank`; there are plenty of very good posts explaining the causes and how to solve it. For example: https://support.bioconductor.org/p/68092/ https://www.biostars.org/p/199519/ https://support.bioconductor.org/p/80408/
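A quick way to see the rank deficiency the error message refers to - a minimal sketch in base R, using the `design` data frame and `design.matrix` from the question:

    # the model asks for 7 coefficients...
    ncol(design.matrix)          # 7
    # ...but the matrix only has rank 6, so one coefficient is not estimable
    qr(design.matrix)$rank       # 6
    # cross-tabulating the factors shows the confounding directly:
    # every level of method occurs in exactly one batch
    table(design$method, design$batch)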
biostars
{"uid": 311320, "view_count": 3229, "vote_count": 1}
I'm currently trying to calculate the number of single-cell ATAC fragments that lie within peaks defined by a bed file. I have a tsv file containing fragments with the following columns: chromosome, start, end, cell barcode. This file is around 10 GB in size (around 250 million rows). I also have another tsv file containing peaks with the following columns: chromosome, start, end. What I want to do is: 1. Assign each fragment to a peak in the bed file. 2. Output a data frame containing the cell barcode, the region the fragments are assigned to (chr, start, end), and the number of fragments assigned to that region. Here is what I tried; however, R ran out of memory (I am running a desktop with 64 GB memory and an i7 9700K). It also seems very slow. Maybe R is not suited for this kind of task, but currently it is what I am most familiar with. In summary, I loop over the fragments and filter the bed file for the region each fragment falls into, append another column to the fragments holding the region identifier, and finally count the unique identifiers per cell.

    fragments$region <- "none"
    for(i in 1:nrow(fragments)){
      chr_fragment <- fragments[i,] %>% pull(chr)
      start_fragment <- fragments[i,] %>% pull(start)
      end_fragment <- fragments[i,] %>% pull(end)

      # region containing start of fragment
      extracted_region <- bed_file %>%
        filter(chr == chr_fragment) %>%
        filter(start <= start_fragment & start_fragment <= end) %>%
        paste(collapse = "_")
      extracted_region <- paste0("chr", extracted_region)

      # append to initial fragments tsv file
      fragments[i,5] <- extracted_region
      paste0("finished assigning ", i, " out of ", nrow(fragments), " fragments") %>% print()
    }
    fragments <- rename(fragments, region = 5)

    # remove fragments not in peak
    fragments <- fragments %>% filter(region != "none")

    # count number of fragments in region per cell
    fragments <- fragments %>% mutate(dplyr::count(fragments, barcode, region))

Is it possible to do this in a way that won't take too much memory, or is it impossible in R? Or is there a tool that can do this? (I looked but did not find any that suits this.) The loop also seems to take an awfully long time; I don't think it will finish in weeks. Any help is greatly appreciated!
I am not sure if you have looked for any tools for scATAC analysis. [Signac][1], part of the Seurat framework, is developed for chromatin dataset analysis, especially scATAC. The [FeatureMatrix][2] function does exactly what you are looking for.

    FeatureMatrix(
      fragments,
      features,
      cells = NULL,
      chunk = 50,
      sep = c("-", "-"),
      verbose = TRUE
    )

`fragments` is your fragment file and `features` are your peaks as a GRanges object.

[1]: https://satijalab.org/signac/index.html
[2]: https://satijalab.org/signac/reference/FeatureMatrix.html
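A minimal usage sketch based on the signature quoted above. The file names are placeholders, and note that in recent Signac versions the `fragments` argument expects an object made with `CreateFragmentObject()` rather than a plain file path, so check the documentation for your installed version:

    library(Signac)
    library(GenomicRanges)

    # read the peak bed file into a GRanges object
    peaks <- read.table("peaks.bed", col.names = c("chr", "start", "end"))
    peak_ranges <- makeGRangesFromDataFrame(peaks)

    # count fragments per peak per cell barcode
    # (the fragment file should be bgzip-compressed and tabix-indexed)
    counts <- FeatureMatrix(
      fragments = "fragments.tsv.gz",
      features = peak_ranges
    )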
biostars
{"uid": 448115, "view_count": 2844, "vote_count": 1}
<p>I have to plot some MLPA data using Circos, and the idea is to focus on cytobands. Circos uses the cytobands as described by UCSC/Ensembl. Looking at some genes in GeneCards, I found different cytoband locations for the same gene (e.g. SCN5A). Now I want to compare the Ensembl and Entrez Gene cytoband systems and see how the plots behave. I was unable to find a file/page/table similar to the UCSC goldenPath cytoBand at NCBI. Could someone point me in the right direction, please?</p>
<p>Do these files from the NCBI MapView FTP site help?</p>
<pre><code>wget ftp://ftp.ncbi.nih.gov/genomes/MapView/Homo_sapiens/objects/BUILD.37.3/initial_release/ideogram_9606_GCF_000001305.13_400_V1
wget ftp://ftp.ncbi.nih.gov/genomes/MapView/Homo_sapiens/objects/BUILD.37.3/initial_release/ideogram_9606_GCF_000001305.13_550_V1
wget ftp://ftp.ncbi.nih.gov/genomes/MapView/Homo_sapiens/objects/BUILD.37.3/initial_release/ideogram_9606_GCF_000001305.13_850_V1

# examine contents
# head -5 ideogram_9606_GCF_000001305.13_400_V1
1   p   36.3   0      451    1         7200000   gneg
1   p   36.2   451    682    7200000   16200000  gpos  100
1   p   36.1   682    1259   16200000  28000000  gneg
1   p   35     1259   1583   28000000  34600000  gpos  100
1   p   34.3   1583   1779   34600000  40100000  gneg
</code></pre>
biostars
{"uid": 94271, "view_count": 5350, "vote_count": 3}
I'm a Computer Science undergraduate writing my thesis, which is about analyzing open source bioinformatics projects and extracting standalone modules. I need those projects to be written in Java and preferably stable or mature. So far I've found a bunch of projects via SourceForge, and I was wondering if there are any notable ones that are not hosted there. A different repository that has a bioinformatics category would be helpful too. Here's a list of the most promising projects I have so far:

- Jenetics
- JGap
- Jmol
- Juicebox
- Picard
- RDP Classifier
- Shim
- Tassel
- The Chemistry Development Kit
- Toxfree
- TreeView
- VarScan

Thanks in advance.
- [htsjdk][1]
- [gatk][2]
- [igv][3]
- [haploview][4]

...

**Edit** and [using the seqanswers wiki][5].

[1]: https://github.com/samtools/htsjdk/
[2]: https://www.broadinstitute.org/gatk/
[3]: https://www.broadinstitute.org/igv/
[4]: https://www.broadinstitute.org/scientific-community/science/programs/medical-and-population-genetics/haploview/haploview
[5]: http://seqanswers.com/w/index.php?title=Special%3AAsk&q=[[Language%3A%3AJava]]&po=&sort_num=&order_num=ASC&eq=yes&p[format]=broadtable&p[limit]=&p[sort]=&p[offset]=&p[headers]=show&p[mainlabel]=&p[link]=all&p[searchlabel]=&p[intro]=&p[outro]=&p[default]=&p[class]=sortable+wikitable+smwtable&eq=yes
biostars
{"uid": 178093, "view_count": 2145, "vote_count": 2}
Hello everyone! I'm new to Linux, and I have a problem. I have a file, file1, like:

    3
    6
    7
    9
    12

and file2, which is tab-delimited:

    chr1    3052600 3052800 1       E3
    chr1    3052800 3053000 2       E3
    chr1    3059400 3059600 3       E3
    chr1    3059600 3059800 4       E3
    chr1    3059800 3060000 5       E3
    chr1    3062600 3062800 6       E3
    chr1    3101000 3101200 7       E3
    chr1    3105000 3105200 8       E3
    chr1    3105200 3105400 9       E3
    chr1    3116800 3117000 10      E2
    chr1    3117000 3117200 11      E2
    chr1    3164800 3165000 12      E2

I want to extract the lines in file2 whose 4th column equals one of the numbers in file1, like below:

    chr1    3059400 3059600 3       E3
    chr1    3062600 3062800 6       E3
    chr1    3101000 3101200 7       E3
    chr1    3105200 3105400 9       E3
    chr1    3164800 3165000 12      E2

I have spent several hours on this, including writing a very slow Python script, and I searched for a one-line solution, but got nothing!

    awk -v FS="\t" 'NR==FNR{rows[$1]++;next}(substr($NF,1,length($NF)-1) in rows)' file1 file2

Thanks a lot for any suggestions!
Simple with https://github.com/shenwei356/csvtk

    csvtk grep -H -t -f 4 -P file1 file2 > result
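For reference, the same filter can be written in plain awk with no extra tools - a sketch assuming file2 is tab-delimited as shown above:

    # store file1's numbers as keys, then print file2 lines whose 4th column is a stored key
    awk -F'\t' 'NR==FNR { keep[$1]; next } $4 in keep' file1 file2 > result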
biostars
{"uid": 322899, "view_count": 2596, "vote_count": 1}
<p>Is there a tool like bedtools shuffle which I can use to randomly shuffle a bam file? Or will I have to convert my bam into a bed and then shuffle it? Thanks</p>
<p>Use https://github.com/lindenb/jvarkit/wiki/Biostar145820 to shuffle the reads, with the option `-n -1`.</p>
biostars
{"uid": 151102, "view_count": 7404, "vote_count": 2}
I need to reformat headers in a fasta file with headers such as:

    >Agaricus_chiangmaiensis|JF514531|SH174817.07FU|reps|k__Fungi;p__Basidiomycota;c__Agaricomycetes;o__Agaricales;f__Agaricaceae;g__Agaricus;s__Agaricus_chiangmaiensis
    TTGAATTATGTTTTCTAGATGGGTTGTAGCTGGCTCTTCGGAGCATGTGCACGCCTGCCTGGATTTCATTTTCATCCACCTGTGCACCTATTGTAGTCTCTGTCGGGTATTGAGGAAGTG
    >Acarospora_laqueata|DQ842014|SH191965.07FU|refs|k__Fungi;p__Ascomycota;c__Lecanoromycetes;o__Acarosporales;f__Acarosporaceae;g__Acarospora;s__Acarospora_laqueata
    TCGAGTTAGGGTCCCTCGGGCCCAACCTCCAACCCTTTGTGTACCTACTTTTGTTGCTTTGGCGGGCCCGCTGGGAAACTCCACCGGCGGCCACAGGCTGCCGAGCGCCCGTCAGA
    >Ceratobasidiaceae_sp|DQ493566|SH185440.07FU|reps|k__Fungi;p__Basidiomycota;c__Agaricomycetes;o__Cantharellales;f__Ceratobasidiaceae;g__unidentified;s__Ceratobasidiaceae_sp
    TCGAACGAATGTAGAGTCGGTTGTCGCTGGCCCTCTCTGCTGGGCATGTGCACACCTTCTCTTTCATCCACACACACCTGTGCACTCGTGAAGACGGAAGGAGCGCCCTTGGGCGGCGTCC

So that they look like:

    >SH174817.07FU Agaricus chiangmaiensis
    TTGAATTATGTTTTCTAGATGGGTTGTAGCTGGCTCTTCGGAGCATGTGCACGCCTGCCTGGATTTCATTTTCATCCACCTGTGCACCTATTGTAGTCTCTGTCGGGTATTGAGGAAGTG
    >SH191965.07FU Acarospora laqueata
    TCGAGTTAGGGTCCCTCGGGCCCAACCTCCAACCCTTTGTGTACCTACTTTTGTTGCTTTGGCGGGCCCGCTGGGAAACTCCACCGGCGGCCACAGGCTGCCGAGCGCCCGTCAGA
    >SH185440.07FU Ceratobasidiaceae sp
    TCGAACGAATGTAGAGTCGGTTGTCGCTGGCCCTCTCTGCTGGGCATGTGCACACCTTCTCTTTCATCCACACACACCTGTGCACTCGTGAAGACGGAAGGAGCGCCCTTGGGCGGCGTCC

Is there a relatively simple way to isolate these specific elements and re-order them? I think I can get the first part with something like:

    grep -r -o "SH.*FU" file.fasta

But I am unsure how to isolate and reformat the genus and species names in addition to that.
Given `in.fa`:

    $ more in.fa
    >Agaricus_chiangmaiensis|JF514531|SH174817.07FU|reps|k__Fungi;p__Basidiomycota;c__Agaricomycetes;o__Agaricales;f__Agaricaceae;g__Agaricus;s__Agaricus_chiangmaiensis
    TTGAATTATGTTTTCTAGATGGGTTGTAGCTGGCTCTTCGGAGCATGTGCACGCCTGCCTGGATTTCATTTTCATCCACCTGTGCACCTATTGTAGTCTCTGTCGGGTATTGAGGAAGTG
    >Acarospora_laqueata|DQ842014|SH191965.07FU|refs|k__Fungi;p__Ascomycota;c__Lecanoromycetes;o__Acarosporales;f__Acarosporaceae;g__Acarospora;s__Acarospora_laqueata
    TCGAGTTAGGGTCCCTCGGGCCCAACCTCCAACCCTTTGTGTACCTACTTTTGTTGCTTTGGCGGGCCCGCTGGGAAACTCCACCGGCGGCCACAGGCTGCCGAGCGCCCGTCAGA
    >Ceratobasidiaceae_sp|DQ493566|SH185440.07FU|reps|k__Fungi;p__Basidiomycota;c__Agaricomycetes;o__Cantharellales;f__Ceratobasidiaceae;g__unidentified;s__Ceratobasidiaceae_sp
    TCGAACGAATGTAGAGTCGGTTGTCGCTGGCCCTCTCTGCTGGGCATGTGCACACCTTCTCTTTCATCCACACACACCTGTGCACTCGTGAAGACGGAAGGAGCGCCCTTGGGCGGCGTCC

Here's one way:

    $ awk '{ if ($0~/^>/) { n=split($0, a, "|"); gsub(/_/," ", a[1]); printf(">%s %s\n", a[3], substr(a[1], 2)); } else { print $0; } }' in.fa
    >SH174817.07FU Agaricus chiangmaiensis
    TTGAATTATGTTTTCTAGATGGGTTGTAGCTGGCTCTTCGGAGCATGTGCACGCCTGCCTGGATTTCATTTTCATCCACCTGTGCACCTATTGTAGTCTCTGTCGGGTATTGAGGAAGTG
    >SH191965.07FU Acarospora laqueata
    TCGAGTTAGGGTCCCTCGGGCCCAACCTCCAACCCTTTGTGTACCTACTTTTGTTGCTTTGGCGGGCCCGCTGGGAAACTCCACCGGCGGCCACAGGCTGCCGAGCGCCCGTCAGA
    >SH185440.07FU Ceratobasidiaceae sp
    TCGAACGAATGTAGAGTCGGTTGTCGCTGGCCCTCTCTGCTGGGCATGTGCACACCTTCTCTTTCATCCACACACACCTGTGCACTCGTGAAGACGGAAGGAGCGCCCTTGGGCGGCGTCC
biostars
{"uid": 292436, "view_count": 2031, "vote_count": 1}
<p>Hi everyone</p> <p>Is there a fast way to do this filter?</p> <p>I have a huge Fasta file (sequences are short reads coming from an Illumina instrument). I have also a list of nucleotide sequences (not Fasta, just the sequences) and I want to remove from the big Fasta file all entries identical to those in the list.</p> <p>My idea was simply to go down through the Fasta file and then, for every read, check all the sequences of the list. If the read matches one of the sequences then do nothing, otherwise print the read into a new file. I made this with perl but it takes ages!</p> <p>The list is made up of nucleotide sequences, not IDs. It's something like this:</p> <p>AACGACTACTTATCGATC</p> <p>TCGGCGATATACGTAC</p> <p>CCAGTTTCGGGGCTAT ....</p> <p>Thanks!</p>
<ul>
<li>linearize your fastq file into 3 columns: sequence, ID, qualities (using 'awk' or 'perl')</li>
<li>sort it using the sequence as the key (unix 'sort')</li>
<li>sort your 2nd file of sequences (unix 'sort')</li>
<li>use unix '<a href="http://unixhelp.ed.ac.uk/CGI/man-cgi?join">join</a>' to filter the first file, and re-transform it to FASTQ using awk or perl (see the sketch below)</li>
</ul>
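A minimal sketch of these steps in shell, assuming a standard 4-line-per-record FASTQ `reads.fastq` and a plain one-sequence-per-line list `exclude.txt` (both file names are placeholders); records whose sequence appears in the list are dropped:

    # linearize: one record per line, sequence first so it can be the sort/join key
    paste - - - - < reads.fastq \
      | awk -F'\t' -v OFS='\t' '{print $2, $1, $4}' \
      | sort -k1,1 > reads.linear.txt

    # sort the sequences to exclude
    sort exclude.txt > exclude.sorted.txt

    # keep only records whose sequence is NOT in the exclusion list (-v 1),
    # then rebuild the 4-line FASTQ records
    join -t "$(printf '\t')" -v 1 reads.linear.txt exclude.sorted.txt \
      | awk -F'\t' '{print $2; print $1; print "+"; print $3}' > filtered.fastq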
biostars
{"uid": 4881, "view_count": 27304, "vote_count": 12}
I have a data table that I would like to run expression analysis on, but the data are in log2. How can I get the raw values back from log2?

Data (in log2):

    Gene_names  G_rep1.Log2  G_rep2.Log2  G_rep3.Log2
    ACTG1       38.49        38.53        33.39
    TUBB4B      37.61        37.31        37.35
    TUBA1B      37.53        37.30        36.99
    RPLP2       33.20        32.90        33.78
    ACTC1       35.19        35.35        29.73
2^x
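In R, applied to the table above - a minimal sketch assuming the table is in a data frame `df` with gene names in the first column:

    # invert the log2 transform: raw = 2^log2_value, for every numeric column
    df[, -1] <- 2^df[, -1]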
biostars
{"uid": 457899, "view_count": 840, "vote_count": 1}
Hi all, I have a large data frame and I am trying to duplicate each column right beside the original one. For example:

Input:

    V1 V2 V3
    A  T  G

Output:

    V1 V1 V2 V2 V3 V3
    A  A  T  T  G  G

Please let me know if there is a way to do this. Would it be possible to use rep() for this? Thanks in advance!
I don't know. Is this what you need?

    df <- data.frame(
      a = c(1,2,3,4,5),
      b = c('a','b','c','d','e'),
      c = c(1,'b',3,'d',5),
      d = c('a',2,'c',4,'e'))

    df
      a b c d
    1 1 a 1 a
    2 2 b b 2
    3 3 c 3 c
    4 4 d d 4
    5 5 e 5 e

via `apply()`
---------------

    do.call(cbind, apply(df, 2, function(x) data.frame(x,x)))

      a.x a.x.1 b.x b.x.1 c.x c.x.1 d.x d.x.1
    1   1     1   a     a   1     1   a     a
    2   2     2   b     b   b     b   2     2
    3   3     3   c     c   3     3   c     c
    4   4     4   d     d   d     d   4     4
    5   5     5   e     e   5     5   e     e

via `lapply()`
-----------------

    do.call(cbind, lapply(df, function(x) data.frame(x,x)))

      a.x a.x.1 b.x b.x.1 c.x c.x.1 d.x d.x.1
    1   1     1   a     a   1     1   a     a
    2   2     2   b     b   b     b   2     2
    3   3     3   c     c   3     3   c     c
    4   4     4   d     d   d     d   4     4
    5   5     5   e     e   5     5   e     e

Now going out for a jog. Catch you later

Kevin
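On the rep() question specifically - a one-liner sketch that duplicates each column by indexing it twice, using the same `df` as above:

    # repeat each column index twice, in place: 1,1,2,2,3,3,...
    df2 <- df[, rep(seq_len(ncol(df)), each = 2)]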
biostars
{"uid": 9463098, "view_count": 2044, "vote_count": 1}
I am interested in adding multiple row annotations to my heatmap using the pheatmap function in R (example: https://stackoverflow.com/questions/41628450/r-pheatmap-change-annotation-colors-and-prevent-graphics-window-from-popping-up). The examples I see online all give only one annotation. What is the easy way to extend it to multiple row annotations?
from the pheatmap help:

```
# make test matrix
test = matrix(rnorm(200), 20, 10)
test[1:10, seq(1, 10, 2)] = test[1:10, seq(1, 10, 2)] + 3
test[11:20, seq(2, 10, 2)] = test[11:20, seq(2, 10, 2)] + 2
test[15:20, seq(2, 10, 2)] = test[15:20, seq(2, 10, 2)] + 4
colnames(test) = paste("Test", 1:10, sep = "")
rownames(test) = paste("Gene", 1:20, sep = "")

# define the annotation
annotation_row = data.frame(
  GeneClass = factor(rep(c("Path1", "Path2", "Path3"), c(10, 4, 6))),
  AdditionalAnnotation = c(rep("random1", 10), rep("random2", 10))
)
rownames(annotation_row) = paste("Gene", 1:20, sep = "")

pheatmap(test, annotation_row = annotation_row)
```
biostars
{"uid": 285370, "view_count": 23061, "vote_count": 2}
Hi all. I am using StringTie to assemble a transcriptome from my RNA-Seq data. My question: if I use the RefSeq reference annotation downloaded from the UCSC Genome Browser website, will the CDS, start_codon and stop_codon features in that GTF file affect transcriptome assembly? For example, would StringTie treat a CDS/start_codon/stop_codon as a new exon, even though these features are really just parts of exons? The GTF file looks like this:

    chr1 hg19_refGene start_codon 67000042 67000044 0.000000 + . gene_id "NM_032291"; transcript_id "NM_032291";
    chr1 hg19_refGene CDS         67000042 67000051 0.000000 + 0 gene_id "NM_032291"; transcript_id "NM_032291";
    chr1 hg19_refGene exon        66999639 67000051 0.000000 + . gene_id "NM_032291"; transcript_id "NM_032291";
    chr1 hg19_refGene CDS         67091530 67091593 0.000000 + 2 gene_id "NM_032291"; transcript_id "NM_032291";
    chr1 hg19_refGene exon        67091530 67091593 0.000000 + . gene_id "NM_032291"; transcript_id "NM_032291";
Hi, please do not use this GTF file from UCSC. A GTF file carries not only the information of individual exons (of a transcript isoform) but also of the different transcripts that originate from a particular gene. Notice that the `gene_id` and `transcript_id` are the same in the GTF file above, so any transcript assembler you use (like StringTie) would not be able to infer the transcript <-> gene relationship. Compare this GTF structure from Ensembl:

    1 protein_coding exon 874655 874840 . + . gene_id "ENSG00000187634"; transcript_id "ENST00000455979"; exon_number "1"; gene_name "SAMD11"; gene_biotype "protein_coding"; transcript_name "SAMD11-004"; exon_id "ENSE00002715021";
    1 protein_coding CDS  874655 874840 . + 2 gene_id "ENSG00000187634"; transcript_id "ENST00000455979"; exon_number "1"; gene_name "SAMD11"; gene_biotype "protein_coding"; transcript_name "SAMD11-004"; protein_id "ENSP00000412228";
    1 protein_coding exon 876524 876686 . + . gene_id "ENSG00000187634"; transcript_id "ENST00000455979"; exon_number "2"; gene_name "SAMD11"; gene_biotype "protein_coding"; transcript_name "SAMD11-004"; exon_id "ENSE00003477353";
    1 protein_coding CDS  876524 876686 . + 2 gene_id "ENSG00000187634"; transcript_id "ENST00000455979"; exon_number "2"; gene_name "SAMD11"; gene_biotype "protein_coding"; transcript_name "SAMD11-004"; protein_id "ENSP00000412228";

Hope this is clear. Please use a GTF from [Ensembl][1] or [Gencode][2].

[1]: ftp://ftp.ensembl.org/pub/release-84/gtf/homo_sapiens/
[2]: http://www.gencodegenes.org/releases/current.html
biostars
{"uid": 193637, "view_count": 2968, "vote_count": 1}
Hello all, I was curious if anyone is aware of any tool or approach that can convert a hard-masked genome to soft-masked format. For example, given the genome:

    ....ATGCATGCATGC......

the conversion required is:

    ....ATGCNNNNATGC......  to  ....ATGCatgcATGC......

Regards, B
I would first get the coordinates of Ns in the hard-masked genome and output them as a bed file. Here's an example using seqkit. ``` seqkit locate --bed -rPp "N+" hardmasked.fasta > N_coords.bed ``` You can then use bedtools to soft mask the non-masked genome. ``` bedtools maskfasta -soft -fi unmasked.fasta -bed N_coords.bed -fo softmasked.fasta ```
biostars
{"uid": 9556574, "view_count": 330, "vote_count": 1}
Hi Biostars, I got two output files (*genes.results and *isoforms.results) from aligning reads to the Trinity de novo assembly (Trinity.fasta) with Bowtie2 and computing expected read counts with RSEM. Which of the two output files should I feed to DESeq2 for identifying DEGs? Any suggestions will be appreciated. Thanks in advance.
That means if I want gene-level LFC I need to input the *genes.results file, and for transcript-level LFC the *isoforms.results file should be used. Thank you.
biostars
{"uid": 404323, "view_count": 1272, "vote_count": 1}
Hi, assume a table as below:

    X =
         col1 col2 col3
    row1 "A"  "0"  "1"
    row2 "B"  "2"  "NA"
    row3 "C"  "1"  "2"

I select combinations of two rows using the code below:

    pair <- apply(X, 2, combn, m=2)

This returns a matrix of the form:

    pair =
         [,1] [,2] [,3]
    [1,] "A"  "0"  "1"
    [2,] "B"  "2"  NA
    [3,] "A"  "0"  "1"
    [4,] "C"  "1"  "2"
    [5,] "B"  "2"  NA
    [6,] "C"  "1"  "2"

I wish to iterate over pair, taking two rows at a time, i.e. first isolate `[1,]` and `[2,]`, then `[3,]` and `[4,]`, and finally `[5,]` and `[6,]`. These rows will then be passed as arguments to regression models, i.e. `lm(Y ~ row[i]*row[j])`. I am dealing with a large dataset. Can anybody advise how to iterate over a matrix two rows at a time, assign those rows to variables, and pass them as arguments to a function?

Thanks, S ;-)

**Edit:** In response to the comments, I should specify that my problem concerns SNP and expression data, where I aim to do a pairwise multiple regression analysis (first-order regression) in order to assess any possible SNP-SNP interactions that may affect the expression phenotype.
A simple idiom for "all pairs of rows" of a matrix M of dim(N, C):

    for( i in 1:(N-1) ){
      for( j in (i+1):N ){
        print(paste(i,j))
      }
    }

I've had to do this eQTL analysis. For any non-trivial number of genotypes or expression measurements, the naive approach (e.g. building a new lm for each comparison) is too slow. If PHENO is a matrix (not data.frame) of expression values and GENO is the set of genotypes for a single locus across all of the samples in PHENO, you want something like:

    # for each GENO: fit all phenotypes against one genotype vector at once
    model = lm( t(PHENO)~GENO )

Extend the linear model to match whatever interactions you want to test. You can extract the p-values of the coefficients of a solved linear model with:

    calculate.lm.pval = function(linear.model){
      # Given a solved linear model, return the p-values of the coefficients
      n.obs = dim(linear.model$residuals)[1]
      n.models = dim(linear.model$coefficients)[2]
      cc = matrix(coef(linear.model), nrow=linear.model$rank, ncol=n.models)
      # calculate.linear.model.se is a helper (not shown here) that extracts
      # the standard errors of the coefficients
      se = calculate.linear.model.se(linear.model)
      t.stat = cc/se
      rdf = n.obs - linear.model$rank
      dm = data.frame( 2*pt(abs(t.stat), rdf, lower.tail = FALSE) )
      names(dm) = as.vector(dimnames(linear.model$coefficients)[[2]])
      rownames(dm) = dimnames(linear.model$coefficients)[[1]]
      dm
    }
biostars
{"uid": 3694, "view_count": 21433, "vote_count": 1}
Dear users, given that I have only one tumor sample and its matched normal, and the sets of germline and somatic mutations with frequencies listed by VarScan2, how can I use this information, along with the genomic region (and corresponding gene) of each mutation, to outline the clonal and subclonal mutation populations in the tumor? Is there a method that can help me build a model of which mutations are present in the entire tumor population and which in only part of it? That would help me identify all the subclones of the tumor. Put another way, it would show me how the mutations are distributed across the tumor mass and to what extent each mutation has a stake in the tumor. This classification helps reconstruct the tumor's fate and evolution, and would also let me list the potential driver and passenger mutations in that tumor. Is there a tool that can do this? Most tools work on multiple samples. I would appreciate suggestions along these lines.
<p>We built the sciClone package for exactly this purpose: https://github.com/genome/sciclone</p> <p>It takes inputs of somatic mutations, with readcounts and VAFs, and uses that information to infer subclonal populations in heterogeneous tumors. It also gives you some nice visualization options.</p>
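A minimal usage sketch, written from memory of the sciClone README - check the repository for the exact current API; `vafs_df` is assumed to be a data frame of somatic variants with columns chr, pos, ref_reads, var_reads, vaf:

    library(sciClone)

    # cluster variant allele frequencies for a single tumor sample
    sc = sciClone(vafs = vafs_df, sampleNames = "tumor1")

    # write the inferred clusters and plot the VAF distribution
    writeClusterTable(sc, "clusters.tsv")
    sc.plot1d(sc, "clusters.1d.pdf")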
biostars
{"uid": 104136, "view_count": 5142, "vote_count": 2}
After peak calling, I want to use IGV to view my data (bedGraph), but I cannot install IGV. Do you have any suggestions? Thanks!
If you are using Ubuntu 18.04 you can do:

    sudo apt install igv

For previous versions you need something like:

    sudo apt install default-jre
    curl -LO http://data.broadinstitute.org/igv/projects/downloads/2.4/IGV_2.4.10.zip
    unzip IGV_2.4.10.zip
    cd IGV_2.4.10
    sudo ln -s $PWD/igv.sh /usr/local/bin/
    igv.sh &
biostars
{"uid": 320203, "view_count": 17765, "vote_count": 1}
I have the code below to generate the heatmap with annotation.

![enter image description here][1]

    library(pheatmap)

    # Generate some data
    test = matrix(rnorm(200), 20, 10)
    test[1:10, seq(1, 10, 2)] = test[1:10, seq(1, 10, 2)] + 3
    test[11:20, seq(2, 10, 2)] = test[11:20, seq(2, 10, 2)] + 2
    test[15:20, seq(2, 10, 2)] = test[15:20, seq(2, 10, 2)] + 4
    colnames(test) = paste("Test", 1:10, sep = "")
    rownames(test) = paste("Gene", 1:20, sep = "")

    # Add annotation as described above, and change the name of annotation
    annotation <- data.frame(Var1 = factor(1:10 %% 2 == 0, labels = c("Exp1", "Exp2")))
    rownames(annotation) <- colnames(test) # check out the row names of annotation

    pheatmap(test, annotation = annotation)

I'm wondering whether there is a way to add the labels to the annotation bar in R code. Thanks in advance.

![enter image description here][2]

[1]: https://s22.postimg.cc/6beg2ma81/Picture1.png
[2]: https://s22.postimg.cc/z2b9su8dt/Picture2.png
Use grid.text from the grid package (shipped with base R); play with the x and y coordinates:

    pheatmap(test, annotation = annotation)
    grid.text(levels(annotation$Var1), x=c(0.25,0.6), y=c(0.89,0.89), gp=gpar(fontsize=10))

<a href="https://ibb.co/cxb6o9"><img src="https://preview.ibb.co/hMfXT9/Rplot01.png" alt="Rplot01" border="0"></a>
biostars
{"uid": 332239, "view_count": 12755, "vote_count": 3}
Hi everyone, I was wondering if anyone knows of an annotation term in the human genome annotation, e.g. from GENCODE or Ensembl, that would allow extracting **intronless genes** and separating them from genes containing introns. There is probably an automated way to extract intronless genes from the exon annotations in the GTF files. Any initial thoughts on this would be much appreciated. Thanks in advance, Sergio
A simple perl one-liner:

    perl -lne 'if (/.+\texon\t.+gene_id "([^"]+)/) { $g{$1}++ }
               END { foreach $i (sort keys %g) { print $i unless $g{$i} > 1 } }
              ' gencode.v26.annotation.gtf \
        > intronless.txt

If you want to make sure that the code works, you can keep both the intermediate (exoncounts.txt) and final results (intronless.txt) to check the exon counts manually:

    perl -lne 'if (/.+\texon\t.+gene_id "([^"]+)/) { $g{$1}++ }
               END { foreach $i (sort keys %g) { print "$i\t$g{$i}" } }
              ' gencode.v26.annotation.gtf \
        | tee exoncounts.txt \
        | perl -lane 'print $F[0] if $F[1]==1' > intronless.txt

Taking the same exon-counting rationale into this `grep | cut | sort | uniq | perl` combination is even faster:

    grep exon gencode.v26.annotation.gtf | cut -d'"' -f2 | sort | uniq -c \
        | tee exoncounts.txt | perl -lane '$F[0] == 1 and print $F[1]' > intronless.txt

Important note: I just realized that CDS lines also get through the previous grep, and that HAVANA and ENSEMBL annotations may be redundant (so the same exons could be counted twice), so the code should handle those issues in order to generate the proper output:

    awk '{ if ($3=="exon") print $1, $4, $5, $10 }' gencode.v26.annotation.gtf | sort -u \
        | cut -d'"' -f2 | sort | uniq -c | perl -lane '$F[0] == 1 and print $F[1]' > intronless.txt

Explanation: `awk` selects exon lines and prints only chromosome, start, end and geneid, `sort -u` collapses redundant exons, `cut -d'"' -f2` reduces the output to geneids only, `sort | uniq -c` collapses identical geneids while counting them, and `perl` prints geneids containing 1 exon only.
biostars
{"uid": 472156, "view_count": 816, "vote_count": 2}
I am trying to generate a PON for WES samples following [GATK recommendations][1]; they also have another explanation in this [Mutect2 article][2], but it's basically the same 3-step procedure:

> ***step 1**. Run Mutect2 in tumor-only mode for each normal sample*: ` gatk Mutect2 -R reference.fasta -I normal1.bam -max-mnp-distance 0 -O normal1.vcf.gz `

> ***step 2**. Create a GenomicsDB from the normal Mutect2 calls*: ` gatk GenomicsDBImport -R reference.fasta -L intervals.interval_list --genomicsdb-workspace-path pon_db -V normal1.vcf.gz -V normal2.vcf.gz -V normal3.vcf.gz -V ... `

> ***step 3**. Combine the normal calls using CreateSomaticPanelOfNormals*: ` gatk CreateSomaticPanelOfNormals -R reference.fasta --germline-resource af-only-gnomad.vcf.gz -V gendb://pon_db -O pon.vcf.gz `

I am using `gatk 4.1.7` (the latest at the moment), but the output I got from step 2 (`GenomicsDBImport`) is a folder with some files in it, such as `vcfheader.vcf` and `vidmap.json`, plus what looks like a file for every chromosome, with `$`-separated contig boundaries as specified in the BED file (e.g. `X$200786$155255277`). If I try to pass this directory to the `-V` option of `CreateSomaticPanelOfNormals` (step 3), I get an error that the specified input is not a regular file, and the GATK documentation confirms that `-V` is supposed to take a VCF file. Does anybody who has generated PONs before, or worked with this, know what exact output from step 2 I am supposed to pass to `-V` in step 3? Thank you very much in advance for any help!

[1]: https://gatk.broadinstitute.org/hc/en-us/articles/360042479112-CreateSomaticPanelOfNormals-BETA-
[2]: https://gatk.broadinstitute.org/hc/en-us/articles/360035531132--How-to-Call-somatic-mutations-using-GATK4-Mutect2
I've done this quite recently, with what I hope is the latest version. In my case I had to do it as below:

    -V gendb://pon_db

The `gendb://` prefix shouldn't be changed; what follows it is the path to the directory created by `GenomicsDBImport`.
biostars
{"uid": 447082, "view_count": 4119, "vote_count": 1}
How can I grep the complete sequences containing a specific motif in a fasta file using a shell command? I also want to include the lines beginning with `>` before the matching sequences. I found this post, https://www.biostars.org/p/274859/, which is similar to my problem, but I'm looking for a different motif. My motif looks like this: SXXXX(F/S)XXXL. Each sequence in my fasta file is on one line, and I have more than 300 sequences. For example:

    >sp|Q9H257.2|CARD9_HUMAN RecName: Full=Caspase recruitment domain-containing protein 9; Short=hCARD9
    MSDYENDDECWSVLEGSRVTLTSVIDRSRITPYLRQTKVLNPDDEEQVLSDPNLVIRKRKVGVLLDILQRTGHKGYVAFLESLELYYPQLYKKVTGKEPARVFSMIIDASGESGLTQLLMTEVMKLQKKVQDLTALLSSK
    >sp|Q9H37.2|CTYU_HUMAN
    HHHSVLEGFRVTLTSVIDRFRITPYLRQTKVLNPDDEEQVLSDPNLVIRKRKVGVLLDILQRTGHKGYVAFLESLELYYPQLYKKVTGKEPARVFSMIIDASGESGLTQLLMTEVMKLQKKVQDLTALLSSK
    >sp|Q9re7.2|CARer_HUMAN RecName
    BKLSVLEGWRVTLTSVIDRFRITPYLRQTKVLNPDDEEQVLSDPNLVIRKRKVGVLLDILQRTGHKGYVAFLESLELYYPQLYKKVTGKEPARVFSMIIDASGESGLTQLLMTEVMKLQKKVQDLTALLSSK

The result should be only the first two sequences, because they contain the motif SXXXX(F/S)XXXL:

    >sp|Q9H257.2|CARD9_HUMAN RecName: Full=Caspase recruitment domain-containing protein 9; Short=hCARD9
    MSDYENDDECWSVLEGSRVTLTSVIDRSRITPYLRQTKVLNPDDEEQVLSDPNLVIRKRKVGVLLDILQRTGHKGYVAFLESLELYYPQLYKKVTGKEPARVFSMIIDASGESGLTQLLMTEVMKLQKKVQDLTALLSSK
    >sp|Q9H37.2|CTYU_HUMAN
    HHHSVLEGFRVTLTSVIDRFRITPYLRQTKVLNPDDEEQVLSDPNLVIRKRKVGVLLDILQRTGHKGYVAFLESLELYYPQLYKKVTGKEPARVFSMIIDASGESGLTQLLMTEVMKLQKKVQDLTALLSSK

I tried this command, but it returned all three sequences:

    grep 'S...F\|S\|L.\(.\)\1\{4\}' jara3.fasta -B 1 > jara4.fasta
I believe the {4} you are using is extended regular expression syntax, so you would need either egrep or the -E flag with grep. I got it to work using this:

    grep -E 'S[A-Z]{4}[FS][A-Z]{3}L' jara3.fasta > jara4.fasta

Hope that works! (To pull in the `>` header lines as well, add `-B 1` as in your original command; note that grep then inserts `--` separator lines between match groups, which you may need to remove.)
biostars
{"uid": 339314, "view_count": 3052, "vote_count": 1}
Hey, I already did automatic genome annotation using the RAST software. I am looking for a simple visualisation tool, or anything that can give me the total number of ORFs in a given dataset, or let me extract the descriptions of all ORFs together. It's pretty simple to do, but the programs I am using do not count ORFs and won't let me copy all the ORF descriptions so that I could number them in Word or some other tool. I am using GenBank format. I am not looking for tools like ORF Finder; the ORFs are already predicted and translated in my dataset.
I solved this problem using the program [SnapGene][1]. I loaded the GenBank sequences into the program, then: choose chromosome -> Features -> switch off "full descriptions" -> Ctrl+A. You will then see the number of selected features. (X - 1 - Y)/2 is the number of your annotated ORFs, where X is the number of features and Y is the number of features that are not genes or CDS (i.e. regulatory regions); for example, with X = 41 features of which Y = 4 are regulatory, (41 - 1 - 4)/2 = 18 ORFs. (To check whether you have that type of feature, press the "sort by gene" button.) For smaller datasets you can just manually select all genes and the program will count them for you, but that would be problematic with huge datasets. I chose this route because I had my files in GenBank format. For fasta files I would recommend Mr Dlakic's solution, which is much quicker. Thanks everyone for the help.

[1]: https://www.snapgene.com/
biostars
{"uid": 432955, "view_count": 770, "vote_count": 1}
Hi all, I have a custom database and I used it to run BLASTn against a bacterial genome. **I would like to extract the unmatched regions only.** Is there a command line or another way to do it? Thanks very much for your precious help!
1. output in outfmt 6
2. convert to BED/GTF/GFF
3. `bedtools complement`
4. `bedtools getfasta`

A sketch of these steps follows below.
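A shell sketch of those four steps, with placeholder file names, assuming the genome is the BLAST subject; note that `bedtools complement` needs a "genome file" of contig sizes, here taken from a `samtools faidx` index:

    # 1. tabular BLAST output (outfmt 6 columns include sseqid, sstart, send)
    blastn -query mydb_seqs.fasta -subject genome.fasta -outfmt 6 > hits.tsv

    # 2. convert hits to BED on the genome: BLAST is 1-based, and start/end
    #    can be reversed on the minus strand, so swap and shift as needed
    awk -F'\t' -v OFS='\t' '{s=$9; e=$10; if (s>e) {t=s; s=e; e=t}; print $2, s-1, e}' hits.tsv \
      | sort -k1,1 -k2,2n > hits.bed

    # 3. regions of the genome with no BLAST hit
    samtools faidx genome.fasta
    cut -f1,2 genome.fasta.fai | sort -k1,1 > genome.sizes
    bedtools complement -i hits.bed -g genome.sizes > unmatched.bed

    # 4. extract the unmatched sequence
    bedtools getfasta -fi genome.fasta -bed unmatched.bed -fo unmatched.fasta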
biostars
{"uid": 9498768, "view_count": 646, "vote_count": 1}
I did google this, honestly, but I didn't understand the difference between all these names. I'd highly appreciate any help.
Conda is a package manager - a program where, when you ask for a package to be installed, it will download and install the package you ask for and all of its dependencies (things it needs to run). The information on where to download things from and what the dependencies are for each package is stored in a database called a "channel". bioconda is a conda channel that contains the names, locations and dependencies of many bioinformatics tools. So when you say `conda install pysam -c bioconda`, the conda program will go access the bioconda channel and ask "where do I find the package 'pysam', and what else needs to be installed?" and bioconda will reply "pysam is at https://blahblah.com/bioconda/pysam/versionXY, and when you download it, copy the files to directory ABC and then run script fgh.py. You will first need to install the packages htslib (version > 1.2.3) and samtools (version > 4.5.6) and python (version 3.8) and zlib (version 9.10.11)". Conda then goes away and repeats that process for htslib, samtools, python and zlib until everything is ready to install, then it downloads everything and installs it. Anaconda is both a conda channel (that is, a list of packages, where they can be downloaded from, and what their dependencies are) and an installable bundle that includes python, the most common data science packages for python (including things like scipy, numpy, pandas and matplotlib) and the conda package manager. miniconda is a different bundle that you can download from the same place as Anaconda; it includes python and the conda package manager, but not the common data science packages, which you would have to install manually using the conda package manager if you later decided you wanted them.
biostars
{"uid": 9480933, "view_count": 3640, "vote_count": 3}
I have an alignment file for a protein encoded by ~3000 bases (I have both bam and sam files), and I am looking to extract all the reads that map to bases 877-981. Is there any way to do this with samtools? I tried this command:

    samtools view 8.2alnCorrected.bam "877-981" > filteredreads.bam

However, I got an empty file and this message:

    region "877-981" specifies an unknown reference name. Continue anyway.
You need to specify the chromosome or contig as well. Your bam will have lines that look like:

    chr1 12345 ...

So for reads mapped to that chromosome or contig, you want:

    samtools view 8.2alnCorrected.bam chr1:877-981

(Region queries also require the bam to be coordinate-sorted and indexed, e.g. with `samtools index 8.2alnCorrected.bam`.)
biostars
{"uid": 178920, "view_count": 4178, "vote_count": 1}
Hello, I have been using scanpy to analyze some single-cell data on Google Colab. Everything was working OK, but all of a sudden I reran the notebook and now when I try to import scanpy I get the error below. What could be causing this?

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-12-04a6a9427c98> in <module>
      2 import numpy as np
      3 import pandas as pd
----> 4 import scanpy as sc
      5 import matplotlib.pyplot as plt
      6 get_ipython().run_line_magic('matplotlib', 'inline')

3 frames
/usr/local/lib/python3.8/dist-packages/scanpy/plotting/_utils.py in <module>
     33
     34
---> 35 class _AxesSubplot(Axes, axes.SubplotBase, ABC):
     36     """Intersection between Axes and SubplotBase: Has methods of both"""
     37

TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
```

Thank you
Hey y'all, a fellow from scverse replied to my question and resolved our issue. The issue seems to be that matplotlib version 3.7 is incompatible with scanpy version 1.9.1. As such, to resolve it all you need to do is install a version of matplotlib older than 3.7. I used version 3.6, and that completely eradicated the metaclass issue.

    pip install 'matplotlib == 3.6'

For further reference see the GitHub issue: https://github.com/scverse/scanpy/issues/2411
biostars
{"uid": 9554316, "view_count": 865, "vote_count": 2}
After going through the manual, I couldn't work out how to proceed with collapsing replicates.

    sampleFiles <- list.files(path = "./", pattern = ".counts")
    sampleNames <- gsub(".counts", "", sampleFiles)
    sampleCondition <- c(rep("KO1", 4), rep("KO2", 4), rep("KO3", 4),
                         rep("WT1", 4), rep("WT2", 4), rep("WT3", 4))
    sampleTable <- data.frame(sampleName = sampleNames,
                              fileName = sampleFiles,
                              condition = sampleCondition)
    ddsHTSeq <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable,
                                           directory = directory,
                                           design = ~ condition)
    treatments <- c("KO1", "KO2", "KO3", "WT1", "WT2", "WT3")

    library("DESeq2")
    ddsHTSeq <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable, design = ~ condition)
    colData(ddsHTSeq)$condition <- factor(colData(ddsHTSeq)$condition, levels = treatments)

    # Analysis using DESeq
    dds <- DESeq(ddsHTSeq)
    resultsNames(dds)

    # Pre-filtering
    dds <- dds[ rowSums(counts(dds)) > 1, ]
    res <- results(dds)

    # summarise some basic tallies
    summary(res)

**This is the example from collapseReplicates; I am not sure at which step I should apply it and how to proceed with my dataset:**

    dds <- makeExampleDESeqDataSet(m=12)

    # make data with two technical replicates for three samples
    dds$sample <- factor(sample(paste0("sample",rep(1:9, c(2,1,1,2,1,1,2,1,1)))))
    dds$run <- paste0("run",1:12)

    ddsColl <- collapseReplicates(dds, dds$sample, dds$run)

    # examine the colData and column names of the collapsed data
    colData(ddsColl)
    colnames(ddsColl)

    # check that the sum of the counts for "sample1" is the same
    # as the counts in the "sample1" column in ddsColl
    matchFirstLevel <- dds$sample == levels(dds$sample)[1]
    stopifnot(all(rowSums(counts(dds[,matchFirstLevel])) == counts(ddsColl[,1])))
In order to use `collapseReplicates`, you need a colData like this:

      condition sample       run
    1        KO    KO1 AdipoKO1a
    2        KO    KO1 AdipoKO1b
    3        KO    KO1 AdipoKO1c
    4        KO    KO1 AdipoKO12
    5        KO    KO2 AdipoKO2a
    6        KO    KO2 AdipoKO2b
    ...

Then you will collapse your "runs" (technical replicates) at the level of your samples (biological replicates):

    ddsColl <- collapseReplicates(ddsHTSeq, ddsHTSeq$sample, ddsHTSeq$run)
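A sketch of how those columns could be set up for the dataset in the question - a guess at the grouping, assuming the question's "condition" levels (KO1..WT3) are really the biological samples and each counts file is one technical run:

    # biological sample = KO1..WT3; each of the 4 files per sample is one run
    ddsHTSeq$sample <- ddsHTSeq$condition
    ddsHTSeq$run <- sampleNames
    ddsColl <- collapseReplicates(ddsHTSeq, groupby = ddsHTSeq$sample, run = ddsHTSeq$run)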
biostars
{"uid": 260838, "view_count": 7205, "vote_count": 3}
Hi, I need to create a table with how e-values are distributed for some sequences, as a way of reporting how conserved the sequences are. I got some inconsistent results, and boiled it down to if I query a sequence alone, or if I query it together with other sequences. The result page has a drop-down menu where you can only pick a single query sequence. So I assume that it is independent of the other query sequences? Here is an example to show it. In the first test, I query ">1" alone, and the top hits are 4e-9. In the second test, I query ">1" together with ">0" and ">2", and when I look at only ">2", the top hits are "3e-9" First test set-up: ![https://i.imgur.com/E3xJqOj.png][1] First test results: ![https://i.imgur.com/AOccFBj.png][2] Second test set-up: ![https://imgur.com/uYYnnkC][3] Second test results: ![https://imgur.com/v3SJUZ7][4] I just did the same test with other sequences, and I got either 0.35 or 0.1 as the top hits. All settings are identical between the two searches. I just go to nucleotide BLAST, enter my queries, enter an organism, change to "blastn", and change the number of hits to 20.000. All other settings are the defaults. So what is the **correct** way of doing this search? I'm so confused at the moment :< [1]: https://i.imgur.com/E3xJqOj.png [2]: https://i.imgur.com/AOccFBj.png [3]: https://i.imgur.com/uYYnnkC.png [4]: https://i.imgur.com/v3SJUZ7.png
Update. The NCBI help desk responded quickly, was able to replicate the bug, and quickly also addressed the bug. For others that may have performed similar BLAST searches I paste their response below. > The developers have addressed this issue. > > In summary: > >• The problem only occurred for BLASTN/megaBLAST searches. > >• It only happened if multiple queries were submitted at once. Results for the > first query would be correct, but all other searches would use the > search space for the first query instead of for each individual query. > > • It only affected the web page. Stand-alone BLAST+ does not have this > issue. > >• Also are you aware of this paper: > https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6662297/ This does have > implications for E-values in some situations.
biostars
{"uid": 434428, "view_count": 1456, "vote_count": 1}
<p>So, first I tested what results I should get from the blastall program using the command line, with e-value 0.001:</p>
<pre><code>C:\Niek\Test\blast-2.2.17\bin\blastall -p blastp -d C:\Niek\Test\arabidopsis-smallproteins.fasta -i C:\Niek\Test\arabidopsis-HD.fasta -e 0.001 -F F -m 8 -o C:\Niek\Test\arab-HD-smallproteins-notfiltered.out
</code></pre>
<p>and</p>
<pre><code>C:\Niek\Test\blast-2.2.17\bin\blastall -p blastp -d C:\Niek\Test\arabidopsis-smallproteins.fasta -i C:\Niek\Test\arabidopsis-HD.fasta -e 0.001 -m 8 -o C:\Niek\Test\arab-HD-smallproteins-filtered.out
</code></pre>
<p>After that I made a local blast program, which works fine, but it only found 91 results with e-value equal to or lower than 0.001, whereas blastall via cmd gave around 140-something results. I first thought it missed some, but all the e-values are different.</p>
<pre><code>from Bio.Blast import NCBIStandalone
from Bio.Blast import NCBIXML

my_blast_db = r"C:\Niek\Test\arabidopsis-smallproteins.fasta"
my_blast_file = r"C:\Niek\Test\arabidopsis-HD.fasta"
my_blast_exe = r"C:\Niek\blast-2.2.17\bin\blastall.exe"

result_handle, error_handle = NCBIStandalone.blastall(my_blast_exe, "blastp", my_blast_db, my_blast_file)
blast_records = NCBIXML.parse(result_handle)

E_VALUE_THRESH = 0.001
x = 0
for blast_record in blast_records:
    blast_record = blast_records.next()
    for alignment in blast_record.alignments:
        for hsp in alignment.hsps:
            if hsp.expect &lt;= E_VALUE_THRESH:
                print "==========Alignment========"
                print "sequence:", alignment.title
                print "length:", alignment.length
                print "e value:", hsp.expect
                x += 1
</code></pre>
<p>I first thought that the local blast from biopython uses a different algorithm, but at 'my_blast_exe = r"C:\Niek\blast-2.2.17\bin\blastall.exe"' I specify the same program, so it should be the same. Then I thought it had something to do with the filtering option, but I checked both filtered and unfiltered and it wasn't any of that.</p>
<p>If you know why the local blast from biopython NCBIStandalone gives a different result than doing it directly in the cmd, please let me know.</p>
<p>Thanks in advance, Niek</p>
<p>edit: I checked, and it seems that NCBIStandalone filters out 100% identities, which blastall called by cmd does not. However, this doesn't explain why the e-values are so different.</p>
One major problem is that you are skipping half the results with this:

    for blast_record in blast_records:
        blast_record = blast_records.next()
        ...

It should be just:

    for blast_record in blast_records:
        ....

Or, if you would rather call the next method explicitly for some reason, something like:

    while True:
        blast_record = blast_records.next()
        if blast_record is None:
            break
        ...

Secondary issue: blastall is now being phased out by the NCBI, who call it "legacy" BLAST; they encourage people to use BLAST+ instead, in this case blastp at the command line. As a result, the old Biopython wrappers for calling "legacy" BLAST are all considered obsolete.
biostars
{"uid": 2390, "view_count": 4068, "vote_count": 1}
Hello, I have a table in R which consists of 70k rows and 37 columns. A lot of cells contain **"./."**, which I want to change to **"ab"**. I tried to use `gsub()`, but it does not give me the required output. I used:

    file <- gsub("./.","ab",file)

I want the change to happen throughout the file. Is there any other way I can modify it? Thanks in advance.

Input, e.g.:

    S.no chr pos  gene_name S1  S2  S3
    1    1   1290 X         ./. 1/1 ./.
    2    1   5822 Y         0/1 ./. ./.

Output:

    S.no chr pos  gene_name S1  S2  S3
    1    1   1290 X         ab  1/1 ab
    2    1   5822 Y         0/1 ab  ab

It can be either ab or NA.
Try using a fixed (literal) match:

    file <- gsub("./.", "ab", file, fixed = TRUE)

Or:

    file[ file == "./." ] <- "ab"

-----------

**Edit:** Using the example data provided by OP.

    # example input data
    df1 <- read.table(text = "
      S.no chr pos gene_name S1 S2 S3
      1 1 1290 X ./. 1/1 ./.
      2 1 5822 Y 0/1 ./. ./.", header = TRUE, stringsAsFactors = FALSE)

    df1
    #   S.no chr  pos gene_name  S1  S2  S3
    # 1    1   1 1290         X ./. 1/1 ./.
    # 2    2   1 5822         Y 0/1 ./. ./.

    df1[, c("S1", "S2", "S3")][ df1[, c("S1", "S2", "S3")] == "./." ] <- "ab"

    df1
    #   S.no chr  pos gene_name  S1  S2  S3
    # 1    1   1 1290         X  ab 1/1  ab
    # 2    2   1 5822         Y 0/1  ab  ab
biostars
{"uid": 338155, "view_count": 2700, "vote_count": 1}
I'm trying to decide what would be the most appropriate threshold criteria. In particular, this is for Pfams and a human metagenomics dataset. I'm curious what people have done in the past. It looks like anvi'o uses --cut_ga as the default: https://github.com/merenlab/anvio/issues/498

**What is generally accepted as the most appropriate for metagenomics?**

    --cut_ga
        Use Pfam GA (gathering threshold) score cutoffs. Equivalent to --globT <GA1> --domT <GA2>,
        but the GA1 and GA2 cutoffs are read from the HMM file. hmmbuild puts these cutoffs there
        if the alignment file was annotated in a Pfam-friendly alignment format (extended SELEX or
        Stockholm format) and the optional GA annotation line was present. If these cutoffs are
        not set in the HMM file, --cut_ga doesn't work.

    --cut_tc
        Use Pfam TC (trusted cutoff) score cutoffs. Equivalent to --globT <TC1> --domT <TC2>, but
        the TC1 and TC2 cutoffs are read from the HMM file. hmmbuild puts these cutoffs there if
        the alignment file was annotated in a Pfam-friendly alignment format (extended SELEX or
        Stockholm format) and the optional TC annotation line was present. If these cutoffs are
        not set in the HMM file, --cut_tc doesn't work.

    --cut_nc
        Use Pfam NC (noise cutoff) score cutoffs. Equivalent to --globT <NC1> --domT <NC2>, but
        the NC1 and NC2 cutoffs are read from the HMM file. hmmbuild puts these cutoffs there if
        the alignment file was annotated in a Pfam-friendly alignment format (extended SELEX or
        Stockholm format) and the optional NC annotation line was present. If these cutoffs are
        not set in the HMM file, --cut_nc doesn't work.

http://www.cbs.dtu.dk/cgi-bin/nph-runsafe?man=hmmsearch
As explained above, none of these options will matter unless your HMM has the cutoffs set. The appropriate lines in HMMs look like this:

    GA    25.00 25.00;
    TC    25.00 25.00;
    NC    24.90 24.90;

These bit-scores have to be set manually during model building, and their main purpose is to catch borderline matches that may be true hits but have statistically insignificant E-values. On rare occasions, using these cutoffs will eliminate a match that has a barely significant E-value. A simple way of setting these scores is to pick the worst score in a known group of trusted family members. Most of the time you will get the same result from these scores or from E-values, and I use the latter as a guide. It is good practice to manually inspect hits that are just above or just below the E-value threshold, and to set the database size (the `-Z` switch) to a fixed number so that searches done over a long period of time and with different databases can be compared. If you check [**how Pfam does it**][1], you will notice that they are not using the cutoffs even though the HMMs support them. Instead, they set the database size to `45638612`.

[1]: https://pfam.xfam.org/family/PF00041#tabview=tab6
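For reference, both approaches as hmmsearch invocations - a sketch with placeholder file names:

    # use the gathering thresholds stored in the models (only works if GA lines are present)
    hmmsearch --cut_ga Pfam-A.hmm proteins.fasta > hits_ga.txt

    # or use E-values with a fixed database size, as Pfam does
    hmmsearch -Z 45638612 -E 0.001 Pfam-A.hmm proteins.fasta > hits_evalue.txt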
biostars
{"uid": 430701, "view_count": 3180, "vote_count": 1}
I would like to create an LD vs distance (cM) plot in R using output from PLINK. I have tried different --ld-window-kb values, but I cannot plot the results in R. Furthermore, I have a question about average r2: some papers plot LD decay as average r2 vs distance, and some use r2 vs distance. Can anybody explain this in more detail? Thanks for your attention.
For LD block length distribution, I ran this on the thinned SNP set. plink --bfile "snp-thin" --blocks --blocks-max-kb 200 --out "snp-thin" This produces snp-thin.blocks.det. Then some R. dfr <- read.delim("snp-thin.blocks.det",sep="",header=T,check.names=F,stringsAsFactors=F) colnames(dfr) <- tolower(colnames(dfr)) # ld block density p <- ggplot(dfr,aes(x=kb))+ geom_density(size=0.5,colour="grey40")+ labs(x="LD block length (Kb)",y="Density")+ theme_bw() ggsave("snp-thin-ld-blocks.png",p,height=8,width=8,units="cm",dpi=250) ![enter image description here][5] Now you see LD block length distribution for your set of samples. [5]: https://image.ibb.co/mjzPOH/snp_thin_ld_blocks.png
biostars
{"uid": 300381, "view_count": 18728, "vote_count": 7}
I have a bunch of FPKM-normalized mRNA sequencing data from TCGA-PAAD, which covers pancreatic cancer patients. I wrote a script which goes through that data and makes a big data frame with all 60k+ Ensembl mappings for all 180+ patients, so that I can analyze various hypotheses regarding the clinical data. I have the clinical metadata in a separate table, which I obtained using the TCGAbiolinks package. The problem is that all the columns in the sequencing data table are "HT-seq FPKM UUIDs". Now all I need to do is *map the HT-seq FPKM UUID to a Case UUID (patient)*. I need an API query for this, or a call to some TCGAbiolinks function. Thanks in advance for your help!

OK, I have followed the advice here with some modifications. I obtained JSON-encoded case and file manifests, which take the form given below.

File manifest (first few records)
---------------------------------

    [{
      "file_name": "232f085b-6201-4e4d-8473-e592b8d8e16d.FPKM.txt.gz",
      "data_format": "TXT",
      "access": "open",
      "data_category": "Transcriptome Profiling",
      "file_size": 514027,
      "cases": [
        {
          "project": { "project_id": "TCGA-PAAD" },
          "case_id": "620e0648-ec20-4a12-a6cb-5546fe829c77"
        }
      ],
      "annotations": [
        { "annotation_id": "050203a0-12ab-5025-973d-e070d94f722b" }
      ]
    },{
      "file_name": "b0159d01-f1eb-490d-875b-cfdabed6f529.FPKM.txt.gz",
      "data_format": "TXT",
      "access": "open",
      "data_category": "Transcriptome Profiling",
      "file_size": 515800,
      "cases": [
        {
          "project": { "project_id": "TCGA-PAAD" },
          "case_id": "16b38977-aea1-4c75-89ec-4fb551f652dd"
        }
      ]
    },{
      "file_name": "f2389819-b8fc-460e-821c-01dba313cce1.FPKM.txt.gz",
      "data_format": "TXT",
      "access": "open",
      "data_category": "Transcriptome Profiling",
      "file_size": 510184,
      "cases": [
        {
          "project": { "project_id": "TCGA-PAAD" },
          "case_id": "23908554-b98e-4ff8-98e7-dee3e2c5feaf"
        }
      ]
    },{

Cases manifest (first few records)
----------------------------------

    [{
      "diagnoses": [ { "days_to_death": null } ],
      "case_id": "33833131-1482-42d5-9cf5-01cade540234",
      "submitter_id": "TCGA-2J-AAB4"
    },{
      "diagnoses": [ { "days_to_death": 738.0 } ],
      "case_id": "67e9abc1-4b6f-4054-bdc4-29906c55c682",
      "submitter_id": "TCGA-3A-A9IC"
    },{
      "diagnoses": [ { "days_to_death": null } ],
      "case_id": "a53c919a-4e08-46f1-af3f-30b16b597c33",
      "submitter_id": "TCGA-IB-AAUU"
    },{
      "diagnoses": [ { "days_to_death": 278.0 } ],
      "case_id": "ab449860-46e5-485e-abd5-31c5abef2c58",
      "submitter_id": "TCGA-L1-A7W4"
    },{

Here is the code that was run:
------------------------------

    rm(list = ls())

    output_logfile <- file("log_output.txt", open="wt")
    message_logfile <- file("log_message.txt", open="wt")
    sink(file=output_logfile, type = "output")
    sink(file=message_logfile, type = "message")

    library(GSAR)
    library(org.Hs.eg.db)
    library(GSVAdata)
    library(GSEABase)
    library(GSVA)
    library(dplyr)
    library(rjson)  # needed for fromJSON() below (missing from the original library list)
    data(c2BroadSets)

    #### The following should work with data from the GDC, where we have a shopping cart
    #### of zipped single patient sequencing data
    #### Example workflow for a GDC-hosted study (TCGA-PAAD):
    #### Go to the GDC homepage of the study: https://portal.gdc.cancer.gov/projects/TCGA-PAAD
    #### Click on "Cases"
    ####   download the json manifest for cases
    ####   move it to the project directory
    #### Click on "Files"
    ####   download the json manifest for files
    ####   move it to the project directory
    #### download the actual data by selecting appropriate filters at left and clicking "add all files to cart"
    #### unzip the downloaded archive and move the directory to the project folder

    #### This creates a matrix with all our data
    file.loc <- "PAAD-FPKM"         # the name of the data dir
    file.manifest <- "files.json"   # the name of the files manifest
    cases.manifest <- "cases.json"  # the name of the cases manifest
    dsep <- "/"

    files <- fromJSON(file=file.manifest)
    cases <- fromJSON(file=cases.manifest)
    dlfiles <- list.files(file.loc, recursive = TRUE)

    #### remove the manifest, since we are using a separately downloaded json object
    unlink(paste0(file.loc, dsep, "MANIFEST.txt"))

    #for (file in 1:length(dlfiles)){
    for (file in 1:5){
      if(!exists("rna_table")){
        ## get the patient barcode
        case_id = strsplit(dlfiles[file],"/")
        ## create a table with the expression profile where the count column is named with the patient barcode
        rna_table <- read.delim(paste0(file.loc,dsep,dlfiles[file]), sep="\t",
                                col.names = c("ensemble_id",case_id[[1]][2]))
      } else {
        case_id = strsplit(dlfiles[file],"/")
        new_data <- read.delim(paste0(file.loc,dsep,dlfiles[file]), sep="\t",
                               col.names = c("ensemble_id",case_id[[1]][2]))
        rna_table <- full_join(rna_table, new_data, by = NULL)
      }
    }

    rna_data <- as.matrix(rna_table)
    rownames(rna_data) <- rna_data[,1]
    rna_data <- rna_data[,-1]

    #### This creates a matrix of all the death dates
    death_data <- matrix(nrow=ncol(rna_data),ncol=2)
    colnames(death_data) <- c("filename", "days_to_death")

    #### Do a sanity check
    list.files(file.loc, recursive = TRUE)[1:10] # here are the files we read to get the read counts
    str(death_data[1:5,])   # what our clinical data looks like
    str(rna_data[1:5,1:5])  # what our sequencing data looks like

    for (case in 1:ncol(rna_data)){
      case_id <- files[[grep(colnames(rna_data)[case],files)]]$cases[[1]]$case_id
      #days_to_death <- cases[grep(case_id, cases)][[1]]$diagnoses[[1]]$days_to_death
      death_data[case,1] <- colnames(rna_data)[case]
      #death_data[case,2] <- days_to_death
      death_data[case,2] <- case_id
    }

Now the output (non-errors) is fairly straightforward. Note that I am running this on data from only the first 5 patients in the data folder, in the interest of time:

    > sink(file=message_logfile, type = "message")
    > library(GSAR)
    > library(org.Hs.eg.db)
    > library(GSVAdata)
    > library(GSEABase)
    > library(GSVA)
    > library(dplyr)
    > data(c2BroadSets)
    > #### The following should work with data from the GDC, where we have a shopping cart
    > #### of zipped single patient sequencing data
    > #### Example .... [TRUNCATED]
    > file.manifest <- "files.json"   # the name of the files manifest
    > cases.manifest <- "cases.json"  # the name of the cases manifest
    > dsep <- "/"
    > files <- fromJSON(file=file.manifest)
    > cases <- fromJSON(file=cases.manifest)
    > dlfiles <- list.files(file.loc, recursive = TRUE)
    > #### remove the manifest, since we are using a separately downloaded json object
    > unlink(paste0(file.loc, dsep, "MANIFEST.txt"))
    > #for (file in 1:length(dlfiles)){
    > for (file in 1:5){
    +   if(!exists("rna_table")){
    +     ## get the patient barcode
    +     case_id = strsplit(dlfil .... [TRUNCATED]
    > rna_data <- as.matrix(rna_table)
    > rownames(rna_data) <- rna_data[,1]
    > rna_data <- rna_data[,-1]
    > #### This creates a matrix of all the death dates
    > death_data <- matrix(nrow=ncol(rna_data),ncol=2)
    > colnames(death_data) <- c("filename", "days_to_death")
    > #### Do a sanity check
    > list.files(file.loc, recursive = TRUE)[1:10] # here are the files we read to get the read counts
     [1] "005c0660-3700-40ea-b037-b456319d369a/bb15d7d0-8705-49af-89e4-fc13c01de642.FPKM.txt.gz"
     [2] "030cf06f-890c-4193-9c7d-254980c73a48/3d771128-9e90-49c2-8ee5-23d994ee6398.FPKM.txt.gz"
     [3] "03a162ee-0be2-484d-ad86-17bba311a3f8/4172e3f8-3578-4f33-9168-6f8c2b8d0783.FPKM.txt.gz"
     [4] "051918c1-9bb2-4146-bf85-4e4a55c5759e/5aed2227-1f31-4159-9eed-430bc45c61dc.FPKM.txt.gz"
     [5] "0882ecec-b533-4912-adc1-8ffd6eaa47c1/c19f102d-47a0-48c6-9443-63730d9ea6d1.FPKM.txt.gz"
     [6] "0ae4ff1f-e2d3-46e0-95a2-0ea80a4ebb63/574df2fc-a608-49c5-8e83-f26d03ef8bb3.FPKM.txt.gz"
     [7] "0c2840a2-3a49-4f22-ae21-1cfbb0034212/fef65b57-c58d-4050-8de4-f09f5cd616ce.FPKM.txt.gz"
     [8] "0dfe7aef-a105-4a32-89ca-49a30a1b59ed/65a45bca-b5d4-4763-a51f-f7b9ad9efcb9.FPKM.txt.gz"
     [9] "0e7871dc-a721-4dae-8938-28a73ec3f968/232f085b-6201-4e4d-8473-e592b8d8e16d.FPKM.txt.gz"
    [10] "101e042e-efa2-4c6c-b629-55ecbde859d2/3de80dcb-4ff2-4125-b8e6-9e06ec1cd833.FPKM.txt.gz"
    > str(death_data[1:5,]) # what our clinical data looks like
     logi [1:5, 1:2] NA NA NA NA NA NA ...
     - attr(*, "dimnames")=List of 2
      ..$ : NULL
      ..$ : chr [1:2] "filename" "days_to_death"
    > str(rna_data[1:5,1:5]) # what our sequencing data looks like
     chr [1:5, 1:5] "3.009793e-03" "2.945653e+00" "0.000000e+00" "3.861741e+00" ...
     - attr(*, "dimnames")=List of 2
      ..$ : chr [1:5] "ENSG00000270112.3" "ENSG00000167578.15" "ENSG00000273842.1" "ENSG00000078237.5" ...
      ..$ : chr [1:5] "bb15d7d0.8705.49af.89e4.fc13c01de642.FPKM.txt.gz" "X3d771128.9e90.49c2.8ee5.23d994ee6398.FPKM.txt.gz" "X4172e3f8.3578.4f33.9168.6f8c2b8d0783.FPKM.txt.gz" "X5aed2227.1f31.4159.9eed.430bc45c61dc.FPKM.txt.gz" ...
    > for (case in 1:ncol(rna_data)){
    +   case_id <- files[[grep(colnames(rna_data)[case],files)]]$cases[[1]]$case_id
    +   #days_to_death <- cases[grep(cas .... [TRUNCATED]

However, somehow I am only able to locate the first data file in my files manifest. The last loop in the included source causes the following error to be recorded in my logged messages:

    Joining, by = "ensemble_id"
    Joining, by = "ensemble_id"
    Joining, by = "ensemble_id"
    Joining, by = "ensemble_id"
    Error in files[[grep(colnames(rna_data)[case], files)]] :
      attempt to select less than one element in get1index

Thanks everybody for your help, and looking forward to what you have to say :)
**Edit: 21st September 2018:** More rapid ways of looking up TCGA barcodes from UUIDs or file-names: - https://www.biostars.org/p/318756/#318919 - https://www.biostars.org/p/306400/#306517 ---------------------------------------- -------------------------------- Hello, With the TCGA data, it can indeed be difficult to just figure out which sample is which. A lot of time and effort has to be invested just to organise the data for a particular project. Here's how I did it for a recent TCGA dataset that I re-analysed: In order to search for the Case UUID from these filenames: - download the manifest for your data in JSon format, from <a href="https://portal.gdc.cancer.gov/repository?facetTab=files&filters=%7B%22op%22%3A%22and%22%2C%22content%22%3A%5B%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-PAAD%22%5D%7D%7D%2C%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22files.analysis.workflow_type%22%2C%22value%22%3A%5B%22HTSeq%20-%20FPKM%22%5D%7D%7D%2C%7B%22op%22%3A%22in%22%2C%22content%22%3A%7B%22field%22%3A%22files.experimental_strategy%22%2C%22value%22%3A%5B%22RNA-Seq%22%5D%7D%7D%5D%7D&searchTableTab=files">here</a> - In R, get all of your HTseq FPKM count filenames in a vector or list called *filenames* (e.g. `c("657e19a6-e481-4d06-8613-1a93677f3425.FPKM.txt.gz", "b244f324-fd8a-4d4b-b8f5-bad973c649d5.FPKM.txt.gz", ..., et cetera)`) - Loop through each filename and look up their Case UUID in the JSon file, with a loop like this: . require(rjson) manifest <- fromJSON(file="RNAseqFPKM.json") caseUUIDs <- c() for (i in 1:length(filenames)) { record <- manifest[[grep(filenames[i], manifest, fixed=TRUE, ignore.case=FALSE)]] if (filenames[i]!=record$file_name) { print("FALSE") } caseUUIDs[i] <- record$cases[[1]]$case_id } I added the inner if statement to print 'FALSE' to screen if any file has no matching record (I never encountered such a situation). The loop will take a while to run (maybe 5-10 minutes for 150 filenames) - I could possibly develop it further with `lapply` or `mclappy` but it's one of those nuisance bit of codings that I always put on the back burner and just tolerate. Hope that this helps you out! Kevin
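P.S. On the `lapply` point above: the same lookup can be written without the explicit loop - a sketch, equivalent to the loop (minus the filename sanity check):

    caseUUIDs <- sapply(filenames, function(f) {
      record <- manifest[[grep(f, manifest, fixed=TRUE, ignore.case=FALSE)]]
      record$cases[[1]]$case_id
    })

Note that `grep` here still scans the whole manifest once per filename, so any speed-up would be modest.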
biostars
{"uid": 284708, "view_count": 4678, "vote_count": 2}
I went to [the blast ftp database][1]; there are 18 `nt` files, each less than 800 MB, while `refseq_genome` has 83 files, most of which are larger than 800 MB. This means `refseq_genome` is much larger than the `nt` database. However, when I looked up the definition of `nt` at http://www.ncbi.nlm.nih.gov/BLAST/blastcgihelp.shtml, it says the `nt` database includes

All GenBank + RefSeq Nucleotides + EMBL + DDBJ + PDB sequences (excluding HTGS0,1,2, EST, GSS, STS, PAT, WGS). No longer "non-redundant".

My questions are:

1. In my understanding, RefSeq Nucleotides should include `refseq_genome` and `refseq_rna`, so `refseq_genome` should be much smaller than the `nt` database. Why is `refseq_genome` alone much larger than the whole `nt` database?

2. I tried one accession number `NZ_AARG01000001.1` from a refseq bacterial genome, and ran blastn against the `nt` and `refseq_genome` databases. For `nt`, it took a few seconds and returned fewer than 10 hits. For `refseq_genome`, it took more than 10 minutes and returned more than 100 results (all accession numbers beginning with NZ). Then I searched for NZ and found that NZ represents uncompleted projects. So is the difference between `nt` and `refseq_genome` that nt doesn't include NZ records?

[1]: ftp://ftp.ncbi.nlm.nih.gov/blast/db/
On ["RefSeq accession numbers and molecule types"][1] you will see that the RefSeq accessions with the prefix `NZ_` are from whole genome shotgun (WGS) projects. As such these are excluded from 'nt'. Looking through the other 'genomic' sections of RefSeq, many of these are from WGS projects and are thus also excluded. From the [NCBI BLAST][2] pages 'nt' is currently defined as: > Title:Nucleotide collection (nt) > Description:The nucleotide collection consists of GenBank+EMBL+DDBJ+PDB+RefSeq sequences, but excludes EST, STS, GSS, WGS, TSA, patent sequences as well as phase 0, 1, and 2 HTGS sequences. The database is partially non-redundant. In some cases identical sequences have been merged into one entry, while preserving the accession, GI, title and taxonomy information for each entry. Merged sequences include GenBank and RefSeq entries with identical sequences. Sequences added to the database since April, 2011 have also been merged with identical existing entries. > Molecule Type:mixed DNA > Update date:2014/07/24 > Number of sequences:23840180 In contrast 'refseq_genomic' is defined as: > Title:NCBI Genomic Reference Sequences >Molecule Type:mixed DNA > Update date:2014/07/23 > Number of sequences:6733817 Note the difference in the number of sequences. However the 'refseq_genomic' is much larger when you look at the number of bases: 435,293,002,525 vs. 62,649,172,490. This is due to 'refseq_genomic' including assembled contigs, and whole chromosome assemblies, which are excluded from 'nt'. [1]: http://www.ncbi.nlm.nih.gov/books/NBK21091/table/ch18.T.refseq_accession_numbers_and_mole/ [2]: http://blast.ncbi.nlm.nih.gov/
biostars
{"uid": 105529, "view_count": 6821, "vote_count": 5}
This might be an XY question, so I'll explain my premise: 1. I have 3 VCF files, `f1`, `f2` and `f3`. 2. `f1` is an annotated VCF covering 50 samples 3. `f2` is an annotated VCF covering 5 samples, but only sites that are not in `f1` 4. `f3` is an un-annotated VCF covering 5 samples across sites in `f1` as well as not in `f1` 5. All annotations are site-level I now wish to get this as one VCF files with all sites annotated and all sample-level information present. When I merge `f1` and `f2`, I get a VCF with all annotated sites and all samples, but for those sites overlapping with `f3`, the `GT/AD/...` fields are empty, because that information is in `f3`. How do I merge these three datasets? ####Question: In essence, can I do an operation to update genotype fields in one VCF file based on a sample+site match in another VCF file? If they were 2 `data.frame`s, the operation would be something like `vcf1[site, sample] <- vcf2[site, sample]`. #### Current solution: <s>The way I see it, I might have to subset `f3` to `f1`-sites only, then `bcftools merge <f1> <f3_subset> ><f1_F3_subset>` - that way I do not add any site, only samples. Then I `bcftools concat <f1+f3_subset> <f2> > <final_vcf>`, so this time I add only sites, no samples. Any other solution will be appreciated.</s> That solution does not work as `bcftools concat` cannot work on VCFs with different samples in them.
Here's my current solution: 1. Subset all `f1`-sites present in `f3`: `bcftools isec -n=2 -w1 -c none -o f3_subset f3 f1` 2. Pull annotations from `f1` into `f3_CommonSites_subset`: `bcftools annotate -c INFO -a f1 -o f3_subset_anno f3_subset` 3. Concat the new annotated file with `f2` to get all site annotations for the 5 samples: `bcftools concat -o f3_plus_f2 f2 f3_subset_anno` 4. Merge f1 and this 5-sample file to get final VCF: `bcftools merge -m none -o final_vcf f1 f3_plus_f2` Just realized while I was writing this, I could just do `bcftools merge -Ou -m none f1 f3 | bcftools annotate -c FORMAT -a f2 -o final_vcf -`, so that way I would pick up the FORMAT fields exactly as I intended in the first place. If anyone has a better solution, please add it in! Thank you!
biostars
{"uid": 361586, "view_count": 1181, "vote_count": 1}
Hello, I am trying to access information on genomic 3'UTR start and end positions using the Ensembl Biomart tool. However, there appear to be some transcripts which are labelled with the same transcript ID but nonetheless have different 3' UTR annotations, e.g.:

```
Transcript ID 3' UTR start 3' UTR end
ENST00000474604 32793169 32793300
ENST00000474604 32792445 32792726
ENST00000474604 32791848 32791958
ENST00000474604 32791565 32791596
ENST00000474604 32790888 32791376
```

I was just wondering if there was any way to get Biomart (or any other Ensembl tool) to print out the stable ID version increment (e.g. ENST00000474604.1, ENST00000474604.2 etc.)? Otherwise, using this data is going to be a bit of a hassle.

Also, I have downloaded the full cDNA set for Homo sapiens (GRCh38) for processing, and this includes the version increment - ideally, I would like to easily map the data from the cDNA fasta file to the genomic co-ordinates obtained from BioMart.

I have combed through Biomart looking for an option to include the transcript version number, but I can't seem to find anything. I am confused as to why this information would be omitted from the output.

Thanks

EDIT: My question arose from an interpretation of Biomart output which was based on a misconception. See Sean Davis's post for more details
I think the understanding of your results is perhaps not quite right (or I am misunderstanding your question). The five regions that you give are not for different versions of the transcript. They signify the fact that this particular transcript has five 3'-UTR exons. If you want the UTR start and end, you can take the minimum and maximum of the two columns; which is which will depend on the strand of the transcript.
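If it helps, the min/max step is a one-liner once the BioMart export is in R - a minimal sketch, assuming the output was read into a data frame `utr` with columns `transcript_id`, `utr_start` and `utr_end` (placeholder names for whatever BioMart produced):

    library(dplyr)

    # collapse the per-exon 3'-UTR rows into one genomic interval per transcript
    utr_bounds <- utr %>%
      group_by(transcript_id) %>%
      summarise(utr_start = min(utr_start),   # leftmost genomic coordinate
                utr_end   = max(utr_end))     # rightmost genomic coordinate

As noted, which of these is the biological start versus end depends on the strand.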
biostars
{"uid": 174832, "view_count": 2956, "vote_count": 2}
Hi, I'm looking for any technical information regarding how subread-align distinguishes properly paired from not-properly paired alignments for paired-end data (example summary below). I've looked through the subread/Rsubread documents and not found it (although I could have missed it). Further, if anyone has experience aligning reads to a congener species assembly with subread and suggestions for parameter optimization, input would be greatly appreciated. Thanks! //================================ Summary =================================\\ || || || Total fragments : 12,563,751 || || Mapped : 8,918,955 (71.0%) || || Uniquely mapped : 8,918,955 || || Multi-mapping : 0 || || || || Unmapped : 3,644,796 || || || || Properly paired : 5,290,912 || || Not properly paired : 3,628,043 || || Singleton : 2,281,674 || || Chimeric : 80,494 || || Unexpected strandness : 25,231 || || Unexpected fragment length : 1,225,432 || || Unexpected read order : 15,212 || || || || Indels : 145,958 || || || || Running time : 7.7 minutes || || || \\============================================================================//
See arguments `minFragLength`, `maxFragLength` and `PE_orientation`. I think "Properly paired" means that both ends map to the same chromosome with the expected orientation, and that the fragment they define is within the expected length range.
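For example, in Rsubread those thresholds are set directly in the `align()` call - a minimal sketch, assuming paired-end FASTQs and a pre-built index (file names are placeholders):

    library(Rsubread)

    align(index          = "my_index",
          readfile1      = "sample_R1.fastq.gz",
          readfile2      = "sample_R2.fastq.gz",
          output_file    = "sample.bam",
          minFragLength  = 50,    # shorter fragments count as "unexpected fragment length"
          maxFragLength  = 600,   # longer fragments likewise
          PE_orientation = "fr")  # expected mate orientation for a proper pair

Given the ~1.2M "unexpected fragment length" fragments in your summary, widening `minFragLength`/`maxFragLength` may be worth trying when aligning to a congener assembly.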
biostars
{"uid": 9476835, "view_count": 642, "vote_count": 2}
I have a file from Chip-Seq data. I want to use this information to look for the GC content of promoters. Which software I can use? Thanks Ankur
The simplest would be to use nuc option in bedtools. bedtools nuc -fi $ref-fasta-file -bed $promoterbedfile > $outputfile Similar questions have been answered before - - https://www.biostars.org/p/47047/ - https://www.biostars.org/p/70167/
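If you only want the GC fraction itself, note that `bedtools nuc` appends its statistics after your input columns - with a plain 3-column BED, the %AT and %GC should land in columns 4 and 5, and a commented header line is printed too. A small sketch under that assumption:

    # keep the interval plus its GC fraction, dropping the '#' header line
    bedtools nuc -fi genome.fa -bed promoters.bed | grep -v '^#' | cut -f1-3,5 > promoter_gc.txt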
biostars
{"uid": 101397, "view_count": 2940, "vote_count": 1}
I am wondering what the small pieces of black line between the coverage track and the reads mean? Thanks if anybody knows

![enter image description here][1]

[1]: /media/images/d3699f66-d82d-49f3-b90f-6475db16
Hi. They mark downsampled regions. Found it on --> https://software.broadinstitute.org/software/igv/AlignmentData

> Downsampled reads areas are marked with a black rectangle just under the coverage track. The coverage track represents coverage for all the reads.

If you want to get rid of them, go to Preferences and turn off downsampling.
biostars
{"uid": 9498138, "view_count": 1426, "vote_count": 2}
Hi, I have been trying to find the difference between the above two online for a while now, but I haven't got a satisfactory answer. I also didn't find a similar question on Biostars, so I thought of formally asking it now. Tximport (and maybe other tools too) gives a couple of outputs for each gene, and two of them are - **abundance** and **counts**. What is the difference between them? [This paper][1] gives a general idea that count based methods assign reads to genes directly, whereas abundance based methods assign abundance of each transcript with a probabilistic model that makes use of info such as fragment length distribution etc. So, having said that, is this really the difference between the abundance and count values that I get for any gene from Tximport (or any tool in general)? And, in which situation is one of them a more meaningful/desirable quantity? [1]: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004393
A count is simply that, a count of reads on some feature. An abundance is a more biologically meaningful (though not necessarily statistically useful) quantification of expression of a gene or transcript that is normalized in some way. Most commonly in this is TPM or some variant of that, but it could also be "copies per cell", which would be an abundance metric you could get from rt-qPCR. In other words, normalized counts aren't an abundance estimate since reads aren't a thing present in the cell, but an artifact of how we perform library prep and sequencing. The exception to this would be if you use a minion or equivalent to sequence full-length transcripts, since then a normalized count would estimate the abundance (on some likely relative scale) of a transcript in a cell or tissue.
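To make the distinction concrete, TPM (the most common abundance measure) is just counts rescaled by feature length and library size. A toy R sketch of the arithmetic only - this is not what tximport or salmon do internally, since they also handle multi-mapping reads and effective lengths probabilistically:

    # counts:  numeric vector of read counts per transcript
    # lengths: effective transcript lengths in bases
    counts_to_tpm <- function(counts, lengths) {
      rate <- counts / lengths      # reads per base of transcript
      rate / sum(rate) * 1e6        # rescale so each sample sums to one million
    }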
biostars
{"uid": 404211, "view_count": 5597, "vote_count": 8}
Dear community, I have trees (<3000) in newick format with four species, like this example:

((Spec4:0.529207,(Spec3:0.0803395,Spec2:0.0124315)),Spec1:0,Spec1:0);

I am only interested in detecting the trees in which two particular species cluster together, like Spec3 and Spec2 in the example. Is it possible to do that with a simple script, or does anybody know software for it (I already tried phybin and ete3 compare)? I would be grateful if someone could help.
I think this might work, but it's a sort of 'brute force' way to do it. I would maybe re-factor your trees to cladograms and remove the branch lengths via a regex for the branch length and colon (in whatever your favourite regex language is), then you could simply `grep` or string search in some other manner for `(Spec3,Spec2)` and you'll find all trees which contain that grouping pretty easily.

e.g.:

Remove decimals, sole zeros and colons from the file (probably not the most elegant regex):

Given your tree:

    ((Spec4:0.529207,(Spec3:0.0803395,Spec2:0.0124315)),Spec1:0,Spec1:0);

One could do:

    cat test.tree | sed -e 's/[0-9]*\.[0-9]*//g' -e 's/0//g' -e 's/://g'

Yielding:

    ((Spec4,(Spec3,Spec2)),Spec1,Spec1);

Then you can string search your yielded trees:

    egrep -r -l "Spec(2|3),Spec(2|3)" .

Will give you all the filenames where Species 3 and Species 2 are adjacent nodes (in either orientation).

If you want to keep branch length in your trees as you're not just interested in topology, you could concoct a regex for use with `grep`:

    egrep "Spec(2|3):(0?|[0-9]+\.[0-9]+),Spec(2|3):(0?|[0-9]+\.[0-9]+)" treefile.tree

But having to conjure that regex for every possible combination of topologies looks awful to me, so I'd be inclined to try it without the branch lengths. I don't know how many topologies you're interested in finding in all your trees - this approach may not be feasible if it's a prohibitively large number.

-----

Slightly more complex, if you'd like to see the match, and the file name, this is an option:

2 example sed-treated trees:

    ((Spec4,(Spec5,Spec6)),Spec2,Spec3);
    ((Spec4,(Spec3,Spec2)),Spec1,Spec1);

Passing a 'dummy filename' in the form of `/dev/null` tricks grep into printing the filename (as it thinks it's working on multiple files) and the actual match itself by default:

    for file in *.tree ; do egrep "Spec(2|3),Spec(2|3)" "$file" /dev/null ; done

Would yield:

    sed2.tree:((Spec4,(Spec5,Spec6)),Spec2,Spec3);
    sed.tree:((Spec4,(Spec3,Spec2)),Spec1,Spec1);

With the appropriate string matches highlighted (if your terminal is configured for it).
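Since you mention having ete3 already, the same check can also be done directly in Python - a sketch, assuming your trees parse with ete3's default newick reader (`check_monophyly` returns a tuple whose first element is True/False):

    from ete3 import Tree

    t = Tree("((Spec4:0.529207,(Spec3:0.0803395,Spec2:0.0124315)),Spec1:0,Spec1:0);")
    is_mono, clade_type, _ = t.check_monophyly(values=["Spec2", "Spec3"],
                                               target_attr="name")
    print(is_mono)  # True when Spec2 and Spec3 form a clade of their own

Looping that over your <3000 files would give you the list without any regex juggling.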
biostars
{"uid": 241525, "view_count": 2490, "vote_count": 1}
I'm doing a GWAS using ~15 million variants and ~800 people. I am unfamiliar with Linux, so I have tried using PLINK's MDS and PCA functions to obtain principal components to be used as covariates in the association analysis, to control for population stratification. When I plotted the p-values (QQ plot) obtained from the association analysis, the distribution was pretty messy, suggesting that I did not adequately control for population stratification. I took the following steps:

1. Pruned based on LD using PLINK --indep

2. Created a genome file:

./plink --bfile file --genome --extract plink.prune.in

3. Used --pca to generate an eigenvec file containing PCs

./plink --bfile gendep_merged --cluster --pca header --extract plink.prune.in --read-genome plink.genome

4. Performed the association analysis using 10 PCs from the eigenvec file as covariates:

./plink --bfile file --pheno phenotype.txt --allow-no-sex --covar plink.eigenvec --covar-name PC1,PC2,PC3,PC4,PC5,PC6,PC7,PC8,PC9,PC10 --out association --linear --adjust

Am I missing a step, or should any of the flags used be modified, in order to produce PCs that will adequately control for population stratification in this sample? Any input would be greatly appreciated.
How exactly is using the first ten principal components controlling for "population stratification"? If I understand correctly, you're performing an association test, and telling the model fit to smooth out the ten biggest drivers of variance in your dataset? When you checked the principal components, did they indicate that the first ten explained the difference in population? Could you be smoothing out the effect you're testing for instead?
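A quick way to answer that for yourself is to plot the components - a sketch in R, assuming the standard `--pca header` output (columns FID, IID, PC1, PC2, ...):

    pcs <- read.table("plink.eigenvec", header = TRUE)

    # do the first components separate ancestry clusters,
    # or do they track your phenotype instead?
    plot(pcs$PC1, pcs$PC2, xlab = "PC1", ylab = "PC2")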
biostars
{"uid": 182040, "view_count": 6391, "vote_count": 1}
I have a batch of 'noteworthy positions' (50,000+) and I'm trying to determine the best way to batch annotate them with my annotated genome (GTF format or anything which it can be converted to). Does anyone know a package that already does this? I have considered writing a basic script but perhaps such a thing exists already. My intention is to provide the coordinates and return columns such as: ``` Nearest Is within Intron/exon Distance positional annotation gene gene/UTR/ from TSS (if exists) Promotor? ``` Thanks
To get the nearest gene, you could use BEDOPS [*closest-features*](http://bedops.readthedocs.org/en/latest/content/reference/set-operations/closest-features.html):

    $ closest-features --closest positions.bed genes.bed > answer.bed

To answer if within gene/UTR/promoter/intron/exon, you could use BEDOPS [*bedmap --indicator*](http://bedops.readthedocs.org/en/latest/content/reference/statistics/bedmap.html#indicator) (or similar options, depending on what level of detail you need):

```
$ bedmap --echo --indicator positions.bed genes.bed > answer_genes.bed
$ bedmap --echo --indicator positions.bed UTR.bed > answer_UTR.bed
$ bedmap --echo --indicator positions.bed promoter.bed > answer_promoter.bed
```

etc.

To get the distance from the nearest TSS, you could use BEDOPS *closest-features* again:

    $ closest-features --closest --dist positions.bed TSS.bed > answer.bed

To get annotations of any overlapping elements, you could use BEDOPS *bedmap --echo-map*:

    $ bedmap --echo --echo-map positions.bed annotations.bed > answer.bed

If your annotations are in a non-BED format, you could use BEDOPS [*convert2bed*](http://bedops.readthedocs.org/en/latest/content/reference/file-management/conversion/convert2bed.html) to turn them into BED:

```
$ gtf2bed < annotations.gtf > annotations.bed
$ gff2bed < annotations.gff > annotations.bed
$ vcf2bed < annotations.vcf > annotations.bed
```

etc.

Because BEDOPS tools use Unix I/O streams, you can convert inside a larger BEDOPS operation pipeline:

    $ gff2bed < annotations.gff | closest-features --closest --dist positions.bed - > answer.bed

Or you can use process substitution, if you use `bash`:

    $ closest-features --closest --dist positions.bed <(gff2bed < annotations.gff) > answer.bed

To turn BED-formatted annotations into BED-formatted lists of UTRs, introns, exons and promoters, etc. is easy, and a cursory search of Biostars or seqanswers can help locate small scripts in *awk* or Perl etc. that can do this for various annotation subcategories. For instance, to turn a BED-formatted list of stranded gene annotations into 1k proximal promoter regions:

```
$ awk '{ \
     if ($6=="+") { \
       print $1"\t"($2 - 1000)"\t"$2"\t"$4"\t"$5"\t"$6; \
     } \
     else { \
       print $1"\t"$3"\t"($3 + 1000)"\t"$4"\t"$5"\t"$6; \
     } \
   }' genes.bed \
   > promoters.bed
```

Then it is a matter of using `promoters.bed` in downstream BEDOPS operations.
biostars
{"uid": 155133, "view_count": 4023, "vote_count": 1}
Hi everybody! I submitted a list of genes to the DAVID Functional Annotation Chart tool. I decided to use an EASE (p-value) cutoff of 0.05 (from the DAVID publication in Nature Protocols in 2009) and selected the Benjamini correction. Now I have problems interpreting the results. Is the Benjamini value the (i/m)Q value? So if it is bigger than my p-value, is my term significantly enriched? Or should I interpret the Benjamini value as a new, already-corrected p-value, significant if it is <0.05? (It looks strange for a multiple comparison test.) Thank you
The Benjamini correction is your false discovery rate, i.e. your adjusted p-value. So you should forget about your raw p-value after correction. Your test is significant if your adjusted p-value is smaller than your chosen threshold (such as 0.05 or 0.01). If you want to know more about multiple testing, you can check [here][1]

[1]: https://en.wikipedia.org/wiki/Multiple_comparisons_problem
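If you want to see the correction in action, the Benjamini-Hochberg procedure is one line in R (on made-up p-values here, just to illustrate):

    pvals <- c(1e-6, 0.001, 0.02, 0.04, 0.2)
    p.adjust(pvals, method = "BH")   # terms with adjusted value < 0.05 stay significant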
biostars
{"uid": 293613, "view_count": 11062, "vote_count": 1}
Hi All, I just started working with methylation (WGBS) data. I have used Bismark and it reported methylation for Cs and for the consecutive Gs. I am guessing the G methylation comes from the Cs on the other strand (please correct me if I am wrong). My question is: should I consider both when reporting the methylation of a region, or should I filter for only the C rows and perform the further analysis on those? Thanks.
As you surmised, the "G" is actually the C on the - strand. The methylation of a region should include both strands, unless you know that whatever you're studying is only affected by single-stranded methylation. Note, however that the Cs in a CpG (the C and subsequent G in bismark's files) typically show roughly symmetric/equal methylation ratios, so a common strategy is to simply combine them into CpG-level methylation levels. Bismark likely has a method for doing that (if not, you can do it with MethylDackel).
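For example, MethylDackel can do that merging for you - a sketch, assuming a coordinate-sorted, indexed BAM and the same reference FASTA used for alignment:

    # one methylation call per CpG, with + and - strand counts combined
    MethylDackel extract --mergeContext reference.fa sample.sorted.bam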
biostars
{"uid": 395837, "view_count": 888, "vote_count": 3}
Example input: multi-sample VCF (adapted from [www.internationalgenome.org][1]): **Note:** my actual file is bgzipped and tabixed with ~2Mln variants (rows) and ~1000 samples (columns). ##fileformat=VCFv4.0 ##fileDate=20090805 ##source=myImputationProgramV3.1 ##reference=1000GenomesPilot-NCBI36 ##phasing=partial ##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of Samples With Data"> ##INFO=<ID=DP,Number=1,Type=Integer,Description="Total Depth"> ##INFO=<ID=AF,Number=.,Type=Float,Description="Allele Frequency"> ##INFO=<ID=AA,Number=1,Type=String,Description="Ancestral Allele"> ##INFO=<ID=DB,Number=0,Type=Flag,Description="dbSNP membership, build 129"> ##INFO=<ID=H2,Number=0,Type=Flag,Description="HapMap2 membership"> ##FILTER=<ID=q10,Description="Quality below 10"> ##FILTER=<ID=s50,Description="Less than 50% of samples have data"> ##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype"> ##FORMAT=<ID=GQ,Number=1,Type=Integer,Description="Genotype Quality"> ##FORMAT=<ID=DP,Number=1,Type=Integer,Description="Read Depth"> ##FORMAT=<ID=HQ,Number=2,Type=Integer,Description="Haplotype Quality"> #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT NA00001 NA00002 NA00003 20 14370 rs6054257 G A 29 PASS NS=3;DP=14;AF=0.5;DB;H2 GT:GQ:DP:HQ 0/0:48:1:51,51 1/0:48:8:51,51 1/1:43:5:.,. 20 17330 . T A 3 q10 NS=3;DP=11;AF=0.017 GT:GQ:DP:HQ 0/0:49:3:58,50 0/1:3:5:65,3 0/0:41:3 20 1110696 rs6040355 A G,T 67 PASS NS=2;DP=10;AF=0.333,0.667;AA=T;DB GT:GQ:DP:HQ 1/2:21:6:23,27 2/1:2:0:18,2 2/2:35:4 21 1230237 . T . 47 PASS NS=3;DP=13;AA=T GT:GQ:DP:HQ 0/0:54:7:56,60 0/0:48:4:51,51 0/0:61:2 21 1234567 microsat1 GTCT G,GTACT 50 PASS NS=3;DP=9;AA=G GT:GQ:DP 0/1:35:4 0/2:17:2 1/1:40:3 Expected output: **20.txt** NA00001 NA00002 NA00003 0/0 1/0 1/1 0/0 0/1 0/0 1/2 1/2 2/2 **21.txt** NA00001 NA00002 NA00003 0/0 0/0 0/0 0/1 0/2 1/1 I was thinking of using `cut | sed` combo with some `regex`, but thought there must be already some tool out there, maybe *[bcftools][2]* (couldn't get the right flags to work) ? Any other ideas? [1]: http://www.internationalgenome.org/wiki/Analysis/Variant%20Call%20Format/vcf-variant-call-format-version-40/ [2]: http://www.htslib.org/doc/bcftools.html
From your expected output, it is evident that you wish to not split multi-allelic records. I strongly recommend splitting them, as tools become more predictable with normalized variants.

You can use

    bcftools query -r <chr> -H -f '[ %GT]\n' in.vcf.gz # to get genotypes

to get this output, but beware, the header will be a bit wonky, in the format `[<col_index>]<sample_name>:GT`. You can `sed` replace the `\[[0-9]+\]([^:]+):GT` with `\1` to clean up the header and get to your desired output.

    for chr in 20 21
    do
        bcftools query -r ${chr} -H -f '[ %GT]\n' in.vcf.gz | sed -E '1s#\[[0-9]+\]([^:]+):GT#\1#g' >${chr}.txt #command untested, use with caution
    done
biostars
{"uid": 350404, "view_count": 2539, "vote_count": 1}
Hi, I used the GATK germline variant calling pipeline to call short variants on paired-end fastq files. After getting the final analysis-ready VCF and applying some extra filters, I inspected the BAM files in IGV for the variants of interest and found some strange things in one sample. Two variants of interest in this sample can only be found on inversion reads.

In the first graph, the alternative allele G is only found in RR and LL reads (blue color) in IGV. 13 out of 15 inversion reads have this G allele.

![enter image description here][1]

In the second graph, similarly, the alternative allele T is only found in inversion reads. All the inversion reads have this T allele.

![enter image description here][2]

Further, I realized that all the inversion reads have the same size, as shown in figure 3.

![enter image description here][3]

I wonder if these inversions are true inversions or artifacts (given that they all have the same size), in which case the variants found only on these reads would also not be real.

[1]: /media/images/33ddd7ec-6365-4126-859f-b9b9b990
[2]: /media/images/fbff412d-756e-4f82-b0fc-72be07ff
[3]: /media/images/623bd4b4-df4e-4fad-a59e-fb8f8c92
Hi xukeren,

Not every LL or RR read shows a true inversion. I think you have to inspect your reads further in IGV if you want to understand what is going on. It could be ambiguous mapping due to the short length of the reads. What is the mapping quality of the reads? You should see it if you click on a read.

You should activate seeing mismatched bases in case you have soft-clipped bases: View->Preferences->Alignments->Show mismatched bases. Your short reads might then get longer tails of colourful sequence which was not used in mapping at this position, but might have been used for a supplementary alignment at another position. (You might have to reload your BAM into IGV for this to take effect.)

You need to find out where the corresponding mates of your RR/LL reads are: click on a specific read and look at the information in the section about "Mate". To easily visualize it, you can try turning on "View as pairs" in the right-click menu of the BAM file. If the corresponding reads are close enough they get connected by slim lines. The other way is "Go to mate" after right-clicking on a specific read (only available if "View as pairs" is turned off). IGV will jump to the mapping position of the corresponding read and highlight both in a unique color.
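If you want to quantify it outside of IGV, same-strand pairs can be counted with SAM flags - a sketch (flag 1 = paired, 16 = read reverse, 32 = mate reverse; the region syntax assumes an indexed BAM):

    # pairs where both mates map to the reverse strand (RR)
    samtools view -c -f 49 sample.bam chr1:1000000-1001000
    # pairs where neither mate is reversed (FF/LL)
    samtools view -c -f 1 -F 48 sample.bam chr1:1000000-1001000

Adding `-q 30` to either command would tell you how many of those reads survive a mapping-quality cutoff.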
biostars
{"uid": 9492459, "view_count": 1354, "vote_count": 1}
Hi, I'm trying to pipe the following commands for ChIP-Seq analysis:

${bowtie2_source} -x ${ref_genome} -U ${fastq_file} -S - | ${samtools} view -Shub - | ${samtools} sort - | ${samtools} rmdup -S - - ${target_dir}/${sample_name}.bam

Up to the sorting step it works fine, but at the end no BAM file is created and it gives me the message:

[bam_rmdup] input SAM does not have header. Abort!

What is the problem, and how should I fix it? Thanks!
In the last step you seem to specify two `-` flags. I think that might attempt to open the stream twice, and since the second time it is empty, there is no header information.

    ${samtools} rmdup -S - - ${target_dir}/${sample_name}.bam

In general, in such cases decompose the command into individual steps, each operating on a differently named file, then investigate the resulting file after each step. The samtools error says that the SAM file does not have a header - check the file yourself - does it have a header?

BTW the `samtools view` step is not needed anymore; all samtools actions will auto-detect and work seamlessly on SAM or BAM files. I also think that samtools now detects an input stream, so specifying the `-` should not be needed in, say, sorting. The commands chain together much more elegantly now.
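For what it's worth, a cleaned-up version of the pipe might look like this - a sketch only (in recent samtools, `sort` writes to stdout when no `-o` is given, and `rmdup` takes exactly one input and one output; note that rmdup is deprecated in favour of `markdup` in newer releases):

    ${bowtie2_source} -x ${ref_genome} -U ${fastq_file} -S - \
      | ${samtools} sort -O bam - \
      | ${samtools} rmdup -S - ${target_dir}/${sample_name}.bam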
biostars
{"uid": 306989, "view_count": 2698, "vote_count": 1}
Hi friends, I need your help please. I tried to adopt a tutorial in **human** (GSE27447) to my case in **citrus sinensis**, I searched a lot in google but I could not find fit equal to some parts and when I did these steps by removing parts I had doubts about, the result was a rma.txt which could not be read in some case (with ????) I do not know what I should type instead of... my query is **GSE67376 (citrus sinensis)** ```r source("http://bioconductor.org/biocLite.R") biocLite("GEOquery") biocLite("affy") biocLite("gcrma") biocLite("hugene10stv1cdf") # changed to biocLite("citruscdf") biocLite("hugene10stv1probe") # changed to biocLite("citrusprobe") biocLite("hugene10stprobeset.db") ???????????? biocLite("hugene10sttranscriptcluster.db") ??????? library(GEOquery) library(affy) library(gcrma) library(hugene10stv1cdf) # changed to library("citruscdf") library(hugene10stv1probe) # changed to library("citrusprobe") library(hugene10stprobeset.db) ????????? library(hugene10sttranscriptcluster.db) ????? setwd("C:/Users/Man/Desktop/New folder (3)") getGEOSuppFiles("GSE27447") setwd("C:/Users/Man/Desktop/New folder (3)/GSE27447") untar("GSE27447_RAW.tar", exdir="data") cels = list.files("data/", pattern = "CEL") sapply(paste("data", cels, sep="/"), gunzip) cels = list.files("data/", pattern = "CEL") setwd("C:/Users/Man/Desktop/New folder (3)/data") raw.data=ReadAffy(verbose=TRUE, filenames=cels, cdfname="hugene10stv1") ?????? data.rma.norm=rma(raw.data) rma=exprs(data.rma.norm) rma=format(rma, digits=5) ls("package:hugene10stprobeset.db") #Annotations at the exon probeset level ????????? ls("package:hugene10sttranscriptcluster.db") ?????????????? probes=row.names(rma) Symbols = unlist(mget(probes, hugene10sttranscriptclusterSYMBOL, ifnotfound=NA)) ??????????? Entrez_IDs = unlist(mget(probes, hugene10sttranscriptclusterENTREZID, ifnotfound=NA)) ???????? rma=cbind(probes,Symbols,Entrez_IDs,rma) write.table(rma, file = "rma.txt", quote = FALSE, sep = "\t", row.names = FALSE, col.names = TRUE) ```
```r
biocLite("hugene10stprobeset.db") ????????????
biocLite("hugene10sttranscriptcluster.db") ???????
library(hugene10stprobeset.db) ?????????
library(hugene10sttranscriptcluster.db) ?????
```

These lines do nothing for you. I don't think that there is a package called citrus.db.

```r
raw.data=ReadAffy(verbose=TRUE, filenames=cels, cdfname="hugene10stv1") ??????
raw.data=ReadAffy(verbose=TRUE, filenames=cels, cdfname="citruscdf")
```

For the ReadAffy call, just swap the cdf name to "citruscdf", as in the second line.

```r
ls("package:hugene10stprobeset.db") #Annotations at the exon probeset level?????????
ls("package:hugene10sttranscriptcluster.db") ??????????????
```

These also do nothing for the analysis; they are just for viewing the contents/functions of the package.

```r
Symbols = unlist(mget(probes, hugene10sttranscriptclusterSYMBOL, ifnotfound=NA)) ???????????
```

This is to get the gene symbols. I don't know how to get them for citrus.

```r
Entrez_IDs = unlist(mget(probes, hugene10sttranscriptclusterENTREZID, ifnotfound=NA)) ????????
```

This is to get the Entrez IDs. I don't know how to get them for citrus.
biostars
{"uid": 148741, "view_count": 2208, "vote_count": 1}
Do you have any recommendations for learning Enrichr more easily?
Use it through [gget enrichr][1] for super easy use! [1]: https://github.com/pachterlab/gget
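A minimal sketch of what that looks like in Python (the gene list and the "pathway" database shortcut here are just examples - check the gget docs for the currently supported libraries):

    import gget

    # enrich a list of gene symbols against an Enrichr library
    results = gget.enrichr(["PHF14", "RBM3", "MSL1"], database="pathway")
    print(results.head())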
biostars
{"uid": 9503275, "view_count": 710, "vote_count": 1}
Hi, I've done a model design and I hope someone can help out with my understanding of it! I have an experimental setup that looks something like this: **3 Time points (0hrs, 6hrs, 12hrs)** **3 Different Conditions (Treatments A, B and C)** So that makes **9 different combinations** of time points and treatments, each is in triplicate. There are therefore **27 Samples**. My design formula is : `~ Treatment + TimePoint + Treatment:TimePoint` My current understanding is this will give me small p-values of Treatment-specific effects over time? I wanted to further refine this and look at a specific treatment and how it differs between two time points. So I used the following line: ``` foo <- list(c("TreatmentA.TimePoint12hrs"), c("TreatmentA.TimePoint0hrs")) resMFType <- results(dds, contrast=foo) ``` Is this correct? Thanks
> "My current understanding is this will give me small pValues of Treatment-specific effects over time?"

I wouldn't say that, rather you're fitting with a model that can measure treatment-specific effects over time (i.e., time:treatment interactions as well as time-specific and treatment-specific effects). Whether the resulting p-values are small or not depends on whether there are any significant effects in the dataset at hand (yes, I'm being rather nit-picky here :) ).

BTW, you can shorten your design to `~Treatment*TimePoint`.

The contrast you mentioned looks correct for looking at changes between 12 and 0hrs within treatmentA.

Anyway, you seem to have the correct design and know how to form the contrasts, so you should be good to go!
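Put together as code, the whole thing is only a few lines - a sketch, assuming `dds` holds your 27 samples (the exact coefficient names to use in `contrast` come from `resultsNames(dds)`, so check those first):

    library(DESeq2)

    design(dds) <- ~ Treatment * TimePoint   # same as Treatment + TimePoint + Treatment:TimePoint
    dds <- DESeq(dds)
    resultsNames(dds)                        # lists the available coefficients

    # 12hrs vs 0hrs within Treatment A
    res <- results(dds, contrast = list("TreatmentA.TimePoint12hrs",
                                        "TreatmentA.TimePoint0hrs"))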
biostars
{"uid": 121172, "view_count": 5735, "vote_count": 3}
Hi, I want to calculate RPKM values. I ran DESeq and I got the results. My next step is to obtain RPKM values for each gene. My pipeline is Bowtie > TopHat > HTSeq > DESeq. Please guide me along the right path.
You can compute rpkm from a `DESeqDataSet` in the way indicated in @igor's answer. However, RPKMs should only be used for downstream analysis and not for testing differential expression. This is explicitly mentioned in the documentation of [DEseq2][1]. >In order to test for differential expression, we operate on raw counts and use discrete distributions as described in the previous Section 1.4. However for other downstream analyses – e.g. for visualization or clustering – it might be useful to work with transformed versions of the count data. Same is said in the documentation for the older package [DEseq][2]: >The count values must be raw counts of sequencing reads. This is important for DESeq’s statistical model to hold, as only the actual counts allow assessing the measurement precision correctly. Hence, please do do not supply other quantities, such as (rounded) normalized counts, or counts of covered base pairs – this will only lead to nonsensical results. Other relevant discussion: [Answer by Gordon Smyth about the use of RPKMs with voom, edgeR and DEseq2][3]. [1]: http://www.bioconductor.org/packages/release/bioc/html/DESeq2.html [2]: http://www.bioconductor.org/packages/release/bioc/html/DESeq.html [3]: https://support.bioconductor.org/p/56275/#56299
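For reference, the DESeq2 helper referred to above is `fpkm()` - a sketch, noting that it needs feature lengths, which DESeq2 reads either from `rowRanges(dds)` or from a `basepairs` column you supply yourself (`gene_lengths` below is a placeholder vector):

    library(DESeq2)

    mcols(dds)$basepairs <- gene_lengths      # e.g. summed exon lengths per gene
    fpkm_values <- fpkm(dds, robust = TRUE)   # robust = TRUE uses the size factors

But again - keep such values for visualization and reporting only, not for the differential expression test itself.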
biostars
{"uid": 240686, "view_count": 9594, "vote_count": 1}
Hi, I have a previously created Boxplots using normalized log2 data using `ggplot2` library in R. Now, I am trying to create the Boxplot using fold change values (example data below) in ggplot2. Could you please assist me how to do the same or provide any useful resources. dput(head(Data)) structure(list(Genes = c("Gene A", "Gene B", "Gene C", "Gene D", "Gene E", "Gene F"), Healthy_Control_p_val = c(1L, 1L, 1L, 1L, 1L, 1L), Healthy_Control_FC = c(1L, 1L, 1L, 1L, 1L, 1L), Condition_1_p_val = c(0.002651515, 2.48e-05, 0.73640094, 0.009591015, 0.501723078, 0.226255478), Condition_1_FC = c(-1.114126532, 1.228203535, -1.054172268, -1.255231765, 1.028948329, -1.086890065), Condition_2_p_val = c(0.043930415, 0.163740086, 0.798708578, 0.575282124, 0.292885948, 0.008037055 ), Condition_2_Relapse_FC = c(-1.133444608, 1.163213331, -1.195016491, -1.070544661, 1.051757975, -1.190927361), Condition_3_p_val = c(0.472872641, 0.62361238, 0.31757426, 0.481332218, 0.169341523, 0.781283534 ), Condition_3_FC = c(-1.04283358, 1.04223539, 1.01103926, -1.112300841, 1.074033832, -1.041999594)), row.names = c(NA, 6L), class = "data.frame") Thank you, Toufiq
You first need to Convert your data from wide to long format. library("tidyverse") df <- df %>% select(!ends_with("p_val")) %>% pivot_longer(ends_with("FC"), names_to="condition", values_to="fc") > head(df, 5) # A tibble: 5 x 3 Genes condition fc <chr> <chr> <dbl> 1 Gene A Healthy_Control_FC 1 2 Gene A Condition_1_FC -1.11 3 Gene A Condition_2_Relapse_FC -1.13 4 Gene A Condition_3_FC -1.04 5 Gene B Healthy_Control_FC 1 Now that your data is in long format, you can make any sort of box plot you want. Here's an example. ggplot(df, aes(x=condition, y=fc)) + geom_boxplot() + theme(axis.text.x=element_text(angle=45, hjust=1)) Plot of example data. ![enter image description here][1] [1]: https://i.imgur.com/KYV7VRV.png
biostars
{"uid": 454267, "view_count": 3743, "vote_count": 1}
Dear all, I tried to open the example RNA-seq raw reads using a simple command like head, but I got the garbled output shown below

$ head ~/RNA-seq/Liu2015/SRR1272191_IDCLV_1.fastq.gz
??˲?Ʋ%8?_Qf2K1?)?/? gp+̪~??{?֣??p_ `?p ?K???&?[?΀???Z??????|;?????1??????_?Yu?fU???????խ:???????T??????????????{x????????[?X?t?\?????xV;?c???@?{<???N~????~?|?V?˩??Mo?&,X??v?q?w?k'?袰?Ύ?؍?8:[?ug??_k????뭶?7?V?q????Օ?YV???u?????^?Vz[?4???d?d?`?m????{?????Ew??vL??u?+֖?/?ʶ?VeY4?¦ٖO?x???*l?'?????q???C?=? ????f5???'m? ? ???d??o??d5???9X????σv8?Kaӽ+????`???|>?tG?{{>??s?<$??mz?? } ??l???n?j??ț?\????+=??Z?eiSM??ö?-6v??????_??O?=lw???C?????#??lS??])??`?q?????x?a????l???n?????Z۫2o??}0??ME?Sx\?J/l?oYi??~????>؈ܠ??~??g钦?x??]?G??$w?o?????&{b?-Ά??T?eȮ?CW?NnV????T_?!???m????5?6?n??VU??t?w:<~?.??????Wz)r^i??Qta?=?M:?,_?w"?T?N;?=???ֶ?)L?z?Q?:???|?????e??????W?O????-v$.^?\?ˮ?ƕ(@?x?[???;S??? ?+᳊???ޕ¦??`??#?=qZ??o?'z??`???"?e?m????}i?2?K?{Y6??X??oL???G?~?a???m?Oq?x\?|R?Ů?q?S??,~???*펝?x????>znaӽ+??? ۙ?o???*Gp

I didn't find any problem when I used this read to run FastQC and TrimGalore. Could you please give me a suggestion - how should I solve this problem? Thank you so much, Best, Kamoltip
Dear all, thank you so much for the suggestions from everyone. I tried them all and would like to report the result.

`$ zcat test.fastq.gz | head` does not work in my case; it always shows the error `zcat: can't stat: test.fastq.gz (test.fastq.gz.Z): No such file or directory` (likely because the BSD zcat shipped with macOS assumes .Z-compressed input).

Anyway, `$ zless test.fastq.gz` and `$ zmore test.fastq.gz` work very well.

Thanks again for your kind help :)
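P.S. If anyone hits the same zcat error: reading from stdin sidesteps the .Z suffix handling (assuming that is indeed the cause), and `gzcat` should behave the same way on macOS:

    zcat < test.fastq.gz | head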
biostars
{"uid": 353313, "view_count": 4899, "vote_count": 1}
Hi, I need to add into my VCFs the AF INFO field, which is not output when variant calling with samtools/bcftools. I think this can be done with the bcftools annotate command and the AF tag is mentioned in the Expressions part of the bcftools man page. (http://www.htslib.org/doc/bcftools.html#expressions) But I can't determine how to apply this to my vcf. For example i've tried: bcftools annotate -a MyVCF.vcf.gz -c FORMAT,INFO,AF -o test.vcf -O v MyVCF.vcf.gz -h annots.hdr where `annots.hdr` is a text file with: ##INFO=<ID=AF,Number=1,Type=Integer,Descripion="Allele frequency"> The header is added in but the calculation on AC/AN is not conducted. Any suggestions? Thanks.
The solution is posted on GitHub: https://github.com/samtools/bcftools/issues/345 #1, get the latest htslib and BCFtools, and set-up the BCFtools plugins: git clone git://github.com/samtools/htslib.git git clone git://github.com/samtools/bcftools.git cd bcftools; make sudo make install Note the directory where the plugins were installed: `/usr/local/libexec/bcftools` export BCFTOOLS_PLUGINS=/usr/local/libexec/bcftools -------------------------------------- ------------------ NB - if you don't have root / super-user privileges, then you can avoid the `make install` command and set the **BCFTOOLS_PLUGINS** environmental variable to the '*plugins*' directory, which will be located under *bcftools/* where you downloaded the program. For example `export BCFTOOLS_PLUGINS=/Programs/bcftools/plugins/` --------------------------------- -------------------- #2, View the VCF without AF bcftools view test.vcf | tail -5 [W::bcf_hdr_check_sanity] PL should be declared as Number=G 5 135337248 . CT C . PASS END=135337249;HOMLEN=3;HOMSEQ=TTT;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2 GT:AD 0/1:205,118 5 135337259 . AG A . PASS END=135337260;HOMLEN=4;HOMSEQ=GGGG;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2 GT:AD 0/1:190,220 5 135337259 . A AG . PASS END=135337259;HOMLEN=5;HOMSEQ=GGGGG;SVLEN=1;SVTYPE=INS;AC=1;AN=2 GT:AD 0/1:192,71 5 135337264 . GA G . PASS END=135337265;HOMLEN=1;HOMSEQ=A;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2 GT:AD 0/1:184,83 5 135337274 . A ATTATTGCATCAACTCCTCCGACATCTCTTCCCCTGCAAGAGTTCAGGCCCACAGGTTCTGGTGTGGGCTTGCTCAGCTGGAGGTAGCCTGAGGTGAGCTGGAG .PASS END=135337274;HOMLEN=23;HOMSEQ=TTATTGCATCAACTCCTCCGACA;SVLEN=103;SVTYPE=INS;AC=1;AN=2 GT:AD 0/1:130,17 #3, Now add the AF bcftools +fill-tags test.vcf -- -t AF | tail -5 [W::bcf_hdr_check_sanity] PL should be declared as Number=G 5 135337248 . CT C . PASS END=135337249;HOMLEN=3;HOMSEQ=TTT;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2;AF=0.5 GT:AD 0/1:205,118 5 135337259 . AG A . PASS END=135337260;HOMLEN=4;HOMSEQ=GGGG;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2;AF=0.5 GT:AD 0/1:190,220 5 135337259 . A AG . PASS END=135337259;HOMLEN=5;HOMSEQ=GGGGG;SVLEN=1;SVTYPE=INS;AC=1;AN=2;AF=0.5 GT:AD 0/1:192,71 5 135337264 . GA G . PASS END=135337265;HOMLEN=1;HOMSEQ=A;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2;AF=0.5 GT:AD 0/1:184,83 5 135337274 . A ATTATTGCATCAACTCCTCCGACATCTCTTCCCCTGCAAGAGTTCAGGCCCACAGGTTCTGGTGTGGGCTTGCTCAGCTGGAGGTAGCCTGAGGTGAGCTGGAG .PASS END=135337274;HOMLEN=23;HOMSEQ=TTATTGCATCAACTCCTCCGACA;SVLEN=103;SVTYPE=INS;AC=1;AN=2;AF=0.5 GT:AD 0/1:130,17 #4, add all available tags in plugin bcftools +fill-tags test.vcf | tail -5 [W::bcf_hdr_check_sanity] PL should be declared as Number=G 5 135337248 . CT C . PASS END=135337249;HOMLEN=3;HOMSEQ=TTT;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2;NS=1;AF=0.5;MAF=0.5;AC_Het=1;AC_Hom=0;AC_Hemi=0;HWE=1;ExcHet=1 GT:AD 0/1:205,118 5 135337259 . AG A . PASS END=135337260;HOMLEN=4;HOMSEQ=GGGG;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2;NS=1;AF=0.5;MAF=0.5;AC_Het=1;AC_Hom=0;AC_Hemi=0;HWE=1;ExcHet=1 GT:AD 0/1:190,220 5 135337259 . A AG . PASS END=135337259;HOMLEN=5;HOMSEQ=GGGGG;SVLEN=1;SVTYPE=INS;AC=1;AN=2;NS=1;AF=0.5;MAF=0.5;AC_Het=1;AC_Hom=0;AC_Hemi=0;HWE=1;ExcHet=1 GT:AD 0/1:192,71 5 135337264 . GA G . PASS END=135337265;HOMLEN=1;HOMSEQ=A;SVLEN=-1;SVTYPE=DEL;AC=1;AN=2;NS=1;AF=0.5;MAF=0.5;AC_Het=1;AC_Hom=0;AC_Hemi=0;HWE=1;ExcHet=1 GT:AD 0/1:184,83 5 135337274 . 
A ATTATTGCATCAACTCCTCCGACATCTCTTCCCCTGCAAGAGTTCAGGCCCACAGGTTCTGGTGTGGGCTTGCTCAGCTGGAGGTAGCCTGAGGTGAGCTGGAG .PASS END=135337274;HOMLEN=23;HOMSEQ=TTATTGCATCAACTCCTCCGACA;SVLEN=103;SVTYPE=INS;AC=1;AN=2;NS=1;AF=0.5;MAF=0.5;AC_Het=1;AC_Hom=0;AC_Hemi=0;HWE=1;ExcHet=1 GT:AD 0/1:130,17 Kevin
biostars
{"uid": 180894, "view_count": 20398, "vote_count": 7}
Dear all, I am working on a method which combines the results of two procedures (limma, GSEA) to identify differentially expressed genes/pathways to obtain even better results using a simple machine learning approach. For this, I need some kind of gold standard to compare my results with. So my question is: Does somebody know a dataset for which the results (up or down regulated genes/pathways) are known/sufficiently proved (e.g data set used by a paper)? The data should be publicly available and microarray based. Best regards, Jan-Niklas
Hi Jan-Niklas, please have a look at the SEQC experiments ([see here][1]) and the ABRF study (see publication [here][2]). Both studies are based on the FDA-samples (A,B,C, & D) which are standardised mRNA-samples and have also been used in the MAQC studies for Microarray platforms. Both studies are publicly available at GEO. Cheers, Michael [1]: http://www.fda.gov/ScienceResearch/BioinformaticsTools/MicroarrayQualityControlProject/#MAQC-IIIalsoknownasSEQC [2]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4167418/
biostars
{"uid": 180809, "view_count": 2089, "vote_count": 1}
Hi Guys, I have noticed there are two folders on the NCBI ftp server which contain viral genomes:

ftp://ftp.ncbi.nlm.nih.gov/genomes/Viruses/

ftp://ftp.ncbi.nlm.nih.gov/genomes/refseq/viral/

I don't understand why NCBI puts viral genomes in two locations. I also found that the first folder contains fewer genomes than the second one; for example, Human herpesvirus 1 and 2 cannot be found in the first folder but can be found in the second. So, what's the difference between the viral genomes in the two folders? What's the purpose of putting them in two locations? And which should we choose if we need to build a BLAST database of all viral genomes?

Thanks,
Tao
I shot an email to NCBI, and I think their response solves my problem very nicely. Considering it may benefit someone else with similar confusion, I post it as follows:

Dear Colleague,

The following:

ftp://ftp.ncbi.nlm.nih.gov/genomes/Viruses/

is a legacy (the old genome site) directory that will eventually be retired (archived and no longer updated).

This one:

ftp://ftp.ncbi.nlm.nih.gov/genomes/refseq/viral/

is a new directory from NCBI FTP genome site reorganization. Here is the information about the new site:

To access current and actively updated genome assembly data, use the following three directories on the NCBI Genomes FTP site: genbank, refseq, and all.

genbank is a directory of primary genome assembly data and contains assembled genome sequences and associated annotations (if available) that sequencing centers or individual investigators submitted to GenBank or to another member of the International Nucleotide Sequence Database Collaboration (INSDC). You should use this directory if you are interested in obtaining all submitted genome assemblies and your main focus is not accessing genome annotation. The directory is organized by taxonomic groups and you will be able to browse it directly.

refseq is a directory of NCBI-derived genome assembly data containing assembled genomes that NCBI RefSeq staff selected from the primary INSDC data. You should use the refseq directory if you are interested in annotation data that are of high quality and regularly maintained. The sequences of a RefSeq genomic assembly are a copy of those present in the corresponding INSDC assembly. In some cases the copy may not be completely identical as the RefSeq staff may (1) remove smaller pieces (known as contigs) of a sequence or reported contaminants or (2) add non-nuclear genome sequences (for example, mitochondrion) to the assembly. To find primary GenBank (INSDC) assemblies used to create the RefSeq assemblies, use the assembly reports files. All RefSeq genome assemblies have annotations that RefSeq staff either propagated from the primary records or provided through NCBI prokaryotic or eukaryotic genome annotation pipelines. The number of genomic assemblies present in the refseq directory is smaller than that in the genbank directory. The directory is organized by taxonomic groups and you will be able to browse it directly.

all is a directory that combines the contents of the genbank and refseq directories. Each individual assembly data file is contained in an individual sub-directory. The all directory holds many thousands of sub-directories and you should only access it as a path to a known assembly. Many of the sub-directories are for old versions of assemblies; these are archival and the RefSeq staff will not update them with new data or data in new file formats.

All other directories on the NCBI Genomes FTP site are legacy directories and we will be sequentially archiving them. If you are using any of these directories, pay attention to their update dates to assure that you are obtaining current data. If you find a directory missing, check if it has already been moved into the archive directory, which you will also find on the Genomes FTP site.

Read more about the FTP genomes site structure and learn details on the site reorganization, content, file formats, downloading instructions, and future plans.

Best regards
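As for actually building the BLAST database of all viral genomes: one way is to use the bundled RefSeq release files - a sketch (the FTP layout changes over time, as the email above warns, so verify the paths first):

    # grab the bundled RefSeq viral genomic FASTA files
    wget "ftp://ftp.ncbi.nlm.nih.gov/refseq/release/viral/viral.*.genomic.fna.gz"
    gunzip viral.*.genomic.fna.gz
    cat viral.*.genomic.fna > refseq_viral.fna

    # build a nucleotide BLAST database from them
    makeblastdb -in refseq_viral.fna -dbtype nucl -title refseq_viral -out refseq_viral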
biostars
{"uid": 192391, "view_count": 4438, "vote_count": 1}
Hello everyone, I'm looking for a function in Python or Biopython that can calculate the `tetranucleotide frequency` of given regions of a scaffold. The idea is that I have several **regions** and I want to identify possible `changes in nucleotide composition` that correspond to `endogenization` regions within my genome; for that I need to calculate the TNFs across regions for these contigs. I then need to calculate the Pearson correlation of these frequencies against the TNF of a set of the largest contigs in these genome assemblies (these contigs most likely really belonging to the genome, not having been endogenized). Does someone know such a package in Python? Thank you
[**CheckM**][1] can do what you need - see [**here**][2]:

    checkm tetra seqs.fna tetra.tsv

You can use the frequencies to separate the sequences in a 2D plot using various dimensionality reduction methods such as PCA, tSNE (shown below) or UMAP.

![enter image description here][3]

[1]: https://github.com/Ecogenomics/CheckM
[2]: https://github.com/Ecogenomics/CheckM/wiki/Utility-Commands
[3]: https://i.ibb.co/xG0Zfmt/t-SNE-cluster-plot.png
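If you'd still prefer to stay in Python, the frequencies themselves are simple to compute - a sketch using Biopython only for the FASTA parsing (the file name is a placeholder; pair it with `scipy.stats.pearsonr` for the correlation step):

    from itertools import product
    from Bio import SeqIO

    KMERS = ["".join(p) for p in product("ACGT", repeat=4)]  # all 256 tetranucleotides

    def tetra_freq(seq):
        """Return the 256 tetranucleotide frequencies of a sequence."""
        seq = str(seq).upper()
        counts = dict.fromkeys(KMERS, 0)
        for i in range(len(seq) - 3):
            kmer = seq[i:i + 4]
            if kmer in counts:        # silently skips windows containing N
                counts[kmer] += 1
        total = sum(counts.values()) or 1
        return [counts[k] / total for k in KMERS]

    for rec in SeqIO.parse("scaffolds.fasta", "fasta"):
        freqs = tetra_freq(rec.seq)   # slice rec.seq first if you only want a region
        print(rec.id, freqs[:4])      # demo: first four frequencies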
biostars
{"uid": 485985, "view_count": 824, "vote_count": 1}
Before I report this as a bug, can someone show me the proper way to add metadata using the `AddMetaData` function? I first read a file that is newline-delimited with the meta data labels I want, using read.table. The timecell_columns.txt file looks like this:

"12wks"
"12wks"
"12wks"
"12wks"
"12wks"
...

So I run this command to make the dataframe:

timecell_col <- read.table("~/Projects/FetalPancreas/timecell_columns.txt", header=FALSE, sep="\n")

and then this to AddMetaData:

AddMetaData(object = scfp, metadata = timecell_col, col.name = 'time.cell')

However, when I do:

head(x = scfp@meta.data)

All I see is the original meta data values:

orig.ident nCount_RNA nFeature_RNA
GSM2978830 scfetpan 382565 5080
GSM2978831 scfetpan 634726 4932
GSM2978832 scfetpan 565912 4501
GSM2978833 scfetpan 717152 5117
GSM2978834 scfetpan 207659 3508
GSM2978835 scfetpan 869148 5055

Am I reading the data in correctly? Any help would be appreciated!

Very Respectfully,
Pratik
So, figured it out. Months later, but nonetheless got it. For reference: my column names were the GSM sample names because I downloaded the data from GEO, where each GSM####### corresponds to a cell (see image below):

![enter image description here][1]

Your gene expression matrix/data frame should have cells as column names and genes as row names. It may be obvious to some, but it was not obvious to me then.

In order to add meta data, your **meta data data frame should have the cells as row names**, the **corresponding meta data information as the row values**, and lastly **your column names should be what you want each meta data field to be named** - see below:

![enter image description here][2]

If your meta data data frame is arranged in this way, you can simply use a modification of the default Seurat command, adding `meta.data = yourmetadata` like so:

    scfp <- CreateSeuratObject(counts = df, project = 'scfp', meta.data = SCFPmetadata, min.cells = 3, min.features = 200)

or the following after you make your Seurat object:

    scfp <- AddMetaData(object = scfp, metadata = SCFPmetadata)

[1]: /media/images/7f0a6f5c-47f6-4b38-9267-61b235ee
[2]: /media/images/9ac4db2f-b63a-44b9-a85f-c8c127c8

Hope this helps!
biostars
{"uid": 462861, "view_count": 7884, "vote_count": 1}
Hi all! I am having problems and I hope I can get some help from you. I will explain my situation: I'm trying to perform a PCA analysis to see how different several bam files are. I'm using the following pipeline:

1. Getting the accession files. I am using the R library "SRAdb", so I am getting 4 files in .sra format.

2. Converting the .sra files into .bam format with SRA-tools, using the following code:

`sam-dump -r --min-mapq 25 $file | samtools view -bS > $file.bam`

3. Sort

`samtools sort $file -o $file_sorted`

4. Index

`samtools index $file_sorted $file_sorted.bai `

5. Compute a matrix to generate the PCA plot

` multiBamSummary bins -b $files.bam -o my/out/path --smartLabels -bs 10000 -p 2`

At this point I'm getting the following error:

The file < myfile > does not have BAM or CRAM format.

I haven't been able to trace the error, as none of the earlier steps reported any error. Any suggestions? (Ideally I would like to skip the alignment step; I want to keep the file as close to the original as possible.)

- sra-tools --version 2.9.1_1
- samtools --version 1.9
- deeptools --version 3.3.0

Thanks beforehand!!
Let us use one of the example accession numbers above (SRR2089860). These are single-end reads. Your options are: Use `fastq-dump` to dump the reads out in fastq format (remove `-X 5` for full set) $ fastq-dump -X 5 SRR2089860 Read 5 spots for SRR2089860 Written 5 spots for SRR2089860 Use `sam-dump` to create fastq format files $ sam-dump --fastq SRR2089860 | head -16 @HWI-D00473:169:HFK7WADXX:1:1101:1202:2011/1 unaligned NGAGTCTATACTCGTTACATTCGCGTAACTCATTGTTAATCGCGAAGTTGA + #1=DDDDFGHHGHJJJIJIIJGIIJJIJHICGIIIJJJIJGIJEHJIGIIG @HWI-D00473:169:HFK7WADXX:1:1101:1195:2074/1 unaligned CTCGAACTCCTCGTAGTGGCGATTGTCGGTGCTGCCCACCAGGTCCACTGT + CCCFFFFFHGHHHJIJIJJJJHIIGGGHIECEHFHGIEFIGGJGHJIIGIG @HWI-D00473:169:HFK7WADXX:1:1101:1230:2087/1 unaligned TGCCGGGAATTGTACAGTGCTCAGCTTTATAGGACATTTCCAAACAGTTAT + BBBFFFF8FHHHHJJJIJJJJIJGJJIJFGJIFGIIIJJJIGIEIIIIJGG @HWI-D00473:169:HFK7WADXX:1:1101:1222:2168/1 unaligned CCGAGACTTGCCTGCTCACCAGCGAAGAGGGCGAGGAGCGTTTGACGGCCG + @@CDDADDHFHHHIIIIIHGGE<GEGIEHIGIIDHGHGGIHHHEFFFCCCB Use `sam-dump` to write SAM format files. This data appears to be unaligned (so --min-mapq should not affect anything, you can check). $ sam-dump -r SRR2089860 | head -4 HWI-D00473:169:HFK7WADXX:1:1101:1202:2011 4 * 0 0 * * 0 0 NGAGTCTATACTCGTTACATTCGCGTAACTCATTGTTAATCGCGAAGTTGA #1=DDDDFGHHGHJJJIJIIJGIIJJIJHICGIIIJJJIJGIJEHJIGIIG HWI-D00473:169:HFK7WADXX:1:1101:1195:2074 4 * 0 0 * * 0 0 CTCGAACTCCTCGTAGTGGCGATTGTCGGTGCTGCCCACCAGGTCCACTGT CCCFFFFFHGHHHJIJIJJJJHIIGGGHIECEHFHGIEFIGGJGHJIIGIG HWI-D00473:169:HFK7WADXX:1:1101:1230:2087 4 * 0 0 * * 0 0 TGCCGGGAATTGTACAGTGCTCAGCTTTATAGGACATTTCCAAACAGTTAT BBBFFFF8FHHHHJJJIJJJJIJGJJIJFGJIFGIIIJJJIGIEIIIIJGG HWI-D00473:169:HFK7WADXX:1:1101:1222:2168 4 * 0 0 * * 0 0 CCGAGACTTGCCTGCTCACCAGCGAAGAGGGCGAGGAGCGTTTGACGGCCG @@CDDADDHFHHHIIIIIHGGE<GEGIEHIGIIDHGHGGIHHHEFFFCCCB To do PCA analysis you will need to align fastq data to reference, count aligned reads to get an expression estimate. You could also use something like `salmon` to align to transcriptome to get counts.
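Once you do have coordinate-sorted, indexed BAMs of aligned reads, deepTools can take over; a sketch of the tail end of the pipeline (aligner choice is up to you, e.g. `bwa` or `hisat2`; file names are placeholders):

    multiBamSummary bins -b s1.bam s2.bam s3.bam s4.bam \
        --smartLabels -bs 10000 -p 2 -o readCounts.npz
    plotPCA -in readCounts.npz -o PCA.png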
biostars
{"uid": 382096, "view_count": 2193, "vote_count": 1}
Good night, could you please help me with the following question: is it better to use logFC or logCPM to analyze my RNA-seq data between different treatments? And is it better to use the p-value or the FDR? Thanks for your help.
Since logFC reflects the *difference* between your conditions, and that's what you're interested in, that is what you should pay attention to, and what will be meaningful to think about. LogCPM means very little in RNA-seq experiments. If you have RNA-seq data, you're typically measuring thousands of genes, which means you're testing thousands of hypotheses, and so it is better to use FDR rather than simple p-values. Remember that for a p-value cutoff of 0.05, you're essentially saying you are rejecting the null hypothesis (no difference between your conditions) in favor of the alternate hypothesis (there is an effect, a measurable difference between your conditions), and that if you're wrong (the null hypothesis is actually true), you would only see an effect as large as the one you measured 1 in 20 times. You can apply this same logic to all the genes you're measuring. Under the null hypothesis and a p-value cutoff of 0.05 you would expect a false positive 1 in 20 times, and since you're measuring thousands of genes, you can expect many genes to pass a p-value threshold by chance simply because you are performing so many tests. Thus you must "adjust" your p-values to account for this (calculate an FDR), which usually means inflating the p-values in some way. Genes with very low p-values will survive the adjustment (a very tiny number can still be very tiny even if multiplied by another number). (If you search here for FDR or adjusted p-value, you'll find better explanations by people who actually know what they're talking about.) Also, make sure you read the [edgeR userguide][1]; it's a good reference and explains what CPM is and isn't good for. [1]: https://www.bioconductor.org/packages/release/bioc/vignettes/edgeR/inst/doc/edgeRUsersGuide.pdf
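In R, converting a vector of raw p-values into Benjamini-Hochberg FDR values is a one-liner (sketch with made-up numbers):

    pvals <- c(1e-8, 0.001, 0.02, 0.04, 0.5)
    p.adjust(pvals, method = "BH")   # BH-adjusted p-values, i.e. the FDR

This is essentially what limma/edgeR/DESeq2 report in their `adj.P.Val`/`FDR`/`padj` columns.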
biostars
{"uid": 9479902, "view_count": 1347, "vote_count": 1}
Hi,

I'm trying to run the [PathSeq software](http://www.broadinstitute.org/software/pathseq/Installation.html) and I followed all the steps exactly, but I'm getting the following message:

    Client.AuthFailure: Not Authorized for images: ami-5fb67036

I'm new to cloud computing and I have no idea what the problem is. Any help would be greatly appreciated!

Thank you!

Anna
The problem was with the image that the Broad Institute provided. They have fixed the problem.
biostars
{"uid": 11637, "view_count": 2591, "vote_count": 1}
I have two tables: Table-A and Table-B. I want to extract the rows of Table-B that match Table-A, producing something like Table-C, for all the Motif_ID values in column 1 of Table-A. Can anybody help me with an R script?

Table-A

```
Motif_ID Size Overlap Overlap Fold enrichment P value FDR Motif_ID DBID TF_Name Family_Name
M5301_1.01 181 181 14 16.19 6.81E-18 6.12E-15 M5301_1.01 ENSGALG00000013342 GBX2_CHICK Homeodomain
M4396_1.01 500 500 16 6.7 1.53E-14 4.57E-12 M4396_1.01 ENSGALG00000005048 SMARCC1 Myb/SANT
M4435_1.01 493 493 16 6.79 1.20E-14 5.38E-12 M4435_1.01 ENSGALG00000010036 FOSL2_CHICK bZIP
```

Table-B

```
Motif Gene Start Stop Log-odds p-value Site
M0082_1.01 Chr25_scale5_25_989144_990145_+_-R 380 389 - 11.5048 2.30E-05 AGCCTCAGGG
M0082_1.01 Chr25_scale6_25_989242_990243_+ 515 524 + 11.5048 2.30E-05 AGCCTCAGGG
M0082_1.01 Chr25_scale9_25_997105_998106_+_-R 378 387 - 11.5048 2.30E-05 AGCCTCAGGG
```

Table-C

```
Gene Start Stop Log-odds p-value Site Size Overlap Overlap Fold enrichment P value FDR Motif_ID DBID TF_Name Family_Name
M5301_1.01 Chr25_scale1_25_981718_982719_+_-R 516 529 + 10.3933 8.65E-05 TAATTTGCTGATTA 181 181 14 16.19 6.81E-18 6.12E-15 M5301_1.01 ENSGALG00000013342 GBX2_CHICK Homeodomain
Chr25_scale2_25_981736_982737_+ 455 468 - 10.3933 8.65E-05 TAATTTGCTGATTA
Chr25_scale3_25_985474_986475_+_-R 758 771 + 12.5796 2.05E-05 TAATTTGCCCATTA
Chr25_scale3_25_985474_986475_+_-R 758 771 - 13.426 1.11E-05 TAATGGGCAAATTA
Chr25_scale4_25_985197_986198_+ 508 521 + 13.426 1.11E-05 TAATGGGCAAATTA
Chr25_scale4_25_985197_986198_+ 508 521 - 12.5796 2.05E-05 TAATTTGCCCATTA
Chr25_scale5_25_989144_990145_+_-R 523 536 + 12.5796 2.05E-05 TAATTTGCCCATTA
Chr25_scale5_25_989144_990145_+_-R 523 536 - 13.426 1.11E-05 TAATGGGCAAATTA
```
Assuming your two tables are read into data frames `tableA` and `tableB` (note that `Table-A` with a hyphen is not a valid R object name):

    merge(tableA, tableB, by.x="Motif_ID", by.y="Motif")

This should work; however, IDs in `tableA$Motif_ID` **must be unique**.

P.S. similar questions were asked [here][1] before. Also, I don't know how merge will cope with duplicate column names, so get rid of one of the two `Motif_ID` columns first.

[1]: https://www.biostars.org/p/91074/
biostars
{"uid": 139737, "view_count": 1292, "vote_count": 1}
Hello community, I am using snakemake to make a pipeline. I want to add the bowtie2-build from bowtie2 to my current snakefile as follow: rule bowtie2Build: input: "refgenome/infected_consensus.fasta" output: "output/reference" shell: "bowtie2-build {input} {output}" So I should be expecting the following files: reference.1.bt2 reference.2.bt2 reference.3.bt2 reference.4.bt2 reference.rev.1.bt2 reference.rev.2.bt2 But it seems the problem lies in the output. How can I write the output?
You wrote this:

    rule bowtie2Build:
        input:
            "refgenome/infected_consensus.fasta"
        output:
            "output/reference"
        shell:
            "bowtie2-build {input} {output}"

What snakemake is going to do is check whether the output file (in this case "output/reference") exists after executing the rule. It doesn't, because that path is only the basename that bowtie2-build uses for the index files.

What you can do is pass the index basename to the rule as a parameter instead, something like the following (note that bowtie2 names the reverse-index files `.rev.1.bt2`/`.rev.2.bt2`):

    rule bowtie2Build:
        input:
            "refgenome/infected_consensus.fasta"
        params:
            basename="output/reference"
        output:
            output1="output/reference.1.bt2",
            output2="output/reference.2.bt2",
            output3="output/reference.3.bt2",
            output4="output/reference.4.bt2",
            outputrev1="output/reference.rev.1.bt2",
            outputrev2="output/reference.rev.2.bt2"
        shell:
            "bowtie2-build {input} {params.basename}"
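On newer Snakemake versions (5.9 or so onwards, if I remember right) the same output list can be written more compactly with `multiext`; a sketch:

    rule bowtie2Build:
        input:
            "refgenome/infected_consensus.fasta"
        params:
            basename="output/reference"
        output:
            multiext("output/reference",
                     ".1.bt2", ".2.bt2", ".3.bt2", ".4.bt2",
                     ".rev.1.bt2", ".rev.2.bt2")
        shell:
            "bowtie2-build {input} {params.basename}"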
biostars
{"uid": 342988, "view_count": 4954, "vote_count": 2}
Hello! I have MBD-seq datasets from Dnmt1 knock-out and control cells. As expected, the Dnmt1 KO covers very few genomic regions compared to the control sample, since no Dnmt1 is present in the KO. The problem is that the reads in the KO sample are concentrated in those few genomic regions, which makes the signal intensity too high. What I'm curious about is how I should normalize such data, where the samples are expected to differ in their overall read distribution. For example, if the KO and control have 100 and 10,000 detected bins (> 0 reads) respectively, and each of them has a million total reads, each bin in the KO will have ~100 times more reads, leading to biased quantification of MBD-seq enrichment. Would it be OK to subsample 1/100 of the reads from the KO sample to compensate for the difference? What does everyone think? Thank you!
I suggest you go through the manual of MEDIPS (https://bioconductor.org/packages/release/bioc/html/MEDIPS.html) to get a guideline for the analysis. It also covers normalization.
biostars
{"uid": 416240, "view_count": 1186, "vote_count": 2}
Hello!! I have been trying to figure out how I can calculate the number of SNPs in sliding windows. I have a list with three columns: Scaffold"\t"Scaffold_Length"\t"Number_SNPs (per scaffold)

    Scaffold_28 70818817 894731
    Scaffold_3 5123947 57985
    Scaffold_13 4491039 67622
    Scaffold_12 3793473 51663
    Scaffold_23 3593776 31841
    Scaffold_11 3547442 63973
    Scaffold_26 2720936 36018
    Scaffold_16 2719413 24318
    Scaffold_27 1987753 53938
    Scaffold_24 1647859 18408
    Scaffold_9 1630703 15792
    Scaffold_32 1545880 21094
    .
    .
    .

Based on the second column, I want to use sliding windows (500 kbp) and calculate how many SNPs fall into each window. I ran bedtools makewindows, but I have not figured out how to count and sum the SNP density. Thanks a lot for your help!!
One way is to pipe BED-formatted, sliding windows into `bedmap`, using its `--count` operator to count the number of SNPs that fall within each window. The following `bedops` statement would generate 500knt windows from scaffolds, spaced every 100knt. These windows are passed along to `bedmap`, which counts the number of SNPs that fall in each of those windows: $ bedops --chop 500000 --stagger 100000 -x <(awk -vOFS="\t" '{ print $1, $2-1, $2; }' scaffolds.txt | sort-bed -) | bedmap --echo --count --delim '\t' - <(vcf2bed < snps.vcf) > answer.bed The result is written to `answer.bed`, each line of which containing the window and the number of SNPs over that window.
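Since you already started with `bedtools makewindows`, the equivalent counting step there is `intersect -c`; a sketch, assuming a two-column `genome.txt` (scaffold name and length, which you already have) and your SNPs in sorted BED format:

    bedtools makewindows -g genome.txt -w 500000 -s 100000 \
        | bedtools intersect -a - -b snps.bed -c > snp_density.bed

The last column of `snp_density.bed` is the SNP count per window; drop `-s 100000` if you want non-overlapping 500 kb windows instead of sliding ones.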
biostars
{"uid": 321655, "view_count": 2812, "vote_count": 1}
I have stranded RNA-seq data which doesn't look stranded when I visualize bedGraphs in the Genome Browser: ![enter image description here][1] I tried different aligners (STAR, bowtie, bwa) and different ways to make the bedGraph files; nothing works. But when I run salmon on these samples, it detects ISR (paired-end stranded library). I'm very confused why it doesn't look stranded in the browser when I know it is stranded. Any help would be really appreciated! I've been trying to solve it on my own for a while... [1]: https://s8.postimg.cc/4yxzbs4bp/strand_error.png
Hi, The reason for this is that whether or not to consider a fragment for quantification in salmon is considered *after* the mapping is written. That is, the reads are mapped to the transcriptome, without regard for the inferred or provided library type, and then these mappings are written to file if requested by the user. However, when salmon considers the probability of these mappings arising from different transcripts, it will discard those that are not compatible with the library type. So, my guess is that you're seeing in this output multi-mapping reads where the sequence matches, and hence there is a possible allocation of this read to the transcript, but for which these mappings are later discarded during quantification. You could, for example, filter the mapping file based on the tags of the alignment records, to only keep those reads that align according to the ISR library type.
biostars
{"uid": 332526, "view_count": 2383, "vote_count": 1}
Hello, I am trying to make this work. I have BLAST+ version 2.11.0+ on my Linux computer and I would like to update this version to the latest (2.13.0+). I successfully downloaded the newest version from the [NCBI website][1]. However, after careful reading of the manual, I'm still having trouble configuring this new version in place of the previous one. When I type from anywhere in a terminal:

```
$ blastn -version
$ blastn: 2.11.0+
```

whereas when I go directly into the bin folder of what I've downloaded:

```
$ blastn -version
$ blastn: 2.13.0+
```

**So, the issue is with the PATH variable.** To try to solve it, I did:

    export PATH=$PATH:$HOME/ncbi-blast-2.13.0+

but it still doesn't work with the new version. I know I always have trouble with the PATH variable when I'm installing/updating software. So if you have an idea of how to solve this, I would really appreciate it!

Many thanks for your help,

[1]: https://ftp.ncbi.nlm.nih.gov/blast/executables/LATEST/
You should prepend instead of appending the new directory to your PATH, and point at the `bin` subdirectory where the executables actually live (you said `blastn -version` reports 2.13.0+ when run from inside `bin`):

    export PATH=$HOME/ncbi-blast-2.13.0+/bin:$PATH

Now, when you run your blastn command, the shell will look in the 2.13.0 folder first.
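To make this permanent, append the same line to your shell startup file (a sketch, assuming bash; use `~/.zshrc` for zsh):

    echo 'export PATH=$HOME/ncbi-blast-2.13.0+/bin:$PATH' >> ~/.bashrc
    source ~/.bashrc
    blastn -version   # should now print 2.13.0+

You may also want to remove or rename the old 2.11.0 binaries so they can't shadow the new ones.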
biostars
{"uid": 9533249, "view_count": 852, "vote_count": 1}
Hi all, I get the following Error: Error in do_one(nmeth) : NA/NaN/Inf in foreign function call (arg 1) I'm trying to make a heatmap (pheatmap package) using K means. I have a large matrix (x) and I did the following: x[x < 0] <- NA x2 <- x[complete.cases(x),] RedBlue <- colorRampPalette(c("blue", "white", "red")) (20) pheatmap(x2, scale="row", kmeans_k=100, cluster_rows=TRUE, cluster_cols=TRUE, clustering_method = "ward.D2",color=RedBlue, legend=FALSE, show_colnames=TRUE, fontsize=15, fontface="bold", border_color=NA, width=20, heigth=100, cellwidth=30, cellheight = 5, show_rownames = FALSE, filename = "Heatmap.tiff") I checked for NA/NaN/Inf but I don't have this is my matrix. What can be another reason for this error? Thanks!
This is not a bioinformatics question. Also, before posting in a more appropriate channel, provide a reproducible example. Now, since I am here...

This error can happen for several reasons, the most common being:

- presence of NAs, including those produced by `scale()` in the case of variables with 0 variance
- presence of non-numerical values
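Note that with `scale="row"`, pheatmap rescales each row itself, so any row of `x2` with zero variance produces NaNs internally even if the input contains none, and those NaNs then crash the k-means step (`do_one` is internal to `kmeans`). A quick sketch to check and filter (assuming `x2` is your numeric matrix):

    sum(!is.finite(as.matrix(x2)))   # any NA/NaN/Inf hiding in the data?
    rv <- apply(x2, 1, var)
    x2 <- x2[rv > 0, ]               # drop zero-variance rows before row-scaling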
biostars
{"uid": 433626, "view_count": 13124, "vote_count": 1}
I want to download the annotation file in GFF3 format for a given genome. While this is fairly easy on the NCBI web page, I can't find a way to do the same with efetch or the like. I hoped I could use something like this:

    esearch -db nuccore -query "$genome_id" | efetch -format gff3 > "$path_data/$genome_id.gff"
There are a couple of strategies you can try, depending on what you mean by $genome_id. In each case, it's a matter of finding the right FTP path, and then using wget to get the *genomic.gff.gz file in that path: 1. If you have assembly accessions, you can get FTP paths for each from the assembly_summary.txt file, and loop through them with wget. See https://www.biostars.org/p/61081/ for a good post on the approach 2. If you have nucleotide sequence accessions for chromosomes, you can use esearch to directly query the Assembly database, and get the FTP path from the document summary: esearch -db assembly -query NC_000913.3 | esummary | xtract -pattern DocumentSummary -element Taxid,Organism,AssemblyAccession,FtpPath_RefSeq 3. If you have nucleotide sequence accessions that don't directly work for queries in the Assembly database (e.g. contigs or scaffolds), you can query in nucleotide first and link to assembly: esearch -db nuccore -query NZ_GL379776.1 | elink -target assembly | esummary | xtract -pattern DocumentSummary -element Taxid,Organism,AssemblyAccession,FtpPath_RefSeq
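To close the loop on options 2 and 3, a sketch of feeding the returned FTP path to wget (the accession is the example used above; the `_genomic.gff.gz` suffix follows the standard RefSeq/GenBank FTP naming, where the file is prefixed with the directory basename):

    ftp=$(esearch -db assembly -query NC_000913.3 \
          | esummary \
          | xtract -pattern DocumentSummary -element FtpPath_RefSeq)
    wget "${ftp}/$(basename ${ftp})_genomic.gff.gz"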
biostars
{"uid": 296825, "view_count": 10272, "vote_count": 5}
Hi All, I'm trying to speed up a BLASTP call as part of a bigger RBH workflow to detect orthologs, and I'm in the process of testing different approaches with a 100K sequence database and a 1107 sequence query (the real database will be 350K; queries will differ in size). My function splits the query into separate files and processes them separately using Python multiprocessing (Process or Pool), and I'm also looking at combining that with BLASTP's `-num_threads` parameter to increase speed further. I'm very new to parallelisation in general (both threading and multiprocessing) but am keen to know more! I posted these questions together as they all relate to the same code, generally continue on from each other, and I can accept multiple answers (unlike Stack Overflow), but please let me know if you'd suggest posting them separately. I'd be grateful for any answers; they don't have to cover every point in one go :D

***Question 1*** - I'm running local BLASTP and was wondering about the details of the `num_threads` parameter. Am I right in thinking that (as the name suggests) it spreads the workload across multiple threads on a single CPU, so is kind of analogous to Python's `threading` module (as opposed to the `multiprocessing` module, which spreads tasks across separate CPUs)? I've heard BLAST only goes above 1 thread when it 'needs to', but I'm not clear on what this actually means - what determines if it needs more threads? Does it depend on the input query size? Are threads split at a specific step in the BLASTP program?

***Question 2*** - To check I have the right ideas conceptually: if the above is correct, would I be right to say that BLAST itself is I/O bound (hence the threading), which makes sense as it's processing thousands of sequences in the query, so lots of input? But if you want to call BLAST in a workflow script (e.g. using Python's `subprocess` module), then the call is CPU bound if you set `num_threads` to a high number, as it spreads the work across multiple threads on a single CPU, which takes a lot of the CPU power? Or does the fact that BLASTP is not taking full advantage of the threading mean that the CPU is not actually getting fully utilised, so a call will still be input/output bound independent of `num_threads`? If that's correct, then maybe I could use threading to process the split queries rather than multiprocessing...

***Question 3*** - Are there any suggestions for how to get the best core and thread parameters for general use across different machines without relying on individual benchmarking? (I want it to work on other people's machines with as little tuning and optimisation as possible.) Is it just **cores = as many cores as you have** (i.e. `multiprocessing.cpu_count()`) and **threads = cores + 1** (defined by the BLASTP parameter `num_threads`)? Would this still be true on machines with more/fewer cores?

***Question 4*** - For benchmarking, how do external programs affect multiprocessing speed? Would scrolling the web with 100 tabs open impact multiprocessing speed by increasing the work done by one of the CPUs, taking away resources from one of the processes running my script? If the answer is yes, what's the best way to benchmark this kind of thing? I'm including this question to give context on my benchmarking questions below (i.e. the numbers I am throwing around may be crap).
I tried to include graphs of the numbers but they won't copy in; however, I found a post explaining how to add pics, so if they are helpful I can add them in.

***Question 5*** - Perhaps a more general question: I'm only splitting my query into 4 processes, so I would have thought `multiprocessing.Process` would be better (vs `multiprocessing.Pool`, which seems the preferred choice if you have lots of processes). But this isn't the case in my benchmarks for multiprocessing using `blastP_paralellised_process` and `blastP_paralellised_pool` - any idea why? Timewise the `process` to `pool` 'seconds' ratio hovers around 1 with no obvious pattern for all `num_threads` (1-9) and `core` (1-5) combinations.

***Question 6*** - Why does increasing the number of cores used to process `number of cores` * `split BLASTP queries` not result in obvious speed improvements? I would expect this with cores set >4, as my PC is a 4-core machine, but there seems to be little difference between processing 1/4 query files across 4 cores vs processing 1/2 query files across 2 cores. Is my assumption for **Question 2** incorrect? There is a little bit of slowdown for running on a single core and a dramatic increase for 1 core with 2 and 1 threads (1618 seconds and 2297 seconds), but for 2-5 cores with 1-9 threads the time for each BLASTP run is around 1000 seconds with some small random fluctuations (e.g. 4 cores 1 thread is 1323 seconds, but the other multicore single-thread runs are normal timewise relative to the baseline of the other values).

I've copied my code below. I've not included functions like `split_fasta` etc., as both they and BLASTP seem to be working (in the sense that I'm getting XML results files that I haven't started parsing yet but look OK when I open them in Notepad) and I don't want to add 100 lines of unnecessary code and comments. Also, they're used in the same way for both `blastP_paralellised_process` and `blastP_paralellised_pool`, so I don't think they are causing the time differences. Please let me know if including these would help though!

    import os
    import time
    from multiprocessing import Process, Pool
    # helper functions (fasta_split, build_split_filename, copy_dir,
    # blastP_subprocess, makeblastdb_subprocess) are defined elsewhere, as described above

    def blastP_paralellised_process(evalue_user, query_in_path, blastp_exe_path, results_out_path, db_user, num_cores, thread_num):
        #function to split fasta query into 1 txt file per core
        filenames_query_in_split=fasta_split(query_in_path, num_cores)
        #function to construct result names for blastp parameter 'out'
        filenames_results_out_split=build_split_filename(results_out_path, num_cores)
        #copy a makeblastdb database given as input. generate one database per core.
        #Change name of file to include 'copy' and keep original database directory for quality control.
        delim=db_user.rindex('\\')
        db_name=db_user[delim::]
        db_base=db_user[:delim]
        databases=copy_dir(db_base, num_cores)  #1 db per process or get lock
        #split blastp params across processes.
        processes=[]
        for file_in_real, file_out_name, database in zip(filenames_query_in_split, filenames_results_out_split, databases):
            #'blastP_subprocess' is a blast-specific subprocess call that sets the environment to
            #env={'BLASTDB_LMDB_MAP_SIZE':'1000000'} and has some diagnostic error management.
            blastP_process=Process(target=blastP_subprocess,
                                   args=(evalue_user,
                                         file_in_real,
                                         blastp_exe_path,
                                         file_out_name,
                                         database+db_name,
                                         thread_num))
            blastP_process.start()
            processes.append(blastP_process)
        #let processes all finish
        for blastP_process in processes:
            blastP_process.join()

    def blastP_paralellised_pool(evalue_user, query_in_path, blastp_exe_path, results_out_path, db_user, num_cores, thread_num):
        ####as above####
        filenames_query_in_split=fasta_split(query_in_path, num_cores)
        filenames_results_out_split=build_split_filename(results_out_path, num_cores)
        delim=db_user.rindex('\\')
        db_name=db_user[delim::]
        db_base=db_user[:delim]
        databases=copy_dir(db_base, num_cores)
        ################
        #build params for blast
        params_new=list(zip(
            [evalue_user]*num_cores,
            filenames_query_in_split,
            [blastp_exe_path]*num_cores,
            filenames_results_out_split,
            [database+db_name for database in databases],
            [thread_num]*num_cores))
        #feed each param to a worker in pool
        with Pool(num_cores) as pool:
            blastP_process=pool.starmap(blastP_subprocess, params_new)

    if __name__ == '__main__':
        #make blast db
        makeblastdb_exe_path=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Executables\NCBI\blast-2.10.1+\bin\makeblastdb.exe'
        input_fasta_path=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Precomputed_files\fasta_sequences_SMCOG_efetch_only.txt'
        db_outpath=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_queries\DEMgenome_old\database\smcog_db'
        db_type_str='prot'
        start_time = time.time()
        makeblastdb_subprocess(makeblastdb_exe_path, input_fasta_path, db_type_str, db_outpath)
        print("--- makeblastdb %s seconds ---" % (time.time() - start_time))
        #get blast settings
        evalue_user= 0.001
        query_user=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_queries\DEMgenome_old\genome_1_vicky_3.txt'
        blastp_exe_path=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Executables\NCBI\blast-2.10.1+\bin\blastp.exe'
        out_path=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_results\blastresults_genome_1_vicky_3.xml'  #zml?
        num_cores=os.cpu_count()
        #benchmarking: repeat each core/thread combination a few times
        for replicate in range(1, 4):
            for num_cores in range(1,6)[::-1]:
                print()
                for num_threads in range(1,10)[::-1]:
                    start_time = time.time()
                    blastP_paralellised_process(evalue_user, query_user, blastp_exe_path, out_path, db_outpath, num_cores, num_threads)
                    end_time=time.time()
                    print(f"blastP process\t{end_time - start_time} seconds\t{num_cores} cores\t{num_threads} threads\treplicate {replicate}")
                    start_time = time.time()
                    blastP_paralellised_pool(evalue_user, query_user, blastp_exe_path, out_path, db_outpath, num_cores, num_threads)
                    end_time=time.time()
                    print(f"blastP pool\t{end_time - start_time} seconds\t{num_cores} cores\t{num_threads} threads\treplicate {replicate}")
                    print()
First, beware that I am no expert on running these kinds of jobs from Python scripts. In my experience, it is best to run this as a single job with the maximum available `num_threads`. If you have a single `-query` group of sequences and a single `-db` database, I think the program will load the database into memory once, and keep it there for all subsequent sequences. Any other solution that splits your sequences into multiple jobs will have to deal with loading the database multiple times, and I think that is likely to be the slowest part of this process even if you have a solid-state drive and really fast memory. That said, I don't think you need to worry too much with a database of 350K sequences and ~1000 queries. That will probably be done in a couple of hours on any modern computer. In other words, you may spend more time thinking (and writing) about it than what the actual run will take.
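In code terms, the suggestion boils down to replacing the splitting/multiprocessing machinery with one subprocess call; a sketch reusing the variable names from your question (untested, and `-outfmt 5` is just one XML-producing choice to match the files you already have):

    import os, subprocess
    subprocess.run(
        [blastp_exe_path, "-query", query_user, "-db", db_outpath,
         "-evalue", str(evalue_user), "-outfmt", "5", "-out", out_path,
         "-num_threads", str(os.cpu_count())],
        check=True)

That keeps a single database load in memory for all ~1000 query sequences.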
biostars
{"uid": 487527, "view_count": 1993, "vote_count": 1}
I am working on the NCI-60 database for data mining purposes. In NCBI I found miRNA expression profiles ([link][1]). But for the mRNA of each cell line I found 2 kinds of expression profile arrays in NCBI ([link][2]), categorized as Affymetrix HG-U133A and HG-U133B. The two arrays also share many mRNA probes and their expression values. I can't figure out which one to use, or how. What is the difference? [1]: http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE26375 [2]: http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE5720
[This PDF][1] from Affymetrix clarifies the difference between 133A, 133B and 133 Plus. As for using the datasets, look at this question: https://www.biostars.org/p/56657/. It seems the 60 samples were analyzed using both chips. Copy/pasted from the Affymetrix website: > The HG-U133A Array includes representation of the RefSeq database sequences and probe sets related to sequences previously represented on the Human Genome U95Av2 Array. The HG-U133B Array contains primarily probe sets representing EST clusters. [1]: http://media.affymetrix.com/support/technical/datasheets/human_datasheet.pdf
biostars
{"uid": 160402, "view_count": 7590, "vote_count": 1}
<p>I've assembled a de novo transcriptome from RNAseq data. I'm comparing it against a reference cDNA set that is publicly available. However, I'd like to see if my assembly has produced any novel transcripts. Is there something like a reverse alignment? I'd like to find transcripts that I've assembled that are not included in this file of cDNA. I will then map them to the genome using blat or a similar tool.</p> <p>How can I do what seems like the inverse of an alignment?</p>
Check here: https://www.biostars.org/p/14863/
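In case the link goes stale, the usual trick is to run the alignment and then take the complement of the hit list; a sketch with BLAST (file names are hypothetical, cutoffs to taste):

    makeblastdb -in ref_cdna.fa -dbtype nucl
    blastn -query assembly.fa -db ref_cdna.fa -evalue 1e-10 -outfmt '6 qseqid' | sort -u > with_hits.txt
    grep '^>' assembly.fa | sed 's/^>//; s/ .*//' | sort > all_ids.txt
    comm -23 all_ids.txt with_hits.txt > candidate_novel_ids.txt

`comm -23` keeps IDs present in your assembly but absent from the hit list; those are the transcripts to map back to the genome with blat.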
biostars
{"uid": 19520, "view_count": 2830, "vote_count": 1}
Hi,

I have a set of transcripts that a colleague has identified as being up-regulated in a study species cf. a reference species. Each transcript has a set of GO terms associated with it. The GO terms were pulled from a gb file.

The data is in R in a list; the names are transcript IDs and the values are the GO terms, e.g.:

    $WS010775.1
    [1] "0042567" "0005634" "0007155"

    $WS007996.2
    [1] "0005834" "0005886" "0003924" "0004871" "0010659" "0071456" "0042462" "0007186" "0007602" "0008104" "0007165" "0007268"

    $WS010604.4
    [1] "0016021" "0007155"

I'd like to present a summary of the biological functions/features that are represented by those up-regulated transcripts. I found REVIGO (http://revigo.irb.hr/) but I believe it requires that I do GO term enrichment first to get the p-values that REVIGO needs. Are there any tools I can use to do this, given that the species I'm studying is not human/mouse/etc.? Tools I've looked at find the terms and do enrichment analysis, but I already know the GO terms of each transcript; I just want to calculate enrichment.

Thanks,

Ben W.
One of the easiest GO enrichment tests to apply is Fisher's exact test. Take a look at this response: http://stats.stackexchange.com/a/72556/5143. This is easily done in R.
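A minimal sketch of the per-term test in R (the counts are made up; you would tally them from your own annotation list):

    # rows: up-regulated genes vs background (non-DE) genes
    # cols: annotated with the GO term vs not
    ct <- matrix(c(25, 475,      # 25 of 500 up-regulated genes carry the term
                   100, 9400),   # 100 of 9500 background genes carry the term
                 nrow = 2, byrow = TRUE)
    fisher.test(ct, alternative = "greater")$p.value

Loop this over all GO terms present in your list, then correct the resulting p-values with `p.adjust(..., method = "BH")` before feeding them to REVIGO.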
biostars
{"uid": 167790, "view_count": 2847, "vote_count": 1}
This question might not be exactly on-topic here, but I thought probably most of you have been in a similar situation before. When describing your methods in a manuscript, do you cite the components of a larger script? For example, MITObim uses MIRA to do mapping, so should I cite MIRA if I used MITObim in my work, or would citing MITObim be sufficient? MITObim is more than a simple wrapper script, but it relies heavily on MIRA. Just wanted to know what etiquette says in this case; I haven't written a real paper before.
If it is a published tool, cite only that tool and not everything that tool itself uses. The publication of the tool you used should have citations for anything it builds on. In some cases you can adjust settings of programs inside that tool; you should make sure to mention any non-default settings. However, some tools allow you to choose which sub-program you use. For example, RSEM can use STAR, bowtie or bowtie2. You should cite this because it allows readers to see exactly what software you used to perform your analysis. RSEM can also take SAM/BAM files as direct input, so if you used something totally different, you should obviously cite it. In your case, if you selected MIRA over another program for MITObim, you should cite it. If MITObim can use MIRA and only MIRA, you probably shouldn't cite it. If it is a custom script, cite the tools or libraries involved, as well as the author. A fuzzier case may be instances where the item in question is more protocol than software (e.g. Trinotate). In that case you should probably cite the tools used. There's always a judgement call about what level you stop at. The important thing is to generally cite things that contributed specifically to your analysis. I guess one way to think of it is: what programs did you (or the other authors) personally use directly, either by running the program or by specifying it over another program? Giving credit is obviously important, but when dealing with methodological items (wet or dry) the most important thing is giving the reader enough information that they could repeat the analysis.
biostars
{"uid": 173380, "view_count": 1368, "vote_count": 2}
How can I stack several manhattan plots in the same figure, like this (https://imgur.com/a/pj40c), using R? That plot was made in Excel and does not follow the real map positions. In my case I have 4 p-values from 4 different phenotypes, estimated from the same SNPs. I want to stack them in the same plot but distinguish the phenotypes by different colors. # Input data format, example. SNP CHR POS pval_1 pval_2 pval_3 pval_4 a 1 100 0.1 0.5 0.2 0.1 b 2 110 0.2 0.6 0.3 0.5 c 3 120 0.3 0.7 0.1 0.6 d 4 130 0.4 0.1 0.4 0.2 qqman is a very useful R package; unfortunately it only handles a single trait.
With a bit of customization you could do it in R with ggplot: library(dplyr) library(tidyr) # for gather() library(ggplot2) tmp <- data_frame( POS=seq(1,10000), pval1=1/runif(n = 10000, min=0, max=0.2), pval2=1/c(runif(n = 9999, min=0, max=0.2), 1e-5), pval3=1/runif(n = 10000, min=0, max=0.3), pval4=1/runif(n = 10000, min=0, max=0.4)) tmp.tidy <- tmp %>% gather(key, value, -POS) ggplot(tmp.tidy, aes(POS, value, color=key)) + geom_point() (With real GWAS data you would typically plot `-log10(p)` on the y axis; the values here are just simulated for illustration.) **Example**: [https://ibb.co/mYnb3w][1] Add `+ facet_wrap(~CHR)` to create a panel per chromosome. [1]: https://ibb.co/mYnb3w
biostars
{"uid": 276126, "view_count": 7182, "vote_count": 1}
Common Workflow Language (**CWL**) https://github.com/common-workflow-language/common-workflow-language / http://common-workflow-language.github.io/draft-3/ has been trending on my twitter timeline during the last weeks. However the spec is quite large and I find it hard to get some simple examples. Furthermore, I have the feeling that all engines require a lot of dependencies or docker. I'd like to test my makefile-based workflows using CWL, how should I write and test the following simple **Makefile** using CWL: ``` SHELL=/bin/bash .PHONY: all all : database.dna database.dna : seq1.dna seq2.dna seq3.dna cat seq1.dna seq2.dna seq3.dna > database.dna seq3.dna : seq3.rna tr "U" "T" < seq3.rna > seq3.dna seq3.rna : echo "AUGCGAUCGAUCG" > seq3.rna seq2.dna : seq2.rna tr "U" "T" < seq2.rna > seq2.dna seq2.rna : echo "AUGAAGACUGCGAUCGAUCG" > seq2.rna seq1.dna : seq1.rna tr "U" "T" < seq1.rna > seq1.dna seq1.rna : echo "AUGAAGACUGACUCGUCG" > seq1.rna ``` **EDIT**: feel free to add the file for your favorite workflow-engine as an answer. https://twitter.com/PaoloDiTommaso/status/625995681434607616 https://twitter.com/smllmp/status/625999447869231104
Responded with an example on this github issue: https://github.com/common-workflow-language/workflows/issues/1
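For readers landing here later, a rough flavour of what one step (the `tr "U" "T"` rule) looks like as a CWL CommandLineTool. This is written against the newer v1.0 syntax rather than the draft-3 spec linked above, so treat it as a sketch:

    cwlVersion: v1.0
    class: CommandLineTool
    baseCommand: [tr, U, T]
    stdin: $(inputs.rna.path)
    stdout: $(inputs.rna.nameroot).dna
    inputs:
      rna:
        type: File
    outputs:
      dna:
        type: stdout

A `class: Workflow` file then wires three such steps into the final `cat` step, playing the role of the Makefile's dependency graph.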
biostars
{"uid": 152226, "view_count": 5484, "vote_count": 13}
Hi All, I intersected two data sets and I have two columns like:

    m01 m01
    m01 m02
    m01 m05
    m01 m032
    m01 m02
    m01 m01
    m02 m06
    m02 m01
    m02 m02
    m02 m09
    ... ...
    m0500 m023
    ...

I would like to get the number of matches of each with the others, like:

        m01 m02 m03 ......
    m01 25 45 98 .....
    m02 90 223 12 ......
    .
    .

Would you please help me figure out how I can do that?

Thank you very much
You didn't say which language you are using. I guess since you tagged it with awk and perl, that's what you want?

Anyway, if you were using [R](http://www.r-project.org), you could do it with a call to `table`:

    ## Maybe you have this in a tab delimited file already
    > dat <- read.table('/path/to/2-column-file.txt')

    ## but I'll generate a random table that looks close-enough to your data:
    > set.seed(1)
    > dat <- data.frame(x1=rep(c('m01', 'm02', 'm03'), each=10),
                        x2=sample(c('m01', 'm02', 'm03'), 30, replace=TRUE))
    > head(dat)
       x1  x2
    1 m01 m01
    2 m01 m02
    3 m01 m02
    4 m01 m03
    5 m01 m01
    6 m01 m03

    > table(dat$x1, dat$x2)
          m01 m02 m03
      m01   3   4   3
      m02   2   3   5
      m03   4   4   2
biostars
{"uid": 60546, "view_count": 6698, "vote_count": 1}
Hi, I have one BAM file which contains all alignments (including those not used in variant calling, such as non-PF, non-mapping and duplicate reads) generated for an assembly. How can I filter out these useless mappings? I know that Picard MarkDuplicates can be used to remove duplicates. Thank you.
The [bamtools](https://github.com/pezmaster31/bamtools) package offers a wide range of filters, including user-definable filters defined in [JSON](http://json.org) notation. It includes filters for reads failing vendor QC, unmapped reads and pre-marked duplicates.
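A sketch of what that looks like on the command line (flag names from memory, so double-check `bamtools filter --help`):

    bamtools filter -in all.bam -out clean.bam \
        -isMapped true -isDuplicate false -isFailedQC false

or, equivalently with samtools, dropping reads with any of the unmapped (0x4), fail-QC (0x200) or duplicate (0x400) bits set:

    samtools view -b -F 1540 all.bam > clean.bam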
biostars
{"uid": 5500, "view_count": 24146, "vote_count": 8}
This is a general question about results that I have seen several times but only recently considered. If you do something like ChIP-Seq or HITS-CLIP or other techniques that immunoprecipitate DNA or RNA bound by a protein, then run motif analysis on the areas of the genome mapped to the reads that are pulled down, but do not find the binding motif for the protein that you used for the IP, what does it mean? Is this expected? Did something go wrong with the wet lab steps, or analysis?
In my opinion it suggests one of two things:

1. Either your antibody has specificity issues. It is not uncommon for antibodies to pull down regions other than the expected ones, TSSs in particular. The best control for that is to look for signal in a knockout cell line. I know some people who raised five (as I remember it) antibodies against a target and got quite consistent peak sets in ChIP-seq. They later discovered that all the antibodies produced plenty of peaks in a cell line that did not have the gene encoding the target, suggesting that the peaks were due to some cross-reactivity or 'sticky' regions. Ouch... Not a good day in the lab.
2. Alternatively, the antibody recognizes the target properly, but the assumed motif is not right, or the majority of binding is due to indirect recruitment. Again, data from a knockout cell line would be handy to convince your audience of that.
biostars
{"uid": 185832, "view_count": 1400, "vote_count": 1}
Hi,

I've got DNA mutation information, which can be in a file format like VCF or MAF. What I actually want to look at is how the mutations affect the corresponding amino acids.

Are there any tools that can generate the mutated protein sequences corresponding to the mutated DNA in batch?

Many thanks.
Great question (for that reason I upvoted it). You might try the R package "customProDB", which takes a VCF as input and outputs a protein FASTA incorporating SNVs and indels.

http://www.bioconductor.org/packages/release/bioc/html/customProDB.html
biostars
{"uid": 107321, "view_count": 7843, "vote_count": 8}
I couldn't find much in the SRA handbook about the differences between the run accession prefixes. I am downloading the tardigrade reads and was just curious (https://www.ncbi.nlm.nih.gov/sra/DRX012456[accn])
They were submitted to different databases. SRR to NCBI, ERR to EBI and DRR to DDBJ, I believe.
biostars
{"uid": 248744, "view_count": 7791, "vote_count": 2}
I have two large files with matching IDs in different columns. I want to merge the lines that share an ID, keeping the other values on each line. I want to merge file 1 (column 10) and file 2 (column 8).

File 1

```
chr0 385308 T A 228 hom 17 17 . BCL026842 384745 386336 +
chr0 589920 C T 73 het 16 5 . BCL026857 589920 590284 -
chr0 589925 T C 203 hom 15 15 . BCL026858 589920 590284 -
chr0 590091 C T 140 hom 6 6 . BCL026759 589920 590284 -
chr0 590131 A C 74 hom 4 4 . BCL026660 589920 590284 -
chr0 590142 A C 159 hom 7 7 . BCL025261 589920 590284 -
chr0 590161 G A 228 hom 10 10 . BCL024262 589920 590284 -
chr0 590193 A G 228 hom 15 15 . BCL023163 589920 590284 -
chr0 590281 G A 228 hom 20 20 . BCL026864 589920 590284 -
```

File 2

```
g111 scaffold00001 52496 52496 G C exonic BCL026842 nonsynonymous SNV "BCL001919:BCL001919T1:exon3:c.427C>G:p.P143A,"
g112 scaffold00001 52501 52501 G T exonic BCL026857 nonsynonymous SNV "BCL001919:BCL001919T1:exon3:c.422C>A:p.T141N,"
g122 scaffold00001 60197 60197 G A exonic BCL026858 synonymous SNV "BCL001920:BCL001920T1:exon2:c.276C>T:p.D92D,"
g156 scaffold00001 80052 80052 C T exonic BCL026859 synonymous SNV "BCL001921:BCL001921T2:exon1:c.240G>A:p.P80P,BCL001921:BCL001921T3:exon1:c.240G>A:p.P80P,"
g328 scaffold00001 166481 166481 C T exonic BCL026860 synonymous SNV "BCL001929:BCL001929T1:exon3:c.1110G>A:p.T370T,"
g329 scaffold00001 168237 168237 T A exonic BCL026861 nonsynonymous SNV "BCL001929:BCL001929T1:exon1:c.92A>T:p.N31I,"
g360 scaffold00001 178660 178660 T C exonic BCL026862 synonymous SNV "BCL001930:BCL001930T1:exon2:c.177A>G:p.G59G,"
g370 scaffold00001 180974 180974 A G exonic BCL026863 synonymous SNV "BCL001931:BCL001931T1:exon6:c.1521T>C:p.F507F,BCL001931:BCL001931T2:exon6:c.1521T>C:p.F507F,"
g414 scaffold00001 189463 189463 A G exonic BCL026864 nonsynonymous SNV "BCL001933:BCL001933T1:exon1:c.56T>C:p.V19A,"
```

Desired output

```
chr0 385308 T A 228 hom 17 17 . BCL026842 384745 386336 + g111 scaffold00001 52496 52496 G C exonic BCL026842 nonsynonymous SNV "BCL001919:BCL001919T1:exon3:c.427C>G:p.P143A,"
chr0 589920 C T 73 het 16 5 . BCL026857 589920 590284 - g112 scaffold00001 52501 52501 G T exonic BCL026857 nonsynonymous SNV "BCL001919:BCL001919T1:exon3:c.422C>A:p.T141N,"
chr0 589925 T C 203 hom 15 15 . BCL026858 589920 590284 - g122 scaffold00001 60197 60197 G A exonic BCL026858 synonymous SNV "BCL001920:BCL001920T1:exon2:c.276C>T:p.D92D,"
chr0 590281 G A 228 hom 20 20 . BCL026864 589920 590284 - g414 scaffold00001 189463 189463 A G exonic BCL026864 nonsynonymous SNV "BCL001933:BCL001933T1:exon1:c.56T>C:p.V19A,"
```

Any response using any command is appreciated. Thank you in advance
Using [join](http://linux.die.net/man/1/join) it's quite easy. Of course, you need to sort the files according to the field you join on:

    join -1 10 -2 8 <(sort -k10 FILE1) <(sort -k8 FILE2) > joined_file.txt

If you want to select certain fields, you can use either the `-o` option of the join command (`-o 1.1,1.2,...,0,2.1,2.2,...`) or the cut tool on the joined file (`cut -f 1,2,3,...`).
{"uid": 164380, "view_count": 1774, "vote_count": 1}
Hi! Do you know how I can filter out supplementary alignments from a BAM file? I was reviewing http://broadinstitute.github.io/picard/explain-flags.html and I am aware that the flag for this kind of alignment is "2048". However, depending on other features (e.g. paired read, second in pair, etc.), the overall flag value can vary, so I am not sure how I can filter out these alignments.
The SAM flag is a bit array (https://en.wikipedia.org/wiki/Bit_array). Filtering with `samtools view -F` or `samtools view -f` uses bitwise operations (https://en.wikipedia.org/wiki/Bitwise_operation), so the other bits don't have any consequence on the filtering.
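In practice that means a single mask does the job, regardless of what the other bits are set to:

    # -F 2048 (= 0x800) excludes any read with the supplementary bit set
    samtools view -b -F 2048 in.bam > no_supplementary.bam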
biostars
{"uid": 432137, "view_count": 4563, "vote_count": 1}
Hi - I was wondering if it is possible to download a random sample of proteins from a given protein database. I want to do this to compare proteins of interest to "background" proteins, i.e. a control. Probably a little trickier would be to download proteins that aren't of a certain type, e.g. non-membrane proteins. Has anyone done anything like this? I see in papers all the time "we used non-XXX proteins as a negative training set", and I'd imagine something like this would be a pain to do manually. Ideally I would not like to download entire databases, but rather do this task online. Anyone done this sort of thing?
Here are all the ~20k reviewed proteins in homo sapiens in the UniProt database:

http://www.uniprot.org/uniprot/?query=%28taxonomy%3A9606%29+AND+reviewed%3Ayes

Play around, exclude some, include non-reviewed or change organism, then click download.

Choose which format suits you, i.e. FASTA.

Here is the first one (just for fun):

    >sp|P31946|1433B_HUMAN 14-3-3 protein beta/alpha OS=Homo sapiens GN=YWHAB PE=1 SV=3
    MTMDKSELVQKAKLAEQAERYDDMAAAMKAVTEQGHELSNEERNLLSVAYKNVVGARRSS
    WRVISSIEQKTERNEKKQQMGKEYREKIEAELQDICNDVLELLDKYLIPNATQPESKVFY
    LKMKGDYFRYLSEVASGDNKQTTVSNSQQAYQEAFEISKKEMQPTHPIRLGLALNFSVFY
    YEILNSPEKACSLAKTAFDEAIAELDTLNEESYKDSTLIMQLLRDNLTLWTSENQGDEGD
    AGEGEN

Then sample some at random.
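Once you have such a FASTA file, drawing the random subset locally is easy; one sketch with seqkit (the sample size and seed are arbitrary):

    seqkit sample -n 1000 -s 11 uniprot_human_reviewed.fasta > background_1000.fasta

For the "proteins that aren't of type X" case, you can push the negation into the UniProt query itself (e.g. something like `NOT keyword:"Membrane"`) before downloading, then sample as above.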
biostars
{"uid": 106065, "view_count": 4360, "vote_count": 2}
I have 9 .bam files, produced from 2x75b PE Illumina reads (RNA-Seq) and aligned using STAR to the Ensemble rat reference genome. Each file has one @RG line with only two entries: ID and SM. So for sample s01, the @RG line looks as follows: `@RG ID:s01 SM:s01`. I have not included any library information (LB:) in the @RG line. When I run bamUtil's dedup to mark duplicates, I get the following error for each of the 9 .bam files: `WARNING: Cannot find library information in the header line @RG ID:s01 SM:s01 . Using empty string for library name` I'm a beginner here. As best as I can tell the duplication marking seems to have worked well. Should I be concerned that the input .bam files did not have a library defined? If I need to define a library for each .bam file, could you point me to some insights on what to define as the library? e.g. Should I just set the library to the sample name, so that between the 9 .bam files I will have 9 different libraries? Thanks, skhan
First: a warning is not an error. With an error, you would get no output; with a warning, you get output, but you may have to be careful and even discard it. The duplicate marking may have worked, but probably not optimally. The intention is to mark PCR and optical duplicates. PCR duplicates appear at the library preparation step; optical duplicates form at the clusterization step. I don't know the innards of bamUtil's duplicate marking, but it likely uses library information when marking PCR duplicates, so it should be important. If each library was run on only a single lane, then all is well; but if you loaded the same library on several lanes or sequencing runs, then the marking of duplicates will be non-optimal. Some background at [Read Group In Sam/Bam Files: What Do They Exactly Describe?][1] and [Read Groups (GATK forums)][2]. [1]: https://www.biostars.org/p/43897/ [2]: https://gatkforums.broadinstitute.org/gatk/discussion/6472/read-groups
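To answer the practical question: since each of your 9 BAM files presumably comes from its own library prep, setting one LB per sample (9 libraries) is reasonable. A sketch of adding the tag with samtools `addreplacerg` (available in samtools 1.3+, if I remember right; repeated `-r` options are joined into one @RG line):

    samtools addreplacerg -r ID:s01 -r SM:s01 -r LB:lib_s01 -o s01.rg.bam s01.bam

then re-run the duplicate marking on the retagged file.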
biostars
{"uid": 309425, "view_count": 1589, "vote_count": 1}
I followed the guide [here][1] and I was able to get almost to the end of the installation. At the point where I have to install the Genome module with sudo `which cpanm` -n http://apt.genome.wustl.edu/ubuntu/pool/main/g/genome/genome_0.80.1-3.tar.gz I get the following error (from the log.file): ``` Checking dependencies from MYMETA.json ... Checking if you have IO::String 0 ... Yes (1.08) Checking if you have IO::File 0 ... Yes (1.16) Checking if you have Getopt::Complete 0 ... Yes (0.26) Checking if you have Carp 0 ... Yes (1.29) Checking if you have File::Temp 0 ... Yes (0.23) Checking if you have File::Basename 0 ... Yes (2.84) Checking if you have UR 0.29 ... Yes (0.43) Checking if you have Sys::Hostname 0 ... Yes (1.17) Building Genome-0.080001 Building Genome failed to extract pod: : error in Genome::Site::local::MacBooks-MacBook-Pro: Bareword "Pro" not allowed while "strict subs" in use at (eval 206) line 1. at /Users/macbook/.cpanm/work/1429565645.21366/genome-0.80.1/blib/lib/Genome/Site.pm line 24. Genome::Site::BEGIN() called at /Users/macbook/.cpanm/work/1429565645.21366/genome-0.80.1/blib/lib/Genome/Site.pm line 36 eval {...} called at /Users/macbook/.cpanm/work/1429565645.21366/genome-0.80.1/blib/lib/Genome/Site.pm line 36 [ ... ] -> FAIL Installing http://apt.genome.wustl.edu/ubuntu/pool/main/g/genome/genome_0.80.1-3.tar.gz failed. See /Users/macbook/.cpanm/work/1429565645.21366/build.log for details. Retry with --force to force install it. ``` This tells me that there is a problem because of the word "Pro" from my MacBook Pro....however, I'm not sure how to fix this, any suggestions? I did try to use `--force` but it made no difference... [1]: https://www.biostars.org/p/62793/
Those instructions were tested on various Linux distributions, never on a Mac. One of the many differences between Linux and Mac OSX is how OSX allows a user-friendly ComputerName like "MacBooks MacBook Pro" that can contains spaces, while a linux hostname cannot. It doesn't perfectly fit your error message, but give it a shot. Run the following to change your Mac's ComputerName and HostName, and try again: ``` sudo scutil --set HostName "macbookpro" sudo scutil --set ComputerName "macbookpro" ```
biostars
{"uid": 138919, "view_count": 2342, "vote_count": 1}
Hi everyone, I'm having trouble trying to filter BLAST result outputs. I'm using a huge number of sequences as queries against a certain genome in a local tblastn, which gives me a .txt output. The thing is, I need to extract the best hits, which I've defined as the lowest e-value, for each genomic region that the genome is divided into. I tried sorting in Excel with the Filter command, but as the e-value is presented like '1.08e-108', Excel only considers the numbers before the 'e'. Then, in a hypothetical list containing the e-values 1.08e-108, 2.34e-10 and 1.03e-03, Excel always chooses 1.03e-03. The next thing I tried was sorting each genomic region using pandas, transforming the .txt output from BLAST into a dataframe for easier manipulation, but the same thing happened as in Excel. So for now I'm selecting each best hit manually, which is taking too much time. Here's an example of the output:

```
BrflORs150.1 KN907735.1 23.616 271 186 6 40 299 80310 81092 1.41e-12 75.1
BrflORs150.2 KN907735.1 24.242 264 178 6 41 296 80313 81062 7.55e-09 63.5
BrflORs155.1 KN907735.1 24.825 286 204 4 23 303 80253 81092 1.29e-17 92.4
BrflORs155.1 KN907735.1 22.388 268 188 7 33 290 181025 181798 1.24e-10 70.1
BrflORs155.1 KN907735.1 24.908 273 181 5 41 302 32141 32920 1.84e-10 69.7
BrflORs155.1 KN907735.1 24.254 268 187 7 39 298 191353 192132 2.81e-10 68.9
BrflORs155.1 KN907685.1 24.739 287 199 8 25 303 37370 38203 9.68e-13 77.0
BrflORs155.1 KN907685.1 25.926 297 189 12 20 301 14077 14919 9.72e-09 63.9
BrflORs155.1 KN909062.1 21.379 290 204 6 23 300 50032 49199 3.01e-12 75.5
BrflORs155.1 KN909062.1 23.132 281 198 5 27 298 33061 32246 7.06e-11 70.9
BrflORs155.1 KN907432.1 25.862 290 181 8 28 300 166293 165475 2.98e-11 72.0
BrflORs155.1 KN906695.1 26.102 295 191 9 22 303 463829 464671 1.27e-10 70.1
BrflORs155.1 KN906695.1 26.689 296 188 8 22 303 485691 486533 3.83e-10 68.6
```

From those, for example, for the KN907735.1 region I'd need to select only the hit with e-value 1.29e-17, because it is the lowest one.
Did you try the shell command `sort -g`? For your file it would be:

    sort -k11,11 -g yourFile.txt | sort -u -k2,2

The first sort orders all hits by e-value (the `-g` flag understands scientific notation like `1.08e-108`, which is what tripped up Excel); the second pass then keeps the first line, i.e. the lowest e-value, for each unique subject in column 2.
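An equivalent that makes the "first hit per region wins" logic explicit, in case your sort implementation behaves differently with `-u`, is a small awk filter after a single combined sort (assuming tabular BLAST output with the subject in column 2 and the e-value in column 11):

    sort -k2,2 -k11,11g yourFile.txt | awk '!seen[$2]++'

This sorts by region, then by ascending e-value within each region, and `!seen[$2]++` prints only the first (best) line per region.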
biostars
{"uid": 491301, "view_count": 827, "vote_count": 1}
I am trying to convert tab-delimited "name-seq" format to ">name\nseq\n". I think awk can do the job, but I need your help. Thanks for your help!

For example, from:

    CHP2_ABL1_1 ACGACAAGTGGGAGATGGAAC
    CHP2_ABL1_2 TAACTAGTCAAGTACTTACCCACTGAAA

to:

    >CHP2_ABL1_1
    ACGACAAGTGGGAGATGGAAC
    >CHP2_ABL1_2
    TAACTAGTCAAGTACTTACCCACTGAAA
Yes, oh my gawk gawk -F'\t' '{print ">"$1"\n"$2}' your.tsv
biostars
{"uid": 188055, "view_count": 1039, "vote_count": 1}
I have RNA-seq data from several samples. I checked the FastQC reports and the read quality seems good (HiSeq 2000). But the problem is that many reads map to intronic regions, and those regions have no reference exons (RefSeq, Ensembl, GENCODE). We don't know what they are. We guess the problem happened during library preparation; the concentration was low. Now the data has come out and we can't re-sequence, so we want to remove the reads mapped to intronic regions. Is there a method to do that? Or does anyone have an idea about the intronic reads? Thanks.
You can easily use [BEDOPS](http://code.google.com/p/bedops/) to solve this problem quickly. It includes `bedops` and various conversion scripts for putting data into BED format, which `bedops` can process. Assuming your reads are in [BAM](http://samtools.sourceforge.net/SAM1.pdf) format:

    $ bam2bed < reads.bam \
        | bedops --not-element-of -1 - introns.bed \
        > reads-not-in-introns.bed

The file `reads-not-in-introns.bed` is a sorted BED file containing all reads that do not overlap intronic elements. You can then pass this result to `bedmap` to do counting of reads over other region sets (whole-genome or subsets).

Note that we assume your introns are in BED format and are sorted, *e.g.*:

    $ sort-bed unsorted-introns.bed > introns.bed

Alternatively, if your introns are in some other format, say GTF, then BEDOPS [conversion scripts](http://code.google.com/p/bedops/wiki/conversion) will losslessly turn them into sorted BED, *e.g.*:

    $ gtf2bed < introns.gtf > introns.bed
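If you'd rather stay in BAM format for downstream tools, a bedtools equivalent (assuming the same sorted `introns.bed`) would be:

    bedtools intersect -abam reads.bam -b introns.bed -v > reads-not-in-introns.bam

where `-v` keeps only the reads with no intron overlap.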
biostars
{"uid": 73821, "view_count": 3976, "vote_count": 1}
I am working on a Mac and need to work with some raw files. Essentially I am working on a tool that handles MGF and mzML files. I am extremely frustrated by the fact that the only way to carry out this conversion seems to be either doing it on a different computer running Windows or installing Windows within Boot Camp or a virtual machine. Seriously, in the age of information and data, are we still being slowed down by vendor-supplied libraries that are old and incompatible? It reminds me of graphics card drivers for Linux 10-15 years ago. So frustrating..
Thermo has released a new version of their RAW file reader written for `.NET` [http://planetorbitrap.com/rawfilereader#.W9R8oZNKiUk][1]. This is only a library, so you'll need to use a tool written on top of it like [https://github.com/compomics/ThermoRawFileParser][2] in order to convert a RAW file to an open format, but they should run on Mac and *nix systems which `.NET` has been ported to. [1]: http://planetorbitrap.com/rawfilereader#.W9R8oZNKiUk [2]: https://github.com/compomics/ThermoRawFileParser
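A usage sketch via mono on macOS (the flag values are from memory, so double-check with `ThermoRawFileParser.exe --help`; as I recall `-f=0` writes MGF and `-f=1` mzML):

    mono ThermoRawFileParser.exe -i=/path/to/run.raw -o=/path/to/outdir -f=1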
biostars
{"uid": 342797, "view_count": 2790, "vote_count": 1}
Hello, I am trying to make a figure like the one in this image. I have used Geneious, but the resolution is very poor and not suitable for publication. I also tried the bio3d R package, but could not get the desired output. Can anybody suggest how I can obtain such a figure? I could not find a suitable function in the bio3d package to produce it. Please give some suggestions. Thank you.

![schematic of multiple sequence alignment][1]

[1]: https://i.ibb.co/2Wnt934/Screen-Shot-2020-12-15-at-13-10-38.png
You can use `geom_tile` in R: assign values to reference and alternate alleles (0 and 1 would work), then plot with position on the x axis and the sequence/sample on the y axis. The rest is aesthetic settings.
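A self-contained sketch with made-up data (the column names and colours are arbitrary):

    library(ggplot2)

    set.seed(1)
    # toy alignment: 5 sequences x 30 positions, ~15% alternate alleles
    aln <- expand.grid(pos = 1:30, seq = paste0("seq", 1:5))
    aln$allele <- factor(rbinom(nrow(aln), 1, 0.15),
                         levels = c(0, 1), labels = c("Ref", "Alt"))

    ggplot(aln, aes(x = pos, y = seq, fill = allele)) +
      geom_tile(colour = "grey70") +
      scale_fill_manual(values = c(Ref = "white", Alt = "firebrick")) +
      theme_minimal()

Export with `ggsave(..., dpi = 300)` (or as PDF/SVG) to avoid the resolution problems you hit in Geneious.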
biostars
{"uid": 479327, "view_count": 2449, "vote_count": 1}
Hi! My first run's results were produced by this command:

    tblastn -query query.fasta -db plant_DB -out output-evalue -html

and for some queries I got two hits, e.g.:

    Score = 160 bits (404), Expect = 1e-44, Method: Composition-based stats.
    Identities = 76/77 (99%), Positives = 77/77 (100%), Gaps = 0/77 (0%)
    Frame = +3

    Query 1016 GLPGVNGLSTEQRKRLTIAVELVANPSIIFMDEPTSGLDARAAAIVMRTVRNTVDTGRTV 1075
               GLPGV+GLSTEQRKRLTIAVELVANPSIIFMDEPTSGLDARAAAIVMRTVRNTVDTGRTV
    Sbjct 3    GLPGVDGLSTEQRKRLTIAVELVANPSIIFMDEPTSGLDARAAAIVMRTVRNTVDTGRTV 182

    Query 1076 VCTIHQPSIDIFEAFDE 1092
               VCTIHQPSIDIFEAFDE
    Sbjct 183  VCTIHQPSIDIFEAFDE 233

    Score = 50.4 bits (119), Expect = 2e-06, Method: Composition-based stats.
    Identities = 25/74 (34%), Positives = 45/74 (61%), Gaps = 1/74 (1%)
    Frame = +3

    Query 346 VRGISGGQRKRVTTGEMLVGPANALFMDEISTGLDSSTTFQIVKSLRQAIHILGGTAVIS 405
              V G+S QRKR+T LV + +FMDE ++GLD+ +++++R + G T V +
    Sbjct 15  VDGLSTEQRKRLTIAVELVANPSIIFMDEPTSGLDARAAAIVMRTVRNTVDT-GRTVVCT 191

    Query 406 LLQPAPETYDLFDD 419
              + QP+ + ++ FD+
    Sbjct 192 IHQPSIDIFEAFDE 233

I want to get back only the first hit, which is why I changed the `-evalue` parameter like this:

    tblastn -query query.fasta -db plant_DB -out output-evalue -html -evalue 3.1

but the results are still the same. Any ideas on how to filter them better? Thank you.
The [E-value threshold in BLAST][1] is an upper limit, so a threshold of '3.1' is greater than the E-value reported for the hits (`1e-44` and `2e-06`), and thus they are reported. The use of [scientific notation][2] for the E-value, can be a bit confusing since the exponent has a large magnitude for very small numbers. In this case we have: - `1e-44` = `1 * 10^-44` which is less than `3.1 * 10^0` - `2e-06` = `2 * 10^-6` which is less than `3.1 * 10^0`, or in normal decimal notation: `0.000002 < 3.1` As noted by [Devon][3], in order to exclude the second hit by E-value, you would need to use a value between `1e-44` and `2e-06`, say `1e-07`. [1]: http://www.ncbi.nlm.nih.gov/blast/Blast.cgi?CMD=Web&PAGE_TYPE=BlastDocs&DOC_TYPE=FAQ#expect [2]: http://en.wikipedia.org/wiki/Scientific_notation [3]: /u/7403/
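So, concretely, re-running with a threshold between the two reported values should keep only the strong hit:

    tblastn -query query.fasta -db plant_DB -out output-evalue -html -evalue 1e-07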
biostars
{"uid": 104379, "view_count": 3594, "vote_count": 1}
I'm using limma + voom to model an expression dataset, but I'm observing a weird subset of genes where the standard deviation increases with the expression level rather than decreasing as is the case for most genes. Any ideas for why this is occurring and/or what to do about it? I have found lots of advice regarding oddities at low-expression levels, but not this pattern. Thanks! ![enter image description here][1] [1]: /media/images/84921512-45ab-45b6-917b-30acaf2e
The high-variance genes probably have almost all counts equal to zero with just one or two very large non-zero counts. Normally such genes would be filtered out by `filterByExpr`. The sort of pattern you see can also be caused by a hidden batch effect that affects a minority of genes and which is not accounted for by your design matrix. I would start by identifying the weird genes and examining their expression pattern, which may tell you something about quality or annotation issues with your data. Then you can either revise your filtering strategy to remove those genes or you can use `eBayes()` with `robust=TRUE` so that the high-variance genes will be isolated and their influence will be minimized.
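A sketch of both suggestions in code (assuming a DGEList `dge` and your design matrix `design`):

    keep <- filterByExpr(dge, design = design)
    dge  <- dge[keep, , keep.lib.sizes = FALSE]
    v    <- voom(dge, design, plot = TRUE)
    fit  <- lmFit(v, design)
    fit  <- eBayes(fit, robust = TRUE)   # robust shrinkage downweights outlier-variance genes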
biostars
{"uid": 9490022, "view_count": 1124, "vote_count": 1}
Hi there, I'm using [svtyper][1] to get the genotype of some structural variants called using `Lumpy`. svtyper starts running and everything seems OK until it arrives at a specific variant in the input VCF file. At this record, svtyper crashes with the following error:

```
File "pysam/calignmentfile.pyx", line 836, in pysam.calignmentfile.AlignmentFile.mate (pysam/calignmentfile.c:10945)
ValueError: mate not found
```

svtyper calls pysam at some point to get the mate pairs. I've read the code of pysam, and this is what I've found:

    Throws a ValueError if read is unpaired or the mate is unmapped.

So, I think that svtyper crashes due to the presence of a read with an unmapped mate supporting the SV. If this is the true reason for the error, does this make sense? I mean, I'm also interested in reads whose mate is unmapped; they are a good pointer to the presence of an SV at that position, aren't they?

Any ideas?

[1]: https://github.com/cc2qe/svtyper
Well, I don't know exactly where the problem comes from, but I found a "temporary" solution, just in case anyone else is struggling with this. Using the `samblaster` tool (available here: https://github.com/GregoryFaust/samblaster) with `--addMateTags`, mate tags (MC, MQ) are added to the BAM file. Without them, `svtyper` must seek to the corresponding mate position in the BAM file. With this approach, `svtyper`, and therefore `pysam`, works without any error, and it is also faster.
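A sketch of where `--addMateTags` fits in a typical alignment pipeline (assuming paired-end reads aligned with bwa mem; file names are placeholders):

    bwa mem ref.fa reads_1.fastq reads_2.fastq \
        | samblaster --addMateTags \
        | samtools sort -o sample.mateTagged.bam -
    samtools index sample.mateTagged.bam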
biostars
{"uid": 139979, "view_count": 2676, "vote_count": 1}
Hi! I have an issue with looping over a list containing 54 data frames (converted to tibbles in this case). The list content looks like:

    > result
    $`sample1prnk _ mir.db`
    # A tibble: 2,368 x 8
    pathway pval padj ES NES nMoreExtreme size leadingEdge
    <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int> <list>
    1 MIR4795_3P 0.000114 0.000300 -0.432 -2.03 0 483 <chr [255]>
    2 MIR5696 0.000114 0.000300 -0.495 -2.33 0 493 <chr [303]>
    3 MIR4659A_3P_MIR4659B_3P 0.000114 0.000300 -0.526 -2.47 0 479 <chr [260]>
    4 MIR7_1_3P 0.000114 0.000300 -0.549 -2.59 0 491 <chr [297]>
    5 MIR7_2_3P 0.000114 0.000300 -0.548 -2.58 0 491 <chr [297]>
    6 MIR3671 0.000114 0.000300 -0.530 -2.49 0 463 <chr [298]>
    7 MIR1468_3P 0.000113 0.000300 -0.526 -2.48 0 496 <chr [279]>
    8 MIR548N 0.000114 0.000300 -0.553 -2.60 0 476 <chr [264]>
    9 MIR4328 0.000115 0.000300 -0.518 -2.42 0 445 <chr [228]>
    10 MIR548H_3P_MIR548Z 0.000113 0.000300 -0.503 -2.37 0 500 <chr [298]>

    $`sample1prnk _ positional.db`
    # A tibble: 221 x 8
    pathway pval padj ES NES nMoreExtreme size leadingEdge
    <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <int> <list>
    1 chr10p11 0.0548 0.121 -0.503 -1.47 338 22 <chr [13]>
    2 chr10p12 0.00248 0.0196 -0.566 -1.82 15 33 <chr [16]>
    3 chr10p13 0.0133 0.0483 -0.564 -1.66 81 23 <chr [8]>
    4 chr10p15 0.440 0.590 0.305 1.01 1609 27 <chr [2]>
    5 chr10q11 0.0538 0.120 -0.423 -1.42 349 41 <chr [8]>
    6 chr10q21 0.0104 0.0432 -0.533 -1.66 65 29 <chr [11]>
    7 chr10q22 0.615 0.727 0.219 0.928 1872 80 <chr [7]>
    8 chr10q23 0.00254 0.0196 -0.481 -1.73 16 57 <chr [18]>
    9 chr10q24 0.876 0.913 -0.198 -0.777 6211 96 <chr [27]>
    10 chr10q25 0.0107 0.0438 -0.513 -1.65 68 33 <chr [19]>
    # … with 211 more rows

The operation I want to execute is to sort each tibble by its NES value and then keep the rows with padj < 0.05. For this purpose, I'm using the dplyr functions arrange(desc(NES)) and filter(padj < 0.05).

For one element of the list I ran

    result[[1]] %>% arrange(desc(NES)) %>% filter(padj < 0.05)

or

    result$`sample1prnk _ mir.db` %>% arrange(desc(NES)) %>% filter(padj < 0.05)

and the output was as I expected. However, when I try to loop the operation using:

    for (i in 1:length(result)) {
      result[[i]] %>% arrange(desc(NES)) %>% filter(padj < 0.05)
    }

nothing happens. I need your help to solve this issue!

Rodo.
Using map (the tidyverse equivalent to lapply) and an anonymous function makes this pretty easy. library("tidyverse") result <- map(result, ~filter(.x, padj < 0.05) %>% arrange(desc(NES))) The equivalent in base R using lapply. result <- lapply(result, function(x) { x <- x[x$padj < 0.05, ] x <- x[order(x$NES, decreasing=TRUE), ] }) If you have a lot of data, the data.table library will be quicker. library("data.table") result <- lapply(result, function(x) { setDT(x) x <- x[padj < 0.05][order(-NES)] })
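For the record, the original loop appeared to do nothing because the result of the pipe was never assigned back into the list, so each filtered tibble was computed and then discarded. Assigning it makes the loop work too:

    for (i in seq_along(result)) {
      result[[i]] <- result[[i]] %>% arrange(desc(NES)) %>% filter(padj < 0.05)
    }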
biostars
{"uid": 470274, "view_count": 894, "vote_count": 1}
I have two different ATAC-seq libraries that I wish to compare on a genome browser. I have used MACS to generate bedGraph files for each individual library using the command:

    macs2 callpeak -t bamfile --outdir /path/to/ -f BAMPE --keep-dup all --pvalue 1e-2 --call-summits --bdg

I then convert the bdg to bigWig. How can I best normalise the two ATAC-seq libraries that I want to compare by library size? Can I do this in MACS? Or do I do it after I have generated the separate bdg files?

Thank you,
I never use these bedGraphs from macs2. They always look kind of weird (a very scientific statement, I know). You can instead generate a counts-per-million (CPM) normalized bigWig with this command, using the latest deeptools version:

    bamCoverage --bam in.bam -o out_normalized.bigwig -bs 1 --normalizeUsing CPM -e
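To put the two libraries on the same scale, you would run this once per BAM; optionally, deeptools' `bigwigCompare` can then produce a single log2-ratio track for browsing (file names below are placeholders):

    bamCoverage --bam lib1.bam -o lib1_cpm.bigwig -bs 1 --normalizeUsing CPM -e
    bamCoverage --bam lib2.bam -o lib2_cpm.bigwig -bs 1 --normalizeUsing CPM -e

    # optional: one track showing log2(lib1 / lib2)
    bigwigCompare -b1 lib1_cpm.bigwig -b2 lib2_cpm.bigwig --operation log2 -o lib1_vs_lib2_log2.bigwig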
biostars
{"uid": 325946, "view_count": 4762, "vote_count": 1}
I am trying to call peaks in ATAC-seq data. Not surprisingly, MACS is a popular option. According to the [MACS documentation][1]: > 1. To find enriched cutting sites such as some DNAse-Seq datasets. In this case, all 5' ends of sequenced reads should be extended in both > direction to smooth the pileup signals. If the wanted smoothing window > is 200bps, then use '--nomodel --shift -100 --extsize 200'. > > 2. For certain nucleosome-seq data, we need to pileup the centers of nucleosomes using a half-nucleosome size for wavelet analysis (e.g. > NPS algorithm). Since the DNA wrapped on nucleosome is about 147bps, > this option can be used: '--nomodel --shift 37 --extsize 73'. Based on a brief literature review, people use both `--shift -100 --extsize 200` and `--shift 37 --extsize 73` for ATAC-seq. Is one option more appropriate? Are there maybe different sub-types of ATAC-seq where one is better than the other? [1]: https://github.com/taoliu/MACS
I got another alternative from the MACS developer: > If you followed original protocol for ATAC-Seq, you should get > Paired-End reads. If so, I would suggest you just use "--format BAMPE" > to let MACS2 pileup the whole fragments in general. But if you want to > focus on looking for where the 'cutting sites' are, then '--nomodel > --shift -100 --extsize 200' should work. And then more recently: > Why do so many papers use --shift -100 --extsize 200 for MACS2 rather > than -f BAMPE if this is not recommended for paired-end data? > > --shift -100 --extsize 200 will amplify the 'cutting sites' enrichment from ATAC-seq data. So in the end, the 'peak' is where Tn5 transposase > likes to attack. The fact is that, although many information such as > the insertion length and the other mate alignment is ignored, such > result is still usable. Especially when the short fragment population > is extremely dominant, the final output won't be off much. Source: https://github.com/taoliu/MACS/issues/145
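To make the two modes concrete, here is a sketch of both invocations (sample names are placeholders; genome size and other options are omitted):

    # general ATAC-seq peak calling: pile up whole fragments from paired-end data
    macs2 callpeak -t sample.bam -f BAMPE -n sample_fragments --keep-dup all

    # cut-site-centric calling: treat each 5' read end as a Tn5 cut site, smoothed over 200 bp
    macs2 callpeak -t sample.bam -f BAM -n sample_cutsites --nomodel --shift -100 --extsize 200 --keep-dup all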
biostars
{"uid": 209592, "view_count": 31376, "vote_count": 22}
I've been searching for some time, but I cannot find any md5sums that come with FASTQ files from the [SRA][1]. I'm currently using sra-tools' fastq-dump to get them from a script. On their website, only [analyses][2] seem to have md5sums.

[1]: http://trace.ncbi.nlm.nih.gov/Traces/sra/
[2]: http://trace.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?analysis=DRZ000001
The SRA archive format ("vdb") contains an md5 checksum as well as a few other consistency checks (I think). The sra-toolkit has a utility, [vdb-validate][1] which will report any errors in the data, and perform an md5 checksum comparison. [1]: http://www.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?view=toolkit_doc&f=vdb-validate
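Usage is a one-liner; a sketch with a placeholder accession (exact paths can vary with toolkit version and configuration):

    # fetch the .sra archive, then verify its embedded md5 checksums
    prefetch SRRXXXXXXX
    vdb-validate SRRXXXXXXX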
biostars
{"uid": 147148, "view_count": 12737, "vote_count": 6}