Columns: INSTRUCTION, RESPONSE, SOURCE, METADATA
Hello Everyone, I have a fastq file and I want to extract only those reads whose length is greater than 25 bp, and write them to another fastq file. How can I do this? These are the top 100 lines of my fastq file:
```
@SRR1024131.1 DBRHHJN1:259:D0PM7ACXX:1:1101:1911:1053 length=100
AGGGCAAGTATGAAGAAGTAGAATATT
+SRR1024131.1 DBRHHJN1:259:D0PM7ACXX:1:1101:1911:1053 length=100
DDFHHFHHGGGHGGIFHIJIIDIIJJI
@SRR1024131.2 DBRHHJN1:259:D0PM7ACXX:1:1101:2522:1198 length=100
GGCTCAACTTTCGATGGT
+SRR1024131.2 DBRHHJN1:259:D0PM7ACXX:1:1101:2522:1198 length=100
FFFGHHHHJJJJJGFIJF
@SRR1024131.3 DBRHHJN1:259:D0PM7ACXX:1:1101:3117:1165 length=100
ACATTTTTGAGTGCTTACTACAGT
+SRR1024131.3 DBRHHJN1:259:D0PM7ACXX:1:1101:3117:1165 length=100
FFFHHHHHHHIHEHHFGHFHHGII
@SRR1024131.4 DBRHHJN1:259:D0PM7ACXX:1:1101:3474:1075 length=100
TAGTACTTAGCAAAGAGTGA
+SRR1024131.4 DBRHHJN1:259:D0PM7ACXX:1:1101:3474:1075 length=100
DDDFHDFHIAGHIGHG@33A
@SRR1024131.5 DBRHHJN1:259:D0PM7ACXX:1:1101:3952:1099 length=100
TGAGAACTGAATTCCATAGGCTGT
+SRR1024131.5 DBRHHJN1:259:D0PM7ACXX:1:1101:3952:1099 length=100
EFFHGHHHHJIJJJJJIBFHEHIG
@SRR1024131.9 DBRHHJN1:259:D0PM7ACXX:1:1101:5277:1092 length=100
GCGGCGGCGTTATTCCCATGACCCGCCGG
+SRR1024131.9 DBRHHJN1:259:D0PM7ACXX:1:1101:5277:1092 length=100
FDDDHHDHI@B>=B>?@BD>ACCCBC@BB
@SRR1024131.11 DBRHHJN1:259:D0PM7ACXX:1:1101:6019:1101 length=100
AGTAGATTTGTATGGATTT
+SRR1024131.11 DBRHHJN1:259:D0PM7ACXX:1:1101:6019:1101 length=100
DDDHHFFHIGHAGHEFIII
@SRR1024131.14 DBRHHJN1:259:D0PM7ACXX:1:1101:8423:1248 length=100
AGTCGGTGATGGGAGTCTCT
+SRR1024131.14 DBRHHJN1:259:D0PM7ACXX:1:1101:8423:1248 length=100
FFFHHHFHIJIIJIJBHIJJ
@SRR1024131.15 DBRHHJN1:259:D0PM7ACXX:1:1101:9484:1233 length=100
TGCTGGGTCACACCTGAAGCT
+SRR1024131.15 DBRHHJN1:259:D0PM7ACXX:1:1101:9484:1233 length=100
FFFHHGHFHIJHHHJJHHIJJ
@SRR1024131.16 DBRHHJN1:259:D0PM7ACXX:1:1101:9807:1100 length=100
ACTATTCCAGCGAGAGTTAACATAAATTCCAAT
+SRR1024131.16 DBRHHJN1:259:D0PM7ACXX:1:1101:9807:1100 length=100
FFFHHHHHJJIJJJJIHHGHIJJGJJJJIIJJI
@SRR1024131.17 DBRHHJN1:259:D0PM7ACXX:1:1101:10857:1034 length=100
TAATCATTTTAATTGTACAGTTCAGTAATGT
+SRR1024131.17 DBRHHJN1:259:D0PM7ACXX:1:1101:10857:1034 length=100
B?3CDFBFFFFFIIF:EFHAHIC?FE+ABHH
@SRR1024131.19 DBRHHJN1:259:D0PM7ACXX:1:1101:13257:1082 length=100
ATGTGTTTGTAGGTTGTTTGTTGTCTTTA
+SRR1024131.19 DBRHHJN1:259:D0PM7ACXX:1:1101:13257:1082 length=100
DFFHHHHHJFHHIHHJFGHIJJIFIIIIG
@SRR1024131.20 DBRHHJN1:259:D0PM7ACXX:1:1101:14103:1161 length=100
TGAGGTAGTAGGTTGTATAGTT
+SRR1024131.20 DBRHHJN1:259:D0PM7ACXX:1:1101:14103:1161 length=100
FFEHFCFHFGHGEFHC<HHIED
@SRR1024131.21 DBRHHJN1:259:D0PM7ACXX:1:1101:16005:1093 length=100
TTCTCTCTCTCTGTGTGTGCGTGTGTGTGTGT
+SRR1024131.21 DBRHHJN1:259:D0PM7ACXX:1:1101:16005:1093 length=100
DDFGHGGFJGIJFIBCBAFHHCGGFDCFGFED
@SRR1024131.24 DBRHHJN1:259:D0PM7ACXX:1:1101:17113:1023 length=100
TCCCTGAGACCCTAACTTGTGA
+SRR1024131.24 DBRHHJN1:259:D0PM7ACXX:1:1101:17113:1023 length=100
FFFHHHHHJJJIIJJIJJJJJJ
@SRR1024131.26 DBRHHJN1:259:D0PM7ACXX:1:1101:18596:1025 length=100
TGAGGTAGGAGGTTGTATAGTTAT
+SRR1024131.26 DBRHHJN1:259:D0PM7ACXX:1:1101:18596:1025 length=100
DDDDDACDEEEE:AF3CE@A9ABE
@SRR1024131.27 DBRHHJN1:259:D0PM7ACXX:1:1101:19286:1068 length=100
TCCCTGAGACCCTAACTTGTGA
+SRR1024131.27 DBRHHJN1:259:D0PM7ACXX:1:1101:19286:1068 length=100
DDDFHHHHIIIGG;CEGIEHHG
@SRR1024131.28 DBRHHJN1:259:D0PM7ACXX:1:1101:20016:1230 length=100
CAAATAATTACAGTTAT
+SRR1024131.28 DBRHHJN1:259:D0PM7ACXX:1:1101:20016:1230 length=100
DFFGBFBHG@HGHHGFA
@SRR1024131.29 DBRHHJN1:259:D0PM7ACXX:1:1101:20465:1216 length=100
GTTACGCTCGCCTTGGCCGT
+SRR1024131.29 DBRHHJN1:259:D0PM7ACXX:1:1101:20465:1216 length=100
FFFGHHHHJJJJGGHIFHGD
@SRR1024131.30 DBRHHJN1:259:D0PM7ACXX:1:1101:20573:1152 length=100
AGAAGGAACTTTTACAACTGTGTGGTTTT
+SRR1024131.30 DBRHHJN1:259:D0PM7ACXX:1:1101:20573:1152 length=100
DDBDBB+AFHGE>@<C<?:AA@HEE:)?F
@SRR1024131.32 DBRHHJN1:259:D0PM7ACXX:1:1101:21322:1217 length=100
ATTACTGAAGAAAAGTTTACCT
+SRR1024131.32 DBRHHJN1:259:D0PM7ACXX:1:1101:21322:1217 length=100
AADHHHHB<:EEF;C22A22AC
@SRR1024131.35 DBRHHJN1:259:D0PM7ACXX:1:1101:4318:1259 length=100
AAAAGCATTCATCAGCCCAA
+SRR1024131.35 DBRHHJN1:259:D0PM7ACXX:1:1101:4318:1259 length=100
FFFGHGHHJGCIJFGGIJII
@SRR1024131.36 DBRHHJN1:259:D0PM7ACXX:1:1101:4391:1407 length=100
CTGGACTCTTACTGCGTTTCATACATCT
+SRR1024131.36 DBRHHJN1:259:D0PM7ACXX:1:1101:4391:1407 length=100
FFFH?HHHIGGIGIII<FBEHIIIEIGE
@SRR1024131.39 DBRHHJN1:259:D0PM7ACXX:1:1101:6327:1406 length=100
AAGTACGCACGGCCGGTACAGTGAAG
+SRR1024131.39 DBRHHJN1:259:D0PM7ACXX:1:1101:6327:1406 length=100
FFFHGHHHIJIGIIII0?FHGHIJGH
@SRR1024131.43 DBRHHJN1:259:D0PM7ACXX:1:1101:7579:1334 length=100
TGTGTATAAATGTATTT
+SRR1024131.43 DBRHHJN1:259:D0PM7ACXX:1:1101:7579:1334 length=100
FFFHHHGHJJJJHGIJJ
```
Any help!! Regards Varun
I believe you could also use [seqtk][1] to do this: seqtk seq -L 25 yourseqs.fastq.gz > cleanseqs.fastq.gz [1]: https://github.com/lh3/seqtk/
biostars
{"uid": 105428, "view_count": 5462, "vote_count": 3}
I am trying to retrieve mouse mm10 gene information using the biomaRt library in R, but I don't know how to do that. (The information that I need is `mm10.knownGene.name`, `mm10.knownGene.chrom`, `mm10.knownGene.strand`, `mm10.knownGene.txStart`, `mm10.knownGene.txEnd` and `mm10.kgXref.geneSymbol`.)
```r
source("http://bioconductor.org/biocLite.R")
biocLite("biomaRt")
library(biomaRt)

mouse = useMart("ensembl", dataset = "mmusculus_gene_ensembl")
listFilters(mouse)
getBM(attributes=c("ensembl_gene_id", "mgi_symbol"), filters= "mgi_symbol", mart=mouse)
```
If you just want mm10 symbol, chr, strand, transcript start & end, you could do this:
```r
res <- getBM(attributes = c("ensembl_gene_id", "mgi_symbol","chromosome_name",'strand','transcript_start','transcript_end'), mart = mouse)
```
If you have a list of genes:
```r
# genesym is a character vector of gene symbols
res <- getBM(attributes = c("ensembl_gene_id", "mgi_symbol","chromosome_name",'strand','transcript_start','transcript_end'), filters = "mgi_symbol", values = genesym, mart = mouse)
```
biostars
{"uid": 147351, "view_count": 21742, "vote_count": 4}
I have in excess of 400 BED files of transcription factor binding site coordinates that I want to compare with one master BED file of binding regions. The aim is to identify which of the ~400 TFBS overlap with the master file. Normally I would go straight to bedtools, but given the number of TFBS files I wonder if there is a "better" method? [bedtools intersect][1] would appear to do the trick, but for 400 files....? The type of output I am looking to get is: ``` master_pos1 TFBS8, TFBS16, TFBS200 master_pos2 TFBS1, TFBS333 .. ``` Thank you [1]: http://bedtools.readthedocs.org/en/latest/content/tools/intersect.html
If your BED files are sorted, you could use [*bedops*][1] to union all the TFBS to standard output, and pipe that result to [*bedmap*][2] to do the mapping of TFs to master regions. (This technique assumes that the TFBS files are minimally BED4. That is, the fourth column in each TFBS file contains the ID of the TF. If that is not the case, describe your format in more detail and I'll suggest a quick one-liner with *awk* to fix up files into the correct form.) Here's the one-liner that unions and maps: $ bedops --everything tfbs*.bed | bedmap --echo --echo-map-id-uniq --delim '\t' master.bed - > answer.bed Piping to standard output avoids the unnecessary step of making an intermediate file somewhere on the hard drive, which is otherwise very expensive in time. So this should be very fast. Assuming that the ID fields in each of `tfbs001.bed` through `tfbs400.bed` contain the desired TF names or other identifiers of choice, the file `answer.bed` contains the results as you expect, except that it uses a semi-colon as an ID delimiter, instead of a comma. You could add `--multidelim ','` to the *bedmap* statement, if that is a requirement. If your BED files are not sorted, you could first prepare them with BEDOPS [*sort-bed*][3], which is faster at sorting BED files than GNU *sort*. ``` $ for tfbs_fn in `ls tfbs*.bed`; do sort-bed $tfbs_fn > sorted.$tfbs_fn; done $ sort-bed master.bed > sorted.master.bed ``` Then use the sorted files in downstream BED ops. You only need to sort once. [1]: http://bedops.readthedocs.org/en/latest/content/reference/set-operations/bedops.html [2]: http://bedops.readthedocs.org/en/latest/content/reference/statistics/bedmap.html [3]: http://bedops.readthedocs.org/en/latest/content/reference/file-management/sorting/sort-bed.html
biostars
{"uid": 152829, "view_count": 2719, "vote_count": 2}
Dear all, I am trying to generate exactly the same result table in limma as I did in DESeq2 (differential expression analysis). With the topTable function I am not getting lfcSE and baseMean. Can anyone tell me how to extract this information from limma? I do get logFC, AveExpr, t, P.Value, adj.P.Val and B in limma. Thank you, Bine
It would be superfluous to include a SE column in the limma output because it is not needed and because it is immediately computable from the other columns.
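For reference, here is a minimal sketch of how to recover it yourself, assuming a standard limma pipeline with a fit object called `fit2` (a hypothetical name) that has been through `eBayes()`. Because the moderated t-statistic is the log fold change divided by its moderated standard error, SE = logFC / t:

```r
library(limma)

# assumes fit2 <- eBayes(lmFit(expr, design)) has already been run (hypothetical names)
tt <- topTable(fit2, coef = 1, number = Inf)

# moderated standard error of the log2 fold change
tt$lfcSE <- tt$logFC / tt$t
# equivalently, from the fit object itself:
# sqrt(fit2$s2.post) * fit2$stdev.unscaled[, 1]

# AveExpr (mean log2 expression) is the closest analogue of DESeq2's baseMean,
# although it is on a log scale rather than a normalized-count scale
head(tt)
```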
biostars
{"uid": 9539579, "view_count": 649, "vote_count": 1}
hi, I am trying to read my CEL files from the [Mouse430_2] Affymetrix Mouse Genome 430 2.0 Array, but I get this error:

    library(affy)
    > Data<-ReadAffy()
    Error in affyio::read_abatch(filenames, rm.mask, rm.outliers, rm.extra,  :
      Cel file C:/Users/Lenovo/Desktop/GSE50833_RAW/GSE10000_RAW/GSM44660.CEL/GSM252007.CEL does not seem to have the correct dimensions

    > celfiles<- list.files("GSE10000/CEL", full = TRUE)
    > rawData<- read.celfiles(celfiles)
    All the CEL files must be of the same type.
    Error: checkChipTypes(filenames, verbose, "affymetrix", TRUE) is not TRUE

How can I read my CEL files for normalization?
The code you show is intriguing because you have in the error message a reference to the GSE10000 dataset, which is indeed Mouse430_2 arrays, but also to GSE50833, which are Agilent-028005 SurePrint G3 arrays. At any rate, the error suggests that you are trying to read different arrays with ReadAffy() and that fails because different arrays have different dimensions. First thing I would try myself is to make sure that all the files are of the same platform/array. **EDIT** I could replicate the problem and confirm my guess using the following experiment (there must be a better way to do this than reading the whole set of files one by one): f <- list.files(pattern = "CEL.gz") celf <- lapply(f, function(x) ReadAffy(filenames = x)) table(sapply(celf, annotation)) mouse4302 mouse430a2 18 17 The solution is to read them separately. **EDIT 2** OK, this is the most effective (fast) way to check the chip type of a bunch of cel files: library(affyio) f <- list.files(pattern = "CEL.gz") table(sapply(f, function(x) read.celfile.header(x)$cdfName)) Mouse430_2 Mouse430A_2 18 17 **EDIT 3** And this is how you can use the information above to read the files in different batches: ff <- split(f, sapply(f, function(x) read.celfile.header(x)$cdfName)) ff $Mouse430_2 [1] "GSM250879.CEL.gz" "GSM250880.CEL.gz" "GSM250881.CEL.gz" "GSM250882.CEL.gz" "GSM250919.CEL.gz" "GSM250920.CEL.gz" [7] "GSM250922.CEL.gz" "GSM250923.CEL.gz" "GSM250925.CEL.gz" "GSM250927.CEL.gz" "GSM250928.CEL.gz" "GSM250943.CEL.gz" [13] "GSM44658.CEL.gz" "GSM44659.CEL.gz" "GSM44660.CEL.gz" "GSM44661.CEL.gz" "GSM44662.CEL.gz" "GSM44663.CEL.gz" $Mouse430A_2 [1] "GSM252007.CEL.gz" "GSM252008.CEL.gz" "GSM252009.CEL.gz" "GSM252010.CEL.gz" "GSM252011.CEL.gz" "GSM252014.CEL.gz" [7] "GSM252015.CEL.gz" "GSM252016.CEL.gz" "GSM252017.CEL.gz" "GSM252018.CEL.gz" "GSM252021.CEL.gz" "GSM252022.CEL.gz" [13] "GSM252033.CEL.gz" "GSM252040.CEL.gz" "GSM252051.CEL.gz" "GSM252052.CEL.gz" "GSM252053.CEL.gz" library(affy) abatch1 <- ReadAffy(filenames = ff$Mouse430_2) abatch2 <- ReadAffy(filenames = ff$Mouse430A_2) And so on.
biostars
{"uid": 219523, "view_count": 13232, "vote_count": 3}
Dear community, I have a huge paired-end HiC dataset (BAM format) which I want to format like this:
```
HWI-D00283:117:C5KKJANXX:2:1101:1139:77789 chr6 153338506 153338556 37 + chr6 153338031 153338081 37 -
HWI-D00283:117:C5KKJANXX:2:1101:1139:77856 chr6 149915169 149915219 37 - chr6 149914908 149914958 37 +
HWI-D00283:117:C5KKJANXX:2:1101:1139:79414 chr4 184474969 184475019 37 - chr4 184474811 184474861 37 +
HWI-D00283:117:C5KKJANXX:2:1101:1139:81280 chr6 153641723 153641773 37 - chr6 153641551 153641601 37 +
HWI-D00283:117:C5KKJANXX:2:1101:1139:81917 chr8 87070282 87070332 37 - chr8 87069851 87069901 37 +
HWI-D00283:117:C5KKJANXX:2:1101:1139:82575 chr17 56970884 56970934 37 - chr6 151400450 151400500 37 -
HWI-D00283:117:C5KKJANXX:2:1101:1139:86642 chr6 150043041 150043091 37 - chr6 150042915 150042965 37 +
```
This is an example which I obtained by first converting the BAM format to BED, separating each mate into different files, and then joining the mates with an AWK command. This is the awk command I used:

    awk 'NR==FNR {h[$4] = $1"\t"$2"\t"$3"\t"$5"\t"$6; next} {OFS="\t"; print $4,$1,$2,$3,$5,$6,h[$4]}' mate1 mate2

This command worked fine with a small dataset (1M, 10M reads), but when I tried it with a 200M-read file it crashed, I suppose because of memory. Is there a way to efficiently join paired-end reads as I showed in my example? Thanks!
Extract the mapped F (first-in-pair) reads and sort on name:

    samtools view -f 64 -F 3976 your.bam | LC_ALL=C sort -k1,1 > F.txt

Extract the mapped R (second-in-pair) reads and sort on name:

    samtools view -f 128 -F 3976 your.bam | LC_ALL=C sort -k1,1 > R.txt

Join both files on the read name and format the output with awk:

    LC_ALL=C join -t ' ' -1 1 -2 1 F.txt R.txt | awk -f your.script > result.txt
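For completeness, here is one way `your.script` could look. This is only a hypothetical sketch, not the original poster's script: it assumes you additionally pipe both `samtools view` commands through `cut -f 1-5` (read name, FLAG, chromosome, position, MAPQ) before sorting, so that the joined line has fixed column positions, and it assumes a fixed read length of 50 bp for the end coordinate, as in the example output above.

```awk
# joined input columns: name flag1 chr1 pos1 mapq1 flag2 chr2 pos2 mapq2
BEGIN { OFS = "\t"; RL = 50 }                 # RL: read length (adjust to your data)
{
    s1 = (int($2 / 16) % 2) ? "-" : "+"       # SAM FLAG bit 0x10 = read on reverse strand
    s2 = (int($6 / 16) % 2) ? "-" : "+"
    print $1, $3, $4, $4 + RL, $5, s1, $7, $8, $8 + RL, $9, s2
}
```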
biostars
{"uid": 119565, "view_count": 2490, "vote_count": 1}
I have a large number of short patterns (5-10 "letters", or amino acids in this context). I want to search for these in a large database of "texts" (proteins). Are there any existing packages which provide efficient algorithms for this? Ideally they would be:

- Handle 140k patterns and 280k texts
- Able to handle ambiguous residues such as 'J' for [IL] (this can be faked by duplicating some patterns)
- python is preferred (I'm already using BioPython, but it is missing this afaik)

I'm currently using a naive O(n^2) search, but it's taking too long. I think this problem is generally solved using keyword trees (Aho-Corasick algorithm), but I'd rather not implement that myself unless I have to.

*Update*

I should have specified that for output I need both the text and the pattern which matched it. This prevents some of the otherwise great implementations (eg fgrep, ahocorasick) from solving my problem.
I ended up using the [esmre](http://code.google.com/p/esmre/) library, as suggested by [dariober](https://www.biostars.org/u/6141/) in the comments. It dropped my run time from hours (with a brute force search) to seconds. Full code is on [github](https://github.com/sbliven/proteinsearch), but the important parts are quite simple:

    import esmre

    vocab = esmre.Index()
    for word in patterns:
        vocab.enter(word, word)

    for text in texts:
        matches = vocab.query(text)
        for match in matches:
            print "Match for %s in %s" % (match, text)
biostars
{"uid": 100719, "view_count": 4042, "vote_count": 2}
Hello, I'm starting to use cutadapt instead of sickle, and I'm not sure I understand how it works exactly. I tried to use a minimum length of 20 bp and a minimum quality score of 20, and the results are super different between the three programs.

This is the cutadapt output:

    cutadapt -q 20 -m 20 -o output_q20_m20.fastq input.fastq

    This is cutadapt 1.13 with Python 2.7.9
    Command line parameters: -q 20 -m 20 -o output_q20_m20.fastq input.fastq
    Trimming 0 adapters with at most 10.0% errors in single-end mode ...
    Finished in 12.03 s (3 us/read; 17.89 M reads/minute).

    === Summary ===

    Total reads processed:           3,587,045
    Reads with adapters:                     0 (0.0%)
    Reads that were too short:         571,747 (15.9%)
    Reads written (passing filters): 3,015,298 (84.1%)

    Total basepairs processed: 125,546,575 bp
    Quality-trimmed:            21,189,744 bp (16.9%)
    Total written (filtered):   98,543,830 bp (78.5%)

This is the sickle output:

    sickle se --fastq-file input.fastq --qual-type sanger --qual-threshold 20 --length-threshold 20 --output-file output_sickle_q20_m20.fastq

    SE input file: input.fastq

    Total FastQ records: 3587045
    FastQ records kept: 2449731
    FastQ records discarded: 1137314

This is the BBDuk output:

    ./bbduk.sh -Xmx1g in=input.fastq out=output_bbduk.fq qtrim=r trimq=20 ml=20 overwrite=true

    java -Djava.library.path=/home/rioualen/Desktop/bbmap/jni/ -ea -Xmx1g -Xms1g -cp /home/rioualen/Desktop/bbmap/current/ jgi.BBDukF -Xmx1g in=input.fastq out=output_bbduk.fq qtrim=r trimq=20 ml=20 overwrite=true
    Executing jgi.BBDukF [-Xmx1g, in=input.fastq, out=output_bbduk.fq, qtrim=r, trimq=20, ml=20, overwrite=true]

    BBDuk version 37.10
    Initial:
    Memory: max=1029m, free=993m, used=36m

    Input is being processed as unpaired
    Started output streams: 0.020 seconds.
    Processing time: 2.012 seconds.

    Input:            3587045 reads    125546575 bases.
    QTrimmed:         3156968 reads (88.01%)    68828955 bases (54.82%)
    Total Removed:    1592853 reads (44.41%)    68828955 bases (54.82%)
    Result:           1994192 reads (55.59%)    56717620 bases (45.18%)

    Time:             2.059 seconds.
    Reads Processed:  3587k    1742.23k reads/sec
    Bases Processed:  125m     60.98m bases/sec

How can there be such a big difference between them, e.g. 15.9%, 31.7% and 44.4% of reads filtered? Is cutadapt saying "Trimming 0 adapters with at most 10.0% errors in single-end mode" a bug? Cause I checked the fastq files and it is properly removing bp with a score <20.

Below are links to fastQC results:

[no trimming][1]
[cutadapt][2]
[sickle][3]
[bbduk][4]

[1]: https://ibb.co/dMdjo5
[2]: https://ibb.co/fVYmJ5
[3]: https://ibb.co/hve45k
[4]: https://ibb.co/fTUM1Q
OK based on Cutadapt documentation I think I get the idea: http://cutadapt.readthedocs.io/en/stable/guide.html#quality-trimming-algorithm Basically the "problem" is that it's trimming by starting at the end of the read. In my case, it seems many reads have a quality that is dropping, then going up, then dropping again. Cutadapt seems to cut out only the latest part, while Sickle uses a sliding window that cuts sequences from the first dropping point. It's interesting to see there can be such a huge difference when using two seemingly similar programs. As a bioinformatician it gives yet one more reason to not rely on one single program for one specific task...
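To make the difference concrete, here is a small illustrative sketch (not the actual code of either tool) of the two trimming rules described above: the BWA-style rule that the cutadapt documentation describes (subtract the cutoff from each quality, accumulate the partial sums from the 3' end, and cut where that sum is minimal) versus a simple sliding-window rule in the spirit of sickle. The window size and the example qualities are made up for illustration.

```python
def bwa_style_cut(quals, cutoff):
    """Index at which to cut the 3' end: minimise the partial sum of (q - cutoff)
    computed from the end of the read (no cut if the sum never goes negative)."""
    s, best, cut = 0, 0, len(quals)
    for i in range(len(quals) - 1, -1, -1):
        s += quals[i] - cutoff
        if s < best:
            best, cut = s, i
    return cut

def sliding_window_cut(quals, cutoff, window=5):
    """Cut at the start of the first window whose mean quality drops below the cutoff."""
    for i in range(0, max(1, len(quals) - window + 1)):
        win = quals[i:i + window]
        if sum(win) / len(win) < cutoff:
            return i
    return len(quals)

# quality drops, recovers, then drops again (made-up example)
q = [35] * 20 + [10] * 5 + [30] * 15 + [8] * 10
print(bwa_style_cut(q, 20))        # 40: cuts only the final low-quality stretch
print(sliding_window_cut(q, 20))   # 19: cuts at the first low-quality window
```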
biostars
{"uid": 247741, "view_count": 11599, "vote_count": 8}
Like this:
```
HETATM 1519 C1 AAB1 B 100 -12.826 12.835 34.863 0.50 14.18 C
HETATM 1520 C1 BAB1 B 100 -4.549 17.796 20.909 0.50 14.18 C
________________^
```
There is *only one* C1 atom in AB1, but here we have an A and a B version of it. I guess that is because of the X-ray crystallography method, which relies on diffraction, but I am not sure about that.
When interpreting the X-ray diffraction data and its corresponding electron density map, sometimes the position of a particular atom is not crystal clear (pun intended). The atom can be too mobile, the electron density can be particularly bad in that region, etc. It is also tied to the occupancy factor (the second-to-last column, 0.50 in your example), which is 1.00 if the atom is observed 100% of the time at one place, and less if it is split between different locations. In these cases, you get one entry per alternate location of the same atom, differentiated by the character at position 17.
biostars
{"uid": 105413, "view_count": 3295, "vote_count": 1}
Hi, I have been trying to run an RNA-seq analysis on some paired-end data. I have aligned with HISAT2, then run StringTie, StringTie merge and StringTie again. For the analysis I am using:

grch38_tran.tar.gz - https://ccb.jhu.edu/software/hisat2/index.shtml

Homo_sapiens.GRCh38.84.gtf - ftp://ftp.ensembl.org/pub/release-84/gtf/homo_sapiens/Homo_sapiens.GRCh38.84.gtf.gz

My issue is that despite running StringTie again after the merge to remove some of the MSTRGs, I am getting a large number of them in my data set. More alarmingly, the MSTRGs that do exist represent the highest counts in my sample.HISAT2-2.1.0.aligned.sorted.StringTie.1.3.3.gene_count_matrix.

    Number of each:     24801 MSTRG / 33970 ENSG
    Fraction of total:  0.42199 MSTRG / 0.57800 ENSG
    Sum of counts:      78615368 MSTRG / 778402 ENSG
    Fraction of counts: 0.99019 MSTRG / 0.00980 ENSG

So while the MSTRGs only make up ~42% of the gene IDs, they account for 99% of what has been counted. I have minimum coverage set to 5, and have -G set, as well as -e to restrict to the reference given. Is there any way to further optimize this? Have I missed an important step?
This has always been an issue as far as novel transcript discovery goes, you can see a lot of hits. Keep in mind that the vast majority of these are very slight changes to known transcripts and splice events, which are generally meaningless. When performing this kind of analysis I generally get rid of any MSTRG ID that falls within a known annotation, and then look for protein coding potential of transcripts identified, finally prioritising on abundance. I'll then go through a short list of these transcripts and visualise them in IGV to see if they're convincing. A lot of this prioritisation I've been able to do with awk, and drastically reducing noise with [TACO][1], as a replacement for stringtie-merge. TACO also includes a utility to compare your merged GTF against a reference GTF, which is handy for subsetting. [1]: http://tacorna.github.io/
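As a trivial illustration of the awk-based prioritisation mentioned above, a hedged sketch (it assumes the comma-separated `gene_count_matrix.csv` produced by StringTie's prepDE.py, with the gene ID in the first column), the novel MSTRG loci can be separated from the annotated ENSG rows like this:

```bash
# keep the header plus all rows whose gene ID is not a novel MSTRG locus
awk -F',' 'NR == 1 || $1 !~ /^MSTRG/' gene_count_matrix.csv > known_genes.csv

# or the opposite: keep only the novel MSTRG loci for manual inspection in IGV
awk -F',' 'NR == 1 || $1 ~ /^MSTRG/' gene_count_matrix.csv > novel_loci.csv
```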
biostars
{"uid": 357002, "view_count": 1979, "vote_count": 3}
I would like to know whether there is any way to merge rows with the same coordinates in a table.

Table:

    chr1	155944562	155945214	fantom_neuron	GSM1554667	9.84447
    chr1	155944562	155945214	fantom_neuron	GSM1554672	7.43630
    chr1	155944562	155945214	fantom_neuron	GSM1554678	32.77627
    chr1	155945743	155946196	fantom_neuron	BAMPE	3.87072
    chr1	155945743	155946196	fantom_neuron	GSM1554666	18.14939
    chr1	155945746	155946202	fantom_neuron	GSM1554655	1.14939

Expected table:

    chr1	155944562	155945214	fantom_neuron	GSM1554667,GSM1554672,GSM1554678	9.84447,7.43630,32.77627
    chr1	155945743	155946196	fantom_neuron	BAMPE,GSM1554666	3.87072,18.14939
    chr1	155945746	155946202	fantom_neuron	GSM1554655	1.14939
Using **R**, and *data.table* package: library(data.table) # use fread for your data # mydata <- fread("myFile.bed") # example data mydata <- fread(" chr1 155944562 155945214 fantom_neuron GSM1554667 9.84447 chr1 155944562 155945214 fantom_neuron GSM1554672 7.43630 chr1 155944562 155945214 fantom_neuron GSM1554678 32.77627 chr1 155945743 155946196 fantom_neuron BAMPE 3.87072 chr1 155945743 155946196 fantom_neuron GSM1554666 18.14939 chr1 155945746 155946202 fantom_neuron GSM1554655 1.14939 ") # then group by paste mydata[, lapply(.SD, toString), .SDcols = c(5:6), by = list(V1, V2, V3, V4) ] # V1 V2 V3 V4 V5 V6 # 1: chr1 155944562 155945214 fantom_neuron GSM1554667, GSM1554672, GSM1554678 9.84447, 7.4363, 32.77627 # 2: chr1 155945743 155946196 fantom_neuron BAMPE, GSM1554666 3.87072, 18.14939 # 3: chr1 155945746 155946202 fantom_neuron GSM1554655 1.14939
biostars
{"uid": 365049, "view_count": 1217, "vote_count": 1}
Hi, I have a set of reads mapped to a reference sequence. What I want to do is take the reads that map to one sequence and assemble them into a contig as well as possible, and then repeat this for thousands of sequence-read sets. I do not want to put this through a regular assembler like MIRA, ABySS, etc. because that is overkill for what I want to do. I also do not want any software messing with my read assignment and going its own way. Is there such a command line tool? Thanks
TASR and SSAKE are traditional bets for this sort of thing, as well as PRICE. Haven't tried Tadpole, as genomax2 suggests, but I'm sure that's worth a look as well - especially as ones I've mentioned aren't going to break any speed records!
biostars
{"uid": 178795, "view_count": 1331, "vote_count": 1}
Hi there Can anyone explain to me how to use the ESTIMATE package in RNA-seq analysis? I want to calculate immune scores and stromal scores by employing the ESTIMATE algorithm, then analyze the relationship of immune/stromal scores with subtype classification and cytogenetic risk by one-way analysis of variance, but I don't know how to do this! I will be grateful for any help you can provide.
A vignette PDF comes installed with the package, and should be located at: - R/x86_64-pc-linux-gnu-library/4.0/estimate/doc/ESTIMATE_Vignette.pdf In this vignette, they use some data that comes bundled with the package (*R/x86_64-pc-linux-gnu-library/4.0/estimate/extdata/sample_input.txt*), which represents Affymetrix U133 microarray data that appears to be normalised and transformed by log [base 2]. So, if you have RNA-seq data, I would normalise the data in the usual way, and then transform via rlog or vst. Then, with ESTIMATE, use the rlog or vst expression levels. Kevin
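To make that concrete, here is a minimal sketch of an ESTIMATE run. The function names follow the package vignette as I recall it, and the input file name, output names and `platform` value are assumptions you would adapt to your own data (for RNA-seq, the input would be your rlog/vst-transformed matrix written out as a tab-delimited file with gene symbols in the first column):

```r
library(estimate)

# write your rlog/vst matrix to a tab-delimited file first, e.g. "expr_rlog.txt" (hypothetical name)
filterCommonGenes(input.f = "expr_rlog.txt",
                  output.f = "expr_rlog.gct",
                  id = "GeneSymbol")

estimateScore(input.ds = "expr_rlog.gct",
              output.ds = "estimate_scores.gct",
              platform = "illumina")   # "affymetrix" also adds tumour purity; "illumina"/"agilent" do not

# the resulting GCT contains StromalScore, ImmuneScore and ESTIMATEScore for each sample
scores <- read.table("estimate_scores.gct", skip = 2, header = TRUE, sep = "\t")
```

The per-sample scores can then be merged with your subtype and cytogenetic-risk annotations and fed to `aov()` for the one-way ANOVA described in the question.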
biostars
{"uid": 455438, "view_count": 4180, "vote_count": 1}
I have several CPU nodes connected to the same network mounted directory. I want to parallelize the abyss-map step after abyss generates unitigs. When I run abyss-map on my reads, it outputs a message saying it is generating a suffix array for the mapping. Does it store that on disk? Or does it just store it in memory for the mapping? If I parallel run several abyss-map in the same directory, will the suffix array generated overwrite the suffix array generated by other abyss-map runs?
Hi Damian, `abyss-map` builds the indexes in memory by default, so there should be no conflict about running multiple instances on the same file in parallel. However, you can also use `abyss-index` to pre-build the indexes (.fai and .fm files). Any subsequent `abyss-map` runs will re-use those index files (similar to how it works with BWA), which allows you to run `abyss-map` more quickly and with a smaller memory requirement. (The initial indexing step requires an amount of RAM that is about 10 times the size of the input FASTA file.)
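A quick sketch of what that pre-indexing workflow could look like on the command line (file names here are placeholders; check `abyss-index --help` and `abyss-map --help` for the exact options in your ABySS version):

```bash
# build the FM-index and FASTA index once (writes unitigs.fa.fm and unitigs.fa.fai)
abyss-index unitigs.fa

# each mapping job then reuses the pre-built index; run as many of these
# in parallel as you like, e.g. one per read set
abyss-map reads_A_1.fq reads_A_2.fq unitigs.fa > mapped_A.sam &
abyss-map reads_B_1.fq reads_B_2.fq unitigs.fa > mapped_B.sam &
wait
```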
biostars
{"uid": 165687, "view_count": 1863, "vote_count": 1}
Hello, I am trying to analyze the public dataset https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE126030

I've downloaded the fastq files onto my cluster and would like to proceed with cellranger count. I am in a test folder and the only contents are SRR8526547_1.fastq and refdata-cellranger-GRCh38-1.2.0.

    cellranger count --id=cellranger \
        --transcriptome=/home/jl2/scratch60/refdata-cellranger-GRCh38-1.2.0/ \
        --fastqs=. \
        --sample=SRR8526547_1.fastq

I keep getting the error:

    Invalid path/prefix combination: /gpfs/ycga/scratch60/k/jl2/test, ['SRR8526547_1.fastq']
    No input FASTQs were found for the requested parameters.

Can't seem to figure out what's wrong. Does it need fastq.gz instead of fastq?
`cellranger count` expects a certain nomenclature for the `fastq` files, please see the last section [here](https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/using/fastq-input), "My FASTQs are not named like any of the above examples". Basically this is how your file names should look like: `[Sample Name]_S1_L00[Lane Number]_[Read Type]_001.fastq.gz`. For the `Read Type`, you can take a look at your fastq files with `head` to see what is what. The link above explains different read types.
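In practice that usually just means renaming (and compressing) the SRA-derived files before running `cellranger count`. A hedged example, assuming SRR8526547_1 is the barcode+UMI read (R1) and SRR8526547_2 is the cDNA read (R2); verify this with `head` as described above before renaming:

```bash
mv SRR8526547_1.fastq SRR8526547_S1_L001_R1_001.fastq
mv SRR8526547_2.fastq SRR8526547_S1_L001_R2_001.fastq
gzip SRR8526547_S1_L001_R1_001.fastq SRR8526547_S1_L001_R2_001.fastq

# --sample should then match the prefix, not the full file name
cellranger count --id=SRR8526547 \
    --transcriptome=/home/jl2/scratch60/refdata-cellranger-GRCh38-1.2.0/ \
    --fastqs=. \
    --sample=SRR8526547
```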
biostars
{"uid": 427428, "view_count": 8375, "vote_count": 2}
Hi all,

I am analyzing an Illumina paired-end sequencing experiment. I would like to track the duplicates in my lanes and be able to distinguish between PCR duplicates and optical duplicates.

For this purpose I use Picard MarkDuplicates. This tool has an OPTICAL_DUPLICATE_PIXEL_DISTANCE parameter ... nice ... but as the tool simply sets a flag to true in the sorted [BAM](http://samtools.sourceforge.net/SAM1.pdf) file, there is no way in the end to distinguish between the two. (Am I right?)

So, basically, I am wondering if this option is really useful. It is explained that MarkDuplicates starts by finding the 5' coordinates and mapping orientations of each read pair, so looking at the coordinates of the cluster on the flowcell seems unnecessary (?), as the pair will be tagged as a duplicate anyway.

Do you use an in-house script or a particular API for such a goal?

Cheers Tony

EDIT: I am aware that Picard creates a metrics file to report some values. But in some lanes generated with a PCR-free protocol, I expected a proportion of my duplicates to be optical duplicates. Nevertheless, in the Picard metrics file I always have %optical_dup=0. So I am wondering if some of you have had issues with this measure as well.
I omitted to have a close look at the read names in my files. A read name has the following format:

    @identifier:lane:tile:x:y

Picard, by default, only matches numbers and letters in the 'identifier' part. So if you have underscores (and it's quite usual to have some), Picard will not be able to get the coordinates back, and then no optical duplicates will show up...

Use the READ_NAME_REGEX option of MarkDuplicates to customize the read name matching.
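For illustration, a hedged sketch of what such a call could look like. The regular expression here is a made-up example for read names whose identifier part contains underscores, not Picard's default; the firm requirement, per the MarkDuplicates documentation, is that the regex capture three groups, read as tile, x and y:

```bash
java -jar picard.jar MarkDuplicates \
    INPUT=sample.sorted.bam \
    OUTPUT=sample.markdup.bam \
    METRICS_FILE=sample.markdup_metrics.txt \
    OPTICAL_DUPLICATE_PIXEL_DISTANCE=100 \
    READ_NAME_REGEX='[^:]+:[0-9]+:([0-9]+):([0-9]+):([0-9]+).*'
```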
biostars
{"uid": 12538, "view_count": 13599, "vote_count": 6}
I am new to programming in R and Python, but I have some basics. I have a technical question about computation. I would like to know if there are any functions for subtracting the value of one particular row (the "Normalizer" row below) from all the other features (rows) of the same data set. I would like to obtain Output_value1 as shown below, and then multiply by (-1) to obtain Output_value2. Please let me know if you need more details. I have tried performing the same operation in MS Excel; this is very tedious and time consuming. I have many large datasets with several hundred rows and columns, which makes it even more complex to do manually in MS Excel. Hence, I would prefer to write code and obtain the desired outputs.

| Feature | Value | Output_value1 | Output_value2 |
|---|---|---|---|
| Gene_1 | 14.25633934 | 0.80100922 | -0.80100922 |
| Gene_2 | 16.88394578 | 3.42861566 | -3.42861566 |
| Gene_3 | 16.01 | 2.55466988 | -2.55466988 |
| Gene_4 | 13.82329514 | 0.36796502 | -0.36796502 |
| Gene_5 | 12.96382949 | -0.49150063 | 0.49150063 |
| Normalizer | 13.45533012 | 0 | 0 |
Try this example: # example data df1 <- read.table(text = " Feature Value Gene_1 14.25633934 Gene_2 16.88394578 Gene_3 16.01 Gene_4 13.82329514 Gene_5 12.96382949 Normalizer 13.45533012 ", header = TRUE) # find the row with normaliser, use the value to substract from each row df1$Output_value1 <- df1$Value - df1[ df1$Feature == "Normalizer", "Value"] # then flip the sign (could be skipped, if we flip the substract in above step) df1$Output_value2 <- df1$Output_value1 * -1
biostars
{"uid": 395190, "view_count": 749, "vote_count": 1}
**The question below turned out to be completely faulty. I don't have to do anything with DNase data for GRCh38. I asked it because of the file count difference between hg38 and hg19, which I thought was too big. For hg38 there are 95 files *\*Peak.txt.gz*. For hg19 there are 236 *\*narrowPeak.gz*, but after merging the PkRep1 & PkRep2 pairs (probably FASTQ (SE/PE) replicates) we get only 123 files. Finally, this difference (123 vs 95) no longer seems big, and we have an even cleaner situation without PkRep1 & PkRep2.**

**Once again: there is no problem with DNase data for the GRCh38 assembly; only my question was misleading. I'd like to apologise for the confusion I introduced.**

---

<s>I'm interested in transcriptional activity, thus I'm willing to use DNase hypersensitivity sites to detect regions where transcription factors are allowed to bind. In the previous genome assembly, GRCh37 / hg19, I used to use narrow peak files from these two sources (University of Washington and Duke University, respectively) (files with the suffix *.narrowPeak.gz*):

[http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeUwDnase/][1]

[http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeOpenChromDnase/][2]

With the most recent assembly, GRCh38, there are also some annotations attached (files ending in *Peak.txt.gz*):

[http://hgdownload.soe.ucsc.edu/goldenPath/hg38/database/][3]

And here four complementary questions arise:

1. Consider only the datasets which come from the University of Washington. For GRCh37 / hg19 I counted 236 narrow peak files, whereas for the newer GRCh38 there are only 95 files. **How can this difference be explained? Do the datasets represent exactly the same coverage, but with much lower granularity / precision (datasets that come from several tissue lines are merged into fewer files)?**

2. With GRCh37 / hg19 we have both narrow peaks and broad peaks, whereas GRCh38 comes with only one type of file, *\*Peak.txt.gz*. **Does it mean that with the newest version we have only narrow peaks? Are the broad peaks hidden somewhere else?**

3. With GRCh37 / hg19 we have two separate sources of DNase data: UofW and Duke. For GRCh38, it seems that only UofW datasets are available. **Is any other source of DNase data available, maybe stored separately (Duke or another lab)?**

4. **Let's suppose that you were in my place and wanted to determine cis-regulatory areas. What type of data could be used to do so? Maybe DNase datasets from another source, or even a completely different type of data (NOT DNase)?**

Thank you in advance for your answer.</s>

[1]: http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeUwDnase/
[2]: http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeOpenChromDnase/
[3]: http://hgdownload.soe.ucsc.edu/goldenPath/hg38/database/
There's an awful lot of ENCODE data, so I'm not 100% sure of this answer, but I'll have a crack. The first phase of ENCODE finished around 2012, so all the data were mapped onto GRCh37/hg19 (2009). I believe most of the first-wave data were generated in human cell lines. GRCh38 was released in 2013, so I'm guessing that the second wave of ENCODE data (primary tissue), currently in progress, is being mapped onto the 2013 (GRCh38) release. That would mean that the bulk of the samples on GRCh37 and GRCh38 are not the same samples. It is entirely possible to convert data currently on GRCh37 to GRCh38, but I don't know if ENCODE is doing this. That would involve remapping onto GRCh38 instead of GRCh37 and re-running the analyses. If *you* want to convert peak files, you can always use a [liftover][1] tool to put all datasets onto the build of your choice. Just make sure that the samples aren't duplicates.

[1]: http://genome.ucsc.edu/cgi-bin/hgLiftOver?hgsid=497338351_bFqySkBTXkRBHyEFMHY7hlcgpyPV
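As a small illustration of that last point, a hedged example of lifting hg19 peak files over to hg38 with UCSC's command-line liftOver tool (the file names are placeholders; the chain file is available from the UCSC downloads area):

```bash
# usage: liftOver oldFile map.chain newFile unMapped
liftOver wgEncodeUwDnasePeaks.hg19.narrowPeak \
         hg19ToHg38.over.chain.gz \
         wgEncodeUwDnasePeaks.hg38.narrowPeak \
         unmapped_regions.txt
```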
biostars
{"uid": 194821, "view_count": 1972, "vote_count": 3}
I have a fastq file with over 2 million reads. I am trying to use FastqSampler to select 2 million reads at random. But strangely, I don't get 2 million, I get something less -- 1929951 in the following example. Why?

Might it have something to do with the way FastqSampler chunks the input file? (The author describes this here: [A: Selecting random pairs from fastq?](/p/6544/#6578))

It works fine if I set n to 1 million reads.

    > library(ShortRead)
    > fq=readFastq("file.fq")
    > fq
    class: ShortReadQ
    length: 2198402 reads; width: 100 cycles
    >
    > fqs=FastqSampler("file.fq", n=2e6)
    > yield(fqs)
    class: ShortReadQ
    length: 1929951 reads; width: 100 cycles
    >
    > sessionInfo()
    R version 2.15.2 (2012-10-26)
    Platform: x86_64-redhat-linux-gnu (64-bit)

    locale:
    [1] LC_CTYPE=en_US.UTF-8  LC_NUMERIC=C  LC_TIME=en_US.UTF-8  LC_COLLATE=en_US.UTF-8  LC_MONETARY=en_US.UTF-8  LC_MESSAGES=en_US.UTF-8  LC_PAPER=C  LC_NAME=C
    [9] LC_ADDRESS=C  LC_TELEPHONE=C  LC_MEASUREMENT=en_US.UTF-8  LC_IDENTIFICATION=C

    attached base packages:
    [1] stats graphics grDevices utils datasets methods base

    other attached packages:
    [1] ShortRead_1.14.4  Rsamtools_1.8.6  lattice_0.20-13  Biostrings_2.24.1  GenomicRanges_1.8.13  IRanges_1.14.4

    loaded via a namespace (and not attached):
    [1] Biobase_2.16.0  grid_2.15.2  hwriter_1.3  tools_2.15.2
Sorry for the delayed reply. I'm not able to reproduce this with the current version of ShortRead (1.16.3); there were bug fixes related to FastqSampler between the version you are using and the current version.

    > set.seed(123)
    > fl <- "~/b/working/tmp.fastq"
    > readFastq(fl)
    class: ShortReadQ
    length: 2198402 reads; width: 37 cycles
    > fqs=FastqSampler(fl, n=2e6)
    > yield(fqs)
    class: ShortReadQ
    length: 2000000 reads; width: 37 cycles
    > sessionInfo()
    R version 2.15.2 Patched (2012-12-23 r61401)
    Platform: x86_64-unknown-linux-gnu (64-bit)

    locale:
     [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
     [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
     [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
     [7] LC_PAPER=C                 LC_NAME=C
     [9] LC_ADDRESS=C               LC_TELEPHONE=C
    [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

    attached base packages:
    [1] stats graphics grDevices utils datasets methods base

    other attached packages:
     [1] ShortRead_1.16.4     latticeExtra_0.6-24  RColorBrewer_1.0-5
     [4] Rsamtools_1.10.2     lattice_0.20-13      Biostrings_2.26.3
     [7] GenomicRanges_1.10.7 IRanges_1.16.6       BiocGenerics_0.4.0
    [10] BiocInstaller_1.8.3

    loaded via a namespace (and not attached):
    [1] Biobase_2.18.0  bitops_1.0-5  grid_2.15.2  hwriter_1.3
    [5] parallel_2.15.2 stats4_2.15.2 zlibbioc_1.4.0

Try

    source("http://bioconductor.org/biocLite.R")
    biocLite(character())

to update packages. Please ask follow-up questions on the [Bioconductor mailing list](http://bioconductor.org/help/mailing-list/mailform/) (no subscription required).
biostars
{"uid": 62382, "view_count": 2935, "vote_count": 1}
One thing I really like about CWL is the ability to load CWL files into a CommandLineTool or Workflow object after generating the classes using `schema-salad-tool --codegen=python CommonWorkflowLanguage.yml > cwl_classes.py`. Is there a schema file that describes CWL inputs/job files, similar to CommonWorkflowLanguage.yml? I have been working with these files as dicts, but it would be super nice to be able to load them straight into a class object.
Hello Karl, Yes, the `inputs` section of a CWL document is a schema for the input job object.
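A tiny illustration of that point, with purely hypothetical file names and parameters: the `inputs` block of a tool describes exactly the fields a job object for that tool may contain, so the job file can be validated (or, with the schema-salad generated classes, loaded) against it.

```yaml
# echo-tool.cwl (hypothetical example)
cwlVersion: v1.0
class: CommandLineTool
baseCommand: echo
inputs:
  message:
    type: string
    inputBinding:
      position: 1
  repeat:
    type: int?
outputs: []
```

```yaml
# echo-job.yml -- must conform to the schema defined by the tool's `inputs` section
message: Hello, CWL
repeat: 3
```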
biostars
{"uid": 383396, "view_count": 2522, "vote_count": 2}
Hello! I want to construct and analyze regulatory genetic networks in a specific disease, based on the involved microRNAs, target genes and transcription factors. Can you please point me to free tools that can help me build and analyze a miRNA / target gene / TF network? Thank you in advance.
We are now building a database and online dataviz tool on Cytoscape Web -- [MirOB (MicroRNA OncoBase)](http://mirob.interactome.ru/).

The database includes validated **microRNA** and **transcription factor** targets associated with human cancers. Currently, MirOB supports *only* the Homo sapiens organism.

The network shows all interactions containing a given transcription factor or miRNA.

You can try our simple tool [here](http://mirob.interactome.ru/).
biostars
{"uid": 111180, "view_count": 5405, "vote_count": 5}
I am seemingly stuck with something that should be very simple, and I hope I haven't overlooked something obvious.

Question: How can I make a valid BLAST database *with taxids* from an NCBI query export?

What I have tried so far:

For a metagenomics project I need a custom-made BLAST database which I wish to generate from the result of the following [NCBI Nucleotide query](http://www.ncbi.nlm.nih.gov/nuccore):

    Viruses[Organism] AND srcdb_refseq[PROP] NOT cellular organisms [ORGN]

The result is 3986 entries, which I exported and saved (via 'Send to') in FASTA and ASN.1 format. (Both files seemingly contain the right number of entries.) As this is a metagenomics project, I would love to have the taxon IDs in the blastdb.

I was successful in making a valid BLAST database from the FASTA file using makeblastdb, but the FASTA headers don't include taxids, hence I tried to make a BLAST database from the ASN.1 export using the following command (it is not clear from [the documentation](http://www.ncbi.nlm.nih.gov/books/NBK1763/#CmdLineAppsManual.4_User_manual) which formats can be used to create the database):

    $ makeblastdb -in AllViralDNARefSeq.asn1 -dbtype nucl -out ViralASN1 -title "All Viral RefSeq DNA from NCBI ASN1"

    Building a new DB, current time: 12/20/2011 10:37:28
    New DB name: ViralASN1
    New DB title: All Viral RefSeq DNA from NCBI ASN1
    Sequence type: Nucleotide
    Keep Linkouts: T
    Keep MBits: T
    Maximum file size: 1073741824B
    Adding sequences from FASTA; **added 10 sequences** in 0.00906897 seconds.

As you can see, this does not work, as it adds only 10 sequences.

Any help to get the taxon IDs in is appreciated; it doesn't have to be elegant, I just need the database from that query. I am using BLAST+ 2.2.25.
I think this one requires some scripting. Here are a few ideas. First, I'd fetch viral Refseq sequences in Genbank format from the FTP site: wget ftp://ftp.ncbi.nih.gov/refseq/release/viral/viral.1.genomic.gbff.gz gunzip viral.1.genomic.gbff.gz grep -c "^LOCUS" viral.1.genomic.gbff # 3984 Then I'd parse the Genbank file to extract an ID, description and sequence for each entry. Since taxon IDs are contained in a field of the form `/db_xref="taxon:NNNN` where NNNN = taxon ID, they can be extracted and written into the header of a new fasta file. Some quick and dirty Bioperl to illustrate: #!/usr/bin/perl -w use strict; use Bio::SeqIO; my $file = "viral.1.genomic.gbff"; my $seqio = Bio::SeqIO->new(-file => $file, -format => "genbank"); my $fasta = Bio::SeqIO->new(-file => ">$file.fa", -format => "fasta"); while(my $seq = $seqio->next_seq) { my $taxid = ""; for my $feat($seq->get_SeqFeatures) { if($feat->has_tag("db_xref")) { for my $id($feat->get_tag_values("db_xref")) { if($id =~/taxon:(\d+)/) { $taxid = $1; } } } } my $fa = Bio::Seq->new(-id => $seq->id, -desc => $taxid.", ".$seq->description, -seq => $seq->seq); $fasta->write_seq($fa); } This gives a fasta file *viral.1.genomic.gbff.fa*, with headers that look like: >NC_003038 176652, Invertebrate iridescent virus 6, complete genome.
biostars
{"uid": 15602, "view_count": 15436, "vote_count": 4}
This question was inspired by Peter Cock's tweet:

https://twitter.com/pjacock/status/118012750105546752

Most software (C, Java...) uses an int32 (signed or unsigned) to store the length of a chromosome. That isn't enough when the length of a chromosome is greater than INT_MAX or UINT_MAX:

    # define INT_MAX 2147483647
    # define UINT_MAX 4294967295U

So my questions are:

- Is there any resource where one can find the lengths of chromosomes? Something like [BioNumbers](http://bionumbers.hms.harvard.edu/).
- What is the length of the longest chromosome?
I just remembered reading an article on the largest genome size. It claims the flower named Paris japonica has 150 billion bases over 40 chromosomes, so I would imagine that its largest chromosome is bigger than UINT_MAX. Alas, a quick search did not turn up any sequence information. Most likely the size of the DNA was measured by other means.
biostars
{"uid": 12560, "view_count": 10615, "vote_count": 17}
let's say I have a fasta of a protein sequence

    > albumin
    MKWVTFISLL FLFSSAYSRG ... ... ...

I want to split the sequence into all possible consecutive 8-amino-acid windows (only in one direction, amino -> carboxyl, and with no wrapping around the ends, e.g. no GMKWVTFIS). I need

    > fasta.albumin1
    MKWVTFIS
    > fasta.albumin2
    KWVTFISL
    > fasta.albumin3
    WVTFISLL
    ...
    > fasta.albumin13
    FSSAYSRG

And I want to do this for all known human protein sequences. How would I do it???

I need the result as a fasta file or files, and the IDs of the resulting 8-mer sequences need to be unique.
Your requirement is to generate simple kmers from sequences. A Biopython solution is below. Please change the last print statement to:

    print('>'+str(record.id)+'|kmer_'+str(count)+'\n'+str(my_kmer))

Script:

    #!/usr/bin/env python3
    from Bio import SeqIO

    myfile = SeqIO.parse('test.fa', 'fasta')

    for record in myfile:
        sequence = record.seq
        seq_len = len(sequence)
        # define the kmer length
        kmer = 8
        count = 0
        for seq in list(range(seq_len - (kmer - 1))):
            count = count + 1
            my_kmer = sequence[seq:seq + kmer]
            print('>' + str(record.id) + '|kmer_' + str(count) + '\n' + str(my_kmer))
biostars
{"uid": 297681, "view_count": 2907, "vote_count": 2}
Hello. I am building a pipeline that starts by sub-sampling PE FASTQ files with seqtk. Unfortunately, seqtk does not accept PE files directly, so they have to to be fed one by one with the same seed number. I want to repeat this process several times with different seed numbers, and number of reads kept. Downstream, I am going to assemble these sub-sampled reads. I have been inspired by https://github.com/h3abionet. Using their workflow as a template, I have managed to get pretty close to what I want. I have created a new record schema to hold my data: class: SchemaDefRequirement types: - name: FilePair type: record fields: - name: forward type: File - name: reverse type: File - name: seed type: int[] - name: number type: int[] - name: rep type: int[] - name: id type: string And, here is an input file I have created: fqSeqs: - forward: class: File path: /pat/to//SRR2736093_1.fastq.gz reverse: class: File path: /path/to/SRR2736093_2.fastq.gz id: SRR2736093 seed: [42,10] number: [10,10] rep: [1,2] - forward: class: File path: /path/to/SRR2736094_1.fastq.gz reverse: class: File path: /path/to/SRR2736093_4.fastq.gz id: SRR2736094 seed: [69, 12] number: [10,10] rep: [1,2] I then have a master workflow: cwlVersion: v1.0 class: Workflow requirements: - class: ScatterFeatureRequirement - class: InlineJavascriptRequirement - class: StepInputExpressionRequirement - class: SubworkflowFeatureRequirement - $import: readPair.yml inputs: fqSeqs: type: type: array items: "readPair.yml#FilePair" outputs: fqout: type: "readPair.yml#FilePair[]" outputSource: subsample/resampled_fastq steps: subsample: in: onePair: fqSeqs scatter: onePair out: [resampled_fastq] run: seqtk_sample_PE.cwl The sub-workflow `seqtk_sample_PE.cwl` makes sure seqtk is run appropriately across each pair of FASTQ: cwlVersion: v1.0 class: Workflow requirements: - class: ScatterFeatureRequirement - class: InlineJavascriptRequirement - class: StepInputExpressionRequirement - $import: readPair.yml inputs: onePair: "readPair.yml#FilePair" outputs: resampled_fastq: type: "readPair.yml#FilePair" outputSource: collect_output/fastq_pair_out steps: subsample_1: in: fastq: source: onePair valueFrom: $(self.forward) seed: source: onePair valueFrom: $(self.seed) number: source: onePair valueFrom: $(self.number) rep: source: onePair valueFrom: $(self.rep) scatter: seed scatterMethod: dotproduct out: [seqtkout] run: seqtk_sample.cwl subsample_2: in: fastq: source: onePair valueFrom: $(self.reverse) seed: source: onePair valueFrom: ${ console.log(self.seed); return self.seed;} number: source: onePair valueFrom: $(self.number) rep: source: onePair valueFrom: $(self.rep) scatter: seed scatterMethod: dotproduct out: [seqtkout] run: seqtk_sample.cwl collect_output: run: class: ExpressionTool inputs: seq_1: File seq_2: File id: string outputs: fastq_pair_out: "readPair.yml#FilePair" expression: > ${ var ret={}; ret['forward'] = inputs.seq_1 ret['reverse'] = inputs.seq_2 ret['id'] = inputs.id return { 'fastq_pair_out' : ret } } in: seq_1: subsample_1/seqtkout seq_2: subsample_2/seqtkout id: source: onePair valueFrom: $(self.id) out: [ fastq_pair_out ] And, finally, `seqtk_sample.cwl` actually does the work: cwlVersion: v1.0 class: CommandLineTool baseCommand: ['seqtk', 'sample'] stdout: $(inputs.fastq.nameroot)_$(inputs.number)_$(inputs.seed)_$(inputs.rep).fq inputs: seed: type: int inputBinding: prefix: -s position: 1 fastq: type: File inputBinding: position: 2 number: type: int inputBinding: position: 3 outputs: seqtkout: type: stdout However, when I try 
to run the master workflow, I get the following error:

    [workflow subsample] initialized from file:///Users/andersg/Documents/dev/mdu-qc-cwl/workflows/seqtk_sample_PE.cwl
    [workflow subsample] workflow starting
    [workflow subsample] starting step subsample_2
    Unhandled exception
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/site-packages/cwltool/workflow.py", line 311, in try_make_job
        Callable[[Any], Any], callback), **kwargs)
      File "/usr/local/lib/python2.7/site-packages/cwltool/workflow.py", line 672, in dotproduct_scatter
        jo[s] = joborder[s][n]
      File "/usr/local/Cellar/python/2.7.12_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ruamel/yaml/comments.py", line 502, in __getitem__
        return ordereddict.__getitem__(self, key)
    KeyError: 0
    [workflow subsample] outdir is /var/folders/fj/s582ngbs28d78t98hf4gv0qjt74n0_/T/tmpchJbVd
    Workflow cannot make any more progress.
    Removing intermediate output directory /var/folders/fj/s582ngbs28d78t98hf4gv0qjt74n0_/T/tmpchJbVd
    Removing intermediate output directory /var/folders/fj/s582ngbs28d78t98hf4gv0qjt74n0_/T/tmpRzqZLf
    Final process status is permanentFail

It seems that I am not specifying my arrays correctly? Any help would be greatly appreciated. Thank you. Anders.
Hello andersgs, Very advanced CWL usage! The problem is you are trying to scatter over a component of a single item, but that is not currently allowed in the CWL v1.0: http://www.commonwl.org/v1.0/Workflow.html#WorkflowStepInput > The value of `inputs` in the parameter reference or expression must be > the input object to the workflow step after assigning the `source` > values and then scattering. The order of evaluating `valueFrom `among > step input parameters is undefined and the result of evaluating > `valueFrom` on a parameter must not be visible to evaluation of > `valueFrom` on other parameters.
biostars
{"uid": 245032, "view_count": 2101, "vote_count": 1}
Hello, I have some microarray data from a few years ago, and the rs identifiers in the file were from dbSNP 131. In order to do some analysis on this old data, I will need to know the positions of the microarray SNPs, and thus would like to download dbSNP 131. However, on the [UCSC Table Browser][1], I can no longer find this database. I tried: assembly: hg19, group: Variations - and then there is no dbSNP131 under tracks. Does anyone know any other places to access dbSNP 131, please? Or is it safe to just use dbSNP 138? The microarray chip was HumanOmni2.5-8v1 from Illumina. Thank you

[1]: http://genome.ucsc.edu/cgi-bin/hgTables
Okay, after some digging, I found that you can download all old versions of the dbSNP databases from [ANNOVAR][1]. Command:

    annotate_variation.pl -buildver hg19 -downdb -webfrom annovar snp131 humandb/

The format is exactly the same as the dbSNP table from UCSC: https://genome.ucsc.edu/cgi-bin/hgTables

[1]: http://annovar.openbioinformatics.org/en/latest/
biostars
{"uid": 177157, "view_count": 1773, "vote_count": 1}
I want to extract the **gene name**, **gene start position** and **gene stop position** from the headers of a fasta file. I have tried to extract them based on their position in the header, but those positions are not consistent. Is there any other way to extract them? This is what I have tried so far.

    # I have a vector of these file names. Here I have just one element
    names1 <- "lcl|NC_005336.1_cds_NP_957781.1_1 [locus_tag=ORFVgORF001] [db_xref=GeneID:2947687] [protein=ORF001 hypothetical protein] [protein_id=NP_957781.1] [location=complement(3162..3611)] [gbkey=CDS]"

    # Then I extracted words from the string
    string_list1 <- str_extract_all(names1, boundary("word"))

    # result
    string_list1[1]
    [[1]]
     [1] "lcl"                           "NC_005336.1_cds_NP_957781.1_1"
     [3] "locus_tag"                     "ORFVgORF001"
     [5] "db_xref"                       "GeneID"
     [7] "2947687"                       "protein"
     [9] "ORF001"                        "hypothetical"
    [11] "protein"                       "protein_id"
    [13] "NP_957781.1"                   "location"
    [15] "complement"                    "3162"
    [17] "3611"                          "gbkey"
    [19] "CDS"

So I was trying to extract the 4th, 16th and 17th elements from this list. That works for this particular example, but it does not work for other headers where these positions are different. Usually the gene name is consistently at the 4th position, but the start and stop locations differ among the fasta headers, so this strategy is not working and I can't think of any other one.
If gene name is like [locus_tag=gene_name] and coordinates like [location=complement(3162..3611)] library(tidyverse) names1 <- "lcl|NC_005336.1_cds_NP_957781.1_1 [locus_tag=ORFVgORF001] [db_xref=GeneID:2947687][protein=ORF001 hypothetical protein] [protein_id=NP_957781.1] [location=complement(3162..3611)] [gbkey=CDS]" (res <- str_replace_all(names1, "^.*?locus_tag=(.*?)\\].*?\\[location.*?(\\d+)\\.\\.(\\d+).*?$", "\\1___\\2___\\3") %>% str_split("___") ) If names will be a "gene_name" column in a data.frame called df, a clean final table can be easily produced: df %>% mutate(gene_name = str_replace_all(gene_name, "^.*?locus_tag=(.*?)\\].*?\\[location.*?(\\d+)\\.\\.(\\d+).*?$", "\\1___\\2___\\3")) %>% separate(gene_name, sep="___", into = c("gene", "start", "end"))
biostars
{"uid": 443876, "view_count": 778, "vote_count": 1}
Hello. I read the paper "Whole-exome sequencing identifies a recurrent NAB2-STAT6 fusion in solitary fibrous tumors" briefly. After reading it, I tried to find the gene fusion in my own exome sequencing data from hospital cancer patients. But I think it is a better strategy to first identify the gene fusion in the public data specified in the paper before trying it on my own set. So I looked in the paper for an explanation of how to access the public data used to identify the gene fusion. Finally I found the bold sentence (below).

**Accession code. Sequence data used for this analysis are available at the database of Genotypes and Phenotypes (dbGaP) under accession phs000568.v1.p1.**

Just reading it, I think I can download the data if I have permission. So I went to the dbGaP site, searched for phs000568.v1.p1, and many results came up. But I don't know how to access these databases and also don't know how to download them. So, is there anyone who has experienced this before? If so, I need your help! Thank you.
http://www.ncbi.nlm.nih.gov/sra?linkname=gap_sra_all&from_uid=874173
biostars
{"uid": 113366, "view_count": 5652, "vote_count": 2}
Hey all :]

I use samtools' depth, and occasionally samtools' pileup commands, to calculate the coverage of my reads on the genome before binning for coverage. I'm pretty sure everyone else does too ;)

One very common problem is that people find "samtools depth" and "samtools mpileup" don't match up - and it's commonly attributed to filtering of poor quality reads, duplicates, etc. (mpileup doing more filtering).

But there are a ton of other questions I just can't get the answers to from the samtools docs, namely:

- Does depth/pileup count the region between paired reads, or just the reads themselves?
- Does samtools depth/pileup count singletons in paired-end sequencing? Does it extrapolate based on average read length to fill out the whole fragment?
- If a read maps to multiple locations, is it counted multiple times?

I'm working with ChIP-Seq data, so I might have to correct for peak shift. What do you guys think?

Thanks! :)
> Does depth/pileup count the region between paired reads, or just the reads themselves?

samtools depth 'just' runs a multi-sample mpileup and counts the number of bases under each base of the reference. Reads with a small 'qlen', deletions, and bases with a quality below 'baseQ' are ignored. Here is the core code of 'depth':

    if (!(b->core.flag&BAM_FUNMAP)) {
        if ((int)b->core.qual < aux->min_mapQ) b->core.flag |= BAM_FUNMAP;
        else if (aux->min_len && bam_cigar2qlen(&b->core, bam1_cigar(b)) < aux->min_len) b->core.flag |= BAM_FUNMAP;
    (...)

    while (bam_mplp_auto(mplp, &tid, &pos, n_plp, plp) > 0) {
        if (pos < beg || pos >= end) continue; // out of range; skip
        if (bed && bed_overlap(bed, h->target_name[tid], pos, pos + 1) == 0) continue;
        fputs(h->target_name[tid], stdout); printf("\t%d", pos+1);
        for (i = 0; i < n; ++i) { // base level filters have to go here
            int j, m = 0;
            for (j = 0; j < n_plp[i]; ++j) {
                const bam_pileup1_t *p = plp[i] + j;
                if (p->is_del || p->is_refskip) ++m;
                else if (bam1_qual(p->b)[p->qpos] < baseQ) ++m; // low
            }
            printf("\t%d", n_plp[i] - m); // this the depth to output
        }
    }
biostars
{"uid": 107273, "view_count": 7874, "vote_count": 3}
Hi,

I am trying to subsample from a [bam](http://samtools.sourceforge.net/SAM1.pdf) file using the [samtools](http://samtools.sourceforge.net/) view -s command. This works when sampling 50% or lower (-s 42.50, 42 being the seed), but anything higher fails (returns an empty file).

Here are the exact commands I use:

    samtools view -s 0.25 -b chr6_all.bam > chr6_25p.sam

Working.

    samtools view -s 0.50 -b chr6_all.bam > chr6_50p.sam

Working.

    samtools view -s 0.75 -b chr6_all.bam > chr6_75p.sam

Not working.

I also made sure that 49% is working, but 51% is not. Any ideas or suggestions, or is this an intended mechanic? There doesn't seem to be any documentation about the subsampling parameter in the [samtools](http://samtools.sourceforge.net/) doc file.

Thanks.
Subsampling not working for fractions above 50% is a known bug in samtools 0.1.18. (See [[Samtools-help] Randomized Subsampling Bam File / Subsampling above 50%][1].) The bug was fixed in [March last year][2]; samtools 0.1.19 contains the corrected version. [1]: http://sourceforge.net/mailarchive/message.php?msg_id=29174919 [2]: https://github.com/samtools/samtools/commit/8cf329904235d82c6c781113c404f298d06bc2b0
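For reference, the original command should behave as expected once samtools is upgraded; a minimal check, assuming samtools 0.1.19 or newer and reusing the seed.fraction syntax from the question (seed 42, 75% of reads, writing the BAM output to a .bam file):

    # works in samtools >= 0.1.19; -s <seed>.<fraction>
    samtools view -s 42.75 -b chr6_all.bam > chr6_75p.bam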
biostars
{"uid": 76791, "view_count": 50224, "vote_count": 24}
Dear all, I have one problem - it is only one condition in my simple script for counting GC content per row:

    awk 'NR>1{n=length($1); gc=gsub("[gcGC]", "", $1); print gc/n}' $i

How can I count the length of a row while excluding N characters? For example:

    Input: ACAGCTTGCNNNN  =>  length = 9, GC content = 5/9

The format of the output is not important, only how to count it. Thanks a lot
This command should work, but it's not in awk, it is in `bash`. ``` while read p; do len=$(echo $p | sed 's/N//g' | tr -d '\n' | wc -c) cnt=$(echo $p | grep -oh 'C\|G\|g\|c' | tr -d '\n' | wc -c) gc=$(awk "BEGIN {printf \"%.2f\",${cnt}/${len}}") echo -e length:$len --- GC:$gc done<file ```
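If you would rather stay within awk (as in the original one-liner), a minimal sketch along the same lines - assuming, as in the original script, that the sequence is in the first column and the first line is a header - could be:

```
awk 'NR>1{
    seq=$1
    gsub(/[Nn]/, "", seq)        # drop N characters first
    n=length(seq)                # length without Ns
    gc=gsub(/[gcGC]/, "", seq)   # count of G/C bases
    if (n > 0) print gc/n
}' "$i"
```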
biostars
{"uid": 143821, "view_count": 3093, "vote_count": 1}
I have one variant calling file (VCF) which has three samples (affected, control1 and control2). I want to generate variants that are 100% unique to the affected sample and not present in the 2 control samples. The VCF variant lines look like this:

    CHROM  POS  ID  REF  ALT  QUAL  FILTER  INFO  FORMAT          affected     control1     control2
    chr4   x    x   A    C    78    pass    xx    GT:AD:DP:GQ:PL  0/1:x:x:x:x  0/1:x:x:x:x  1/1:x:x:x:x
    chr8   x    x   A    T    1444  pass    xx    GT:AD:DP:GQ:PL  0/1:x:x:x:x  0/0:x:x:x:x  0/0:x:x:x:x
    chr10  x    x   T    C    230   pass    xx    GT:AD:DP:GQ:PL  1/1:x:x:x:x  0/0:x:x:x:x  0/0:x:x:x:x

I would like to get a new variant calling file like this:

    CHROM  POS  ID  REF  ALT  QUAL  FILTER  INFO  FORMAT          affected
    chr8   x    x   A    T    1444  pass    xx    GT:AD:DP:GQ:PL  0/1:x:x:x:x
    chr10  x    x   T    C    230   pass    xx    GT:AD:DP:GQ:PL  1/1:x:x:x:x

I want to generate a new VCF that only contains the affected sample's variant calls that are unique (by comparison with the variants of the 2 controls). How can I do it?
My tool [VCFFilterJS](https://github.com/lindenb/jvarkit/wiki/VCFFilterJS) can do this. Here is a javascript file that select the variant where one sample has a genotype different from the others: function accept(ctx) { var x,y,g1,g2,count_same=0; var sampleList=header.getSampleNamesInOrder(); /** loop over one sample */ for(x=0;x < sampleList.size();++x) { g1=ctx.getGenotype( sampleList.get(x) ); /** ignore non-called */ if(! g1.isCalled() ) continue; count_same=0; /** loop over the other samples */ for(y=0;y< sampleList.size() && count_same==0 ;++y) { if(x==y) continue;/* same sample ?*/ g2=ctx.getGenotype( sampleList.get(y) ); /** ignore non-called */ if(! g2.isCalled() ) continue; /** is g1==g2 ? */ if( g1.sameGenotype( g2 ) ) { count_same++; } } /* found no other same genotype */ if(count_same==0) return true; } return false; } accept(variant); and a test with 1000 genomes: curl "ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20110521/ALL.chr1.phase1_release_v3.20101123.snps_indels_svs.genotypes.vcf.gz" |\ gunzip -c |\ java -jar dist/vcffilterjs.jar -f select.js |\ grep -v "#" | cut -f 1-5 1 13957 rs201747181 TC T 1 51914 rs190452223 T G 1 54753 rs143174675 T G 1 55313 rs182462964 A T 1 55326 rs3107975 T C 1 55330 rs185215913 G A 1 55388 rs182711216 C T 1 55416 rs193242050 G A 1 55427 rs183189405 T C 1 62156 rs181864839 C T 1 63276 rs185977555 G A 1 66457 rs13328655 T A 1 69534 rs190717287 T C 1 72148 rs182862337 C T 1 77470 rs192898053 T C 1 79137 rs143777184 A T 1 81949 rs181567186 T C 1 83088 rs186081601 G C 1 83977 rs180759811 A G 1 84346 rs187855973 T C verify with `rs201747181` curl "ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20110521/ALL.chr1.phase1_release_v3.20101123.snps_indels_svs.genotypes.vcf.gz" |\ gunzip -c |\ grep rs201747181 |\ cut -f 10- |\ tr " " "\n" |\ cut -d ':' -f 1 |\ sort |\ uniq -c 1013 0|0 26 0|1 7 1|0 1 1|1 <============ HERE Here is another example with only one sample: function accept(ctx) { var y,g2; var sampleList=header.getSampleNamesInOrder(); var g1=ctx.getGenotype("M10475"); /** ignore non-called */ if(g1== null || ! g1.isCalled() ) return false; /** loop over the other samples */ for(y=0;y< sampleList.size();++y) { g2=ctx.getGenotype( sampleList.get(y) ); if(g2.getSampleName().equals(g1.getSampleName())) continue; /** ignore non-called */ if(! g2.isCalled() ) continue; /** is g1==g2 ? */ if( g1.sameGenotype( g2 ) ) return false; } /* found no other same genotype */ return true; } accept(variant); exec: $ curl -k "https://raw.github.com/arq5x/gemini/master/test/test5.vep.snpeff.vcf" | java -jar dist/vcffilterjs.jar -f script.js | grep -A 1 CHROM #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT M10475 M10478 M10500 M128215 chr1 145273345 . T C 289.85 . 
AC=3;AF=0.38;AN=8;BaseQRankSum=1.062;CSQ=missense_variant|Tct/Cct|S/P|ENSG00000213240|NOTCH2NL|ENST00000369340|4/6|benign(0.238)|tolerated(0.45),missense_variant|Tct/Cct|S/P|ENSG00000213240|NOTCH2NL|ENST00000362074|3/5|benign(0.238)|tolerated(0.45),missense_variant&NMD_transcript_variant|Tct/Cct|S/P|ENSG00000255168||ENST00000468030|3/23|benign(0.416)|tolerated(0.55),missense_variant|Tct/Cct|S/P|ENSG00000213240|NOTCH2NL|ENST00000344859|3/6|possibly_damaging(0.545)|tolerated(0.44);DP=1000;DS;Dels=0.00;EFF=EXON(MODIFIER|||||RP11-458D21.5|nonsense_mediated_decay|NON_CODING|ENST00000468030|3),NON_SYNONYMOUS_CODING(MODERATE|MISSENSE|Tct/Cct|S67P|230|NOTCH2NL|protein_coding|CODING|ENST00000344859|),NON_SYNONYMOUS_CODING(MODERATE|MISSENSE|Tct/Cct|S67P|236|NOTCH2NL|protein_coding|CODING|ENST00000362074|),NON_SYNONYMOUS_CODING(MODERATE|MISSENSE|Tct/Cct|S67P|236|NOTCH2NL|protein_coding|CODING|ENST00000369340|);FS=3.974;HRun=1;HaplotypeScore=17.4275;MQ=29.25;MQ0=0;MQRankSum=-1.370;QD=0.39;ReadPosRankSum=-1.117 GT:AD:DP:GQ:PL 0/0:226,22:250:99:0,158,4259 0/1:224,24:250:6:6,0,5314 0/1:219,28:249:57:57,0,5027 0/1:215,34:250:99:269,0,3796
biostars
{"uid": 88921, "view_count": 6562, "vote_count": 2}
I'm relatively new in this area. Are there instances where published public-domain data is reanalysed in a context similar to its original one and reported? Are such papers accepted by well-known journals?

As technology and our understanding improve, we can generate more information, but is there any interest in reanalysing published data? How about ethics? Bioinformatics offers the opportunity to analyse or extend the existing pool of knowledge, but I haven't seen many people out there doing this, except those involved in large-scale projects/collaborations. Please share your experiences or any papers that are relevant.
Public data is re-used in new publications in a variety of ways, so I would say this is one of the core features of bioinformatics. Of course, the data should then be used either:

- to correct old features, or
- to show new characteristics.

I would say the latter has a much better chance of being published, except if the corrections are substantial.

Just for fun, and because I think public data is very important for the development of the bioinformatics field, I will quote what one of my previous advisors would say: "Of course most of what we do should be called 'parasitic bioinformatics'. After all, we keep using public data -- generated by other people -- to answer our own questions!"

I could give you an enormous list of papers re-analysing public data :-)
biostars
{"uid": 47521, "view_count": 2780, "vote_count": 4}
Dear all, I have downloaded the GRCh38 reference human genome in FASTA format from ftp.ncbi.nlm.nih.gov, following the suggestions of this [post][1] and [this one][2]. Now I need to generate a fusion genome, so I need to get the headers right. The GRCh38 headers are in this form:

    >chr1 AC:CM000663.2 gi:568336023 LN:248956422 rl:Chromosome M5:6aef897c3d6ff0c78aff06ac189178dd AS:GRCh38
    >chr2 AC:CM000664.2 gi:568336022 LN:242193529 rl:Chromosome M5:f98db672eb0993dcfdabafe2a882905c AS:GRCh38
    >chr3 AC:CM000665.2 gi:568336021 LN:198295559 rl:Chromosome M5:76635a41ea913a405ded820447d067b0 AS:GRCh38
    [...]
    >chrUn_GL000218v1 AC:GL000218.1 gi:224183305 LN:161147 rl:unplaced M5:1d708b54644c26c7e01c2dad5426d38c AS:GRCh38
    >chrEBV AC:AJ507799.2 gi:86261677 LN:171823 rl:decoy M5:6743bd63b3ff2b5b8985d8933c53290a SP:Human_herpesvirus_4 tp:circular

May I ask what the format of the header is, and what the individual fields refer to? Thank you

[1]: http://lh3.github.io/2017/11/13/which-human-reference-genome-to-use
[2]: https://www.biostars.org/p/318811/#334309
    >chr1 AC:CM000663.2 gi:568336023 LN:248956422 rl:Chromosome M5:6aef897c3d6ff0c78aff06ac189178dd AS:GRCh38

- AC: accession number, https://www.ncbi.nlm.nih.gov/nuccore/CM000663.2
- gi: NCBI global identifier, https://www.ncbi.nlm.nih.gov/nuccore/568336023
- LN: length
- rl: Assigned-Molecule-Location/Type (chromosome, mitochondrial, ...)
- M5: md5 checksum, https://en.wikipedia.org/wiki/Md5sum
- AS: assembly
biostars
{"uid": 350594, "view_count": 2511, "vote_count": 1}
I have a list of 3000 genes and their start and end locations as shown below. I need to extract the SNPs from a VCF file within these locations. What would be the easiest way to do this (or preferably in R)?

      gene_id symbol chromosome start_location end_location
    1 "1"     "A1BG"   "19"      " 58858171"    " 58864865"
    2 "10"    "NAT2"   "8"       " 18248754"    " 18258723"
    3 "100"   "ADA"    "20"      " 43248162"    "43280376"
    4 "1000"  "CDH2"   "18"      " 25530926"    " 25616549"
Ok I have finally managed to write this code in R where you can select the genes from `biomart` or sample it randomly to extract the SNPs from VCF files separated per chromosome. It is not dandy, but does the job and can be customized as needed. #Extracting some genes and positions from Biomart #Using biomart library(biomaRt) listMarts() ensembl=useMart("ensembl") #To use hg19/GrCh37 ensembl <- useMart(biomart="ENSEMBL_MART_ENSEMBL", host="grch37.ensembl.org", path="/biomart/martservice" ,dataset="hsapiens_gene_ensembl") # grch37 = useMart(biomart="ENSEMBL_MART_ENSEMBL", host="grch37.ensembl.org", path="/biomart/martservice") # listMarts(grch37) ensembl<- useDataset("hsapiens_gene_ensembl",mart=ensembl) # from ensembl using homosapien gene data listFilters(ensembl) listDatasets(ensembl) # #genes.with.id <- getBM(attributes=c("ensembl_gene_id", "external_gene_id"),values=gene_names, mart= ensembl) # human <- useMart("ensembl", dataset = "hsapiens_gene_ensembl") # cc <- getBM(attributes = c("hgnc_symbol","chromosome_name", "start_position","end_position"), # filters = "hgnc_symbol", values = *,mart = human) all.genes <- getBM(attributes=c('ensembl_gene_id','gene_biotype','hgnc_symbol','chromosome_name','start_position','end_position'), mart = ensembl) colnames(all.genes) # "ensembl_gene_id" "gene_biotype" "hgnc_symbol" "chromosome_name" "start_position" "end_position" ##getting only protein coding genes all.genes <- all.genes[(!(all.genes[,"hgnc_symbol"])=="" &all.genes[,"gene_biotype"]=="protein_coding" & grepl("(^[0-9]+$)|^(X|Y)$",all.genes[,"chromosome_name"]) & !duplicated(all.genes[,"hgnc_symbol"])),] ##################################### library(stringr) library(VariantAnnotation) library(TxDb.Hsapiens.UCSC.hg19.knownGene) txdb <- TxDb.Hsapiens.UCSC.hg19.knownGene path.file<-"/mydir/" all.files <- list.files("/mydir") all.files <- all.files[grepl("recalibrated.vcf$",all.files)] ##for single file; for loop go below # bgzip("/mydir/myvcf.chr1.recalibrated.vcf", overwrite=FALSE) # indexTabix("/mydir/myvcf.chr1.recalibrated.vcf.bgz", format="vcf") ##creating .bgz and .bgz.tbi files # all.files <- list.files("/mydir", pattern = "*.vcf$", full.names = TRUE) #this bit changes the vcf to .gz format setwd(path.file) for (i in all.files) { file_gz <- paste0(i, ".gz") file_gz_tbi <- paste0(i, ".gz.tbi") if(!exists(file_gz)) bgzip(i, paste0(i, ".gz")) if(!exists(file_gz_tbi)) indexTabix(file_gz, format = "vcf") } save.genes<-all.genes #all.genes #this is the table of all genes in bed format all.files <-list.files("/mydir") # all.files <- list.files("/mydir", pattern = "*.gz$", # full.names = TRUE) all.gz.files<-all.files[grepl(".gz$",all.files)] save.all.genes<-save.genes ##Extracting the genes from VCF # i<-2 for(i in 1:length(all.gz.files)){ print(paste0("Doing chromosome:",i)) #if (save.all.genes[,"chromosome_name"]== sub(".*?chr(.*?)\\.recalibrated.*", "\\1", all.bgz.files[i])){ #This bit here selects only the chromosome number: sub(".*?chr(.*?)\\.recalibrated.*", "\\1", all.gz.files[i]) all.genes <- save.all.genes[save.all.genes[,"chromosome_name"]== sub(".*?chr(.*?)\\.recalibrated.*", "\\1", all.gz.files[i]),] if(!is.na(all.genes[,"chromosome_name"][1]) & all.genes[,"chromosome_name"][1]==sub(".*?chr(.*?)\\.recalibrated.*", "\\1", all.gz.files[i])){ ##file.gz <- "/mydir/myvcf.chr1.recalibrated.vcf.bgz" file.gz<-all.gz.files[i] stopifnot(file.exists(file.gz)) file.gz.tbi <- paste(file.gz, ".tbi", sep="") if(!(file.exists(file.gz.tbi))) indexTabix(file.gz, format="vcf") # start.loc <- 45959538 
# end.loc <- 203047868 #chr1.gr <- GRanges("chr1", IRanges(start.loc, end.loc)) chr.gr <- GRanges(paste0("chr",all.genes$chromosome_name), IRanges( all.genes$start_position, all.genes$end_position)) params <- ScanVcfParam(which=chr.gr) vcf <- readVcf(TabixFile(file.gz), "hg19", params) #locateVariants(vcf[6,], txdb, AllVariants()) ##Writing the vcf subset writeVcf(vcf, paste0("chr.",sub(".*?chr(.*?)\\.recalibrated.*", "\\1", all.gz.files[i]),".test-sub.vcf")) }else{ next; } }
biostars
{"uid": 187830, "view_count": 8831, "vote_count": 1}
Hello, I'm trying to remove unwanted variables using RUVSeq and SVA, but I'm running into an issue with both libraries.

With RUVSeq I get this "list" error:

    > for(k in 1:4) {
    +   set_g <- RUVg(x = gg, cIdx = house_keeping_genes, k = k)
    +   DESeq2::plotPCA(set_g, col=as.numeric(coldata$group))
    + }
    Error in Ycenter[, cIdx] : invalid subscript type 'list'

while with SVA I get this:

    > res <- svaseq(norm_by_count_per_million, mod, mod0)
    Number of significant surrogate variables is: 46
    Iteration (out of 5 ):Error in density.default(x, adjust = adj) : 'x' contains missing values

I've found a couple of references but haven't been able to fix these issues. Full reproducible example below; any help would be much appreciated!

    library(SummarizedExperiment)
    library(RUVSeq)
    library(sva)

    # http://duffel.rail.bio/recount/v2/TCGA/rse_gene_bladder.Rdata
    load('data/rse_gene_bladder.Rdata')

    unique(rse_gene$gdc_cases.diagnoses.tumor_stage)
    colData(rse_gene)$group <- NA
    rse_gene$gdc_cases.diagnoses.tumor_stage == "stage iv"
    colData(rse_gene)$group[rse_gene$gdc_cases.diagnoses.tumor_stage == "stage i"] <- "early"
    colData(rse_gene)$group[rse_gene$gdc_cases.diagnoses.tumor_stage == "stage ii"] <- "early"
    colData(rse_gene)$group[rse_gene$gdc_cases.diagnoses.tumor_stage == "stage iii"] <- "late"
    colData(rse_gene)$group[rse_gene$gdc_cases.diagnoses.tumor_stage == "stage iii"] <- "late"

    keep <- !is.na(rse_gene$group)
    rse_gene <- rse_gene[, keep]
    table(rse_gene$group)

    counts <- assay(rse_gene, "counts")
    norm_by_count_per_million <- sweep(counts, 2, FUN="/", colSums(counts)) * 10^6
    coldata <- colData(rse_gene)

    HK_genes <- read.table(url("https://m.tau.ac.il/~elieis/HKG/HK_genes.txt"))
    house_keeping_genes <- intersect(rowRanges(rse_gene)$symbol, HK_genes$V1)

    gg <- ceiling(norm_by_count_per_million)
    rownames(gg) <- as.vector(rowRanges(rse_gene)$symbol)

    par(mfrow = c(2, 2))
    for(k in 1:4) {
      set_g <- RUVg(x = gg, cIdx = house_keeping_genes, k = k)
      DESeq2::plotPCA(set_g, col=as.numeric(coldata$group) )
    }

    mod <- model.matrix(~group, data=coldata)
    mod0 <- model.matrix(~1, data=coldata)
    res <- svaseq(norm_by_count_per_million, mod, mod0)
The `?RUVg` help: > cIdx A character, logical, or numeric vector indicating the subset of genes to be used as negative controls in the estimation of the factors of unwanted variation. But `class(house_keeping_genes)` returns `list`, hence `as.character(house_keeping_genes)` should work. I would recommend normalizing data with DESeq2 or edgeR, not with this plain per-million scaling. Here is a full DESeq2+RUVseq analysis from the DESeq2 author: https://github.com/mikelove/preNivolumabOnNivolumab/blob/main/preNivolumabOnNivolumab.knit.md
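For completeness, a minimal sketch of that fix applied to the loop from the question (assuming the rest of the reproducible example above has already been run; `unlist()` is an alternative if the object is a list of character vectors):

    house_keeping_genes <- as.character(house_keeping_genes)  # coerce the list to a character vector

    par(mfrow = c(2, 2))
    for (k in 1:4) {
      set_g <- RUVg(x = gg, cIdx = house_keeping_genes, k = k)
      DESeq2::plotPCA(set_g, col = as.numeric(coldata$group))
    }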
biostars
{"uid": 9527660, "view_count": 770, "vote_count": 1}
I have many VCFs from different samples. If I merge these into a single VCF using `vcftools` (`vcf-merge`), the samples where a variant wasn't called are labeled as missing that variant. Instead, I want the VCF to show that the sample has the reference allele (safe to assume in my application). Is there a way to call missing variants in a VCF as the reference allele? What tools can I use to do this? **EDIT:** The sequences were originally variant called using FreeBayes (through the LongRanger pipeline). **RE-EDIT:** Turns out I can simply use the `--ref-for-missing` flag in `vcf-merge` to achieve this. Problem solved. **RE-RE-EDIT:** Using `--ref-for-missing` flag in `vcf-merge` does of course not give the variants any annotation, like depth and genotype quality.
(The best way is to call all the BAMs in the same command, to get a multi-sample VCF.)

I've written two tools related to your question:

* **VcfNoCallToHomRef** http://lindenb.github.io/jvarkit/VcfNoCallToHomRef.html will arbitrarily convert the `./.` to `0/0`.
* **FixVcfMissingGenotypes** http://lindenb.github.io/jvarkit/FixVcfMissingGenotypes.html will re-use the BAM to test the depth at each NO-CALL site. It's slow.
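If you would rather stay within bcftools, a possible alternative is its missing2ref plugin, which rewrites missing genotypes as homozygous reference; a hedged sketch, assuming a bcftools build with the bundled plugins available (and, like the arbitrary conversion above, it will not add depth or quality annotations):

    # rewrite ./. genotypes as 0/0 in the merged VCF
    bcftools +missing2ref merged.vcf.gz -Oz -o merged.ref_filled.vcf.gz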
biostars
{"uid": 276811, "view_count": 4505, "vote_count": 3}
Hi,

I heard about a tool which lets you search for papers based on a set of genes that you input, and it returns papers that have mentioned a significant set of those genes.

Does anyone know what tool this is?

Thanks
In R you can do this using Org.HS. Below is an example: ``` source("http://bioconductor.org/biocLite.R") biocLite("org.Hs.eg.db") library("org.Hs.eg.db") # mapped_genes are all the genes that org.HS.egPMID covers (the HS refers to homosapien). These genes are in Entrez format. # entrez2Pmid is a list taking entrez gene ids to a vector of Pmids. mapped_genes <- mappedkeys(org.Hs.egPMID) entrez2Pmid <- as.list(org.Hs.egPMID[mapped_genes]) # now if you have an entrez gene id you can look up the relevant papers by doing entrez2Pmid[myEntrezGene] ``` [Here][1] is a pastebin of the code since I stink at using the biostars code formatter There is a tutorial [here][2]. Also you can just use http://idconverter.bioinfo.cnio.es/IDconverter.php. Although I've found that sometimes the results seem to be a subset of the results you get using the method outlined above. [1]: http://pastebin.com/JCa6L7sB [2]: http://www.bioconductor.org/packages/release/data/annotation/manuals/org.Hs.eg.db/man/org.Hs.eg.db.pdf
biostars
{"uid": 106608, "view_count": 1561, "vote_count": 1}
Hi all, I have a file like this:

    group1 group2 group3 group4
    ax     as     we     aw
    as     we     rt     ty
    aw     aq     yu     pl
    aq     qw     oo     se

I need to count the pairwise overlaps of unique elements between the columns and save the result in matrix form like this:

            group1 group2 group3 group4
    group1  4      2      0      1
    group2  2      4      1      0
    group3  0      1      4      0
    group4  1      0      0      4

I used the R intersect function for this, but I was not able to save the output in matrix form.
How about double loop: # example data df1 <- read.table(text = " group1 group2 group3 group4 ax as we aw as we rt ty aw aq yu pl aq qw oo se ", header = TRUE, stringsAsFactors = FALSE) # double loop: get length of intersect sapply(df1, function(i) sapply(df1, function(j) length(intersect(i, j)))) # group1 group2 group3 group4 # group1 4 2 0 1 # group2 2 4 1 0 # group3 0 1 4 0 # group4 1 0 0 4
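Since the original difficulty was saving the result in matrix form, the same expression can simply be assigned and written to disk; a minimal sketch (the output file name is just an example):

    m <- sapply(df1, function(i) sapply(df1, function(j) length(intersect(i, j))))
    # col.names = NA keeps an empty corner cell so row and column names line up
    write.table(m, file = "overlap_matrix.tsv", sep = "\t", quote = FALSE, col.names = NA)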
biostars
{"uid": 410324, "view_count": 568, "vote_count": 1}
Hi,

I have a Python script which takes as input a list of SNPs, then calls [`tabix`](http://samtools.sourceforge.net/tabix.shtml) to retrieve a VCF file and does some stuff with it. The problem is that I need to manually add the coordinates of the SNPs (using the UCSC genome browser website) for tabix to work properly.

Does anyone know a simple Python API / function / module to easily add this information without having to perform the search manually?

Thanks!
You can do this with [cruzdb](http://pypi.python.org/pypi/cruzdb/):

    from cruzdb import Genome
    import sys

    fname = sys.argv[1]

    hg19 = Genome('hg19')
    snp135 = hg19.snp135

    for rs in (l.split()[0].rstrip() for l in open(fname) if l.startswith("rs")):
        print snp135.filter_by(name=rs).first()

which takes a file with the rs IDs in the first column and prints out a BED file with the chromosome locations from dbSNP 135.
biostars
{"uid": 59249, "view_count": 11159, "vote_count": 2}
Hi, I'm very confused about the IGH gene. On NCBI it looks like a single IGH@ gene or locus, yet the region seems to cover many other IGH genes (IGHD1, IGHG1, etc.) - this is how it looks in the [browser][1].

The reason I'm interested is gene fusions where the 3' gene is written as IGH. However, this is ambiguous because this region seems to cover many variants. My questions are the following:

1. Is IGH a single gene or a locus containing many small variants? Or is this a gene that gets post-processed after joining different segments?
1b. Do these IGH "variants" contain exon/intron boundaries?
2. On the CCDS website I cannot seem to locate any IGH genes at all. What I want is the coordinates and sequences for each variant.
3. Is there a place where I can get the sequences and coordinates for each segment, ideally from CCDS?

Thanks in advance.

[1]: https://genome.ucsc.edu/cgi-bin/hgTracks?db=hg38&lastVirtModeType=default&lastVirtModeExtraState=&virtModeType=default&virtMode=0&nonVirtPosition=&position=chr14%3A105586437%2D106879844&hgsid=1475821023_Ps5LU7lpkag5rANMUtJpmWA2WtQW
1.The IGH locus contains many V, D, and J gene segments, as well as constant region genes. The exact number of V, D, and J genes will depend on the species. For example, cows have 12 V, 16 D, and 4 J regions at the IGH locus. Canonical antibodies are composed of two heavy chains (HC) and two light chains (LC). B cells make antibodies. During B cell development, a V, a D, and a J gene combine to form the variable region of the HC. The variable region is joined to a constant region to make a full length HC. See the Wikipedia link below for a description of V(D)J recombination: [V(D)J recombination][1] 1b. Yes, there are introns and exons in IGH. 2 and 3. You can find sequences and coordinates of IGH genes for human, mouse, pig, teleostei, chondrichthyes, camel, llama, alpaca, caiman, sheep, frog, rat, nonhuman primates, rabbit, platypus, bovine, chicken, dog, Atlantic salmon, rainbow trout, horse, and ring-tailed lemur in the IMGT gene tables: [IMGT Repertoires][2] Note: antibodies also contain signal sequences (called L-part on IMGT) that direct the antibody to be displayed on the plasma membrane. [1]: https://en.wikipedia.org/wiki/V(D)J_recombination [2]: https://www.imgt.org/IMGTrepertoire/LocusGenes/#F
biostars
{"uid": 9541901, "view_count": 370, "vote_count": 1}
Hi all, I have 4 bam files which correspond to 4 runs of one library. I want to merge those 4 files in order to do some variant calling analysis with only one file per sample. I tried some basic methods, with samtools merge and MergeSamFiles from Picard tools, but I am now not sure what these commands do exactly, because at the end I don't find the same number of SNPs with one way or the other. So my questions are: What is the proper method for merging multiple related bam files from the same sample into one? And what is the difference between samtools merge and MergeSamFiles from Picard tools? I know this is a really basic question and I apologize for that, but I have never done this kind of analysis before.
I am wondering whether the differences you get between using MergeSamFiles and samtools merge may be related to improperly handling read group (RG) information. Each of your BAM files should have a specific RG assigned. This information is taken into account during different steps of variant-calling pipelines: duplicates filtering, variant calling... I would inspect the merged BAM file to see whether RG tags have been properly assigned.
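A quick way to check this, and a hedged sketch of assigning read groups per run before merging (using Picard's AddOrReplaceReadGroups; the file names and RG values below are placeholders to replace with your own run/library information):

    # list the @RG lines of each BAM before merging
    samtools view -H run1.bam | grep '^@RG'

    # assign a distinct read group to one run (values are placeholders)
    java -jar picard.jar AddOrReplaceReadGroups \
        I=run1.bam O=run1.rg.bam \
        RGID=run1 RGLB=lib1 RGPL=ILLUMINA RGPU=unit1 RGSM=sample1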
biostars
{"uid": 173340, "view_count": 9904, "vote_count": 2}
Hi everyone, I am working on calculating the substitution rate of noncoding RNAs. If I have a multiple sequence alignment from three species, and the topology of the tree is ((1,2),3), how can I use baseml and the REV substitution model to get my desired result? I have tried the examples 'brown.nuc' and 'brown.trees' in the PAML 4.7 package, but I cannot make out which number is the substitution rate of those sequences. I have read a lot of papers, but the formulas are too difficult to understand. Can you give me a suggestion? Any reply will be appreciated!
So, it's been a while since I did this type of analysis, but there are two levels of modeling:

1) Modeling substitution rates between two samples (to try and take the probability of multiple substitutions at the same site into consideration). This Wikipedia page talks about the models for doing this:

http://en.wikipedia.org/wiki/Models_of_DNA_evolution

2) Using an outgroup (sample 3 in your case) to try and estimate the ancestral state (or, perhaps more precisely, the rate of divergence from an ancestral state). In this particular paper (looking at substitutions at intergenic and synonymous sites), this was done with a relative rate test. See links below for the paper and a description of the strategy:

http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.0020163

http://en.wikipedia.org/wiki/Relative_rate_test

From a practical standpoint, I don't remember what specific program outputs look like. However, I know MEGA is typically one of the most popular packages for evolutionary genomics analysis, and I think it has fairly comprehensive documentation:

http://www.megasoftware.net/

If you can get a hold of this textbook, it might also be helpful to you:

http://www.amazon.com/Fundamentals-Molecular-Evolution-Dan-Graur/dp/0878932666
biostars
{"uid": 76701, "view_count": 4063, "vote_count": 1}
Hi everyone, I'm having trouble trying to filter BLAST result outputs. I'm using a huge number of sequences as queries against a certain genome in a local tblastn, which gives me a .txt output. The thing is, I need to extract the best hits, which I've defined as the lowest e-value, for each genomic region that the genome is divided into. I tried sorting in Excel with the Filter command, but as the e-value is presented like '1.08e-108', Excel only considers the numbers before the 'e'. So, in a hypothetical list containing the e-values 1.08e-108, 2.34e-10 and 1.03e-03, Excel always chooses 1.03e-03. The next thing I tried was sorting each genomic region using pandas, for which I transformed the .txt output from BLAST into a dataframe for easier manipulation, but the same thing happened as in Excel. So I'm currently selecting each best hit manually, and it is taking too much time. Here's an example of the output:

```
BrflORs150.1 KN907735.1 23.616 271 186 6 40 299 80310 81092 1.41e-12 75.1
BrflORs150.2 KN907735.1 24.242 264 178 6 41 296 80313 81062 7.55e-09 63.5
BrflORs155.1 KN907735.1 24.825 286 204 4 23 303 80253 81092 1.29e-17 92.4
BrflORs155.1 KN907735.1 22.388 268 188 7 33 290 181025 181798 1.24e-10 70.1
BrflORs155.1 KN907735.1 24.908 273 181 5 41 302 32141 32920 1.84e-10 69.7
BrflORs155.1 KN907735.1 24.254 268 187 7 39 298 191353 192132 2.81e-10 68.9
BrflORs155.1 KN907685.1 24.739 287 199 8 25 303 37370 38203 9.68e-13 77.0
BrflORs155.1 KN907685.1 25.926 297 189 12 20 301 14077 14919 9.72e-09 63.9
BrflORs155.1 KN909062.1 21.379 290 204 6 23 300 50032 49199 3.01e-12 75.5
BrflORs155.1 KN909062.1 23.132 281 198 5 27 298 33061 32246 7.06e-11 70.9
BrflORs155.1 KN907432.1 25.862 290 181 8 28 300 166293 165475 2.98e-11 72.0
BrflORs155.1 KN906695.1 26.102 295 191 9 22 303 463829 464671 1.27e-10 70.1
BrflORs155.1 KN906695.1 26.689 296 188 8 22 303 485691 486533 3.83e-10 68.6
```

From those, for the KN907735.1 region, for example, I'd need to select only the query with the e-value of 1.29e-17, because it is the lowest one.
Yes! It worked perfectly, thank you guys! I'm very happy right now hahahaa, it has been consuming me for months! Can I ask just one more question?? For each genomic region (column 2), there are more than one candidate for Best hit, because a genomic region is composed of thousands of nucleotide bases and the candidates I'm looking for have around 750bp. So, for example, in a genomic region KV926207.1, the following result is presented: query acc.ver subject acc.ver % identity alignment length mismatches gap opens q. start q. end s. start s. end evalue bit score BrflORs155.1 KV926207.1 25.510 294 190 9 28 304 130633 131478 8.49e-14 81.6 BrflORs155.1 KV926207.1 24.014 279 191 6 32 298 142379 143188 7.75e-11 72.4 BrflORs157.1 KV926207.1 27.113 284 179 8 46 318 130675 131475 1.70e-17 92.8 BrflORs157.1 KV926207.1 26.259 278 164 7 51 310 173618 174382 1.89e-13 80.1 BrflORs157.1 KV926207.1 23.711 291 195 8 37 317 101835 102656 2.25e-13 80.1 BrflORs157.1 KV926207.1 25.000 296 182 7 34 310 142373 143197 2.31e-12 76.6 From those, the best hits may be the ones below. Because, taking for example queries BrflORs155.1 and BrflORs157.1: both hit the same genomic region that is arround 130600 to 131400, but BrflORs157.1 is chosen as the best hit because presented the lowest evalue. The other regions comprising 101800 to 102600, 142300 to 143000 and 173000 to 174000, may contain other best hits because they are very separated from each other. BrflORs157.1 KV926207.1 27.113 284 179 8 46 318 130675 131475 1.70e-17 92.8 BrflORs157.1 KV926207.1 25.000 296 182 7 34 310 142373 143197 2.31e-12 76.6 BrflORs157.1 KV926207.1 26.259 278 164 7 51 310 173618 174382 1.89e-13 80.1 BrflORs157.1 KV926207.1 23.711 291 195 8 37 317 101835 102656 2.25e-13 80.1 I know now how to get the lowest evalue for a certain evalue, but is there a way I can get other best hits for a certain genomic region (for example, KV926207.1), putting some kind of filter saying something like 'Take the lowest evalue in a genomic region if the subject start and end are higher than 1000bp'? I really appreciated the help. I'm studying right now the sort command on linux so I can get a way to do that :)
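Since pandas was already part of the attempted workflow, here is a minimal sketch of the "lowest e-value per subject" step discussed above - assuming the standard 12-column tabular BLAST output shown earlier; the file and column names are placeholders, and the window-based grouping from the follow-up question is not handled here:

```
import pandas as pd

cols = ["query", "subject", "pident", "length", "mismatch", "gapopen",
        "qstart", "qend", "sstart", "send", "evalue", "bitscore"]
# adjust sep if the output is whitespace- rather than tab-delimited
df = pd.read_csv("blast_output.txt", sep="\t", names=cols)

# make sure e-values are numeric, not strings (the root of the Excel-style sorting problem)
df["evalue"] = pd.to_numeric(df["evalue"])

# keep the row with the lowest e-value for each genomic region (subject)
best = df.loc[df.groupby("subject")["evalue"].idxmin()]
best.to_csv("best_hits.tsv", sep="\t", index=False)
```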
biostars
{"uid": 491301, "view_count": 827, "vote_count": 1}
I am trying to download an entire dataset for a BioProject using esearch and efetch from the Entrez Utilities. My syntax is based on syntax posted by @Istvan Albert at https://www.biostars.org/p/111040/#359440, which is:

    esearch -db sra -query PRJNA40075 | efetch --format runinfo | cut -d ',' -f 1 | grep SRR | head -5 | xargs fastq-dump -X 10 --split-files

For the BioProject PRJNA269201 I am interested in, the slightly truncated syntax shown below creates 144 empty files as expected:

    esearch -db sra -query PRJNA269201 | efetch --format runinfo | cut -d ',' -f 1 | grep SRR | xargs touch

However, when I try the full-length syntax, it behaves differently from what I expected under both scenarios 1 and 2 detailed below.

**Scenario 1**. On the head node of a cluster:

    esearch -db sra -query PRJNA269201 | efetch --format runinfo | cut -d ',' -f 1 | grep SRR | head -2 | xargs fastq-dump --split-files

One file finished downloading, but it is 5.5G, which is way larger than the 1.2GB I expected based on info at this [link][1] - is this difference because of file compression? How can I download a much more compressed version, for both storage and downstream RNA-Seq analyses?

    -rw-rw-r-- 1 aksrao aksrao 1.1G Jan 19 19:47 SRR1726554_1.fastq
    -rw-rw-r-- 1 aksrao aksrao 5.5G Jan 19 19:44 SRR1726553_1.fastq

**Scenario 2**. When I try to submit this as a shell script, the STDERR stream (SLURM queue management on an Ubuntu cluster) captures the following error message:

    2019-01-20T02:28:55 fastq-dump.2.8.2 err: param empty while validating argument list - expected accession

This same problem was reported on the original post by user @bandanaschapagain, but it may not have been answered and resolved, hence I am posting this afresh. Could someone please help me? Thank you!

[1]: https://www.ncbi.nlm.nih.gov/sra/?term=SRR1726553
I don't think you're doing anything wrong; the first run (SRR1726554) matches what's on SRA (1.1G). I downloaded SRR1726553 myself and also got a .fastq file of 5.6G. It could be that the SRA metadata is wrong; I would contact them and ask for more information (e-mail at <[email protected]>). You can get a compressed version by calling `--gzip` in your fastq-dump calls. Most aligners will accept gzipped fastq files as input. My full fastq-dump command is: `fastq-dump $SRA_FILE --outdir $SRA_DIR --gzip --skip-technical --readids --dumpbase --split-files --clip` I would recommend reading the [Edward's lab fastq-dump article][1] to learn more about some useful options. For your error message in Scenario 2; I suspect your accession is not getting passed correctly (based on the `expected accession` part.) Maybe add some print calls (like wrapping the fastq-dump part with an `echo` and writing it to a file) in your shell script to see what command it's actually trying execute? The error message seems rather rare, so maybe it's worth asking SRA support about that too. [1]: https://edwards.sdsu.edu/research/fastq-dump/
biostars
{"uid": 359441, "view_count": 7056, "vote_count": 2}
I want to normalise my paired-end ChIP-seq data (4 samples treated, 4 samples control) to reads per million (RPM). So far my calculation takes the number of reads mapped to a position, divides it by the total number of mapped reads in the sample, and then multiplies this by one million. My question is: should the number of mapped paired-end reads be counted as 1 or 2 in the calculation? I believe samtools reports the total number of mapped reads as half the actual number of paired-end reads that map?

My current pseudo-script for normalising to RPM:

- mappedReads = `samtools view -c -F 4 input.bam`
- scalingFactor = 1000000 / mappedReads
- `bedtools genomecov -ibam input.bam -bg -scale scalingFactor -g chrom.sizes > output.bedGraph`
- `bedGraphToBigWig output.bedGraph chrom.sizes output.bigWig`
You can use [deepTools][1] to do the normalisation that you want. The output is a bigwig file. It is quite fast if you have multiple cores. bamCoverage -b file.bam --normalizeUsingRPKM -o file.bw The program first extends the read to match the paired-end length before computing the coverage. I opted to count paired end reads as 2 and not as 1 to avoid a bias when a read is not properly paired which could be a significant fraction. [1]: http://deeptools.github.io/
biostars
{"uid": 129912, "view_count": 9038, "vote_count": 1}
I'm working with a [LINCS L1000 dataset][1] that gives the GE of a cell line before and after perturbation by a small molecule. I am using Level 4 data. After loading the .gct file into matlab, I get a matrix of 22268-by-40172 as well as a vector of column_ids and a vector of row_ids. Using the row ids and the gene metadata txt file included in the download, I know that each row represents a gene. I can't figure out what a column represents. Obviously, each columns is a single experiment but I can't understand what each id means. For example, here is a column id "LJP001_BT20_24H_X1_B2_DUO52HI53LO:A03". So far, I know that "LJP001" refers to LINCS Joint Project and "BT20" refers to the specific cell line. Somewhere, it must contain information about the small molecule used as a pertubagen but I don't know how to interpret this. Any help would be greatly appreciated! [1]: http://lincsportal.ccs.miami.edu/datasets/#/view/LDS-1203
To answer my own question. The column ids for Level 3 and Level 4 data is basically the distil_id. The example I posted LJP001_BT20_24H_X1_B2_DUO52HI53LO:A03 can be broken into - the perturbagen group "LJP001" - the cell line "BT20" - the brew prefix "LJP001_BT20_24H" - the plate index "X1_B2_DUO52HI53LO" - the well index "A03" - the distil_id "LJP001_BT20_24H_X1_B2_DUO52HI53LO_A03" (note the switch from ':' to '_') It turns out that the distil_id doesn't contain enough information to identify the perturbagen used. To identify this, you need to use the [LINCS api][1]. Here is [more information][2] about using the LINCS api to query the metadata. I also used [this Coursera video][3] as a reference. Note that the example given in the question doesn't work with the API. [1]: http://api.lincscloud.org/a2/docs/instinfo [2]: http://support.lincscloud.org/hc/en-us/articles/202107633-Metadata-Getting-Lists-of-Perturbagens-Cell-Lines-Cell-Types [3]: https://www.coursera.org/learn/bd2k-lincs/lecture/grLin/introduction-to-lincs-l1000-data
biostars
{"uid": 211896, "view_count": 4011, "vote_count": 2}
In GSEA, what does it mean when gene sets are enriched at a nominal p-value < 1%? And what does it mean with respect to a specific phenotype, for example cancer in a cancer vs. normal comparison?
Hello,

The p-value, or calculated probability, is the probability of finding the observed, or more extreme, results when the null hypothesis of a study question is true. Having a p-value < 1% means that it is very likely that we will reject the null hypothesis - in your case, the hypothesis that these enriched gene sets are not different between the cancer and the normal samples. In other words, you can say that your enriched gene sets are statistically different between the cancer and the normal samples.

You can watch this video if you have more doubts: [StatQuest: P Values, clearly explained][1]

Best

[1]: https://www.youtube.com/watch?v=5Z9OIYA8He8
biostars
{"uid": 9537071, "view_count": 329, "vote_count": 1}
Hi, I need to change the affection status in my PLINK files (data1.bed, data1.bim, data1.fam) **to '2' for all samples**. To do this, I was thinking of: 1\. Convert bed to ped using: plink --bfile data1 --recode --tab --out data2 2\. Copy `data1.fam` as `data2.fam` 3\. Change the 6th column of both the `data2.fam` and `data2.ped` files so that the new affection status is '2' for everything 4\. Convert the modified ped file back to bed using: plink --ped data2.ped --map data2.map Do these steps look complete and correct for what I am trying to do? Thanks
You could: 1) Change the affected status in the fam file. Smaller file, less steps. or 2) Use the option `--pheno` to use an alternate phenotype.
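For option 1, a minimal sketch of setting the sixth column of the .fam file to 2 directly (file names taken from the question; keep a backup first):

    cp data1.fam data1.fam.bak
    # the 6th column of a .fam file is the phenotype/affection status
    awk '{$6=2; print}' data1.fam.bak > data1.fam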
biostars
{"uid": 148849, "view_count": 6764, "vote_count": 1}
Hello Biostars Community,

General questions here, specifically in regards to sequencing platforms, plus some related questions on sequencing as well.

1. Could using a newer/the newest computational genome/annotation (for example, presently, Ensembl 107 or the newest Gencode version) adversely affect the actual truth of what was sequenced?

2. When sequencing is done on an Illumina machine or other big-name company machines, are those sequencing platforms *completely independent* from the genome or DNA/cDNA being sequenced?

3. What happens if, for example, "famous gene ABC" and "low-profile gene XYZ" are found to have different 3' and/or 5' ends by some new discovery? On Illumina, would adapters still link to them to perform those bridge PCR reactions on the flow cell lanes, or would it be that all the data published before on "famous gene ABC" and "low-profile gene XYZ" should be revisited? Or is it like question #2, "completely independent" - are even adapters independent from genes? I was doing some reading, and I guess the gene sequence really only matters for probe-based sequencing (chips and arrays?).

Hopefully this question could be a good resource for others. Thank you in advance.

- Pratik
> using a newer/the newest computational genome/annotation (for example, presently, Ensembl 107 or the newest Gencode version) adversely affect the actual truth of what was sequenced?

No. Annotation represents the current understanding of the genome of an organism. Could it have errors? Possibly/more than likely. But those errors get corrected over time (patch releases). The sequence that you got from a run is not going to change; it is independent of the annotation/reference you use. The reference used **will** influence your conclusions, so the reference does play a vital role in the final outcome/conclusion.

> are those sequencing platforms completely independent from the genome or DNA/cDNA being sequenced?

This will be influenced by the limits/characteristics of the sequencing technology, e.g. some platforms may not be able to sequence through homopolymers longer than a certain number of bases (most platforms will have some limitations w.r.t. this). It may also be difficult to get representation of certain areas of the genome because they are hard to convert into sequenceable libraries.

> adapters still link to them to perform those bridge PCR reactions on the flow cell lanes

Remember that you are adding the adapters to create the necessary flowcell-compatible ends. Fragments that do not have these ends are not going to bind and will not be sequenced. As long as your fragments have compatible ends (T overhang etc.) they will be made into sequenceable libraries.

> gene sequence really only matters for probe-based sequencing

It may matter in the case of technologies where you need to be able to unwind the two strands. The sequence may form secondary structures that could be hard to resolve/sequence through.
biostars
{"uid": 9539218, "view_count": 505, "vote_count": 2}
I have a big data matrix and each column has named with multiple information and separated by an underscore. e.g.: Genotype, Tissue, Time, Treatment, and Replication (e.g.WT_Shoot_0t_NTrt_1) A sample of my data frame; structure(list(Proteins = c("SnrK", "MAPKK", "PP2C"), WT_Shoot_0t_NTrt_1 = c(0.580784899, 1.210078166, 1.505880218), WT_Shoot_0t_NTrt_2 = c(0.957816536, 1.42644091, 0.943047498), WT_Shoot_0t_NTrt_3 = c(0.559338535, 1.481513748, 1.114371918), WT_Shoot_1t_Trt_1 = c(0.831382253, 1.478551276, 0.837832395), WT_Shoot_1t_Trt_2 = c(1.180515054, 1.445100969, 1.18151722), WT_Shoot_1t_Trt_3 = c(1.332735497, 1.515484415, 0.99774335), WT_root_0t_NTrt_1 = c(1.717008073, 2.048229681, 1.448233358), WT_root_0t_NTrt_2 = c(1.431501693, 1.850835296, 1.128499829), WT_root_0t_NTrt_3 = c(1.752086402, 2.047380811, 1.190984777), WT_root_1t_Trt_1 = c(1.368684187, 1.507348975, 1.531142731), WT_root_1t_Trt_2 = c(1.204974777, 1.440904968, 1.103257306), WT_root_1t_Trt_3 = c(0.996016342, 1.630774074, 1.141581901), mut1_Shoot_0t_NTrt_1 = c(1.05451186, 1.916352545, 1.030983014), mut1_Shoot_0t_NTrt_2 = c(1.54792871, 1.676837161, 1.244400719), mut1_Shoot_0t_NTrt_3 = c(1.318611728, 1.613611, 1.28740667), mut1_Shoot_1t_Trt_1 = c(1.551790106, 1.619609895, 1.097308351), mut1_Shoot_1t_Trt_2 = c(1.638951097, 1.437759761, 1.139143972), mut1_Shoot_1t_Trt_3 = c(1.18670455, 1.530006726, 1.583110853), mut1_root_0t_NTrt_1 = c(0.981436287, 0.5156177, 0.799418798), mut1_root_0t_NTrt_2 = c(1.143837649, 0.772921721, 1.098218628), mut1_root_0t_NTrt_3 = c(1.163352788, 1.371823855, 1.278531528), mut1_root_1t_Trt_1 = c(1.13334394, 0.768721169, 1.155071974), mut1_root_1t_Trt_2 = c(1.015317761, 0.838696502, 0.9622491), mut1_root_1t_Trt_3 = c(1.961461109, 0.697184247, 0.926734427)), row.names = c(NA, -3L), class = c("tbl_df", "tbl", "data.frame"), spec = structure(list(cols = list(Proteins = structure(list(), class = c("collector_character", "collector")), WT_Shoot_0t_NTrt_1 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_0t_NTrt_2 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_0t_NTrt_3 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_1t_Trt_1 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_1t_Trt_2 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_1t_Trt_3 = structure(list(), class = c("collector_double", "collector")), WT_root_0t_NTrt_1 = structure(list(), class = c("collector_double", "collector")), WT_root_0t_NTrt_2 = structure(list(), class = c("collector_double", "collector")), WT_root_0t_NTrt_3 = structure(list(), class = c("collector_double", "collector")), WT_root_1t_Trt_1 = structure(list(), class = c("collector_double", "collector")), WT_root_1t_Trt_2 = structure(list(), class = c("collector_double", "collector")), WT_root_1t_Trt_3 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_0t_NTrt_1 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_0t_NTrt_2 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_0t_NTrt_3 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_1t_Trt_1 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_1t_Trt_2 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_1t_Trt_3 = structure(list(), class = c("collector_double", "collector")), mut1_root_0t_NTrt_1 = structure(list(), class = c("collector_double", "collector")), 
mut1_root_0t_NTrt_2 = structure(list(), class = c("collector_double", "collector")), mut1_root_0t_NTrt_3 = structure(list(), class = c("collector_double", "collector")), mut1_root_1t_Trt_1 = structure(list(), class = c("collector_double", "collector")), mut1_root_1t_Trt_2 = structure(list(), class = c("collector_double", "collector")), mut1_root_1t_Trt_3 = structure(list(), class = c("collector_double", "collector"))), default = structure(list(), class = c("collector_guess", "collector"))), class = "col_spec")) How can I make a table to like below to process downstream statistical analysis (i.e. ANOVA, Tukey) ![Sample Table][1] [1]: https://i.stack.imgur.com/xugYC.png
Convert from wide to long format, then *separate* delimited strings to new columns. library(tidyverse) dd <- structure(list(Proteins = c("SnrK", "MAPKK", "PP2C"), WT_Shoot_0t_NTrt_1 = c(0.580784899, 1.210078166, 1.505880218), WT_Shoot_0t_NTrt_2 = c(0.957816536, 1.42644091, 0.943047498), WT_Shoot_0t_NTrt_3 = c(0.559338535, 1.481513748, 1.114371918), WT_Shoot_1t_Trt_1 = c(0.831382253, 1.478551276, 0.837832395), WT_Shoot_1t_Trt_2 = c(1.180515054, 1.445100969, 1.18151722), WT_Shoot_1t_Trt_3 = c(1.332735497, 1.515484415, 0.99774335), WT_root_0t_NTrt_1 = c(1.717008073, 2.048229681, 1.448233358), WT_root_0t_NTrt_2 = c(1.431501693, 1.850835296, 1.128499829), WT_root_0t_NTrt_3 = c(1.752086402, 2.047380811, 1.190984777), WT_root_1t_Trt_1 = c(1.368684187, 1.507348975, 1.531142731), WT_root_1t_Trt_2 = c(1.204974777, 1.440904968, 1.103257306), WT_root_1t_Trt_3 = c(0.996016342, 1.630774074, 1.141581901), mut1_Shoot_0t_NTrt_1 = c(1.05451186, 1.916352545, 1.030983014), mut1_Shoot_0t_NTrt_2 = c(1.54792871, 1.676837161, 1.244400719), mut1_Shoot_0t_NTrt_3 = c(1.318611728, 1.613611, 1.28740667), mut1_Shoot_1t_Trt_1 = c(1.551790106, 1.619609895, 1.097308351), mut1_Shoot_1t_Trt_2 = c(1.638951097, 1.437759761, 1.139143972), mut1_Shoot_1t_Trt_3 = c(1.18670455, 1.530006726, 1.583110853), mut1_root_0t_NTrt_1 = c(0.981436287, 0.5156177, 0.799418798), mut1_root_0t_NTrt_2 = c(1.143837649, 0.772921721, 1.098218628), mut1_root_0t_NTrt_3 = c(1.163352788, 1.371823855, 1.278531528), mut1_root_1t_Trt_1 = c(1.13334394, 0.768721169, 1.155071974), mut1_root_1t_Trt_2 = c(1.015317761, 0.838696502, 0.9622491), mut1_root_1t_Trt_3 = c(1.961461109, 0.697184247, 0.926734427)), row.names = c(NA, -3L), class = c("tbl_df", "tbl", "data.frame"), spec = structure(list(cols = list(Proteins = structure(list(), class = c("collector_character", "collector")), WT_Shoot_0t_NTrt_1 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_0t_NTrt_2 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_0t_NTrt_3 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_1t_Trt_1 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_1t_Trt_2 = structure(list(), class = c("collector_double", "collector")), WT_Shoot_1t_Trt_3 = structure(list(), class = c("collector_double", "collector")), WT_root_0t_NTrt_1 = structure(list(), class = c("collector_double", "collector")), WT_root_0t_NTrt_2 = structure(list(), class = c("collector_double", "collector")), WT_root_0t_NTrt_3 = structure(list(), class = c("collector_double", "collector")), WT_root_1t_Trt_1 = structure(list(), class = c("collector_double", "collector")), WT_root_1t_Trt_2 = structure(list(), class = c("collector_double", "collector")), WT_root_1t_Trt_3 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_0t_NTrt_1 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_0t_NTrt_2 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_0t_NTrt_3 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_1t_Trt_1 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_1t_Trt_2 = structure(list(), class = c("collector_double", "collector")), mut1_Shoot_1t_Trt_3 = structure(list(), class = c("collector_double", "collector")), mut1_root_0t_NTrt_1 = structure(list(), class = c("collector_double", "collector")), mut1_root_0t_NTrt_2 = structure(list(), class = c("collector_double", "collector")), mut1_root_0t_NTrt_3 = 
structure(list(), class = c("collector_double", "collector")), mut1_root_1t_Trt_1 = structure(list(), class = c("collector_double", "collector")), mut1_root_1t_Trt_2 = structure(list(), class = c("collector_double", "collector")), mut1_root_1t_Trt_3 = structure(list(), class = c("collector_double", "collector"))), default = structure(list(), class = c("collector_guess", "collector"))), class = "col_spec")) ========== dd %>% gather(var, response, WT_Shoot_0t_NTrt_1:mut1_root_1t_Trt_3) %>% separate(var, c("Genotype", "Tissue", "Time", "Trtment", "Replication"), sep = "_") %>% arrange(desc(Proteins)) # A tibble: 72 x 7 Proteins Genotype Tissue Time Trtment Replication response <chr> <chr> <chr> <chr> <chr> <chr> <dbl> 1 SnrK WT Shoot 0t NTrt 1 0.581 2 SnrK WT Shoot 0t NTrt 2 0.958 4 SnrK WT Shoot 1t Trt 1 0.831 5 SnrK WT Shoot 1t Trt 2 1.18 ##UPDATE with `tidyr 1.0.0` no need to use `separate` dd %>% tidyr::pivot_longer(cols = WT_Shoot_0t_NTrt_1:mut1_root_1t_Trt_3 , names_to = c("Genotype", "Tissue", "Time", "Trtment", "Replication"), values_to = "response",names_sep = "_")
biostars
{"uid": 316452, "view_count": 1181, "vote_count": 1}
Hi guys, I have an R programming question: I have more than 1000 gene names in `Names`. I want to find the genes in `Names` that also occur in `samples`, and paste the corresponding gene with a ".AD" extension next to it, keeping the same order, to get the `Result` shown below. Thank you.

```r
Names <- c("cebi", "pithe", "MAPK", "sapiens", "JUNK", "calli", "STR")
samples <- c("MAPK", "JUNK", "STR")
```

`Result`: "cebi", "pithe", "MAPK", "MAPK.AD", "sapiens", "JUNK", "JUNK.AD", "calli", "STR", "STR.AD"
```r Names <- c("cebi","pithe","MAPK","sapiens","JUNK","calli","STR") samples <- c("MAPK","JUNK","STR") Result <- vector() count <- 1 for(i in 1:length(Names)){ index <- which(samples == Names[i]) if(length(index) == 0){ Result[count] <- Names[i] count <- count + 1 next } else{ Result[count] <- Names[i] count <- count + 1 Result[count] <- paste(Names[i], ".AD", sep="") count <- count + 1 } } ```
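The loop above works as-is; for reference, a more compact vectorised sketch that produces the same ordering (assuming `Names` and `samples` as defined in the question):

```r
# append "<gene>.AD" right after every gene that occurs in samples
Result <- unlist(lapply(Names, function(x) {
  if (x %in% samples) c(x, paste0(x, ".AD")) else x
}))
```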
biostars
{"uid": 140566, "view_count": 5060, "vote_count": 1}
Hello, I have a BED file that has many intervals that are overlapping. My objective is to merge the intervals to see how much of the chromosome the bed file spans. BEDtools merge was my natural go-to method, however, there's a catch. I only want to merge overlaps that reciprocally overlap by say, 90%. BEDtools merge as far as I know, doesn't have this feature. Does anyone know any tool that I can use that can do this? Thanks a bunch and merry Christmas!
With [BEDOPS][1] *[bedmap][2]*: $ bedmap --count --echo-map-range --fraction-both 0.9 --delim '\t' intervals.bed \ | awk '$1>1' - \ | cut -f2- - \ | sort-bed - \ | uniq - \ > answer.bed If the intervals are unsorted or are of unknown sort state, first [sort][3] before mapping:</p> $ sort-bed intervals.unsorted.bed > intervals.bed On further thought, I think you may want to filter single-overlap cases; please see discussion further in this thread. **Edit:** Changed `--fraction-either` to `--fraction-both` to do a correct mutual overlap test. [1]: http://bedops.readthedocs.io/en/latest/index.html [2]: http://bedops.readthedocs.org/en/latest/content/reference/statistics/bedmap.html [3]: http://bedops.readthedocs.org/en/latest/content/reference/file-management/sorting/sort-bed.html
biostars
{"uid": 170298, "view_count": 6844, "vote_count": 1}
AFAIK, short reads and short paired-end reads generated by next-generation sequencing are common nowadays, and several alignment tools exist for them. But I want to ask about long paired sequencing. There are some alignment tools for long sequences, like [rHAT][1]. That means there is at least one technology which generates long reads. Now, I am asking: is there any technology which generates **long paired reads**? If yes, does any aligner or mapper exist for that? Give me some idea about it. Thanks in advance.

[1]: https://github.com/derekguan/rHAT
See this nice graphic for various sequencing technologies currently available: ![enter image description here][1] from: https://flxlexblog.wordpress.com/2016/07/08/developments-in-high-throughput-sequencing-july-2016-edition/ None of the long read technologies are paired-end. However, with long reads, the paired-end option becomes a lot less useful. [1]: https://flxlexblog.files.wordpress.com/2016/07/developments_in_high_throughput_sequencing.jpg
biostars
{"uid": 208458, "view_count": 1999, "vote_count": 3}
Hi everyone, Sometimes I am asked to mention the most exciting moment or part of my research work. I often say that I was most excited the first time I learned how to work with the command line in Linux. However, I think there might be a specific purpose behind asking this question. What would you reply if you were asked this? Thank you very much
Seeing my work in a published paper, be it figures or analysis or, ultimately, a citation to a paper where I was one of the authors. Seeing my first high-profile paper get used in a Coursera genomics class that I had signed up for was a genuine thrill.
biostars
{"uid": 282918, "view_count": 1088, "vote_count": 1}
Hi, why do we run Cufflinks, and what do the assembled transcripts mean? I read the protocol; am I right that Cufflinks provides a reference transcriptome to which the reads can be aligned, enabling us to calculate the expression level of genes?
Cufflinks builds the transcriptome for a sample. Transcripts are assembled using aligned reads, which you provide in the form of a BAM file. It also quantifies the assembled transcripts. A reference transcriptome is provided if it is available for that species; it helps Cufflinks in building the transcriptome of the sample.
biostars
{"uid": 211203, "view_count": 1504, "vote_count": 1}
Hi Biostars!

I'm puzzling over one doubt: could anyone tell me why we would do loop modeling of nucleic acids?
Loop modeling in nucleic acids (i.e. in RNA) is done for the same reasons as loop modeling in proteins: to find out the possible 3D structure of certain regions in a biomolecule (here, RNA). For instance, a structure resulting from an X-ray experiment might miss a couple of bases/nucleotides, possibly because they couldn't be traced properly during processing (one example would be the PDB entry 1u9s of an RNAse P domain that misses the atomic coordinates for a complete tetraloop hairpin). Another use-case would be to check which loops could fit into a specified site in an RNA molecule, e.g. to find alternative loop structures that might serve (at least from a geometrical point of view) as replacement for an existing structure.

In any case, "loop modeling" in RNA is a 3D structure problem. Here, secondary structure prediction can only help you with finding the stem and loop regions in a sequence with unknown 3D structure. Since you already know the 3D structure of the molecule (with the exception of your target loop), secondary structure prediction does not yield any useful information for the loop modeling at all.

References:

Schudoma et al., 2010, Nucleic Acids Research http://nar.oxfordjournals.org/content/38/3/970

Schudoma et al., 2010, Bioinformatics http://bioinformatics.oxfordjournals.org/content/26/13/1671
biostars
{"uid": 10236, "view_count": 2034, "vote_count": 2}
I am trying to install CopywriteR, which failed at updating a dependent package RMySQL. The error I got was as follows: ------------------------- ANTICONF ERROR --------------------------- Configuration failed because libmysqlclient was not found. Try installing: * deb: libmariadb-client-lgpl-dev (Debian, Ubuntu 16.04) libmariadbclient-dev (Ubuntu 14.04) * rpm: mariadb-devel | mysql-devel (Fedora, CentOS, RHEL) * csw: mysql56_dev (Solaris) * brew: mariadb-connector-c (OSX) If libmysqlclient is already installed, check that 'pkg-config' is in your PATH and PKG_CONFIG_PATH contains a libmysqlclient.pc file. If pkg-config is unavailable you can set INCLUDE_DIR and LIB_DIR manually via: R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...' -------------------------------------------------------------------- My question is: where can I specify --configure-vars='INCLUDE_DIR=... LIB_DIR=...'? I cannot do this at command line because RMySQL is installed through bioconductor. If I do R CMD INSTALL --configure-vars='INCLUDE_DIR=/mydir/mariadb-connector-c' RMySQL I will get: -------------------------------------------------------- Warning: invalid package ‘RMySQL’ Error: ERROR: no packages specified -------------------------------------------------------- Thanks in advance!
I was able to install CopywriteR after the following steps: 1. Download CopyhelpeR_1.0.2.tar.gz 2. sudo R CMD INSTALL CopyhelpeR*.tar.gz 3. sudo R CMD INSTALL CopywriteR*.tar.gz This somehow bypassed the RMySQL installation. Part of the problem might come from the R version: I was using R 3.2.3, while CopywriteR was built under R version 3.4.2.
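As a side note, a hedged sketch of one way to pass those variables without leaving R: `install.packages()` accepts a `configure.vars` argument (and `BiocManager::install()` forwards extra arguments to it). The paths below are hypothetical placeholders; point them at wherever the MariaDB/MySQL client headers and libraries actually live on your system.

```r
# Sketch only: supply the INCLUDE_DIR/LIB_DIR hint from the ANTICONF error
# directly from R. The paths are example values, not known-good ones.
install.packages(
  "RMySQL",
  configure.vars = "INCLUDE_DIR=/usr/include/mysql LIB_DIR=/usr/lib/x86_64-linux-gnu"
)
```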
biostars
{"uid": 281236, "view_count": 2420, "vote_count": 1}
Hi guys, I have a simple question. I have RNA-Seq data from different batches. As suggested in many posts online, I pre-normalized my data (using TMM from edgeR), then corrected it using ComBat, and then re-normalized it (for library size) using DESeq2. My question is: is the second normalisation after ComBat correct? Or, at least, is it not dramatically incorrect? Thank you in advance Best
A quick note since this is a common problem, but for batch correction you generally need to have multiple conditions per batch. If all your WT samples are in one batch, and all your KD samples are in another batch, you can't correct for it (as an example). With that being said, you can usually add batch as a covariate to the regression formula in edgeR and DESeq2 as the simpler and more robust option. Your study design would look like the following example for DESeq2: > df condition batch WT-1 WT batch_1 WT-2 WT batch_2 WT-3 WT batch_2 KO-1 KO batch_1 KO-2 KO batch_2 KO-3 KO batch_2 Your regression formula would then be `~ condition + batch`, which means your differential expression results for condition will be corrected for batch.
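A minimal sketch of that design in DESeq2, assuming `counts` is your matrix of raw (un-normalised) counts and `df` is the sample table shown above:

```r
library(DESeq2)

# 'df' has 'condition' and 'batch' columns as in the example above;
# 'counts' is the matrix of raw (un-normalised) counts
dds <- DESeqDataSetFromMatrix(countData = counts,
                              colData   = df,
                              design    = ~ batch + condition)
dds <- DESeq(dds)
res <- results(dds)   # condition effect, adjusted for the batch covariate
```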
biostars
{"uid": 459558, "view_count": 3307, "vote_count": 1}
Hi everyone, I would like to know if there is any statistical motivation to plot -log(p-value) vs log(FC); why not plot -log(FDR) vs log(FC) directly? Thanks!
There is none as far as I know from reading different papers and reports; you can actually see [here][1] a figure that reports a volcano plot with -log10(FDR) vs log2(FC). At the end of the day it is up to the user to choose the scaling metric that gives the most representative visualization. By definition, a volcano plot shows significance versus fold-change on the y- and x-axes, respectively, so your significance measure can be FDR-corrected as well; in that case you are just restricting your FC values to a much stricter subset, which gives more confidence to the visualization. There should not be any statistical motivation other than producing a more reliable plot with fewer error-prone significant points. [1]: http://www.nature.com/articles/srep05698/figures/5
biostars
{"uid": 190779, "view_count": 17890, "vote_count": 8}
<p>Hello,</p> <p>Is it possible to use vcftools weir-fst-pop on haploid genomes? I do not seem to be able to find any option to specify the ploidy of my samples.</p> <p>Thank you in advance.</p>
<p>Fst involves measuring expected heterozygosity. Given that, it is impossible to calculate Fst for haploid organisms.</p>
biostars
{"uid": 98555, "view_count": 3686, "vote_count": 2}
Hello. How do I extract protein sequences? I have two files. < file 1. complete_protein.fasta > >protein_1 DCXSTEISLFHEIWLF >protein_2 AJFOWIDJLSIDJFJ >protein_3 DJFLWIDJFLSKDJFL >protein_4 DKSJFLEISJDKJF < file 2. only proteinID.fasta > >protein_1 >protein_4 I need the sequences for the IDs listed in file 2, taken from file 1. I tried the "diff" command, but the result is not the data I want. How can I extract them?
Hi, If you are working with linux distribution you can try the following command-line: grep -f target_protein.txt -A1 protein_file.txt | sed '/--/d' > retrieved_protein.txt This takes the `protein_file.txt` file that contains the same content that you posted above: > protein_1 DCXSTEISLFHEIWLF protein_2 AJFOWIDJLSIDJFJ protein_3 DJFLWIDJFLSKDJFL protein_4 DKSJFLEISJDKJF And the file `target_protein.txt` that contains the target protein names that you want to retrieve from the file above: > protein_1 protein_4 The output of protein target sequences of interest is saved in `retrieved_protein.txt`, that looks like: > protein_1 DCXSTEISLFHEIWLF protein_4 DKSJFLEISJDKJF I hope this helps, António
biostars
{"uid": 446682, "view_count": 731, "vote_count": 1}
1. How did it come to be that the alternate nucleotide was more frequent than the reference nucleotide? 2. How does one account for this phenomenon when designing a strategy to filter for variants of interest? Should I go through the complicated process of selecting those individuals who DO NOT have the variant and calculate that the REFERENCE frequency in the population is probably around (1 - esp6500siv_all)? I am researching a rare disease and have whole exome sequence data with the corresponding variant calls. Each variant call has been passed to annovar and among other data, we have looked up the frequency of the variant in the esp6500siv2_all data. Clearly a variant that was observed to have a high frequency in our sample but that had low frequency in esp6500siv2_all would be of disproportionate interest. Low and behold I was surprised to find that 13% of the all of our variants (4055 out of 32131) had an allele frequency that was greater than 0.5. How can that be? I expected that all the allele frequencies would be <0.5. I had thought that the variants would be akin to a minor allele frequency (MAF). Clearly I was wrong. I pulled 3 random variants from among the variants that had more than 0.5 frequency, to check them against the Exome Variant Server. avsnp147 Chr Start End Ref Alt Gene.refGene esp6500siv2_all 1: rs3803530 15 89632842 89632842 C A KIF7 0.5373 2: rs621383 3 125118840 125118840 T C SLC12A8 0.9988 3: rs633561 11 64229857 64229857 A G NUDT22 0.9418 Looking up at NHLBI Exome Sequencing Project (ESP) [Exome Variant Server][2] and using All Allele 1. rs3803530: C>A; A=6984/C=6014 which means A is 6984/(6984+6014) or 0.537 2. rs621383: T>C; C=12479/T=15 which means C is 12479/(12479+15) or 0.999 3. rs633561: A>G; G=12240/A=756 which means G is 12240/(12240+756) or 0.941 [1]: https://drive.google.com/file/d/1jA7Azm2f1ss_rw3NoVgMSJ75tBqAf_qt/view?usp=sharing [2]: http://evs.gs.washington.edu/EVS/
reference ≠ major ≠ ancestral ≠ wildtype As Kevin says, the reference is just whatever is in the reference sequence, which is the sequence of whoever they happened to sequence for that region. GRCh38 is an improvement compared to GRCh37 because the GRC sought out some loci where the reference allele was not the major allele in the 1000 Genomes project, and replaced those regions with tiny contigs which did have the major allele. Some, not all.
biostars
{"uid": 282029, "view_count": 7002, "vote_count": 9}
Hi! I have many .bam files for which I want to generate .bai indexes using samtools in the terminal. I tried the following command: samtools index *.bam However, I did not get any .bai files. Regards
Samtools index only accepts a single input file, so using a shell metacharacter to specify multiple files will not work. I usually use a shell wrapper to run samtools index on a single file at a time. ``` #!/usr/bin/env bash # index_all.sh for INFILE in "$@" do samtools index "$INFILE" done ``` Then it is simple to run: ./index_all.sh /directory/*.bam
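If GNU parallel happens to be installed, a one-liner does the same job and indexes several files at once; this is just an alternative sketch, not a requirement:

```bash
# index every BAM in the directory, four files at a time
parallel -j 4 samtools index ::: /directory/*.bam
```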
biostars
{"uid": 114921, "view_count": 104475, "vote_count": 14}
Hey all, searching for ideas and advice. I have 4 sets of enrichment results for biological processes. They should be relatively different from each other, but I expect a certain number of categories to overlap between them. My idea is to plot all results in a comprehensive way, in order to show the overlapping categories (keeping their enrichment folds), and also to show the "private" categories from each result. Any idea or suggestion will be much appreciated
Something like this should be just fine (this is recent code that I pieced together for a project): <a href="https://ibb.co/nGQND7"><img src="https://preview.ibb.co/mjqpt7/y.png" alt="y" border="0"></a> This is built on top of the *ComplexHeatmap* package in R, with much customisation by myself. If it looks good, then I can guide you further with code examples. I have masked the gene names just to protect information. Kevin
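In the meantime, a minimal ComplexHeatmap sketch along those lines might look like the following; the matrix construction is hypothetical and assumes each of your four results is a named numeric vector of enrichment folds keyed by category name.

```r
library(ComplexHeatmap)

# res1..res4 are hypothetical named numeric vectors of enrichment folds,
# one per result set, with GO/BP category names as the vector names
all_terms <- Reduce(union, list(names(res1), names(res2), names(res3), names(res4)))
mat <- sapply(list(res1, res2, res3, res4), function(x) x[all_terms])
rownames(mat) <- all_terms
colnames(mat) <- paste0("Result_", 1:4)

# categories missing from a result remain NA and are drawn in grey,
# so shared versus "private" categories stand out at a glance
Heatmap(mat, name = "enrichment fold", na_col = "grey90", cluster_columns = FALSE)
```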
biostars
{"uid": 298840, "view_count": 3399, "vote_count": 1}
Hi everyone, I am working with genome-wide data (SNPs) using plink and want to compute pairwise LD (r²) between SNPs for each chromosome (29 chromosomes). This is an example of the commands I used: plink --bfile myData1 --r2 --ld-window-r2 0 --out chr1 plink --bfile myData2 --r2 --ld-window-r2 0 --out chr2 . . . . . . . . . . . . . . . . . . plink --bfile myData29 --r2 --ld-window-r2 0 --out chr29 And the result looks like: CHR_A BP_A SNP_A CHR_B BP_B SNP_B R2 23 10121 T2500005 23 14911 T2300007 1 23 10121 T2500005 23 15894 T2300003 0.0175439 23 10121 T2500005 23 43444 T2300006 0.416667 23 10121 T2500005 23 60163 T0015398 0.416667 In fact, it works for all chromosomes except chromosome 24, which gives an empty table: CHR_A BP_A SNP_A CHR_B BP_B SNP_B R2 The same problem occurs for 3 breeds (always no results for chr24). Do you have an idea? Thanks for help.
`--chr-set` *is* required for chromosome 24 here. Otherwise, it is treated as chrY. It may also affect the results for chromosome 23. To be safe, you should always use it with nonhuman species.
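For example, a sketch assuming 29 autosomes (as in your data, e.g. cattle):

```bash
plink --bfile myData24 --chr-set 29 --r2 --ld-window-r2 0 --out chr24
```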
biostars
{"uid": 357124, "view_count": 1493, "vote_count": 1}
<p>Hi,</p> <p>Is there a pipeline/pacakage that can easily get me genotypes (eg in AA, AG, GG) type format from Affy genome wide SNP CEL files. Thus far I've played around with Birdseed and CLRMM. Both suffer from a combination of being impossible to install, having awful documention or producing unusable output. I literally want, CEL files go in, (annotated!!!!) genotypes and confidence scores come out. Any suggestions?</p>
**Edit by Kevin Blighe on March 10, 2020**: Affymetrix was subsequently acquired by ThermoFisher, and the Genotyping Console can now be downloaded from <a href="https://www.thermofisher.com/br/en/home/life-science/microarray-analysis/microarray-analysis-instruments-software-services/microarray-analysis-software/genotyping-console-software.html">HERE</a>. ---------------- --------- <p><a href="http://www.affymetrix.com/browse/level_seven_software_products_only.jsp?productId=131535&amp;categoryId=35625#1_1">Affymetrix genotyping console</a></p> <p>For SNP 6 it supports </p> <ul> <li>calling genotypes</li> <li>copy number/LOH</li> <li>copy segments data</li> <li>copy number variation analysis</li> </ul> <p>Also <a href="http://media.affymetrix.com/support/downloads/manuals/gtc_4_0_user_manual.pdf">see manual</a></p>
biostars
{"uid": 64203, "view_count": 7593, "vote_count": 1}
Hi, I am only interested in aligning DNA-seq reads to certain genes. If I split my reference genome based on the coordinates of my genes of interest (as present in the GTF/GFF file) and then use BWA to align my reads to the resulting 'smaller reference genome', would that be a good idea? If yes, is there a threshold for the number of bases upstream and downstream of the gene coordinates that should be included? And what caveats of this method of splitting the reference genome should I pay attention to? My motivation for using this method is to reduce alignment time, as I am only interested in, say, 20-30 genes and not all genes.
> for aligning my reads to the resulting 'smaller reference genomes', will it be a good idea? NO, you'll get false positives. It's the same as : https://www.biostars.org/p/4405/ (you're 'masking' a whole chromosome) . Citing Heng Li: > This will lead to wrongly mapped sequences, spurious SNPs/indels calls and all sorts of problems. I cannot think of a single use case when masking [before mapping] may lead to better outcomes."
biostars
{"uid": 279462, "view_count": 1468, "vote_count": 1}
Hi, I want to access an archived version of biomart and select the human dataset. mart=useMart("ensembl_mart_92",host="http://apr2018.archive.ensembl.org") works but if I do: mart=useMart("ensembl_mart_92",host="http://apr2018.archive.ensembl.org", dataset = "hsapiens_gene_ensembl") it gives error: Error in checkDataset(dataset = dataset, mart = mart) : The given dataset: hsapiens_gene_ensembl , is not valid. Correct dataset names can be obtained with the listDatasets() function. If I look into listDatasets() as suggested I get: mart=useMart("ensembl_mart_92",host="http://apr2018.archive.ensembl.org") listDatasets(mart) [1] dataset description version <0 rows> (or 0-length row.names) Also tried other suggestions online that didn't work: listMarts(archive = TRUE) Error in listMarts(archive = TRUE) : The archive = TRUE argument is now defunct. Use listEnsemblArchives() to find the URL to directly query an Ensembl archive. listEnsemblArchives() name date url version current_release 1 Ensembl GRCh37 Feb 2014 http://grch37.ensembl.org GRCh37 2 Ensembl 94 Oct 2018 http://oct2018.archive.ensembl.org 94 * 3 Ensembl 93 Jul 2018 http://jul2018.archive.ensembl.org 93 4 Ensembl 92 Apr 2018 http://apr2018.archive.ensembl.org 92 5 Ensembl 91 Dec 2017 http://dec2017.archive.ensembl.org 91 6 Ensembl 90 Aug 2017 http://aug2017.archive.ensembl.org 90 7 Ensembl 89 May 2017 http://may2017.archive.ensembl.org 89 8 Ensembl 88 Mar 2017 http://mar2017.archive.ensembl.org 88 9 Ensembl 87 Dec 2016 http://dec2016.archive.ensembl.org 87 10 Ensembl 86 Oct 2016 http://oct2016.archive.ensembl.org 86 11 Ensembl 85 Jul 2016 http://jul2016.archive.ensembl.org 85 12 Ensembl 84 Mar 2016 http://mar2016.archive.ensembl.org 84 13 Ensembl 83 Dec 2015 http://dec2015.archive.ensembl.org 83 14 Ensembl 82 Sep 2015 http://sep2015.archive.ensembl.org 82 15 Ensembl 81 Jul 2015 http://jul2015.archive.ensembl.org 81 16 Ensembl 80 May 2015 http://may2015.archive.ensembl.org 80 17 Ensembl 79 Mar 2015 http://mar2015.archive.ensembl.org 79 18 Ensembl 78 Dec 2014 http://dec2014.archive.ensembl.org 78 19 Ensembl 77 Oct 2014 http://oct2014.archive.ensembl.org 77 20 Ensembl 76 Aug 2014 http://aug2014.archive.ensembl.org 76 21 Ensembl 75 Feb 2014 http://feb2014.archive.ensembl.org 75 22 Ensembl 74 Dec 2013 http://dec2013.archive.ensembl.org 74 23 Ensembl 67 May 2012 http://may2012.archive.ensembl.org 67 24 Ensembl 54 May 2009 http://may2009.archive.ensembl.org 54 mart=useMart("ensembl_mart_92", dataset="hsapiens_gene_ensembl", archive=T) rror in listMarts(host = host, path = path, port = port, includeHosts = TRUE, : The archive = TRUE argument is now defunct. mart=useMart("ensembl_mart_92", dataset="hsapiens_gene_ensembl") Error in useMart("ensembl_mart_92", dataset = "hsapiens_gene_ensembl") : Incorrect BioMart name, use the listMarts function to see which BioMart databases are available listMarts() biomart version 1 ENSEMBL_MART_ENSEMBL Ensembl Genes 94 2 ENSEMBL_MART_MOUSE Mouse strains 94 3 ENSEMBL_MART_SNP Ensembl Variation 94 4 ENSEMBL_MART_FUNCGEN Ensembl Regulation 94 I'm walking in circles here...Suggestions??
Try: > mart=useMart("ensembl",host="http://apr2018.archive.ensembl.org", dataset = "hsapiens_gene_ensembl")
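And, assuming that connection works, a quick sanity-check and usage sketch (the gene symbol is just an example filter):

```r
library(biomaRt)

mart <- useMart("ensembl",
                host = "http://apr2018.archive.ensembl.org",
                dataset = "hsapiens_gene_ensembl")

# the human dataset should now be listed
head(listDatasets(mart))

# example query against the archived (Ensembl 92) mart
getBM(attributes = c("ensembl_gene_id", "hgnc_symbol", "chromosome_name"),
      filters = "hgnc_symbol",
      values  = "TP53",
      mart    = mart)
```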
biostars
{"uid": 356917, "view_count": 4910, "vote_count": 1}
<p>This question was inspired by Peter Cock 's tweet</p> https://twitter.com/pjacock/status/118012750105546752 <p>Most softwares (C, java...) use a int32 (unsigned or not) to store the length of the chromosome. It isn't enough when the length of a chromosome is greater than INT_MAX or UINT_MAX</p> <pre><code># define INT_MAX 2147483647 # define UINT_MAX 4294967295U </code></pre> <p>So my question is:</p> <ul> <li>Is there any resource where one can find the length of the chromosomes. Something like <a href='http://bionumbers.hms.harvard.edu/'>BioNumbers</a>.</li> <li>What's the length of the longest chromosome ? </li> </ul>
With the publication of [A chromosome-based draft sequence of the hexaploid bread wheat (*Triticum aestivum*) genome][1], the initial release of the Wheat ta 3B chromosome is 774 Mbp (or to be exact, 774434471bp in file ``https://urgi.versailles.inra.fr/download/wheat/3B/ta3bPseudomolecule.genom.fa.gz``) [1]: http://dx.doi.org/10.1126/science.1251788
biostars
{"uid": 12560, "view_count": 10615, "vote_count": 17}
My task is to repeat the data analysis of RNA-seq data as presented in a journal article using the TopHat-Cufflinks pipeline. For simplicity I'll just mention the 4 controls. The authors run Cufflinks without a reference annotation on each control "to detect possible novel transcripts" --> then Cuffmerge on the results --> they then say they run Cufflinks again using the merged transcripts.gtf as the reference annotation. It seems over-complicated. Cufflinks requires a BAM file as input, but the Cuffmerge output doesn't give a BAM file... so the only way I can see they did it is by re-running Cufflinks on every sample a second time (a waste of time?), except this time using the Cuffmerge output as the reference annotation. This would also mean re-running Cuffmerge again afterward. Surely "to detect possible novel transcripts" doesn't require running Cufflinks on everything twice... I mean, isn't this the whole point of Cufflinks? Thanks in advance. Kenneth
The first Cufflinks run is to generate a new annotation for each sample to discover novel transcripts. The Cuffmerge run is to merge together all the annotations for each individual sample to create one merged annotation of better quality. The second Cufflinks run is to quantify the transcripts based on the merged annotation file. Yes, it is complicated, and the results will contain many false positives. More importantly, it's generally a waste of time, unless you're working on a poorly annotated genome. For well-annotated genomes like the mouse, human, or Drosophila genomes, you shouldn't bother trying to discover novel transcripts. Just use the most recent annotation available.
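For reference, the three steps the paper describes boil down to something like the sketch below; the file names are hypothetical and any study-specific options are omitted.

```bash
# 1) per-sample assembly, no reference annotation, to allow novel transcripts
cufflinks -o asm_ctrl1 ctrl1.bam
cufflinks -o asm_ctrl2 ctrl2.bam   # ...and so on for the remaining controls

# 2) merge the per-sample assemblies into a single annotation
ls asm_ctrl*/transcripts.gtf > assemblies.txt
cuffmerge -o merged assemblies.txt

# 3) second Cufflinks pass: quantify each sample's BAM (not the Cuffmerge
#    output) against the merged annotation
cufflinks -G merged/merged.gtf -o quant_ctrl1 ctrl1.bam
cufflinks -G merged/merged.gtf -o quant_ctrl2 ctrl2.bam   # ...etc.
```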
biostars
{"uid": 200519, "view_count": 3029, "vote_count": 1}
<p>Hello everyone,</p> <p>I'm trying to do a simple t-test on my microarray sample in R. My sample looks like this:</p> <pre><code>gene_id gene sample_1 value_1 sample_2 value_2 XLOC_000001 LOC425783 Renal 20.8152 Heart 14.0945 XLOC_000002 GOLGB1 Renal 10.488 Heart 8.89434 </code></pre> <p>So the t-test is between sample 1 and sample 2 and my code looks like this:</p> <pre><code>ttestfun = function(x) t.test(x[4], x[6])$p.value p.value = apply(expression_data, 1, ttestfun) </code></pre> <p>It gives me the following error: Error in t.test.default(x[6], x[8]) : not enough 'x' observations In addition: Warning message: In mean.default(x) : argument is not numeric or logical: returning NA</p> <p>What am I doing wrong? Please help.</p> <p>Many thanks.</p>
<p>I think there's some misconceptions operating here from the original questioner. First and foremost, a t-test is not just a way of calculating p-values, it is a statistical test to determine whether two populations have varying means. The p-value that results from the test is a useful indicator for whether or not to support your null hypothesis (that the two populations have the same mean), but is not the purpose of the test.</p> <p>In order to carry out a t-test between two populations, you need to know two things about those populations: 1) the mean of the observations and 2) the variance about that mean. The single value you have for each population <em>could</em> be a proxy for the mean (although it is a particularly bad one - see below), but there is no way that you can know the variance from only one observation. This is why replicates are <em>required</em> for microarray analysis, not a nice optional extra.</p> <p>The reason a single observation on a single microarray is a bad proxy for the population mean is because you have no way of knowing whether the individual tested is typical for the population concerned. Assuming the expression of a given gene is normally distributed among your population (and this is an assumption that you have to make in order for the t-test to be a valid test anyway), your single individual could come from anywhere on the bell curve. Yes, it is most likely that the observation is somewhere near the mean (by definition, ~68% within 1 standard deviation, see the graph), but there is a significant chance that it could have come from either extreme.</p> <p><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Standard_deviation_diagram.svg/500px-Standard_deviation_diagram.svg.png" alt="Normal Distribution"/></p> <p>Finally, I've read what you suggest about the hypergeometric test in relation to RNA-Seq data recently, but again the use of this test is based on a flawed assumption (that the variance of a gene between the 2 populations is equivalent to the population variance). Picking a random statistical test out of the bag, just because it is able to give you a p-value in your particular circumstance is almost universally bad practise. You need to be able to justify it in light of the assumptions you are making in order to apply the test.</p> <p>BTW, your data does not look like it is in log2 scale (if it is, there's an ~32-fold difference between the renal and heart observations for the first gene above) - how have you got the data into R &amp; normalised it?</p>
biostars
{"uid": 57152, "view_count": 15442, "vote_count": 1}
I'm trying to write a CWL workflow that will do a simple trim and map procedure on some bacterial short read data that I've got. For the trimming I'm planning on using trimmomatic and I have modified the Duke-GCB trimmomatic tool slightly for this purpose (see https://github.com/pvanheus/GGR-cwl/blob/master/trimmomatic/trimmomatic.cwl). Here is the CWL for the workflow (`trim.cwl`): cwlVersion: v1.0 class: Workflow inputs: reads1: File reads2: File slidingw: string minl: string outputs: trimmed_reads: type: File outputSource: trim_reads/output_read1_trimmed_file steps: trim_reads: run: { $import: "../../tools/trimmomatic/trimmomatic.cwl" } inputs: input_read1_fastq_file: reads1 input_read2_fastq_file: reads2 slidingwindow: slidingw minlen: minl out: [output_read1_trimmed_file] but trying to run this with `cwltool trim.cwl` yields: cwltool 1.0.20160714182449 Tool definition failed validation: Validation error in object file:///net/ceph-mon1.sanbi.ac.za/sanbi/scratch/pvh/cwl/workflows/trim/trim.cwl Could not validate as `CommandLineTool` because could not validate field `outputs` because At position 0 could not validate field `outputSource` because it is not recognized and strict is True, valid fields are: label, secondaryFiles, format, streamable, doc, id, outputBinding, type missing required field `baseCommand` could not validate field `steps` because it is not recognized and strict is True, valid fields are: id, inputs, outputs, requirements, hints, label, doc, cwlVersion, class, baseCommand, arguments, stdin, stderr, stdout, successCodes, temporaryFailCodes, permanentFailCodes Could not validate as `ExpressionTool` because could not validate field `outputs` because At position 0 could not validate field `outputSource` because it is not recognized and strict is True, valid fields are: label, secondaryFiles, format, streamable, doc, id, outputBinding, type missing required field `expression` could not validate field `steps` because it is not recognized and strict is True, valid fields are: id, inputs, outputs, requirements, hints, label, doc, cwlVersion, class, expression Could not validate as `Workflow` because could not validate field `steps` because the value `[{'id': 'file:///net/ceph-mon1.sanbi.ac.za/sanbi/scratch/pvh/cwl/workflows/trim/trim.cwl#trim_reads', 'inputs': [{'id': u'file:///net/ceph-mon1.sanbi.ac.za/sa[...]` is not a valid type in the union, expected one of: - array of <WorkflowStep>, but At position 0 missing required field `in` could not validate field `inputs` because it is not recognized and strict is True, valid fields are: id, in, out, requirements, hints, label, doc, run, scatter, scatterMethod I'm sure there are all sorts of errors in my CWL, but the parsing as a CommandLineTool is surely the wrong thing?
Hello Peter vH! For workflows we have "in" and "out". CommandLineTools use 'inputs' and 'outputs'. You've mixed the two in your workflow and the last line of the overly detailed error message says as much. Our apologies for the error messages being so hard to read.
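In other words, a sketch of the corrected step (only the field names change; everything else stays as you had it):

```yaml
steps:
  trim_reads:
    run: { $import: "../../tools/trimmomatic/trimmomatic.cwl" }
    in:
      input_read1_fastq_file: reads1
      input_read2_fastq_file: reads2
      slidingwindow: slidingw
      minlen: minl
    out: [output_read1_trimmed_file]
```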
biostars
{"uid": 203041, "view_count": 3339, "vote_count": 1}
Hi all, Sorry I know that this question has been asked several times, but unfortunately I haven't been able to find the right answer, or didn't understand. I'm trying to get TMM normalized counts thanks to edgeR. I understand that I have to compute normalization factors : dgList <- calcNormFactors(dgList, method="TMM") which gives me a normalization factor for all samples : head(dgList$samples) group lib.size norm.factors S1 1 21087314 0.9654794 S2 1 16542810 1.1589117 S3 1 18875473 0.8763291 S4 1 15865414 1.0864038 S5 1 19179795 1.0488230 S6 1 15063992 1.0707007 But at this step I don't know what to do to get a matrix of normalized TMM counts. I know that I can get CPM normalized counts thanks to : cpm(dgList) But CPM and TMM are not the same, right ? Thanks in advance for any of your input on this topic.
If you run the cpm function on a DGEList object which contains TMM normalisation factors then you will get TMM normalised counts. Here is a snippet of the source code for the cpm function: cpm.DGEList <- function(y, normalized.lib.sizes=TRUE, log=FALSE, prior.count=0.25, ...) # Counts per million for a DGEList # Davis McCarthy and Gordon Smyth. # Created 20 June 2011. Last modified 10 July 2017 { lib.size <- y$samples$lib.size if(normalized.lib.sizes) lib.size <- lib.size*y$samples$norm.factors cpm.default(y$counts,lib.size=lib.size,log=log,prior.count=prior.count) } The function checks to see if a DGEList object was provided with a *lib.size* and *norm.factors* column (created when you run calcNormFactors), if so then it uses those in the normalisation of the raw counts. You were right in your original post, just run the following and you will have TMM normalised counts: dge <- calcNormFactors(dge, method = "TMM") tmm <- cpm(dge)
biostars
{"uid": 317701, "view_count": 29756, "vote_count": 10}
I've simulated RNA abundance with wgsim. The simulation itself was error free. There is a single factor in my experiment that looks like: ```r A1 A2 A3 B1 B2 B3 R1_101 113 113 113 13 11 9 R1_102 247 246 246 12 12 14 R1_103 20835 20915 20788 9973 9955 9973 ``` A1, A2, A3 are the simulated replicates for the first level. B1, B2 and B3 are the simulated replicates for the second level. As expected, the read counts for each level are very close because it was an error-free simulation. The purpose of the experiment is to compare it with Cuffdiff (another differential expression package) in detecting log-fold changes. Unfortunately, I ran into an error in DESeq2: ```r Error in estimateDispersionsFit(object, fitType = fitType, quiet = quiet) : all gene-wise dispersion estimates are within 2 orders of magnitude ``` It looks like the package is unable to estimate a dispersion factor (most likely it's too small). However, I had no problem with Cuffdiff. Is there anything that I can do to make it work?
The DESeq2 method has empirical Bayes parts to it, which involve sharing information across genes to improve estimates (see [the paper][1]). In this case, during dispersion estimation, we look across the genes at the distribution of gene-wise dispersion estimates to improve the final estimates (posterior modes). If you simulate data which has no (over)dispersion, then these methods don't make sense. Did you clip the warning message which was printed in the console for some reason? Because the rest of it tells you what to do: ```r # all gene-wise dispersion estimates are within 2 orders of magnitude # from the minimum value, and so the standard curve fitting techniques will not work. # One can instead use the gene-wise estimates as final estimates: dds <- estimateDispersionsGeneEst(dds) dispersions(dds) <- mcols(dds)$dispGeneEst ``` [1]: http://www.genomebiology.com/2014/15/12/550
biostars
{"uid": 149165, "view_count": 9151, "vote_count": 1}
I would like to identify a "canonical" transcript for every protein-coding gene in Ensembl. For project-related reasons, I'm using the `EnsDb.Hsapiens.v75` package in R. I realize, of course, that "canonical" is a working definition at best, and inappropriate in some cases - but for ease of graphing some data I just want one transcript per gene for now. From manually inspecting genes in Ensembl, it looks like the lowest-numbered transcript ID for each corresponds to what I'm looking for. Some code to pull out a few examples: library(EnsDb.Hsapiens.v75) library(tidyverse) genes <- keys(EnsDb.Hsapiens.v75, keytype='GENEID') ensembl <- AnnotationDbi::select(EnsDb.Hsapiens.v75, keys=genes, keytype='GENEID', columns=c('GENEID', 'SYMBOL', 'GENEBIOTYPE')) ensembl_cds <- filter(ensembl, GENEBIOTYPE=='protein_coding') ensembl_cds_tx <- AnnotationDbi::select(EnsDb.Hsapiens.v75, keys=genes, keytype='GENEID', columns=c('SYMBOL', 'TXID')) head(ensembl_cds_tx) gois <- c('RSPO1', 'PRSS1', 'CDH1') gois_tx <- filter(ensembl_cds_tx, SYMBOL %in% gois) %>% arrange(SYMBOL, TXID) %>% print() gois_tx_lowest <- gois_tx[!duplicated(gois_tx$SYMBOL),] %>% print() Each of the lowest transcript IDs pulled out above (ENST00000261769, ENST00000311737, ENST00000356545) corresponds to an Ensembl transcript for the respective genes (CDH1, PRSS1, RSPO1) that matches with Refseq and the Consensus CDS database. (Although, for RSPO1, there are three other transcripts that also have Refseq matches, which speaks to the arbitrariness of picking a single canonical transcript.) My question is, is this the general practice across the Ensembl transcript database, that the lowest numbered transcript for a gene corresponds to a canonical or semi-canonical transcript, or have I just gotten lucky so far?
No. The numbers are arbitrary. The canonical transcript is the one which is labelled canonical, which you can get as a filter or an attribute. The stable IDs are assigned in order, so the first transcript ever identified was ENST00000000001, the second ENST00000000002 etc. This means that for a gene, the one with the lowest number was the first one to be identified. In all probability, the first one identified is the one that is the most highly expressed, highly conserved and well-studied, which makes it coincidentally also the canonical. But it's not always the case.
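As a pointer (not tested against EnsDb.Hsapiens.v75, which pre-dates the flag): in current Ensembl releases the canonical label is exposed through biomaRt as the `transcript_is_canonical` attribute, so a sketch of retrieving it would be:

```r
library(biomaRt)

mart <- useEnsembl(biomart = "genes", dataset = "hsapiens_gene_ensembl")

canon <- getBM(attributes = c("ensembl_gene_id",
                              "ensembl_transcript_id",
                              "transcript_is_canonical"),
               filters = "hgnc_symbol",
               values  = c("RSPO1", "PRSS1", "CDH1"),
               mart    = mart)

# keep only the transcript flagged as canonical for each gene
subset(canon, transcript_is_canonical == 1)
```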
biostars
{"uid": 9496967, "view_count": 647, "vote_count": 1}
Hi, I am trying to do PCA analysis for my samples for initial quality control. I have 2 different sets of samples - one were sequenced 50bp and other was sequenced 75bp (both of them have disease and control cases). To do the PCA on those samples, I ran DEseq2 on them (which necessarily requires non-normalized counts), followed by vst and plotPCA. But in my PCA plot, I see two clusters - one for the 50bp samples and the other for 75 bp samples. This is not necessarily expected, since there is nothing different between the samples except for sequencing depth. Someone said I should normalize my data. But I think that will be taken care of by DESeq2, and it anyway shouldn't be fed normalized counts. Here is the plot. Any suggestions? ![PCA plot link][1] Here is the image link in case it doesn't show - https://imgur.com/CFXHky4 Here is my code. Here P and NP mean Pain and Non-Pain, which is the effect I am studying. Some files are 50bp, others are 75bp, and I have **not** provided that info to DESeq - counts_all_fullGTF <- featureCounts(nthreads=3, isGTFAnnotationFile=TRUE, annot.ext='/Volumes/bam/DRG/annotations/Homo_sapiens.GRCh38.95.gtf', files=c('/Volumes/.../47T7L.fastqAligned.sortedByCoord.out.bam','/Volumes/.../47T7R.fastqAligned.sortedByCoord.out.bam'))$counts sampleTable_all <- data.frame(condition=factor(c('P','NP','P','P','P','P','NP','P','P','NP','P','P','P','P','P','P','NP','P','P','NP','P','P','P','P','P','NP','NP','NP','NP','P','P','P','P','P'))) coldata <- sampleTable_all deseqdata_fullGTF <- DESeqDataSetFromMatrix(countData=counts_all_fullGTF, colData=coldata, design=~condition) dds_fullGTF <- DESeq(deseqdata_fullGTF) vsd_fullGTF <- vst(dds_fullGTF) library(ggrepel) plotPCA(vsd_fullGTF, ntop=5000, "condition") + geom_label_repel(aes(label=substr(name, start = 1, stop = 6), colour = "white", fontface = "bold")) [1]: https://imgur.com/CFXHky4
Firstly, yes, the data should be normalised, and ideally transformed to a normal distribution. When you ran `vst()`, did you set `blind = FALSE` and did you have `ReadLength` (or `batch`) as a covariate in your design formula? Setting `blind = FALSE` 'exposes' the variance stabilising transformation to the design formula - it's useful for producing expression data for downstream applications outside of DESeq2. Read here for further information: http://bioconductor.org/packages/devel/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#blind-dispersion-estimation If you still need to remove that batch effect, then use `limma::removeBatchEffect()` on the variance-stabilised expression levels. As to the origin of the observed difference, I think that multi-mapping will obviously be a bigger issue, given the shorter read length. However, multi-mapping may be dealt with by the aligner or pseudo-aligner that you use - which were used? Kevin
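A minimal sketch of both suggestions, assuming you add a read-length column (50 vs 75 bp) to `coldata`; the column name and the sample split below are hypothetical and would need to match your actual samples.

```r
# add read length as a factor to the sample table
# (hypothetical split between 50 bp and 75 bp samples; adjust to your data)
coldata$readlen <- factor(rep(c("50bp", "75bp"), times = c(20, 14)))

# 1) model read length as a covariate and expose vst() to the design
dds <- DESeqDataSetFromMatrix(countData = counts_all_fullGTF,
                              colData   = coldata,
                              design    = ~ readlen + condition)
dds <- DESeq(dds)
vsd <- vst(dds, blind = FALSE)

# 2) for visualisation only: strip the read-length effect from the
#    variance-stabilised values before plotting the PCA
assay(vsd) <- limma::removeBatchEffect(assay(vsd), batch = vsd$readlen)
plotPCA(vsd, intgroup = "condition")
```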
biostars
{"uid": 434664, "view_count": 2945, "vote_count": 1}
**Package 'VennDiagram'.** How can I get the list of common objects in each combination? For example, what are the 44 common genes in the intersection (similar image taken from online)? <a href="https://imgbb.com/"><img src="https://i.ibb.co/h2gLjkQ/venn.png" alt="venn" border="0"></a> I have used the scripts below: Overlap <- calculate.overlap(x = list(BB, BB, BRR)) Overlap[[1]] But with this method I cannot find the lists of 1761 and 466 genes; how are these numbers assigned? Even when I tried `Overlap$a5`, `Overlap$a7`, etc., some of the values were not given correctly. Could someone help me?
From what I can tell, the `a`s are distributed from top left to right, to bottom. i.e. ``` a1 = 1761 a2=126 a3=466 a4=64 a5=44 a6=27 a7=366 ``` This is based on: ![enter image description here][1] > library("VennDiagram") > > gene_list = paste0("GENE", 1:1000) > > studies = list( S1=sample(gene_list, 700, replace = FALSE), + S2=sample(gene_list, 700, replace = FALSE), + S3=sample(gene_list, 700, replace = FALSE) ) > > ol = calculate.overlap(x = studies) > ol_size=sapply(ol, length) > > > venn.diagram( + x = studies, + euler.d = TRUE, + filename = "Euler_3set_scaled.tiff", + cex = 2.5, + cat.cex = 2.5, + cat.pos = 0 + ); [1] 1 > # Get the 63 > length( setdiff(studies$S1, union(studies$S2, studies$S3)) ) [1] 63 > # Confirm order > ol_size a5 a2 a4 a6 a1 a3 a7 351 143 143 134 63 72 72 And it matches the order from `a1` to `a7` going left-right, top-bottom. If you want to be sure of what you're getting, you can use [set operators][2] like I did with `setdiff()` and `union()`. [1]: https://i.imgur.com/85fPa1e.png [2]: https://stat.ethz.ch/R-manual/R-devel/library/base/html/sets.html
biostars
{"uid": 417100, "view_count": 7312, "vote_count": 5}
I'm in the process of creating concensus sequences, and my specific aim is to remove any ambiguous characters (for reasons that are unimportant to the question). I couldn't find a simple tool that would do what I want satisfactorily, so I started writing my own (until someone reminded me about HMMs but more on that is a second). I have the following MSA as a AlignIO object: >>> print(msa) SingleLetterAlphabet() alignment with 16 rows and 149 columns MSTTPEQIAVEYPIPTYRFVVSLGDEQIPFNSVSGLDISHDVIE...QAA PAU_02775 MSTTPEQIAVEYPIPTYRFVVSIGDEQIPFNSVSGLDISHDVIE...QAA PLT_01696 MSTTPEQIAVEYPIPTYRFVVSIGDEQVPFNSVSGLDISHDVIE...QAA PAK_02606 MSTTPEQIAVEYPIPTYRFVVSIGDEKVPFNSVSGLDISHDVIE...QAA PLT_01736 MTTTT----VDYPIPAYRFVVSVGDEQIPFNNVSGLDITYDVIE...QAA PAK_01896 MATTT----VDYPIPAYRFVVSVGDEQIPFNSVSGLDITYDVIE...QAA PAU_02074 MSVTTEQIAVDYPIPTYRFVVSVGDEQIPFNNVSGLDITYDVIE...QAA PLT_02424 MTITPEQIAVDYPIPAYRFVVSVGDEKIPFNNVSGLDVHYDVIE...QAP PLT_01716 MAITPEQIAVEYPIPTYRFVVSVGDEQIPFNNVSGLDVHYDVIE...QAA PLT_01758 MSTSTSQIAVEYPIPVYRFIVSIGDDQIPFNSVSGLDINYDTIE...QAV PAK_03203 MSTSTSQIAVEYPIPVYRFIVSVGDEKIPFNSVSGLDISYDTIE...QAV PAU_03392 MSITQEQIAAEYPIPSYRFMVSIGDVQVPFNSVSGLDRKYEVIE...QVP PAK_02014 MSITQEQIAAEYPIPSYRFMVSIGDVQVPFNSVSGLDRKYEVIE...QVP PAU_02206 MSTTADQIAVQYPIPTYRFVVTIGDEQMCFQSVSGLDISYDTIE...EFH PAK_01787 MSTTADQIAVQYPIPTYRFVVTIGDEQMCFQSVSGLDISYDTIE...EFH PAU_01961 MSTTVDQIAVQYPIPTYRFVVTVGDEQMSFQSVSGLDISYDTIE...EFH PLT_02568 * (Note the asterisk I've added for the moment). If I build a concensus with `hmmemit -c` ("majority-rule consensus sequence"), I get the following sequence: >hmmemit-consensus * MSTTAEQIAVEYPIPTYRFVVSVGDEQIPFNSVSGLDISYDVIEYKDGVGNYYKMPGQRQ AINITLRKGVFSGDTKLFDWINSIQLNQVEKKDISISLTNEAGTEILLTWSVANAFPTSL TSPSFDATSNEVAVQEISLTADRVTIQAA At column 23 (22 if zero based) (the asterisk), `hmmemit` places a Valine (`V`). My own tool however, places an Isoleucine (`L`). Part of my process is to use `collections.Counter` to score the string, and in that column, `I` occurs 8 times, and `V` 7 times. Why is `hmmemit` choosing a lower frequency amino acid? Basic process of my script (relevant parts only): def enumerate_string(string): """Returns the most common characters of a string. Multiple characters are returned if there are equally frequent characters.""" from collections import Counter counts = Counter(string) keys = [] for key, value in counts.iteritems(): if value == max(counts.values()): keys.append(key) return keys ------ >>> msa = AlignIO.read('myalignment.fasta', 'fasta') >>> msa_summary = AlignInfo.SummaryInfo(msa) >>> Counter(msa_summary.get_column(22)) Counter({'I': 8, 'V': 7, 'L': 1}) >>> enumerate_string(msa_summary.get_column(22)) ['I'] This clearly shows that Isoleucine is the most common, so why doesn't `hmmemit` choose it?
Because hmmemit is emitting the maximum likelihood sequence, according to the profile HMM's parameterization. ("majority-rule" in hmmemit means w.r.t. the profile HMM, not w.r.t. the MSA that the HMM was built from. hmmemit takes the HMM as input, not the MSA.)
biostars
{"uid": 270083, "view_count": 1729, "vote_count": 1}
Dear all, there are many posts about remove duplicate sequences in a fasta file (https://www.biostars.org/p/3003/), but I want to remove only the duplicate sequences with the same ids. I have many duplicate sequences in my fasta file, but with different ids and I want to keep them. How to remove only same id sequence duplicates? I have protein sequences and my sequences are split in different lines.
http://bioinf.shenwei.me/seqkit/usage/#rmdup
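A hedged usage sketch: by default `seqkit rmdup` de-duplicates on the sequence ID (not the sequence itself), keeping the first record seen, which is what you want here; `-s` would instead collapse identical sequences and remove too much.

```bash
# remove records whose ID has already been seen; identical sequences
# under different IDs are kept
seqkit rmdup proteins.fasta -o proteins.dedup.fasta
```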
biostars
{"uid": 230686, "view_count": 7542, "vote_count": 1}
Hi everyone, I have 2 datasets that I will be using for my university project. The first dataset is relatively small and the other is much larger. Would it look odd in my university project to have a different interaction-probability threshold for each of the datasets? For the smaller one I set the interaction probability to 0.5, whereas the second dataset has several lists of nearly 1000 proteins, so I will have to raise the interaction probability threshold to 0.7 or 0.9 to make my Cytoscape/STRING network look less 'messy'. Is this bad practice?
While it is usually best and easier to explain/justify the use of a single threshold, I think it is ok to use different ones as long as it is properly explained. The true bad practice here would be to use different threshold and not make it clear. One way to be completely transparent about this would be to build 4 networks: the big and small dataset with both the high and low threshold. Then it is easy to discuss the benefits of each threshold by showing both of them and by explaining, for instance, why the low threshold is better for visualization of the small dataset and the high threshold is better for the visualization of the big dataset.
biostars
{"uid": 9464697, "view_count": 509, "vote_count": 1}
Hi all. I'm trying to produce assembly statistics like those in Table S1 and Figure S1 of this paper: > [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7266049/][1] I used the following command to get the BLAST results for my fasta. blastx -query /home/nkarim/avenae/trinity_even_out_dir/Trinity.300.longest.fasta \ -db /home/nkarim/blast/db/nr \ -outfmt 6 \ -evalue 1e-3 \ -out /home/nkarim/blast/output/avenae.out But I don't know how to get the items "Top BLASTx-hit species", "Percent of genes with at least one BLASTx hit" and "Percent of genes with at least one GO annotation". Can I get each item from the BLAST results? And could you tell me how to produce a table like Table S1, if you know how? Thank you very much for your help in advance! [1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7266049/
Since you're blasting with ncbi-NR you're almost there. By changing the blastx output options to outfmt 6 you're receiving tabular output in `avenae.out`, you can customise the output columns to also receive the species names. For example, you could run: ``` blastx -query /home/nkarim/avenae/trinity_even_out_dir/Trinity.300.longest.fasta \ -db /home/nkarim/blast/db/nr \ -outfmt "6 qseqid sseqid sscinames scomnames staxid pident length mismatch gapopen qstart qend sstart send ppos evalue bitscore" \ -evalue 1e-3 \ -out /home/nkarim/blast/output/avenae.out ``` Now you have all these extra columns, `sscinames` and `scomnames` are useful for your end as these contain the scientific name and the common name of your hits. The laziest way to get the 'top' hit of your queries is to keep only the first hit seen, that normally has the lowest e-value or the highest score : awk '! a[$1]++' avenae.out > avenae_first_hit_only.out This falls apart when you have several species with equally good hits, you should still check whether the first hits reported are actually the best hits (manually?). From this out file you also get the percentage of genes with at least one BLAST hit: wc -l avenae_first_hit_only.out That will report the number of genes with at least one hit. For GO terms, I'm not sure - you could run GO annotation for your own genes, there are a few online tools for that like PANNZER2 or eggnog-mapper where you can upload your proteins. For Table S1 (Assembly stats), Trinity has scripts for that, see this older thread https://www.biostars.org/p/233160/
biostars
{"uid": 9491491, "view_count": 983, "vote_count": 1}
I am trying to calculate the MFE along with the secondary structure for multifasta using RNAFold. The output generated is of the format. <pre>>abc GGCGGAGGUAGGGAGGCACGCGAUGGUAUUUCAGAGCCUCCCGAAUACAACUCCAGGGUAGGGUGUUGAAAGCGUUGGAGAUGUCUAAAGACACCGCCAG (((((.....(((((((.(..((.......)).).)))))))........((((((.((............)).)))))).((((....))))))))).. (-35.80) >lmn GGGAGGCACGCGAUGGUAUUUCAGAGCCUCCCGAAUACAACUCCAGGGUAGGGUGUUGAAAGCGUUGGAGAUGUCUAAAGACACCGCCAGUACCACCCCA (((((((.(..((.......)).).))))))).............((((..((((.........((((.(.((((....)))).).)))))))))))).. (-29.30) >xyz CGAUGGUAUUUCAGAGCCUCCCGAAUACAACUCCAGGGUAGGGUGUUGAAAGCGUUGGAGAUGUCUAAAGACACCGCCAGUACCACCCCACCCCGGGACA ....(((........)))(((((............((((.(((((.((......((((.(.((((....)))).).)))))).))))).))))))))).. (-28.40) </pre> Is there a way to get the output in tubular format with 1. Identifier, 2. sequence, 3. secondary structure and 4. MFE as columns? I have written regular expression scripts to capture each of the four and paste it in a file but I don't think that's an efficient way of doing it. Is there any other convenient way of doing it?
In Perl: #!/usr/bin/perl use strict; use warnings; while (<>) { chomp; if (/>/) { s/>//; print "$_\t"; # just the seq id } elsif (/\((-\d+\.\d+)\)$/) { my $mfe = $1; s/ \($mfe\)//; print "$_\t$mfe\n"; # fold + MFE } else { print "$_\t"; # the seq } } Validation: $ perl fold2tab.pl < fold.fa abc GGCGGAGGUAGGGAGGCACGCGAUGGUAUUUCAGAGCCUCCCGAAUACAACUCCAGGGUAGGGUGUUGAAAGCGUUGGAGAUGUCUAAAGACACCGCCAG (((((.....(((((((.(..((.......)).).)))))))........((((((.((............)).)))))).((((....))))))))).. -35.80 lmn GGGAGGCACGCGAUGGUAUUUCAGAGCCUCCCGAAUACAACUCCAGGGUAGGGUGUUGAAAGCGUUGGAGAUGUCUAAAGACACCGCCAGUACCACCCCA (((((((.(..((.......)).).))))))).............((((..((((.........((((.(.((((....)))).).)))))))))))).. -29.30 xyz CGAUGGUAUUUCAGAGCCUCCCGAAUACAACUCCAGGGUAGGGUGUUGAAAGCGUUGGAGAUGUCUAAAGACACCGCCAGUACCACCCCACCCCGGGACA ....(((........)))(((((............((((.(((((.((......((((.(.((((....)))).).)))))).))))).))))))))).. -28.40
biostars
{"uid": 400256, "view_count": 1731, "vote_count": 1}
I'm a new PhD student and bioinformatics is all very new to me, so apologies if this seems trivial. I have some paired-end RNA-seq data which I've managed to convert to sorted and indexed BAM files; however, I'm confused about where to go from here. Am I right in thinking I can now visualise this data, and if so, how?
You can visualise the indexed BAMs with [IGV][1]. It should be pretty straight forward if you **choose the correct genome build**. If you are working human or mouse genome, or any of the genomes listed on [ucsc genome browser][2], you can convert your bam files to bigwig files using [deepTools2][3] and create [tracks][4] to be viewed on genome browser. These tracks can be saved and visualised on any web browser and can be shared with collaborators. [1]: http://software.broadinstitute.org/software/igv/home [2]: https://genome-euro.ucsc.edu/cgi-bin/hgGateway?hgsid=602554305_kMaKYkyWlt61jjMmoBrR3w3UtOHA&redirect=manual&source=genome.ucsc.edu [3]: http://deeptools.readthedocs.io/en/latest/content/tools/bamCoverage.html [4]: https://genome.ucsc.edu/goldenpath/help/customTrack.html
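For the bigwig route, a minimal sketch with recent deepTools versions (the normalisation choice is just an example):

```bash
# sorted + indexed BAM in, bigwig out; CPM normalisation chosen as an example
bamCoverage -b sample.sorted.bam -o sample.bw --normalizeUsing CPM --binSize 10
```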
biostars
{"uid": 266450, "view_count": 2098, "vote_count": 1}
Dear users, I have read about the UCSC wiggle format (bigWig) in depth. I understand the variable-step and fixed-step formats and the use of bigWig for visualization in a browser, etc. However, I don't have clarity on using ChIP-Seq or ATAC-Seq data in bigWig format. > Example of the Wig file after converting from bigWig format using bigWigToWig. This example data is from ATAC-Seq. #bedGraph section chr1:0-870999 chr1 0 9999 0 chr1 9999 10099 16.561 chr1 10099 10199 24.2045 chr1 10199 10299 2.54784 chr1 10299 10399 5.09568 chr1 10399 10499 11.4653 chr1 10499 10599 7.64352 chr1 10599 10699 3.82176 chr1 10699 13199 0 In this context, could someone help me understand: 1. What is the real-number value in the 4th column? Is it the read depth for that position, or some transformed value of read depth? Typically in ChIP-Seq or ATAC-Seq, what value would one represent in this column? 2. If this value is used as a threshold, how do users specify thresholds for selecting significantly enriched regions? (I am not sure whether any statistical test is associated with the 'significance'; I cannot find any reference, but users call it that.) 3. I have 6 ATAC-Seq bigWig files for 6 different samples. How do I find the regions of interest present in at least 4 of the samples? Thank you for your help. -Adrian
The bedGraph (or bigwig) format is always the same: `chr-start-end-value`. Value can actually be anything that can be associated with a stretch of DNA as defined in the first three columns. It can be the raw read count for that interval, it can be normalized read count like reads per million, it can be an enrichment score for this experimental condition over a control experiment, it can be the mean methylation store, the GC content etcetc. Most commonly, people use bedGraph/bigwig to create browser tracks displaying the normalized read count across the genome, and in this case, it would not matter if it is ATAC-seq, ChIP-seq or Whatever-seq. One simply counts the number of reads that cover each base and aggregates bases with equal coverage into one interval to make the files smaller, so if the first 100 bases of a chromosome have coverage of 0, one would write: chr1 0 100 0 instead of 100 intervals like: chr1 0 1 0 chr1 1 2 0 chr1 2 3 0 (...) For statistical analysis, one typically calls peaks (e.g. with MACS) and then makes a count matrix to obtain the raw counts for each replicate per peak. Significances between conditions are then inferred with appropriate statistical frameworks, such as `DESeq2, edgeR, csaw` etc. Please use the search function and google for differential analysis of ATAC-seq data, there is plenty of material available.
biostars
{"uid": 354635, "view_count": 4998, "vote_count": 2}
Hi, The fastq format right now has: header, sequence string, "phantom" header and quality string. For storage purposes, why doesn't the fastq format incorporate nucleotide information in the quality strings? Is it just to make it more human readable?
<p>FASTQ is almost never stored uncompressed. With compression, merging bases and quality may actually hurt compression as compression is usually less efficient when you mix different types of information together. FASTQ is not meant to be read by a human from the start to end, but it is meant to be eye-read in a small portion and manipulated with the many unix tools. These are critical features.</p>
biostars
{"uid": 145684, "view_count": 1420, "vote_count": 2}
<p>&quot;discontiguous megablast&quot; on the NCBI website, as well as other implementations, appears to have a &quot;Discontiguous Word Options&quot; parameter set. These include a template length and template type; the latter can be:</p> <ul> <li>Coding</li> <li>Maximal</li> <li>Two templates</li> </ul> <p>What do these options do?</p>
The way BLAST works is by first matching a word between the query and the database; this match is then extended using dynamic programming. The word is usually 7-256 letters in length. When using discontiguous BLAST, the word is 11-12 matching letters within a 25-letter-long template. The options differ in how these letters that must match between the query and database are spread along the longer template. In 'coding', the matching letters are the first two nucleotides of every triplet; in 'maximal', they are spread in a predefined pattern that should maximize the number of matches; 'two templates' simply tries to match both of these patterns. In short, discontiguous BLAST allows matches that don't contain a 7-letter word that matches perfectly between the two.
biostars
{"uid": 102279, "view_count": 2491, "vote_count": 1}
I have a large fasta file of 16S sequences and I want to retrieve sequences using a list of organism names. Do you know a script capable of doing it? EDIT: Headers look like that: ``` >S000000859 Bacillus sp. USC14; AF346495 sequence >S000001027 Paenibacillus borealis; KN25; AJ011325 sequence ``` And I have a list like the following: ``` Paenibacilus borealis Paenibacillus sp. 1-18 Paenibacillus sp. 1-49 Paenibacillus sp. A9 Paenibacillus sp. Aloe-11 ``` I want to retrieve those sequences that match with names present in the list.
You can do this easily by installing the FAST: Fast Analysis of Sequences Toolbox ([publication][1])([github][2]). You can install it by using this command (only use sudo if you need to): (sudo) perl -MCPAN -e 'install FAST' Once it is installed here is a small bash script to do what you need: ``` #!/bin/bash while read line ;do cat original_fasta.fa | fasgrep -di "$line" >> reduced_fasta_file.fa done < species.txt ``` [1]: http://journal.frontiersin.org/article/10.3389/fgene.2015.00172/abstract [2]: https://github.com/tlawrence3/FAST
biostars
{"uid": 141241, "view_count": 14949, "vote_count": 3}
Hi friends, I am attempting to sort the BAM files that I obtained from my bowtie SAM files. I am apparently not indexing them properly, according to this error I receive after creating my BAM file: "random alignment retrieval only works for indexed BAM or CRAM files". I understand I am supposed to index the files before sorting them. #creating the appropriate files samtools view -Sb sample.sam.pair > sample.pair samtools view -bt ~/bigdata/refgenome/genome.fa.fai - - | samtools sort sample.pair -o sample.pair.bam samtools view -Sb sample.sam.single > sample.single samtools view -bt ~/bigdata/refgenome/genome.fa.fai - - | samtools sort sample.single -o sample.single.bam #merge samtools merge sample.all.bam sample.pair.bam sample.single.bam -@ 2 rm sample.pair sample.single #index the final bam samtools index sample.all.bam Any help would be appreciated.
I think you're over-thinking things :) You can only index BAM files on position, and only when the data is sorted by position to begin with (don't ask...) So to sort by position just do: samtools sort my.sam > my_sorted.bam Then index with samtools index my_sorted.bam It's as easy as that. If you want to merge the output files from bowtie, do that as the very first step, because I don't think samtools performs any optimisations for merging sorted BAMs/SAMs. However, I'd also recommend against bowtie2 in favour of STAR or BWA-MEM, but that's just a personal preference at the end of the day.
biostars
{"uid": 260419, "view_count": 85405, "vote_count": 3}
Hello, I've hit a bit of a snag here: I had to update to the latest version of R to get ballgown working, but one of the functions used in filtering out low-expression genes, rowVars, is not available for this version of R. Using this tutorial: https://rpubs.com/kapeelc12/Ballgown The rowVars function is employed in filtering: bg_filt = subset(bg,"rowVars(texpr(bg)) >1",genomesubset=TRUE) I have RStudio; would it be possible to download an older version of R and switch to it temporarily to use rowVars, then switch back to continue the ballgown output analysis? If so, does anyone know what the last version of R that rowVars worked on was? Or should I try to find an older version of ballgown and downgrade my R version? I know there are many functions on Bioconductor that do not necessarily work on the latest version of R, so is there a benefit to running older versions of R generally when working with these packages?
Is it the `rowVars` function from [metaMA][1] ? install.packages('metaMA') library(metaMA) ?rowVars [1]: https://www.rdocumentation.org/packages/metaMA/versions/3.1.2
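If metaMA's `rowVars()` is the one the tutorial expects, the filtering step should then work unchanged; a quick sketch:

```r
library(metaMA)    # provides rowVars()
library(ballgown)

# same filtering step as in the tutorial, which should now find rowVars()
bg_filt <- subset(bg, "rowVars(texpr(bg)) > 1", genomesubset = TRUE)
```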
biostars
{"uid": 367562, "view_count": 4017, "vote_count": 1}
I have 2 sorted BAM files and I want to merge them and get a VCF file from them using samtools mpileup. First I merge these 2 files with `samtools merge` and then I get a VCF file as output: samtools mpileup genome.fa merged-bam.bam output.vcf Then I use: samtools mpileup genome.fa 1.bam 2.bam output.vcf But the outputs of these 2 approaches have different sizes. Where am I going wrong? Thanks a lot
My answer depends on your SAM header + read groups ( https://gatkforums.broadinstitute.org/gatk/discussion/6472/read-groups ) 1) samtools merge: this produces one BAM and you'll get only **one** virtual sample in the final VCF (see the columns after FORMAT in the '#CHROM' line) 2) the 2nd command 'samtools mpileup genome.fa 1.bam 2.bam output.vcf' will produce a VCF with **two** samples (this is what you want in 99% of the use cases)
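As an aside, for actually producing genotype calls (rather than the raw pileup), the current idiom is bcftools; a sketch with your two BAMs, which keeps them as two separate samples in the VCF:

```bash
bcftools mpileup -f genome.fa 1.bam 2.bam | bcftools call -mv -Ov -o output.vcf
```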
biostars
{"uid": 399693, "view_count": 2124, "vote_count": 1}
Dear all, considering a RNA-seq experiment and analysis that provides the expression values as TPM, please would you let me know what is a minimum TPM value in order to consider a gene to be expressed ? talking about RPKM.FPKM units, I remember that a gene was considered expressed if RPKM (or FPKM) > 1 ... thanks a lot, -- bogdan
As already pointed out, there is no ideal cutoff. However, there is at least one method, zFPKM, that tries to define an expression cutoff. BioC: https://bioconductor.org/packages/release/bioc/vignettes/zFPKM/inst/doc/zFPKM.html Publication: https://www.ncbi.nlm.nih.gov/pubmed/24215113 > the community adopted several heuristics for RNA-seq > analysis, most notably an arbitrary expression threshold of 0.3 - 1 > FPKM for downstream analysis. However, advances in RNA-seq library > preparation, sequencing technology, and informatic analysis have > addressed many of the systemic sources of uncertainty and undermined > the assumptions that drove the adoption of these heuristics. ... We > use ENCODE data on chromatin state to show that ultralow-expression > genes are predominantly associated with repressed chromatin; we > provide a novel normalization metric, zFPKM, that identifies the > threshold between active and background gene expression; and we show > that this threshold is robust to experimental and analytical > variations.
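A minimal usage sketch of that package, assuming `expr` is a data frame of FPKM (or TPM) values with genes in rows and samples in columns; the -3 cutoff is the one proposed in the Hart et al. paper:

```r
library(zFPKM)

z <- zFPKM(expr)    # z-transform the expression matrix
zFPKMPlot(expr)     # per-sample fitted distributions, useful as a sanity check

# genes whose mean zFPKM exceeds -3 are treated as actively expressed
expressed <- rowMeans(z) > -3
table(expressed)
```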
biostars
{"uid": 366965, "view_count": 12865, "vote_count": 1}
Hi all, I'm trying to run a single-cell velocity analysis on hg19 data using kallisto, following along with this tutorial: [https://www.kallistobus.tools/velocity_index_tutorial.html][1]

In the tutorial, the introns file they generate theoretically looks like this, with the familiar-looking ENST transcript IDs:

    $ head -4 introns.bed
    chr1 12118 12721 ENST00000456328.2_intron_0_109_chr1_12228_f 0 +
    chr1 12612 13329 ENST00000456328.2_intron_1_109_chr1_12722_f 0 +
    chr1 11948 12287 ENST00000450305.2_intron_0_109_chr1_12058_f 0 +
    chr1 12118 12721 ENST00000450305.2_intron_1_109_chr1_12228_f 0 +

However, mine looks like this, even though I followed their exact instructions on how to generate the introns.bed.gz file from the UCSC Table Browser:

    $ head -4 introns.bed
    chr1 12200 12639 uc001aaa.3_intron_0_27_chr1_12228_f 0 +
    chr1 12694 13247 uc001aaa.3_intron_1_27_chr1_12722_f 0 +
    chr1 12200 12672 uc010nxr.1_intron_0_27_chr1_12228_f 0 +
    chr1 12670 13247 uc010nxr.1_intron_1_27_chr1_12698_f 0 +

Because of this, I don't think I am able to continue following along with the tutorial, and I don't imagine my file is correct anyway. Can someone please help me understand a) why I can't get normal-looking transcript IDs for GRCh37 (hg19) from UCSC, b) whether I am just doing something totally wrong, or c) whether this is fine and there is a workaround. I am just confused as to what those strange IDs are... Thanks for the help!

[1]: https://www.kallistobus.tools/velocity_index_tutorial.html
If you want your transcript IDs to have the ENST identifiers, you should select Ensembl Genes instead of UCSC Genes for the track. The `uc001aaa.3`-style names you are seeing are UCSC Genes (knownGene) transcript IDs, which is exactly what the UCSC Genes track produces.
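If you regenerate the BED from the Ensembl Genes track, a quick sanity check (just a sketch) is to confirm that the name column now carries ENST identifiers:

    # the fourth BED column holds the transcript-derived intron names
    cut -f4 introns.bed | head -4
    grep -c "ENST" introns.bed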
biostars
{"uid": 465322, "view_count": 995, "vote_count": 1}
What happened to the miRecords miRNA database website? The webpage is not available, and I used its data as part of an analysis in my paper. Now the reviewer can't see the page and asks for an explanation. Edit: Sorry, I forgot to post the link. miRecords was supposed to be available at http://miRecords.umn.edu/miRecords.
The persistence (or lack thereof) of bioinformatics resources online has been analysed a number of times:

- [On the persistence of supplementary resources in biomedical publications][1]
- [A survey of the availability of primary bioinformatics web resources][2]
- [404 not found: the stability and persistence of URLs published in MEDLINE][3]
- [Persistence and Availability of Web Services in Computational Biology][4]

And sadly you have fallen foul of a resource that has either moved (with no redirect), been deprecated, or the server in the server room has died and the one person who knew how to maintain the resource left the lab 3 years ago. Relying on resources that you cannot download and archive yourself, and that are not hosted in institutional repositories or dedicated data repositories such as Figshare, is asking for trouble. You will probably need to alter the paper to reflect data from a similar resource, or attempt to contact the authors of the original paper. Or share some of the URLs above with your reviewer.

[1]: http://www.biomedcentral.com/1471-2105/7/260
[2]: http://www.sciencedirect.com/science/article/pii/S1672022907600175
[3]: http://bioinformatics.oxfordjournals.org/content/20/5/668.abstract
[4]: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0024914
biostars
{"uid": 110420, "view_count": 5805, "vote_count": 3}
I have a dataset of proteins that I have blasted against the uniprot-swissprot database. I'd now like to identify which proteins are likely to have a mitochondrial sub-cellular localisation based on the sub-cellular localisation of their best blast hit in the swiss-prot database. The fasta headers of the uniprot proteins look like this: ">sp|Q64602|AADAT_RAT Kynurenine/alpha-aminoadipate aminotransferase, mitochondrial OS=Rattus norvegicus GN=Aadat PE=1 SV=1" I have found a gene ontology mapping file (link below) but the fasta headers don't contain the GO IDs necessary to map them. ftp://ftp.ebi.ac.uk/pub/databases/GO/goa/external2go/uniprotkb_sl2go Is there some intermediate file that I need to use and does anyone know where to find it? Any help would be appreciated.
Using XSLT:

    $ awk -F '|' '/^>/ {printf("%s\n",$2);}' input.fa | while read ACN ; do curl -s "https://www.uniprot.org/uniprot/${ACN}.xml" | xsltproc transform.xsl - ; done
    Q64602	Mitochondrion

with transform.xsl: https://gist.github.com/lindenb/92ae5d03183d1ff56a17684d30dd8f7e
biostars
{"uid": 249178, "view_count": 1656, "vote_count": 1}
I have used `tophat2` to map RNA-seq reads to a draft genome. The alignment percentage is around 75-80% for all samples. When I take the unmapped reads and BLAST them, they hit the same organism, indicating the unmapped reads might have potential information. How do I deal with the unmapped reads and include them in DE analysis or any other downstream analysis? Should I go with an entirely different pipeline like `trinity`?
I have tried STAR and the mapping percentage increased to 90-92% (with tophat2, it was only up to 75-85%). I will try BBMap soon.
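In case it helps, a rough sketch of a STAR run that also keeps the unmapped reads for inspection. The index directory, read files, and thread count are placeholders; check the STAR manual for your version:

    # build the index once
    STAR --runMode genomeGenerate --genomeDir star_index \
         --genomeFastaFiles draft_genome.fa --runThreadN 8
    # map, writing unmapped reads to separate FASTQ-like files
    STAR --genomeDir star_index --readFilesIn reads_1.fq reads_2.fq \
         --runThreadN 8 --outSAMtype BAM SortedByCoordinate \
         --outReadsUnmapped Fastx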
biostars
{"uid": 138707, "view_count": 4691, "vote_count": 3}
It seems that multiple-testing correction of the GO enrichment results is important. However, topGO doesn't do that, so I decided to do it myself. The dilemma looks like this: FDR correction with `p.adjust()` in R needs the complete GO enrichment result, whereas `GenTable(, topNodes = N)` in topGO only gives out the top N results. So how can I apply the "fdr" multiple-testing correction to the GO enrichment results from topGO? Sincerely
Instead of specifying `topNodes` as some value N, you can get all available GO terms using:

```r
allGO = usedGO(object = GOdata)
# use it in GenTable as follows:
res = GenTable(GOdata, ... , topNodes = length(allGO))
# then adjust, e.g. (the p-value column name depends on how you
# labelled your test result in GenTable):
# res$fdr = p.adjust(as.numeric(res$classicFisher), method = "BH")
```

Then you can go about doing your p-value adjustment on the full table.
biostars
{"uid": 143083, "view_count": 5578, "vote_count": 2}
Hi! I need advice on the processing steps for my project. It would be really nice to get some feedback on this. I want to perform a Trinity de novo assembly from metatranscriptome samples. Since there are different microorganisms inside, I want to use the big assembly as a reference, then map the individual samples (replicates under 2 different conditions) to it and count the transcripts for testing differential expression. The problem I am struggling with is this: for the big assembly I will have around, or more than, 200 million reads (PE), depending on how I process the sequences (I could make 2 big assemblies, one per condition, which would mean less data per assembly; or I could maybe get a better "reference assembly" by using all the data together). So I don't know if it will be possible to do this within the requested resources using Trinity on an HPC cluster; until now I have only assembled up to 40 million reads, and it is really difficult to keep a job running for so long. Maybe you could give me some advice on how I could improve the data (pre)processing steps? Another thing I'm not sure about: as I don't have a reference genome and I'm not expecting a big percentage of the transcripts to be annotated, it would be better for me to use merged PE reads (longer reads), which are about 20-30% of my sequences, but in that case I would lose the information from the rest of the unpaired reads AND I would have to treat my sequences in single-end mode with Trinity... Is there a way I could combine my merged data with the unmerged and include everything in my analysis, without having to treat everything as single-end data? Thanks in advance!
You should make only one consensus transcriptome with all your reads. After that, do the differential gene expression analysis; both analyses can be done using Trinity. Moreover, you should use Trinotate for functional annotation. You will need to use a cluster with approx. 250G of RAM for about 24 hours. Follow the instructions in the Trinity GitHub wiki: https://github.com/trinityrnaseq/trinityrnaseq/wiki. Additionally, you should use MEGAN to analyze the putative species. I already have a pipeline to do that, so if you need any help contact me.
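As a rough sketch of the pooled assembly step (the file names and resource numbers below are placeholders; see the Trinity wiki linked above for the full set of options):

    # one consensus assembly from all samples, both conditions pooled
    Trinity --seqType fq \
            --left  cond1_rep1_R1.fq,cond1_rep2_R1.fq,cond2_rep1_R1.fq \
            --right cond1_rep1_R2.fq,cond1_rep2_R2.fq,cond2_rep1_R2.fq \
            --CPU 16 --max_memory 250G --output trinity_out_dir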
biostars
{"uid": 290780, "view_count": 3071, "vote_count": 1}
Hello everyone, I am interested in a sequence ranging from 111614 to 111868 in a FASTA sequence (a scaffold sequence). I am trying to use samtools faidx to take this sequence, but it doesn't work and keeps returning:

    [fai_fetch] Warning - Reference 111-555 not found in FASTA file, returning empty sequence

After seeing some other users with the same problem, I tried to change the header name so that it contains no space (like ">scaffoldX"), but it still doesn't work. Here is the exact command I type (my FASTA file only contains the sequence of my scaffold):

    samtools faidx scaffold.fasta 111614-111868

Thanks a lot for your help.
Shouldn't it be "samtools faidx scaffold.fasta scaffoldX:111614-111868"?
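A quick sketch of how to check which name to pass (the region syntax is name:start-end; "scaffoldX" below is a placeholder for whatever your header actually says):

    # the name given to faidx must match the FASTA header up to the first whitespace
    grep '>' scaffold.fasta
    samtools faidx scaffold.fasta scaffoldX:111614-111868 > region.fa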
biostars
{"uid": 292078, "view_count": 5388, "vote_count": 1}
I have a file with around 20000 columns as gene names. I want to grep out the RPKM values for specific genes. Is there a way to grep out the column information?

    sample      gene17      gene92      gene1      ...  gene20000
    patient1    0.03569654  1.020565    0.0036522  ...  0.25247236

I only want gene72 for example, but it's not sorted in increasing order. Thanks.
Hi, in order to find the column number you can use:

    head -n 1 file | tr '\t' '\n' | cat -n | grep gene72

With `head -n 1`, you get only the file's first line. With `tr`, you replace the tab separator by a newline. With `cat -n`, you print the input with line numbers, on which you finally use `grep` to get the column of interest. With the found number, let it be j, you can use cut:

    cut -f 1,j file

Cheers, Michael
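If you'd rather do it in one step, here is a small awk sketch that looks up the column by its header name (here "gene72") and prints it next to the sample column:

    # find the column whose header equals g, then print column 1 plus that column for every row
    awk -F'\t' -v g="gene72" 'NR==1{for(i=1;i<=NF;i++) if($i==g) c=i} {print $1 FS $c}' file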
biostars
{"uid": 300774, "view_count": 14798, "vote_count": 2}