Dataset columns: INSTRUCTION (string, 48 to 14.1k characters), RESPONSE (string, 14 to 8.79k characters), SOURCE (string, 1 class), METADATA (string, 48 to 55 characters).
Hello, I was reading through https://www.biorxiv.org/content/10.1101/2020.04.06.025635v1.full.pdf, which describes a pooled sequencing method that uses barcodes to test about 10,000 COVID samples in one go. I came across "compressed DNA barcoding space" and "DNA barcoding deconvolution" for the first time and need some help understanding them. Although I searched the internet, I couldn't find any information on these terms. I would highly appreciate it if someone could help me with an explanation. Thanks.
The general idea behind pooled sequencing is that we sequence N samples with X barcodes, where X << N. A major cost in NGS is uniquely barcoding each sample: for each unique barcode, a primer of ~60-90 bases (depending on design) needs to be synthesized and purified, which adds roughly $1-2 per sample. To sequence a large number of samples at a time and make full use of the sequencing capacity, say 10,000 samples per day as in COVID-19 testing, we need to uniquely barcode each sample so that we can identify each sample post sequencing. So now you can see the problem in terms of cost: ordering 10,000 barcoded primers is going to cost several hundred thousand dollars, and managing the workflow is going to be non-trivial. However, if you add multiple barcodes to each sample, you can uniquely tag each sample with a much smaller set of barcodes. For example, with 10 barcodes and uniquely adding 5 of them to each sample, you can individually barcode ~30,000 samples (use the permutation formula, since barcode order also matters: n!/(n-r)!; n = 10, r = 5). Now you have drastically reduced the cost of ordering barcode primers. Sure, you are using more of each barcode primer, but ordering a few barcode oligos in bulk is cheaper and makes managing the workflow easier.

    Sample 1 gets barcodes B1,B2,B3,B4,B5
    Sample 2 gets barcodes B1,B2,B3,B4,B6

and so on. There are other ways to find the identities of N samples using X barcodes/NGS libraries where X << N; here, we make pools of samples so that each sample is distributed over a unique set of pools, and after sequencing the pools we solve the sample IDs based on the pools in which each sample occurred. See the following papers for some simple examples:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6134198/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5109470/
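To make the counting argument concrete, here is a minimal sketch of my own (not from the paper or the answer) that computes the number of uniquely addressable samples under the stated assumption that barcode order matters and no barcode is reused within a sample:

```python
# Number of ordered arrangements of r barcodes chosen from n distinct barcodes: n! / (n - r)!
from math import perm  # Python 3.8+

n_barcodes = 10   # distinct barcode oligos available
r_positions = 5   # barcodes attached to each sample

print(perm(n_barcodes, r_positions))  # 30240, i.e. ~30,000 samples
```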
biostars
{"uid": 438598, "view_count": 1108, "vote_count": 1}
Hello, I am trying to convert a batch of BAM files to FASTQs. I started out testing SAMTOOLS (collate/bam2fq) and PICARD (SamToFastq). At the outset the numbers seemed OK, but the statistics suggest that the SAMTOOLS output has twice the amount of duplicates as the Picard output. Has anyone experienced this? I am not sure if it is a samtools problem or I am not comprehending the QC stats. Any advice/recommendation/comments are welcome. Thanks! PS: In both cases I am outputting both the first end of the pair and the second end of the pair as separate files.

**UPDATED:** The commands used were:

    ##Samtools
    samtools collate -o name-collate.bam sample.bam
    samtools fastq -1 sample_1.fastq.gz -2 sample_2.fastq.gz -0 sample_0.fastq.gz name-collate.bam

    ##Picard
    java -Xmx2g -jar picard.jar SamToFastq I=sample.bam FASTQ=sample_1p.fastq.gz SECOND_END_FASTQ=sample_2p.fastq.gz UNPAIRED_FASTQ=sample_0p.fastq.gz

    ##Fastqc check
    fastqc -o fastqc_out/ sample_1p.fastq.gz

###Picard QC

![Picard][1]

###Samtools QC

![Samtools][2]

[1]: https://s15.postimg.cc/w6lv1rd8b/picard.png
[2]: https://s15.postimg.cc/i0646jhsr/samtools.png
By default, Picard doesn't output non-primary alignments, and samtools does. The secondary alignments which `samtools fastq` outputs should have two effects: an increase in duplication rate, as you noticed, and a larger number of reads - can you confirm this? Probably the Picard behavior is what you want. If you read the samtools manual carefully, you will see how to avoid outputting non-primary alignments.
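If you want to verify that non-primary alignments explain the difference, a quick check along these lines might help (a hedged pysam sketch of my own, not part of the original answer; the file name is a placeholder):

```python
# Count primary vs. secondary/supplementary records in the input BAM; a large
# non-primary count would explain the extra reads and higher duplication rate.
import pysam

primary = non_primary = 0
with pysam.AlignmentFile("sample.bam", "rb") as bam:
    for read in bam:
        if read.is_secondary or read.is_supplementary:
            non_primary += 1
        else:
            primary += 1

print(f"primary: {primary}  non-primary: {non_primary}")
```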
biostars
{"uid": 331986, "view_count": 8670, "vote_count": 3}
Can anyone point me to a program [C/Python] that runs a Smith-Waterman pairwise local alignment but (unlike EMBOSS/water) has options for one or both of these:

1. Return the best N hits of the query sequence.
2. Extend the/each local alignment to produce the best full local alignment for the (much smaller) query sequence.
I like [exonerate](http://www.ebi.ac.uk/~guy/exonerate/), which can be run with the `affine:local` and `-n` options to get what you want. It is a nice tool that runs from the command line and does not need to format databases or further configuration:

    $ exonerate -q querysequence.txt -t targetsequences.txt -m affine:local -n 3   # n=3 to get the 3 best alignments
biostars
{"uid": 5131, "view_count": 3485, "vote_count": 1}
Briefly, I am trying to automate the process of aligning a query sequence (from FASTA) to an assembly graph (FASTG from Spades assembly). As output I need the sequences of the paths in the assembly graph corresponding to the alignment(s). More detail: I have used Spades to assemble the genome from a diploid yeast starting with short reads (WGS sequencing). Using the wonderful program [Bandage](https://rrwick.github.io/Bandage/) I am then able to BLAST a certain query sequence against the assembly graph (FASTG file). Because of the diploid nature of the genome, the result looks like this: ![image1-path](https://i.ibb.co/xjDfDyN/image1-path.png) There are two paths corresponding to this BLAST alignment. In Bandage I can select the nodes corresponding to a path, and then export that path's sequence to a FASTA file. Doing this manually gives me exactly what I want (essentially, haplotypes derived from the assembled genome). However, I would very much like to automate this process. What tools should I be looking into?
I didn't realize it when I posted the question, but Bandage has a command line mode, including a command `querypaths` that accomplishes precisely the task in question. First, find out where the Bandage executable lives. On a Mac, you select the Bandage application, do "Show Package Contents", and the executable is in `Contents/MacOS/Bandage`. I will just call this executable `Bandage`. Then you can run `Bandage querypaths assembly_graph.fastg query_sequence.fasta output_prefix` and it will produce `output_prefix.tsv` with exactly the paths desired! What a wonderful program. Note that it works with both `fastg` and `gfa` formats as input.
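To automate this over many query files, a small wrapper like the following could work (a hedged sketch of my own; the executable path, directory layout, and file names are placeholders, and it assumes the `querypaths` invocation described above):

```python
# Run Bandage querypaths for every query FASTA in a directory,
# writing one TSV of graph paths per query.
import subprocess
from pathlib import Path

bandage = "/Applications/Bandage.app/Contents/MacOS/Bandage"  # adjust to your install
graph = "assembly_graph.fastg"

for query in sorted(Path("queries").glob("*.fasta")):
    prefix = f"paths_{query.stem}"
    subprocess.run([bandage, "querypaths", graph, str(query), prefix], check=True)
    print(f"wrote {prefix}.tsv")
```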
biostars
{"uid": 377915, "view_count": 2714, "vote_count": 1}
Dear all, I have several FASTA nucleotide files containing millions of identifiers and their sequences. Among them, there are some identifiers like this: ">AT1G01340|AT1G01340.2 Sequence unavailable". Would you please let me know how I can remove them? Looking forward to your helpful commands. Thanks
I have modified the perl script, which should work for your case:

```
#!/usr/bin/perl
use strict;
use warnings;

$/ = "\n>";                              # read one FASTA record at a time
while (<>) {
    s/>//g;                              # strip the record separators
    my ($id, $seq) = split(/\n/, $_, 2); # header line vs. everything after it
    next unless defined $seq;
    print ">$_" if length($seq) > 10 && $seq !~ /Sequence unavailable/;
}
```

hth
biostars
{"uid": 127842, "view_count": 4600, "vote_count": 1}
**We have encountered a strange pattern in a BAM file generated from amplicon sequencing (Nextera XT, Illumina MiSeq).** As you can see, in the middle of the BAM file a rectangular-shaped coverage block is formed. Half of the reads end at the right side of the rectangle, while the other half end at the left side. What can cause such an abnormal coverage distribution? A structural variant?

![enter image description here][1]
![enter image description here][2]
![enter image description here][3]
![enter image description here][4]

[1]: http://i.imgur.com/Z2zFnj5.png
[2]: http://i.imgur.com/uMrYmHO.png
[3]: http://i.imgur.com/uAqsVEj.png
[4]: http://i.imgur.com/ZVarV4M.png
**We found the real cause of this pattern: it is a duplication of a ~200 bp region within that rectangular area.** We BLASTed the unmapped parts of the reads at the ends of the rectangular area and found that they match perfectly to a region inside this area.

![enter image description here][1]

[1]: http://i.imgur.com/u6ZjM9E.png
biostars
{"uid": 191876, "view_count": 2147, "vote_count": 4}
Hello, if I execute this in bcftools 1.10.2 (clinvar.vcf.gz is https://ftp.ncbi.nlm.nih.gov/pub/clinvar/vcf_GRCh37/clinvar.vcf.gz):

    bcftools query -i "CLNREVSTAT='criteria_provided,_conflicting_interpretations'" -f '%CLNREVSTAT\n' clinvar.vcf.gz | sort -u

I get these results:

    criteria_provided,_conflicting_interpretations
    criteria_provided,_multiple_submitters,_no_conflicts
    criteria_provided,_single_submitter

I was expecting to get only criteria_provided,_conflicting_interpretations. Is there something I don't understand? Many thanks
I suspect it's interpreted as an OR operator. Can you please try `"CLNREVSTAT='criteria_provided' && CLNREVSTAT='_conflicting_interpretations'"`?
biostars
{"uid": 435498, "view_count": 1027, "vote_count": 1}
Hi, I'm new to this area, so thanks a lot for any help in advance. I have some fastq files in which some records have additional quotes " " added around the quality score (at the beginning and the end), and I want to remove them. For example:

    @NGSNJ-086:647:GW2112051649th:1:1101:6506:1016 1:N:0:CTGAAGCT+ATAGCCTT
    AAACTAAGTCAATTCTAATACGACTCACTATAGGAGCTCAGCCTTCACTGCTTCTTAAAGATGCGCACACAACACTCTTTACGTATGTACCGGCACCACGGTCGGATCCTAGATCGGAAGAGCACACGTCTGAACTCCAGTCACCTGAAG
    +
    FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    @NGSNJ-086:647:GW2112051649th:1:1101:7428:1078 1:N:0:CTGAAGCT+ATAGCCTT
    AAACTAAGTCAATTCTAATACGACTCACTATAGGAGCTCAGCCTTCACTGCGACAAAATTGGCCATCTTTCCGACAAACAACATGCCCCACGGCACCACGGTCGGATCCTAGATCGGAAGAGCACACGTCTGAACTCCAGTCACCTGAAG
    +
    "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF,FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"

So I want to remove the " " in the last line. Is there an efficient way to do this? Thanks a lot
$ sed '0~4 s/"//g' test.fq    # GNU sed: on every 4th line (the quality line), delete all double quotes
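If GNU sed is not available, here is a hedged pure-Python alternative of my own (file names are placeholders; it assumes well-formed 4-line FASTQ records in plain text):

```python
# Strip double quotes from every quality line (the 4th line of each record)
# and leave all other lines untouched.
with open("test.fq") as fin, open("fixed.fq", "w") as fout:
    for i, line in enumerate(fin, start=1):
        if i % 4 == 0:                    # quality line
            line = line.replace('"', "")
        fout.write(line)
```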
biostars
{"uid": 9508793, "view_count": 921, "vote_count": 1}
I'm trying to understand the formats of different SNP callers. I have made test bam files with a subset of two stickleback WGS samples. I made a vcf using VarScan's command `mpileup2snp`. Here's an example of a single SNP in the vcf:

    chrI 1789 . T G . PASS ADP=105;WT=1;HET=1;HOM=0;NC=0 GT:GQ:SDP:DP:RD:AD:FREQ:PVAL:RBQ:ABQ:RDF:RDR:ADF:ADR 0/0:66:108:108:87:18:17.14%:1.7211E-6:63:74:64:23:15:3 0/1:68:102:102:73:21:22.34%:1.3568E-7:60:56:48:25:19:2

I can understand most of the scores in the `#FORMAT` field, however I'm confused how DP is calculated. The reference and alternative allele depths are given (RD & AD), but the sum of the two doesn't equal DP. Using the example above:

Sample 1:

- DP = 108
- AD = 18
- RD = 87
- AD + RD = 105

Sample 2:

- DP = 102
- AD = 21
- RD = 73
- AD + RD = 94

Does anyone have any ideas why the reported DP doesn't equal AD + RD?
Different variant callers have different rules regarding thresholds for counting reads. Additionally, these criteria may vary for different fields. VarScan offers an [explanation on their website][1]: > VarScan requires that bases meet the minimum Phred quality score > (default 15 for most commands) to count them for things like read > counts (reads1, reads2) and to compute variant allele frequency. > However, when VarScan reports the depth (such as in the DP field of > VCF output), it reports SAMtools raw depth. To get VarScan read counts > to more closely match another tool, set use parameter --min-avg-qual > 0. And use caution! Low-quality bases, with the occasional exception of BAQ penalties, should not be trusted. > > Also, VarScan reports variants on a biallelic basis. That is, for a > given SNP call, the "reads1" column is the number of > reference-supporting reads (RD), and the "reads2" column is the number > of variant-supporting reads (AD). There may be additional reads at > that position showing other bases (SNP or indel variants). If these > other variants meet the calling criteria, they will be reported in > their own line. If not, it may look like you have "missing" reads. [1]: http://varscan.sourceforge.net/support-faq.html
biostars
{"uid": 360300, "view_count": 1778, "vote_count": 1}
Hi, this question is somehow related to [this previous question](http://biostar.stackexchange.com/questions/1681). I'm playing with the **C API for BAM** and I wrote the following code: https://gist.github.com/736059

I'm now trying to find the indexes of the genomic bases covered by the **CIGAR** string (my final goal is to create a **WIG** file containing the coverage of the genome).

What would be the correct way to change my code to get the genomic indexes of the bases covered by each CIGAR element?

    (...)
    for( k=0;k< b->core.n_cigar;++k) {
        int cop =cigar[k] & BAM_CIGAR_MASK; // operation
        int cl = cigar[k] >> BAM_CIGAR_SHIFT; // length
        switch(cop) {
            case BAM_CMATCH: printf("M");break;
            case BAM_CINS: printf("I");break;
            case BAM_CDEL: printf("D");break;
            case BAM_CREF_SKIP: printf("N"); break;
            case BAM_CSOFT_CLIP: printf("S");break;
            case BAM_CHARD_CLIP: printf("R");break;
            case BAM_CPAD: printf("P");break;
            default:printf("?");break;
        }
        printf("%d",cl);
    }
    (...)

Thanks
For example:

```
int depth[REF_LEN] = {0};   /* coverage per reference position; zero-initialise */
int x, j, k;
uint32_t *cigar = bam1_cigar(b);
for (k = 0, x = b->core.pos; k < b->core.n_cigar; ++k) {
    int op = cigar[k] & BAM_CIGAR_MASK;    /* operation (low 4 bits)  */
    int l  = cigar[k] >> BAM_CIGAR_SHIFT;  /* operation length        */
    if (op == BAM_CMATCH) {                /* M consumes reference: add coverage */
        for (j = x; j < x + l; ++j) ++depth[j];
        x += l;
    } else if (op == BAM_CREF_SKIP || op == BAM_CDEL) {
        x += l;                            /* N and D consume reference, no coverage */
    }
}
```

Something like this...
biostars
{"uid": 4211, "view_count": 5325, "vote_count": 4}
Hi,

Could you provide a fast way to filter this data:

    chr1 43 1000 gene_name=boby gene_type=trucA
    chr2 44 1000 gene_name=natt gene_type=trucB
    chr3 45 1000 gene_name=alurika gene_type=trucC

to:

    chr1 43 1000 boby trucA
    chr1 44 1000 natt trucB
    chr1 45 1000 alurika trucC

CORRECTION: the original text data looks like this:

    chr1 43 1000 TEST gene_name=boby;gene_type=trucA;foo=34
    chr2 44 1000 TRUC gene_name=natt;gene_type=trucB;foo=34
    chr3 45 1000 PASS gene_name=alurika;gene_type=trucC;foo=34
sed -e 's/gene_name=//g' -e 's/gene_type=//g' file > file2
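For the corrected input format (a fifth column of semicolon-separated key=value pairs), something like the following Python sketch might be closer to what is wanted. It is my own illustration, the file names are placeholders, and it assumes the desired output keeps the first three columns plus the gene_name and gene_type values:

```python
# Parse the key=value annotation column and emit chrom, start, end, name, type.
with open("file") as fin, open("file2", "w") as fout:
    for line in fin:
        fields = line.split()
        annots = dict(kv.split("=", 1) for kv in fields[4].split(";") if "=" in kv)
        out = fields[:3] + [annots.get("gene_name", "."), annots.get("gene_type", ".")]
        fout.write("\t".join(out) + "\n")
```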
biostars
{"uid": 179260, "view_count": 1732, "vote_count": 1}
I would like to "mimic" the online BLASTn wgs search (NOT nt, this works already) on my local machine. The reason is that I want to extend from the BLAST nt search to a broader range of bacterial strains. It seems like there is no BLAST wgs database to download. How do I determine which genomes I have to download to catch the genomes in wgs. I only need it for bacteria for a start. An alternative would be to not use the entire wgs but some method to at least "broaden" the search compared to BLAST nt.
Did you take a look at [this README][1] file NCBI provides for instructions on how to deal with WGS data with `blast+`? Scripts you need are in [this directory][2]. Bacteria are `taxid 2`. [1]: ftp://ftp.ncbi.nih.gov/blast/WGS_TOOLS/README_BLASTWGS.txt [2]: ftp://ftp.ncbi.nih.gov/blast/WGS_TOOLS/
biostars
{"uid": 377840, "view_count": 2628, "vote_count": 1}
Hi Biostars Leaders,

Freebayes (version v1.0.1-1-g683b3cc-dirty) defines AF as Description="Estimated allele frequency in the range (0,1]", but the values are always either 0.5 or 1.0, and they are not actual observed frequencies. I have observed the same with GATK's HaplotypeCaller, and I have custom-calculated the actual frequencies from the Ref & Alt alleles, i.e. from the AD field. Freebayes does not spit out the AD field, but it has the following fields which I think I can use:

    RO = "Reference allele observation count, with partial observations recorded fractionally"
    AO = "Alternate allele observations, with partial observations recorded fractionally"

I am wondering if there is any advice on how to calculate actual allele frequencies for Freebayes?

thanks, gsr
I got the answer to my own question. I just installed a newer version of freebayes (v1.1.0-3-g961e5f3-dirty), which does spit out the AD field ("Number of observation for each allele"). I then used the AD field to calculate the actual allele frequencies in a human sample (NA12878).
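The calculation itself is just the fraction of observations supporting the alternate allele. Here is a tiny illustrative sketch of my own (not freebayes code) using the RO/AO counts described in the question, or equivalently the two values of AD:

```python
def observed_af(ref_obs: float, alt_obs: float) -> float:
    """Observed alternate-allele frequency from reference/alternate observation counts."""
    total = ref_obs + alt_obs
    return alt_obs / total if total > 0 else 0.0

print(observed_af(30, 10))  # 0.25
```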
biostars
{"uid": 243073, "view_count": 4427, "vote_count": 5}
I have a few VCF files which contain chromosome number, position, and variation alleles (Ref and Alt). How can I generate the gene name, its RefSeq ID, and a list of all of its transcripts?

1. Is there any tool which gives these kinds of results when you upload VCF data?
2. Is there any module in Biopython or Bioconductor, or a script, for this?
3. Is there any list that links gene names to RefSeq and transcript information?

Thank you for the valuable suggestions
You need the [variant effect predictor][1], does exactly what you need. [1]: http://www.ensembl.org/info/docs/tools/vep/index.html
biostars
{"uid": 179619, "view_count": 2937, "vote_count": 1}
For the drug [Fluvastatin][1], [Fluvastatin [sodium salt]][2] (CAS-93957-55-2) is usually used for testing the potential of Fluvastatin. They have different structures, though the difference is slight. In [Drugbank][3], the Fluvastatin structure is at the beginning, while the salt form is in a later section of the webpage. It seems the active form in the cell should be the Fluvastatin ion. So which structure should be used when considering the structure of this drug in chemoinformatics? Thank you.

Update: another example. For [**tamoxifen**][4], one of its synonyms shown in guidetopharmacology is tamoxifen citrate. In PubChem, they are different (CID [2733526][5] and [2733525][6]), with the latter including a citrate. At least in the Connectivity Map, tamoxifen citrate [54965-24-1] (as a catalog name) was used instead of tamoxifen (as a cmap_name) [[ref1][7]]. I think they are just different, resulting in different effects on cells. In this situation, should I use the structure of tamoxifen to discover something in the Connectivity Map? It's a little odd. I'm wondering why they use tamoxifen citrate instead of tamoxifen, but name it as tamoxifen. Is it common for drug testing in chemistry?

[1]: https://pubchem.ncbi.nlm.nih.gov/compound/1548972#section=Top
[2]: https://pubchem.ncbi.nlm.nih.gov/compound/23663976#section=Top
[3]: http://www.drugbank.ca/drugs/DB01095
[4]: http://www.guidetopharmacology.org/GRAC/LigandDisplayForward?ligandId=1016
[5]: https://pubchem.ncbi.nlm.nih.gov/compound/2733526#section=Top
[6]: https://pubchem.ncbi.nlm.nih.gov/compound/2733525#section=Top
[7]: http://www.connectivitymap.org/cmap/cmap_instances_02.xls
You have chosen a tricky one because it has salts, racemates, enantiomers and virtual deuteration; this adds up to 113 different representations: http://www.ncbi.nlm.nih.gov/pccompound?cmd=Link&LinkName=pccompound_pccompound_parent_connectivity_pulldown&from_uid=1548972

The choice depends on exactly what cheminformatic operations you intend to perform, but stripping salts back to parent structures is usually advisable. To keep life simple, just go with this GtoPdb entry: http://www.guidetopharmacology.org/GRAC/LigandDisplayForward?ligandId=2951
biostars
{"uid": 148233, "view_count": 2123, "vote_count": 1}
Where can I get files of the hg19 exon, intron, and UTR regions? I read lots of posts on Biostars, but they were posted a long time ago.
using **bioalcidaejdk** : http://lindenb.github.io/jvarkit/BioAlcidaeJdk.html ``` $ wget -q -O - "ftp://ftp.ensemblgenomes.org/pub/release-45/metazoa/gtf/apis_mellifera/Apis_mellifera.Amel_4.5.45.gtf.gz" |\ gunzip -c |\ java -jar dist/bioalcidaejdk.jar -F GTF -f biostar.code (...) 6 4682136 4682216 + GB52198-RA.Intron16 6 4682387 4682473 + GB52198-RA.Intron17 6 4682696 4682760 + GB52198-RA.Intron18 6 4682905 4682967 + GB52198-RA.Intron19 6 4676837 4677042 + 5' UTR of GB52198-RA 6 4683076 4683853 + 3' UTR of GB52198-RA 6 4691339 4691384 + GB52199-RA.Exon1 6 4692448 4692491 + GB52199-RA.Exon2 6 4693914 4694249 + GB52199-RA.Exon3 6 4691384 4692448 + GB52199-RA.Intron1 6 4692491 4693914 + GB52199-RA.Intron2 6 4691339 4691339 + 5' UTR of GB52199-RA (...) ``` with biostar.code: ``` stream(). flatMap(GENE->GENE.getTranscripts().stream()). flatMap(TRANSCRIPT->{ final List<Interval> L = new ArrayList<>(); TRANSCRIPT.getExons().stream().forEach(E->L.add(E.toInterval())); TRANSCRIPT.getIntrons().stream().forEach(I->L.add(I.toInterval())); TRANSCRIPT.getUTRs().stream().forEach(U->L.add(U.toInterval())); return L.stream(); }).forEach(R->println(R.getContig()+"\t"+(R.getStart()-1)+"\t"+R.getEnd()+"\t"+R.getStrand()+"\t"+R.getName())); ```
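As a rough, self-contained illustration of the same idea in plain Python (this is my own sketch, not part of bioalcidaejdk; the coordinates are made up): given a transcript's exons as sorted (start, end) intervals, the introns are simply the gaps between consecutive exons.

```python
def introns_from_exons(exons):
    """exons: list of (start, end) tuples sorted by position, inclusive coordinates."""
    return [(prev_end + 1, next_start - 1)
            for (_, prev_end), (next_start, _) in zip(exons, exons[1:])]

exons = [(100, 200), (300, 400), (500, 650)]
print(introns_from_exons(exons))   # [(201, 299), (401, 499)]
```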
biostars
{"uid": 400463, "view_count": 1716, "vote_count": 1}
I am trying to pipe the output from BWA to sambamba to sort and index the sam files. I have 20 files with reads from sequencing (paired-end) and want to end up with only the resulting bam file (not the intermediate sam or bam files). This is the code I have at the minute:

    for filename in ./seqtk_1/subsample_1/*_1.fq.gz; do
        file=`echo $filename|sed 's/_1.fq.gz//'`;
        filenopath=`basename $file`;
        outputpath=./BWA/seqtk_1/subsample_1;
        bwa mem -v 3 ./combine_reference.fa.gz ${file}_1.fq.gz ${file}_2.fq.gz > ${outputpath}/align_${filenopath}_BWA.sam | sambamba view -S -f bam - > ${outputpath}/align_${filenopath}_BWA.bam | sambamba sort -o - > ${outputpath}/sorted_${filenopath}_BWA.bam | sambamba index - > {outputpath}/indexed_${filenopath}_BWA.bam;
    done

This is the output:

    -bash: {outputpath}/indexed_sub_NC_001539_BWA.bam: No such file or directory
    sambamba-view: Unrecognized option -
    sambamba-sort: Cannot open or create file '' : No such file or directory
    [M::bwa_idx_load_from_disk] read 0 ALT contigs
    [M::process] read 100000 sequences (10000000 bp)...
    [M::process] read 100000 sequences (10000000 bp)...

That continues through the rest of the files. I get a sam file and a sorted_${filenopath}_BWA.bam file, but the bam file isn't populated. My thinking is that the code isn't executed linearly and it is trying to create files that can't be created because BWA hasn't started running yet. Is there a way to fix this? Or do I just need to run BWA and sambamba separately? I don't want to keep these sam files because the size is too large. Thanks in advance
I don't know sambamba, so I don't know if you can use it to read from stdin, but I'll assume it does. Your concept of pipes is wrong. Normally you would do the following:

    command1 input > output1
    command2 output1 > output2

With pipes you would do:

    command1 input | command2 > output2

So you don't send the output of bwa to a sam file, but directly to sambamba.
biostars
{"uid": 259398, "view_count": 5643, "vote_count": 1}
In short, my question is:

1. What is the running time of building an FM-index? Is it linear in the length of the reference genome?

I was told that the FM-index can be built in linear time, i.e. O(n), where n is the length of the reference genome, but I actually cannot find the paper describing it. The mostly cited paper, "Indexing Compressed Text" by Ferragina et al. 2004, focuses on the analysis of the memory footprint and query time, but not on the running time of building the FM-index.
In 2003, two (or maybe three) conference papers first showed that a suffix array can be constructed in linear time. As generating the BWT from a suffix array takes linear time, they also proved that the FM-index can be constructed in linear time. One of the most influential linear-time algorithms, SA-IS, was invented by Nong et al (2008). They provided a ~100-line implementation in C. It is much simpler and faster than the previous works. Yuta Mori optimized Nong et al's implementation into the sais library. A few years ago (when I was still following the literature), that library was the fastest linear-time implementation in practical benchmarks. However, at least at that time, libdivsufsort, an O(n log n) algorithm also developed by Yuta Mori, was widely believed to be the fastest open-source library to construct suffix arrays, faster than sais. Linear-time algorithms are not necessarily faster: if a linear algorithm comes with a large constant and lots of cache misses, an O(n log n) algorithm can be faster in practice, just as Rob said.
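To illustrate the relationship between the suffix array and the BWT mentioned above, here is a deliberately naive sketch of my own (roughly O(n^2 log n) because of the suffix sort; real tools use SA-IS or libdivsufsort as discussed):

```python
def suffix_array(text: str) -> list:
    """Indices of all suffixes of text + '$', sorted lexicographically (naive)."""
    text += "$"                              # unique sentinel, smallest character
    return sorted(range(len(text)), key=lambda i: text[i:])

def bwt_from_sa(text: str, sa: list) -> str:
    """Once the suffix array exists, the BWT is a linear-time scan."""
    text += "$"
    return "".join(text[i - 1] for i in sa)  # character preceding each suffix

s = "ACAACG"
sa = suffix_array(s)
print(sa)                  # [6, 2, 0, 3, 1, 4, 5]
print(bwt_from_sa(s, sa))  # GC$AAAC
```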
biostars
{"uid": 206931, "view_count": 2497, "vote_count": 3}
Previously, I split out a vcf file by chromosome, and for my project, I have combined the X and XY vcf files into a single one. After changing the "XY" chromosome designation to "X" via:

    awk '{gsub(/"XY"/, "X"); print;}' Genome_newX.vcf > Genome_newX2.vcf

I'm running into the issue of sorting this new "Genome_newX2.vcf" by position. The idea is that I'll subsequently run the vcf through GenotypeHarmonizer. Are there any suggestions on how to do this easily? I'm brand new to this style of work, and I'd love some direction on where to read up on it as well. Thank you!
Would there be an equivalent for a BCF, i.e. `bcftools view | [...]` code? Or why not use `bcftools sort -Oz output.bcf -o output_sort.vcf.gz`?
biostars
{"uid": 299659, "view_count": 23526, "vote_count": 2}
After running picard ValidateSamFile I get errors for all reads like the one below - "NM" tags are missing. WARNING: Read name SRR6251016.24364087_TGTTATGAGA, A record is missing a read group WARNING: Record 1, Read name SRR6251016.24364087_TGTTATGAGA, NM tag (nucleotide differences) is missing I am using bam files produced by a STAR mapping pipeline which have "nM" tags as shown below. These are identical in function to NM tags but are alternatively named. SRR6251016.24364087_TGTTATGAGA 99 chr1 3043025 255 70M = 3043191 236 AGAAAATTGGACATAGTACTACCGGAGGATCCAGCAATACCTCTCCTGGGCATATATCCAGAAGATGCCC EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEAEE<<EEEEEEEEEEEEEEAAEEEE NH:i:1 HI:i:1 AS:i:136 nM:i:1 Does anyone know how to set Picard to recognise these tags (which are of the same format) as I need to run Picard MarkDuplicates next in my analysis?
If STAR's `nM` was actually identical in function to the standard `NM`, one might ask why on earth they were making life hard for everyone by using a different tag name. But in fact it is not: > nM : is the number of mismatches per (paired) alignment, not to be confused with NM, which is the number of mismatches in each mate. Look at STAR's `--outSAMattributes` option, which can be used to also output `NM`. One might ask the STAR developers why `NM` and other tags desired by Picard's “typical usage” validation are not in STAR's standard set of attributes…
biostars
{"uid": 405236, "view_count": 798, "vote_count": 4}
Hi,

Assume a table as below:

    X =
         col1 col2 col3
    row1 "A"  "0"  "1"
    row2 "B"  "2"  "NA"
    row3 "C"  "1"  "2"

I select combinations of two rows, using the code below:

    pair <- apply(X, 2, combn, m=2)

This returns a matrix of the form:

    pair =
         [,1] [,2] [,3]
    [1,] "A"  "0"  "1"
    [2,] "B"  "2"  NA
    [3,] "A"  "0"  "1"
    [4,] "C"  "1"  "2"
    [5,] "B"  "2"  NA
    [6,] "C"  "1"  "2"

I wish to iterate over pair, taking two rows at a time, i.e. first isolate `[1,]` and `[2,]`, then `[3,]` and `[4,]`, and finally `[5,]` and `[6,]`. These rows will then be passed as arguments to regression models, i.e. `lm(Y ~ row[i]*row[j])`. I am dealing with a large dataset. Can anybody advise how to iterate over a matrix two rows at a time, assign those rows to variables and pass them as arguments to a function?

Thanks, S ;-)

**Edit:** In response to the comments, I should specify that my problem concerns SNP and expression data where I aim to do a pairwise multiple regression analysis (first-order regression) in order to assess any possible SNP-SNP interactions that may affect the expression phenotype.
Using `mapply` you can map a function over more than one sequence (list or vector) in one pass, e.g. the odd and even rows of a matrix:

    fn <- function(i, j) paste("i =", i, "j =", j)
    odds <- seq(1, 9, by = 2)
    evens <- odds + 1
    mapply(fn, odds, evens)
biostars
{"uid": 3694, "view_count": 21433, "vote_count": 1}
I have two genotype tables, one for parents generated from snp arrays following hg18 coordinates, and the other for their offspring generated from ngs following hg19 coordinates. I finished liftover from hg18 to hg19 for parental data (refer to [this post](https://www.biostars.org/p/93399/)) and I got a new list of coordinates, but I found I couldn't merge the parental genotype with the new hg19 coordinates because I don't know how to pair the new coordinates with the old ones. I wonder if there is a way to get their matching relationship. Any idea? Thanks.
Thanks, Ashutosh. Problem solved. I changed the 4th column to unique IDs, eg 1.rs#, 2.rs#, 3.rs#. Then my worries are gone.
biostars
{"uid": 109527, "view_count": 3323, "vote_count": 1}
Dear All,

I wonder if R has stopped supporting the "makeTxDbFromUCSC" function? I tried many times but keep failing with the error message below:

    > txDB <- makeTxDbFromUCSC(genome = "hg19", tablename = "ensGene")
    Error in makeTxDbFromUCSC(genome = "hg19", tablename = "ensGene") :
      could not find function "makeTxDbFromUCSC"

Any input is appreciated. Thanks, Xiao
Did you load the library?

    library(GenomicFeatures)

See possible solutions at SO:

- [Error: could not find function … in R](https://stackoverflow.com/q/7027288/680068)
biostars
{"uid": 490928, "view_count": 537, "vote_count": 1}
Hi all,

I'm currently working with some GWAS data after imputation using IMPUTE. However, we are having problems running SNPTEST, and it has been suggested to convert the GEN and sample files from IMPUTE to PLINK BED/PED files, then run the association analysis using PLINK, to double check.

Using GTOOL, I have been able to convert the GEN and sample files to PED and MAP files, but when I run the logistic regression, this error occurred: ERROR: Locus rs has >2 alleles: individual 1 has genotype [ A G ] but we've already seen [ N ] and [ A ]. I have tried to remove the relevant individual and snp, but still get the same message.

The code used for the above message in PLINK was --file test_chr01 --logistic --....--out..

Trying to figure out why before disturbing my supervisor again. Any advice is appreciated. Many thanks.

Guan
I had the same problem before and one of my group members helped me out.

The error is related to the missing data: GTOOL makes the PED file with 'N' as the missing genotype, which plink doesn't recognize as missing because plink expects missing genotypes to show '-9'. Plink can understand the 'N' for missing using the following command-line parameter:

    --missing-genotype N

Add this when converting GEN and sample files to PED/MAP.
biostars
{"uid": 10047, "view_count": 5739, "vote_count": 1}
Hi, I need phyloP scores for the whole human genome. I was able to download 10M positions from http://genome.ucsc.edu/cgi-bin/hgTables using these instructions:

    group: Comparative Genomics
    track: Conservation
    table: your phyloP table of choice
    region: genome
    output format: data points

However I need the whole-genome information, not just 10M positions. The website directs me to the downloads page, but I got lost there, unfortunately. Can anyone please link me to the correct place? (Human GRCh37 phyloP values in data-point format)

Thanks!!
https://www.biostars.org/p/86847/#86892 https://www.biostars.org/p/242484/#242486
biostars
{"uid": 354911, "view_count": 3597, "vote_count": 1}
I have gone through filtering variants with `VQSR` and `hard-filtering` from [here][1]. My understanding about `VQSR` is that we don't want to combine `SNP` and `INDEL`, whereas they are combined in hard-filtering.

1. In VQSR, we run `VariantRecalibrator` with mode `SNP` and mode `INDEL`, and we get `.recal` files for both snps and indels.
2. Next, we apply `ApplyVQSR` with mode `INDEL` and the `indels.recal` file to generate `indel.recalibrated.vcf`.
3. In the next step, we apply `ApplyVQSR` for `SNP`, with the vcf input being `indel.recalibrated.vcf` and the .recal file generated from `VariantRecalibrator` with mode `SNP`.
4. This step generates the file `snp.recalibrated.vcf.gz`, which will contain both `SNP` and `INDEL` filtered, and will be the final filtered data from `VQSR`.

Is my understanding of variant filtering correct here? If this is right, how do we deal with the `MIXED` type?

[1]: https://gatk.broadinstitute.org/hc/en-us/articles/360035531112
It is better to split the multi-allelic site to bi-allelic sites: use `bcftools` bcftools norm \ -m-any \ --check-ref -w -f /path/to/reference/hg38.fasta \ input.vcf -o output.vcf
biostars
{"uid": 484727, "view_count": 1045, "vote_count": 9}
Hi all, What I need to do is filter a file produced using non-stringent Variant Effect Predictor (VEP) settings with one that was produced with more stringent VEP settings. I've been running VEP locally using the cache option with a pre-built cache with this command on my vcfs: ``` perl $VEP \ --cache \ --dir $VEP_DIR \ --offline \ --input_file $input \ --output_file $output \ --sift b \ --polyphen b \ --regulatory \ --protein \ --symbol \ --ccds \ --uniprot \ --check_existing \ --gmaf \ --maf_1kg \ --maf_esp \ --pubmed ``` Everything works great and I'm super happy with the documentation. However, I realized after I had run my command on all my exomes that I would most likely get many entries for each particular variant depending on different Ensembl Feature IDs. VEP has a fix for this, which is to use the `--most_severe` flag when running the command. That works perfectly, however, some extra flags are disabled when using the `--most_severe` flag. I would like to retain this extra information (like gene name/symbol Feature,Consequence, etc.) for the variants produced with the `--most_severe` flag. ``` perl $VEP \ --cache \ --dir $VEP_DIR \ --offline \ --input_file $input \ --output_file $output \ --regulatory \ --uniprot \ --check_existing \ --gmaf \ --maf_1kg \ --maf_esp \ --most_severe ``` So now I have two files for each vcf; 1) disabled `--most_severe` and 2) `--most_severe`. The 2nd file is basically a subset of the 1st file but with some important missing information. In the 1st file when there are multiple entries for a variant, most of the fields are the same except the `Feature_type` field and often the `Extra` field. Both produce a tab delimited text file with columns such as this: #Uploaded_variation Location Allele Gene Feature Feature_type Consequence cDNA_position CDS_position Protein_position Amino_acids Codons Existing_variation Extra Is there a way to filter the 1st file with the 2nd file. I think I need to use fields `Uploaded_variation` and `Consequence` for matching the 1st file because those are the fields that are unique in the line. I think using awk to search for columns in both files won't work because there is some information lost in the Consequence field in the 2nd file For example a variant Consequence may change from: `non_coding_transcript_exon_variant,non_coding_transcript_variant` to `non_coding_transcript_exon_variant` I appreciate any help in solving this issue. Alternatively there is a `filter_vep` script provided by VEP for post-VEP annotation filtering but I don't think there is an option here that will solve my problem. Thanks, Tesa
You will encounter problems whichever way you try to combine these two files, I'm afraid. For example, let's say your small (`--most_severe` on) file has a line with a consequence of `missense_variant`. Then in your large file, there are three corresponding lines of output for that variant with a consequence of `missense_variant` - how do you decide which to choose? Also, the consequence type picked by `--most_severe` may be calculated relative to an Ensembl feature that does not have reliable biological evidence to support it - do you still want to choose this one over any others? Is it practical for you to re-run your analyses? There are a couple of other options that you might find useful:

a) `--pick`: this chooses one line of consequence data (with all the fields retained) for each variant. It uses the following criteria to pick one: 1) is the transcript canonical 2) is the transcript biotype protein_coding 3) consequence rank 4) transcript length. In the forthcoming version of VEP you will be able to customise this order. http://www.ensembl.org/info/docs/tools/vep/script/vep_options.html#opt_pick

`--pick_allele` chooses one line per variant allele (i.e. this will come into effect when the input variant has more than one alternate allele)

b) `--flag_pick`: like `--pick` but just adds a flag to the line chosen by the same rules

c) `--per_gene`: like `--pick` but chooses one line per variant/gene combination

As a footnote, we always try to discourage people from using these summary flags if we can - there will always be cases where valuable data gets lost, and you are relying on an arbitrary and subjective algorithm to perform that summarising. The logic of this algorithm will always be wrong for some use case no matter how we code it. Thus by keeping all the data you are ensuring you don't miss anything.
biostars
{"uid": 120055, "view_count": 5582, "vote_count": 2}
I often have ChIP-seq experiments where I want to get a feel for read coverage over the predicted binding regions. Looking through the UCSC genome browser region by region is slow and laborious. What I am looking for is a way of plotting read coverage (from bedGraph, Wig etc.) for many different binding regions and presenting many of these plots on one page.

I envisage the input would be a bedGraph/Wig file and a list of binding region coordinates. I am aware of a previous [Biostars thread](http://www.biostars.org/p/6132/#6138) that kind of covers this, but it uses UCSC/IGV. I am looking for something much more simplistic - just a line graph per region. Even more, it would be great to be able to plot ChIP and input read coverage on the same graph.

I wonder whether some Python guru etc. has already tackled this?

Many thanks!
I would like to update my previous answer:

We have made available a suite of tools called [deepTools](http://deeptools.github.io) to make this sort of visualization very easy. There is a tool called profiler that will plot exactly what you want. You provide a list of regions (BED or GFF format) and a bigWig file, and the output is a profile. [Here](https://github.com/fidelram/deepTools/wiki/Visualizations) you can find the documentation on how to run the tool. You can easily convert any wig/bedgraph file to a bigWig using the tools from UCSC that can be downloaded here: http://hgdownload.cse.ucsc.edu/admin/exe/

The tool uses bigWig files to parallelize computations.

![enter image description here](https://raw.github.com/fidelram/deepTools/master/examples/flowChart_computeMatrixetc.png)
biostars
{"uid": 62121, "view_count": 10892, "vote_count": 4}
Hello, is it possible to parse the read entries in an (unaligned) BAM file one by one, without requiring huge amounts of memory?

I've tried `samtools view <bamfile> | <python script>` as well as using the `pysam.AlignmentFile()` parser from inside the script, but both solutions on our cluster end up taking over 60GB of RAM for a 6GB BAM. I do believe we have nodes that can handle a lot more RAM than that, but I'm still annoyed by requirements that wouldn't run on a laptop if needed.

I've briefly tried to look around, but nobody seems to be asking this question with regards to simply parsing a BAM. Most memory-related topics for samtools seem to revolve around sorting.

So, is there a more resource-efficient way to parse BAMs progressively, or does the whole thing need to be decompressed into memory first (presumably that's what's happening) before the entries can be accessed sequentially?

Thanks!
It turns out that @i.sudbery was indeed correct. The memory-greedy bit was a dumb design choice in my script. I was storing information from each bam entry, only to count up the occurrences at the end. With 200000000 entries, the little bits of stored info added up... My goal is much more efficiently achieved by starting all the counters right from the beginning and updating them entry by entry. This is really basic stuff, I'm not even sure why I didn't do it that way from the beginning.
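A minimal streaming sketch of the fix described above, in my own words (the file name and the counted categories are just placeholders; it assumes pysam and an unaligned BAM): update counters as each record is read instead of storing anything per read, so memory stays constant regardless of BAM size.

```python
import collections
import pysam

counts = collections.Counter()
# check_sq=False allows opening BAMs without @SQ header lines (unaligned reads)
with pysam.AlignmentFile("reads.bam", "rb", check_sq=False) as bam:
    for read in bam:
        counts["total"] += 1
        if read.is_unmapped:
            counts["unmapped"] += 1

print(dict(counts))
```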
biostars
{"uid": 382923, "view_count": 2056, "vote_count": 1}
Hi, I've a doubt for which I have no answer. The fourth field of the TCGA barcode contains the information about the sample: either primary tumor, metastatic, etc. For instance, in `TCGA-93-A4JP-01A-11H-A24S-13` this is a primary tumor, but what does the A next to the 01 mean? Sometimes I've seen letters other than A, such as C, E... Thanks
**Edit April 6th, 2020:** Since I posted this answer, this page seems to have come online: [TCGA Barcode](https://docs.gdc.cancer.gov/Encyclopedia/pages/TCGA_Barcode/)

----------

# TCGA Barcode

![barcode](https://image.ibb.co/bSkaW7/barcode.png)

![TCGA_barcode](https://preview.ibb.co/g8g4PS/TCGA_barcode.jpg)

# Code tables

Go here for the code tables for further in-depth details:

- https://gdc.cancer.gov/resources-tcga-users/tcga-code-tables

Kevin
biostars
{"uid": 313063, "view_count": 8703, "vote_count": 1}
Dear all, I am a rookie in data analysis and I am stuck with my results; I don't know how to interpret them. I started with 7 metagenomic assemblies of different species of Azolla fern. The aim was to identify bacteria in the leaf ecosystem of the different Azolla species. Our hypothesis was: if there are similar bacteria which repeat within the different Azolla species, they will cluster together when their genomes are plotted in a dendrogram or a tree. The method: `spades` was used to get assemblies, `BWA` was used to do backmapping, `samtools` for sorting, `metabat` for binning, and checkm to see the completeness and contamination of bins. Then `prokka` was used to annotate the genomes, UniProt IDs were obtained, and a table was made of all UniProt IDs of all the bins. The table was changed to a binary table and then used to create a dendrogram in R, and a tree was then made from the dendrogram in FigTree. In the tree I observed that the bacteria are clustering according to the metagenomic sample or plant host, not on the basis of their taxonomic name; e.g. Rhizobiales is clustering with Burkholderiales of the same metagenomic assembly, but not with Rhizobiales of another host plant assembly. I am at a dead end on how to interpret these results and what I can deduce from them. Are there other ways to improve my approach? Can I compare similar taxonomic bins of different metagenomic assemblies directly? Any suggestions will be valuable. Kind regards, manpy, student, Utrecht University, Holland
An alternative approach might be to calculate `mash` distances between all your genomes directly. Assuming your taxonomic assignments are largely correct, this would save you having to mess about with extracting the uniprot IDs, binary matrix etc. Since you can’t align that many full genomes, mash distances are a good surrogate for genome similarity. You can then draw your trees using pairwise mash distances among all your genomes. I’m not 100% sure how much you’ll need to polish your genomes etc. You may need to reorder contigs and perhaps concatenate but if you read up on it I’m sure it’ll become clear.
biostars
{"uid": 342058, "view_count": 987, "vote_count": 1}
Hi everyone! I am trying to find the best way to make 2 boxplots for a specific gene, from data found in a row for a subset of columns within data frame x. x has dimensions 634 rows by 128 columns. Each row is specific to a gene, and column 1 has the gene name. I want to say: look at the gene in row #1; the data in columns 2:48 I want to include in one boxplot, and the data in columns 49:128 I want to include in another boxplot. The data frame looks something like this:

```
  gene  accepted_hits_x1.bam accepted_hits_x1.bam etc....
1 AARS1 -6                   0                    etc....
```

I also want to be able to see each data point that makes up the boxplot plotted in the plot. I am running into the problem where my data (residuals from the mean, meaning x value - mean) is a series of positive and negative values, and it appears that this plot is excluding the negative values...

```r
data <- unlist(subset(datavr, gene =="IGF1R", select=2:128))
news <- data.frame(data=data, factor=c(rep(1,47), rep(2,80)))
news$data <- (log10(as.numeric(news$data)) + 1)
g <- ggplot(data=news, aes(x=as.factor(factor), y=data))
g + geom_boxplot() + geom_point(color="purple", size=3) + xlab("A38-41 A38-5 ") + ylab("log10(Residual from Mean)+1") + ggtitle("IGF1R inside region") + theme(plot.title = element_text(face="bold"))
```

The problem is that it keeps giving me an error saying:

```r
Removed 110 rows containing missing values (geom_point)
```

Could this be because these values are negative, so taking log10(value)+1 fails for them?
```r gene_id <- 1 # consider the first gene data_1 <- your_dataframe[gene_id,2:48] data_2 <- your_dataframe[gene_id,49:128] boxplot(data_1,data_2) ```
biostars
{"uid": 145153, "view_count": 8143, "vote_count": 1}
I downloaded mapped SAM/BAM files from modEncode CAGE-seq data. I looked at SAM files and observed some inconsistency on the way how the first base (transcription start sites) is called. When there is a mismatch on first base, by either "N" or due to insertion of "G" on first base, that will shift TSS by one base. I have shown a example of where transcription start site (TSS) is correct and another example where TSS has shifted by one base due to mismatch on first base. **Correct mapping** TSS is 1450200 > chr3R 1450200 1450227 HWUSI-EAS1720_0021_FC63A8AAAXX:2:4:4359:14455#0/1 0 + 0 27M * 0 0 CTTTCCGTGCGGTTCGTAAAAATGACT caffffcaffffcffffcfdfff_efd PQ:i:16 > chr3R 1450200 1450227 HWUSI-EAS1720_0021_FC63A8AAAXX:2:6:8037:15660#0/1 0 + 0 27M * 0 0 CTTTCCGTGCGGTTCGTAAAAATGACT hhhhhhhhhhfhhhhfeffhhgfdehg PQ:i:19 **Incorrect TSS due to mismatch on first base** TSS is 1450199 instead of 1450200 Here,either "N" is inserted on first base or "G" added by CAGE protocol. Both of these result in TSS being different by one nucleotide. > chr3R 1450199 1450226 HWUSI-EAS1720_0021_FC63A8AAAXX:2:69:12174:14031#0/1 0 + 0 27M * 0 0 NCTTTCCGTGCGGTTCGTAAAAATGAC Geeedefadffffdfffdffffadfef PQ:i:1 > chr3R 1450199 1450226 HWUSI-EAS1720_0021_FC63A8AAAXX:2:84:2284:6722#0/1 0 + 0 27M * 0 0 NCTTTCCGTGCGGTTCGTAAAAATGAC F]b``bffcfcggcggfd__febbbBB PQ:i:0 > chr3R 1450199 1450226 HWUSI-EAS1720_0021_FC63A8AAAXX:2:100:6796:15301#0/1 0 + 0 27M * 0 0 GCTTTCCGTGCGGTTCGTAAAAATGAC Qfffcfffdffffbfffdccadd^Wb` PQ:i:0 How can i correct these in my SAM file ? Basically if there is a mismatch on first base, TSS info should be corrected. So on this case, if "N" or "G" is clipped, it's TSS should be 1450200. I looked at CIGAR information, but it appears to be same "27M" on all. Any help is appreciated. Thank you !!!
Hi Chirag, on my side, I usually work on the CAGE data after it is transformed to a BED format, and during that transformation I apply a naive correction for the extra Gs. My workflow is paired-end, so it does not directly apply to the modENCODE data, but for the sake of the example, here is an extract from the [source code](https://github.com/Population-Transcriptomics/pairedBamToBed12/blob/pairedbamtobed12/src/pairedBamToBed12/pairedBamToBed12.cpp#L339) of the [pairedBamToBed12](https://www.biostars.org/p/160342/) tool that we are using. void SimpleGCorrection(const BamAlignment& bam1, const BamAlignment& bam2, const string strand, unsigned int& alignmentStart, unsigned int& alignmentEnd, vector<int>& blockLengths, vector<int>& blockStarts) { string md; if ( (strand == "+") & (FirstBase(bam1) == "G") ) { bam1.GetTag("MD", md); md = md.substr(0,2); if (md == "0A" || md == "0C" || md == "0T") CutOneLeft(alignmentStart, blockLengths, blockStarts); } if ( (strand == "-") & (LastBase(bam2) == "C") ) { bam2.GetTag("MD", md); md = md.substr(md.length() -2, 2); if (md == "A0" || md == "G0" || md == "T0") CutOneRight(alignmentEnd, blockLengths); } } And here is the disclaimer that I added in our documentation: > NOTE: CAGE methods sometimes add an extra G at the beginning of the cDNAs (see http://population-transcriptomics.org/nanoCAGE/#extra-G). This leads to 1-base shifts of some TSS peaks. From version 1.2, `pairedBamToBed12` provides an experimental option, `-extraG` to shift the start or end (according to the strand) of the output of one base when a G mismatch is detected on the first base of Read1. This is a very naive implementation and a more detailed description of the problem may be found in the supplemental material of the [FANTOM3 main article](http://science.sciencemag.org/content/309/5740/1559). Thus, the `-extraG` option available here is not entierly satisfactory and may be removed in the future. A better approach for instance would be to post-process the BAM file instead of implementing a correction here. I hope it helps.
biostars
{"uid": 234966, "view_count": 1854, "vote_count": 1}
Hello, I am looking for an article or video which explains how the normalization in limma-voom is performed. I looked for some articles, but I could not find any which explain it clearly. Can someone help me in this regard?
A good place to start would be to look back to where limma was originally used, i.e., for microarray analyses and microarray normalisation, and I highly recommend Professor John Quackenbush's great paper on this: http://www.cs.cmu.edu/~zivbj/class05/reading/norm.pdf

Then I would read up on linear modelling and linear regression, as limma is fundamentally based on linear modelling. Finally, with that under your belt, I would read the actual work by the 'new' limma authors:

- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4402510/
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4937821/

Part of being a great researcher is being able to read and understand your colleagues' published works, but I can understand that entering a new field can take time when there are literally 1000s of articles already out there on the topic.
biostars
{"uid": 275742, "view_count": 4118, "vote_count": 1}
Dear all: I have downloaded the Geneious software trial version [https://www.geneious.com/][1], but it has now expired. As a fresh grad student, I can't afford to pay for the full version of Geneious. I am aware that Geneious has abundant features for analyzing NGS data. My hope is to find tools that are relatively similar to Geneious. My main intent is to carry out **phylogenetic analysis**. Can anyone recommend a good alternative to Geneious? Are there any free, open-source software tools that can serve as an alternative to Geneious? My colleague is not comfortable with the R environment, so we are seeking tools that require less programming input. Can anyone give possible aid or point out which library we can use?

[1]: https://www.geneious.com/
You can also have a look at [UGENE][1] as an alternative to Geneious. [1]: http://ugene.net/
biostars
{"uid": 276826, "view_count": 14393, "vote_count": 1}
Hi all, I'm a bioinformatician and I often have to deal with RNA-seq differential gene expression analysis projects. I think I understand well the whole process of getting from the raw data to the normalized read counts but, unfortunately, due to my limited statistical background, I'm having trouble dealing with the last step of differential expression. When it comes to a simple pairwise comparison between two conditions I understand the process, but when there are more complex comparisons (time courses, multiple comparisons, including confounding effects ...) I struggle to choose the relevant design matrices. I'm curious if anyone knows good tutorials, online courses, books, or any resources that would allow me to learn how to get better at that. Thanks for your help,
I've not actually done it myself, but people say good things about Rafael Irizarry's courses. Particularly see weeks 3 and 4 of the Introduction to Linear Models and Matrix Algebra course [here][1] and [chapter 5][2] of his book. [1]: http://rafalab.github.io/pages/harvardx.html [2]: http://genomicsclass.github.io/book/
biostars
{"uid": 361024, "view_count": 1407, "vote_count": 3}
I have two files with genes.

File one (with 40000 genes):

    Gene 1
    Gene 2
    Gene 3
    Gene b
    Gene f
    Gene c
    Gene r
    Gene z

File two (with 39000 genes):

    Gene 1
    Gene 3
    Gene 2
    Gene b

I would like to know if there is a command line (with awk or bash) to extract the lines that exist in the first file and not in the second file.
> I would like to know if there is a command line (with awk or bash) to extract the lines that exist in the first file and not in the second file

use comm : http://man7.org/linux/man-pages/man1/comm.1.html

    comm -23 <(sort file1.txt) <(sort file2.txt)

(`-2` suppresses the lines unique to file2 and `-3` suppresses the lines common to both, leaving only the lines that are in file1 but not in file2.)
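An equivalent-in-spirit Python sketch of my own (file names are placeholders), which avoids the need to sort the files first:

```python
# Print the lines present in file1 but absent from file2.
with open("file2.txt") as f2:
    in_file2 = set(line.rstrip("\n") for line in f2)

with open("file1.txt") as f1:
    for line in f1:
        if line.rstrip("\n") not in in_file2:
            print(line, end="")
```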
biostars
{"uid": 237170, "view_count": 6196, "vote_count": 1}
I am analysing protein data using Python programming language and Jupyter notebook. In the Terminal I have put an alias in a hidden file on the home directory entitled .bash_profile, in order to be able to open pymol directly from the Terminal . alias pymol=/folder/where/pymol/is/located/Applications/PyMOL.app In Jupyter notebook, the first two commands are executed: from ipymol import viewer as pymol pymol.start() However, the following command gives an attribute error. pymol.fetch('4MBS') # Fetch PDB Is there another way to load a file directly from my hardrive for viewing? The only workaround I have found is using. import nglview as nv view = nv.show_file(/path/to/folder/where/protein/located.protein) view I appreciate any help in advance as i would like to use pymol in this way, if possible. Thanks
The answer to this question is that a bash shell .pml script needs to be made instead of a python .py script. The shell script can be shown in a Jupyter Notebook by using RawNBConvert setting, which is found underneath the help menu. The images generated from the .pml script can be viewed using Pymol and snapshot jpeg or png files can be taken of the molecules. The ipymol does not work and the nglview is not required.
biostars
{"uid": 390717, "view_count": 2652, "vote_count": 1}
Hi All, I know there are a few posts here that raise specific questions about RNA-seq library prep. protocols, but I was curious if there's a comprehensive catalog about exactly what protocols exist and exactly what type of data they produce (i.e. to what do the reads in the final FASTA/FASTQ files correspond). Basically, I'm somewhat confused by all of the different aspects of a protocol and how they compose. For example, if a protocol is stranded or not, the relative orientation of the reads, which strands each read (i.e. \1 and \2) comes from. Are the reads that end up in a FASTA/Q file always reported 5' to 3' with respect to the strand from which they are derived? Are mates pointing toward each other reported with respect to the same strand, opposite strands, or both depending on the protocol? What do people mean when they say reads are 'reversed' --- does this mean reverse complemented with respect to the mate, or that the read is actually reported in the 3' to 5' direction (i.e. reversed but not complemented)? Basically, I'm curious how all of these different variables interact with each other to produce the read sequences that will be used for downstream analysis. If I want to be able to communicate (to another person, or, perhaps as importantly, to a piece of software) all of the details / constrains about which reads should map to which strands in which orientations --- what is the most parsimonious way to do so? What is the minimum amount of information I need to convey? Is there a standard language / specification for representing this information? I'm sorry to ask such a broad question, but I'm a bit overwhelmed and trying to gain a comprehensive understanding of what, exactly, the reads in a file represent in light of the protocol under which they were prepared. Thanks!
The relative orientation of a pair of reads will be the same unless you're using mate-pairs or the like. This is true regardless of whether you have a stranded/directional library or not.

**Edit:** I should add that when this isn't the case with a standard paired-end library then either (A) an error occurred when the library was made (PCR or otherwise), or (B) the sample doesn't match the reference there (i.e., you have a variant), or (C) it's a case of incorrect mapping, or (D) something really strange is going on (perhaps the nature of the experiment would make this likely).
biostars
{"uid": 104915, "view_count": 2829, "vote_count": 1}
Here is my attempt: def remove_too_long_reads(bam): for read in bam: if read.end - read.start < 60: yield read bam = BedTool(input.bam) bam = BedTool(remove_too_long_reads(bam)) But when I try to use the bam file afterwards like so: bam.intersect(... I get an error: > <class 'BrokenPipeError'>: Broken pipe The command was: > > bedtools intersect -b data/gencode_annotation/tss.bed -a stdin > > Things to check: Error in job intersect_data_tss while creating output > file data/tss/bam/Exp2_9h_PolII.bam. RuleException: KeyError in line > 31 of > /local/home/endrebak/code/programmable_epigenetics/rules/tss/intersect_bam_gencode.rules: > 32 File > "/local/home/endrebak/code/programmable_epigenetics/rules/tss/intersect_bam_gencode.rules", > line 31, in __rule_intersect_data_tss File > "/local/home/endrebak/anaconda3/lib/python3.5/site-packages/pybedtools/bedtool.py", > line 773, in decorated File > "/local/home/endrebak/anaconda3/lib/python3.5/site-packages/pybedtools/bedtool.py", > line 336, in wrapped File > "/local/home/endrebak/anaconda3/lib/python3.5/site-packages/pybedtools/helpers.py", > line 373, in call_bedtools
In order to maintain compatibility with VCF/BED/GFF/GTF functionality, pybedtools converts BAM/SAM reads into `pybedtools.Interval` objects. When just filtering on length as you're doing here, that's unnecessary overhead. Here's a more efficient way to do it with pysam, which also has the benefit of providing access to all the read details if you need to do more sophisticated filtering. You can then use the filtered bam for downstream operations (e.g., histograms of read counts across intervals): https://gist.github.com/daler/f44c19510860aeb29e8f75432eec8883
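In case the gist link ever goes stale, a minimal pysam sketch of the same idea (untested; file names are placeholders) looks like this:

    import pysam

    inbam = pysam.AlignmentFile("input.bam", "rb")
    outbam = pysam.AlignmentFile("filtered.bam", "wb", template=inbam)

    for read in inbam:
        # keep reads whose span on the reference is below 60 bp
        if read.reference_length is not None and read.reference_length < 60:
            outbam.write(read)

    inbam.close()
    outbam.close()

The filtered BAM can then be passed to `BedTool("filtered.bam").intersect(...)` as before, without paying the Interval-conversion overhead during filtering.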
biostars
{"uid": 193145, "view_count": 2620, "vote_count": 1}
I would like to compare the results from different algorithms for clustering data from an RNAseq experiment. One methodology we used is **WGCNA**, which represents each cluster/module by a module **eigengene**. What I am now looking for is a way to calculate such eigengenes on arbitrary lists of genes and their expression profiles. I want to use that to reduce the clusters generated by other methods to a set of "representative genes" and compare that to the WGCNA output. Unfortunately, I'm not clear about how to get from a list of genes and their expression profiles to an eigengene. The functions inside the WGCNA R package are deeply tied into the WGCNA analysis and I can't see how to use them on arbitrary data frames of gene expression data. Any hints would be highly welcome!
Eigengenes were first defined in this [paper][1]. Singular value decomposition (SVD) is what you're looking for. The eigengenes are the right singular vectors of the SVD of the expression matrix. If X is your data with genes as rows and samples as columns, the SVD of X is X=USV' and the eigengenes are defined as the vectors in V. In R, eigengenes <- svd(X)$v EDIT: Fixed link to paper. [1]: http://www.pnas.org/content/97/18/10101
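If you want something closer to what WGCNA's moduleEigengenes() returns for an arbitrary gene list, a rough sketch would be the following (my assumptions: `expr` is a genes x samples matrix, `genes` is the cluster of interest, and WGCNA standardises each gene before taking the first singular vector - check its implementation if you need an exact match):

    X <- t(scale(t(as.matrix(expr[genes, ]))))  # centre/scale each gene across samples
    sv <- svd(X)
    eigengene <- sv$v[, 1]                      # first right singular vector
    # optionally flip the sign so it correlates positively with average expression
    if (cor(eigengene, colMeans(X)) < 0) eigengene <- -eigengene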
biostars
{"uid": 299167, "view_count": 6038, "vote_count": 1}
blastp -query homo.faa -db cow.faa -out homo_blast.csv -outfmt 6 -evalue 0.00001 -max_target_seqs 1 -num_threads 32 I got the below error while performing protein blast with the code mentioned above > BLAST Database error: No alias or index file found for protein > database [cow.faa] in search path When I revised the code by specifying the path, the problem was not resolved. When I typed subject instead of db, the problem was solved, but I'm not sure if it's correct.
When you use a `-db` switch, it is expected that your database will be indexed. This is done using a `makeblastdb` command, and will create several files that will end in `.p??` where ?? stands for two other letters. When searching against a large database, to gain speed you would want to index it and use a `-db` switch. You most likely have not done this step. When using a `-subject` switch, the database need not be indexed. The result will be the same as when using the `-db` switch, but the search will be slower and this is not recommended for larger databases. It is typically used for comparing two sequences, or when the target database is relatively small.
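For example (names are just illustrative):

    makeblastdb -in cow.faa -dbtype prot -out cow_db
    blastp -query homo.faa -db cow_db -out homo_blast.tsv -outfmt 6 \
           -evalue 0.00001 -max_target_seqs 1 -num_threads 32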
biostars
{"uid": 9537312, "view_count": 335, "vote_count": 1}
I am trying to understand how to choose an optimal pi_hat parameter for a dataset. In many articles, 0.2 is chosen as the pi_hat threshold, and everything above that is considered to be cryptic relatedness or duplicates.

I've tested IBD on HapMap; the files I use can be found here: ftp://ftp.ncbi.nlm.nih.gov/hapmap/genotypes/2009-01_phaseIII/plink_format/. I first remove all annotated offspring from HapMap. Then I perform IBD to see if it still finds samples with cryptic relatedness to each other. The steps I perform are the following (in PLINK):

    1) LD-prune:
    plink --file hapmap --indep-pairwise 50 5 0.2
    plink --file hapmap --extract plink.prune.in --recode --out hapmap_pruned

    (2) IBD:
    plink --file hapmap_pruned --genome --min 0.2

The results show that many cryptically related samples can be found with a pi_hat of 0.2 as the threshold, even if all offspring were initially removed. My question is: is this normal behavior, or should one increase the pi_hat? How can one find a "good" pi_hat for a custom dataset?
This is normal behavior for the HapMap data set. See [Stevens et al.](http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0049575) and a few of the earlier papers he cites that try to identify un-annotated relationships in HapMap. The Stevens paper uses Cotterman coefficients (K1, K2), which are the fractions of the genome shared IBD1 and IBD2. As zx8754 mentioned above, 0.125 is third degree, 0.25 is second degree, and 0.5 is first degree, although in practice these thresholds can be too low or too high due to consanguinity and admixture.
biostars
{"uid": 75335, "view_count": 24590, "vote_count": 14}
I used [this piece of Biopython code](http://www.biostars.org/post/show/2822/extracting-multiple-fasta-sequences-at-a-time-from-a-file-containing-many-sequences/#2823) written by [Eric Normandeau](http://www.biostars.org/user/profile/216/) to parse several sequences from a Fasta file. The code worked nicely. I would like to have my parsed output converted from this format:

    >DR179241 similar to UniRef100_A5HIY3 Cluster: Thaumatin-like protein; n=1; ......
    ATAATCTTTAGATCAGTCATCAATCTCAACAGTATCGCTTTCAATTCTCTTTCATATTGC
    ATGGAAGTGTGTAAATACAATTAGGGCATTCATTGAGTTGACTTCATTTAAGCGCT......

into exactly this format:

    >DR179241
    ATAATCTTTAGATCAGTCATCAATCTCAACAGTATCGCTTTCAATTCTCTTTCATATTGC
    ATGGAAGTGTGTAAATACAATTAGGGCATTCATTGAGTTGACTTCATTTAAGCGCT......

Could anyone kindly show me how to modify that piece of Biopython code to suit my purpose?

Thank you very much and have a nice day.
You could do this in R, using [seqinR](http://pbil.univ-lyon1.fr/software/seqinr/):

    library(seqinr)
    x = read.fasta("yourfile.fasta")
    write.fasta(x, file = "newfile.fasta", names = names(x))
biostars
{"uid": 46818, "view_count": 3640, "vote_count": 1}
I am stuck with a very simple problem. I want to build a Pearson correlation matrix for my microarray dataset. My .csv file consists of normalized, log-transformed expression values of 18k genes across 36 samples. I want to find the gene-gene Pearson correlations from this matrix using R. After that, I want to transform the matrix into the form of an edge list, with genes in the first two columns and the value of the correlation in the last column. I was trying out the cor() function in R, but I guess there is some issue with numeric/character values, because of which it gives me the error `x has to be numeric`. Kindly give some suggestions as to how I can read in the file and transform the matrix. Thanks

    Gene  sample1  sample2  sample3
    A     10       50       78
    B     50       45       55
    C     70       56       44
Use my script [taxo_bivariate_plot.R](http://userweb.eng.gla.ac.uk/umer.ijaz/bioinformatics/taxo_bivariate_plot.R). It uses `cor()` as suggested by Phil S. Usage information is as follows:

    $ Rscript taxo_bivariate_plot.R --help
    Usage: taxo_bivariate_plot.R [options] file

    Options:
        --ifile=IFILE
            CSV file
        --opath=OPATH
            Output path
        --fsize=FSIZE
            Font size [default 1.2]
        --width=WIDTH
            Width of jpeg files [default 800]
        --height=HEIGHT
            Height of jpeg files [default 800]
        --correlation=CORRELATION
            Correlation to use: 1=pearson, 2=spearman, 3=kendall [default 1]
        --rmode
            Mode: TRUE=R mode, FALSE=Q mode [default FALSE]
        -h, --help
            Show this help message and exit

This script generates bivariate plots with histograms on the diagonals, scatter plots with smooth curves below the diagonals, and correlations with significance levels above the diagonals. The data file has the following organization:

             Var_1 Var_2 Var_3 .. Var_R
    Sample_1
    Sample_2
    Sample_3
    ...
    Sample_N

For example,

    $ head ENV_pitlatrine.csv
    Samples,pH,Temp,TS,VS,VFA,CODt,CODs,perCODsbyt,NH4,Prot,Carbo
    T_2_1,7.82,25.1,14.53,71.33,71,874,311,36,3.3,35.4,22
    T_2_10,9.08,24.2,37.76,31.52,2,102,9,9,1.2,18.4,43
    T_2_12,8.84,25.1,71.11,5.94,1,35,4,10,0.5,0,17
    T_2_2,6.49,29.6,13.91,64.93,3.7,389,180,46,6.2,29.3,25
    T_2_3,6.46,27.9,29.45,26.85,27.5,161,35,22,2.4,19.4,31
    T_2_6,7.69,28.7,65.52,7.03,1.5,57,3,6,0.8,0,14
    T_2_7,7.48,29.8,36.03,34.11,1.1,107,9,8,0.7,14.1,28
    T_2_9,7.6,25,46.87,19.57,1.1,62,8,13,0.9,7.6,28
    T_3_2,7.55,28.8,12.65,51.75,30.9,384,57,15,21.6,33.1,47

    $ Rscript taxo_bivariate_plot.R --ifile=ENV_pitlatrine.csv

will generate the following image:

![](http://userweb.eng.gla.ac.uk/umer.ijaz/bioinformatics/ENV_pitlatrine_BP.jpg)

This Rscript is the back-end of my [TAXAenv](http://quince-srv2.eng.gla.ac.uk:8080) website. You can use `--rmode` to transpose your matrix, and change the script accordingly to meet your needs.

Best Wishes,
Umer
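If you just want the correlation matrix and the edge list without the plotting script, a minimal sketch in plain R (untested; reshape2 is assumed to be installed, and the file is assumed to have genes in the first column) would be:

    library(reshape2)

    expr <- read.csv("expression.csv", row.names = 1)  # gene names become rownames, so the remaining columns are numeric
    cors <- cor(t(expr), method = "pearson")           # gene-gene Pearson correlations
    edges <- melt(cors)                                 # long format: gene1, gene2, correlation
    colnames(edges) <- c("gene1", "gene2", "r")
    edges <- edges[edges$gene1 != edges$gene2, ]        # drop self-correlations
    write.table(edges, "edge_list.txt", sep = "\t", quote = FALSE, row.names = FALSE)

Reading the gene column in as rownames (rather than as a data column) is also what avoids the "x has to be numeric" error.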
biostars
{"uid": 98664, "view_count": 15320, "vote_count": 6}
Can anyone suggest me a pipeline with scripts for exome sequencing starting from the raw reads(paired end) till calling SNP, INDELS and then viewing the aligned file in IGV , it would be good if anyone has worked with GATK(genome analysis toolkit). I am new to exome sequencing data analysis , I have a proposed pipeline which I have made but I cannot understand it properly , so if anyone can help me out who has already in depth knowledge and experience in this area it would be of great help. My pipeline is below. I want to know how to use it with the scripts and call the variants and also find out the list of genes causing the mutations. ### Align samples to reference genome (BWA), generates SAI files. #### Steps pipeline: 1. Convert SAI to SAM (BWA) 2. Convert SAM to BAM binary format (SAM Tools) 3. Sort BAM (SAM Tools) 4. Index BAM (SAM Tools) 5. Identify target regions for realignment (Genome Analysis Toolkit) 6. Realign BAM to get better Indel calling (Genome Analysis Toolkit) 7. Reindex the realigned BAM (SAM Tools) 8. Call Indels (Genome Analysis Toolkit) 9. Call SNPs (Genome Analysis Toolkit) 10. View aligned reads in BAM/BAI (Integrated Genome Viewer) Does anyone have a script to perform this analysis for understanding. I also have a basic script which I am attaching below. ### Standard exome sequencing pipeline #### Preparation of the input files tar -xzf chromFa.tar.gz Then concatenate the single-chromosome files to a single genome reference file (make sure they are in the exact same order as stated below, GATK won't work otherwise): ``` cat chr1.fa chr2.fa chr3.fa chr4.fa chr5.fa chr6.fa chr7.fa chr8.fa chr9.fa \ chr10.fa chr11.fa chr12.fa chr13.fa chr14.fa chr15.fa chr16.fa chr17.fa chr18.fa \ chr19.fa chr20.fa chr21.fa chr22.fa chrX.fa chrY.fa chrM.fa > hg19.fa ``` aligning the sequences to the human genome I use BWA (also why we are doing this step if this is not the actual alignment step) bwa index -a bwtsw -p hg19 hg19.fa #### Actual Alignment If this is done you could start aligning you fastq files to that by invoking bwa like this: bwa aln -t 4 -f input.sai -I hg19 input.fastq For a sample called Exome1 that would in my case look like `@RG\tID:Exome1\tLB:Exome1\tSM:Exome1\tPL:ILLUMINA` (not being able to understand the below line) bwa sampe -f out.sam -r "@RQ\tID:<ID>\tLB:<LIBRARY_NAME>\tSM:<SAMPLE_NAME>\tPL:ILLUMINA"\ hg19 input1.sai input2.sai input1.fq input2.fq #### SAM to BAM] conversion ``` java -Xmx4g -Djava.io.tmpdir=/tmp \ -jar picard/SortSam.jar \ SO=coordinate \ INPUT=input.sam \ OUTPUT=output.bam \ VALIDATION_STRINGENCY=LENIENT \ CREATE_INDEX=true ``` #### Marking PCR Duplicates ``` java -Xmx4g -Djava.io.tmpdir=/tmp \ -jar picard/MarkDuplicates.jar \ INPUT=input.bam \ OUTPUT=input.marked.bam \ METRICS_FILE=metrics \ CREATE_INDEX=true \ VALIDATION_STRINGENCY=LENIENT ``` #### Local realignment around indels #### Step1: ``` java -Xmx4g -jar GenomeAnalysisTK.jar \ -T RealignerTargetCreator \ -R hg19.fa \ -o input.bam.list \ -I input.marked.bam ``` This step puts the table in the file in `input.bam.list`. When this is finished we can start the realigning step using the statements below: ``` java -Xmx4g -Djava.io.tmpdir=/tmp \ -jar GenomeAnalysisTK.jar \ -I input.marked.bam \ -R hg19.fa \ -T IndelRealigner \ -targetIntervals input.bam.list \ -o input.marked.realigned.bam ``` When using paired end data, the mate information must be fixed, as alignments may change during the realignment process. 
Picard offers a utility to do that for us: ``` java -Djava.io.tmpdir=/tmp/flx-auswerter \ -jar picard/FixMateInformation.jar \ INPUT=input.marked.realigned.bam \ OUTPUT=input_bam.marked.realigned.fixed.bam \ SO=coordinate \ VALIDATION_STRINGENCY=LENIENT \ CREATE_INDEX=true ``` ### Quality score recalibration That's still not all. Quality data generated from the sequencer isn't always very accurate and for obtaining good SNP calls (which rely on base quality scores), recalibration of these scores is necessary (See http://www.broadinstitute.org/files/shared/mpg/nextgen2010/nextgen_poplin.pdf as well). Again this is done in two steps: the CountCovariates step and the TableRecalibration steps. Both can be run from the GATK package: 1\. Count covariates: ``` java -Xmx4g -jar GenomeAnalysisTK.jar \ -l INFO \ -R hg19.fa \ --DBSNP dbsnp132.txt \ -I input.marked.realigned.fixed.bam \ -T CountCovariates \ -cov ReadGroupCovariate \ -cov QualityScoreCovariate \ -cov CycleCovariate \ -cov DinucCovariate \ -recalFile input.recal_data.csv ``` This step creates a .csv file which is needed for the next step and requires a dbSNP file, which can be downloaded at the UCSC Genome browser homepage DbSNP132 is the most novel one which can be downloaded from the UCSC browser, but dbSNP is updated regularly, so newer versions will be available in the future. Download the dbsnp132.txt.gz file and unzip it using gunzip (that's just an example). 2\. Table recalibration: ``` java -Xmx4g -jar GenomeAnalysisTK.jar \ -l INFO \ -R hg19.fa \ -I input.marked.realigned.fixed.bam \ -T TableRecalibration \ --out input.marked.realigned.fixed.recal.bam \ -recalFile input.recal_data.csv ``` ### SNP calling Produce raw SNP calls SNP calling is done using the GATK UnifiedGenotyper program. It calls SNPs and short indels at the same time and gives a well annotated VCF file as output. ``` java -Xmx4g -jar GenomeAnalysisTK.jar \ -glm BOTH \ -R hg19.fa \ -T UnifiedGenotyper \ -I input.marked.realigned.fixed.recal.bam \ -D dbsnp132.txt \ -o snps.vcf \ -metrics snps.metrics \ -stand_call_conf 50.0 \ -stand_emit_conf 10.0 \ -dcov 1000 \ -A DepthOfCoverage \ -A AlleleBalance \ -L target_intervals.bed ``` #### Filter SNPs Although this step is called filtering, I usually don't throw out possible wrong SNP calls and sometimes it proved to be useful to get back to those SNPs in a later step in the analysis. I prefer to flag them according to the reason why they should be filtered. The filtering scheme are partially the recommended ones by the GATK team and some are based on my experience. A SNP which passes through all the filters doesn't necessarily mean a true SNP call and SNPs filtered out don't necessarily define a sequencing artifact, but it gives a clue for possible reasons why a SNP could be wrong. (In case you've got several exomes (>30) Variant Quality Score recalibration will yield better results than pure filtering. 
For details see http://www.broadinstitute.org/gsa/wiki/index.php/Variant_quality_score_recalibration) ``` java -Xmx4g -jar GenomeAnalysisTK.jar \ -R hg19.fa \ -T VariantFiltration \ -B:variant,VCF snp.vcf.recalibrated \ -o snp.recalibrated.filtered.vcf \ --clusterWindowSize 10 \ --filterExpression "MQ0 >= 4 && ((MQ0 / (1.0 * DP)) > 0.1)" \ --filterName "HARD_TO_VALIDATE" \ --filterExpression "DP < 5 " \ --filterName "LowCoverage" \ --filterExpression "QUAL < 30.0 " \ --filterName "VeryLowQual" \ --filterExpression "QUAL > 30.0 && QUAL < 50.0 " \ --filterName "LowQual" \ --filterExpression "QD < 1.5 " \ --filterName "LowQD" \ --filterExpression "SB > -10.0 " \ --filterName "StrandBias" ``` ### Annotations using annovar #### Conversion to annovar file format For annotating SNP calls I use the software annovar (http://www.openbioinformatics.org/annovar). It annotates a lot of different data to the SNPs and is especially suited for exome-level data-sets. At first we need to convert the VCF file format to the annovar file format. Annovar got it's own script to do that for us convert2annovar.pl --format vcf4 --includeinfo snp.recalibrated.filtered.vcf > snp.annovar include the `--includeinfo` argument as this will move the annotations from GATK (filters, SNP quality scores and everything else) to the annovar file. Another script annotates the annovar file. This script needs some annotation files, all of which can be downloaded at their homepage Be sure to get all the hg19_xxx files if you've done alignment on the hg19 human assembly and save it in the humandb subfolder of the annovar folder. The script then produces a comma-separated text file with all the annotations, which can be viewed in Excel, OpenOffice Calc or similar programs. summarize_annovar.pl --buildver hg19 snp.annovar ./humandb -outfile snps It would be of great help if anyone can come up with suggestions or some pipeline script.
<p>What you're asking here is probably beyond the scope of a Q/A site. To properly review all of these steps and provide feedback and suggestions would take hours. If you really need that level of support, then you're going to want to pay someone a consulting fee to help you get your pipeline set up. </p> <p>If you have specific questions about individual steps or commands, then Biostar can be a great resource, and please do feel free to ask questions. I'd encourage you to look through old posts first, as many of these topics have been addressed individually in the past.</p>
biostars
{"uid": 82617, "view_count": 15556, "vote_count": 1}
I know I might be missing something obvious, but how do I get the accuracy/quality of my imputation after using IMPUTE2 (or any imputation tool)? How is the quality of an imputation measured? Thank you!
I actually figured it out. In case anyone ever has the same question: IMPUTE2 outputs a file called "info" that contains imputation quality information. Depending on the command-line arguments you choose, you can get more specific statistics. https://mathgen.stats.ox.ac.uk/impute/output_file_options.html#info_metric_details
biostars
{"uid": 365968, "view_count": 2205, "vote_count": 1}
Hello everyone, I am new in 1000 genomes project data. I want to download all bam files belonging to phase3, can anyone guide me how can I download all of them (from the command line?). Do you have any estimation how long it is going to take? I want to compute the depth of coverage only for some specific intervals, not the entire genome. Is there any way that I could do it without downloading the data? I could find this, but not sure if it is relevant to what I want to do? samtools view -b ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data/HG01375/alignment/HG01375.mapped.ILLUMINA.bwa.CLM.low_coverage.20120522.bam 2:1,000,000-2,000,000 | genomeCoverageBed -ibam stdin -bg > coverage.bg I would appreciate if anyone could guide me.
you wrote:

     samtools view -b ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data/HG01375/alignment/HG01375.mapped.ILLUMINA.bwa.CLM.low_coverage.20120522.bam 2:1,000,000-2,000,000 | (...)

you want:

    samtools view -bu 'http://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase3/data/HG01375/alignment/HG01375.mapped.ILLUMINA.bwa.CLM.low_coverage.20120522.bam' "2:1000000-2000000" | (...)
biostars
{"uid": 310884, "view_count": 3626, "vote_count": 1}
Hi, I have this list of proteins from a new genome project so its pretty much unannotated. However, it's closely related to *C. elegans* so I was thinking of trying to identify the closest *C. elegans* homologues. What I've been doing right now is doing a protein blast in ncbi with the protein sequences and then taking the top *C. elegans* hit, however, there are far too many sequences to be able to do this one at a time, so I was wondering if there's a way to do it faster/automated/program that does it for me. Thanks!
Try using [HMMER][1]. The manual is available [here][2].

IN BRIEF: for each protein sequence in *C. elegans* you make an HMM using the `hmmbuild` command. Concatenate all HMM models into a single file to make a database, and run `hmmpress` on it to create the additional index files needed for searching. You can then use `hmmscan` to scan your query sequences against the database of models you have just created, or `hmmsearch` to scan individual models against the set of sequences you have. The documentation describes the commands very well, but note the subtle difference between scanning a sequence against a database of models and scanning a model against a set of sequences.

If you have access to a parallel environment such as MPI (OpenMPI can usually be installed even on local machines to take full advantage of multiple cores) then you can build HMMER with MPI support to increase throughput.

A rough idea of the timings in our use: building and pressing a database of ~10k models takes 10 mins (ish), and scanning a coding sequence against a database of ~10k models takes 2-3 seconds. This is a very rough guide from our own usage, which will undoubtedly differ from your use case.

[1]: http://hmmer.janelia.org/
[2]: ftp://selab.janelia.org/pub/software/hmmer3/3.1b1/Userguide.pdf
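A minimal sketch of those commands (untested; paths and file names are placeholders, and hmmbuild is given single-sequence FASTA files here):

    # build one HMM per C. elegans protein, then combine and index them
    for f in celegans_prot/*.fasta; do hmmbuild "${f%.fasta}.hmm" "$f"; done
    cat celegans_prot/*.hmm > celegans.hmm
    hmmpress celegans.hmm

    # scan the proteins from the new genome against the profile database
    hmmscan --tblout hits.tbl celegans.hmm new_species_proteins.faa

The top-scoring hit per query in hits.tbl then gives the closest *C. elegans* model for each protein.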
biostars
{"uid": 107359, "view_count": 2815, "vote_count": 1}
Hello all, I've posted the question in Stackoverflow but I thought I might get more responses here. I was able to load my csv file into a numpy array: data = np.genfromtxt('csv_file', dtype=None, delimiter=',') Now I would like to generate a heatmap. I have 19 categories from 11 samples, along these lines: COG station1 station2 station3 station4 COG0001 0.019393497 0.183122497 0.089911227 0.283250444 0.074110521 COG0002 0.044632051 0.019118032 0.034625785 0.069892277 0.034073709 COG0003 0.033066112 0 0 0 0 COG0004 0.115086472 0.098805295 0.148167492 0.040019101 0.043982814 COG0005 0.064613057 0.03924007 0.105262559 0.076839235 0.031070155 COG0006 0.079920475 0.188586049 0.123607421 0.27101229 0.274806929 COG0007 0.051727492 0.066311584 0.080655401 0.027024185 0.059156417 COG0008 0.126254841 0.108478559 0.139106704 0.056430812 0.099823028 I wanted to use matplotlib colormesh. all the examples I could find used random number arrays. I can get the plot easily with random numbers, however I can't get my csv file to plot. first it refuses to reshape. I have NaNs there so I tried masking but that failed too. Also, I had to delete the header and first column, is there a way to leave them and get labels for the axes? I've edited the original question to include an excerpt of the csv file. any help and insights would be greatly appreciated. many thanks
Here's a nickel, kid, go get yourself a better plotting library

    > library(ggplot2)
    > library(reshape2)   # provides melt()
    > foo = read.table('foo.txt', header=T)
    > foomelt = melt(foo)
    Using COG as id variables
    > ggplot(foomelt, aes(x=COG, y=variable, fill=value)) + geom_tile() + scale_fill_gradient(low='white', high='steelblue')
    > ggsave('biostar.png')
    Saving 7.97" x 7.75" image

ggplot2 is plotting heaven and *way* better than matplotlib. Use rpy2 to run from python - they even have ggplot2 examples in the docs.

![](https://imgur.com/fGTZT.png)
biostars
{"uid": 920, "view_count": 28323, "vote_count": 9}
I have made a package that calls the QuasR package among others; therefore, I have put it both in the DESCRIPTION file and in the NAMESPACE file. The DESCRIPTION file is as follows:

> **LazyData**: true
>
>**Imports**: Rcpp, Biostrings, bedr, devtools, tidyverse, dplyr, phangorn, ggseqlogo, metan, ggpubr, scales, ggplot2
>
> **Enhances**: parallel
>
> **Remotes**: bioc::release/Rbowtie,bioc::release/QuasR,bioc::release/GenomicAlignments,bioc::release/Rhtslib, bioc::release/rtracklayer,bioc::release/ShortRead, bioc::release/BSgenome,bioc::release/GenomicFeatures, bioc::release/VariantAnnotation,bioc::release/GenomicFiles,bioc::release/dada2, bioc::release/QuasR
>
> **LinkingTo**: Rcpp
>
> **RoxygenNote**: 7.2.0

The problem is that when I Dockerize this package, the following error occurs:

`Error in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]): there is no package called ‘QuasR’`

How can I declare this package in the DESCRIPTION file? I tried listing it under Imports, with the same result...
Try to add a line with `biocViews:` before `Imports:` as suggested in this [answer][1]. Then you simply add the bioconductor libraries as `Imports:` [1]: https://bioinformatics.stackexchange.com/questions/3365/r-package-development-how-does-one-automatically-install-bioconductor-packages
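A minimal sketch of how the relevant DESCRIPTION fields could then look (keep your remaining fields and imports as they are):

    biocViews:
    Imports:
        QuasR,
        Rcpp,
        Biostrings

With the (empty) `biocViews:` field present and QuasR listed under `Imports:`, dependency-installation tools should resolve it from Bioconductor when the Docker image is built.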
biostars
{"uid": 9526272, "view_count": 686, "vote_count": 1}
Hi everyone, I have used the "findOverlaps" function in R to find which positions of my two datasets overlap. I have also used "countOverlaps" to see how many overlaps I have. What I want to do now, is to find the exact coordinates which overlapped. I was looking how to do it using findOverlaps but it has no such options and by doing a google search, I couldn't find much help. A little help on this would be greatly appreciated. Thank you
I have done this with dummy data. Let's say we have regions A (groupA) and regions B (groupB).

    library(data.table)

    groupA <- data.table(
      chr = rep("chr1", 11),
      start = seq(0, 1000, 100),
      end = seq(50, 1050, 100))
    setkey(groupA, chr, start, end)

    groupB <- data.table(
      chr = rep("chr1", 11),
      start = seq(25, 1025, 100),
      end = seq(75, 1075, 100))
    setkey(groupB, chr, start, end)

Check:

1. that your datasets are data.tables (`class(groupA)`); if not, do `setDT(groupA)`
2. that the keys are chr, start, end; if not, do `setkey(groupA, chr, start, end)`

Then:

    # Find overlaps
    over <- foverlaps(groupA, groupB, nomatch = 0)

    # Extract the exact overlapping regions
    over2 <- data.table(
      chr = over$chr,
      start = over[, ifelse(start > i.start, start, i.start)],
      end = over[, ifelse(end < i.end, end, i.end)])
biostars
{"uid": 173502, "view_count": 16309, "vote_count": 8}
I am running DESeq2 to find DEGs between multiple samples, but I'm not able to decide what type of design to use and how to arrange my data. My data has the following categories:

----------

**1. DISEASE SUBTYPE | 2. TYPE OF MUTATION**

A | mut1 | mut2 | mut3

B | mut1 | mut2 | mut3

C | mut1 | mut2 | mut3

----------

There are three different mutation backgrounds (mut1, mut2, mut3) common to each of the disease subtypes (A, B, C). **I want to compare my NORMAL HEALTHY samples with each of Amut1, Amut2, Amut3, Bmut1, Bmut2... and so on (and also make inter-category comparisons).**

How should I arrange/tidy my data for DESeq2, and what should I write for the design? Should I just compare them pairwise separately, or make a complex design for R? Any kind of help is appreciated, I'm just a beginner. Thank you!!!
I would try out two approaches. The first approach would be combining your disease subtypes and mutation factor levels for each sample. Your sample sheet would look as follows. condition sample_1 A_mut1 sample_2 A_mut2 sample_3 A_mut3 ... ... The regression formula would then just be `~ condition`. The next approach uses interaction terms in the regression, so would model both the main effects of each mutation and each disease subtype, as well as the differences in effect for each disease subtype based on the mutation. Your sample sheet would now look like the following. subtype mutation sample_1 A mut1 sample_2 A mut2 sample_3 A mut3 ... ... ... For this analysis the regression formula would be `~ subtype + mutation + subtype:mutation`. More information on multi-factor design can be found in the [DESeq2 documentation](https://bioconductor.org/packages/release/bioc/vignettes/DESeq2/inst/doc/DESeq2.html#multi-factor-designs), and a detailed explanation for working with interaction terms can be found in the help documentation for the results function, `help("results", package="DESeq2")`.
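A sketch of the first (combined-factor) approach in code, assuming `counts` is the gene-by-sample count matrix, `coldata` is the sample sheet above extended with your healthy samples, and the healthy samples are labelled "Normal":

    library(DESeq2)

    coldata$condition <- relevel(factor(coldata$condition), ref = "Normal")
    dds <- DESeqDataSetFromMatrix(countData = counts,
                                  colData   = coldata,
                                  design    = ~ condition)
    dds <- DESeq(dds)

    # e.g. A_mut1 vs Normal; repeat with the other levels for the remaining comparisons
    res_A_mut1 <- results(dds, contrast = c("condition", "A_mut1", "Normal"))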
biostars
{"uid": 461836, "view_count": 497, "vote_count": 1}
I am new to analysing ATAC-seq data. As mentioned on https://www.biostars.org/p/209592/, there seem to be two ways to use MACS to analyse ATAC-seq data. 1. Utilising the --shift -100 --extsize 200 command This, I believe, is to find where the cutting sites are. 2. Utilising the --shift 37 --extsize 73 command This is to find the nucleosomes since the DNA is wrapped around nucleosomes is circa 147bp. I specifically want to document regions of open chromatin, particularly enhancer regions. I have read in the literature that open chromatin regions will give reads <100bp. If this is correct, do I have to filter my bam file to look at only this length. Surely if I map all of the my data, not filtering for read size I will end up with genome-wide ATAC-coverage? Looking at my bam file following mapping to the genome, my read sizes range from 50bp-450bp (with a peak at around 70bp).
All these macs2 options that people use, combining shift and extsize, in my experience do not make a notable difference. As spacemorrissey mentioned, unlike ChIP-seq (with typical short single-end reads), you want to scan ATAC-seq data for cutting sites within the accessible chromatin. I always use `macs2 callpeak -g hs --nomodel -f BAMPE`, which skips the MACS-typical shifting model and piles up the entire paired-end fragment. For reliable enhancer calling, ATAC-seq is the wrong assay anyway, because even though active enhancers are accessible, by far not every accessible region is an enhancer (promoters, silencers, insulators etc...). What exactly is the question you are working on?
biostars
{"uid": 324287, "view_count": 3791, "vote_count": 1}
Hello, I am trying to recapitulate a figure in the following paper; figure 4F (both left and right panels): https://www.cell.com/cell-reports/pdf/S2211-1247(21)01473-X.pdf Does anyone know how to do these types of diagrams? I'm assuming they're done in Cytoscape. My data includes TFs in 4 different cell types, as well as the target genes (TFs) of those TFs in all 4 cell types, and then a column designating if that interaction if activating or repressing. Currently I can create a circle network diagram in Cytoscape, but the TFs aren't organized by cell type other than the node color, so it just looks like one big circle. It also filtered any repeat TFs to just one node, so if I have the same TF in multiple cell types I can't visualize that... Thank you!
Yes. This figure was made using Cytoscape (it is mentioned in the Methods). Here is how I would do it.

Data prep: As you noted, you'll need unique node names for every TF you want to represent. This means concatenating gene names with cell types in your case, e.g., Sox9_RPC and Sox9_MG. So, your data would look something like this:

    Sox9_RPC | activates | Nfia_MG
    Sox9_RPC | represses | Pax6_TT
    Nfib_MG | activates | Lhx2_TT

In Cytoscape:

1. You can load these data into Cytoscape using File>Import>Network from file.
2. Next, you want to select the nodes for each of the cell types. Using the search bar, you can use `name:*_RPC` to select all the nodes with "_RPC", for example.
3. Then run the Circular Layout *on selected nodes only*. Move that circle off to the side while it is selected. Repeat the selection and layout for each of the cell types.
4. You can use the Style panel to map edge color to the interaction type (a discrete mapping for "activates" or "represses"). You can also use a bypass on node border color to set the color of nodes per cell type while they are selected.
5. You can use the Annotation panel to add large font labels like "RPC" and "MG" for your cell types on the network (like in the figure).

These steps should reproduce that figure.
biostars
{"uid": 9519025, "view_count": 327, "vote_count": 1}
Hi Biostars,

I have been using Trimmomatic for quite some time, but realized that I don't get something. One can use the LEADING and TRAILING options to remove bases from the beginning and end of the read, respectively. From the manual:

    LEADING: Remove low quality bases from the beginning. As long as a base has a value below this threshold the base is removed and the next base will be investigated.

My question is: does "from the beginning" mean from the beginning until the end of the read? If yes, then what is the point of having the TRAILING option if the whole read is scanned? Otherwise, up to which base does Trimmomatic scan with the LEADING option?

Cheers,
According to the [documentation][1], both options take a "quality" argument:

    LEADING:quality
    leading: Cut bases off the start of a read, if below a threshold quality
    quality: Specifies the minimum quality required to keep a base.

Remove low quality or N bases. Also from [the manual][2]:

    LEADING - Remove low quality bases from the beginning. As long as a base has a value below this threshold the base is removed and the next base will be investigated.

"LEADING 3" would delete all bases below a quality threshold of 3 or that are N, beginning at the first base and continuing until the first base that has at least a quality of 3 and is not N:

    AAAGGGTTT
    012345678 - Leading 3 would cause the deletion of AAA

    AAANNNTTT
    012345678 - Leading 3 would cause the deletion of AAANNN

    AAAGGGTTT
    123456789 - Leading 3 would cause the deletion of AA

The same happens with TRAILING, but from the other end.

[1]: http://www.usadellab.org/cms/?page=trimmomatic
[2]: http://www.usadellab.org/cms/uploads/supplementary/Trimmomatic/TrimmomaticManual_V0.32.pdf
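For completeness, a typical single-end invocation using these options would look something like the following (the jar name/version and file names are placeholders):

    java -jar trimmomatic-0.32.jar SE -phred33 input.fastq.gz output.trimmed.fastq.gz \
         LEADING:3 TRAILING:3 MINLEN:36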
biostars
{"uid": 317812, "view_count": 8679, "vote_count": 3}
I am working on some human WGS data (hg38 assembly). I was wondering: is there a BED file available for excluding low-quality sites? Thank you very much in advance!
the complement of wgs_calling_regions.hg38.interval_list under https://console.cloud.google.com/storage/browser/genomics-public-data/resources/broad/hg38/v0/ ?
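If it helps, a rough sketch of how to turn that into an "excluded regions" BED file (untested; assumes you have bedtools and an hg38 chrom.sizes file):

    # the interval_list is 1-based with a SAM-style header; convert it to BED first
    grep -v '^@' wgs_calling_regions.hg38.interval_list \
      | awk 'BEGIN{OFS="\t"}{print $1, $2-1, $3}' \
      | sort -k1,1 -k2,2n > wgs_calling_regions.hg38.bed

    # the complement = everything NOT in the calling regions
    bedtools complement -i wgs_calling_regions.hg38.bed -g hg38.chrom.sizes \
      > excluded_regions.hg38.bed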
biostars
{"uid": 9542976, "view_count": 739, "vote_count": 1}
I have a FASTQ file and I'm able to run the FASTQC program to analyse the file. but when I use trim_galore, FASTQC (or the FASTQC option in trim_galore) is not working anymore. $ fastqc ./sub1_val_1.fq.gz This is the output: Started analysis of sub1_val_1.fq.gz Analysis complete for sub1_val_1.fq.gz Failed to process file sub1_val_1.fq.gz java.lang.ArrayIndexOutOfBoundsException: -1 at uk.ac.babraham.FastQC.Modules.SequenceLengthDistribution.calculateDistribution(SequenceLengthDistribution.java:100) at uk.ac.babraham.FastQC.Modules.SequenceLengthDistribution.raisesError(SequenceLengthDistribution.java:184) at uk.ac.babraham.FastQC.Report.HTMLReportArchive.startDocument(HTMLReportArchive.java:336) at uk.ac.babraham.FastQC.Report.HTMLReportArchive.<init>(HTMLReportArchive.java:84) at uk.ac.babraham.FastQC.Analysis.OfflineRunner.analysisComplete(OfflineRunner.java:155) at uk.ac.babraham.FastQC.Analysis.AnalysisRunner.run(AnalysisRunner.java:110) at java.lang.Thread.run(Thread.java:695) Is the Failed to process file an error because the version is not correct between trim_galore and FastQC? I found [this][1], [but that wasn't that helpful][2]. I'm using FastQC v0.11.5 and trim_galore v0.4.1. I subsetted a library (reads in paired-end) using this: seqtk sample -s100 ./SRR2937435_1.fastq.gz 10000 | gzip > sub1.fastq.gz seqtk sample -s100 ./SRR2937435_2.fastq.gz 10000 | gzip > sub2.fastq.gz The sub1_val_1.fq.gz file was after passing sub1.fastq.gz into trim_galore. FastQC with sub1.fastq.gz is working. [1]: http://seqanswers.com/forums/archive/index.php/t-4846.html [2]: https://bugs.launchpad.net/ubuntu/+source/fastqc/+bug/1443275
I found the answer: you have to uncompress the files first. It seems this version of Trim Galore only worked for me with uncompressed FASTQ and not with fastq.gz:

    gzip -dk sub1.fastq.gz
    gzip -dk sub2.fastq.gz
    trim_galore --illumina --paired --fastqc sub1.fastq sub2.fastq
biostars
{"uid": 204664, "view_count": 6137, "vote_count": 4}
Hello, I'm fighting with my awk command since yesterday. I have a file (locus.txt), this is some IgH locus from mm10 (I don't have header but you have : chr, start, end, strand and name_of_the_locus, separated by tab) chr12 113363298 113365156 - gamma3 chr12 113330756 113338695 - gamma1 chr12 113308036 113314227 - gamma2b chr12 113274557 113277035 - gammaepsilon chr12 113260153 113264625 - alpha chr12 113289248 113295541 - gamma2a chr12 113423027 113426701 - muIgh chr12 113225832 113255223 - 3'RR chr12 113416247 113418358 - IgD What I want to do is to grab the minimum position in this file, so the minimum position in start column (second column : `113225832`, for 3'RR) Then, I want to substract all my position with this minimum and rearrange the file like this gamma3 137466 139324 gamma1 104924 112863 ...etc ---------- **What I have tried so far** Search for minimum value, saved in $min : min=`awk -v min=1000000000 '{if($2<min){min=$2}}END{print min}' locus.txt` Then substract position and rearrange the file : awk -F $'\t' '{$1=$4=""; print $5"\t"$2-$min"\t"$3-$min}' locus.txt But I got this : gamma3 0 1858 gamma1 0 7939 gamma2b 0 6191 gammaepsilon 0 2478 alpha 0 4472 gamma2a 0 6293 muIgh 0 3674 3'RR 0 29391 IgD 0 2111 The only correct result is `29391` for 3'RR Seems not like a complex problem but I can't find a way out of this... I bet on a casting problem but i'm not even sure. Thanks for your help !
To pass a variable to awk you can use the `-v` option like this (not tested): awk -v min=$min -F $'\t' '{$1=$4=""; print $5 "\t" $2 - min "\t" $3 - min}' locus.txt
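Alternatively, you can do the whole thing in a single awk call by reading the file twice (untested sketch):

    awk -F'\t' 'NR==FNR { if (min=="" || $2<min) min=$2; next }
                { printf "%s\t%d\t%d\n", $5, $2-min, $3-min }' locus.txt locus.txt

The first pass finds the minimum start position; the second pass prints the rearranged, shifted coordinates.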
biostars
{"uid": 333887, "view_count": 1018, "vote_count": 2}
On the Julia forum, there was the suggestion of creating a new "flavor" of the BioStar Handbook, but using Julia as programming language for the code examples. https://discourse.julialang.org/t/biostar-handbook-computational-genomics-and-julia-to-be-or-not-to-be/25732 Is this allowed under the current license, and if so, would there be any interest from the current authors in collaborating on this?
Code examples may be translated into any other language. What is not permitted is re-publishing another version of the book where any of the content is copied verbatim and only the code is changed. As always, *Fair Use* applies, and I consider myself a supporter of [Fair Use](https://en.wikipedia.org/wiki/Fair_use) policies.
biostars
{"uid": 387809, "view_count": 1114, "vote_count": 4}
I'm going through the instructions page on https://gatkforums.broadinstitute.org/gatk/discussion/1601/how-can-i-prepare-a-fasta-file-to-use-as-reference Specifically, the command I don't see how to do is: java -jar CreateSequenceDictionary.jar R= Homo_sapiens_assembly18.fasta O= Homo_sapiens_assembly18.dict [Fri Jun 19 14:09:11 EDT 2009] net.sf.picard.sam.CreateSequenceDictionary R= Homo_sapiens_assembly18.fasta O= Homo_sapiens_assembly18.dict [Fri Jun 19 14:09:58 EDT 2009] net.sf.picard.sam.CreateSequenceDictionary done. Runtime.totalMemory()=2112487424 44.922u 2.308s 0:47.09 100.2% 0+0k 0+0io 2pf+0w I think that `CreateSequenceDictionary.jar` comes from Picard, so I downloaded that from https://broadinstitute.github.io/picard/, but I don't see `CreateSequenceDictionary.jar` anywhere in the directory. However, I do see `CreateSequenceDictionary.java` I assume that `.jar` files are analogous to C executables, and `.java` files are analogous to `.c` human-readable code. Going through the Picard readme file, I see that I should execute `./gradlew shadowJar` but this build fails on two different computers that I'm on. So I can't make/get `CreateSequenceDictionary.jar` I'm at a loss, how do I generate this dict file?
Picard is a wrapper command that runs the subcommands; there is no separate jar for each subcommand. Simply run `java -jar picard.jar` and check the output printed to screen for the subcommand you want. Then run `java -jar picard.jar <subcommand>`.
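For the dictionary in the linked instructions, that would be something like:

    java -jar picard.jar CreateSequenceDictionary \
         R=Homo_sapiens_assembly18.fasta \
         O=Homo_sapiens_assembly18.dict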
biostars
{"uid": 416404, "view_count": 7267, "vote_count": 2}
Where can I get files of the hg19 exon, intron and UTR regions? I read lots of posts on Biostars, but they are all quite old.
Get the respective GTF (annotation) file for your genome. Once you have this you can basically follow https://www.biostars.org/p/112251/#314840 to get the respective features. GTFs can be found at NCBI, Ensembl or GENCODE.
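As a rough sketch of the idea in that thread (untested; assumes a GENCODE/Ensembl-style GTF for hg19 and bedtools):

    # exons
    awk -F'\t' '$3=="exon"{print $1"\t"$4-1"\t"$5}' gencode.v19.annotation.gtf \
      | sort -k1,1 -k2,2n | bedtools merge > exons.bed

    # introns = gene bodies minus exons
    awk -F'\t' '$3=="gene"{print $1"\t"$4-1"\t"$5}' gencode.v19.annotation.gtf \
      | sort -k1,1 -k2,2n | bedtools merge > genes.bed
    bedtools subtract -a genes.bed -b exons.bed > introns.bed

UTRs can be pulled out the same way if the annotation uses a "UTR" (or "five_prime_utr"/"three_prime_utr") feature type.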
biostars
{"uid": 400463, "view_count": 1716, "vote_count": 1}
Hi All, Could you please help me how to explain different methods for differential expression analysis such as edgeR, Limma, DESeq etc to biologist or non-bioinformatician. Thanks in advance!
*DESeq* and *EdgeR* are very similar and both assume that no genes are differentially expressed. *DESeq* uses a "*geometric*" normalisation strategy, whereas *EdgeR* is a weighted mean of log ratios-based method. Both normalise data initially via the calculation of size / normalisation factors. *Limma* / *Voom* is different in that it normalises via the very successful (for microarrays) quantile nomalisation, where an attempt is made to match gene count distributions across samples in your dataset. It can somewhat loosely be viewed as scaling each sample's values to be between the min and max values (across all samples). Thus, the final distributions will be similar. **Note added August 19, 2020:** for *limma*, if we are referring to microarrays and not RNA-seq, then normalisation will be performed by *affy* or *oligo* for Affymetrix arrays, while *limma* has functionality to normalise Illumina and Agilent arrays. Here is further information (important parts in bold): #DESeq2 > DESeq: This normalization method [14] is included in the DESeq > Bioconductor package (version 1.6.0) [14] and is **based on the > hypothesis that most genes are not DE**. **A DESeq scaling factor for > a given lane is computed as the median of the ratio, for each gene, of > its read count over its geometric mean across all lanes.** The > underlying idea is that non-DE genes should have similar read counts > across samples, leading to a ratio of 1. **Assuming most genes are not > DE, the median of this ratio for the lane provides an estimate of the > correction factor that should be applied to all read counts of this > lane to fulfill the hypothesis**. By calling the estimateSizeFactors() > and sizeFactors() functions in the DESeq Bioconductor package, this > factor is computed for each lane, and raw read counts are divided by > the factor associated with their sequencing lane. *[source: https://www.ncbi.nlm.nih.gov/pubmed/22988256]* #EdgeR > Trimmed Mean of M-values (TMM): This normalization method [17] is > implemented in the edgeR Bioconductor package (version 2.4.0). It is > **also based on the hypothesis that most genes are not DE**. The TMM factor is computed for each lane, with one lane being considered as a > reference sample and the others as test samples. For each test sample, > **TMM is computed as the weighted mean of log ratios between this test and the reference, after exclusion of the most expressed genes and the > genes with the largest log ratios.** According to the hypothesis of > low DE, this TMM should be close to 1. If it is not, its value > provides an estimate of the correction factor that must be applied to > the library sizes (and not the raw counts) in order to fulfill the > hypothesis. The calcNormFactors() function in the edgeR Bioconductor > package provides these scaling factors. To obtain normalized read > counts, these normalization factors are re-scaled by the mean of the > normalized library sizes. Normalized read counts are obtained by > dividing raw read counts by these re-scaled normalization factors. *[source: https://www.ncbi.nlm.nih.gov/pubmed/22988256]* #Limma / Voom > Quantile (Q): First proposed in the context of microarray data, this > **normalization method consists in matching distributions of gene counts** > across lanes [22, 23]. It is implemented in the Bioconductor package > limma [31] by calling the normalizeQuantiles() function. *[source: https://www.ncbi.nlm.nih.gov/pubmed/22988256]*
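If it helps to show a non-bioinformatician what these normalisations actually produce, the per-sample scaling factors are easy to pull out (a sketch, assuming a count matrix `counts`, a sample table `coldata` and a `condition` column):

    library(DESeq2)
    library(edgeR)

    # DESeq2: "median of ratios" size factors
    dds <- DESeqDataSetFromMatrix(counts, coldata, design = ~ condition)
    dds <- estimateSizeFactors(dds)
    sizeFactors(dds)

    # edgeR: TMM normalisation factors
    y <- calcNormFactors(DGEList(counts = counts))
    y$samples$norm.factors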
biostars
{"uid": 284775, "view_count": 48855, "vote_count": 27}
Hi, my question is regarding the most commonly used differential expression tools (edgeR, DESeq2, limma). As they were created before UMIs existed, can UMI count data (where the overall counts are quite low for all entries, making them look like rare, lowly expressed genes) be used with their statistical methods and normalization and still give proper DEG results? I'm not a statistician, but I know that changing the scale of the data, dispersion, etc. may introduce bias into the results.
UMI counts will work perfectly fine for these tools and should actually better match the statistical assumptions underlying them than regular read counts (due to the additional noise from PCR duplicates here). The issue is more when you get to drop-outs (missed signal), which can occur in things like scRNA-seq. In practice that still seems to be handled well enough by these tools.
biostars
{"uid": 9524611, "view_count": 312, "vote_count": 1}
Hello Everyone, I have a very basic question. Having little experience in Command Line, I am stumped on practically the first step. I am downloading the following file from the Clustal Omega website: [Source code .tar.gz (1.2.1)][1] However, to run this in terminal, I have to install argtable2. So I went and downloaded the following from the website: [argtable2-13.tar.gz][2] I placed both these into a single folder, and in terminal setup the directory to the argtable2 folder via cd /filepath and typed: ./configure A ton of checks are done which ends like this: ``` configure: creating ./config.status config.status: creating Makefile config.status: creating example/Makefile config.status: creating src/Makefile config.status: creating doc/Makefile config.status: creating doc/argtable2.3 config.status: creating doc/argtable2.html config.status: creating tests/Makefile config.status: creating argtable2.pc config.status: creating argtable2-uninstalled.pc config.status: creating src/config.h config.status: src/config.h is unchanged config.status: executing depfiles commands config.status: executing libtool commands nat-oitwireless-inside-vapornet100-c-20170:argtable2-13 Plosslab$ make Making all in src /Library/Developer/CommandLineTools/usr/bin/make all-am Making all in tests make[1]: Nothing to be done for `all'. Making all in doc cp argtable2.3 argtable.3 make[1]: Nothing to be done for `all-am'. ``` **I then set the directory to the clustalO folder and configure and it ends like this**: Could not find argtable2.h. Try $ ./configure CFLAGS='-Iyour-argtable2-include-path **The INSTALL text file says I should type this line**: ./configure CFLAGS='-I/opt/local/include' LDFLAGS='-L/opt/local/lib' **I then get the following error**: configure: error: Could not find argtable2.h. Try $ ./configure CFLAGS='-Iyour-argtable2-include-path **I try to follow the recommendation as such**: ./configure CFLAGS='-/lab/argtable2-13' **And finally I get the following error, and I am stumped**: ``` checking whether the C compiler works... no configure: error: in `/Users/lab/argtable2-13': configure: error: C compiler cannot create executables ``` Anyone have any ideas as to why this isn't working? Did I download the wrong version of argtable? Am I typing in the wrong commands? Any advice would be thoroughly appreciated. [1]: http://www.clustal.org/omega/clustal-omega-1.2.1.tar.gz [2]: http://prdownloads.sourceforge.net/argtable/argtable2-13.tar.gz
What works for me: 1\. Install Xcode and then install the command-line utilities, which adds Clang and Clang++ and related tools (basically, the equivalent of the GCC compiler kit for the purposes of compiling and building C and C++ projects). If you are on OS X Mavericks, you can install the command-line tools directly via the Terminal [via these instructions][1]. If you are on OS X Yosemite, you might use [these instructions][2] to set up the CLT. Or you can just download and install CLT [via the Apple Developer Connection site][3]. 2\. Once the CLT are installed, install Homebrew: $ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" 3\. Use Homebrew to install `argtable`: $ brew install argtable 4\. Once `argtable` is installed, download source for Clustal Omega, unpack it and compile it: ``` $ wget -qO- http://www.clustal.org/omega/clustal-omega-1.2.1.tar.gz > clustal-omega-1.2.1.tar.gz $ tar zxvf clustal-omega-1.2.1.tar.gz ... $ cd clustal-omega-1.2.1 $ ./configure CFLAGS="-I/usr/local/include" LDFLAGS="-L/usr/local/lib" --prefix="/usr/local" ... $ make ... $ sudo make install ... ``` Clustal Omega is now in `/usr/local/bin`: ``` $ which clustalo /usr/local/bin/clustalo $ clustalo --version 1.2.1 ``` [1]: http://osxdaily.com/2014/02/12/install-command-line-tools-mac-os-x/ [2]: http://railsapps.github.io/xcode-command-line-tools.html [3]: https://developer.apple.com/library/ios/technotes/tn2339/_index.html#//apple_ref/doc/uid/DTS40014588-CH1-DOWNLOADING_COMMAND_LINE_TOOLS_IS_NOT_AVAILABLE_IN_XCODE_FOR_OS_X_10_9__HOW_CAN_I_INSTALL_THEM_ON_MY_MACHINE_
biostars
{"uid": 128261, "view_count": 15490, "vote_count": 5}
Hi Biostars, Is it fine to use STAR for bacterial data (no splicing)? Any comments/suggestions are highly appreciated. Thanks
It should work but you need to specify `--alignIntronMax 1` to force STAR to avoid splice alignments. Also during the genome index generation step you should put `--genomeSAindexNbases` to `min(14, log2(GenomeLength)/2 - 1)` if your bacterial genome of interest is relatively small. GenomeLength is in base pairs
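For example, for a ~4 Mb bacterial genome (log2(4e6)/2 - 1 ≈ 10), the two steps might look like this (untested; file names are placeholders):

    STAR --runMode genomeGenerate --genomeDir star_index \
         --genomeFastaFiles genome.fa --genomeSAindexNbases 10

    STAR --genomeDir star_index --readFilesIn reads_1.fastq reads_2.fastq \
         --alignIntronMax 1 --outSAMtype BAM SortedByCoordinate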
biostars
{"uid": 280579, "view_count": 4650, "vote_count": 2}
Dear all, considering an RNA-seq experiment and analysis that provides the expression values as TPM, please would you let me know what a minimum TPM value is in order to consider a gene expressed? Talking about RPKM/FPKM units, I remember that a gene was considered expressed if RPKM (or FPKM) > 1 ... thanks a lot, -- bogdan
I do not believe there is any definitive answer. There are so many factors that go into each experiment such that it is difficult to pick a value. A RPKM / FPKM value of 1 seems quite low to me, i.e., in 'error' territory. What you have to consider is the distribution of your data and the suitability of it for whatever downstream tools you will use. If including low-count / low expressed genes is going to distort your data distribution and introduce biases, then you need to remove them - check via histograms. From RNA-seq, most genes are lowly expressed, possibly due to transcriptional 'noise' more than anything else. I say 'noise' in the knowing that they may reflect genuine transcription but have no regulatory function and are artifacts of other transcriptional processes that have occurred. They may also reflect regions where TF binding and/or promoter activity was weak. So, you have the liberty to choose your own cut-off for TPM and state it in the methods. :) Please take the time to read Gordon's answer, here: https://support.bioconductor.org/p/98820/#98875 Kevin Edit: another interesting discussion: https://www.researchgate.net/post/How_to_determine_whether_a_gene_is_expressed_in_RNA-seq2
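A quick way to see where a given cut-off falls in your own data (a sketch, assuming `tpm` is a genes x samples matrix):

    hist(log10(rowMeans(tpm) + 1), breaks = 100,
         main = "Distribution of mean TPM", xlab = "log10(mean TPM + 1)")

    # e.g. keep genes with TPM > 1 in at least 3 samples
    keep <- rowSums(tpm > 1) >= 3
    table(keep)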
biostars
{"uid": 366965, "view_count": 12865, "vote_count": 1}
There exists a lot of literature on distinguishing driver mutations from passengers. I am trying to build my own deep learning model to do the same, but I am facing some potential issues. First, I have downloaded the COSMIC mutation data and used the FATHMM labels to designate drivers (positive examples) in my dataset. I am sceptical about using passengers from COSMIC as they may be false negatives, so I turned to the 1000 Genomes Project to download SNVs (to construct my negative examples). I am unsure if this is correct; however, I have seen some papers do the same. Do I need to apply any filters to the 1000 Genomes SNV data to construct the final dataset? One such paper talks of using SNVs with a global minor allele frequency ≤ 1%.
This area of research is difficult because each person has their own opinion about what constitutes a 'driver' and 'passenger' mutation. For example, I disagree with the 1000 Genomes approach because I already know that some 1000 Genomes polymorphisms that have appreciable minor allele frequencies of around 15% in Caucasians can drive ER-positive breast cancer. To be frank: we don't have a clue what >90% of the variants listed in dbSNP / 1000 Genomes are doing. A large proportion of them could ultimately be driving cancer and other diseases. If I were you, I would not try to define on my own what is / is not a driver and passenger. Why not utilise the work that has already been published and then follow their guidance for your deep learning model? Look at this paper, published in the highly reputable *Cell* journal: - <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6029450/">Comprehensive Characterization of Cancer Driver Genes and Mutations</a>. Here is another one, published a couple of weeks ago in *Nature Genetics*: - <a href="https://www.nature.com/articles/s41588-019-0572-y">Identification of cancer driver genes based on nucleotide context</a> I also just stumbled upon this online platform, which focuses on drivers: - <a href="https://www.intogen.org/search">intOGen</a> Build upon the work that is already out there. Then, at least, if you try to publish your work, it will be more difficult for reviewers to criticise you. By the way, I am not sure why you are using FATHMM, or did you mean FATHMM-MKL, as mentioned <a href="https://cancer.sanger.ac.uk/cosmic/analyses">HERE</a> on COSMIC's page. GWAVA and Funseq2 were designed for somatic mutations ( see here: https://www.biostars.org/p/286364/#286483 ). Kevin
biostars
{"uid": 425580, "view_count": 831, "vote_count": 1}
Example input: multi-sample VCF (adapted from [www.internationalgenome.org][1]): **Note:** my actual file is bgzipped and tabixed with ~2Mln variants (rows) and ~1000 samples (columns). ##fileformat=VCFv4.0 ##fileDate=20090805 ##source=myImputationProgramV3.1 ##reference=1000GenomesPilot-NCBI36 ##phasing=partial ##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of Samples With Data"> ##INFO=<ID=DP,Number=1,Type=Integer,Description="Total Depth"> ##INFO=<ID=AF,Number=.,Type=Float,Description="Allele Frequency"> ##INFO=<ID=AA,Number=1,Type=String,Description="Ancestral Allele"> ##INFO=<ID=DB,Number=0,Type=Flag,Description="dbSNP membership, build 129"> ##INFO=<ID=H2,Number=0,Type=Flag,Description="HapMap2 membership"> ##FILTER=<ID=q10,Description="Quality below 10"> ##FILTER=<ID=s50,Description="Less than 50% of samples have data"> ##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype"> ##FORMAT=<ID=GQ,Number=1,Type=Integer,Description="Genotype Quality"> ##FORMAT=<ID=DP,Number=1,Type=Integer,Description="Read Depth"> ##FORMAT=<ID=HQ,Number=2,Type=Integer,Description="Haplotype Quality"> #CHROM POS ID REF ALT QUAL FILTER INFO FORMAT NA00001 NA00002 NA00003 20 14370 rs6054257 G A 29 PASS NS=3;DP=14;AF=0.5;DB;H2 GT:GQ:DP:HQ 0/0:48:1:51,51 1/0:48:8:51,51 1/1:43:5:.,. 20 17330 . T A 3 q10 NS=3;DP=11;AF=0.017 GT:GQ:DP:HQ 0/0:49:3:58,50 0/1:3:5:65,3 0/0:41:3 20 1110696 rs6040355 A G,T 67 PASS NS=2;DP=10;AF=0.333,0.667;AA=T;DB GT:GQ:DP:HQ 1/2:21:6:23,27 2/1:2:0:18,2 2/2:35:4 21 1230237 . T . 47 PASS NS=3;DP=13;AA=T GT:GQ:DP:HQ 0/0:54:7:56,60 0/0:48:4:51,51 0/0:61:2 21 1234567 microsat1 GTCT G,GTACT 50 PASS NS=3;DP=9;AA=G GT:GQ:DP 0/1:35:4 0/2:17:2 1/1:40:3 Expected output: **20.txt** NA00001 NA00002 NA00003 0/0 1/0 1/1 0/0 0/1 0/0 1/2 1/2 2/2 **21.txt** NA00001 NA00002 NA00003 0/0 0/0 0/0 0/1 0/2 1/1 I was thinking of using `cut | sed` combo with some `regex`, but thought there must be already some tool out there, maybe *[bcftools][2]* (couldn't get the right flags to work) ? Any other ideas? [1]: http://www.internationalgenome.org/wiki/Analysis/Variant%20Call%20Format/vcf-variant-call-format-version-40/ [2]: http://www.htslib.org/doc/bcftools.html
bcftools annotate -x '^FORMAT/GT' input.vcf.gz | grep -v "^##" | cut -f 10-
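If you also want the separate per-chromosome files shown in your expected output, one way (a sketch, not tested on your data; it assumes sample columns start at field 10, as in a standard VCF) is to pipe the same command into `awk` and split on the first column:

    bcftools annotate -x '^FORMAT/GT' input.vcf.gz \
      | grep -v "^##" \
      | awk -v OFS='\t' '
          # header line: remember the sample names (columns 10 and up)
          /^#CHROM/ { for (i = 10; i <= NF; i++) header = (i == 10 ? $i : header OFS $i); next }
          # data lines: write the genotype columns to <chrom>.txt
          { out = $1 ".txt"
            if (!(out in seen)) { print header > out; seen[out] = 1 }
            line = ""
            for (i = 10; i <= NF; i++) line = (i == 10 ? $i : line OFS $i)
            print line > out }'

With the example VCF above this produces `20.txt` and `21.txt`, each with a sample-name header row followed by the GT values.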
biostars
{"uid": 350404, "view_count": 2539, "vote_count": 1}
Hello Biostars community, I'm trying to analyze miRNASeq data of TCGA melanoma (SKCM) samples. This is a prototype analysis in which I picked `hsa-mir-155` and looked at how its expression is correlated with survival. My end goal is expanding this analysis to other cancer types also focusing on other miRNAs. I performed my analyses with two of the popular packages in R (RTCGA and TCGAbiolinks) and obtained quite different results. I'm trying to make sense of what might be causing the different results. Any help is appreciated. For comparison purposes, I'm attaching some figures and my codes as well. ![Kaplan-Meier curves][1] Overall Kaplan-Meier curves look quite different. These plots were generated on the whole cohort (unsegregated for any parameter). I'm including the plot from `oncoLNC.org` database as well for comparison purposes. RTCGA looks more like the `oncoLNC` curve. You can see similar differences when the data is segregated based on `patient.gender`: ![KM_curves_per_gender][2] When I segregate patients based on the expression of `hsa-mir-155` (top and bottom thirds), the differences become more obvious: ![mir155_top_bottom_thirds][3] To understand what might be different in the datasets I exported the data from both packages and performed a comparison. [Linked][4] excel file shows the comparisons including clinical details (`days_to_death` and `days_to_last_follow_up`) and gene expression values of `hsa-mir-155` (both `read_count` and `reads_per_million_miRNA`. I noticed that there are considerable differences between two datasets. I'm pretty sure, the way I organized the data is ok and I don't think the differences are due a mistake in data manipulation in R. The code I used for analyses can be found at [RTCGA_v1][5] and [TCGABiolinks_v1][6] Please let me know why you think there is a discrepancy here. I'm pretty new to this type of analyses and hopefully, I didn't miss something silly. Thank you very much in advance for your insights, Atakan [1]: https://s22.postimg.cc/f3513t4wx/Untitled.jpg [2]: https://s22.postimg.cc/h8zbxuq8h/untitled2.jpg [3]: https://s22.postimg.cc/ra4678vmp/Untitled3.jpg [4]: https://www.dropbox.com/s/acy1ubbk1fkdhtj/biolinks_vs_rtcga.xlsx?dl=0 [5]: https://www.dropbox.com/s/d99djwqkuwxm1qn/RTCGA_v1.R?dl=0 [6]: https://www.dropbox.com/s/lw2dkk9fb2wa4q8/TCGAbiolinks_v1.R?dl=0
Alright, I think I figured out what's going on. The biggest difference in the results was caused by a point I had missed during pre-processing. Please see below:

`RTCGA` ships an already preprocessed data frame with clinical parameters. In this data frame there is a `times` variable that combines both `days_to_death` and `days_to_last_follow_up`. That way you can pass the `times` variable to the `survfit()` function and get the result in one step.

For the `TCGAbiolinks` data I needed to create this variable myself. Previously I overlooked this point and fitted the survival model using only the `days_to_death` variable. This effectively discards the data from censored (still alive) patients, skewing the results.

I also compared the `legacy` and `harmonized` `miRNA` datasets using the TCGAbiolinks package. Results are shown below, all side by side. As an example, two different parameters are plotted: `patient.gender` and `hsa-mir-155` expression. The `harmonized` and `legacy` data use the same clinical annotation file, which is why the gender plots look the same.

With this realization, `RTCGA` and `TCGAbiolinks` give comparable results. Whew!

![KM comparison][1]

Thank you all for your insights!

Best,
Atakan

  [1]: https://s15.postimg.cc/91i5k280r/comparison.jpg
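For anyone hitting the same issue, a minimal sketch of how the combined time variable can be built before fitting the model (the column names `vital_status`, `days_to_death`, `days_to_last_follow_up` and `gender` are assumptions based on the usual GDC clinical fields; adjust to your own data):

    library(survival)

    # clin is the clinical data.frame retrieved via TCGAbiolinks
    clin$status <- ifelse(clin$vital_status == "Dead", 1, 0)      # event indicator
    clin$times  <- ifelse(clin$status == 1,
                          clin$days_to_death,                     # deceased: time to death
                          clin$days_to_last_follow_up)            # alive: censored at last follow-up

    fit <- survfit(Surv(times, status) ~ gender, data = clin)
    plot(fit)

Using `Surv(times, status)` keeps the censored patients in the model instead of silently dropping them.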
biostars
{"uid": 319981, "view_count": 3400, "vote_count": 5}
This dataset contains results for a specific disease-gene-test setting. The dataset looks like this:

    | Test | Gene | Relevance | Values |

Test and Gene are the two parameters on the x and y axes, and Values are the combined results for each Test-Gene pair. The problem for me here is that I have one more parameter, called Relevance, which represents the relevance of the test-gene pair and is boolean (only two values, YES/NO).

- Relevance should be differentiated in the map with different colours (like red and green).
- The gradient of that colour should represent the numerical Values for that interaction.

The end result I am aiming at: Test on the x-axis, Gene on the y-axis, and a map for these interactions that uses only two colours (representing the Relevance values), with the gradient of each colour representing Values. Is it possible to achieve this kind of heatmap, and if yes, how? If not, is there any other option to display this kind of data (similar to a heatmap)?

Help appreciated!!

Thanks,

RDS

Something like this: ![something like this](http://www.mathworks.com/matlabcentral/fx_files/24253/2/heatmap.png)
I am sure you can do it in R, in a slightly fancier way, using the `ggplot2` library and having your data in the form of a data frame. Check these two posts on how to achieve it: [ggplot2-quick-heatmap-plotting](http://learnr.wordpress.com/2010/01/26/ggplot2-quick-heatmap-plotting/) and [Constructing an Heatmap of "Distance of binding region relative to TSS"](http://biostars.org/p/47432/).

Cheers
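As a rough sketch of one way to get both encodings in `ggplot2` (untested; the column names follow the question): give the Values a sign according to Relevance and use a diverging fill scale, so one colour family means YES, the other means NO, and the shade encodes the value.

    library(ggplot2)

    # df has columns: Test, Gene, Relevance (YES/NO), Values
    df$signed <- ifelse(df$Relevance == "YES", df$Values, -df$Values)

    ggplot(df, aes(x = Test, y = Gene, fill = signed)) +
      geom_tile() +
      scale_fill_gradient2(low = "red", mid = "white", high = "green",
                           midpoint = 0, name = "Value\n(red = NO, green = YES)") +
      theme_minimal() +
      theme(axis.text.x = element_text(angle = 45, hjust = 1))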
biostars
{"uid": 56291, "view_count": 17291, "vote_count": 2}
1. How did it come to be that the alternate nucleotide was more frequent than the reference nucleotide?
2. How does one account for this phenomenon when designing a strategy to filter for variants of interest? Should I go through the complicated process of selecting those individuals who DO NOT have the variant and calculate that the REFERENCE frequency in the population is probably around (1 - esp6500siv_all)?

I am researching a rare disease and have whole exome sequence data with the corresponding variant calls. Each variant call has been passed to ANNOVAR and, among other data, we have looked up the frequency of the variant in the esp6500siv2_all data. Clearly a variant that was observed to have a high frequency in our sample but that had a low frequency in esp6500siv2_all would be of disproportionate interest.

Lo and behold, I was surprised to find that 13% of all of our variants (4055 out of 32131) had an allele frequency greater than 0.5. How can that be? I expected that all the allele frequencies would be <0.5. I had thought that the variants would be akin to a minor allele frequency (MAF). Clearly I was wrong. I pulled 3 random variants from among the variants that had a frequency above 0.5, to check them against the Exome Variant Server.

        avsnp147 Chr     Start       End Ref Alt Gene.refGene esp6500siv2_all
    1: rs3803530  15  89632842  89632842   C   A         KIF7          0.5373
    2:  rs621383   3 125118840 125118840   T   C      SLC12A8          0.9988
    3:  rs633561  11  64229857  64229857   A   G       NUDT22          0.9418

Looking these up at the NHLBI Exome Sequencing Project (ESP) [Exome Variant Server][2], using the All Allele counts:

1. rs3803530: C>A; A=6984/C=6014, which means A is 6984/(6984+6014) or 0.537
2. rs621383: T>C; C=12479/T=15, which means C is 12479/(12479+15) or 0.999
3. rs633561: A>G; G=12240/A=756, which means G is 12240/(12240+756) or 0.941

  [1]: https://drive.google.com/file/d/1jA7Azm2f1ss_rw3NoVgMSJ75tBqAf_qt/view?usp=sharing
  [2]: http://evs.gs.washington.edu/EVS/
This is due to the fact that the very reference genomes that we use for re-alignment are themselves based on individuals who carry rare risk alleles. Thus, when we call variants against these genomes, we are, at many loci, comparing against rare disease risk alleles.

As the best/worst example (depending on your point of view), hg19 / GRCh37 was used for more than a decade as the primary reference genome, yet ~70% of the genomic sequence of this genome was based on a single individual from the Buffalo area, New York, USA. Amongst the many thousands of rare disease susceptibility alleles that this individual carried was one called *Factor V Leiden*, which statistically significantly increases the risk of deep vein thrombosis (DVT). If you're researching DVT (I was), you have to be aware of this.

Thus, if I perform exome-seq on an individual who does not have *Factor V Leiden* and re-align the data to hg19 / GRCh37, the *Factor V Leiden* variant position will show a SNV, because the reference allele in my patient sample (which doesn't increase risk of DVT) is being compared against the disease allele that's contained in the very reference genome against which I'm re-aligning my data. Without careful screening, I may assume that my patient has increased risk of DVT, erroneously so.

There was a publication on this listed in PubMed but it's very difficult to find, even by Google. It's a critical problem, yet it has not received the attention that it deserves. **Edit June 2, 2021: much later, I found it: <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3732491/">THE REFERENCE HUMAN GENOME DEMONSTRATES HIGH RISK OF TYPE 1 DIABETES AND OTHER DISORDERS</a>**

The situation improved with hg38 / GRCh38, as this reference build was based on many more individuals, but the same problems still persist, broadly speaking. So, you really have to get to know your target panel and all of these nuances related to whatever variants you're studying, particularly if you're dealing with live patient data.

Kevin

---

**Update 3rd January 2018**

It has come to my attention that there is an automated method to search for these types of variants in your VCF:

- http://rmahunter.bioinf.me/
- https://github.com/bioinf/RMAhunter
biostars
{"uid": 282029, "view_count": 7002, "vote_count": 9}
I have a series of fastq files (with up to 4000 reads in each) that I want to parse based on the time of sequencing. In the fastq header, the date/time is listed as "start_time=2017-10-09T18:54:24z". If I wanted to extract all sequences between 18:00 hours and 20:00 hours, is there a tool I can use to find and extract them?
I have now written a Python script for this. Execute it as `python timefilt.py part1.fastq.gz --time_from 2017-10-09T18:00:00Z --time_to 2017-10-09T20:00:00Z`: https://gist.github.com/wdecoster/1ab9adac7c8095498ff91ee22468eaac
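In case the gist moves, the core idea is simple enough to sketch (a hypothetical, untested snippet, not the gist's code; it assumes every header carries a `start_time=` field in ISO 8601 format, so the timestamps compare correctly as strings):

    import gzip
    import re
    import sys

    time_from = "2017-10-09T18:00:00"
    time_to = "2017-10-09T20:00:00"
    pattern = re.compile(r"start_time=(\S+)")

    with gzip.open(sys.argv[1], "rt") as fq:
        while True:
            header = fq.readline()
            if not header:
                break
            seq, plus, qual = fq.readline(), fq.readline(), fq.readline()
            m = pattern.search(header)
            # keep the 4-line record if its start_time falls inside the window
            if m and time_from <= m.group(1).rstrip("Zz") <= time_to:
                sys.stdout.write(header + seq + plus + qual)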
biostars
{"uid": 337503, "view_count": 2547, "vote_count": 1}
Hey, I imagine this is an odd question for most of you, but it would be really useful for me to know your opinion on this. I already have some experience with bash: I know the basic commands and I even run for loops and write some functions, but I have now reached a point where I have the impression I can't go forward without help. However, I've been trying to find tutorials or anything else that teaches me a bit more, without success. I'm on DataCamp, but they don't have many courses on bash (although the ones they have helped me quite a lot), and on the internet I did not find any practical help. So my question is: where could I find some tutorials/information on more advanced bash? I still need to learn a lot and I have plenty of (I imagine) simple doubts (for example, is it better to use a for loop or a function, how to optimize bash scripts, ...), but without external help I don't know how to answer them. What did you do when you reached this point? I'm a biologist without a background in informatics, so I imagine that does not help my situation. Whatever option you suggest will be more than welcome! Thanks!!
The Linux Documentation Project has an advanced tutorial, the Advanced Bash-Scripting Guide: https://tldp.org/LDP/abs/html/index.html
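On the loop-versus-function question from the post: they solve different problems and you usually combine them, a function packages a task so it can be reused and tested, a loop applies it to many inputs. A tiny generic illustration (not tied to any particular pipeline):

    #!/usr/bin/env bash
    set -euo pipefail

    # a function packages one task
    count_reads() {
        local fq=$1
        echo -e "$fq\t$(( $(zcat "$fq" | wc -l) / 4 ))"
    }

    # a loop applies that task to many inputs
    for fq in *.fastq.gz; do
        count_reads "$fq"
    done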
biostars
{"uid": 471806, "view_count": 657, "vote_count": 1}
Hi there, I'm using bowtie2 to align some fastq files downloaded from [ENA][1] I've first run a quality check to see if there is something annoying, but in general terms it is fine. Then, I've run bowtie-2 as follow: bowtie2 -x human_index_bowtie/human_index_bwt -U SRR11557616.fastq.gz -S 5h.sam But I got this error message: Error: Read SRR11557616.1 ugc_599_6_F3_0002_0/1 has more quality values than read characters. terminate called after throwing an instance of 'int' Aborted (core dumped) (ERR): bowtie2-align exited with value 134 the head of the file is a bit weird: head -n 8 @SRR11557614.1 ugc_599_3_F3_0001_0 length=74 T3.0321.1000.133...11...011..20...11...21...10...11...31....0....2....1.... +SRR11557614.1 ugc_599_3_F3_0001_0 length=74 !aB_aaaBa^aQBWaYBBBaYBBBOa_BBWSBBBKaBBBNWBBBWOBBBS^BBBN`BBBBSBBBBPBBBBaBBBB @SRR11557614.2 ugc_599_3_F3_0001_1 length=74 T2.3313.1031.030...30...231..10...22...00...02...22...13....0....2....2.... +SRR11557614.2 ugc_599_3_F3_0001_1 length=74 !aBaaaaBaa_]BPa^BBBbaBBBaaaBBaaBBB\`BBBa\BBBaaBBBaaBBBWPBBBBaBBBBaBBBBaBBBB Any ideas how to solve the problem? or I should select a different file? Thanks! [1]: https://www.ebi.ac.uk/ena/browser/home
The problem is that this is a colorspace file (like ABI SOLiD system, not Illumina), and `bowtie2` does not support colorspace alignment. `bowtie` (aka `bowtie1`) used to support it, but support got dropped in recent versions afaik. See: http://bowtie-bio.sourceforge.net/news.shtml Colorspace support was dropped in versions 1.3.0 and later so you would need to download a version prior to that, then build an index `bowtie-build` with the `-C` option to enable colorspace, and then align your data. See manual for details, I never used colorspace myself.
biostars
{"uid": 9474313, "view_count": 2404, "vote_count": 2}
Hallo, I would like to create such an expression matrix: GENE ID/SAMPLE sample1 sample2 sample3 sample4 sample-n gene1 logFC ... ... ... ... gene2 ... ... ... ... ... gene3 ... ... ... ... ... gene4 ... ... ... ... ... gene-n ... ... ... ... ... using data from GSE at GEO. I will be grateful for any suggestions or just codes. Second thing: How to get such an expression matrix after limma DE analysis? How to modify limma codes? I will appreciate any suggestions or codes. Thanks in advance, one more time. Regards!
Something like this should about do it. library(GEOquery) # assumes only one platform in the GSE gse = getGEO('GSEXXXX')[[1]] gse is an ExpressionSet, so all the usual ExpressionSet methods work as expected. In particular, we can use `fData()` to get the feature information (gene information) and the `exprs()` method to get the actual values. write.table(data.frame(fData(gse),exprs(gse)),sep="\t",row.names=FALSE,file='abc.txt') You may need to use a subset of the columns in `fData` to match your needs.
biostars
{"uid": 113718, "view_count": 5470, "vote_count": 3}
Is there a way to sample sequences from the `fastq.gz` file? I have files that contain hundreds of millions of reads, I want to sample about 100 million reads from each file. Is there a way to do this?
[BBTools's][1] reformat tool does this, keeping pairs together if they are interleaved: reformat.sh in=reads.fq.gz out=sampled.fq.gz samplereadstarget=100000000 or for reads in 2 files: reformat.sh in1=reads1.fq.gz in2=reads2.fq.gz out=sampled1.fq.gz out2=sample2.fq.gz samplereadstarget=100000000 You can also just sample a fraction of the reads without respect to the final number, which is faster since it only needs to read the file once: reformat.sh in=reads.fq.gz out=sampled.fq.gz samplerate=0.1 or just sample the first X reads, which is faster still: reformat.sh in=reads.fq.gz out=sampled.fq.gz reads=100000000 Reformat is multithreaded and extremely fast. [1]: https://sourceforge.net/projects/bbmap/
biostars
{"uid": 121336, "view_count": 8746, "vote_count": 1}
I am using IGV to look at some RNA-seq alignment. In general I find this tool extremely helpful. However, when looking at some longer genes I am unable to visualize the alignment and I receive the message: "Zoom in to see alignment". Is there a way to change this setting to visualize the entire gene (maybe by increasing memory usage of the tool?). Does anybody know how to do this?
Go to `View > Preferences`. Then in `Alignments` tab, change the `Visibility range threshold` field.
biostars
{"uid": 186152, "view_count": 5348, "vote_count": 2}
Hello: I would like to convert .gff3 file to 12-column .bed file, as in this link under "BED Format" (http://genome.ucsc.edu/FAQ/FAQformat.html#format1). I have thus far used Galaxy from Penn State, but it outputs a 6-column .bed file. Any advice is greatly appreciated! Thank you
The UCSC Genome Browser [hosts conversion utilities][1] that you can run from your command line to accomplish the gff3 to BED12 conversion. Note utilities are OS specific and need to be given permission to execute with "chmod +x utilityName". Here's an example of how I did a conversion using the following steps: - wget ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_31/gencode.v31.basic.annotation.gff3.gz - gunzip gencode.v31.basic.annotation.gff3.gz - ./gff3ToGenePred gencode.v31.basic.annotation.gff3 gencode.v31.basic.genePred - ./genePredToBed gencode.v31.basic.genePred gencode.v31.basic.bed Disclaimer that I work for the UCSC Genome Browser. :) [1]: http://hgdownload.soe.ucsc.edu/downloads.html#utilities_downloads
biostars
{"uid": 85869, "view_count": 18983, "vote_count": 4}
I have a Mac with OS 10.11.4, which has both Python 2.7.3 (the OS built-in version) and Python 3.5.1 (just installed). With Python 2, I can import Biopython (version 1.61). Python 3, however, cannot see the Biopython package. I want to update Biopython to the latest version, 1.66, and link it to Python 3. Do I need to uninstall version 1.61 first? If yes, how? How do I make sure the installation from the tarball will be linked to Python 3? Thanks.
This is somewhat off-topic for the site, but if you `pip3 install --user biopython` it'll install the most recent version of biopython. Packages are always specific to a version of python, so your python2 and python3 installed versions can coexist.
biostars
{"uid": 192571, "view_count": 1963, "vote_count": 1}
Hi, I have a GRanges object representing intervals for all genes in the genome. A lot of these intervals overlap. I would like to use `reduce()` from the `GenomicRanges` package to make a non-overlapping set of intervals; however, I would like to do it for each gene separately. Thus, for one specific gene, intervals for this gene should not overlap, but intervals for different genes may overlap. One solution would be to split the GRanges by gene and apply reduce() on each subset, but I'm wondering if there is a more efficient way. Thanks

Actual data

    chrom start end hgnc
    1     100   200 MYC
    1     150   300 MYC
    1     400   500 MYC
    1     150   230 TP53
    1     200   350 TP53
    1     420   550 TP53

expected result

    chrom start end hgnc
    1     100   300 MYC
    1     400   500 MYC
    1     150   350 TP53
    1     420   550 TP53

My actual solution:

    # gene is the dataframe used to create the initial GRanges
    do.call(rbind,lapply(
      split(gene,gene$hgnc),
      function(x){
        as.data.frame(
          reduce(
            GRanges(x$chrom,IRanges(x$start,x$end))))}))
Looks like we can't avoid *reduce()*, but we can avoid `do.call(rbind, lapply(...` using *data.table*: library(data.table) library(GenomicRanges) # reduce function # example data x <- fread(" chrom start end hgnc 1 100 200 MYC 1 150 300 MYC 1 400 500 MYC 1 150 230 TP53 1 200 350 TP53 1 420 550 TP53") x[, as.data.table(reduce(IRanges(start, end))), by = .(chrom, hgnc)] # chrom hgnc start end width # 1: 1 MYC 100 300 201 # 2: 1 MYC 400 500 101 # 3: 1 TP53 150 350 201 # 4: 1 TP53 420 550 131 Code stolen and adapted from StackOverflow post: - [Finding all overlaps in one iteration of foverlap in R's data.table](https://stackoverflow.com/a/45574386/680068)
biostars
{"uid": 386585, "view_count": 6760, "vote_count": 2}
How can I separate one column into two columns in R?

          id rs143 rs148 rs149 rs1490
    1 02003s    NA    11    22     11
    2 02003s    NA    10    11     22
    3 02003s    NA    11    11     12
    4 02003s    NA    10    11     11
    5 02003s    NA    10    11     11

As a result, I want this format:

          id rs143 rs143.1 rs148 rs148.1 rs149 rs149.1 rs1490 rs1490.1
    1 02003s    NA      NA     1       1     2       2      1        1
    2 02003s    NA      NA     1       0     1       1      2        2
    3 02003s    NA      NA     1       1     1       1      1        2
    4 02003s    NA      NA     1       0     1       1      1        1
    5 02003s    NA      NA     1       0     1       1      1        1
Requires a nice mixture of diverse functions here. This could be done in a single line, but would be complex. > df id rs143 rs148 rs149 rs1490 1 02003s NA 11 22 11 2 02003s NA 10 11 22 3 02003s NA 11 11 12 4 02003s NA 10 11 11 5 02003s NA 10 11 11 #Ensure that NAs are encoded as characters > df[is.na(df)] <- "NA" #split each value by an empty string delimiter, then re-merge all columns back together > df2 <- data.frame(df$id, do.call(cbind, lapply(df[,2:ncol(df)], function(x) t(as.data.frame(strsplit(as.character(x), split=""))))), row.names=c(1:nrow(df))) > df2 df.id X1 X2 X3 X4 X5 X6 X7 X8 1 02003s N A 1 1 2 2 1 1 2 02003s N A 1 0 1 1 2 2 3 02003s N A 1 1 1 1 1 2 4 02003s N A 1 0 1 1 1 1 5 02003s N A 1 0 1 1 1 1 #Now fix the colnames > index1 <- seq(from=2, to=ncol(df2), by=2) > index2 <- seq(from=3, to=ncol(df2), by=2) > colnames(df2)[index1] <- colnames(df[2:ncol(df)]) > colnames(df2)[index2] <- paste(colnames(df[2:ncol(df)]), ".1", sep="") > df2 df.id rs143 rs143.1 rs148 rs148.1 rs149 rs149.1 rs1490 rs1490.1 1 02003s N A 1 1 2 2 1 1 2 02003s N A 1 0 1 1 2 2 3 02003s N A 1 1 1 1 1 2 4 02003s N A 1 0 1 1 1 1 5 02003s N A 1 0 1 1 1 1 #Restore he NAs > df2[df2=="N"] <- NA > df2[df2=="A"] <- NA df id rs143 rs148 rs149 rs1490 1 02003s NA 11 22 11 2 02003s NA 10 11 22 3 02003s NA 11 11 12 4 02003s NA 10 11 11 5 02003s NA 10 11 11 df2 df.id rs143 rs143.1 rs148 rs148.1 rs149 rs149.1 rs1490 rs1490.1 1 02003s <NA> <NA> 1 1 2 2 1 1 2 02003s <NA> <NA> 1 0 1 1 2 2 3 02003s <NA> <NA> 1 1 1 1 1 2 4 02003s <NA> <NA> 1 0 1 1 1 1 5 02003s <NA> <NA> 1 0 1 1 1 1
biostars
{"uid": 310681, "view_count": 2441, "vote_count": 1}
Hi, I was wondering if anyone knows if there is a default protein database linked to the MAF files of the TCGA project? Basically what I want to do is to get the mutated protein sequences that correspond to the missense mutations listed in the MAF files. For that, I need the correct protein sequence for each gene with a missense mutation, since the MAF file has the position of the mutation in the gene and in the protein, I can write a simple script that changes the wild type amino acid with the mutation. However, it is crucial to get the correct protein sequence, so that the position and wild type amino acid stored in the MAF file corresponds to the same amino acid in that position of the protein sequence in the database. Thanks
The latest TCGA MAF standard is to choose the "[worst affected][1]" isoform per variant, from among the Gencode Basic v19 isoforms. These standards have changed over time, so not all MAFs will use the same isoform database. You'll find GAF files listed [here][2], originally based on UCSC KnownGenes, but now based on Gencode Basic v19. If you want to map each variant to [Uniprot's canonical isoform][3] per gene, then pull the Ensembl ENST IDs of all Uniprot's canonical isoforms, dump them into a text file one ID per line, and pass it to [maf2maf][4] under argument `--custom-enst`, to re-annotate all TCGA MAFs. [1]: http://useast.ensembl.org/info/genome/variation/predicted_data.html#consequences [2]: https://tcga-data.nci.nih.gov/docs/GAF/ [3]: http://www.uniprot.org/help/canonical_and_isoforms [4]: https://github.com/ckandoth/vcf2maf
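As a sketch of that last step, assuming the maf2maf script from the same vcf2maf repository (the `--custom-enst` argument is the one named above; the `--input-maf`/`--output-maf` flag names and the file name `isoform_overrides.txt` are assumptions, so double-check `perl maf2maf.pl --help` for your version, which will also need reference FASTA and VEP paths):

    # isoform_overrides.txt: one canonical Ensembl ENST ID per line
    perl maf2maf.pl \
        --input-maf  tcga_skcm.maf \
        --output-maf tcga_skcm.canonical.maf \
        --custom-enst isoform_overrides.txt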
biostars
{"uid": 146605, "view_count": 3319, "vote_count": 3}
I am reading a paper about miRNA and confused by a fig. ![enter image description here][1] [1]: http://i.imgur.com/yKXw8Ja.png In "Novel miRNA for which miRNA* were found", what does miRNA* mean in the context? Does it mean reverse-complement sequence of miRNA? I hope someone could give me some suggestions.
miRNA* refers to the sequence on the opposite arm of the hairpin that base-pairs with the mature miRNA (often called the star or passenger strand), not simply the reverse complement of the mature sequence. miRDeep classifies miRNA reads based on where they fall in the pre-miRNA: miRNA (the mature miRNA sequence), miRNA* (the star sequence opposite the mature miRNA), and loop (matching the loop region of the pre-miRNA hairpin).
biostars
{"uid": 237033, "view_count": 2547, "vote_count": 1}
The Homo sapiens genome (Build 37.2) has just been published by the NCBI: http://www.ncbi.nlm.nih.gov/mapview/stats/BuildStats.cgi?taxid=9606&build=37&ver=2

Have the FASTA sequences of the human genome changed since version 37.1 (e.g. have some large contigs of NNNNN been resolved?), or is it just a matter of annotations?
**Short answer**

Afaik, the sequences are not changed for the *primary assembly*, so if you don't go looking for the new sequences, you won't find them. However, some new corrections and new sequence are available if you want them. The annotations have been updated.

**Long answer**

NCBI 37.2, like Ensembl 60, is based on GRCh37.p2.

GRCh37.p2 is, as the name implies, just a patch release for GRCh37. It contains two kinds of patches, which should be seen as temporary updates until they are fully incorporated into the next major release of the genome.

- A fix patch represents corrections to an existing sequence.
- A novel patch represents novel sequence (perhaps filling some of the runs of N's).

As I understand it, these are not intended to replace the original sequences while the main release, GRCh37, is still in effect. They can be seen as a sneak preview of the next major release for those who are interested. This means that the GRCh37 *primary assembly* is the same between GRCh37, GRCh37.p1 and GRCh37.p2, and the patch sequences exist separately.

If you download the primary assembly sequence for Chromosome 5, say, from the [GRC](ftp://ftp.ncbi.nlm.nih.gov/genbank/genomes/Eukaryotes/vertebrates_mammals/Homo_sapiens/GRCh37.p2/), you will get the original sequence, which doesn't include the updates. To get the updated sequences, you would need to download a separate patch file. Small annotation files are also provided that explain where the patch sequence "fits" in the original sequence.

Ensembl follows the pattern of keeping the original sequences separate from the patches - you would [download](ftp://ftp.ensembl.org/pub/current/fasta/homo_sapiens/dna/) the original sequences separately from the patch sequences.

I'm less sure of this, but I think NCBI also keeps the sequences separate. NCBI 37.2 contains a 'GRCh37.p2-Primary Assembly', and the patch sequences seem to be represented as a separate assembly - "GRCh37.p2-PATCHES".

Note also that GRCh37.p2 contains the mitochondrial sequence - previous versions of the assembly did not contain this.
biostars
{"uid": 3954, "view_count": 8301, "vote_count": 7}
I have the sequence data of an organism, but it contains three 16S rRNA sequences which belong to 3 different organisms, so I guess it could be contaminated. How could I extract the contigs belonging to each organism present in the sequence data?
If you truly feel that there are three organisms then you can use `bbsplit.sh` (from the [BBMap suite][1]) to bin your reads into the respective organismal pools. This will generally work well as long as the bacteria are distinct enough. You are able to decide what to do with reads that multi-map (map to more than one of the reference genomes), e.g. keep them in all bins, toss them, etc. Use the answer here and ask if you have any questions: https://www.biostars.org/p/143019/#143040

Since you have bacterial data (no splicing), you can set `maxindel=0` to turn off long indels.

  [1]: https://sourceforge.net/projects/bbmap/
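A hedged example of what that might look like (the reference file names `orgA/B/C.fa` are hypothetical, and the option names are from the BBSplit documentation as I remember it, so run `bbsplit.sh` without arguments to confirm them for your version):

    bbsplit.sh in1=reads_1.fq.gz in2=reads_2.fq.gz \
        ref=orgA.fa,orgB.fa,orgC.fa \
        basename=out_%_#.fq.gz \
        ambiguous2=all maxindel=0

Here `%` in `basename` is replaced by the reference name and `#` by the read-pair number, and `ambiguous2` controls what happens to reads that map to more than one reference (e.g. `all`, `toss`, `split`).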
biostars
{"uid": 357839, "view_count": 1389, "vote_count": 1}
Hi,

I want to extract read-pairs that aligned concordantly exactly 1 time to the reference genome. Is there any way to parse the output [SAM](http://samtools.sourceforge.net/SAM1.pdf) file? I would really appreciate your suggestions.

Best regards,

Rahul
The simplest way is likely something along the lines of:

    samtools view -hf 0x2 alignments.bam | grep -v "XS:i:" > filtered.alignments.sam

As I mentioned in the comment above, the `-f 0x2` part will get only "properly paired" alignments, which will effectively be the concordant alignments. For the option #2 definition of "map only once", you can take advantage of the fact that bowtie2 will add an `XS` auxiliary tag to reads that have another "valid" mapping. So, a quick inverse grep (`grep -v`) can get rid of those. One possible problem with this is that if only one read of a pair can map to multiple places (e.g., the original fragment partly overlapped a simple tandem repeat) then you'd end up with orphans. The easiest fix for that (if it's a problem) would be to just check whether the read names of pairs of alignments are the same. I'm sure someone will come up with a way to do that in awk, but the following python script is likely simpler:

    #!/usr/bin/env python
    import csv
    import sys

    f = csv.reader(sys.stdin, dialect="excel-tab")
    of = csv.writer(sys.stdout, dialect="excel-tab")
    last_read = None
    for line in f :
        #take care of the header
        if(line[0][0] == "@") :
            of.writerow(line)
            continue

        if(last_read == None) :
            last_read = line
        else :
            if(last_read[0] == line[0]) :
                of.writerow(last_read)
                of.writerow(line)
                last_read = None
            else :
                last_read = line

If you saved that as `foo.py`, made it executable and put it in your PATH, then the following would solve the aforementioned issue:

    samtools view -hf 0x2 alignments.bam | grep -v "XS:i:" | foo.py > filtered.alignments.sam

Note that this won't work with coordinate-sorted alignments, but bowtie2 doesn't produce those.
biostars
{"uid": 95929, "view_count": 21243, "vote_count": 8}
Hi, I'm starting out new in bioinformatics. I have couple of questions on searching dbSNP. 1. I'm searching (a list of rs# in [batch mode][1] in browser) dbSNP for SNP coordinates. dbSNP returns the coordinates in hg38 assembly build (I'm requesting a bed file for output format). I'd like to retrieve the coordinates in hg19 version. Is there a way to achieve this? dbSNP FAQ section doesn't mention if this could be done. 2. I would also like to know if it's possible to search genotype information for a given SNP (rs#). I would also like this in batch mode. Any help is appreciated! [1]: http://www.ncbi.nlm.nih.gov/projects/SNP/dbSNP.cgi?list=rslist
One way to do this is via the command line. You could download SNP annotations via `wget`. The `human_9606_b151_GRCh37p13` release is on GRCh37, i.e. hg19 coordinates, which covers your first question. For example:

    $ wget -qO- ftp://ftp.ncbi.nih.gov/snp/organisms/human_9606_b151_GRCh37p13/VCF/common_all_20180423.vcf.gz | gunzip -c | convert2bed --input=vcf --output=bed --sort-tmpdir=${PWD} - > hg19.snp151.bed

Filter via `grep` for the SNP of interest. For example, to search on a single SNP ID:

    $ grep -F rs554008981 hg19.snp151.bed
    1 13549 13550 rs554008981 . G A . RS=554008981;RSPOS=13550;dbSNPBuildID=142;SSR=0;SAO=0;VP=0x050000000005000026000100;GENEINFO=DDX11L1:100287102;WGT=1;VC=SNV;ASP;KGPhase3;CAF=0.9966,0.003395,.;COMMON=1;TOPMED=0.99221139143730886,0.00778064475025484,0.00000796381243628

To search on a file of IDs, e.g. a list of SNP IDs in `rsIDs.txt`:

    $ grep -F -f rsIDs.txt hg19.snp151.bed > matches.bed
biostars
{"uid": 186617, "view_count": 11470, "vote_count": 1}
Hello every one, can any one suggested a tool or method to get rsID for a number variants I have in vcf file, do I have to manipulate the header for my file? Many thanks for your help in advance The vcf file I have in this shape: ``` ##fileformat=VCFv4.1 ##INFO=<ID=OID,Number=.,Type=String,Description="List of original Hotspot IDs"> ##INFO=<ID=OPOS,Number=.,Type=Integer,Description="List of original allele positions"> ##INFO=<ID=OREF,Number=.,Type=String,Description="List of original reference bases"> ##INFO=<ID=OALT,Number=.,Type=String,Description="List of original variant bases"> ##INFO=<ID=OMAPALT,Number=.,Type=String,Description="Maps OID,OPOS,OREF,OALT entries to specific ALT alleles"> ##FORMAT=<ID=AO,Number=A,Type=Integer,Description="Alternate allele observation count"> ##FORMAT=<ID=DP,Number=1,Type=Integer,Description="Read Depth"> ##FORMAT=<ID=FAO,Number=A,Type=Integer,Description="Flow Evaluator Alternate allele observation count"> ##FORMAT=<ID=FDP,Number=1,Type=Integer,Description="Flow Evaluator Read Depth"> ##FORMAT=<ID=FRO,Number=1,Type=Integer,Description="Flow Evaluator Reference allele observation count"> ##FORMAT=<ID=FSAF,Number=A,Type=Integer,Description="Flow Evaluator Alternate allele observations on the forward strand"> ##FORMAT=<ID=FSAR,Number=A,Type=Integer,Description="Flow Evaluator Alternate allele observations on the reverse strand"> ##FORMAT=<ID=FSRF,Number=1,Type=Integer,Description="Flow Evaluator reference observations on the forward strand"> ##FORMAT=<ID=FSRR,Number=1,Type=Integer,Description="Flow Evaluator reference observations on the reverse strand"> ##FORMAT=<ID=GQ,Number=1,Type=Integer,Description="Genotype Quality, the Phred-scaled marginal (or unconditional) probability of the called genotype"> ##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype"> ##FORMAT=<ID=RO,Number=1,Type=Integer,Description="Reference allele observation count"> ##FORMAT=<ID=SAF,Number=A,Type=Integer,Description="Alternate allele observations on the forward strand"> ##FORMAT=<ID=SAR,Number=A,Type=Integer,Description="Alternate allele observations on the reverse strand"> ##FORMAT=<ID=SRF,Number=1,Type=Integer,Description="Number of reference observations on the forward strand"> ##FORMAT=<ID=SRR,Number=1,Type=Integer,Description="Number of reference observations on the reverse strand"> ##INFO=<ID=AO,Number=A,Type=Integer,Description="Alternate allele observations"> ##INFO=<ID=DP,Number=1,Type=Integer,Description="Total read depth at the locus"> ##INFO=<ID=FAO,Number=A,Type=Integer,Description="Flow Evaluator Alternate allele observations"> ##INFO=<ID=FDP,Number=1,Type=Integer,Description="Flow Evaluator read depth at the locus"> ##INFO=<ID=FR,Number=1,Type=String,Description="Reason why the variant was filtered."> ##INFO=<ID=FRO,Number=1,Type=Integer,Description="Flow Evaluator Reference allele observations"> ##INFO=<ID=FSAF,Number=A,Type=Integer,Description="Flow Evaluator Alternate allele observations on the forward strand"> ##INFO=<ID=FSAR,Number=A,Type=Integer,Description="Flow Evaluator Alternate allele observations on the reverse strand"> ##INFO=<ID=FSRF,Number=1,Type=Integer,Description="Flow Evaluator Reference observations on the forward strand"> ##INFO=<ID=FSRR,Number=1,Type=Integer,Description="Flow Evaluator Reference observations on the reverse strand"> ##INFO=<ID=FWDB,Number=A,Type=Float,Description="Forward strand bias in prediction."> ##INFO=<ID=FXX,Number=1,Type=Float,Description="Flow Evaluator failed read ratio"> 
##INFO=<ID=HRUN,Number=A,Type=Integer,Description="Run length: the number of consecutive repeats of the alternate allele in the reference genome"> ##INFO=<ID=HS,Number=0,Type=Flag,Description="Indicate it is at a hot spot"> ##INFO=<ID=LEN,Number=A,Type=Integer,Description="allele length"> ##INFO=<ID=MLLD,Number=A,Type=Float,Description="Mean log-likelihood delta per read."> ##INFO=<ID=NR,Number=1,Type=String,Description="Reason why the variant is a No-Call."> ##INFO=<ID=NS,Number=1,Type=Integer,Description="Number of samples with data"> ##INFO=<ID=QD,Number=1,Type=Float,Description="QualityByDepth as 4*QUAL/FDP (analogous to GATK)"> ##INFO=<ID=RBI,Number=A,Type=Float,Description="Distance of bias parameters from zero."> ##INFO=<ID=REFB,Number=A,Type=Float,Description="Reference Hypothesis bias in prediction."> ##INFO=<ID=REVB,Number=A,Type=Float,Description="Reverse strand bias in prediction."> ##INFO=<ID=RO,Number=1,Type=Integer,Description="Reference allele observations"> ##INFO=<ID=SAF,Number=A,Type=Integer,Description="Alternate allele observations on the forward strand"> ##INFO=<ID=SAR,Number=A,Type=Integer,Description="Alternate allele observations on the reverse strand"> ##INFO=<ID=SRF,Number=1,Type=Integer,Description="Number of reference observations on the forward strand"> ##INFO=<ID=SRR,Number=1,Type=Integer,Description="Number of reference observations on the reverse strand"> ##INFO=<ID=SSEN,Number=A,Type=Float,Description="Strand-specific-error prediction on negative strand."> ##INFO=<ID=SSEP,Number=A,Type=Float,Description="Strand-specific-error prediction on positive strand."> ##INFO=<ID=SSSB,Number=A,Type=Float,Description="Strand-specific strand bias for allele."> ##INFO=<ID=STB,Number=A,Type=Float,Description="Strand bias in variant relative to reference."> ##INFO=<ID=TYPE,Number=A,Type=String,Description="The type of allele, either snp, mnp, ins, del, or complex."> ##INFO=<ID=VARB,Number=A,Type=Float,Description="Variant Hypothesis bias in prediction."> ##LeftAlignVariants="analysis_type=LeftAlignVariants bypassFlowAlign=true kmer_len=19 min_var_count=5 short_suffix_match=5 min_indel_size=4 max_hp_length=8 min_var_freq=0.15 min_var_score=10.0 relative_strand_bias=0.8 output_mnv=0 sse_hp_size=0 sse_report_file= target_size=1.0 pref_kmer_max=3 pref_kmer_min=0 pref_delta_max=2 pref_delta_min=0 suff_kmer_max=3 suff_kmer_min=0 suff_delta_max=2 suff_delta_min=0 motif_min_ppv=0.2 generate_flow_position=0 input_file=[] read_buffer_size=null phone_home=STANDARD gatk_key=null read_filter=[] intervals=null excludeIntervals=null interval_set_rule=UNION interval_merging=ALL reference_sequence=/results/referenceLibrary/tmap-f3/hg19/hg19.fasta rodBind=[] nonDeterministicRandomSeed=false downsampling_type=BY_SAMPLE downsample_to_fraction=null downsample_to_coverage=1000 baq=OFF baqGapOpenPenalty=40.0 performanceLog=null useOriginalQualities=false BQSR=null defaultBaseQualities=-1 validation_strictness=SILENT unsafe=null num_threads=1 combined_sample_name= num_cpu_threads=null num_io_threads=null num_bam_file_handles=null read_group_black_list=null pedigree=[] pedigreeString=[] pedigreeValidationType=STRICT allow_intervals_with_unindexed_bam=false logging_level=INFO log_to_file=null help=false variant=(RodBinding name=variant source=/results/analysis/output/Home/MGHBED_602/plugin_out/variantCaller_out/IonXpress_001/small_variants.sorted.vcf) out=org.broadinstitute.sting.gatk.io.stubs.VCFWriterStub NO_HEADER=org.broadinstitute.sting.gatk.io.stubs.VCFWriterStub 
sites_only=org.broadinstitute.sting.gatk.io.stubs.VCFWriterStub filter_mismatching_base_and_quals=false" ##contig=<ID=chr1,length=249250621,assembly=hg19> ##contig=<ID=chr10,length=135534747,assembly=hg19> ##contig=<ID=chr11,length=135006516,assembly=hg19> ##contig=<ID=chr12,length=133851895,assembly=hg19> ##contig=<ID=chr13,length=115169878,assembly=hg19> ##contig=<ID=chr14,length=107349540,assembly=hg19> ##contig=<ID=chr15,length=102531392,assembly=hg19> ##contig=<ID=chr16,length=90354753,assembly=hg19> ##contig=<ID=chr17,length=81195210,assembly=hg19> ##contig=<ID=chr18,length=78077248,assembly=hg19> ##contig=<ID=chr19,length=59128983,assembly=hg19> ##contig=<ID=chr2,length=243199373,assembly=hg19> ##contig=<ID=chr20,length=63025520,assembly=hg19> ##contig=<ID=chr21,length=48129895,assembly=hg19> ##contig=<ID=chr22,length=51304566,assembly=hg19> ##contig=<ID=chr3,length=198022430,assembly=hg19> ##contig=<ID=chr4,length=191154276,assembly=hg19> ##contig=<ID=chr5,length=180915260,assembly=hg19> ##contig=<ID=chr6,length=171115067,assembly=hg19> ##contig=<ID=chr7,length=159138663,assembly=hg19> ##contig=<ID=chr8,length=146364022,assembly=hg19> ##contig=<ID=chr9,length=141213431,assembly=hg19> ##contig=<ID=chrM,length=16569,assembly=hg19> ##contig=<ID=chrX,length=155270560,assembly=hg19> ##contig=<ID=chrY,length=59373566,assembly=hg19> ##fileDate=20140616 ##phasing=none ##reference=/results/referenceLibrary/tmap-f3/hg19/hg19.fasta ##reference=file:///results/referenceLibrary/tmap-f3/hg19/hg19.fasta ##source=Torrent Unified Variant Caller (Extension of freeBayes) #CHROM POS ID REF ALT QUAL FILTER chr1 65886142 . C G 97.35 PASS ```
For VCF annotation with rsIDs, you will need a dbSNP VCF file (built on hg19/b37, to match your data) in addition to your sample VCF(s). You need to index the VCF files and then use one of the suggested tools: GATK VariantAnnotator (mentioned above by Pierre Lindenbaum), SnpSift, or bcftools. Note that the dbSNP VCF for the entire b37 build is huge. You can also use NCBI's web-based annotation tool (http://www.ncbi.nlm.nih.gov/variation/tools/reporter/).

**Example code for bcftools**:

    bcftools annotate -c ID -a dbsnp.vcf.gz sample1.vcf.gz > sample1.rs.vcf

**Example code for SnpSift**:

    java -jar SnpSift.jar annotate dbsnp.vcf sample1.vcf > sample1.rs.vcf
biostars
{"uid": 160021, "view_count": 7558, "vote_count": 1}
Hi guys. I'm just getting started with bioinformatics and this is my first post on Biostars. Thanks in advance for all your help. I've been trying to successfully acquire sample reads and align them for quite some time now, but things still seem to be going wrong. I'm using NCBI's Sequence Read Archive (SRA) to find sample data. I've been downloading and formatting that data using their SRA Toolkit, with tools like `prefetch` and `fastq-dump`. Then I've tried to align them using `bowtie2`. I'll list the exact commands I've used: ``` prefetch SRR390728 sra-stat -x --quick SRR390728 ``` This is what the statistics look like: ``` <Run accession="SRR390728" spot_count="7178576" base_count="516857472" base_count_bio="516857472" cmp_base_count="49610016"> <Size value="193606607" units="bytes"/> <AlignInfo> <Ref seqId="GPC_000000394.1" name="5" .../> <Ref seqId="GPC_000000395.1" name="Y" .../> <Ref seqId="NC_000001.9" name="1" .../> <Ref seqId="NC_000002.10" name="2" .../> ... <Ref seqId="NC_000022.9" name="22" .../> <Ref seqId="NC_000023.9" name="X" .../> <Ref seqId="NC_001807.4" name="M" .../> </AlignInfo> <QualityCount> <Quality value="4" count="1285575"/> <Quality value="5" count="6274952"/> ... <Quality value="30" count="1110"/> <Quality value="33" count="4722"/> </QualityCount> <Databases> <Database> <Table name="PRIMARY_ALIGNMENT"> <Statistics source="meta"> <Rows count="12979096"/> <Elements count="467247456"/> </Statistics> </Table> <Table name="REFERENCE"> <Statistics source="meta"> <Rows count="616097"/> <Elements count="3080436051"/> </Statistics> </Table> <Table name="SEQUENCE"> <Statistics source="meta"> <Rows count="7178576"/> <Elements count="516857472"/> </Statistics> </Table> </Database> </Databases> </Run> ``` To me this means there is alignment data in there, particularly in table: `PRIMARY_ALIGNMENT`. I guess raw sequencing data is also in the `SEQUENCE` table. `fastq-dump` documentation states that the `SEQUENCE` table is used by default, so the `--table` option is omitted from the first call. ``` fastq-dump -O ../output SRR390728 fastq-dump --table PRIMARY_ALIGNMENT -O ../output --fasta SRR390728 ``` This leaves us with a couple of files in the `output` directory: - `SRR390728.fastq` - a `fastq` file containing the raw reads from the SRA file; and - `SRR390728.fasta` - a `fasta` file containing reads from the SRA file's `PRIMARY_ALIGNMENT` table. Next, I move onto using `bowtie2` - first by indexing the aligned reads so we can use it as a reference: bowtie2-build output/SRR390728.fasta output/bowtie/SRR390728 This outputs a whole bunch of files into the `output/bowtie` directory. Finally, I can try to align the raw reads: bowtie2 -x output/bowtie/SRR390728 -U output/SRR390728.fastq -S output/SRR390728.sam 30 minutes later, I've got the following depressing output: ``` 7178576 reads; of these: 7178576 (100.00%) were unpaired; of these: 7178576 (100.00%) aligned 0 times 0 (0.00%) aligned exactly 1 time 0 (0.00%) aligned >1 times 0.00% overall alignment rate ``` The output `sam` file looks like this (without the starting line numbers): ``` 1 @HD VN:1.0 SO:unsorted 2 @SQ SN:SRR390728.1 LN:36 3 @SQ SN:SRR390728.2 LN:36 4 @SQ SN:SRR390728.3 LN:36 .. .. .. 
12979095 @SQ SN:SRR390728.12979094 LN:36 12979096 @SQ SN:SRR390728.12979095 LN:36 12979097 @SQ SN:SRR390728.12979096 LN:36 12979098 @PG ID:bowtie2 PN:bowtie2 VN:2.2.5 CL:".../bowtie2-2.2.5/bowtie2-align-s --wrapper basic-0 -x output/bowtie/SRR390728 -S output/SRR390728.sam -U output/SRR39 0728.fastq" 12979099 SRR390728.1 4 * 0 0 * * 0 0 CATTCTTCACGTAGTTCTCGAGCCTTGGTTTTCAGCGATGGAGAATGACTTTGACAAGCTGAGAGAAGNTNC ;;;;;;;;;;;;;;;;;;;;;;;;;;;9;;665142;;;;;;;;;;;;;;;;;;;;;;;;;;;;;96&&&&( YT:Z:UU 12979100 SRR390728.2 4 * 0 0 * * 0 0 AAGTAGGTCTCGTCTGTGTTTTCTACGAGCTTGTGTTCCAGCTGACCCACTCCCTGGGTGGGGGGACTGGGT ;;;;;;;;;;;;;;;;;4;;;;3;393.1+4&&5&&;;;;;;;;;;;;;;;;;;;;;<9;<;;;;;464262 YT:Z:UU 12979101 SRR390728.3 4 * 0 0 * * 0 0 CCAGCCTGGCCAACAGAGTGTTACCCCGTTTTTACTTATTTATTATTATTATTTTGAGACAGAGCATTGGTC -;;;8;;;;;;;,*;;';-4,44;,:&,1,4'./&19;;;;;;669;;99;;;;;-;3;2;0;+;7442&2/ .. .. .. 20157672 SRR390728.7178574 4 * 0 0 * * 0 0 AGAAAAAGGATGAATNNNNNNNNNNNNNNNANNNNNNNNNNNATNNCTTCNNNNGTNNANNNNNNNNNNNNT ;;;;;;-;;;;;;*)%%%%%%%%%%%%%%%6%%%%%&&&&&&;4&&;;;;&&&&.;&&;&&&&&&&& &&&&3 YT:Z:UU YF:Z:NS 20157673 SRR390728.7178575 4 * 0 0 * * 0 0 AGTTTTAATTTTTNATATTTTACTTCATAGTCTTTTACACATTTTAAAATGACCTAAATTAACGACATATCA ;;;;;;;8;;;;;%;&3;;,;&+;;1:)8+504&5/0776;;;16/10&1/.1;4.;;**4;0&7&& 6&*-& YT:Z:UU 20157674 SRR390728.7178576 4 * 0 0 * * 0 0 AATATCACAGCGANCGCTATAGATCGGAAGATCGGTATAGCGGTCGCTGTGATATTAGATCGGAAGAGCGTC ;;;;;;1;;;;;1%;;;75;55:;:::%5720+,2/;;;;;;;;;;;;<;:4;;;;;:9;;;:;;,4 42&42 YT:Z:UU ``` Note that some of the pasted text (particularly surrounding the read qualities) might look messed up thanks to the online editor. It seems that all of those reads have `4` as the value for their flag. The bowtie2 manual describes `4` as: "The read has no reported alignments". Why not? **What am I missing? Why was nothing aligned?**
1. Try to rely as little as possible on SRA. The fastq files from fastq-dump are fine to use, but I would strongly encourage you to never use anything else. 2. This dataset is paired-end, you need to use the `--split-files` option. 3. Download the version of the human genome you would like to align against and build the bowtie2 indices with that. Alternatively, you can download pre-made indices from iGenomes (the files are huge though, it's often faster to just build things yourself).
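Putting those three points together, a corrected outline might look like this (a sketch; `GRCh38.fa` stands in for whichever reference genome you download, so treat the file names as placeholders):

    # 1 + 2: dump the run as paired-end fastq
    fastq-dump --split-files -O output SRR390728

    # 3: build the index from an actual reference genome, not from your own reads
    bowtie2-build GRCh38.fa GRCh38

    # align the pairs against that reference
    bowtie2 -x GRCh38 -1 output/SRR390728_1.fastq -2 output/SRR390728_2.fastq -S output/SRR390728.sam

The key conceptual fix is the index: in the original attempt the reads were aligned against an index built from the run's own alignment records, which is not a reference sequence.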
biostars
{"uid": 143352, "view_count": 4507, "vote_count": 1}
Hey, I'm working with two SSH clusters and I want to transfer some data from one to the other. I could download it to my PC and then upload it to the other cluster, but it is more than 100 GB, and I imagined there would be an easier way to transfer the data directly between the two clusters. I've been trying to look for info on the internet, but I didn't fully get it. If anybody could help me, that would be great! Thanks in advance!
Is there anything wrong with just `rsync`ing it from A to B directly? Something like this, assuming only the files to be transferred are in the current directory on the source cluster:

    rsync --progress ./* username@servername:/path/to/destination
biostars
{"uid": 470994, "view_count": 667, "vote_count": 1}
I am advised that the FASTA sequences in UniProt are the wild type. However, when I check the sequence of HIV-1 protease in [UniProt](http://www.uniprot.org/uniprot/O90777), I see it isn't the same as in 1MUI, the wild type as [this site](http://hivdb.stanford.edu/pages/3DStructures/pr.html) states.

The FASTA in UniProt:

    PQVTLWQRPIVTIKIGGQLKEALLDTGADDTVLEEMSLPGKWKPKMIGGIGGFIKVRQYDQVSIEICGHKAIGTVLIGPTPVNIIGRNLLTQLGCTLNF

The one in 1MUI:

    PQITLWQRPLVTIKIGGQLKEALLDTGADDTVLEEMSLPGRWKPKMIGGIGGFIKVRQYDQILIEICGHKAIGTVLVGPTPVNIIGRNLLTQIGCTLNF

Which is the true wild type?
I am not a HIV expert, but as far as I know there is so much variability in the HIV-1 protease, that there is no such thing as a wildtype or consensus sequence for this protein. Also, it is good to realise that UniProtKB consists of two parts: UniProtKB/Swiss-Prot, which is manually annotated and reviewed and therefore of very high quality, and UniProtKB/TrEMBL, which is automatically annotated and not reviewed and therefore of much lower quality. See also this FAQ: ["Why is UniProtKB composed of 2 sections, UniProtKB/Swiss-Prot and UniProtKB/TrEMBL?"][1]. [1]: http://www.uniprot.org/faq/7
biostars
{"uid": 104635, "view_count": 1894, "vote_count": 1}
Hi all, I want to use the EnhancedVolcano package to produce volcano plots, but the example is confusing because its input data is an S4 object. I have been looking for an example that shows the simple data format I need to have before passing my data to the package, but it is nowhere to be found! I just have an Excel sheet with 3 columns: `Gene, log2FoldChange, pvalue`

When read into R as VC_data, it looks like this (screenshot): https://www.dropbox.com/s/pq1l38ko1p35e8k/Screen%20Shot%202019-05-19%20at%2011.22.13%20PM.png?dl=0

My code is as follows:

    EnhancedVolcano(VC_data,
        lab = VC_data$Gene,
        x = 'log2FoldChange',
        y = 'pvalue',
        xlim = c(-5, 8))

I keep getting this error:

    Error in EnhancedVolcano(VC_data, lab = VC_data$Gene, x = "log2FoldChange", :
        log2FoldChange is not numeric!

When I run `is.numeric(VC_data$log2FoldChange)`, I get TRUE. I am very frustrated. Please help!
Seems to be an issue with your input being a `tibble`:

    tb <- tibble(log2fc = c(2,3), pval = c(0.01, 0.007))

    ## this returns the error you have:
    EnhancedVolcano(toptable = tb, x = 'log2fc', y = 'pval', lab = "foo")

    ## this one does it properly:
    EnhancedVolcano(toptable = data.frame(tb), x = 'log2fc', y = 'pval', lab = "foo")

=> Convert the `tibble` to a `data.frame`.

Edit: The way EnhancedVolcano checks whether the data are numeric is (source code lines 76-78):

    if(!is.numeric(toptable[,x])) {
      stop(paste(x, " is not numeric!", sep=""))
    }

The problem is that e.g. `tb[,1]` does not automatically reduce dimensions to directly access the data, but returns:

    > tb[,1]
    # A tibble: 2 x 1
      log2fc
       <dbl>
    1      2
    2      3

whereas:

    tb[,1][[1]]
    [1] 2 3

returns the numeric data.
biostars
{"uid": 380384, "view_count": 3286, "vote_count": 1}
Hi guys, I am looking for a solid tool for viral metagenomic data analysis, for virus profiling etc. I am new in this domaine, I read some papers which propose several tools, I need to pick one and stick on it. So It is better ask the community first. I found tool such as **Virsorter**, **VirFinder**,**PPR-Meta**,**DeepVirFinder** and **VirionLang**, have no idea which one is relatively the best. Please help. My Metagenomics data is from human fecal sample, medium depth. I've already run humann3 for shallow microbiome analysis. Yes, humann3(or metaphlan3) can do virus profiling with additional parameter `--add_viruses`, but results was not good enough.
I don't think there is a best tool for this purpose, as all of them them are better at one thing or another. Generally speaking, homology-based tools tend to have lower false positive rate, yet tools that rely on k-mer frequencies may work better in discovering novel and rare viruses. The most reliable solution is likely to come as a union of several approaches. I recommend [**VIBRANT**][1] as a standalone tool. Yet another approach is a combination of VirSorter2, CheckV and DRAM-v as detailed in [**this protocol**][2] by Sullivan lab. I don't think you need pleas for help in your messages. It is understood that everyone who writes in this forum needs help. [1]: https://github.com/AnantharamanLab/VIBRANT [2]: https://www.protocols.io/view/viral-sequence-identification-sop-with-virsorter2-bwm5pc86
biostars
{"uid": 9512837, "view_count": 544, "vote_count": 1}
Common Workflow Language (**CWL**) https://github.com/common-workflow-language/common-workflow-language / http://common-workflow-language.github.io/draft-3/ has been trending on my twitter timeline during the last weeks. However the spec is quite large and I find it hard to get some simple examples. Furthermore, I have the feeling that all engines require a lot of dependencies or docker. I'd like to test my makefile-based workflows using CWL, how should I write and test the following simple **Makefile** using CWL: ``` SHELL=/bin/bash .PHONY: all all : database.dna database.dna : seq1.dna seq2.dna seq3.dna cat seq1.dna seq2.dna seq3.dna > database.dna seq3.dna : seq3.rna tr "U" "T" < seq3.rna > seq3.dna seq3.rna : echo "AUGCGAUCGAUCG" > seq3.rna seq2.dna : seq2.rna tr "U" "T" < seq2.rna > seq2.dna seq2.rna : echo "AUGAAGACUGCGAUCGAUCG" > seq2.rna seq1.dna : seq1.rna tr "U" "T" < seq1.rna > seq1.dna seq1.rna : echo "AUGAAGACUGACUCGUCG" > seq1.rna ``` **EDIT**: feel free to add the file for your favorite workflow-engine as an answer. https://twitter.com/PaoloDiTommaso/status/625995681434607616 https://twitter.com/smllmp/status/625999447869231104
To describe this workflow using CWL you need to:

- describe the 3 different tools (tr, echo, cat)
- describe the workflow structure
- describe the input data for the workflow job, i.e. the 3 input strings (AUGAAGACUGACUCGAUCGAUCG, etc.)

As an example of the implementation of "tr", you can see this gist: https://gist.github.com/hmenager/897f3e81ad7cd98e94a6. The GitHub repository for common-workflow-language also has a number of tool and workflow descriptions that might help: https://github.com/common-workflow-language/common-workflow-language/tree/master/conformance/draft-2

The reference implementation should be fairly easy to set up (`pip install cwl-runner`) but it is minimal (no cluster integration, etc.). Docker is not, and should never become, a requirement of CWL; some tools have been described with docker "requirements", but this is absolutely not mandatory. You can also ask your questions on all the channels mentioned here: https://github.com/common-workflow-language/common-workflow-language#community-and-contributing

Hope this helps,

Hervé
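To give a feel for what a tool description looks like, here is a minimal, untested sketch of the `tr`-based RNA-to-DNA step written in the later CWL v1.0 syntax (the draft-2/draft-3 syntax referenced above differs in detail; the linked gist remains the authoritative example for that era):

    cwlVersion: v1.0
    class: CommandLineTool
    baseCommand: [tr, "U", "T"]
    # feed the .rna file on stdin, capture stdout as the .dna file
    stdin: $(inputs.rna.path)
    stdout: $(inputs.rna.nameroot).dna
    inputs:
      rna:
        type: File
    outputs:
      dna:
        type: stdout

A workflow description then wires three such `tr` steps and one `cat` step together, and a separate job file supplies the three input sequences.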
biostars
{"uid": 152226, "view_count": 5484, "vote_count": 13}
I'm working on a SLURM cluster with NGS data. I trimmed raw reads and was thinking of the best way to align them to the reference genome. I have pairs of reads for a few samples. I wrote a script for parallel bwa: #SBATCH --cpus-per-task=1 #SBATCH --ntasks=10 #SBATCH --nodes=1 # align with bwa & convert to bam bwatosam() { id=$1 index=$2 output=$3/"$id".bam fq1=$4/"$id".R1.fq.gz fq2=$4/"$id".R2.fq.gz bwa mem -t 16 -R '@RG\tID:"$id"\tSM:"$id"\tPL:ILLUMINA\tLB:"$id"_exome' -v 3 -M $index $fq1 $fq2 | samtools view -bo $output }; export -f bwatosam # run bwatosam in parallel ls trimmed/*.R1.fq.gz | xargs -n 1 basename | awk -F ".R1" '{print $1 | "sort -u"}' | parallel -j $SLURM_NTASKS "bwatosam {} index.fa alns trimmed" But I'm not sure if I use the right parameters (#SBATCH) for the job because if I do it without -j: #SBATCH --nodes=1 #SBATCH --ntasks-per-node=5 # run bwatosam in parallel ls trimmed/*.R1.fq.gz | xargs -n 1 basename | awk -F ".R1" '{print $1 | "sort -u"}' | parallel "bwatosam {} index.fa alns trimmed" It works 10 times faster. What number of nodes/cpus/threads should I use?
Depends on the node. I typically run alignments with basically this kind of script (sorry https://www.biostars.org/u/30/, no snakemake yet) on a 72-core node with 192GB RAM, and then use:

    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=72
    #SBATCH --partition=normal

In this case I would use 4 parallel processes with 16 threads for `bwa` each. It depends on how much memory your node has. Can you give some details? When using `parallel` I recommend booking the entire node to ensure you are not interfering with processes from other users.

=> Note that I always book the entire node when running `parallel` jobs, so I essentially do not care about RAM consumption etc. as long as the node can handle it. If you share the node with others, it might be a good idea to ask your admin beforehand whether using `parallel` is allowed on your cluster nodes.
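Concretely, for a 72-core node that suggestion would translate to something like the sketch below (adjust the numbers to your own hardware so that jobs × threads ≤ cores; it keeps the `-t 16` already used inside your `bwatosam` function):

    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=72

    # 4 concurrent bwa jobs x 16 threads each = 64 of the 72 cores
    ls trimmed/*.R1.fq.gz | xargs -n 1 basename | awk -F ".R1" '{print $1}' | sort -u \
      | parallel -j 4 "bwatosam {} index.fa alns trimmed"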
biostars
{"uid": 393238, "view_count": 1748, "vote_count": 5}
I am trying to understand gene expression data analysis. Could someone explain how to calculate the percentage change in gene expression (GE) from the log2 fold change (L2FC)?

> FoldChange = μ_Treatment / μ_Control
>
> log2FC = Log2(μ_Treatment) - Log2(μ_Control)
Seems correct to me. log2FC is more informative, as you can see both the direction (sign) and the size of the effect.

    Treatment = 10
    Control = 100

`FoldChange = 10/100 = 0.1`, which is less informative on its own.

`log2FC = log2(10) - log2(100) = -3.3`, which shows both the effect size and the direction.
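To answer the percentage part of the question directly: convert the log2FC back to a linear fold change and subtract 1.

    FoldChange     = 2^log2FC
    percent change = (2^log2FC - 1) * 100

    e.g. log2FC = -3.3  ->  2^-3.3 ≈ 0.1  ->  (0.1 - 1) * 100 = -90%  (a 90% decrease)
         log2FC =  1    ->  2^1    = 2    ->  (2 - 1) * 100   = +100% (a doubling)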
biostars
{"uid": 369889, "view_count": 5645, "vote_count": 2}
I have a BED file whose coordinates include chr, start and end columns.

    #chr start end
    chrM 1 1
    chrM 2 2
    chrM 3 3
    chrM 4 4
    chrM 5 5
    chrM 7 7
    chrM 8 8
    chrM 9 9
    chrM 10 10
    chrM 11 11
    chrM 12 12
    chrM 13 13
    chrM 14 14

Here there are 2 regions present, and I want to get output like:

    #chr start end
    chrM 1 5
    chrM 7 14

Which tool should I use to do this? After this I need to find the location of these regions in the mitochondrial genome.
I'm afraid your BED file only describes **deletions** (zero-length intervals), because BED coordinates are half-open: see chromEnd in https://genome.ucsc.edu/FAQ/FAQformat.html#format1

The tool you need is "bedtools merge": http://bedtools.readthedocs.io/en/latest/content/tools/merge.html

> Which tool should I use to do this? After this I need to find the location of these regions in the mitochondrial genome.

uh ???
biostars
{"uid": 190685, "view_count": 1796, "vote_count": 1}
Hello, I merged BAM files using `samtools merge -r out.bam in1.bam in2.bam ...`. With the `-r` option I got the RG tag on each read, as expected, good. However, the header of the merged BAM does not have the @RG lines. So my question: is there any off-the-shelf tool to add the @RG header lines once the reads have been tagged? If not, the pipeline I have in mind goes along these lines:

- Scan the BAM file and collect all the different RG tags
- Output the header of the BAM file
- Append the @RG lines to the header
- Reheader the original BAM file

Does that make sense?

Thanks
Dario
I ended up writing a little script for this, if anyone is interested it is here: - https://code.google.com/p/bioinformatics-misc/source/browse/trunk/addRGtoSAMHeader.py - https://github.com/dariober/bioinformatics-cafe/blob/master/addRGtoSAMHeader.py
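For anyone who prefers to stay on the command line, the same idea can be sketched with `samtools` alone (untested; it assumes the reads carry `RG:Z:` tags and that appending @RG lines after the existing header lines is acceptable; the `\t` in the sed replacement relies on GNU sed):

    # collect the distinct RG values from the reads and turn them into @RG header lines
    samtools view merged.bam \
      | grep -o 'RG:Z:[^[:space:]]*' | sort -u \
      | sed 's/^RG:Z:\(.*\)/@RG\tID:\1\tSM:\1/' > rg_lines.sam

    # append them to the existing header and reheader the BAM
    samtools view -H merged.bam | cat - rg_lines.sam > new_header.sam
    samtools reheader new_header.sam merged.bam > merged.rg.bam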
biostars
{"uid": 124124, "view_count": 11704, "vote_count": 1}
Hi everyone, I came across SNP filtering tutorial where the author used the flag `--mac 3` to filter SNPs that have a minor allele count less than 3. That is `vcftools --vcf input_file.vcf --mac 3 --recode --out filtered_file` Could someone explain to me why filtering out sites with minor allele count below 3? By retaining 3 alleles and above, what exactly are we aiming at? I tried to apply the above script to my snps data from cassava crops (diploid with 18 chromosomes) having 359793 sites x 980 samples. After filtering, I now have 147518 sites x 980 samples indicating large number of sites where dropped. What is going on? Please I need better clarification on --mac 3. This is because I also intend to filter for minor allele frequency later on.enter code here Thanks
Alleles that are observed only a few times are more likely to be sequencing or genotyping errors. In any case, they would not be very useful, because such rare alleles have low statistical power to detect association with anything, so they are usually filtered out.
biostars
{"uid": 356253, "view_count": 1915, "vote_count": 2}