| INSTRUCTION (string, 48-14.1k chars) | RESPONSE (string, 14-8.79k chars) | SOURCE (1 class) | METADATA (string, 48-55 chars) |
|---|---|---|---|
Hi folks. I need to run a de novo short-read genome assembler (on a paired-end/mate-pair library) that prefers outputting shorter but error-free contigs rather than longer contigs/scaffolds which may be mis-assembled. What assembler, or what specific setting in an assembler of choice, do you recommend to yield such contigs (as error-free as possible and with no overlapping contigs)? | According to [this paper][1] in the BMC Bioinformatics journal:
- For short read libraries (e.g. Illumina MiSeq): **CLC bio assembler** ([CLC Assembly Cell][2]) (commercial, free 2-week trial)
- For Roche 454 read libraries: **Newbler** (Roche)
These assemblers tend to break reads and contigs at repeat boundaries and place repeated elements into separate contigs. Hence you get more conservative, better-quality contigs that are less likely to be mis-assembled.
[1]: http://www.biomedcentral.com/1471-2105/15/211/abstract
[2]: http://www.clcbio.com/products/clc-assembly-cell/ | biostars | {"uid": 96350, "view_count": 2302, "vote_count": 1} |
Hello,
I'm trying to call SNPs from a .SAM file.
I've checked the output format of .vcf files but I couldn't find what I need.
I need to extract the counts of alleles for each SNP ID.
Something like this :
rs99999999 A:123, T:0, C:345, G:0
I'm a beginner in the field, so sorry if my question looks simple, but I cannot find the answer.
Can anyone tell me how to do it?
Thanks in advance. | You won't find this kind of count directly in a VCF file. But there is the allelic depth for the reference and the alternative alleles.
For example for this SNP:
chr_1 21682 . T C 150.0 . AC=1;AF=1.00;AN=1;DP=4;FS=0.000;MLEAC=1;MLEAF=1.00;MQ=56.44;QD=31.06;SOR=3.258 GT:**AD**:DP:GQ:PL 1:**0,4**:4:99:180,0
The reference allele is T and the alternate allele is C. And if you look at the AD (allelic depth) field, you will find that there is 0 reads supporting the reference, and 4 reads supporting the alternate allele.
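One convenient way to pull this field out programmatically is SnpSift's `extractFields`. A minimal sketch (the input file name is hypothetical; `GEN[*].AD` selects the AD value of every sample):

    java -jar SnpSift.jar extractFields variants.vcf CHROM POS ID REF ALT "GEN[*].AD"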
You can use a tool like SnpSift extractFields to get this field, as sketched above. | biostars | {"uid": 204555, "view_count": 4716, "vote_count": 1}
I wonder why I can't find any database that treats enzymes composed of multiple proteins as separate entities.

I'll explain better: the OST complex is a multi-subunit enzyme composed of 6-7 proteins, like OST4, STT3A or STT3B, RPN1, DAD1, etc. If I look for any of these proteins in UniProt, I get a nice entry for each: for example, if you look at STT3A you get a lot of information, including a description that explains that this protein participates in the OST complex.

However, if you look for 'OST complex' in UniProt, there is no separate entry for this multi-subunit enzyme... I wonder if you know of a database where complex enzymes are described, or if you have faced this problem before.
| Does this one help? [CORUM](http://mips.helmholtz-muenchen.de/genre/proj/corum/index.html) – the Comprehensive Resource of Mammalian protein complexes.

Just tried a quick search using "DAD1" and it seemed to return useful results. | biostars | {"uid": 1027, "view_count": 2944, "vote_count": 5}
Dear all,
is it possible to visualize insertions in a sequence?
I have prepared a simulated sequence of the mitochondrial genome from release hg38 by placing non-human sequences right in the middle of it (position 8284). I then aligned the simulated genome to the mitochondrial index and then visualized the alignment with the Integrative Genomics Viewer (IGV). However, I don't see any sign of insertions in the figure.
![enter image description here][1]
Is there a way to highlight the insertion point? Maybe by showing only clipped reads, or reads where only one mate maps?
Thank you.
[1]: http://u.cubeupload.com/gigux/Msplitpoint.png | I see evidence of the "transgene" insertion: all those **identical soft-clipped bases** centered at the position you inserted the non-human sequence. Pay attention: 1) all reads are soft-clipped at the same reference position, 2) as far as I can tell, all soft-clipped bases are identical between different reads.
Look at the picture below. The big red arrow indicates the insertion point, and the darkened rectangles indicate the inserted sequence (which I was able to determine as parvovirus by blasting them, even before you told us it was parvovirus).
<a href="https://ibb.co/eSbiRK"><img src="https://preview.ibb.co/gzBHmK/image3721.png" alt="image3721" border="0"></a>
However, keep in mind this visual inspection works well because you have a simple, small reference genome with no duplications, and a simple, small insertion without other copies of it throughout the reference genome. As WouterDeCoster [pointed out above][1], there are better methods to identify structural variation events in more complex scenarios.
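On the follow-up idea of showing only clipped reads: a minimal sketch that keeps reads whose CIGAR string contains a soft-clip (file names are hypothetical):

    samtools view -h aligned.bam | awk '$0 ~ /^@/ || $6 ~ /S/' | samtools view -b - > clipped.bam
    samtools index clipped.bam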
[1]: https://www.biostars.org/p/331663/#331694 | biostars | {"uid": 331663, "view_count": 10888, "vote_count": 1} |
I have a large metagenomic RNA-seq dataset that I am trying to assemble to find viral sequences but it is too large for my hardware (52gb RAM). I can see that there is a lot of bacterial contamination from many different species when I BLAST reads. I want to filter out all bacterial reads so that I can assemble. Ideas?
1. Download all bacterial genomes from RefSeq and try to bowtie to that (will take a long time). Also, since when have the compressed RefSeq bacterial fna files reached 72 GB (when combined)?! The last all.bacteria.gz file in the RefSeq archive from 2015 is 2.7 GB...
2. Somehow condense all bacterial genomes into non-redundant, then align?
3. Other ideas? | You cannot determine whether there is bacterial contamination simply from BLASTing reads. Similarly, it is impossible to trivially filter out bacterial contamination by mapping to all known bacteria, because viruses tend to share sequence with their hosts, and there's no guarantee that your bacteria are in the reference dataset.
You need a completely different approach. Perhaps you should assemble the data, annotate the assemblies, and then pull out contigs with genes known to occur only in viruses... | biostars | {"uid": 203426, "view_count": 3301, "vote_count": 2} |
Hello,
I need your help with the parameters of the [bcl2fastq2][1] tool when demultiplexing data generated by Illumina's sequencers. As you know, there are different ways to sequence genomic data, most commonly Paired-End (PE) or Single-End (SE) sequencing. In addition, to sequence the data, you have to use single or double (dual) indexing on the reads. As per Illumina's definition:
> **Single and Dual Indexing**
>
The number of index sequences added to samples differs for single-indexed and dual-indexed sequencing.
>
> **Single-indexed libraries** — Adds up to 48 unique six-base Index 1 (i7) sequences to generate up to 48 uniquely tagged libraries.
>
> **Dual-indexed libraries** — Adds up to 24 unique eight-base Index 1 (i7) sequences and up to 16 unique eight-base Index 2 (i5) sequences, generating up to 384 uniquely tagged libraries. The IDT for Illumina TruSeq UD Indexes are provided as index pairs and can generate up to 96 uniquely tagged libraries. These indexes add up to 96 unique eight-base Index 1 sequences and up to 96 unique eight-base Index 2 indexes.
>
>During indexed sequencing, the index is sequenced in a separate read, called the Index Read, where a new sequencing primer is annealed. When libraries are dual-indexed, the sequencing run includes two additional reads, called the Index 1 Read and Index 2 Read.
Knowing this, I have two questions:
1. Is it acceptable to mix single-index and dual-index libraries on the same flowcell (e.g. HiSeq 4000), knowing that we configured the sequencer as a dual-index run?
2. How can we demultiplex such data, since the file generated by the sequencer (RunInfo.xml) contains the configuration for a dual-index run? In other words, demultiplexing lanes that have dual indexes works fine when providing the RunInfo.xml, but for single-index lanes, what should I use for the --use-bases-mask parameter?
Also, I know that for --use-bases-mask, we can use the following parameters for different types of sequencing:
- **Single-End sequencing:** `Y*,I6N*`
- **Ovation® SoLo RNA-Seq from NuGEN/Tecan only (see theodore's post below for more details):** `Y*,I8Y*,Y*` (Thanks to theodore)
- **10x Genomic Single Cell 3' RNA v2 kit + more standard libraries on the same run:** `Y26n*,I8n*,Y*` (Thanks to theodore)
- **10x Genomic Single Cell 3' RNA v3 and v3.1 kit + more standard libraries on the same run:** `Y28n,I8n*,Y*` (Thanks to theodore)
- **Paired-End sequencing:**
- **Dual-Indexing:** `Y*,I*,I*,Y*`
- **No Index:** `Y*,Y*` (Thanks to Devon Ryan)
- **Single Indexing:** `Y*,I6N,Y*` (Thanks to Devon Ryan)
- **In-read barcode in the first read for some of the samples, but the run was PE dual-index**: `I5Y*,N*,N*,Y*` (Thanks to igor)
- **10x Genomic Single Cell 3' v1 kit:** `Y98,Y14,I8,Y10` (Thanks to igor)
- **10x Genomic Single Cell 3' v1 kit + more standard libraries on the same run:** `Y98N*,Y14N*,I8N*,Y10N*` (Thanks to igor)
- **10x Genomic Single Cell ATAC kit + more standard libraries on the same run:** `Y50,I8n*,Y16,Y49` (Thanks to theodore)
- **Ovation® SoLo RNA-Seq from NuGEN/Tecan mixed (see theodore's post below for more details):** `Y*,I8Y*,N*,Y*` (Thanks to theodore)
Also, could you please state what other types of parameters could be used in different cases? (for future readers)
Thanks for your time and help. Please don't forget to upvote this post so users can find it.
[1]: https://support.illumina.com/content/dam/illumina-support/documents/documentation/software_documentation/bcl2fastq/bcl2fastq2_guide_15051736_v2.pdf | 1. Yes, though bcl2fastq2 won't be able to handle it in a single step. We commonly do this and we then process each flow cell in compatible chunks, using `--tiles`. As an example, if the first two lanes of a flow cell have compatible indices (both in number and length) then you need `--tiles s_1,s_2`. You then also need multiple output directories per flow cell.
2. See above. In short, you use one `--use-bases-mask` at a time.
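For instance, a minimal sketch of two demultiplexing passes over the same run (run folder, sample sheets, and masks here are hypothetical):

    bcl2fastq --runfolder-dir /path/to/run --output-dir Unaligned_dual \
        --sample-sheet dual_lanes.csv --tiles s_1,s_2 --use-bases-mask Y*,I8,I8,Y*
    bcl2fastq --runfolder-dir /path/to/run --output-dir Unaligned_single \
        --sample-sheet single_lanes.csv --tiles s_3,s_4 --use-bases-mask Y*,I6nn,n*,Y*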
Note that unless you have a mixture of either barcode lengths between lanes or barcode strategies (dual vs. single) you don't actually need `--use-bases-mask` at all.
For PE and no index you could use `--use-bases-mask Y*,Y*`, unless you used an index run. For a single index it'd then be `Y*,I6N,Y*`. | biostars | {"uid": 344768, "view_count": 15056, "vote_count": 2}
Hi,

I'm having trouble removing duplicates using Picard tools on SOLiD data. I get a regex-not-matching error.

The reads have the following names:
```
22_758_632_F3
604_1497_576
124_1189_1519_F5
358_1875_702_F5-DNA
```
And I don't think Picard tools is able to parse these read names with its default regex.

I tried to change the default regex. This time it does not throw an error, but it takes too long and dies (out of memory). I suspect I'm not giving the right regex. Here is my command:
```
java -jar $PICARD_TOOLS_HOME/MarkDuplicates.jar I=$FILE O=$BAMs/MarkDuplicates/$SAMPLE.MD.bam M=$BAMs/MarkDuplicates/$SAMPLE.metrics READ_NAME_REGEX="([0-9]+)_([0-9]+)_([0-9]+).*"
```
Any help is appreciated. Thanks!
| I was able to fix the issue by adding `-Xmx16g` and increasing the RAM size. Apparently the RAM was not sufficient.
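The full command from the question would then become:

```
java -Xmx16g -jar $PICARD_TOOLS_HOME/MarkDuplicates.jar I=$FILE O=$BAMs/MarkDuplicates/$SAMPLE.MD.bam M=$BAMs/MarkDuplicates/$SAMPLE.metrics READ_NAME_REGEX="([0-9]+)_([0-9]+)_([0-9]+).*"
```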
| biostars | {"uid": 122074, "view_count": 2633, "vote_count": 1} |
Hi everyone, I am doing some plasmid genome assembly with SPAdes. After assembly I used SSPACE for scaffolding, but there are some gaps in each of the draft genomes. I can fill the gaps by PCR, but I want to reduce the number of gaps in silico. Can anyone suggest how to reduce the gaps? If I map the reads back to the contigs, will that give any promising result?
| [GapCloser](http://soap.genomics.org.cn/soapdenovo.html) from SOAPdenovo works perfectly for me; a minimal sketch follows below.
To use it, you will need paired-end or mate-pair reads.
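A minimal sketch with hypothetical file names — first a library config (same format as SOAPdenovo's):

```
# gapcloser.cfg
max_rd_len=100
[LIB]
avg_ins=300
reverse_seq=0
asm_flags=3
q1=reads_1.fastq
q2=reads_2.fastq
```

then the run itself:

```
GapCloser -a scaffolds.fasta -b gapcloser.cfg -o gapclosed.fasta -t 4
```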
| biostars | {"uid": 85431, "view_count": 12133, "vote_count": 2} |
Hello,

I wish to filter from a big FASTA file only the sequences related to chrs 1-22, X and Y.
An example of a FASTA sequence is:
```
>ENSG00000119314|ENST00000210227|PTBP3|9|-1|115024785
GGGTGGCAGGTGCCTGTAATCCCAGCTACTCCAGAGGCTGAGGCAGGGGAATTGCTTGAG
CCTGGGAGGCAGAGGTTGCAGTGAGCCGAGATTGTGCCACTGCACTCCAGCCTGGAGTCT
CACTTTGTCACACAGGGTGGAGTGCAGTGGTGTGATCTCGGCTCACTGCAACCTCTGCTT
ACCGGGTTGAGATTCTCCTGTCTCAACCTCCTGAGTAGCTGGGATTACAGGCGTGCACCA
CCAAGCCAGACTAATTTTCCTATTTTTAGTAGAGATGGGGGGTTTCACCATGTTGGCCAG...
>ENSG00000236011|ENST00000211377|GPANK1|HSCHR6_MHC_COX|-1|31616421
CCCTATTCCTACCTAACCTCCCCTCAGGACTCAGGCTCCAATGTGTTGAGCCCCAACTCC
TTCCCATAAGACTGCCACACGGTGCTTTCCTTTCCCTTCTTCAACACTCACCAATGGGAA
GCATTGGCTGGTTCTCACAGTACACACGAGGACAGTAACCAAAGTCTCCTTGCTGGTACT
TTTCCAACTGAGGTGAATACAATGGAAGGGGTTGGCAGGTAGATGTAAAGAAGAGGCAAC
TCCCTTCGCAGCCCAACCCATACCACTCTGTCCCCCACTCCTCCCACCTCTGTCCAGAGG
CCCCTTCTCTGGACTAGACGGGCTCTCAAACTTCTGTGTTGCCTTTCTTCCAATTAGGCA
GGCTACAAACCATCAGAGCCATTTGTTGTTTGTTCCTTGAGGAAGAGGCAGTCTATCACA
ACTCTCTGATTCAAGGTCTGTCTCCCTCCCTGAAAACAATCCCTTCAGGATGACCCCCAA...
>ENSG00000087494|ENST00000201015|PTHLH|12|-1|28115255
TCCGCTCACGGGCCCCGAGACCCCCGAAGTTCCCATGGAGCCTAAGATCCCCAGGAGCCA
AGCCTGCCCCGTCCCTGCGGATCAGCTTCCTAATGGGCGACCCAAGTCTATCGCAGGCGG
TGGGGATGAGGACGCTGGGTGGGAGGAGGGGAGGGGAGGCTGAAAAAGATCATCCCCCTT
GCCCTAAGGCCTCTCCCAAGACCCTGGACCCCTGCCCTAAGAGACTCAGGCCTCCCTTGC
TGCAGTGGGAGCGCAAACACCAGGGCAGGAGACTCCAGAGAAGGAGCGCATAACTCAACG
TTTGCTCTCCTGAAGCCTTATTTCTGATAAAAATTACAGAAAAGTTAGGCAGGATCCAAA
GACACCGTAATGACCAGCTCAAAGCCAAACAGACAGGACATCCAGTGCGGGTGTCTGGAT...
```
As you can see, in the fourth place after "|" there is the chromosome name:
in the first one it is: 9
in the second one: HSCHR6_MHC_COX
and in the third one: 12
I want to create a new FASTA file containing only the 9 and 12 sequences and, more generally, the 1-22, X and Y sequences.
What is the fastest way to do it?
Thanks,
Tom.
| You can use [samtools](http://samtools.sourceforge.net/) for that:

```
#index a genome
samtools faidx human_genome.fa
#select chromosomes or regions
samtools faidx human_genome.fa chr1 chr2:1-2000 [...] > human_selected.fa
```
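Note that the headers in the question are transcript records whose 4th pipe-delimited field holds the chromosome, rather than plain chromosome names, so `samtools faidx` can't select on them directly. A minimal awk sketch filtering on that field (assuming the header layout shown above):

```
awk -F'|' '/^>/ {keep = ($4 ~ /^([1-9]|1[0-9]|2[0-2]|X|Y)$/)} keep' input.fa > filtered.fa
```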
| biostars | {"uid": 49773, "view_count": 24961, "vote_count": 6} |
Hi all,
I've come across a problem in PLINK when trying to do a Fishers exact test. The command I'm using is as follows:
plink --file test --fisher --allow-no-sex --1
And the error I get is:
```
ERROR: Locus 1:54208 has >2 alleles
Individual Ind3 Ind3 has genotype [ G G ] but we've already seen [ A ] and [ T ]
```
I've checked my file rigorously and the data is indeed 'GG' with no A's or T's nearby! I also have no missing data. The length of each line (i.e. for each individual) is consistent throughout. I've tried both tab- and space-delimited files, but no difference. I haven't found any special characters etc. either (using vi :set list).
Interestingly, I've taken Ind3 out of the file and re-run the test, but the same error is thrown up (but now obviously on Ind4, which is now on line 3).
Any ideas? | Plink requires that sites be biallelic. If ANY other individual has a nucleotide/nucleotides that make it multiallelic at that site, then plink fails.
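A quick way to check which loci really carry more than two alleles in a .ped file — a minimal awk sketch (genotypes start at column 7, two whitespace-delimited allele columns per SNP, 0 = missing; the file name follows the question):

```
awk '{
  for (i = 7; i <= NF; i++) {
    snp = int((i - 7) / 2) + 1           # SNP index for this allele column
    if ($i != "0") seen[snp, $i] = 1     # record each non-missing allele per SNP
  }
}
END {
  for (k in seen) { split(k, a, SUBSEP); n[a[1]]++ }
  for (s in n) if (n[s] > 2) print "SNP #" s " has " n[s] " alleles"
}' test.ped
```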
Barring this, your file is formatted incorrectly. From the plink manual:
> Genotypes (column 7 onwards) should also be white-space delimited; they can be any character (e.g. 1,2,3,4 or A,C,G,T or anything else) except 0 which is, by default, the missing genotype character. All markers should be biallelic. All SNPs (whether haploid or not) must have two alleles specified. Either Both alleles should be missing (i.e. 0) or neither. No header row should be given. For example, here are two individuals typed for 3 SNPs (one row = one person):
>
> FAM001 1 0 0 1 2 A A G G A C
> FAM001 2 0 0 1 2 A A A G 0 0
> ...
>
> The default missing genotype character can be changed with the --missing-genotype option, for example:
>
> plink --file mydata --missing-genotype N | biostars | {"uid": 129442, "view_count": 6966, "vote_count": 1} |
Dear BioStars Community,

I have a problem with an alignment to the transcriptome.

I have 8 RNA-Seq libraries sequenced on an Illumina HiScanSQ system in one lane (2x100bp, paired-end) per sequencing run. These 8 libraries (1 pool) were put into two sequencing runs to obtain a decent number of reads. After demultiplexing (using bcl2fastq-1.8.4) the reads were trimmed using TrimGalore and aligned to the previously assembled transcriptome (because there is no reference genome for the organism - *Pinus sylvestris* - I am trying to analyze...) with Bowtie2-2.2.6. In the case of 7 libraries there was almost no difference in alignment efficiency (~85-95%, with ~60-85% of uniquely mapped reads), but in the case of one library something strange happened:
**Run1:**
```
11109859 reads; of these:
  11109859 (100.00%) were paired; of these:
    1701658 (15.32%) aligned concordantly 0 times
    6666961 (60.01%) aligned concordantly exactly 1 time
    2741240 (24.67%) aligned concordantly >1 times
    ----
    1701658 pairs aligned concordantly 0 times; of these:
      11078 (0.65%) aligned discordantly 1 time
    ----
    1690580 pairs aligned 0 times concordantly or discordantly; of these:
      3381160 mates make up the pairs; of these:
        3218192 (95.18%) aligned 0 times
        86430 (2.56%) aligned exactly 1 time
        76538 (2.26%) aligned >1 times
85.52% overall alignment rate
```
**Run2:**
```
14719563 reads; of these:
  14719563 (100.00%) were paired; of these:
    7641835 (51.92%) aligned concordantly 0 times
    4991995 (33.91%) aligned concordantly exactly 1 time
    2085733 (14.17%) aligned concordantly >1 times
    ----
    7641835 pairs aligned concordantly 0 times; of these:
      7874 (0.10%) aligned discordantly 1 time
    ----
    7633961 pairs aligned 0 times concordantly or discordantly; of these:
      15267922 mates make up the pairs; of these:
        15039673 (98.51%) aligned 0 times
        94443 (0.62%) aligned exactly 1 time
        133806 (0.88%) aligned >1 times
48.91% overall alignment rate
```
So my question is: **what should I do to find out what went wrong?** I excluded (maybe too soon...) human error, because this was the same pool used for both runs (from one Eppendorf tube).
I also did FastQC on demultiplexed and trimmed reads - links for this library with low alignment efficiency are provided below:

Run1 (the good one), demultiplexed: http://twrzes.wtvk.pl/run1_R1_fastqc.html and http://twrzes.wtvk.pl/run1_R2_fastqc.html

Run1, after trimming: http://twrzes.wtvk.pl/run1_R1_trimmed_fastqc.html and http://twrzes.wtvk.pl/run1_R2_trimmed_fastqc.html

Run2 (the bad one), demultiplexed: http://twrzes.wtvk.pl/run2_R1_fastqc.html and http://twrzes.wtvk.pl/run2_R2_fastqc.html

Run2, after trimming: http://twrzes.wtvk.pl/run2_R1_trimmed_fastqc.html and http://twrzes.wtvk.pl/run2_R2_trimmed_fastqc.html
**Command-line commands I used for:**

**1) Demultiplexing:**
```
/path/to/configureBclToFastq.pl --input-dir /path/to/folder/with/BCLs/Data/Intensities/BaseCalls --output-dir /path/to/folder/with/BCLs/Unaligned --sample-sheet /path/to/folder/with/BCLs/sample-sheet.csv --fastq-cluster-count 0 --mismatches 1 --with-failed-reads
```
**2) Trimming (TrimGalore-0.4.0, a wrapper for cutadapt-1.8.3):**
```
trimgalore --paired --quality 20 --illumina --stringency 1 -e 0.2 --length 40 -o /path/to/trimmed/fastq --trim1 run1_R1.fastq run1_R2.fastq
```
**3) Alignment (Bowtie2-2.2.6):**
```
bowtie2 -p 12 -I 0 -X 2000 --dovetail --very-sensitive-local -N 1 -x /path/to/index/index -1 run1_R1_trimmed.fastq -2 run1_R2_trimmed.fastq -S /path/to/aligned/sam/run1.sam
```
If you need any additional info, I would be more than happy to provide it.

Thank you very much for your efforts on solving this problem.

Kind regards,
Tomasz Wrzesinski

--
Tomasz Wrzesinski, MSc
PhD Student
Laboratory of High Throughput Technologies
Institute of Molecular Biology and Biotechnology
Faculty of Biology
Adam Mickiewicz University in Poznan
Umultowska 89/1.117
61-614 Poznan, Poland
tel. +48 61 829 5833
e-mail: [email protected]
| My first approach would be to assemble the run2 unmapped reads and blast the contigs, looking for contaminants. Alternatively, you could blast a sample of the unmapped raw reads. It may be interesting to assemble and map run1 unmapped reads as well, as a control.
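A minimal sketch of pulling out the unmapped pairs for assembly (file names are hypothetical; flag 12 keeps pairs where both mates are unmapped; older samtools releases call the second command `bam2fq`):

```
samtools view -b -f 12 run2.bam > run2_unmapped.bam
samtools fastq -1 unmapped_R1.fastq -2 unmapped_R2.fastq run2_unmapped.bam
```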
Are you using a stranded or unstranded library preparation protocol?

Looking at the FastQC reports, both runs seem just fine. The only suspicious thing I noticed is that GC content seems to be slightly different (1%-2%) between runs, and on run2 %A is consistently higher than %T.
| biostars | {"uid": 161695, "view_count": 2480, "vote_count": 1} |
Dear all,
"TO TRIM OR NOT TO TRIM?"
My PE RNA-seq library prep of human brain tissue was made with the TruSeq Illumina kit A using index 5, and I've got a few yellow warnings for which I'd like to know what you'd do.
I found a yellow warning for overrepresented sequences - none are Illumina adapters/index. When I align the multiple overrepresented sequences, they mostly overlap, and when I blast that sequence this is the result:
[Homo sapiens uncharacterized LOC105378179 (LOC105378179), transcript variant X2, ncRNA][1]
Should I REMOVE this sequence using Trimmomatic, since it's overly represented?
Lastly, I have a warning on per-sequence GC content.
I thought of doing a "mild" trimming of the reads using trimmomatic (`LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36`) to remove low-quality bases, and that's all. What would you recommend?
[1]: http://blast.ncbi.nlm.nih.gov/Blast.cgi#alnHdr_767982462 | For what it's worth, I always clean my raw data, at the very least for poor-quality base calls or poor-quality reads in general. So do my colleagues. The real (and somewhat longer) answer to your question, however, is rooted in what type of data you're generating and what you want to ultimately do with it.
If you have shotgun libraries and are looking to assemble a whole genome, keeping in low-quality reads and bases increases complexity and can dramatically increase run time. It's pretty striking; I've seen assemblers get 'hung up' trying to sort out the k-mer graph, and the problem can disappear once poor reads are removed. This also applies to duplicates.
If you're just mapping to a reference and calling variants, it's less of a deal nowadays than it was a few years ago. BWA's MEM algorithm can soft-clip reads to improve mapping quality, and this is useful if you have residual adapters or low-quality spans at the beginning. I see this as a secondary bonus of sorts, but I would still trim my reads.
Also, you might have a non-random distribution of k-mers or subsequences represented just based on your library prep. Imagine that you PCR a single locus and then make a library out of it. You will definitely have an overrepresentation. Scaling up, say you targeted and sequenced the exome. Again, your distribution of k-mers might be non-random because you might expect to see certain motifs overrepresented (start/stop codons, for example).
So, I would clean my raw data using a series of best practices (remove low-quality bases/reads/adapters, identify overlaps, dedup - but the dedup doesn't apply to your expression data). I would also be a bit leery to just chop bases off for no reason other than a summary report suggests overrepresentation. The question to ask is, "Will this affect the biological interpretation of my data systematically?"
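To make that concrete, a full Trimmomatic paired-end call along the lines the question proposes, with adapter clipping added (file names and the version number are hypothetical; the TruSeq3-PE.fa adapter file ships with Trimmomatic):

    java -jar trimmomatic-0.36.jar PE -phred33 \
        in_R1.fastq.gz in_R2.fastq.gz \
        out_R1.fastq.gz out_R1_unpaired.fastq.gz \
        out_R2.fastq.gz out_R2_unpaired.fastq.gz \
        ILLUMINACLIP:TruSeq3-PE.fa:2:30:10 LEADING:3 TRAILING:3 SLIDINGWINDOW:4:15 MINLEN:36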
I would love to hear others' thoughts. | biostars | {"uid": 144880, "view_count": 19921, "vote_count": 2} |
I am trying to convert a csv file to a set of arrays with an ExpressionTool and have a piece of JavaScript that executes as intended when calling:
node javaScript.js
Due to my lack of experience with JavaScript I use googled solutions, and when executing the script as part of a CWL pipeline it crashes. The problematic line is:
var fs = require('fs')
It results in a ReferenceError for require. What I have found seems to point toward fs being a server-side feature; I can only guess, but perhaps CWL runs the script as a client-side script?
The alternative method I found included FileReader, but that doesn't seem to be part of the node environment.
Is there a correct way of doing this? I'm at a loss...
| The ```require``` function is a feature available in nodejs ("server side javascript") to import other javascript modules into the current javascript file.
When using the ```InlineJavascriptRequirement``` requirement in a cwl CommandLineTool or in an ExpressionTool, the cwl engine will try to locate a javascript interpreter. If you use cwltool and you have nodejs installed, the javascript code included in your CommandLineTool or ExpressionTool will be passed to nodejs to be executed. However, I do not think that such javascript code can include instructions to import other nodejs modules by calling the ```require``` function.
One way to work around not being able to use the require function would be to implement the needed processing completely and solely with the javascript code directly included as an expression in your CommandLineTool or ExpressionTool.
Here is an example where a piece of javascript code parses the contents of the csv file into an object whose keys are line numbers and whose values are arrays of strings, one per line in the csv.
Let's assume this csv file:
data.csv
A,B,C,D
E,F,G,H
I,J,K,L
The cwl job file is:
expression.yaml
#!/usr/bin/env cwltool
cwl:tool: expression.cwl
datafile:
class: File
path: data.csv
The expression tool file is:
expression.cwl
#!/usr/bin/env cwl-runner
cwlVersion: v1.0
class: ExpressionTool
requirements:
- class: InlineJavascriptRequirement
inputs:
filename:
type: string
outputBinding:
outputEval: $(inputs.datafile.basename)
filecontent:
type: string
outputBinding:
outputEval: $(inputs.datafile.contents)
datafile:
type: File
inputBinding:
loadContents: true
outputs:
processedoutput:
type: Any
expression: "${var lines = inputs.datafile.contents.split('\\n');
var nblines = lines.length;
var arrayofarrays = [];
var setofarrays = {};
for (var i = 0; i < nblines; i++) {
arrayofarrays.push(lines[i].split(','));
setofarrays[i] = lines[i].split(',');}
return { 'processedoutput': setofarrays } ;
}"
This will produce the following results:
Final process status is success
{
"processedoutput": {
"1": [
"E",
"F",
"G",
"H"
],
"0": [
"A",
"B",
"C",
"D"
],
"2": [
"I",
"J",
"K",
"L"
]
},
"filecontent": "A,B,C,D\nE,F,G,H\nI,J,K,L",
"filename": "data.csv"
}
The two outputs ```filename``` and ```filecontents``` are not necessary, but may help with exploring how this works.
The question described the desired data structure for the result as a "set of arrays"; an example of the csv file and the desired result might help. As it is, I am not sure if "set" was referring to the Set class available in ECMAScript 6 (a recent version of javascript). The JSON types available for cwl outputs include arrays and objects, so the example shows how to convert the csv file content into an object whose property values are arrays of strings, and the keys are the line numbers. If an array of arrays is desired instead, the code can be changed in the last line by replacing ```return { 'processedoutput': setofarrays } ;``` with ```return { 'processedoutput': arrayofarrays } ;```
I hope this helps... | biostars | {"uid": 226272, "view_count": 4887, "vote_count": 2} |
I'm just trying to use samtools mpileup and it is aborting without showing any error for some bacterial genomes; with the file given there are two variants and the program dies.
The error is just:
```
[mpileup] 1 samples in 1 input files
<mpileup> Set max per-file depth to 8000
Aborted (core dumped)
```
The reference file (indexed with bwa) is: https://drive.google.com/file/d/0BzPjZ1hM-XWPMmxucnpDUjdpUFE/view?usp=sharing
The sorted bam file is: https://drive.google.com/file/d/0BzPjZ1hM-XWPUWw4NkdhM0E3Vm8/view?usp=sharing
The call is:
samtools mpileup -uf SaureusTCH1516_nt_genome.fasta 446.srt.bam
To obtain the sorted bam file:
First, I trimmed the paired-end reads by quality and removed adapters when found.
Second, the trimmed reads were split into two different files, one for each end of the sequencing data, resulting in files without quality information.
Third, I aligned the read files to the reference with bwa aln and bwa sampe.
Fourth, I created the bam file with samtools view | sort.
And finally, I indexed the sorted bam file, resulting in the file given in the link.
The version of samtools I'm using is 1.1 using htslib 1.1
Any hints on what I could be doing wrong, or what may be happening?
Thanks in advance | You have a corrupted alignment (for read #1 of HISEQ:185:HBB8WADXX:2:1105:8685:78337). I haven't determined what's actually wrong with its encoding (well, it's with the auxiliary tags), but you can simply avoid the problem with:

    samtools mpileup -Ruf SaureusTCH1516_nt_genome.fasta 446.srt.bam
| biostars | {"uid": 132723, "view_count": 4946, "vote_count": 2} |
I am unable to view my data, which I was able to view and manipulate with similar commands prior to this. I am running on a Linux server. This is what I run and the errors I get:
```
tabix sequence.snps.vcf.gz chr37>chr37.vcf
bgzip -c chr37.vcf > chr37.vcf.gz
tabix -p vcf chr37.vcf.gz >chr37.vcf.gz.tbi
```
ERROR:
tabix: the index file already exists. Please use '-f' to overwrite
I saw that I had the tbi file, so I thought I could just run this to see some of the file:
```
tabix chr37.vcf.gz chr37:1-20000
[ti_index_load] wrong magic number
[ti_index_load] fail to load the index: chr.37.vcf.gz.tbi
[tabix] failed to load the index file
```
I removed the index file and remade it, but that didn't fix the problem. I tried:
```
tabix -p -f vcf chr37.vcf.gz > chr37.vcf.gz.tbi
[main] unrecognized preset
```
I tried moving the `-f` in front of the `-p`, removing the `-p`, and removing the `vcf`; I still get the same error. I even reloaded the `sequence.snp` file and remade the files. I have no idea what the issue is. Any help would be greatly appreciated. | I believe the command you want for that last part is just:
    tabix -f -p vcf chr37.vcf.gz
It automatically creates the appropriate index file (note that `-p` takes the preset name as its argument, so `-f` has to come before `-p vcf`). When you add the redirect after it, it immediately creates an empty file and prepares to write stdout to that file. Then, when tabix checks, it sees that a file by that name exists. | biostars | {"uid": 176877, "view_count": 11467, "vote_count": 1}
I have some RNA sequencing reads to align to the human reference genome. I found the genome FASTA files on both [GENCODE][1] and [ENSEMBL][2]: `GRCh38.p13.genome.fa.gz` and `Homo_sapiens.GRCh38.dna.toplevel.fa.gz`
But after unzipping them, I found that they are 3.1G and 60G respectively. Why is that? And which one should I use? (considering the purpose of the project is to detect gene fusion from the sequencing reads).
[1]: http://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_38/
[2]: http://ftp.ensembl.org/pub/release-104/fasta/homo_sapiens/dna/ | The `toplevel` file from Ensembl includes haplotype/patch regions padded out to full chromosome length with N's. That is why it is huge compared to the GENCODE file. Use the Ensembl primary assembly file, which is equivalent to GENCODE.
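For instance, to grab the primary assembly for the release in the question's URL (release 104; adjust the release number as needed):

    wget ftp://ftp.ensembl.org/pub/release-104/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.primary_assembly.fa.gz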
From README at Ensembl:
---------
TOPLEVEL
---------
These files contain all sequence regions flagged as toplevel in an Ensembl
schema. This includes chromosomes, regions not assembled into chromosomes and
N padded haplotype/patch regions. | biostars | {"uid": 9468892, "view_count": 1837, "vote_count": 1} |
Hi,
I am working in the R environment and I have a long list of gene names; I am showing the first few elements of it here:
```
[1] SPNCRNA.1436,omh5,snR95
[2] snR46
[3] snR10
[4] SPNCRNA.1651,SPNCRNA.515
[5] snR42
[6] SPNCRNA.1094,SPNCRNA.1095,SPRRNA.47,SPRRNA.48
[7] snR88
[8] SPNCRNA.497
[9] SPSNORNA.54
[10] snoR39b
```
I am wondering if there is any way to split the entries with several gene names in them into individual ones? The list is very long.
Thanks! | To elaborate a little on RamRS's answer:
```
x <- c("SPNCRNA.1436,omh5,snR95","snR46", "snR10", "SPNCRNA.1651,SPNCRNA.515")
results <- c()
for (i in 1:length(x)){
n <- 1
xi <- strsplit(x[i], ",")
results <- c(results, xi[[1]][n])
# print(xi[[1]][n], sep="")
while (!is.na(xi[[1]][n+1])){
n <- n+1
# print(xi[[1]][n], sep="")
results <- c(results, xi[[1]][n])
}
}
results
[1] "SPNCRNA.1436" "omh5" "snR95" "snR46" "snR10" "SPNCRNA.1651" "SPNCRNA.515"
```
--added some edits--
RamRS already gave you perfect code; I still modified mine above.
With this you can then save the results in whichever way you prefer.
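For what it's worth, base R can also do the whole thing in one idiomatic line (same `x` as above):

```
unlist(strsplit(x, ","))
```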
`x` is your initial vector with the names, if it's part of a column then `x` will be `myMatrix[,(# col with x)]` or `myDataFrame$x` | biostars | {"uid": 129861, "view_count": 2766, "vote_count": 1} |
I am using HOMER's mergePeaks tool (homer/v4.6) to find the overlapping peaks between different bed files, for downstream visualization. However, the resulting number of peaks does not match the number of peaks that I started with. I have files that look like this:
# a set of ChIP-Seq peaks
$ wc -l Sample1.bed
106536 Sample1.bed
$ head Sample1.bed
#chr start end name conc
chr1 10009 10438 chr1:10009-10438 1.93238645557258
chr1 710223 710919 chr1:710223-710919 2.6191877990312
chr1 712387 715259 chr1:712387-715259 6.36598230427597
chr1 752136 752850 chr1:752136-752850 2.77465732982152
chr1 755212 755710 chr1:755212-755710 2.15283141685661
chr1 756766 759022 chr1:756766-759022 4.05288454015793
chr1 760940 763718 chr1:760940-763718 6.44867373331963
chr1 772008 781353 chr1:772008-781353 6.86725737719796
chr1 800093 801813 chr1:800093-801813 4.41658071582566
# TSS regions from Gencode
$ wc -l gencode.bed
105785 gencode.bed
$ head gencode.bed
chr1 1868 22010
chr1 19553 39554
chr1 20266 40366
chr1 42472 62473
chr1 43048 63049
chr1 52947 72948
chr1 59090 79091
chr1 121024 141025
chr1 150445 170446
chr1 307719 327730
And the command I am using is this:
mergePeaks Sample1.bed gencode.bed -prefix mergepeaks -venn venn.txt
The stdout stream even displays the correct number of peaks:
mergePeaks Sample1.bed gencode.bed -prefix mergepeaks -venn venn.txt
Max distance to merge: direct overlap required (-d given)
Merging peaks...
Comparing Sample1.bed (106535 total) and Sample1.bed (106535 total)
Comparing Sample1.bed (106535 total) and gencode.bed (105785 total)
Comparing gencode.bed (105785 total) and Sample1.bed (106535 total)
Comparing gencode.bed (105785 total) and gencode.bed (105785 total)
However, the resulting files do not contain the full number of peaks.
$ cat venn.txt | cut -f3-
Total Name
13229 gencode.bed
40836 Sample1.bed
19649 Sample1.bed|gencode.bed
13229 + 40836 + 19649 = 73714 peaks, which is less than the 100k that both sets started with. This is also reflected in the line counts for these files:
$ wc -l mergepeaks_*
40837 mergepeaks_Sample1.bed
19650 mergepeaks_Sample1.bed_gencode.bed
13230 mergepeaks_gencode.bed
73717 total
Any idea what is happening to these missing peaks? | After a thorough investigation, I ran mergePeaks on the original gencode.bed file by itself and found that it contained a number of peaks that overlapped each other, thus reducing the total number of true peaks by the same amount as were missing. This was corroborated by running `bedtools merge -i` on the file and producing output with the same number of entries. Mystery solved. The original HOMER outputs were indeed accurate. | biostars | {"uid": 189867, "view_count": 4550, "vote_count": 1}
Hello,
I am trying extract certain information from a gbk file I can extract the locus tag and the amino acid sequence however I am struggling to extract the gene location as it not in the same format in the file e.g.: `/locus_tag="NCTC86_00002"`
This is my script so far:
```py
from Bio import GenBank
from Bio import SeqIO
gbk_filename = "HS.gb"
faa_filename = "HS_converted.faa"
input_handle = open(gbk_filename, "r")
output_handle = open(faa_filename, "w")
for seq_record in SeqIO.parse(input_handle, "genbank"):
print "Dealing with GenBank record %s" % seq_record.id
for seq_feature in seq_record.features :
if seq_feature.type=="CDS" :
assert len(seq_feature.qualifiers['translation'])==1
output_handle.write(">%s from %s\n%s\n" % (
seq_feature.qualifiers['locus_tag'][0],
seq_record.name,
seq_feature.qualifiers['translation'][0]))
output_handle.close()
input_handle.close()
print "Done"
``` | Hi,
the SeqFeature objects have a "location" attribute that contains the start/stop position of the feature.
```py
from Bio import SeqIO
gbk_filename = "HS.gb"
faa_filename = "HS_converted.faa"
output_handle = open(faa_filename, "w")
for seq_record in SeqIO.parse(gbk_filename, "genbank") :
print "Dealing with GenBank record %s" % seq_record.id
for seq_feature in seq_record.features :
if seq_feature.type=="CDS":
assert len(seq_feature.qualifiers['translation'])==1
output_handle.write(">%s from %s\n%s\n" % (
seq_feature.qualifiers['locus_tag'][0],
seq_record.name,
seq_feature.qualifiers['translation'][0]))
print('Start: %d, Stop: %d, Strand: %d'%(int(seq_feature.location.start),
int(seq_feature.location.end),
seq_feature.strand))
output_handle.close()
print "Done"
```
Hope this helps | biostars | {"uid": 142972, "view_count": 4607, "vote_count": 1} |
Dear all:
I want to obtain chi-square statistics for the following data element-wise. My apologies for asking this statistical question in this community; however, my data contains a list of overlap significance scores for 3 GRanges objects, and I want to get a global score element-wise. How can I get this in R?
# This is the data for which I want to get a global score element-wise:
[[1]]
NumericList of length 7
[[1]] 1e-22
[[2]] 1e-19
[[3]] 1e-18
[[4]] 1e-16
[[5]] 1e-24
[[6]] 1e-20
[[7]] 1e-15
[[2]]
NumericList of length 7
[[1]] 1e-24
[[2]] 1e-24
[[3]] 1e-20
[[4]] 1e-25
[[5]] 0.1
[[6]] 1e-19
[[7]] 1e-18
[[3]]
NumericList of length 7
[[1]] 1e-11
[[2]] 1e-11
[[3]] 1e-10
[[4]] numeric(0)
[[5]] numeric(0)
[[6]] 1e-15
[[7]] numeric(0)
if you wonder, the third list element contains numeric(0), which refers to non-overlapped regions, so I can replace it with zero:
    li.3 <- lapply(li.3, function(x) {
      res <- ifelse(length(x) > 0, x, 0)
    })
# this is a reproducible example:
data <- DataFrame(
v1=c(1e-22,1e-19,1e-18,1e-16,1e-24,1e-20, 1e-15),
v2=c(1e-24,1e-24,1e-20,1e-25,0.1,1e-19,1e-18),
        v3=c(1e-11,1e-11,1e-10,0,0,1e-15,0)) # numeric(0) entries replaced with 0 so the column has length 7
# my desired output: something like (just an element-wise example):
global fisher score of `(1e-22, 1e-24, 1e-11)` = ?
global fisher score of `(1e-19, 1e-24, 1e-11)` = ?
...
global fisher score of `(1e-24, 1e-01, numeric(0))` = ?
I want to get a global score element-wise. How can I get this in R? Alternatively, I would also like to see a Fisher exact test result for the above data. I will be grateful if anyone can give me any ideas for doing this. Thanks a lot
| You have a data frame with three columns:
> data
DataFrame with 7 rows and 3 columns
v1 v2 v3
<numeric> <numeric> <numeric>
1 1e-22 1e-24 1e-11
2 1e-19 1e-24 1e-11
3 1e-18 1e-20 1e-10
4 1e-16 1e-25 0e+00
5 1e-24 1e-01 0e+00
6 1e-20 1e-19 1e-15
7 1e-15 1e-18 0e+00
What confuses me is that this dataframe seems to contain p-values already. So what do you want to calculate exactly?
You may combine p-values, assuming they are independent, using different approaches. The simplest is just taking their mean (see [When combining p-values, why not just averaging?][1]).
> data$global = apply(data[1:3], 1, mean)
> data
DataFrame with 7 rows and 4 columns
v1 v2 v3 global
<numeric> <numeric> <numeric> <numeric>
1 1e-22 1e-24 1e-11 3.333333e-12
2 1e-19 1e-24 1e-11 3.333333e-12
3 1e-18 1e-20 1e-10 3.333333e-11
4 1e-16 1e-25 0e+00 3.333333e-17
5 1e-24 1e-01 0e+00 3.333333e-02
6 1e-20 1e-19 1e-15 3.333700e-16
7 1e-15 1e-18 0e+00 3.336667e-16
>
More accurate methods to combine p-values would include Fisher's method. See for example http://stats.stackexchange.com/questions/168181/r-package-for-combining-p-values-using-fishers-or-stouffers-method for some R packages to do it.
For example:
> library(metap)
> data$global = apply(data[1:3], 1, function(df) sumlog(df)$p)
Warning messages:
1: In sumlog(df) : Some studies omitted
2: In sumlog(df) : Some studies omitted
3: In sumlog(df) : Some studies omitted
> data
DataFrame with 7 rows and 4 columns
v1 v2 v3 global
<numeric> <numeric> <numeric> <numeric>
1 1e-22 1e-24 1e-11 8.745181e-54
2 1e-19 1e-24 1e-11 7.855507e-51
3 1e-18 1e-20 1e-10 6.219311e-45
4 1e-16 1e-25 0e+00 9.540599e-40
5 1e-24 1e-01 0e+00 5.856463e-24
6 1e-20 1e-19 1e-15 7.855507e-51
7 1e-15 1e-18 0e+00 7.698531e-32
> sumlog(c(1e-22, 1e-24, 1e-11))
chisq = 262.4947 with df = 6 p = 8.745181e-54
>
[1]: http://stats.stackexchange.com/questions/78596/when-combining-p-values-why-not-just-averaging | biostars | {"uid": 200974, "view_count": 2385, "vote_count": 1} |
How to analyse Level 3 data (RPKM values) from TCGA? What kind of analysis could be performed using Level 3 data to find differentially expressed genes between normal and diseased samples? | For RNA-seq data you can use DESeq2 or EdgeR to perform a differential expression analysis. These tools are part of Bioconductor in R. However, both of these programs perform their own internal normalizations and they recommend you input the raw counts, not the RPKM or RSEM scaled estimates (for RNAseqV2 TCGA data). If you are downloading from the TCGA Portal each patient's data is in a separate file, but if you go to https://confluence.broadinstitute.org/display/GDAC/Dashboard-Stddata and click on the Open link beside the cancer of interest you will find tar files that contain merged text files with all patients in one file. Once you get all the raw counts or normalized counts in a single matrix you can analyze the data with any program that accepts a matrix of data; BUT make sure it is meant to be used on RNA-seq data because this type of data has different properties than microarray data and needs to be treated slightly differently when genes have low or zero read counts. In doing my own comparisons between DESeq2 and EdgeR, I have found I prefer DESeq2 results because it compensates for low read counts, which can artificially inflate fold changes. | biostars | {"uid": 143131, "view_count": 11934, "vote_count": 6}
Hi all,
I have multiple files that have the same (long) title format:
Lactococcus_lactis_subsp_cremoris_strain_number_file.gbk_fasta.fasta_proteins.fasta_results
I want to delete everything after ".fasta" so it would be something like:
Lactococcus_lactis_subsp_cremoris_strain_number_file.gbk_fasta.fasta
I tried to make a small bash script in order to do that quickly, but when I run it, nothing changes.
Here is my script:
#!/bin/bash
for file in /home/Documents/Folder;
do
cut -d_ -f1,2,3,4,5,6,7,8 <<< "$f";
done
There is obviously something wrong with it but I can't find where ...
If you have any ideas/suggestions for improvements, please, let me know.
Thank you for your help!
| `cut` operates on file contents, not file names - not unless used much differently. Plus, your loop variable is called `$file` while your loop uses something called `$f`.
You should go with sed. It could be done with bash parameter expansion too, but sed is easier.
In your case, you'd need something like
echo "mv ${file} $(echo $file | sed 's/.fasta.*/.fasta/')" #untested - the first "." in the sed expression might need escaping
If the above `echo`s `mv` commands as expected, remove the echo and the double quotes surrounding the `mv` command to execute the command. | biostars | {"uid": 455895, "view_count": 659, "vote_count": 1} |
Does anyone know how to download and install the R-GSEA package?
If I try to register via http://www.broadinstitute.org/cancer/software/gsea/software/software_index.html, I just get a 404 error.
Is there any other source for this package? | There are several packages implementing GSEA on Bioconductor; [gage](https://www.bioconductor.org/packages/release/bioc/html/gage.html) and [phenoTest](https://www.bioconductor.org/packages/release/bioc/html/phenoTest.html) are two examples, and surely someone will point out more.
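A minimal install sketch for those packages (current Bioconductor releases use BiocManager):

    if (!requireNamespace("BiocManager", quietly = TRUE))
        install.packages("BiocManager")
    BiocManager::install(c("gage", "phenoTest"))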
| biostars | {"uid": 166600, "view_count": 11494, "vote_count": 1} |
Hi,
I am using R Markdown in RStudio and I want to execute commands from a program using a bash chunk (```{bash}).
I have a program called samtools on my computer so when I execute it in the chunk, it works.
```{bash}
samtools
```
When I type :
```{bash}
which samtools
```
The output tells me samtools is located in the /usr/local/bin directory.
However, when I execute a program with vcftools , I get an error because Rstudio, does not know where the program is:
I have it in another directory on my computer.
How do I get Rstudio or Rmarkdown to execute vcftools from the bash chunk?
Is there a way that I can tell RMarkdown which directory to look in to find the program?
For example (something like) :
```{bash}
$vcftools = /Users/m.o.l.s/Programs_For_Bioinformatics/vcftools
```
or would I have to move all of the programs to /usr/local/bin?
Outside of RStudio, I have made aliases to the programs so they work fine in the terminal.
I made the alias by writing in my .bash_profile:
`alias bcftools=/Users/paths/to/where/the/program/is/installed`
but I added the path to vcftools to my export PATH in the bash profile completely.
| `.bash_profile` is executed when you start a login shell. My suspicion is that the `bash` R markdown cells are executed in a separate shell instance, that is not a login shell and so this is not executing your `.bash_profile` file when it starts.
It might be different with `.bashrc`, which might get executed when you start a bash shell in R Markdown (or it might not - for example, I'm pretty sure neither is run on an SGE submission script).
You could try explicitly adding `source ~/.bash_profile` to the start of your code chunk.
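For example, a chunk along these lines side-steps the problem (the path is the one from the question; note that aliases would not help here anyway, since bash does not expand aliases in non-interactive shells by default):

```{bash}
export PATH="$PATH:/Users/m.o.l.s/Programs_For_Bioinformatics/vcftools"
vcftools --version
```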
| biostars | {"uid": 400685, "view_count": 7174, "vote_count": 2} |
Hi.
I am new to performing RNA-Seq. I have got the raw sequencing reads (PE) for my samples from Arabidopsis thaliana plants. I have analysed these reads using FastQC. Now I have to do the alignment and mapping of these reads to the reference genome.
In this regard my first question is: should I do any further cleaning or processing of my reads before alignment? And do I have to merge the paired-end reads?
My second question is: which aligner should I use? I have tried BWA, Bowtie2 and STAR, and I got the maximum alignment with STAR.
The other question is which source I should use for the reference genome, like TAIR10, Araport11 or ftp://ftp.ensemblgenomes.org/pub/release-41/plants/fasta/arabidopsis_thaliana/dna/
I will be grateful if you can guide me in my analysis.
Best
Umesh
| You should :
- Not do further cleaning of the fastq files
- Not merge paired-end fastq files
- Use STAR for alignment (BWA and bowtie2 are meant for DNA, not RNA)
- For the genome you can use the ENSEMBL version : https://plants.ensembl.org/Arabidopsis_thaliana/Info/Index | biostars | {"uid": 357624, "view_count": 1130, "vote_count": 1} |
Hi,

I have a couple of contigs, and I would like to answer questions of the following sort:

a.) Are they unique?

b.) Which organism do they come from?

c.) Do they come from a protein-encoding gene?

d.) What is that protein's function, etc...

I am very new to bioinformatics (even biology, for that matter) so any help would be appreciated, even links to further reading.
Thanks.
| As suggested by Pierre, BLAST will pretty much answer all of your questions. You can use the LinkOut Links (usually on the right, use CTRL+F to find them) on the NCBI webpage to get additional information.
You should note one potential pitfall though: in order to properly search for proteins, your contigs should come from [mRNA sequencing](http://en.wikipedia.org/wiki/Messenger_RNA), because eukaryotic DNA is likely to contain [introns](http://de.wikipedia.org/wiki/Intron) in its sequence (which you can't make sense of that easily). If you're searching genomic sequences, finding the right BLAST parameters can be a bit tricky.
In most cases you will know a bit of information about your sequence that will help you apply the right searches. If you're going totally blind, doing a BLAST against NCBI's nr database is a good place to start (because most likely your sequence is not entirely new or unknown).
What you can try as well:
1. Trying to [find](http://www.ncbi.nlm.nih.gov/projects/gorf/) an [ORF](http://en.wikipedia.org/wiki/Open_reading_frame); protein-coding mRNAs will have a possible translation from nearly the beginning to nearly the end; [polyadenylation signals](http://en.wikipedia.org/wiki/Polyadenylation) (in eukaryotes) might provide additional help (there are 3 STOP codons - think about how likely it is for a given DNA sequence of length N to contain none by pure chance)
2. Use [NCBI's Conserved Domain Search](http://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi) to classify the protein
3. There are many specialized databases to find information as soon as you know what type your sequence is; for instance, there is [BRENDA](http://www.brenda-enzymes.org/) for enzymes or [RDP](http://rdp.cme.msu.edu/seqmatch/seqmatch_intro.jsp) for rRNA.
4. If it is a protein and you are interested in structure, you might want to take a look at the [PDB](http://pdb.rcsb.org/pdb/home/home.do).
Sorry if this was too basic, but you mentioned you were new to biology ;-) | biostars | {"uid": 3634, "view_count": 9319, "vote_count": 2} |
Hi everyone, I have a blastp result with an average identity percentage of 35%; is it an acceptable percentage?
If not, what is the minimal acceptable identity percentage?
Between identity percentage, e-value and bit score, which one should we focus on in order to find the best match? Thanks. | 35% identity means that 35% of the amino acids in your sequence match the other sequences in the database. There is no such thing as an "acceptable percentage". It always depends on what you are looking for:
--- if you have an unknown protein sequence and you would like to find homologous sequences, information about identity (even 35%) is valuable,
--- if you have a known protein and you need to confirm the sequence, 35% identity is low and may suggest that something went wrong during your analysis.
The E-value is very important, the lower the better.
Best,
Agata
| biostars | {"uid": 187230, "view_count": 53946, "vote_count": 6} |
Hello! Is it possible to retrieve information about variants of a single gene from the ExAC Browser in VCF format? I can only see the option to "Export table to CSV". | You can use tabix and query directly from the FTP:
tabix -h ftp://ftp.broadinstitute.org/pub/ExAC_release/current/ExAC.r0.3.1.sites.vep.vcf.gz 2:39967768-39967768 > exported.vcf | biostars | {"uid": 265577, "view_count": 3332, "vote_count": 1} |
Hi - I am exploring how many threads I can use for psiblast. The default value is 1 thread, and I want to know how high I can set this. Using
$ cat /proc/sys/kernel/threads-max
126335
I obtain the maximum number of processes available to the system. I somehow doubt that this is analogous to the number of threads I can use in the psiblast `num_threads` flag. Can anyone shed any light on this?
Thanks | Don't go near 126 thousand; the max threads for an operating system includes moving the hard-drives and networking, and everything else. See "ps aux" for a list of how many things are already active!
Choosing a threading level for a program is based on a dozen factors; usually you want to do it if you have more CPU cores sitting idle while the single-threaded version of psiblast is running at 100% of one core. Try two and see if you can get two cores to 100%. Maximally you'd benefit from the number of threads = number of CPU cores, unless there are other non-CPU constraints.
It depends on the system architecture and the software architecture. In my experience, a dual-processor, quad-core machine (for 8 threads max) can do something like compute pi in eight threads efficiently. If instead of computing pi you were reading big files from disk, and have only the one disk, you'll see any more than 1-2 threads slowing each other down as they have to wait for data. Running six threads accessing the hard drive will be slower than three. This ratio depends on how much of each resource is needed by each thread.
Something like sequence alignment (BWA or Bowtie etc.) needs to read a little data, then crunch a lot of CPU, so spinning up all 8 threads is fine; they'll wait their turn for data, then get out of sync from each other and end up with 100% disk utilization and almost 8x100% CPU.
If your processes are, for example, requesting data from a web server with some unknown delays, then you could run a dozen or a hundred threads and they'll wait and go when they can.
It also depends on the motherboard architecture. You probably only have 2 or 4 channels to access RAM, so running more than 4 threads that need high-volume access to RAM will also slow each other down.
The key word is contention. The usual solution is trial and error; you will have to measure the speed for various thread settings and choose an optimum for your task. I don't know about psiblast specifically, but it probably needs to access RAM quickly, and you'll see it get slower per thread after 5 threads. Maybe the optimum is 6-8, but I guarantee trying 1,000 simultaneously will not be faster than 10.
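A minimal trial-and-error sketch (query and database names are hypothetical):

    for t in 1 2 4 8; do
        ( time psiblast -query query.fa -db nr -num_threads $t -num_iterations 3 -out out_${t}.txt ) 2> time_${t}.log
    done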
Finally, of course is the level of parallelism available to the algorithm, sometimes BLAST has to work sequentially and will ignore your thread setting for some parts of the job, so these optimal settings can vary with the reference genome and query sequences. | biostars | {"uid": 103645, "view_count": 3596, "vote_count": 3} |
I am looking for a tool/script/pony to correct the REF column in a vcf file whenever that nucleotide doesn't match the reference genome, as supplied in fasta format.
It sounds like a common task but I could not find something. I did find `bcftools +fixref` but that only works for SNPs. My vcf files are from structural variants.
Cheers,
W | Hello https://www.biostars.org/u/24526/ ,
there is another `bcftools` plugin that should do this:
$ bcftools +fill-from-fasta input.vcf -- -c REF -f genome.fa > output.vcf
fin swimmer | biostars | {"uid": 347588, "view_count": 3945, "vote_count": 2} |
Hi Everyone,
I am working on whole genome sequencing and analysis of human genomes from the Illumina HiSeq platform with about 30X coverage. Each sample (human genome) has about 250-300 fastq.gz files, which I am processing with 'fastqc' for quality checking using the following command:
/usr/local/bin/fastqc -t 8 -f fastq -o OUT/ -casava *.gz -noextract
It is running fine and generating an equal number of "fastqc.zip" files, which I unzipped using unzip '*.zip'. So, here I have 2 questions:
1. Can I merge two or more fastq files and then run fastqc on those merged files? If yes, how should I merge those fastq files?
2. I have to manually check 250-300 fastqc folders to know the quality by opening each .html page. Is there any way I can get a summary of the overall quality of the fastq files in a flowcell?
Please let me know your comments. I'll be highly thankful to you.
Best,
Ravi | We have a script that will run fastqc and generate a [summary report][1] with the images from all the fastq files it was run on. You may also find it useful to systematically parse the fastqc_data.txt files from each run and combine the results that way.
The script is here, but may not be the most useful and documented thing ever... Depends on imagemagick to generate thumbnails...
https://github.com/metalhelix/illuminati/blob/cluster/scripts/fastqc.pl
Also uses this script:
https://github.com/metalhelix/illuminati/blob/cluster/scripts/thumbs.sh
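On the two questions directly — a minimal sketch (file name patterns are assumptions): gzipped FASTQ files can simply be concatenated, and the summary.txt inside each unzipped FastQC folder gives a machine-readable PASS/WARN/FAIL overview:

    # 1) merge lanes of the same sample before running fastqc
    cat sample_L00*_R1.fastq.gz > sample_merged_R1.fastq.gz
    # 2) tally PASS/WARN/FAIL per module across all FastQC output folders
    cat OUT/*_fastqc/summary.txt | cut -f1,2 | sort | uniq -c | sort -rn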
[1]: http://research.stowers.org/mcm/fastqc/fastqc_plots.html | biostars | {"uid": 141797, "view_count": 37455, "vote_count": 4} |
I have a BED file with some elements that are located in overlapping intervals. I added unique IDs and lengths in the 4th and 5th columns, respectively.
cat sorted_intervals.bed
[....]
NC_007117.7 52869911 52875049 NC_007117.7.27 5138
NC_007117.7 52869911 52870819 NC_007117.7.28 908
NC_007117.7 52869929 52870807 NC_007117.7.29 878
NC_007117.7 52869932 52870798 NC_007117.7.30 866
NC_007117.7 52869932 52870795 NC_007117.7.31 863
NC_007117.7 52869932 52870780 NC_007117.7.32 848
NC_007117.7 52869938 52870804 NC_007117.7.33 866
NC_007117.7 52869956 52870795 NC_007117.7.34 839
NC_007117.7 52874159 52875088 NC_007117.7.35 929
NC_007117.7 52874159 52875088 NC_007117.7.36 929
NC_007117.7 52874159 52875088 NC_007117.7.37 929
NC_007117.7 52874159 52875088 NC_007117.7.38 929
NC_007117.7 52874159 52875082 NC_007117.7.39 923
NC_007117.7 52874159 52875088 NC_007117.7.40 929
NC_007117.7 52874159 52875079 NC_007117.7.41 920
NC_007117.7 52874159 52875088 NC_007117.7.42 929
NC_007117.7 52874159 52875088 NC_007117.7.43 929
NC_007117.7 52874162 52875085 NC_007117.7.44 923
NC_007117.7 52874192 52875052 NC_007117.7.45 860
NC_007117.7 52874192 52875079 NC_007117.7.46 887
NC_007117.7 52874192 52875052 NC_007117.7.47 860
[....]
My goal is to find which elements are located in overlapping genomic regions and then select the one that spans the longest length. I also want to keep those elements that do not overlap with any others. To do so, in a previous analysis, I used the following command:
while read LINE ; do grep -wE "$LINE" sorted_intervals.bed | sort -k5,5nr | head -1 ; done < <(bedtools merge -i sorted_intervals.bed -c 4 -o count,collapse | awk '{print $5}' | sed 's/,/|/g')
which successfully outputs the element NC_007117.7.27 for the above example.
Now I want to repeat the same analysis and set up a cutoff value (let's say 10%) for the overlapping regions. That is, if two elements have overlap >=10%, keep the longest one; but if two elements have overlap <10%, keep both of them. Same as before, I also would like to keep elements that do not overlap with any other elements.
I have found similar questions, such as:
[Bed file: merging intervals that overlap a certain percentage][1]
[multiple bed - merge regions IF overlapping more than xx percent of size][2]
However, I am still not very sure how to do it, because when I run the following *bedops* command it gives me three different overlaps for the above example instead of just one as in *bedtools merge*. I also tried the option *--fraction-either* and it produced the same results. I think I might be misunderstanding something.
bedmap --count --echo-map-range --echo-map-id --fraction-both 0.1 --delim '\t' sorted_intervals.bed | uniq | grep "NC_007117.7.27"
21 NC_007117.7 52869911 52875088 NC_007117.7.28;NC_007117.7.27;NC_007117.7.29;NC_007117.7.32;NC_007117.7.31;NC_007117.7.30;NC_007117.7.33;NC_007117.7.34;NC_007117.7.41;NC_007117.7.39;NC_007117.7.35;NC_007117.7.36;NC_007117.7.37;NC_007117.7.38;NC_007117.7.40;NC_007117.7.42;NC_007117.7.43;NC_007117.7.44;NC_007117.7.45;NC_007117.7.47;NC_007117.7.46
8 NC_007117.7 52869911 52875049 NC_007117.7.28;NC_007117.7.27;NC_007117.7.29;NC_007117.7.32;NC_007117.7.31;NC_007117.7.30;NC_007117.7.33;NC_007117.7.34
14 NC_007117.7 52869911 52875088 NC_007117.7.27;NC_007117.7.41;NC_007117.7.39;NC_007117.7.35;NC_007117.7.36;NC_007117.7.37;NC_007117.7.38;NC_007117.7.40;NC_007117.7.42;NC_007117.7.43;NC_007117.7.44;NC_007117.7.45;NC_007117.7.47;NC_007117.7.46
Does anyone know what would be the best way to proceed?
Many thanks,
EDIT: I don't think I need the two elements to have a reciprocal overlapping percentage of 10%, just the smallest one among the two being compared. That is, if the smallest of the two elements has an overlap that is less than 10%, then both elements should be kept.
[1]: https://www.biostars.org/p/170298/
[2]: https://www.biostars.org/p/143898/ | Based on the sketch provided here:
![Example][1]
Here is a script that I used to evaluate overlaps two levels down, first looking for the longest interval in a merged region, and then looking at overlaps with intervals overlapping that longest interval.
This runs entirely within Python, no use of `bedops` or other command-line kits. This uses the `ncls` library, which has some Cython optimizations for fast interval overlap queries, along with PyRanges to do the first-pass merge operation.
Notes:
1. This is not recursive; some more complicated overlap arrangements might need a recursive approach, depending on how your overlap criteria must be applied.
2. This would need to be run on one chromosome's worth of data at a time, unless coordinates are first translated to an "absolute" coordinate scheme as described in a previous comment. The `bedextract` tool can be used to quickly split a BED file into separate chromosomes, e.g.,
for chr in `bedextract --list-chr in.bed`; do bedextract ${chr} in.bed > in.${chr}.bed; done
Code:
#!/usr/bin/env python
import sys
import io
import click
import ncls
import pandas as pd
import pyranges as pr
import numpy as np
import collections
test_intervals_str = '''Chromosome Start End Id
chr1 50 80 F
chr1 100 120 D
chr1 100 140 C
chr1 120 200 A
chr1 199 260 B
'''
def df_from_intervals_str(str):
'''
In:
str - (string) TSV-formatted intervals with header
Out:
df - (Pandas dataframe)
'''
intervals = io.StringIO(str)
return pd.read_csv(intervals, sep='\t', header=0)
def ncl_from_df(df):
'''
In:
df - (Pandas dataframe)
Out:
df_as_ncl - (NCLS object)
'''
return ncls.NCLS(df['Start'], df['End'], df.index)
def ncl_all_overlaps(a, b):
'''
In:
a - (NCLS object) set A
b - (Pandas dataframe) set B
Out:
(l_idxs, r_idxs) - (tuple) indices of set A and set B which overlap
'''
return a.all_overlaps_both(b['Start'].values, b['End'].values, b.index.values)
def test_ncl_all_overlaps(a, b):
'''
In:
a - (NCLS object) set A
b - (Pandas dataframe) set B
'''
print(ncl_all_overlaps(a, b))
def ovr_idxs_to_map(ovr):
'''
In:
ovr - (tuple) output from ncl_all_overlaps
Out:
m - (OrderedDict) mapping of overlaps by indices
'''
m = collections.OrderedDict()
for l_idx, r_idx in zip(ovr[0], ovr[1]):
if l_idx not in m:
m[l_idx] = []
m[l_idx].append(r_idx)
return m
def search_candidates(m, a, b, t):
'''
In:
m - (OrderedDict) mapping of overlaps by indices
a - (Pandas dataframe) set A (merged regions)
b - (Pandas dataframe) set B (input intervals)
t - (float) threshold for overlap
Accepted elements are written to standard output stream
'''
for ak, bv in m.items():
if len(bv) == 1:
# print disjoint element
b_row = b.iloc[[bv[0]]]
sys.stdout.write('{}\n'.format('\t'.join([str(x) for x in b_row.values.flatten().tolist()])))
else:
# get set A's merged region length
a_row = a.iloc[[ak]]
a_len = a_row['End'].item() - a_row['Start'].item()
# iterate through set B to get the first, longest overlap
mo = 0
mi = -1
mv = -1
for i, v in enumerate(bv):
b_row = b.iloc[[v]]
co = (b_row['End'].item() - b_row['Start'].item()) / a_len
if co > mo:
mo = co
mi = i
mv = v
# if the longest element does not meet the overlap
# threshold, we skip to the next merged region
if mo <= t:
continue
# otherwise, examine a list of candidates
candidate_idxs = bv.copy()
accepted_idxs = [mv]
candidate_idxs.pop(mi)
# mv is the index of the item in set B that we now test against the
# remaining elements in the candidate list
pv = mv
parent_df = b.iloc[[pv]]
children_df_as_ncl = ncl_from_df(b.iloc[candidate_idxs, :])
children_ovr_parent = ncl_all_overlaps(children_df_as_ncl, parent_df)
children_ovr_parent_map = ovr_idxs_to_map(children_ovr_parent)
# test overlaps of children with longest-overlapping parent
p_row = b.iloc[[pv]]
candidate_idxs_to_remove = []
for ci, cv in enumerate(candidate_idxs):
c_row = b.iloc[[cv]]
c_len = c_row['End'].item() - c_row['Start'].item()
# remove any candidates which do not overlap the parent -- these may
# have originally overlapped the merged regions we started with
if cv not in children_ovr_parent_map[pv]:
candidate_idxs_to_remove.append(ci)
continue
# measure overlap, per criteria in sketch
if (c_row['Start'].item() < p_row['Start'].item()) and (c_row['End'].item() < p_row['End'].item()):
l = p_row['Start'].item()
r = c_row['End'].item()
elif (c_row['Start'].item() < p_row['End'].item()) and (c_row['End'].item() > p_row['End'].item()):
l = c_row['Start'].item()
r = p_row['End'].item()
else:
# child element is nested within the parent
candidate_idxs_to_remove.append(ci)
continue
# calculate overlap, relative to child element
o = (r - l) / c_len
# if child element coverage is *less* than threshold, include it
if o < t:
accepted_idxs.append(cv)
# either way, we remove it from further consideration
candidate_idxs_to_remove.append(ci)
# make sure that we have no children left to test
# if we have any candidate children left, something went wrong
assert(len(candidate_idxs) == len(candidate_idxs_to_remove))
# print accepted elements
for acc_idx in accepted_idxs:
acc_row = b.iloc[[acc_idx]]
sys.stdout.write('{}\n'.format('\t'.join([str(x) for x in acc_row.values.flatten().tolist()])))
@click.command()
@click.option('--threshold', type=float, default=0.1, help='overlap threshold')
def main(threshold):
# validate parameter
assert((threshold > 0) and (threshold < 1))
# import data
df = df_from_intervals_str(test_intervals_str)
# validate input -- only one chromosome at a time
assert(len(df.Chromosome.unique()) == 1)
df_as_ncl = ncl_from_df(df)
gr = pr.from_string(test_intervals_str)
mf = gr.merge().as_df()
# convert column types to make compatible with ncls
mf['Start'] = mf['Start'].astype(np.int64)
mf['End'] = mf['End'].astype(np.int64)
# associate intervals with merged regions by indices (analogous to bedmap)
mf_ovr_df = ncl_all_overlaps(df_as_ncl, mf)
mf_ovr_df_map = ovr_idxs_to_map(mf_ovr_df)
# search through associations for those that meet overlap criteria
search_candidates(mf_ovr_df_map, mf, df, threshold)
if __name__ == '__main__':
main()
Some examples of running it with different thresholds:
$ ./biostars9470750.py --threshold=0.1
chr1 50 80 F
chr1 120 200 A
chr1 199 260 B
$ ./biostars9470750.py --threshold=0.5
chr1 50 80 F
$ ./biostars9470750.py --threshold=0.01
chr1 50 80 F
chr1 120 200 A
[1]: /media/images/1e720a6a-cac4-4371-b9cb-e647e98a | biostars | {"uid": 9470750, "view_count": 2851, "vote_count": 1} |
Hi guys! I need help setting up a `for` loop in R (I'm quite new to programming in R). I would like to add the same header to all the files that match a specific pattern inside a folder.
To get the list of files I'm using the following code:
filelist <- list.files(pattern = "DESeq2_result*")
And this is the `for` loop I am trying to implement:
for (i in seq_along(filelist)) {
names[[i]] <- a
out [i]
}
where `a` is a vector that I defined with the names of the different columns:
a <- c("gene_id", "baseMean", "log2FC",
"SD", "WaldStatistic", "pval", "padj")
Any tutorial/page to help me learn and practise coding functions and loops in R would be much appreciated.
Thank you so much in advance!
Jordi
| Use `col.names = ` argument when reading the files, then write out, something like this:
for(i in list.files(pattern = "DESeq2_result_*"))
write.table(read.table(i, col.names = c("gene_id", "baseMean", "log2FC",
"SD", "WaldStatistic", "pval", "padj")), i)
**Note:** this overwrites existing files; to create new files instead:
for(i in list.files(pattern = "DESeq2_result_*"))
write.table(read.table(i, col.names = c("gene_id", "baseMean", "log2FC",
"SD", "WaldStatistic", "pval", "padj")),
paste0(i, ".fixed.txt")) | biostars | {"uid": 359655, "view_count": 2727, "vote_count": 1} |
Trying to index vcf file but getting the following
```
tabix -p vcf dbsnp_138.hg19.vcf.gz
Not a BGZF file: dbsnp_138.hg19.vcf.gz
tbx_index_build failed: dbsnp_138.hg19.vcf.gz
```
Thoughts on how to proceed? Thanks! | Looks to me like the dbsnp file is not bgzipped?
```
gunzip dbsnp_138.hg19.vcf.gz
bgzip dbsnp_138.hg19.vcf
tabix -p vcf dbsnp_138.hg19.vcf.gz
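# or, without touching the original file, re-compress in one pipe:
zcat dbsnp_138.hg19.vcf.gz | bgzip > dbsnp_138.hg19.vcf.bgz
tabix -p vcf dbsnp_138.hg19.vcf.bgz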
``` | biostars | {"uid": 138514, "view_count": 19143, "vote_count": 9} |
How can I get the number of mapped reads for a particular region?
`samtools view -c -F 4 my.bam` gives me count in the entire bam file but I can't just add `-r Chr1:0:1000` to get reads in that region only. | samtools view in.bam chr1:0-1000 | wc -l
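Note that region queries need a coordinate-sorted and indexed BAM; samtools can also do the counting itself (a sketch, reusing your `-F 4` filter):

    samtools index in.bam
    samtools view -c -F 4 in.bam chr1:0-1000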
but a better way is to do it with [bedtools](http://bedtools.readthedocs.org/en/latest/content/tools/multicov.html) if you have many regions, e.g. exon coordinates or peak coordinates, which is a more efficient way of counting.
bedtools multicov -bams aln1.bam [ aln2.bam aln3.bam . . ] -bed ivls-of-interest.bed | biostars | {"uid": 166438, "view_count": 2714, "vote_count": 1} |
Dear all:
I downloaded the trial version of the Geneious software [https://www.geneious.com/][1], but it has now expired. As a fresh grad student, I can't afford the full version of Geneious. I am aware that Geneious has abundant features for analyzing NGS data. I am hoping to find tools that are relatively similar to Geneious. My main aim is to carry out **phylogenetic analysis**. Can anyone recommend a good alternative to Geneious? Are there any free, open-source software tools that can serve as an alternative? My colleague is not comfortable with the R environment, so we are seeking tools that require little programming input. Can anyone offer advice or point out which library we can use?
[1]: https://www.geneious.com/ | Have a look at [Jalview][1]
[1]: http://www.jalview.org/ | biostars | {"uid": 276826, "view_count": 14393, "vote_count": 1} |
Hi all:
I am working with Affymetrix microarray data as my entry point to microarray analysis. I am trying to see the distribution of data points within labeled groups in a 3D plot, because I want to see how similar each group of data points is in 3D space. To do so, I used the `scatterplot3d` package from CRAN to get a 3D scatter plot, but didn't get the correct plot for my data.
So my guess is that I should first cluster the data points that belong to the different labeled groups and then render them in 3D space. Here is my reproducible data, simulated from the [actual dataset:][1]
**reproducible data**
> dput(head(phenDat,30))
structure(list(SampleID = c("Tarca_001_P1A01", "Tarca_013_P1B01",
"Tarca_025_P1C01", "Tarca_037_P1D01", "Tarca_049_P1E01", "Tarca_061_P1F01",
"Tarca_051_P1E03", "Tarca_063_P1F03", "Tarca_075_P1G03", "Tarca_087_P1H03",
"Tarca_004_P1A04", "Tarca_064_P1F04", "Tarca_076_P1G04", "Tarca_088_P1H04",
"Tarca_005_P1A05", "Tarca_017_P1B05", "Tarca_054_P1E06", "Tarca_066_P1F06",
"Tarca_078_P1G06", "Tarca_090_P1H06", "Tarca_007_P1A07", "Tarca_019_P1B07",
"Tarca_031_P1C07", "Tarca_079_P1G07", "Tarca_091_P1H07", "Tarca_008_P1A08",
"Tarca_020_P1B08", "Tarca_022_P1B10", "Tarca_034_P1C10", "Tarca_046_P1D10"
), GA = c(11, 15.3, 21.7, 26.7, 31.3, 32.1, 19.7, 23.6, 27.6,
30.6, 32.6, 12.6, 18.6, 25.6, 30.6, 36.4, 24.9, 28.9, 36.6, 19.9,
26.1, 30.1, 36.7, 13.6, 17.6, 22.6, 24.7, 13.3, 19.7, 24.7),
Batch = c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L), Set = c("PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA",
"PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA",
"PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA",
"PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA",
"PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA", "PRB_HTA",
"PRB_HTA", "PRB_HTA"), Train = c(1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), Platform = c("HTA20",
"HTA20", "HTA20", "HTA20", "HTA20", "HTA20", "HTA20", "HTA20",
"HTA20", "HTA20", "HTA20", "HTA20", "HTA20", "HTA20", "HTA20",
"HTA20", "HTA20", "HTA20", "HTA20", "HTA20", "HTA20", "HTA20",
"HTA20", "HTA20", "HTA20", "HTA20", "HTA20", "HTA20", "HTA20",
"HTA20")), row.names = c(NA, 30L), class = "data.frame")
**my attempt**:
hclustfunc <- function(x) hclust(x, method="complete")
distfunc <- function(x) as.dist((1-cor(t(x)))/2)
d <- distfunc(persons_df)
fit <- hclustfunc(d)
but it seems I need to group the data points that belong to each individual group (for instance, in the batch column there are 4 different batches), then use PCA, clustering, or k-means to measure the distances, and then render them in 3D space with a 3D scatter plot. So far my attempts haven't produced my expected plot.
Basically, I want to see which data points (a.k.a. rows) belong to each batch (or group), and color them by some 'group' attribute. I just want to see how similar the data points are to each other when grouped by different age categories (I used `findInterval(persons_df$ages, c(10,20,30,40,50))`), different batches, and different platforms.
I am thinking of using k-means, PCA, or other methods that can give me components to visualize in a 3D plot, but it is not very intuitive to me how to do this in R.
**desired plot**
I want to get 3D plot something like this:
[![33](https://i.ibb.co/jgr0Yy8/33.jpg)](https://ibb.co/Pw6Xycr)
Can anyone point out how I can make this happen? Is there any way to cluster my data and visualize it in a 3D plot in R? Any thoughts? Thanks
[1]: https://jumpshare.com/v/eZhYBHZm1MGfFDMx7MXq | First, read the docs of the functions you're using. hclust() does hierarchical clustering which means it produces a tree, not individual clusters. To get these, you need to cut the tree (check ?cutree).
Second, you don't need to wrap a function in another function if you're not somehow modifying it, i.e. you can just do
d <- as.dist((1-cor(t(x)))/2)
tree <- hclust(d, method="complete")
This makes the code clearer.
Once you have a vector of cluster memberships and a vector of associated colors, you can use them to assign colors. For a 3D scatter plot, I use something like this:
library(rgl)
library(car)
scatter3d(x = PC1, y = PC2, z = PC3, surface = FALSE, groups = as.factor(clusters), surface.col = cluster.colors, col = cluster.colors, xlab="PC1",ylab="PC2",zlab="PC3")
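For completeness, here is a minimal sketch of how the `clusters`, `cluster.colors`, PC1, PC2, and PC3 used above could be derived (assuming `x` is your numeric expression matrix with samples in rows; k = 4 is an arbitrary choice):

    d <- as.dist((1 - cor(t(x))) / 2)                          # sample-sample distance
    clusters <- cutree(hclust(d, method = "complete"), k = 4)  # cluster memberships
    cluster.colors <- rainbow(4)[clusters]                     # one color per cluster
    pca <- prcomp(x, scale. = TRUE)
    PC1 <- pca$x[, 1]; PC2 <- pca$x[, 2]; PC3 <- pca$x[, 3]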
To recap:
- Read the docs of the functions you intend to use
- Cluster your data to obtain a vector of cluster memberships
- Get a vector of colors you want to associate with each cluster
- Plot using the appropriate syntax for the plotting function of your choice.
| biostars | {"uid": 389336, "view_count": 1683, "vote_count": 1} |
Hi everyone,
I have to create an artificial .vcf file with SNVs, but also with small insertions, deletions and duplications, that will be used by tools like Exomiser, Extasy, ... for variant prioritization
For SNV's, it's quite easy to fill in the chromosome, genomic position, reference and alternative allele. However for indels & duplication, I'm often not sure how to do this correctly.
If there is a deletion, for example, which position should you put: the position that is deleted, or the base before or after it? The same problem applies to insertions and duplications. And which reference allele should you take in those cases? Does anyone know if there are rules for this? | Check the VCF format specifications: http://samtools.github.io/hts-specs/VCFv4.2.pdf
In particular, section 5 contains many examples.
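To summarize the convention for indels (a sketch based on the spec's examples): POS is the base *before* the event, and both REF and ALT include that base:

    #CHROM  POS  ID  REF  ALT        event
    20      3    .   TCG  T          deletion of CG
    20      3    .   T    TAG        insertion of AG
    20      3    .   TCG  TCGCG      tandem duplication of CG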
| biostars | {"uid": 114703, "view_count": 7544, "vote_count": 1} |
Hi All,
I would just like some clarification of terminology regarding a detail of gene coexpression network construction. Let's say I have two RNA-seq datasets, each dataset containing `n` replicates, and each dataset representing sequencing data from the same biological system in two different experimental conditions. How should I construct the data matrix for input to something like WGCNA if I want to analyze gene coexpression networks ***across experimental conditions/interventions?***
What I imagine is that each row of the matrix represents data from one gene, and each column represents data collected from one of the replicates in an experimental condition. So for example, one particular row of the matrix would look like this:
c1R1 ... c1Rn c2R1 ... c2Rn
gene x [val, ... val, val, ... val]
Where the first column `c1R1` corresponds to the data from the first experimental replicate in the first condition, and the last column `c2Rn` corresponds to the nth experimental replicate in the 2nd experimental condition. For coexpression analysis, each row is then correlated with every other row in a pairwise fashion, an adjacency matrix is constructed from the correlation analysis and then other analyses such as module detection can be conducted based on the resulting adjacency matrix.
I just want to verify that this is an appropriate method for organizing data if one wishes to construct coexpression networks for genes "across an intervention". | Hi mantale1,
That's exactly correct. By including replicates from both conditions, the network will reflect both the specific pathways that are co-regulated during your condition of interest and whatever genes are constitutively expressed in the organism.
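As a minimal sketch of that construction (assuming `mat` is the genes-by-replicates matrix you described, and a hypothetical soft-threshold power of 6 as commonly used in WGCNA):

    # gene-gene correlation across all replicates, raised to a soft-threshold power
    adj <- abs(cor(t(mat), method = "pearson"))^6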
If you were then to start adding samples from other unrelated conditions, you would be improving the accuracy of the global co-expression network due to the increased information, but also reducing the signal resulting from the intervention you are interested in.
Couple things you might consider:
1) Depending on the number of replicates you have for each condition, you may end up with a very noisy co-expression network. Most of the methods were developed for microarray data, where you are likely to have many more samples. With fewer than 10 replicates across both conditions, you are likely to have a large number of spurious correlations.
2) You might consider filtering out genes which are not differentially expressed across your intervention. This will help both with eliminating spurious correlations, and also help to bring out the signal specifically due to the intervention.
Keith | biostars | {"uid": 200292, "view_count": 2108, "vote_count": 1} |
Hi,
I have 3 metagenomes, all of which are from enrichment cultures. My aim is to assemble the genomes of bacterial strains I have not been able to isolate which may degrade my compound of interest. I thought it would be best to merge the samples beforehand and this would improve the coverage of the MAGs but after reading some other posts I'm not so sure. Would it be best to concatenate samples? If so, is it ok that I have done this prior to the trimming stage, i.e I have concatenated all the r1.fq.gz and all the r2.fq.gz files and now plan to trim these files?
Thanks,
Jess | Hi Jess and welcome to Biostars.
To make it short: yes, your approach will very likely work. I do similar merging operations when I get WGS metagenome data from, for example, a time-series experiment.
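The pooling itself can be done with plain concatenation, since gzip streams can be joined directly (a sketch for three paired-end samples; keep the sample order identical between R1 and R2):

    cat s1_r1.fq.gz s2_r1.fq.gz s3_r1.fq.gz > pooled_r1.fq.gz
    cat s1_r2.fq.gz s2_r2.fq.gz s3_r2.fq.gz > pooled_r2.fq.gz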
To counter the posts you mentioned, maybe take a look at [metabat][1] or [concoct][2], two software packages that can be used to isolate genomes from your metagenome. Though both essentially start binning using k-mer frequencies, they also assume BAM-file input of reads aligned to a common reference. That common reference is an assembly of pooled reads from all your experiments.
[1]: https://bitbucket.org/berkeleylab/metabat/overview
[2]: https://concoct.readthedocs.io/en/latest/ | biostars | {"uid": 354280, "view_count": 1301, "vote_count": 1} |
Hello!
I obtained a list of unmapped read IDs from my BAM file and I want to remap only the unmapped reads with other parameters.
How can I extract the subset of unmapped reads from my original fastq file?
Thank you in advance,
Luke
| I also wrote a program for this purpose, distributed with [BBMap][1]. Usage:
filterbyname.sh in=reads.fq out=filtered.fq names=names.txt include=t
The `include` flag will toggle between including or excluding the names in `names.txt` (which can, alternately, be another fastq or fasta file). This also supports paired input/output, and names being substrings or superstrings of read IDs.
[1]: https://sourceforge.net/projects/bbmap/ | biostars | {"uid": 45816, "view_count": 17160, "vote_count": 6} |
What are some good resources for learning shell scripting for NGS pipeline development?
How much shell scripting should one know to develop an intermediate-level pipeline for NGS data analysis?
Can someone suggest some good resources, tutorials? | What is an "intermediate level pipeline"? What is your target audience? Release the pipeline into the wild? Internal lab use? Personal use? Anyway, to learn shell scripting for NGS pipeline development, you must learn shell scripting, so look at the "[Bash Guide for Beginners][1]" and "[Advanced Bash-Scripting Guide][2]".
With a very basic understanding of bash scripting you may easily put together a simple pipeline which will, for example, clean your reads, assemble a genome, map the reads / additional reads into assembled genome, and annotate assembled genome. In fact, I wrote such simple pipeline - it is really crude, no error checking, no optimizations, no whatever, but I feed fastq files and some hours later get a draft genome and its annotation.
[1]: http://www.tldp.org/LDP/Bash-Beginners-Guide/html/Bash-Beginners-Guide.html
[2]: http://www.tldp.org/LDP/abs/html/ | biostars | {"uid": 178949, "view_count": 4548, "vote_count": 5} |
I'd like to combine 3 pheatmaps into 1 figure. Here is some test code; it seems the `mfrow` parameter does not work for them.
```
test = matrix(rnorm(200), 20, 10)
test[1:10, seq(1, 10, 2)] = test[1:10, seq(1, 10, 2)] + 3
test[11:20, seq(2, 10, 2)] = test[11:20, seq(2, 10, 2)] + 2
test[15:20, seq(2, 10, 2)] = test[15:20, seq(2, 10, 2)] + 4
colnames(test) = paste("Test", 1:10, sep = "")
rownames(test) = paste("Gene", 1:20, sep = "")
par(mfrow=c(1,2))
pheatmap(test)
pheatmap(test, kmeans_k = 2)
pheatmap(test, kmeans_k = 3)
``` | If it's just for the one figure, I'd do it manually in Inkscape. However, for high-throughput figure production, I would export individual images from R and script the final figure assembly from the components, for example using [ImageMagick](http://www.imagemagick.org/). You could also write an SVG file directly (for example with the perl module [SVG](http://search.cpan.org/~ronan/SVG-2.28/SVG/Manual.pm)) or by scripting Inkscape.
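If you'd rather stay in R: pheatmap draws with grid graphics, which is why `par(mfrow)` has no effect. A sketch using the gtable that pheatmap returns (assuming the `test` matrix from the question):

    library(pheatmap)
    library(gridExtra)
    # silent = TRUE suppresses drawing; $gtable holds the drawable object
    p1 <- pheatmap(test, silent = TRUE)$gtable
    p2 <- pheatmap(test, kmeans_k = 2, silent = TRUE)$gtable
    p3 <- pheatmap(test, kmeans_k = 3, silent = TRUE)$gtable
    grid.arrange(p1, p2, p3, ncol = 3)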
| biostars | {"uid": 128229, "view_count": 22000, "vote_count": 4} |
Hi,
I have questions regarding [the CNV calls calculated from TCGA][1].
As I understand it, they used a CBS algorithm to find segments which are changed compared to a reference, and the segment mean value is a measure of this change. In general, a mean log2 ratio of the probe intensities.
Actually, the segments can be defined as deletions or duplications beyond a threshold (defined by you; several papers have used +/-0.2).
```
Sample Chromosome Start End Number_of_probes Segment_Mean
TCGA-CC-A8HV-01 chr1 51598 5999008 100 -0.0325
TCGA-CC-A8HV-01 chr1 6001979 6002289 153 -2.1264
TCGA-CC-A8HV-01 chr1 6002874 14443436 2 -0.0923
```
Afterwards, TCGA "re"calculated (to enhance?) the CNV detection results in cancer samples using the segmentation data with [GISTIC2][2]. Is this right?
I compared some of the segment mean data and the results from GISTIC2 (estimates) for cancer samples and found differences on gene and sample level.
If the GISTIC2 method provides better results, do I then have to use a similar algorithm for non-cancer healthy samples and germline CNVs? And which tools would those be? Can I use GISTIC as well?
Thanks.
[1]: https://genome-cancer.soe.ucsc.edu/proj/site/hgHeatmap/
[2]: http://www.broadinstitute.org/software/cprg/?q=node/31 | Hi Jimbou,
Struggling with similar questions over here, as the used threshold is very often arbitrarily described in literature without further explanation / reasoning. What I found so far concerning GISTIC is the following (see http://www.cbioportal.org/faq.jsp):
> What is GISTIC? What is RAE?
>
> Copy number data sets within the portal are generated by GISTIC or RAE algorithms. Both algorithms attempt to identify significantly altered regions of amplification or deletion across sets of patients. Both algorithms also generate putative gene/patient copy number specific calls, which are then input into the portal.
>
> For TCGA studies, the table in `all_thresholded.by_genes.txt` (which is the part of the GISTIC output that is used to determine the copy-number status of each gene in each sample in cBioPortal) is obtained by applying both low- and high-level thresholds to to the gene copy levels of all the samples. The entries with value +/- 2 exceed the high-level thresholds for amps/dels, and those with +/- 1 exceed the low-level thresholds but not the high-level thresholds. **The low-level thresholds are just the 'amp_thresh' and 'del_thresh' noise threshold input values to GISTIC (typically 0.1 or 0.3) and are the same for every thresholds.**
>
> **By contrast, the high-level thresholds are calculated on a sample-by-sample basis and are based on the maximum (or minimum) median arm-level amplification (or deletion) copy number found in the sample.** The idea, for deletions anyway, is that this level is a good approximation for hemizygous given the purity and ploidy of the sample. **The actual cutoffs used for each sample can be found in a table in the output file sample_cutoffs.txt**. All GISTIC output files for TCGA are available at: gdac.broadinstitute.org.
Hope this helps, though I did not yet manage to obtain a copy of the 'sample_cutoffs.txt' for my cancer cohort. In case you found any more information please share.
Cheers | biostars | {"uid": 133927, "view_count": 15586, "vote_count": 7} |
Is there a way to convert a `.vcf.gz.tbi ` file back to a `.vcf.gz` or `.vcf` file?
My end goal is to convert these files into plink bed/bim/fam files, but plink will not accept the .vcf.gz.tbi file as is. Any help is appreciated! | No. An index (that is what tbi is) is just a table of contents. You cannot reconstruct every page of a book with just the ToC at hand. | biostars | {"uid": 9534881, "view_count": 439, "vote_count": 1}
I would like to parse a BAM file in parallel using pysam and multiple_iterators
Here is my code
import pysam
import sys
from multiprocessing import Pool
import time
def countReads(chrom,Bam):
count=0
#Itr = Bam.fetch(str(chrom),multiple_iterators=False)
Itr = Bam.fetch(str(chrom),multiple_iterators=True)
for Aln in Itr: count+=1
if __name__ == '__main__':
start = time.time()
chroms=[x+1 for x in range(22)]
cpu=6
BAM = sys.argv[1]
bamfh = pysam.AlignmentFile(BAM)
pool = Pool(processes=cpu)
for x in range(len(chroms)):
pool.apply_async(countReads,(chroms[x],bamfh,))
#countReads(chroms[x],bamfh)
pool.close()
pool.join()
end = time.time()
print(end - start)
I get this error when I run it.
TypeError: _open() takes at least 1 positional argument (0 given)
But it spits out a whole bunch of errors. Can anyone help me to use multiprocessing to read a BAM file in parallel using pysam?
Thanks | Fixed it. I was following an online blog that was wrong.
import pysam
import sys
from multiprocessing import Pool
import time
def countReads(chrom,BAM):
count=0
# here's the fix
bam = pysam.AlignmentFile(BAM,'rb')
Itr = bam.fetch(str(chrom),multiple_iterators=True)
for Aln in Itr: count+=1
if __name__ == '__main__':
start = time.time()
chroms=[x+1 for x in range(22)]
cpu=6
BAM = sys.argv[1]
pool = Pool(processes=cpu)
for x in range(len(chroms)):
pool.apply_async(countReads,(chroms[x],BAM,))
#countReads(chroms[x],bamfh)
pool.close()
pool.join()
end = time.time()
print(end - start)
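As written, `countReads` never returns its tally. A sketch of one way to actually collect the per-chromosome counts (assuming you add `return count` at the end of `countReads`):

    # apply_async returns AsyncResult objects; .get() retrieves each return value
    results = [pool.apply_async(countReads, (c, BAM)) for c in chroms]
    pool.close()
    pool.join()
    print(sum(r.get() for r in results))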
| biostars | {"uid": 275974, "view_count": 3714, "vote_count": 1} |
One of our projects used to query OMIM data as XML through NCBI's efetch utility, as described here, for example:

[What is the best way to interact programmatically with OMIM?](http://biostar.stackexchange.com/questions/4194/what-is-the-best-way-to-interact-programmatically-with-omim)

However, the service stopped functioning a few months ago. It now simply returns the following error:

> Database: omim - is not supported

I can find no mention of an update to the API on NCBI's website or anywhere else.
At the same time, the pages accessible directly on OMIM's website offer no link to structured data (XML or otherwise), and the downloadable file, while using a specific format to delimit fields, is still far from the flexibility of the former XML files (for example, it is impossible to retrieve metadata for each reference).

Is there currently any way to regain access to OMIM data in a structured, parsable format (XML...)?
| I've been waiting for this since July 2011:
https://twitter.com/OmimOrg/status/90770519841980416

> @yokofakun[?] @mrrizkalla and we have an web API in the wings, not
> quite ready to open it up yet . omim.org (@OmimOrg)
| biostars | {"uid": 19421, "view_count": 5967, "vote_count": 5}
Can the chromosome names and lengths to which a bam file was aligned be determined solely from its bai file? | No. The BAI format is described in §5.2 of [SAMv1.pdf](http://samtools.github.io/hts-specs/SAMv1.pdf) and does not contain the chromosome names or lengths. Instead it merely identifies reference sequences by their index, 0 <= n < n_ref, in the corresponding BAM file's (binary) header.
The CSI format operates similarly.
The [Tabix format](https://samtools.github.io/hts-specs/tabix.pdf) OTOH does contain the names of reference sequences (but not their lengths). | biostars | {"uid": 457538, "view_count": 1168, "vote_count": 1} |
When I searched the NCBI Taxonomy db for id=693, it retrieved a result, but the tax_id in the result differs from the ID I searched for:
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=taxonomy&id=603&retmode=xml

I also found out that this tax_id (603) is related to gi=156763600 in gi_taxid_prot.dmp.gz, and if you search for it on [NCBI](http://www.ncbi.nlm.nih.gov/protein/156763600) and press Taxonomy on the right side of the page, the next page shows an error.

Do you have any explanation for this?
| I assume you mistyped the first taxonomy ID and meant `603` instead. When you are using NCBI's web-search form, a search for the ID `603` will redirect you to [Salmonella enterica subsp. arizonae](http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=Info&id=59203&lvl=3&lin=f&keep=1&srchmode=5&unlock), which coincides with the XML query result you linked to. Said Salmonella has taxonomy ID `59203`, just as in the XML result.

The taxonomy IDs `603` and `59203` appear to denote the same database entry, which is reflected at the bottom of the XML with:

    <AkaTaxIds>
      <TaxId>603</TaxId>
    </AkaTaxIds>

I think you can treat taxonomy IDs `603` and `59203` as synonyms.

Hope that helps.
| biostars | {"uid": 50917, "view_count": 3022, "vote_count": 1} |
I have several [SAM](http://samtools.sourceforge.net/SAM1.pdf)/BAM files, and I am interested in the 5' ends of the mapped reads. Are there any tools or scripts to count how many 5' ends are mapped at a specific genomic position?

**N.B.** I am not trying to count the total number of reads mapped to the specific genomic position.

For example, in the following window, I get 5 reads; 3 of them have a 5' end at 14488, the other 2 have a 5' end at 14487.

![IGV window](http://s23.postimg.org/83y733kp7/Screen_Shot_2013_08_30_at_20_12_37.png)

And I want to produce a table like this:

![desired table](http://s9.postimg.org/r1xebaei3/Screen_Shot_2013_08_30_at_20_15_30.png)
|
    bedtools genomecov -d -5 -ibam input.bam > output.txt

(With `-ibam`, the chromosome sizes are taken from the BAM header, so no separate `-g` genome file is needed.) For more options, see the bedtools documentation at http://bedtools.readthedocs.org/en/latest/content/tools/genomecov.html
| biostars | {"uid": 80236, "view_count": 9298, "vote_count": 6} |
New version of BCFTools (using call instead of view) doesn't produce VCF files with GQs in the FORMAT field.
Does anyone know how to get it in the new version (1.1)?
| When you run `bcftools call`, specify `-f GQ` to have it added to the output:

    bcftools call -vmO z -f GQ
| biostars | {"uid": 125792, "view_count": 2998, "vote_count": 2} |
Hi,
I've got bisulfite-sequencing data for two differentiation stages. The raw data was mapped using Bismark. For each CpG site, the methylation ratio was marked as "A/B" (A methylated reads vs B unmethylated reads for this site).
Now I want to compare the overall methylation of a certain region. Assuming this region contains 3 CpG sites, the methylated/unmethylated ratio of each site in the two stages are 45/60, 4/10, 5/15 and 14/65, 3/9, 7/20 respectively.
I think there are two ways to calculate the overall methylation rate of the region:
**Method 1: total amount of methylated reads / total number of reads within the region**
methylation rate of stage 1 = (45+4+5)/(45+60+4+10+5+15) = **54/139** = 0.39
methylation rate of stage 2 = (14+3+7)/(14+65+3+9+7+20) = **24/118** = 0.20
**Method 2: average the methylation rate of each CpG site within the region**
methylation rate of stage 1 = (45/(45+60) + 4/(4+10) + 5/(5+15))/3 = 0.32
methylation rate of stage 2 = (14/(14+65) + 3/(3+9) + 7/(7+20))/3 = 0.22
For **method 1**, I can then calculate the confidence interval and the P value to know whether **54 out of 139** is significantly different from **24 out of 118**.
However, the methylation rate calculated by **method 1** is biased toward the sites with higher read coverage.
**Method 2** seems more robust for indicating the actual methylation rate of the region,
but I don't know which statistical test I should use for method 2 to assess significance.
Please help.
Thanks in advance. | From context I gather that you lack biological replicates, which is unfortunate but still rather common. As you mentioned, "method 1", which is basically summing across CpGs and performing a Fisher's test is less than ideal, since it will miss interesting cases (using your earlier nomenclature, think of 50/0, 0/50 in sample1 and 25/25, 25/25 in sample 2, they're very different but this method would miss that).
Method 2 lends itself to a weighted paired t-test, which you can do in R. This method is, of course, also not ideal, since you give a weight to each CpG, but if coverage is drastically different between the two samples at a CpG then you start running into problems (there are ways around that, but they'd be moderately annoying).
What would likely be a nice method in this case would be to perform a Fisher's test on the individual CpGs between the samples and then spatially correct the p-values (the original paper from Benjamini & Heller is [here](http://www.tandfonline.com/doi/abs/10.1198/016214507000000941#preview), but you can find an implementation in the [BiSeq bioconductor package](http://www.bioconductor.org/packages/release/bioc/html/BiSeq.html), which you could also just directly use to make your life easier since it's actually targeted at RRBS or other targeted BS-seq datasets). You could also use the SLIM model for p-value adjustment, such as is used in [methylKit](https://code.google.com/p/methylkit/). Somewhat similarly, you could also first use a smoothing method (see [BSseq](http://bioconductor.org/packages/release/bioc/html/bsseq.html) as an example) and then do a paired t-test, though I wouldn't recommend that for your use case.
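In base R, the per-CpG test itself is one line; e.g., for the first CpG in your example (45/60 in stage 1 vs 14/65 in stage 2):

    fisher.test(matrix(c(45, 60, 14, 65), nrow = 2))$p.value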
There are some additional possibilities, but I'd recommend just using BiSeq and being done with it. | biostars | {"uid": 97980, "view_count": 4475, "vote_count": 4} |
I have a list of genomic ranges mapped to hg19. My data is in matrix format; let's call it `ranges`. It has 600,000 rows and 4 columns.
Here are a few rows of my data:
head(ranges)
chr start end strand
[1,] "chr1" "10025" "10525" "."
[2,] "chr1" "13252" "13752" "."
[3,] "chr1" "16019" "16519" "."
[4,] "chr1" "96376" "96876" "."
[5,] "chr1" "115440" "115940" "."
[6,] "chr1" "235393" "235893" "."
Is there a function that retrieves the sequences and calculates the GC content for each row (each range)?
I would prefer the output to be in vector format.
I would really appreciate your help
| thank you all,
these solutions are great.
I also found a function that calculates GC content using BSgenome.Hsapiens.UCSC.hg19.
require(BSgenome.Hsapiens.UCSC.hg19)
require(BSgenome)
library(Repitools)
    # convert the matrix to a GRanges object
    Granges <- makeGRangesFromDataFrame(data.frame(ranges))
    # per-range GC content, returned as a numeric vector
    gc <- gcContentCalc(Granges, organism=Hsapiens, verbose=TRUE)
| biostars | {"uid": 478444, "view_count": 2573, "vote_count": 2} |
I have many files like this
ENSG00000000003.13 366
ENSG00000000005.5 26
ENSG00000000419.11 1905
ENSG00000000457.12 775
ENSG00000000460.15 377
ENSG00000000938.11 316
ENSG00000000971.14 272
ENSG00000001036.12 1726
ENSG00000001084.9 2479
ENSG00000001167.13 1166
ENSG00000001460.16 38
ENSG00000001461.15 298
They are all htseq-count.
I have two questions:
1- How can I convert the IDs to gene names? Do you have a solution available (I can use any language)?
2- What would you use to normalize them?
| To answer your first question:
Create a TSV file with your input => list.tsv
Remove ".13" etc from your ENSG (list.tsv) => list_ensembl_gid.tsv
awk -F '\t' -v OFS='\t' '{sub(/\.[0-9]*/, "", $1)} 1' list.tsv > list_ensembl_gid.tsv;
Then use R & biomaRt to retrieve the corresponding gene names:
R
data = read.table("list_ensembl_gid.tsv")
colnames(data)[1] <- "ensembl_gene_id"
colnames(data)[2] <- "counts"
library('biomaRt')
hsapiens = useMart("ensembl",
dataset="hsapiens_gene_ensembl")
hsapiens_infos <- getBM(attributes=c('ensembl_gene_id',
'external_gene_name'),
mart = hsapiens)
merge_infos <- merge(x = data,
y = hsapiens_infos,
by = "ensembl_gene_id",
all.x = TRUE)
head(merge_infos)
> ensembl_gene_id counts external_gene_name
> ENSG00000000003 366 TSPAN6
> ENSG00000000005 26 TNMD
> ENSG00000000419 1905 DPM1
> ENSG00000000457 775 SCYL3
> ENSG00000000460 377 C1orf112
> ENSG00000000938 316 FGR | biostars | {"uid": 293965, "view_count": 4284, "vote_count": 1}
This is a very basic question which I have not been able to understand even after searching the web for a significant amount of time. Can someone please explain these terms to me as if to a high-school student:
1. contig
2. Singleton read; and
3. Discordant mate pairs | Contigs are long, continuous stretches of sequence that result from overlapping short reads.
When you do paired-end sequencing, you sequence a DNA fragment (let's say of length 400 bp) from both ends, resulting in R1 and R2 reads. When you map them back to the genome, if both reads map within 400 bp of each other (as they were generated from a 400 bp fragment) and in the expected strand orientation, they are referred to as concordant pairs. This is primarily in DNA-Seq. I don't think there is such a thing as a discordant mate pair; it is called a discordant pair. Mate-pair sequencing is a different thing.
When one of the mates (either R1 or R2) is not present (lost during pre-processing, the mapping step, or post-processing of the data), the other mate becomes a **singleton** (also sometimes referred to as an orphan read). | biostars | {"uid": 176978, "view_count": 8369, "vote_count": 2}
Does anyone know how to solve this error? I have never seen this error before and my current script has been working fine up until this point.
> t2g <- biomaRt::getBM(
+ attributes = c("ensembl_transcript_id", "transcript_version",
+ "ensembl_gene_id", "external_gene_name", "description",
+ "transcript_biotype"), mart = mart)
No encoding supplied: defaulting to UTF-8.
Error in biomaRt::getBM(attributes = c("ensembl_transcript_id", "transcript_version", :
The query to the BioMart webservice returned an invalid result: biomaRt expected
a character string of length 1. Please report this to the mailing list. | For me the Ensembl mirror sites are down. It looks like you're US based, so by default you'll get redirected to [uswest.ensembl.org](http://uswest.ensembl.org/biomart/martview?redirect=no) or [useast.ensembl.org](http://useast.ensembl.org/biomart/martview?redirect=no), neither of which are working for me at the moment.
The main site is working so you want to use [www.ensembl.org][1] as your host. However you also have to supply the argument `ensemblRedirect = FALSE`, otherwise their internal redirection will simply send you back to your local site. This works for me, even if I'm using a server based in Texas:
mart <- useMart(biomart = "ENSEMBL_MART_ENSEMBL",
dataset = "hsapiens_gene_ensembl",
host = 'www.ensembl.org',
ensemblRedirect = FALSE)
After some discussion with the Ensembl BioMart team the plan going forward is to remove the redirection entirely when using **biomaRt**, so this should no longer be an issue, and you'll go to whatever address you've provided. I'll try to update here when that's been done.
----------
The 'No Encoding Supplied' message is a red herring relating to the content being returned by the server - you see it whether there's a problem or not. I've already addressed this in the developmental version of **biomaRt**, and will update the release version too. It's not very helpful to end users!
[1]: http://www.ensembl.org?redirect=no | biostars | {"uid": 294125, "view_count": 1999, "vote_count": 1} |
Hi,

I've been using the blastn (version 2.2.28+) standalone tool against a custom-formatted genome via:

    blastn -db BLASTDB -word_size 7 -query input.fa -out filename -perc_identity 100 -outfmt 6 -max_target_seqs 2

to discard non-perfect hits and show only the top 2 hits.

The output file has a great format; however, is there a way to add an extra column that contains the actual target sequence (the sequence of the matched hit)? Such that the fields are:

    query id, subject id, % identity, alignment length, mismatches, gap opens, q. start, q. end, s. start, s. end, evalue, bit score, sequence

Thanks!

- TJC
| Run `blastn -help` then look for the field called `outfmt`
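For the exact fields you listed, a sketch of the command (custom outfmt 6; the `sseq` specifier adds the aligned subject sequence as the last column):

    blastn -db BLASTDB -word_size 7 -query input.fa -out filename -perc_identity 100 -max_target_seqs 2 \
      -outfmt "6 qseqid sseqid pident length mismatch gapopen qstart qend sstart send evalue bitscore sseq"

The relevant section of the help output: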
*** Formatting options
-outfmt <String>
alignment view options:
0 = pairwise,
1 = query-anchored showing identities,
2 = query-anchored no identities,
3 = flat query-anchored, show identities,
4 = flat query-anchored, no identities,
5 = XML Blast output,
6 = tabular,
7 = tabular with comment lines,
8 = Text ASN.1,
9 = Binary ASN.1,
10 = Comma-separated values,
11 = BLAST archive format (ASN.1)
Options 6, 7, and 10 can be additionally configured to produce
a custom format specified by space delimited format specifiers.
The supported format specifiers are:
qseqid means Query Seq-id
qgi means Query GI
qacc means Query accesion
qaccver means Query accesion.version
qlen means Query sequence length
sseqid means Subject Seq-id
sallseqid means All subject Seq-id(s), separated by a ';'
sgi means Subject GI
sallgi means All subject GIs
sacc means Subject accession
saccver means Subject accession.version
sallacc means All subject accessions
slen means Subject sequence length
qstart means Start of alignment in query
qend means End of alignment in query
sstart means Start of alignment in subject
send means End of alignment in subject
qseq means Aligned part of query sequence
sseq means Aligned part of subject sequence
evalue means Expect value
bitscore means Bit score
score means Raw score
length means Alignment length
pident means Percentage of identical matches
nident means Number of identical matches
mismatch means Number of mismatches
positive means Number of positive-scoring matches
gapopen means Number of gap openings
gaps means Total number of gaps
ppos means Percentage of positive-scoring matches
frames means Query and subject frames separated by a '/'
qframe means Query frame
sframe means Subject frame
btop means Blast traceback operations (BTOP)
staxids means Subject Taxonomy ID(s), separated by a ';'
sscinames means Subject Scientific Name(s), separated by a ';'
scomnames means Subject Common Name(s), separated by a ';'
sblastnames means Subject Blast Name(s), separated by a ';'
(in alphabetical order)
sskingdoms means Subject Super Kingdom(s), separated by a ';'
(in alphabetical order)
stitle means Subject Title
salltitles means All Subject Title(s), separated by a '<>'
sstrand means Subject Strand
qcovs means Query Coverage Per Subject
qcovhsp means Query Coverage Per HSP | biostars | {"uid": 88944, "view_count": 154253, "vote_count": 30} |
Hi all,
I used bamToFastq to convert my BAM files into R1 and R2 fastq files. I tried samtools fastq before but found it erroneous (many missing sequences). However, I still have an issue with bamToFastq from bedtools. When I add the read counts obtained from ```grep -c "@" R1.fastq``` and ```grep -c "@" R2.fastq```, the total is always slightly less than the count from the BAM file, ```samtools view -c in.bamfile```. Why might this be the case? I haven't found any documentation suggesting this is normal. The R1 and R2 fastq counts should equal the counts in the BAM file, so what am I doing wrong with the conversion?
Thank you. | > it is always slightly less than the count from the bamfile samtools view -c in.bamfile. Why might this be the case?
Your BAM contains supplementary and secondary alignments: `samtools view -c` counts those records, but they are not written out as additional FASTQ reads. Counting only primary records, e.g. `samtools view -c -F 0x900 in.bam` (0x900 excludes secondary and supplementary), should match your FASTQ totals. | biostars | {"uid": 9475071, "view_count": 1001, "vote_count": 1}
<p>Hi all,</p>
<p>I'm facing a very annoying error in R while assigning row names to my data matrix. I have some RNA-seq data that I'm considering clustering in R. I'm using gene names as row names for my expression matrix but it keeps reporting that there are duplicate names. Some un-annotated genes have been assigned with some IDs that start with numbers. I don't understand how to deal with this error? Is there a way to work around it? because I cant change the gene names. </p>
<p>EDIT: </p>
<pre><code>gene sample1 sample2 sample3
Mar-01 4.19504 3.9006 4.15683
Mar-02 3.0554 3.4261 3.76675
un_A_2 1.1515 1.2455 0.563484
un_A_3 98.2504 120.341 101.753
ENSGALG00000008227 39.6383 12.8651 38.2281
ENSGALG00000008242 5.71557 7.79314 9.40917
ENSGALG00000008277 24.6231 28.3207 24.9288
CNN3 141.708 134.476 144.514
CNNM1 0.840218 0.963683 0.619086
CNNM2 16.0282 12.1301 12.4665
</code></pre>
<p>Many thanks.</p>
| <p>One way of dealing with this is in R is the function <code>make.names</code> with the option <code>unique=TRUE</code>, see <code>?make.names</code>.</p>
<pre><code>> nams = c("bl-a","bl-a","bl-a", "foo" )
> df = data.frame(matrix (1:4))
> df
matrix.1.4.
1 1
2 2
3 3
4 4
> rownames(df) = nams
Error in `row.names<-.data.frame`(`*tmp*`, value = value) :
duplicate 'row.names' are not allowed
In addition: Warning message:
non-unique value when setting 'row.names': ‘bl-a’
> rownames(df) = make.names(nams, unique=TRUE)
> df
matrix.1.4.
bl.a 1
bl.a.1 2
bl.a.2 3
foo 4
</code></pre>
| biostars | {"uid": 62988, "view_count": 182113, "vote_count": 11} |
I'd like to extract consensus of reads from a bam/sam file which would follow particular criteria:
1. The portions of reference which have 0 coverage are represented in
the consensus as N
2. If there is at least a single read mapping to a particular nucleotide,
this nucleotide is retrieved in a consensus.
3. If there is conflicting evidence from reads, the nucleotide with
highest frequency (let's say 75%) is retrieved in the consensus. If there is no nucleotide with >= 75% frequency, N is retrieved in the consensus.
Is there a tool which would enable this or similar level of fine-tuning of the consensus extraction from bam/sam?
What I tried: I used samtools, bcftools and vcfutils to get consensus:
samtools mpileup -uf reference.nt mapping.bam | bcftools call -c | vcfutils.pl vcf2fq > consensus.fasta
However, the consensus I'm getting this way contains a substantial proportion of the reference sequence, and I can't find parameters that would modify how the consensus is calculated.
Yet another approach - extracting the consensus using the IGV viewer (option "Copy consensus sequence") - gives me basically the consensus I'm looking for, but there is apparently no way to automate this for the hundreds of reference sequences I have.
Note: I posted this question also as a comment to my original question [Map eukaryotic genomic reads to transcript reference of closely related organism][1] however I'm posting it here separately as it diverged from my original question.
[1]: https://www.biostars.org/p/288924/#289983 | simple-consensus-per-read-group by Thomas Sibley is the tool I was looking for (https://github.com/MullinsLab/simple-consensus-per-read-group). It was recently updated so it can now correctly process BAMs with multiple scaffolds/contigs. | biostars | {"uid": 290008, "view_count": 4151, "vote_count": 1} |
Hello there, hope all of you are fine. I do hope you are enjoying this weekend.
In my little experience, I always had to deal with samples coming from different batches (i.e. coming from different hospitals or experiments done in different days). One postdoc in my lab showed me how to deal with batch effect by using **SVA package.** I guess it is a brilliant idea to work with that, but if I don't go wrong this is not the best tool to cope with the batch effect as, in my cases, I may know the source of confounding (in other words, I know that samples are generated in different days/ come from different hospitals).
**My first question is**: at first glance, by looking at the PCA plot from this experiment, how can you determine (i.e. be absolutely sure) that your samples are biased or not. How can you be absolutely sure that your samples need a correction, if they cluster as expected? and if they don't cluster as you would expect how can you know that this is not due to **real** biology or not? what if you are **over-correcting** samples and removing relevant biological data that, in turn, make impossible to determine **genes that are *truly* changing**? how can you know that?
**My second question is** more practical and, basically, linked to my inexperience. Given the fact I have always used this SVA package by slavishly following postdoc recommendation (with her own doubts), I would like to deal with this problem in a definite way; hopefully, by designing a model matrix. I have zero ideas on where to start and if you could help me with some advice, that would be grand! I really need your help guys, cause I am absolutely alone now and don't know who to ask.
I have found this [link][1] but it seems to be a bit advanced for myself.
[1]: http://genomicsclass.github.io/book/pages/adjusting_with_linar_models.html | > At first glance, by looking at the PCA plot from this experiment, how
> can you determine (i.e. be absolutely sure) that your samples are
> biased or not:
I think it is safe to assume that there is always some technical bias in an experiment. So rather than determining whether your samples are biased or not, **a PCA lets you qualitatively assess how big this technical bias (batch effect) is compared to the other factorial effects**. Note that you need at least two samples from the same batch to see if your samples group by batch in the principal component space.
> How can you be absolutely sure that your samples need a correction, if
> they cluster as expected? and if they don't cluster as you would
> expect how can you know that this is not due to real biology or not?
To interpret the PCA, always think in terms of variability. The samples will always cluster depending on the major source of variability. Usually there are at least three sources of variability: the controlled biological factors (for instance, healthy vs diseased samples), the batch effect (day, hospital), and the residual biological/technical variability:
- If the samples cluster based on the controlled biological factors, that means that it is the main source of variability. In consequence, you are likely to find biological effects. I would still correct for batch effect though, because being a minor source of variability doesn't mean that there is no batch effect at all.
- If the samples primarily cluster based on batch, it could mean that the variability caused by batch is very high or that the variability caused by the controlled biological factors is low. Either way, you'll need to correct for batch.
- Finally, if samples cluster seemingly randomly, it means that the residual biological/technical variability is the main source of variability. This is the worst-case scenario because you cannot really control for this kind of residual effect. Usually, it means that you will need a highly powered study in order to extract relevant biological information from that "mess".
> I would like to deal with this problem in a definite way; hopefully,
> by designing a model matrix
The idea of a linear model, which is a great way to take care of batch effects, is also linked to the sources of variability. The link you provided is a good start, but I'll try to make the basics clearer. In a linear model, you express a measure, for instance gene expression, as the sum of the modeled effects of the biological/batch factors you know of:
    gene expression ~ factor_1 + factor_2 + factor_1:factor_2 + batch
("~" means that what is on the left is explained by the formula on the right; ":" means that you choose to also model the interaction between factor_1 and factor_2)
The formula above depends on a design matrix that specifies the levels of the factors for each sample. For instance:
factor_1 factor_2 batch
sample1 A X I
sample2 A Y II
sample3 A Y I
sample4 B X I
sample5 B Y I
sample6 B Y II
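As a minimal R sketch of turning such a table into an actual design matrix (the factor levels are the hypothetical ones from the table above; the result can then be passed as the design to e.g. limma or edgeR):
    samples <- data.frame(
        factor_1 = c("A", "A", "A", "B", "B", "B"),
        factor_2 = c("X", "Y", "Y", "X", "Y", "Y"),
        batch    = c("I", "II", "I", "I", "I", "II")
    )
    # main effects, their interaction (factor_1:factor_2), and the batch term
    design <- model.matrix(~ factor_1 * factor_2 + batch, data = samples)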
Hope this helps,
Carlo
| biostars | {"uid": 381503, "view_count": 3084, "vote_count": 3} |
How can I construct a phylogenetic tree based on the SNPs shared between strains? I have whole-genome SNP calls for 10 different strains in a multi-sample vcf.
Are there any tools that can take the vcf as an input for creating phylogenetic trees? Or do I need to convert the multi-sample vcf to another matrix? Which kind of matrix would that be, and how can I create it from the vcf?
Is there a list somewhere of popular packages that can be used for creating phylogenetic trees? Or guides on how to go from a multi-sample vcf to a phylogenetic tree.
| Here is what I did in the SNPRelate package to get a dendrogram and PCA from my multi-sample vcf file
    library(gdsfmt)
    library(SNPRelate)
    # vcf to GDS
snpgdsVCF2GDS("my.vcf", "my.gds")
snpgdsSummary("my.gds")
genofile <- openfn.gds("my.gds")
    # dendrogram
dissMatrix <- snpgdsDiss(genofile , sample.id=NULL, snp.id=NULL, autosome.only=TRUE,remove.monosnp=TRUE, maf=NaN, missing.rate=NaN, num.thread=10, verbose=TRUE)
    snpHCluster <- snpgdsHCluster(dissMatrix, sample.id=NULL, need.mat=TRUE, hang=0.25)
cutTree <- snpgdsCutTree(snpHCluster, z.threshold=15, outlier.n=5, n.perm = 5000, samp.group=NULL,col.outlier="red", col.list=NULL, pch.outlier=4, pch.list=NULL,label.H=FALSE, label.Z=TRUE, verbose=TRUE)
#pca
sample.id <- read.gdsn(index.gdsn(genofile, "sample.id"))
    pop_code <- read.gdsn(index.gdsn(genofile, "sample.id"))
pca <- snpgdsPCA(genofile)
tab <- data.frame(sample.id = pca$sample.id,pop = factor(pop_code)[match(pca$sample.id, sample.id)],EV1 = pca$eigenvect[,1],EV2 = pca$eigenvect[,2],stringsAsFactors = FALSE)
plot(tab$EV2, tab$EV1, col=as.integer(tab$pop),xlab="eigenvector 2", ylab="eigenvector 1")
legend("topleft", legend=levels(tab$pop), pch="o", col=1:nlevels(tab$pop)) | biostars | {"uid": 83232, "view_count": 28925, "vote_count": 14} |
Does anyone know of any software package that can convert .impute2 data to .mldose (i.e. imputed data from IMPUTE2 to imputed data from MACH)? I have tried impute2mach in GenABEL, but it has repeatedly failed with a known error...
Direct conversion preferred, but I'm happy with indirect as long as it works!
| I have written a brief cookbook to perform this conversion in UNIX:
http://openwetware.org/wiki/User:Jonathan_R._I._Coleman/Notebook/Notes_and_Protocols/2014/06/27
| biostars | {"uid": 101168, "view_count": 8393, "vote_count": 4} |
Hello everybody,
Here is my issue: I have a Trinity.fasta file like this
>TRINITY_DN5631_c0_g2_i1 len=947 path=[0:0-946]
TACAACTTGAACATCAACAATGGTTGCGCAGCTATTGCCATCCGCGACGTTCGAGGACTGCGTGCGAA
>TRINITY_DN62279_c1_g1_i1 len=298 path=[0:0-297]
TATTACCATTATTATTATTATCATATTTATGTTCATTATTATCATTATCATAATCATTATCATCTTGATA
...
And I also have a list of IDs in an id.txt file:
TRINITY_DN16359_c0_g1_i4
TRINITY_DN62279_c1_g1_i1
...
I am trying to extract from my fasta file the sequences whose IDs are in my txt file.
I am using seqkit for that, but with no success:
    seqkit grep -n -f id.txt Trinity.fasta -o result.fa
Does anyone know how to fix it?
| Referring to the [usage](https://bioinf.shenwei.me/seqkit/usage/#grep), you should not switch on `-n`; just use `seqkit grep -f id.txt seqs.fa`.
-n, --by-name match by full name instead of just id
-i, --ignore-case ignore case
-r, --use-regexp patterns are regular expression
For `-nrif`, it's **partly matching** full FASTA header by regular expression and case-ignored, with patterns from file.
*This may produce some unwanted results*. For example, `seq_1` matches `seq_10` with `-nri`.
| biostars | {"uid": 469520, "view_count": 7764, "vote_count": 1} |
In the documentation of Gemini it says that you should
1. [Decompose][1] the original VCF such that variants with multiple alleles are expanded into distinct variant records; one record for each REF/ALT combination.
2. [Normalize][2] the decomposed VCF so that variants are left aligned and represented using the most parsimonious alleles.
http://gemini.readthedocs.org/en/latest/index.html
This sounds like a good thing to do because it makes it easier to assign correct IDs and to compare variants.
The only problem I have seen mentioned is that for samples that have an ALT1/ALT2 genotype, the genotype is now split over 2 vcf records: MISSING/ALT1 and MISSING/ALT2, or even REF/ALT1 and REF/ALT2. Neither correctly represents the genotype of the sample.
[1]: http://genome.sph.umich.edu/wiki/Vt#Decompose
[2]: http://genome.sph.umich.edu/wiki/Vt#Normalization | There are a few ways to skin this cat, and it is also an area with fairly active development. The central difficulty is that there often are multiple ways to represent the same variant in VCF, particularly in cases where block substitutions or indels are involved, and there is no "right" representation.
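As a toy illustration of this ambiguity (coordinates and alleles made up), the same single-T deletion inside a TT run can be written either way, and a naive position/allele comparison would treat these two records as different variants:
    chr1    100    .    CTT    CT    .    PASS    .
    chr1    100    .    CT     C     .    PASS    .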
The decomposition/normalization approach has the downside that the process tends to destroy a lot of the good information that is contained in the original call set (e.g. phasing information, INFO/FORMAT annotations, quality scores). In addition, even after decomposition the results can be arbitrary (and so may not match up with the coordinates you are getting your IDs from anyway, defeating the purpose).
An alternative approach is to have smarter comparison tools which are directly aware of representational ambiguity, by performing variant comparison at the haplotype level. AFAIK [CGI calldiff][1] and [RTG vcfeval][2] were independently the first to implement this strategy, and new tools are finally catching on, in varying stages of development ([SMaSH][3], [vgraph][4], [hap.py][5]). These tools replay the variants from the VCF into the reference and determine whether variants match by whether the resulting haplotypes match. With vcfeval the full VCF annotation information is preserved during the comparison (not so with hap.py, vgraph doesn't currently output VCF, and I haven't used calldiff or SMaSH)
In particular, the haplotype comparison tools are the current state of the art for same-sample call-set comparison (either between callers, or comparing with a benchmark set) -- certainly in the case of vcfeval this was the motivating driver in the development. The decomposition/normalization approach is more useful if you want to establish a population-level database where variants are converted to a "canonical" form with limited annotation requirements. Of course there is nothing to say you cannot use both techniques, depending on what you are trying to achieve.
[1]: http://www.completegenomics.com/public-data/analysis-tools/cgatools/
[2]: http://realtimegenomics.com/products/rtg-tools/
[3]: http://smash.cs.berkeley.edu/
[4]: https://github.com/bioinformed/vgraph
[5]: https://github.com/Illumina/hap.py | biostars | {"uid": 151266, "view_count": 6776, "vote_count": 3} |
Hello,
Does anyone know the code for each color in the UCSC genome browser? Is there a list or something?
For example this one "color=250,0,0" gives me the color red and this one "color=0,0,250" is blue.
Is there a list from which I can pick the code for other colors?
Thanks in advance | It uses standard [RGB codes][1].
You can Google them to get the code for the color you want (here is first hit: https://www.rapidtables.com/web/color/RGB_Color.html)
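For example, in a custom track line (a made-up track of mine; the `color=R,G,B` attribute takes the same comma-separated values as in your examples, here dark green):
    track name=myTrack description="color example" color=0,128,0
    chr1    1000    2000    feature1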
[1]: https://en.wikipedia.org/wiki/RGB_color_model | biostars | {"uid": 320245, "view_count": 10433, "vote_count": 1} |
The mouse gene: ENSMUSG00000004500 has 2 human homologs: ENSG00000249471, and ENSG00000083812. Looking the opposite direction, the indicated mouse gene is the only mouse homolog of these two human genes. I was hoping someone from Ensembl could clarify why is this labeled "many2many" instead of "one2many"?
I retrieved a table of mouse homologs for human genes (Ensembl v104, data set Human Genes GRCh38) using the following biomart query:
Attribute set: "homologs radio button"
Attributes:
1. Gene.stable.ID
2. Mouse.gene.stable.ID
3. Mouse.homology.type
4. Homology related stats [ %identities, GOC, Alignment Coverage, and Confidence]
As I was reviewing the table, I found several records where the Mouse.homology.type was labeled "ortholog_many2many" where the homologous mouse gene had multiple human homologs, but those human homologs are only associated with the single mouse gene. Shouldn't these genes be labeled "one2many" as is true for most other similar cases:
R Filtering Code:
> with(
+ hom_2_mus %>%
+ select(
+ Human_ID=Gene.stable.ID,
+ Mouse_ID=Mouse.gene.stable.ID,
+ Mouse.homology.type
+ ) %>%
+ group_by(Human_ID) %>%
+ filter(n() == 1) %>% # Human genes with only one mouse homolog
+ group_by(Mouse_ID) %>%
+ filter(n() > 1), # Mouse genes with multiple human homologs
+ table(Mouse.homology.type)
+ )
Mouse.homology.type
ortholog_many2many ortholog_one2many
44 1477
| Hi Adam,
This particular mouse gene is the only homologue of the two human gene stable IDs, but is still listed as "many2many" because this mouse gene has undergone a duplication event following the speciation. Although the homologous relationship is only shown between the gene stable IDs, this mouse gene has more than one protein-coding gene attached, and so effectively the two human genes show a "many2many" relationship on the gene tree with the 2 mouse proteins. It could be noted that the human-side homologues include more than just those two genes, and those two gene stable IDs actually encode multiple protein-coding genes. | biostars | {"uid": 9471470, "view_count": 848, "vote_count": 1}
Hi - I'm new to Plink and am trying to read in a transposed fileset so that I can then convert it to bed/bim/fam, but I keep getting this message: "Note: Variant ####### is triallelic. Setting rarest alleles to missing." There are many such lines, and the ####### ranges from 0-497197. But it still creates files that have the extensions `.temporary.bed.tmp`, `.temporary.bim`, and `.temporary.fam`.
So my problem is that I can't figure out why it thinks that they're all triallelic when I can look at my .tfam file and see that they aren't (or at least don't appear so to me). Does anyone have any suggestions?
My tped input looks like this (just with 497198 SNPs):
```
12 rs1000000 0 126890980 A A B B B B B B
4 rs10000023 0 95733906 B B B A B A B A
4 rs10000030 0 103374154 B B B B B B A B
4 rs10000041 0 165621955 A A B B B B B A
4 rs10000042 0 5237152 B B B B B B B B
```
My plink command looks like this:
./plink --tfile myfile --recode --out myfile
And the return I get from plink looks like this:
Note: Variant XXXXXX is triallelic. Setting rarest alleles to missing.
In addition, I'm also getting the following error (on the very last line), even though I thought the centimorgan position was allowed to be 0:
Error: Invalid centimorgan position on line 2 of .tped file
Any help would be much appreciated - thanks in advance! | The `.tped` loader did not properly detect when a line didn't have enough genotypes for the number of samples in the `.tfam` file. This error is properly reported in the Nov 23 development build.
You now need to check why your `.tfam` has more lines than there are genotype pairs in the `.tped`. | biostars | {"uid": 166882, "view_count": 5616, "vote_count": 1} |
I have a large set of footprint intervals that range from 11 to 25 bp. For the purpose of motif discovery I would like to extend all intervals to, for example, 50 bp. Intervals should be extended equally from both sides. I would usually use 'bedtools slop' for fixed-length intervals, but this would not appear to work with variable lengths.
It would be great if anyone could advise me how to use bedtools, or something else. I have a nagging feeling I am missing something obvious, so apologies in advance!
| Here's a way that I think should extend both ends of BED elements to the desired target length:
$ TARGET_LENGTH=50
$ awk -vF=${TARGET_LENGTH} 'BEGIN{ OFS="\t"; }{ len=$3-$2; diff=F-len; flank=int(diff/2); upflank=downflank=flank; if (diff%2==1) { downflank++; }; print $1, $2-upflank, $3+downflank; }' in.bed | sort-bed - > out.bed
Non-even length elements or a non-even target length will require flank lengths that are unequal. Sounds like this is not a problem.
You might adjust the logic to randomly pick which of `upflank` or `downflank` receives the extra base in this case, so that you don't impart a bias from this adjustment (esp. if original elements are stranded, like footprints that will ultimately be mapped to TF binding sites or other stranded elements), e.g.:
$ TARGET_LENGTH=50
    $ awk -vF=${TARGET_LENGTH} 'BEGIN{ OFS="\t"; }{ len=$3-$2; diff=F-len; flank=int(diff/2); upflank=downflank=flank; if (diff%2==1) { if (rand() >= 0.5) { downflank++; } else { upflank++; } }; print $1, $2-upflank, $3+downflank; }' in.bed | sort-bed - > out.bed | biostars | {"uid": 241085, "view_count": 1987, "vote_count": 1}
Hello,
I have a fasta file with lines of format -
>FBti0019256 type=transposable_element; loc=2L:22300300..22304444; name=invader2{}555; dbxref=FlyBase_Annotation_IDs:TE19256,FlyBase:FBti0019256; MD5=d9259a0e33aad699215e64916bd47a5b; length=4145; release=r6.19; species=Dmel;
I would like to convert these lines into a bed file of format -
    chr2L \t 22300300 \t 22304444 \t invader2
Is there a program that can directly perform this conversion, or an awk command that can do this easily? Please let me know, thank you for your help. | The assumption is that chromosomes are numbered (no X and Y chromosomes). If the file contains more than just headers, first pull out the FASTA header lines with grep:
$ sed 's/.*loc.\(\b.*\b\):\(\b.*\b\)\.\.\(\b.*\b\);.*=\(\b.*\b\){.*/chr\1\t\2\t\3\t\4/g' test.txt
chr2L 22300300 22304444 invader2
$ cut -f2,3 -d";" test.txt| cut -d= -f2,3 | awk -v OFS="\t" -F':|=|;|\\..|{' '{print "chr"$1,$2,$3,$5}'
chr2L 22300300 22304444 invader2
$ cat test.txt
>FBti0019256 type=transposable_element; loc=2L:22300300..22304444; name=invader2{}555; dbxref=FlyBase_Annotation_IDs:TE19256,FlyBase:FBti0019256; MD5=d9259a0e33aad699215e64916bd47a5b; length=4145; release=r6.19; species=Dmel;
1000 bash cuts:
$ cut -f2,3 -d";" test.txt| cut -d= -f2,3 | cut -f1,2 -d: --output-delimiter=$'\t'| cut -f1,2,3 -d'.' --output-delimiter=$'\t' | cut -f1,2 -d';' --output-delimiter=$'\t'| cut -f1,2 -d'=' --output-delimiter=$'\t' | cut -f1,2 -d'{' --output-delimiter=$'\t'| cut -f1,2,3,4,6
2L 22300300 22304444 invader2
| biostars | {"uid": 312675, "view_count": 1734, "vote_count": 1} |
Hi,
I'm trying to extract reads based on their start coordinate in a bam file. I've tried using samtools view but that seems to give all reads covering that region, not originating there.
Apologies if this has been asked elsewhere but I couldn't find an answer via Google.
Many thanks. | Something like:
`samtools view -h -q 10 in.bam chr1:1000-1500 | awk 'BEGIN{OFS="\t"}{if($1 ~ /^@/) {print} else {if($4 >= 1000 && $4 <= 1500) {print}}}' | samtools view -Sbo output.bam -`
One could restructure this to get around the default print action in awk, but this will also get the job done. | biostars | {"uid": 234013, "view_count": 4460, "vote_count": 1} |
I have a set of vcf files that were filtered using GATK hard filtering. I filtered the SNPs and then the indels separately and then merged the two, and got a vcf with a list of polymorphisms in which the polymorphisms that had failed the filters were marked as such (of course). Now I would like to make a vcf that lacks the SNPs and indels that failed the filtering. What command should I run? | If you're referring to the `PASS` flag, you can use anything from `vcftools` to plain `awk` to plainer `grep`. If there's more to the PASS criteria than just the flag, you're going to need to elaborate on that.
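For example, two quick sketches of the awk/bcftools flavours (file names are placeholders):
    # keep header lines plus records whose FILTER column is exactly PASS
    awk -F'\t' '/^#/ || $7 == "PASS"' input.vcf > pass_only.vcf
    # or the equivalent with bcftools
    bcftools view -f PASS input.vcf -o pass_only.vcf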
Also, do you mean `variants` when you say `polymorphisms`? I know people may use the terms interchangeably, but they do not mean the same thing. `Variants` are the most generic descriptor; they refer to all loci where multiple alleles are found, whereas `polymorphisms` assume the functional impact of the variant to result in a polymorphic phenotype that is not usually pathogenic. | biostars | {"uid": 190885, "view_count": 4739, "vote_count": 1} |
hi everyone,
I have just started learning genomics as part of my bioinformatics degree and I've been introduced to Linux for handling FASTQ files and using FastQC. Can anybody suggest some good learning resources more inclined towards Linux for genomics? | My advice would be (to echo Ram) to focus on a small subset of command-line commands and not worry too much about the nuts and bolts of Linux. In rough order of importance, I would learn the following:
1. ``cd``, ``cat``, ``mkdir``, ``ls -lh``, ``rm``, ``touch``, ``head``, ``tail``, ``echo``, ``find``, ``man``, ``../``, ``sudo``
2. ``cut``, ``for``, ``while``, ``if else``
3. ``grep``, ``sed``, ``awk``, ``parallel`` | biostars | {"uid": 9485624, "view_count": 1269, "vote_count": 7} |
Hi all,
I have a couple of fastq files containing reads whose names start with different instrument IDs, like:
@HWI-ST865:463:C7C8KACXX:2:2316:21016:100943 1:N:0:TAAGGCGA
@HWI-ST1178:227:C7C95ACXX:7:1101:1581:2125 1:N:0:TAAGGCGA
My question is: how can I split them into two parts?
I tried to use some tools like fastx_toolkit but I cannot create a proper barcode file.
Is there any easy way to do that, such as a grep command? I also tried grep, but I got an output containing only the first line of each read and missed the other three.
Thank you in advance! | You can use either Heng Li's `bioawk` or `grep -A 3`. The former is a wrapper on awk to make it work with separators used in biological data formats, and the latter is a grep that picks up the matching line+3 lines that follow. | biostars | {"uid": 179956, "view_count": 2361, "vote_count": 1} |
Hi,
I am having problems with integrating sequenceserver (v. 1.14) that I am using on apache server with my own jbrowse on the same apache server. I did all (it seems so at least) what is written [here][1] .
Still - no links! Obviously, I am missing something basic here and will be very glad for your help!!!!
I am not an expert (to say the least) in ruby. Here is my links.rb file -
Hope I will find my mistake here...
THANKS!
module SequenceServer
# Module to contain methods for generating sequence retrieval links.
module Links
require 'erb'
# Provide a method to URL encode _query parameters_. See [1].
include ERB::Util
#
alias_method :encode, :url_encode
NCBI_ID_PATTERN = /gi\|(\d+)\|/
UNIPROT_ID_PATTERN = /sp\|(\w+)\|/
require 'json'
# Link generators return a Hash like below.
#
# {
# # Required. Display title.
# :title => "title",
#
# # Required. Generated url.
# :url => url,
#
# # Optional. Left-right order in which the link should appear.
# :order => num,
#
# # Optional. Classes, if any, to apply to the link.
# :class => "class1 class2",
#
# # Optional. Class name of a FontAwesome icon to use.
# :icon => "fa-icon-class"
# }
#
# If no url could be generated, return nil.
#
# Helper methods
# --------------
#
# Following helper methods are available to help with link generation.
#
# encode:
# URL encode query params.
#
# Don't use this function to encode the entire URL. Only params.
#
# e.g:
# sequence_id = encode sequence_id
# url = "http://www.ncbi.nlm.nih.gov/nucleotide/#{sequence_id}"
#
# querydb:
# Returns an array of databases that were used for BLASTing.
#
# whichdb:
# Returns the database from which the given hit came from.
#
# e.g:
#
# hit_database = whichdb
#
# Examples:
# ---------
# See methods provided by default for an example implementation.
def sequence_viewer
accession = encode self.accession
database_ids = encode querydb.map(&:id).join(' ')
url = "get_sequence/?sequence_ids=#{accession}" \
"&database_ids=#{database_ids}"
{
:order => 0,
:url => url,
:title => 'Sequence',
:class => 'view-sequence',
:icon => 'fa-eye'
}
end
def fasta_download
accession = encode self.accession
database_ids = encode querydb.map(&:id).join(' ')
url = "get_sequence/?sequence_ids=#{accession}" \
"&database_ids=#{database_ids}&download=fasta"
{
:order => 1,
:title => 'FASTA',
:url => url,
:class => 'download',
:icon => 'fa-download'
}
end
def ncbi
return nil unless id.match(NCBI_ID_PATTERN)
ncbi_id = Regexp.last_match[1]
ncbi_id = encode ncbi_id
url = "http://www.ncbi.nlm.nih.gov/#{querydb.first.type}/#{ncbi_id}"
{
:order => 2,
:title => 'NCBI',
:url => url,
:icon => 'fa-external-link'
}
end
def uniprot
return nil unless id.match(UNIPROT_ID_PATTERN)
uniprot_id = Regexp.last_match[1]
uniprot_id = encode uniprot_id
url = "http://www.uniprot.org/uniprot/#{uniprot_id}"
{
:order => 2,
:title => 'Uniprot',
:url => url,
:icon => 'fa-external-link'
}
end
def jbrowse
qstart = hsps.map(&:qstart).min
sstart = hsps.map(&:sstart).min
qend = hsps.map(&:qend).max
send = hsps.map(&:send).max
first_hit_start = hsps.map(&:sstart).at(0)
first_hit_end = hsps.map(&:send).at(0)
my_features = ERB::Util.url_encode(JSON.generate([{
:seq_id => accession,
:start => sstart,
:end => send,
:type => "match",
:subfeatures => hsps.map {
|hsp| {
:start => hsp.send < hsp.sstart ? hsp.send : hsp.sstart,
:end => hsp.send < hsp.sstart ? hsp.sstart : hsp.send,
:type => "match_part"
}
}
}]))
my_track = ERB::Util.url_encode(JSON.generate([
{
:label => "BLAST",
:key => "BLAST hits",
:type => "JBrowse/View/Track/CanvasFeatures",
:store => "url",
:glyph => "JBrowse/View/FeatureGlyph/Segments"
}
]))
url = "<http://http://mysite/pomegranate/>" \
"?loc=#{accession}:#{first_hit_start-500}..#{first_hit_start+500}" \
"&addFeatures=#{my_features}" \
"&addTracks=#{my_track}" \
"&tracks=BLAST" \
"&highlight=#{accession}:#{first_hit_start}..#{first_hit_end}"
{
:order => 2,
:title => 'JBrowse',
:url => url,
:icon => 'fa-external-link'
}
end
end
end
# [1]: https://stackoverflow.com/questions/2824126/whats-the-difference-between-uri-escape-and-cgi-escape
[1]: https://jbrowse.org/docs/faq.html#how-can-i-link-blast-results-to-jbrowse | For the sake of those who will read this - here is the answer:
I followed the tutorial here https://jbrowse.org/docs/faq.html#how-can-i-link-blast-results-to-jbrowse
and put the links.rb file in the installation directory /sequenceserver-1.0.14/lib/sequenceserver/. I just added the bit that is shown in the jbrowse tutorial to the existing links.rb and then added the path to the conf file in /etc/sequenceserver/. Also, note that the url in the bit to be added should be changed to your own jbrowse url. And it works fine now.
| biostars | {"uid": 9505121, "view_count": 1049, "vote_count": 1} |
I'm looking to use something different than the built-in color palettes in `Seurat`, but I'm finding it very difficult to find packages that can handle 20+ discrete colors. I've looked through `RColorBrewer` and `colorspace`, as well as `ggsci`, and all max out around 12 colors from what I can see. Does anyone know of other color packages that can handle a higher number of colors, or am I best off using `scale_color_manual()` and doing it myself in `ggplot2`? | > am I best off using scale_color_manual() and doing it myself in ggplot2
Yes.
Keep in mind though that there is a reason for the limit of discrete colors! It's not the computers or tools that have a problem distinguishing more than 12 discrete colors, it's the human perception that tends to be the limiting factor here!
That being said -- I don't know why you find that `RColorBrewer` is limited.
```
> library(RColorBrewer)
> colorRampPalette(rev(brewer.pal(n = 7, name = "RdYlBu")))(20)
[1] "#4575B4" "#5D8CC0" "#75A3CC" "#8DBBD8" "#A5CCE2" "#BEDDEB" "#D7EDF4" "#E6F5EC" "#F0F9DA" "#FAFDC8" "#FEFAB7"
[12] "#FEF0A8" "#FEE699" "#FDD78A" "#FDBD78" "#FCA267" "#FA8856" "#EE6A46" "#E24D36" "#D73027"
``` | biostars | {"uid": 437705, "view_count": 1130, "vote_count": 1} |
I would like a list of drug-variant interactions (i.e. a variant for which its mutational status affects the efficacy of a particular drug).
I noticed that [nightly-ClinicalEvidenceSummaries.tsv][1] currently contains 593 drug-variant interactions (as of March 22, 2016). Is this the full list of all drug-variant interactions currently contained in the CIViC database?
If so, great! Otherwise, it seems like the alternative is to use the API to obtain variant/evidence information and then parse the resulting JSON. Thanks.
[1]: https://civic.genome.wustl.edu/#/releases | That sounds right. The nightly TSV file should be a complete representation of the total number of evidence statements in CIViC. However, the level of detail is reduced compared to what you could get from the API. Also, I see that there are some formatting issues still occurring with the TSV file; we are working to fix this. In any case, the API is definitely recommended.
Check out this set of python tools (under active development) that pulls CIViC data via the API in various ways.
https://github.com/griffithlab/civic-api-client | biostars | {"uid": 182831, "view_count": 2552, "vote_count": 3} |
I'm trying to run a PSIBlast program which selects certain sequences out at every round before it does the next iteration. For this I need the Round attribute shown in http://biopython.org/DIST/docs/tutorial/Tutorial.html#fig:psiblastrecord. The Biopython tutorial says: "In Biopython, the parsers return Record objects, either Blast or PSIBlast depending on what you are parsing." However, I can only access attributes found in the normal Blast record class: http://biopython.org/DIST/docs/tutorial/Tutorial.html#fig:blastrecord.
    from Bio.Blast.Applications import NcbipsiblastCommandline
    from Bio import SeqIO
    from Bio.Blast import NCBIXML

    File = "KNATM"
    def psiBlast(File):
        fastaFile = open("BLAST-"+File+".txt","r")
        my_blast_db = r"C:\Niek\Test2.2.17\TAIR9_pep_20090619.fasta"
        my_blast_file = '"C:\\Niek\\Evolution MiP\\BLAST-'+File+'.txt"'
        my_blast_exe = r"C:\Niek\blast-2.2.24+\bin\psiblast.exe"
        E_VALUE_TRESH = "10"
        for seq_record in SeqIO.parse("BLAST-"+File+".txt", "fasta"):
            global cline
            tempFile = open("tempFile.fasta","w")
            tempFile.write(">"+str(seq_record.id)+"\n"+str(seq_record.seq)+"\n")
            tempFile.close()
            cline = NcbipsiblastCommandline(cmd = my_blast_exe, db = my_blast_db, \
                query = "tempFile.fasta", evalue = E_VALUE_TRESH, outfmt = 5, \
                out = "1stIte"+File+".xml", out_pssm = "pssm"+File)
            cline()
            result_handle = open("1stIte"+File+".xml","r")
            blast_record = NCBIXML.read(result_handle)
            for alignment in blast_record.alignments:
                for alignment in blast_record.alignments:
                    print alignment.title
                    for hsp in alignment.hsps:
                        print hsp.expect
    psiBlast(File)
So this works, but if I change
    blast_record = NCBIXML.read(result_handle)
    for Round in blast_record.rounds:
        etc...
I get

    Traceback (most recent call last):
      File "C:\Niek\Evolution MiP\psiBLAST1.1.py", line 45, in <module>
        psiBlast(File)
      File "C:\Niek\Evolution MiP\psiBLAST1.1.py", line 36, in psiBlast
        for Round in blast_record.rounds:
    AttributeError: Blast instance has no attribute 'rounds'
So how do you have to blast/parse to get the PSIBLAST record?
Thanks,
Niek
| Not for Python and not for PSI-BLAST, but my answer might help: instead of trying to parse your XML with Python, how about parsing it with XML+XSLT? For example, I wrote the stylesheet below to generate an input for MongoDB from BLAST:
https://gist.github.com/RamRS/e96904c44241649228180704af6a58ed
Result:
xsltproc --novalid stylesheet.xsl ~/blast.xml
use blastdb;
query={
def:"No definition line",
len:670
,param: {
expect:10,
sc_match:2,
sc_mismatch:-3,
gap_open:5,
gap_extend:2,
filter:"L;m;",
}
};
db.queries.save(query);
hit={
query_id: query._id,
num:1,
id:"gi|118082669|ref|XM_416233.2|",
gi:118082669,
def:"PREDICTED: Gallus gallus similar to ubiquitous tetratricopeptide containing protein RoXaN; Rotavirus X asso
ciated non-structural protein (LOC417996), mRNA",
acn:"XM_416233",
len: 2868,
hsp:[
{
num:1,
bit_score:544.1,
score:602,
evalue:3.34611e-151,
query_from:92,
query_to:395,
hit_from:2378,
hit_to:2681,
query_frame:1,
hit_frame:1,
identity:303,
positive:303,
gaps:0,
align_len:304,
qseq:"TACTAGATATGCAGCAGACCTATGACATGTGGCTAAAGAAACACAATCCTGGGAAGCCTGGAGAGGGAACACCACTCACTTCGCGAGAAGGGGAGAAACAGATCCA
GATGCCCACTGACTATGCTGACATCATGATGGGCTACCACTGCTGGCTCTGCGGGAAGAACAGCAACAGCAAGAAGCAATGGCAGCAGCACATCCAGTCAGAGAAGCACAAG
GAGAAGGTCTTCACCTCAGACAGTGACTCCAGCTGCTGGAGCTATCGCTTCCCTATGGGCGAGTTCCAGCTCTGTGAAAGGTACCA",
midline:"|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| |||",
hseq:"TACTAGATATGCAGCAGACCTATGACATGTGGCTAAAGAAACACAATCCTGGGAAGCCTGGAGAGGGAACACCACTCACTTCGCGAGAAGGGGAGAAACAGATCCA
GATGCCCACTGACTATGCTGACATCATGATGGGCTACCACTGCTGGCTCTGCGGGAAGAACAGCAACAGCAAGAAGCAATGGCAGCAGCACATCCAGTCAGAGAAGCACAAG
GAGAAGGTCTTCACCTCAGACAGTGACTCCAGCTGCTGGAGCTATCGCTTCCCTATGGGCGAGTTCCAGCTCTGTGAAAGGTTCCA"
}
(...) | biostars | {"uid": 2513, "view_count": 6228, "vote_count": 4} |
How to fetch gene ids (in **RED**) from NCBI gene names (in **BLUE**) using either efetch or python?
<a href="https://ibb.co/bdBDWTV"><img src="https://i.ibb.co/mzqgDkK/asdad.png" alt="asdad" border="0"></a>
I am looking at [this][1] link and it does exactly the opposite of what I want.
from Bio import Entrez
import sys
id_list = ['3799']
Entrez.email = "*****@gmail.com"
def retrieve_annotation(id_list):
request = Entrez.epost("gene",id=",".join(id_list))
try:
result = Entrez.read(request)
except RuntimeError as e:
print "An error occurred while retrieving the annotations."
print "The error returned was %s" % e
sys.exit(-1)
webEnv = result["WebEnv"]
queryKey = result["QueryKey"]
data = Entrez.esummary(db="gene", webenv=webEnv, query_key =
queryKey)
annotations = Entrez.read(data)
print "Retrieved %d annotations for %d genes" % (len(annotations),
len(id_list))
return annotations
def print_data(annotation):
for gene_data in annotation:
gene_id = gene_data["Id"]
gene_symbol = gene_data["NomenclatureSymbol"]
gene_name = gene_data["Description"]
print "ID: %s - Gene Symbol: %s - Gene Name: %s" % (gene_id, gene_symbol, gene_name)
annotation=retrieve_annotation(id_list)
print annotation
**Output**
python ncbi.py
Retrieved 1 annotations for 1 genes
DictElement({u'DocumentSummarySet': DictElement({u'DbBuild': 'Build190501-0100m.1', u'DocumentSummary': [DictElement({u'Status': '0', u'NomenclatureSymbol': 'KIF5B', u'OtherDesignations': 'kinesin-1 heavy chain|conventional kinesin heavy chain|epididymis secretory protein Li 61|kinesin 1 (110-120kD)|kinesin heavy chain|ubiquitous kinesin heavy chain', u'Mim': ['602809'], u'Name': 'KIF5B', u'NomenclatureName': 'kinesin family member 5B', u'CurrentID': '0', u'GenomicInfo': [DictElement({u'ChrAccVer': 'NC_000010.11', u'ChrLoc': '10', u'ExonCount': '27', u'ChrStop': '32009009', u'ChrStart': '32056442'}, attributes={})], u'OtherAliases': 'HEL-S-61, KINH, KNS, KNS1, UKHC', u'Summary': '', u'GeneWeight': '9359', u'GeneticSource': 'genomic', u'MapLocation': '10p11.22', u'ChrSort': '10', u'ChrStart': '32009009', u'LocationHist': [DictElement({u'AssemblyAccVer': 'GCF_000001405.38', u'ChrAccVer': 'NC_000010.11', u'AnnotationRelease': '109', u'ChrStop': '32009009', u'ChrStart': '32056442'}, attributes={}), DictElement({u'AssemblyAccVer': 'GCF_000001405.33', u'ChrAccVer': 'NC_000010.11', u'AnnotationRelease': '108', u'ChrStop': '32009009', u'ChrStart': '32056442'}, attributes={}), DictElement({u'AssemblyAccVer': 'GCF_000306695.2', u'ChrAccVer': 'NC_018921.2', u'AnnotationRelease': '108', u'ChrStop': '32299659', u'ChrStart': '32347070'}, attributes={}), DictElement({u'AssemblyAccVer': 'GCF_000001405.28', u'ChrAccVer': 'NC_000010.11', u'AnnotationRelease': '107', u'ChrStop': '32009009', u'ChrStart': '32056442'}, attributes={}), DictElement({u'AssemblyAccVer': 'GCF_000306695.2', u'ChrAccVer': 'NC_018921.2', u'AnnotationRelease': '107', u'ChrStop': '32299659', u'ChrStart': '32347070'}, attributes={}), DictElement({u'AssemblyAccVer': 'GCF_000001405.25', u'ChrAccVer': 'NC_000010.10', u'AnnotationRelease': '105', u'ChrStop': '32297937', u'ChrStart': '32345370'}, attributes={}), DictElement({u'AssemblyAccVer': 'GCF_000002125.1', u'ChrAccVer': 'AC_000142.1', u'AnnotationRelease': '105', u'ChrStop': '32018110', u'ChrStart': '32065918'}, attributes={}), DictElement({u'AssemblyAccVer': 'GCF_000306695.2', u'ChrAccVer': 'NC_018921.2', u'AnnotationRelease': '105', u'ChrStop': '32299659', u'ChrStart': '32347070'}, attributes={})], u'Organism': DictElement({u'CommonName': 'human', u'ScientificName': 'Homo sapiens', u'TaxID': '9606'}, attributes={}), u'NomenclatureStatus': 'Official', u'Chromosome': '10', u'Description': 'kinesin family member 5B'}, attributes={u'uid': u'3799'})]}, attributes={u'status': u'OK'})}, attributes={})
[1]: https://biopython.org/wiki/Annotate_Entrez_Gene_IDs | Using EntrezDirect
$ esearch -db gene -query "KIF5B [GENE] AND Homo [ORGN]" | esummary | xtract -pattern DocumentSummary -element Id
3799
3830
More generic solution (output trimmed for brevity):
$ esearch -db gene -query "KIF5B [GENE]" | esummary | xtract -pattern DocumentSummary -element Id,ScientificName
36810 Drosophila melanogaster
3799 Homo sapiens
16573 Mus musculus
117550 Rattus norvegicus
100855651 Canis lupus familiaris
595132 Sus scrofa
100038146 Xenopus tropicalis
514261 Bos taurus
696652 Macaca mulatta
100101320 Xenopus laevis
450390 Pan troglodytes
420472 Gallus gallus
103188818 Callorhinchus milii
101839615 Mesocricetus auratus | biostars | {"uid": 377660, "view_count": 2646, "vote_count": 2} |
Hi all,
I have a table that you will see below:
CHROM_POS A_Freq M.F Annotation N_Chr POP
1 CM009840.1_932 1.000000 0.000000 nongenic 20.00000 KHUZ
2 CM009840.1_1096 0.666667 0.333333 nongenic 13.33334 KHUZ
3 CM009840.1_1107 0.277778 0.277778 nongenic 5.55556 KHUZ
4 CM009840.1_1177 0.500000 0.500000 nongenic 10.00000 KHUZ
5 CM009840.1_1276 0.555556 0.444444 nongenic 11.11112 KHUZ
6 CM009840.1_1295 0.555556 0.444444 nongenic 11.11112 KHUZ
7 CM009840.1_1518 0.937500 0.062500 nongenic 18.75000 KHUZ
8 CM009840.1_1527 0.000000 0.000000 nongenic 0.00000 KHUZ
9 CM009840.1_1533 0.937500 0.062500 nongenic 18.75000 KHUZ
10 CM009840.1_1630 0.062500 0.062500 nongenic 1.25000 KHUZ
So, I want to draw a plot like the following:
![enter image description here][1]
[1]: http://uupload.ir/files/4lxb_a_freq.png
What is the best idea? | Here is a start:
library(dplyr)
library(ggplot2)
# example data
set.seed(1); myData <- data.frame(
A_Freq = runif(1000),
Annotation = sample(LETTERS[1:3], 1000, replace = TRUE))
    # prepare data: use "cut" to bin "A_Freq" into groups
plotDat <- myData %>%
mutate(AlleleFrequency = cut(A_Freq, seq(0, 1, 0.25))) %>%
group_by(AlleleFrequency, Annotation) %>%
summarise(FractionOfSNPs = n()/nrow(myData) * 100)
# then plot
ggplot(plotDat,
aes(AlleleFrequency, FractionOfSNPs, group = Annotation, col = Annotation)) +
geom_line() +
scale_y_continuous(limits = c(0, 100))
| biostars | {"uid": 344592, "view_count": 1687, "vote_count": 1} |
I am trying to download the entire dataset for a BioProject using esearch and efetch from the Entrez Utilities.
My syntax is based on syntax posted by @Istvan Albert at https://www.biostars.org/p/111040/#359440, which is
> `esearch -db sra -query PRJNA40075 | efetch --format runinfo | cut -d ',' -f 1 | grep SRR | head -5 | xargs fastq-dump -X 10 --split-files`
For the BioProject PRJNA269201 I am interested in, the slightly truncated syntax shown below creates 144 empty files, as expected:
esearch -db sra -query PRJNA269201 | efetch --format runinfo | cut -d ',' -f 1 | grep SRR | xargs touch
However, when I try the full-length syntax, it behaves differently from what I expected under both scenarios 1 and 2 detailed below:
**Scenario 1**. On head-node of a cluster:
esearch -db sra -query PRJNA269201 | efetch --format runinfo | cut -d ',' -f 1 | grep SRR | head -2 | xargs fastq-dump --split-files
One file finished downloading, but it is 5.5G, which is way larger than the 1.2GB I expected based on the info at this [link][1] - is this difference because of file compression?! How can I download a much more compressed version for both storage and downstream RNA-Seq analyses?
-rw-rw-r-- 1 aksrao aksrao 1.1G Jan 19 19:47 SRR1726554_1.fastq
-rw-rw-r-- 1 aksrao aksrao 5.5G Jan 19 19:44 SRR1726553_1.fastq
**Scenario 2**. When I try to submit this as a shell script, the STDERR stream (SLURM queue management on UBUNTU cluster) captures the following error message:
> `2019-01-20T02:28:55 fastq-dump.2.8.2 err: param empty while validating`
> `argument list - expected accession`
This same problem was reported on the original post by user @bandanaschapagain, but it may not have been answered and resolved, hence I am posting this afresh. Could someone please help me? Thank you!
[1]: https://www.ncbi.nlm.nih.gov/sra/?term=SRR1726553
| Download the [RunInfo][1] table and use parallel to download multiple files at once.
#!/bin/bash
    # change the number after -j to change the number of files processed in parallel
parallel --verbose -j 20 prefetch {} ::: $(cut -f5 SraRunTable.txt ) >>sra_download.log
wait
parallel --verbose -j 20 fastq-dump --split-files {} ::: $(cut -f5 SraRunTable.txt ) >>sra_dump.log
wait
exit
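One caveat worth adding (my note, not part of the script above): the Run accession is not always column 5 of SraRunTable.txt, so it is safer to confirm the column first; for a tab-delimited table, something like:
    head -1 SraRunTable.txt | tr '\t' '\n' | nl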
[1]: https://www.ncbi.nlm.nih.gov/Traces/study/?WebEnv=NCID_1_74280305_130.14.18.97_5555_1548224368_3125055013_0MetA0_S_HStore&query_key=3 | biostars | {"uid": 359441, "view_count": 7056, "vote_count": 2} |
Hello,
I am carrying out a metagenomic study and I would like to know if there are studies listing microbial species associated with environmental contamination. If I find, for instance, crAssphage in stools, this virus is reported as a common presence in the intestinal locus. Is this virus present as a contaminant let's say on a bench as well?
Are there species that signal contamination of samples? | Yes, metagenomics is full of contaminating sequences and there is a whole literature on this going back many years, just a few of which are represented here:
- the worse the assembly, the greater the contamination problems
- contaminants are technology-specific: nanopore has different contaminants to illumina
- many labs have contaminants too
- clean environments and positive / negative controls are a must
https://www.biorxiv.org/content/10.1101/2020.01.26.920173v1.full.pdf Salzberg 2020 - RefSeq and Genbank contaminants
Infamous species (mostly with illumina adapters)
Achromobacter
Turkey
various band worms
Carp Cyprinus carpio
https://microbiomejournal.biomedcentral.com/articles/10.1186/s40168-019-0678-6
Reagent microbiome? https://www.nature.com/articles/s41564-018-0202-y
Quality and Contaminant free reference genomes?
FDA-ARGOS
https://www.nature.com/articles/s41467-019-11306-6
Practically removing contaminants: https://www.molecularecologist.com/2017/01/handling-microbial-contamination-in-ngs-data/
| biostars | {"uid": 433569, "view_count": 630, "vote_count": 1} |
Hi everyone
I need to locate a local BLAST database that was created on a server a long time ago, but I don't know what its extension is or what it looks like if I view it with the more command.
How can I find this file? | In its simplest form, a BLAST database has three files: for nucleotides, file.nhr, file.nin, file.nsq; for proteins, file.phr, file.pin, file.psq. Search for one of these extensions:
find . -name "*.phr" | biostars | {"uid": 312335, "view_count": 1812, "vote_count": 1} |
**1.** this is my phenotype file (called outputfile.txt in command line use):
FID IID Cadmium_Chloride Caffeine Calcium_Chloride Cisplatin Cobalt_Chloride Congo_red Copper Cycloheximide Diamide E6_Berbamine Ethanol Formamide Galactose Hydrogen_Peroxide Hydroquinone Hydroxyurea Indoleacetic_Acid Lactate Lactose Lithium_Chloride Magnesium_Chloride Magnesium_Sulfate Maltose Mannose Menadione Neomycin Paraquat Raffinose SDS Sorbitol Trehalose Tunicamycin x4-Hydroxybenzaldehyde x4NQO x5-Fluorocytosine x5-Fluorouracil x6-Azauracil Xylose YNB YNB:ph3 YNB:ph8 YPD YPD:15C YPD:37C YPD:4C Zeocin
A01_01 A01_01 -7.32351970578731 0.279992827000249 0.313118165836545 1.65817907082079 -1.60444210190495 5.84161725611811 -4.13094977046224 0.821226166664529 3.62260156257758 -0.378746805086589 -0.6449544101999 0.736772421684145 1.46869950807288 4.25247880427656 -0.439429122584143 0.471260934436784 -0.502023574403563 -0.0196386553492135 -0.520403819717771 -3.04250228422253 -0.239535833991348 3.24339670861968 -3.94506679134117 2.13462934930907 2.02778180052776 -10.930132784538 1.5331378908103 -0.768634428150619 0.718639222878471 NA -0.734761808299035 0.760529220008652 -0.756192366531865 2.09460844047308 0.20839083641332 1.39503843403223 1.19905393883646 -0.309148758204671 17.470821887375 0.055225386257017 -0.184268373327551 24.5489707854467 0.712171057826513 0.890841948461777 4.11837231021474 8.59281835912838
A01_02 A01_02 -8.09823582391425 -0.206326076018097 -0.534843782803465 -0.918011723216776 0.892197592923579 -1.61817232545715 1.13194737114694 -0.764735687307454 -2.94627867266571 -2.47519275599105 -0.203037737638922 -0.661085887535845 1.74459605348331 -3.83556423753907 -0.120208249207331 -1.99993926169807 -1.09816332010706 0.649474778852619 -0.586994384784721 2.64012099474669 -0.308361579587721 -1.14413156235224 -5.39735154446785 0.319899854554594 -1.72754411262877 5.08769611065373 -0.691267578117329 2.46423743955996 -0.706985029180143 NA -0.3904578652155 -0.598586077448249 2.3069619915968 -3.57657131625641 -0.53930290771097 0.631594704977842 0.44318172585355 0.697908024887215 18.0529250220805 0.28346211824195 1.66220146506514 26.808475766906 -1.52249804709209 0.0060616411700553 0.0665150029109814 -4.22047646027812
**2.** I would like to perform quantitative association testing for all traits.
I have been fighting with this all day and can't figure out why I am getting an error (see below) when running this command line:
$ plink2 --assoc --bfile binary_fileset --pheno outputfile.txt --pheno-name Caffeine -pfilter 1e-5
**Error:**
4096 MB RAM detected; reserving 2048 MB for main workspace.
11623 variants loaded from .bim file.
1008 people (0 males, 0 females, 1008 ambiguous) loaded from .fam.
Ambiguous sex IDs written to plink.nosex .
0 phenotype values present after --pheno.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 1008 founders and 0 nonfounders present.
Calculating allele frequencies... done.
11623 variants and 1008 people pass filters and QC.
Note: No phenotypes present.
Warning: Skipping --assoc/--model since less than two phenotypes are present.
| Hi mmukhame321,
I guess you have solved this problem by now; it took me a while to find out why it happens. My .fam file does not include the gender information, so there will be a warning like "If you don't want **those phenotypes to be ignored**, use the --allow-no-sex flag". I suppose that's why it ends up with "Note: No phenotypes present." After I added the flag "--allow-no-sex", it worked. Hope this information can help others who run into this annoying problem. | biostars | {"uid": 184571, "view_count": 9703, "vote_count": 4}
I have the following RNA-seq set-up:
- 5 samples for comparison
- matrix with TMM-normalized FPKM values
Now, TMM-normalization allows comparison across samples, so I can compare gene X between samples 1-5. Can I also compare gene X with gene Y within, say, sample 2 or does the TMM-normalization make this inappropriate (e. g. using different scaling factors for each gene or anything like that)?
I have tried to read the TMM-normalization paper (cited below) and it seems that when there are more than two samples, it picks a reference sample and calculates a single scaling factor for each pairwise comparison between the reference and each non-reference sample. That would seem to suggest that the within-sample comparison like the one outlined above is possible. Is it?
Robinson, M. and A. Oshlack (2010). "A scaling normalization method for differential expression analysis of RNA-seq data." Genome Biology 11(3): R25. | TMM won't change the order of values; it's meant to adjust between samples only. So if you're asking, "Does gene A have a higher FPKM than gene B in sample 3", then you'll get the same answer with/without normalization (the exact size of the difference might differ, though).
| biostars | {"uid": 120162, "view_count": 4156, "vote_count": 2} |
I have a dataframe containing a list of enriched GO terms resulting from a differential expression analysis. I want to filter out GO terms of interest by filtering with a second dataframe containing those GO terms I'm looking for.
Based on this post
[https://community.rstudio.com/t/dplyr-filter-from-another-dataframe/7207][1]
I wrote the following code
Extracted_GO <- filter(BP_1_5, GO_Accession_Number == GO_Search$GO_Accession_Number)
which gives the following error
longer object length is not a multiple of shorter object length
and
str(BP_1_5)
gives
tibble [5,325 x 5] (S3: tbl_df/tbl/data.frame)
$ GO_Accession_Number: chr [1:5325] "GO:0000028" "GO:0000045" "GO:0000045" "GO:0000070" ...
$ Biological.Process : chr [1:5325] "ribosomal small subunit assembly" "autophagosome assembly" "autophagosome assembly" "mitotic sister chromatid segregation" ...
$ HGNC_Symbol : chr [1:5325] "ERAL1" "UBQLN1" "STX12" "INCENP" ...
$ Gene.name : chr [1:5325] "Eral1" "Ubqln1" "Stx12" "Incenp" ...
$ GO.domain : chr [1:5325] "P" "P" "P" "P" ... data.frame': 5325 obs. of 6 variables:
and
str(GO_Search)
gives
tibble [1,727 x 2] (S3: tbl_df/tbl/data.frame)
$ GO_Accession_Number: chr [1:1727] "GO:0062201" "GO:0098610" "GO:0075003" "GO:0075002" ...
$ Description : chr [1:1727] "actin wave" "adhesion between unicellular organisms" "adhesion of symbiont appressorium to host" "adhesion of symbiont germination tube to host" ...
Given my two dataframes are of fixed length and can't be changed is there a way of achieving my goal?
[1]: https://community.rstudio.com/t/dplyr-filter-from-another-dataframe/7207
| filtered <- BP_1_5[BP_1_5$GO_Accession_Number %in% GO_Search$GO_Accession_Number, ]
`%in%` tests each value against the whole set of values, whereas `==` recycles the shorter vector element-wise - hence the "longer object length is not a multiple of shorter object length" error. | biostars | {"uid": 9463426, "view_count": 797, "vote_count": 1}
Hello,
I would like to ask if you know of any R package which will help me do a mass search for gene functions and the pathways they correspond to.
E.g. I have about 34 thousand genes (e.g. ENSG00000146648, coded as in the Ensembl DB) derived from GBM cancer patients. I want to list their functions and check in which pathways each of them is involved. Do you have any ideas?
I was looking for a similar question, but I could not find one (except Biopython, which I am not using - unless I will be forced to. ;)) | You can get GO annotations for genes in Ensembl. Use either the [Ensembl perl API][1] or the Bioconductor [biomaRt R package][2].
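A minimal biomaRt sketch for this (the gene ID is the one from your example; `go_id` and `name_1006` are the GO accession/term-name attributes in the human Ensembl mart):
    library(biomaRt)
    mart <- useMart("ensembl", dataset = "hsapiens_gene_ensembl")
    getBM(attributes = c("ensembl_gene_id", "go_id", "name_1006"),
          filters = "ensembl_gene_id",
          values = "ENSG00000146648",
          mart = mart)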
[1]: https://www.ensembl.org/info/docs/api/index.html
[2]: http://bioconductor.org/packages/release/bioc/html/biomaRt.html | biostars | {"uid": 294483, "view_count": 835, "vote_count": 1} |
Hello,
After running Varscan and Mutect on a set of 10 patients (tumor / normal comparison), I have run through a pipeline of false-positive filtering. When I look at my resulting Ts/Tv ratio (by manual calculation, snpEff summary file, SnpSift tstv calculation or GATK VariantEval), it is quite low for human whole genome sequence data (1.3-1.6). I have read all I can find here and in papers about the expected ratio, and how a low ratio could denote a great deal of false positives.
I ran Varscan with relatively lax parameters for calling somatic mutations (5 reads in N, 8 in T, but strand bias filtered), however I thought Mutect would call a confident set. Both SNP callers end up with a low Ts/Tv. My question is, can I chalk this result up to false positives (which is okay with me, I wanted a sensitive not specific call set), or could it be a problem with the BAM alignment? I suppose a poorly aligned BAM would lead to false positives too, but any insight or information would be greatly appreciated. | We should expect Ts/Tv ratio of somatic point mutations to be wildly variable across tumor types... depending on various mutagens, or the mechanisms involved in DNA repair. I can't seem to find a publication that confirms this assumption, but [this figure comes close][1]. Here are my quick and dirty Ts/Tv ratios of mutation calls grabbed from [that paper][2], but please double-check my work.
[1]: http://www.nature.com/nature/journal/v500/n7463/fig_tab/nature12477_F2.html
[2]: http://www.nature.com/nature/journal/v500/n7463/full/nature12477.html
**Note**: A caveat in the data below is that some cohorts are exomes while others are whole-genomes. Since there's more GC content in exomes, these Ts/Tv ratios are not perfectly comparable... but good enough for our point to hold.
```
Cancer Type Ts/Tv
ALL 0.949906
AML 2.128909
Bladder 1.325778
Breast 0.859808
Cervix 1.265049
CLL 1.006487
Colorectum 2.163191
Esophageal 1.38155
Glioblastoma 3.53876
Glioma Low Grade 2.244252
Head and Neck 1.172555
Kidney Chromophobe 2.545455
Kidney Clear Cell 1.165541
Kidney Papillary 1.116037
Liver 1.222369
Lung Adeno 0.439277
Lung Small Cell 0.569885
Lung Squamous 0.635106
Lymphoma B-cell 0.971431
Medulloblastoma 1.381825
Melanoma 8.54497
Myeloma 1.303654
Neuroblastoma 0.566366
Ovary 0.876746
Pancreas 1.021448
Pilocytic Astrocytoma 1.837178
Prostate 1.220668
Stomach 3.006267
Thyroid 2.161623
Uterus 1.632635
```
| biostars | {"uid": 104473, "view_count": 5897, "vote_count": 3} |
Hello,
I am doing a RNA-Seq analysis with R using the limma package. I have RNA-Seq data of different mutant lines of a model organism which have two different phenotypes in comparison with the siblings as control. The samples are like that:
- line 1, phenotype 1, mutant and sibling
- line 2, phenotype 1, mutant and sibling
- line 3, phenotype 2, mutant and sibling
- line 4, phenotype 2, mutant and sibling
- line 5, phenotype 2, mutant and sibling
That means line 1 and 2 have the similar phenotype 1
and line 3, 4, and 5 have the similar phenotype 2
which differ from phenotype 1. Let's say phenotype 1 has more cells than the control and phenotype 2 has less cells than the control.
I did the RNA-Seq analysis for each line, but now I want to compare the lines of one phenotype. I want to find the genes which are different in the lines with the same phenotype in comparison to the siblings. My makeContrast command looks like that:
cont.matrix <- makeContrasts(pheno1 = (line1.mut+line2.mut)-(line1.sib+line2.sib),
pheno2 = (line3.mut+line4.mut+line5.mut)-(line3.sib+line4.sib+line5.sib),
levels=design)
Finally I want to find genes (eg. genes involved in the cell cycle) which are in phenotype 1 up and in phenotype 2 down regulated or the other way around.
cont.matrix <- makeContrasts(pheno1vspheno2 = ((line1.mut+line2.mut)-(line1.sib+line2.sib)) -
((line3.mut+line4.mut+line5.mut)-(line3.sib+line4.sib+line5.sib)),
levels=design)
I am not sure if a can do a contrast matrix like this. Do you think this will give me the genes I am interested in? | You may want to double-check this but it seems to me that with:
pheno1vspheno2 = ((line1.mut+line2.mut)-(line1.sib+line2.sib)) -
((line3.mut+line4.mut+line5.mut)-(line3.sib+line4.sib+line5.sib))
You are looking for an interaction between mutant/sibling state and phenotype. *I.e.* genes that respond differently between mutant and sibling depending on whether they are in phenotype 1 or 2.
**EDIT** russhh in his nice answer points out that the contrasts should be averaged. So the above should in fact be (please check ok):
pheno1vspheno2 = ((line1.mut+line2.mut)/2 - (line1.sib+line2.sib)/2) -
((line3.mut+line4.mut+line5.mut)/3 - (line3.sib+line4.sib+line5.sib)/3)
----
If you want the difference between phenotype 1 and 2 regardless of mutant/sibling state you could use:
pheno1vspheno2 = (line1.mut + line2.mut + line1.sib + line2.sib)/4 -
(line3.mut + line4.mut + line5.mut + line3.sib + line4.sib + line5.sib)/6,
I would suggest you add to your count matrix one or more a dummy genes with behaviour that you want to pick up as differential or not significant to test whether the contrasts do what you expect. | biostars | {"uid": 435540, "view_count": 1314, "vote_count": 3} |
Do you know any **public** scientific SQL server?
For example, I would cite:
- UCSC http://genome.ucsc.edu/FAQ/FAQdownloads#download29
- ENSEMBL http://uswest.ensembl.org/info/data/mysql.html
- GO http://www.geneontology.org/GO.database.shtml#mirrors
(I'll give a +1 to each correct answer)
| FlyBase has direct access to its PostgreSQL chado database.
http://flybase.org/forums/viewtopic.php?f=14&t=114
hostname: flybase.org
port: 5432
username: flybase
password: no password
database name: flybase
e.g.
psql -h flybase.org -U flybase flybase | biostars | {"uid": 474, "view_count": 11197, "vote_count": 22} |
Good afternoon,
I have a question about using collapseReplicates in DESeq2.
As far as I understand, this function adds up the counts belonging to one biosample. I understand the meaning of this if the number of technical replicates per biosample is the same for all samples. Please tell me what should I do if the number of technical replicates per sample differs?
For example, SAMNXXXXX corresponds to SRRXXXXX1 (count = 5) and SRRXXXXX2 (count = 6), SAMNYYYYY corresponds only to SRRYYYYY1 (count = 10). If I add up the counts for SAMNXXXXX (5 + 6 = 11) and then compare it with count for SAMNYYYYY (10), I will get an incorrect conclusion that the expression is higher in SAMNXXXXX.
Maybe I need to take the arithmetic mean or something else? It seems to me that the arithmetic mean is not very reasonable. For example, I have counts of 181 and 2 for different replicates of the same biosample.
Note: this situation is not observed for most samples. For example, in a particular dataset there are 89 biosamples without technical replicates and 5 biosamples with 2 technical replicates each.
Thanks!
Good regards,
Poecile
| Collapsing **technical** replicates takes place before DESeq2's internal normalization, so you don't need to worry about the arithmetic. Differences in sequencing depth will be handled exactly as with non-collapsed samples: by dividing the counts by a size factor calculated with the median-of-ratios method. Therefore, it is OK not to have the same number of technical replicates per biosample. | biostars | {"uid": 9497035, "view_count": 1001, "vote_count": 1}
Hello everyone
I am doing network analyses using WGCNA, and I got an error message when setting the soft threshold. Here is what I got.
I am following these commands:
> library(WGCNA)
disableWGCNAThreads()
> library(cluster)
> options(stringsAsFactors = FALSE)
> femData = read.csv("gene.csv")
> names(femData)
[1] "gene_id" "gene" "locus" "log2.fold_change."
[5] "test_stat" "p_value" "q_value" "AF"
[9] "AF.1" "AF.2" "AD" "AD.1"
[13] "AD.2"
> dim(femData)
[1] 6765 13
> datExprFemale=as.data.frame(t(femData[, -c(1:7)]))
> names(datExprFemale)=femData$gene
> rownames(datExprFemale)=names(femData)[-c(1:7)]
> powers=c(1:10)
> sft=pickSoftThreshold(datExprFemale,powerVector=powers)
Error in summary(lm1)$coefficients[2, 1] : subscript out of bounds
In addition: Warning message:
executing %dopar% sequentially: no parallel backend registered
I tried to find similar questions in the archive, but I could not.
Please, I need help with this issue if anyone is able to help.
Best
Shaima
| The error means you're trying to access an element outside the array; this usually means one of your indices is greater than the last available index of the array. Here it is thrown inside `pickSoftThreshold`'s internal model fit, which typically points to a problem with the input expression matrix (see the checks sketched below).
As for the warning, check the [doParallel docs][1]: "Remember: unless registerDoMC is called, foreach will not run in parallel. Simply loading the doParallel package is not enough." In your case the warning is harmless: you called `disableWGCNAThreads()`, so sequential execution is exactly what is expected.
[1]: https://cran.r-project.org/web/packages/doParallel/vignettes/gettingstartedParallel.pdf | biostars | {"uid": 178866, "view_count": 3388, "vote_count": 2} |
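This specific `subscript out of bounds` inside `pickSoftThreshold` usually traces back to the input matrix rather than to WGCNA itself. A minimal set of pre-flight checks, reusing the objects from the question (the filtering step is only a sketch):

```
library(WGCNA)

dim(datExprFemale)   # rows = samples, columns = genes; here 6 x 6765
gsg <- goodSamplesGenes(datExprFemale, verbose = 3)
gsg$allOK            # FALSE if zero-variance genes or missing values are present
if (!gsg$allOK)
  datExprFemale <- datExprFemale[gsg$goodSamples, gsg$goodGenes]
sft <- pickSoftThreshold(datExprFemale, powerVector = 1:10)
```

Also make sure every retained column really is numeric expression data, and be aware that with only 6 samples the scale-free fit can be degenerate; the WGCNA FAQ recommends substantially more samples (ideally 15 or more).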
Hi,
I would appreciate it if anyone could direct me on how to give different colors to the up- and down-regulated genes using the R Bioconductor package **EnhancedVolcano**.
Just as indicated in the following link.
Thanks!
https://galaxyproject.github.io/training-material/topics/transcriptomics/tutorials/rna-seq-viz-with-volcanoplot/tutorial.html | Hey,
This can be achieved via customised coding or EnhancedVolcano. Please take a look at the vignette, where there is a dedicated section for this (see Section 4.9): http://bioconductor.org/packages/devel/bioc/vignettes/EnhancedVolcano/inst/doc/EnhancedVolcano.html
Kevin | biostars | {"uid": 455167, "view_count": 1509, "vote_count": 1} |
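For reference, a minimal sketch of the vignette's custom key-value approach (thresholds are invented; `res` is assumed to be a results data.frame with `log2FoldChange` and `pvalue` columns):

```
library(EnhancedVolcano)

keyvals <- ifelse(res$log2FoldChange < -1 & res$pvalue < 1e-5, 'royalblue',
           ifelse(res$log2FoldChange >  1 & res$pvalue < 1e-5, 'red', 'grey'))
keyvals[is.na(keyvals)] <- 'grey'
names(keyvals)[keyvals == 'red']       <- 'up-regulated'
names(keyvals)[keyvals == 'royalblue'] <- 'down-regulated'
names(keyvals)[keyvals == 'grey']      <- 'not significant'

EnhancedVolcano(res,
  lab = rownames(res), x = 'log2FoldChange', y = 'pvalue',
  colCustom = keyvals)
```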
I'm new to VCF processing and am a bit confused about the different conventions used by VCF files "in the wild". The VCF spec supports variant calls for multiple samples in a single file via the FORMAT column, with a separate column for each sample containing the explicit variant call for that sample in a "GT" field.
However, it seems like some tools (e.g. SnpEff) assume that the input VCF file contains calls for only a single sample, and so omit the FORMAT column and GT field(s) entirely. Instead, the variant call is apparently inferred from the REF and ALT columns.
My questions are:
- Which convention is actually used most often in practice, the full multi-sample format, or the simpler single-sample format?
- What happens if a tool like SnpEff assumes the simpler format, but is given a file with multiple samples? For example, the current VCFs from 1000 Genomes contain 2504 samples per file. How does SnpEff cope with this?
- It's not clear to me how the simpler convention handles heterozygosity vs. homozygosity. For example, a GT value of "0|1" is heterozygous, while "1|1" is homozygous. There is no way to convey this difference using the ALT and REF fields, is there?
Thanks for any insight you can provide.
Brian | - Which convention is actually used most often in practice, the full multi-sample format, or the simpler single-sample format?
 - I've seen both of them around, although multi-sample files tend to be reserved for dedicated purposes such as trios, other types of pedigrees, or bulk data releases like the 1000 Genomes project data.
- What happens if a tool like SnpEff assumes the simpler format, but is given a file with multiple samples? For example, the current VCFs from 1000 Genomes contain 2504 samples per file. How does SnpEff cope with this?
 - The annotation depends on the variant itself, not on the genotype, so even if you have thousands of sample genotypes, each variant will be annotated only once, and the annotation will be written in the INFO column (shared by all samples, since the annotation is variant-dependent, not sample-dependent). I haven't worked with SnpEff in multi-sample mode, but apparently it's perfectly capable of doing it: http://snpeff.sourceforge.net/SnpEff_manual.html#cancer
- It's not clear to me how the simpler convention handles heterozygosity vs. homozygosity. For example, a GT value of "0|1" is heterozygous, while "1|1" is homozygous. There is no way to convey this difference using the ALT and REF fields, is there?
 - Yes: in VCF the only way to tell whether a call is homozygous or heterozygous is by looking at the genotype. The REF and ALT columns just describe what the reference genome carries at that position and which alternative allele(s) were found. A call is homozygous if both numbers in the genotype are the same (1|1, 2|2, 3|3 and so on if the variant has more than one alternative allele, or even 0|0 in multi-sample files that include non-varying, reference-homozygous sites); see the fragment below.
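To make that concrete, here is a minimal, invented two-sample fragment; both samples share the same REF/ALT description, and only GT distinguishes the heterozygous call (SAMPLE_A) from the homozygous-ALT call (SAMPLE_B):

```
#CHROM  POS    ID  REF  ALT  QUAL  FILTER  INFO  FORMAT  SAMPLE_A  SAMPLE_B
1       12345  .   A    G    50    PASS    .     GT      0|1       1|1
```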
Extra note: if you're new to the VCF format, keep in mind that all the columns except the sample column(s) at the end are there to describe the variant, not the genotypes; a variant is defined by its chromosomal position and the alternative alleles found there. | biostars | {"uid": 140836, "view_count": 4848, "vote_count": 1}
Hi,
I have some paired-end exome sequencing data, and after checking the samples in FastQC, I found an enrichment of Illumina adapters. I used Trimmomatic to remove the adapters (using ILLUMINACLIP), which gives me paired reads, plus unpaired forward and reverse reads, post trimming.
I'm following the GATK best practices for variant calling, which recommend BWA-MEM for alignment. I can use the paired, trimmed reads for alignment, but there's quite a large proportion of reads which are trimmed and unpaired. The bedrock of my question is: how can I use these effectively?
Would the best way be to do an alignment with the pairs, then unpaired, and combine the SAM files in some way? Or even merge the BAMs?
Anyone had a similar experience? Advice is welcome!
Thanks | I do this regularly with exome datasets.
Reads whose mate fails trimming but are otherwise fine are included in a single-end file. It's still good data! Map them the same way you do the PE data, and subsequently combine the BAMs using Picard's MergeSamFiles or another approach, for example:
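A minimal sketch of that workflow (file names hypothetical; add read groups via `bwa mem -R` if you need them for GATK downstream):

```
bwa mem -t 8 ref.fa R1.paired.fq.gz R2.paired.fq.gz | samtools sort -o pe.bam -
cat R1.unpaired.fq.gz R2.unpaired.fq.gz > se.fq.gz   # concatenated gzip streams are valid
bwa mem -t 8 ref.fa se.fq.gz | samtools sort -o se.bam -
picard MergeSamFiles I=pe.bam I=se.bam O=merged.bam
samtools index merged.bam
```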
Some cleaning approaches make this easy. I use [expHTS](https://github.com/msettles/expHTS) to manage my cleaning now; it does a fantastic job and is blazingly fast with stream-based processing. Documentation is forthcoming, but the developers are happy to help.
| biostars | {"uid": 174103, "view_count": 2690, "vote_count": 3} |
**UPDATE:** I'll leave this post up since I got a really thorough response from the tool's developer himself. This response might be of great value to someone else later on. Thank you [jkbonfield][1]!
Hey fellow bioinformaticians!
At this point I'm really confused regarding the `-B` option with the `mpileup` function.
First, to clarify what this option does: it "disables base alignment quality (BAQ) computation".
Now, in the documentation page of [samtools mpileup][2]:
> BAQ is the Phred-scaled probability of a read base being misaligned. **It greatly helps to reduce false SNPs caused by misalignments**. BAQ is calculated using the probabilistic realignment method described in the paper “Improving SNP discovery by base alignment quality”
BUT, in the documentation page of [bcftools mpileup][3], they say the exact opposite regarding BAQ:
> `-B, --no-BAQ`
Disable probabilistic realignment for the computation of base alignment quality (BAQ). BAQ is the Phred-scaled probability of a read base being misaligned. **Applying this option greatly helps to reduce false SNPs caused by misalignments.**
So at this point I'm really confused. The samtools documentation says that computing BAQ helps reduce false SNP calls, while the bcftools documentation says that **disabling** BAQ improves SNP discovery.
Also, I've tried `bcftools mpileup` on the same set of data, once without `-B` and once with it. I got significantly different results: without this option, I got some INDELs in some samples, while with the `-B` argument, I got no INDELs at all in any of my 42 COVID-19 samples.
Am I missing something? Did I misunderstand the developers' wording?
EDIT: in the `ivar` [manual][4], the use of `-B` with `samtools mpileup` is recommended, but for a different reason:
> Please use the `-B` options with `samtools mpileup` to call variants and generate consensus. When a reference sequence is supplied, the quality of the reference base is reduced to 0 (ASCII: !) in the `mpileup` output. **Disabling BAQ** with `-B` seems to fix this. This was tested in `samtools` 1.7 and 1.8
I don't know what to understand from all of this.
[1]: https://www.biostars.org/u/41276/
[2]: http://www.htslib.org/doc/samtools-mpileup.html
[3]: http://samtools.github.io/bcftools/bcftools.html#mpileup
[4]: https://andersen-lab.github.io/ivar/html/manualpage.html | That looks like an error in the bcftools documentation. Good spot.
Generally though it's far more complicated than this.
BAQ assesses the per-position accuracy of an alignment. If the data is complex then it's likely there is a nice 1:1 alignment through the matrix and the Base Alignment Quality is high. For low-complexity indels, eg copy number variations in STRs, there can be multiple places the bases could be added or removed with similar or even identical alignment scores. In these scenarios BAQ will give a low score as it's not sure the alignment produced by the read mapper will be the same for all reads, particularly when close to the end of reads.
It is clear that using BAQ does remove many false positives by reducing the base quality in places where reference bias is likely to creep in. However by its very nature it also removes some true positives too (or increases false negatives if you prefer).
Last year I started looking to speed up bcftools by removing most BAQ calls, as it's the biggest CPU hog. The theory is can we do rapid assessments to identify where there appears to be no problematic alignments in the pileup, meaning BAQ isn't necessary to get the correct answer. (The idea came from Crumble, which assesses multiple alignments to work out which regions need quality values retained and which don't.) Doing this cuts out a good 90% of our BAQ calls, but rather fortuitously this also happens to still remove most false positives while not having the same detrimental impact on false negatives. A rare win/win.
It then turned into a bit of a rabbit hole of improving calling in other ways, particularly indels. This work is mostly complete, barring some tidying up. Some examples of the impact are here: https://github.com/samtools/bcftools/pull/1363#issuecomment-802042033
Hopefully we can get this in the next bcftools release.
Edit: For Covid-19 sequencing, if using amplicon methods, you may also wish to apply this Htslib PR:
https://github.com/samtools/htslib/pull/1273
It makes the overlap removal method randomised between read 1 / read 2. This removes a signficant source of strand bias from amplicon sequencing, which when coupled with BAQ can give major issues to calling. The alternative is to use `-x` to disable overlap removal. Double counting is a problem on shallow data, but it doesn't really make much difference when we get to high depth. | biostars | {"uid": 9466154, "view_count": 2580, "vote_count": 3} |
Dear all,
I was recently asked in a lab meeting whether there may be a gene length bias in the results produced by WGCNA, if the input data consists of normalized counts or variance-stabilized transformed data from DESeq2 (which does not correct for gene length). I am not sure how to answer this, as [there are several papers that used DESeq2 normalized counts as input for WGCNA][1], and I thought that this was actually [recommended by the authors of WGCNA][2].
In other words, could my detected expression modules be biased by gene length, with some modules being driven by the length of the genes they contain rather than by their actual expression?
[1]: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C9&q=deseq2%20wgcna&btnG=
[2]: https://horvath.genetics.ucla.edu/html/CoexpressionNetwork/Rpackages/WGCNA/faq.html | In any analysis that is based on the number of reads mapping to a gene, the longer a gene is the more reads will map to it.
In DE analysis (such as DESeq and edgeR), this means that a long gene is more likely to be called significantly differential than a short gene, because its count estimate is less noisy.
I don't know of any literature on how WGCNA might be affected by this; however, one can imagine that where expression is more accurately estimated (as it is for longer genes), there is a higher chance of the correlation being significant. So, yes, I might imagine that you would find longer genes are more likely to be assigned to modules.
Might even be a short paper in it if you could demonstrate it; one quick check is sketched below.
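A simple, hedged first pass: test whether gene length differs across the detected modules (`moduleColors` is the per-gene module assignment from WGCNA, `geneLength` a same-order vector of gene lengths; both are placeholders for your own objects):

```
kruskal.test(geneLength ~ factor(moduleColors))  # do modules differ in gene length?
boxplot(geneLength ~ factor(moduleColors), las = 2, ylab = "gene length (bp)")
```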
| biostars | {"uid": 406674, "view_count": 1136, "vote_count": 3} |
Hi, I was running the plink (1.9) tool to create a bed fileset (`plink --file inputfile --make-bed --noweb --out outputfile`), which worked fine, but when I switched to plink2, the same command doesn't work. Can we do everything that plink (1.9) does with plink2? Can someone please clarify this? Thank you. | Sorry about not noticing this earlier.
The main difference is that plink 1.9 is essentially finished, while plink 2.0 is an alpha-stage program which will have significant unfinished components for a while to come. As a consequence, current development priorities for plink 2.0 are centered around things which are impossible to do with plink 1.9, such as handling multiallelic/phased variants and dosage data and reliably tracking REF/ALT alleles; while things that plink 1.9 already handles perfectly well, such as working with .ped/.map file pairs, have been deliberately deprioritized for now.
So, you should stick to 1.9 as long as it's good enough for the jobs you need to perform. But once you need to do something outside 1.9's scope, you're likely to find that 2.0 already has the additional feature you need (or it'll be added quickly after you ask for it). | biostars | {"uid": 299855, "view_count": 5409, "vote_count": 1} |
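In practice that often means keeping 1.9 in the loop for the .ped/.map conversion and handing the binary fileset to plink2 afterwards; a sketch (file names from the question, plink2 flags assumed from its current documentation):

```
plink --file inputfile --make-bed --out outputfile      # plink 1.9 handles .ped/.map
plink2 --bfile outputfile --freq --out outputfile_freq  # plink 2.0 reads the binary fileset
```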
In GWAS, what is meant by 'single-SNP association studies only explain a small part of disease heritability'? How is this explained heritability quantified? | In statistics, when people say "17% of the variability is explained by X Y and Z", they are referring to the proportion of the variance that can be accounted for by the predictors in the statistical model.
For example, suppose you did a big association study on the genetics of lung cancer. You would need to include smoking as a covariate in the model. Why? Well, 1) this will prevent you from mis-attributing cancer that is actually due to smoking to the person's genetics instead. But your question touches on a second reason. 2) When you include smoking as a covariate you can "partial out", or more colloquially "explain away", some of the variance in the data. This means there is less total variation left over in the dataset as a whole, or "less variance left to explain". Still confused? I'll keep yammering.
Complicated phenomena like cancer are hard to predict because they have many distinct motivators. To predict cancer incidence perfectly, you would have to know about *all* of the predictors. But if you don't, your predictive model might succeed in explaining only 50% of the phenomenon (for example). A statistician might say about this, that, "the model accounts for 50% of the variance in the dataset." In the specific context in which you are speaking, you might phrase this as "only 15% of the heritability of the disease can be explained by known genetic risk factors"...
Numerically, what is going on is a ratio of sums of squares (SS): the total SS of the data goes on the bottom as the denominator, and the portion of it that your model explains goes on top as the numerator.
So if you had
```
(explained SS) = 10000
(unexplained SS) = 30000
(total SS) = 40000
```
the ratio of explained to total would be 10000/40000 = 0.25, and your model would "explain" 25% of the variance.
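This ratio is exactly what R reports as R-squared from a linear model; a toy illustration with simulated data:

```
set.seed(42)
x <- rnorm(200)               # e.g. genotype dosage at a SNP
y <- 0.3 * x + rnorm(200)     # phenotype with a modest genetic effect
fit <- lm(y ~ x)
summary(fit)$r.squared        # explained SS / total SS
anova(fit)                    # shows the explained and residual sums of squares
```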
It is not exactly correct to say this *has to* apply (only) to complex polygenic conditions, though it is likely.
As a counterexample, consider CFTR. If mutated in certain ways, the person may develop cystic fibrosis. Now, so far, we have found >1200 mutations that can lead to that phenotype or something like it... but imagine that we had only found 300 of these mutations to date.
Even though these variants are all in the same gene, you could still enter those 300 variants into a statistical model (e.g. a general linear model) as predictors, and you might be able to account for 43% of the variance in the data. This would relate directly to the ratio of sums of squares mentioned earlier.
Hope this helps. | biostars | {"uid": 127513, "view_count": 3710, "vote_count": 1} |
I would really like to use FastQC for my project but am getting the following error message when I try to run it on my Ubuntu server 15.04
```
bio@ubuntu:~$ fastqc &
[1] 716
rafay@ubuntu:~$ Exception in thread "main" java.awt.HeadlessException:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
at java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:207)
at java.awt.Window.<init>(Window.java:535)
at java.awt.Frame.<init>(Frame.java:420)
at java.awt.Frame.<init>(Frame.java:385)
at javax.swing.JFrame.<init>(JFrame.java:174)
at uk.ac.babraham.FastQC.FastQCApplication.<init>(FastQCApplication.java:71)
at uk.ac.babraham.FastQC.FastQCApplication.main(FastQCApplication.java:324)
```
Whereas java is already installed
```
bio@ubuntu:~$ java -version
java version "1.7.0_79"
OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.15.04.1)
OpenJDK Client VM (build 24.79-b02, mixed mode, sharing)
``` | It is complaining X11 is not set (maybe not even installed?). When you call fastqc without arguments, it opens the graphical interface. Try:
fastqc *.fastq & | biostars | {"uid": 160144, "view_count": 15234, "vote_count": 3} |