INSTRUCTION | RESPONSE | SOURCE | METADATA
---|---|---|---|
Hi!
Which bioinformatics tools that support CRISPR-Cas experiments do you recommend? Or which tools are you using?
| Here is [CHOPCHOP][1] which has been developed at our institute. It supports different approaches (Cas9, Cas13, Cpf1, TALEN) and knock-out, knock-in, activation, and repression.
And here is a more comprehensive list of all [CRISPR tools in bio.tools][2] sorted by citations, as in "most used successfully".
[1]: https://chopchop.cbu.uib.no/
[2]: https://bio.tools/t?q=crispr&sort=citationCount&ord=desc | biostars | {"uid": 9491059, "view_count": 1367, "vote_count": 1} |
Hi All,
Is it possible to re-run the STAR RNA aligner to obtain wiggle plots from already aligned and sorted BAM files? Or do I need to re-align to obtain this type of output? I would really rather not re-align, so I am open to other options.
Thank you!
| See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4631051/
STAR --runMode inputAlignmentsFromBAM \
--inputBAMfile Aligned.sortedByCoord.out.bam \
--outWigType bedGraph \
--outWigStrand Stranded | biostars | {"uid": 234437, "view_count": 4866, "vote_count": 1} |
Hi,
I would like to calculate the percent identity from the CIGAR string of a [BAM](http://samtools.sourceforge.net/SAM1.pdf)/SAM file containing alignments. I want to calculate the PID only for the aligned region, ignoring clipped ends ("H", "S"). I can parse the CIGAR in R and get the sums for each letter in the CIGAR, so it's just a conceptual question about the definition and whether or not it is correct:
given that the CIGAR contains the characters "M" (match), "N" (skip), "D" (deletion), "I" (insertion), "S" (soft clip), "H" (hard clip), of which I ignore "S" and "H", and M is the total sum of M operations, N the total of N, etc.:

e.g.: `10S20M5I5D20M10S`

Is this a good way of defining the formula?

    pid1 := 100 * M / (M+N+I)

or maybe

    pid2 := 100 * M / (M+N+I-D)

In the example: pid1 = ~88% but then pid2 = 100% (which wouldn't make much sense).

Related: http://biostar.stackexchange.com/questions/9358/is-there-any-r-package-to-parse-cigar-element-of-sam-files/17031#17031

Thank you very much
| I would say pid1 would be correct. Computationally, I don't think there is any difference between N (skipped) and D (deletion).

Both skipped and deleted regions should produce something like this:

    read       ACGTACGT--ACGTACGT
    reference  ACGTACGTAAACGTACGT

So if you add N + D, you are adding the gap twice.
**Edit:**

The [SAM specs](http://samtools.sourceforge.net/SAM1.pdf) are a bit confusing on the matter:
    M  alignment match (can be a sequence match or mismatch)
    I  insertion to the reference
    D  deletion from the reference
    N  skipped region from the reference
    S  soft clipping (clipped sequences present in SEQ)
    H  hard clipping (clipped sequences NOT present in SEQ)
    P  padding (silent deletion from padded reference)
    =  sequence match
    X  sequence mismatch
M can be a match or mismatch??

Further down in the [SAM](http://samtools.sourceforge.net/SAM1.pdf) specs:

- For mRNA-to-genome alignment, an N operation represents an intron. For other types of alignments, the interpretation of N is not defined.
- Sum of lengths of the M/I/S/=/X operations shall equal the length of SEQ
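
For reference, here is a minimal Python sketch of pid1 (a hypothetical helper, not from the thread; note that per the spec above, M can hide mismatches, and =/X are folded into M here):

    import re

    def pid1(cigar):
        """Percent identity over the aligned region: 100 * M / (M + N + I)."""
        counts = {}
        for length, op in re.findall(r"(\d+)([MIDNSHP=X])", cigar):
            counts[op] = counts.get(op, 0) + int(length)
        m = counts.get("M", 0) + counts.get("=", 0) + counts.get("X", 0)
        denom = m + counts.get("N", 0) + counts.get("I", 0)
        return 100.0 * m / denom if denom else 0.0

    print(pid1("10S20M5I5D20M10S"))  # ~88.9, matching the example above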
| biostars | {"uid": 16987, "view_count": 14696, "vote_count": 11} |
I have a GTF file containing 7 columns, from which I want to extract only the gene name from the 7th column.
The 7th column contains information including `gene_id`, `gene_name`, and so on; one row of that column is shown below:
gene_id "XLOC_000001"; transcript_id "TCONS_00000001"; exon_number "1"; gene_name "NAC001"; oId "AT1G01010.1"; nearest_ref "AT1G01010.1"; class_code "="; tss_id
I need only the `gene_name` from this column, for example "NAC001". How can I extract it? | Assuming the fields in your GTF are tab-separated, you could try something like this:
awk 'BEGIN{FS="\t"}{print $7}' YOURFILE | awk 'BEGIN{FS="gene_name"}{print $2}' | awk 'BEGIN{FS=";"}{print $1}' > OUTFILE
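
If you prefer Python, here is a minimal sketch of the same extraction (my own illustration; it assumes a tab-separated GTF with the attributes in column 7, as in the question, and `YOURFILE` is a placeholder):

    import re

    with open("YOURFILE") as gtf:
        for line in gtf:
            fields = line.rstrip("\n").split("\t")
            if len(fields) < 7:            # skip comments/short lines
                continue
            match = re.search(r'gene_name "([^"]+)"', fields[6])
            if match:
                print(match.group(1))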
[genomax2](https://www.biostars.org/u/18713/)'s solution with `sub` is nicer. | biostars | {"uid": 167957, "view_count": 5791, "vote_count": 1} |
I made a script to retrieve all of the recorded full-length human lung-infecting coronaviruses in GenBank (Virus Variation database, NCBI). After trimming out all of the really small files or those that didn't work, I'm left with ~200 FASTA files to work with. What are some analyses I can run with the new COVID-19 genome? I made a BLAST database and ran some blastn queries, but I'm wondering what bioinformatic analyses are typically run on novel viruses?
My blastn query with word size 7 had a few hits, but I am wondering how to interpret these results and what to do with that data.
This is just for fun/education. I don't have much experience with viruses or comparative genomics.
Thanks | Since you selected these strains (which are similar) there is not much of a point in doing BLAST analysis. Instead, you can start by doing a multiple sequence alignment of the sequences. If you are serious about learning, there are command-line versions of `MAFFT`, `T-COFFEE`, `Clustal`, and `MUSCLE`. You will also find web front-ends for these tools (if you search around). A multiple sequence alignment gives you an idea of the relationship of the sequences to each other. This can then be used to infer possible evolutionary relationships among the genomes.
`MEGA` is [user friendly software][1] that has a GUI for doing above analysis. They have a [pretty good online][2] manual as well.
[1]: https://www.megasoftware.net/
[2]: https://www.megasoftware.net/web_help_10/index.htm#t=Preface.htm | biostars | {"uid": 423761, "view_count": 690, "vote_count": 1} |
Hi, I have two multifasta files. I want to merge them, deleting from the first multifasta file all those FASTA sequences that are also in the second file. I need to do it by header comparison; the sequences differ under the same headers.
Alternatively, could somebody give me a hint on how to generate all the contigs (even the unchanged ones) through bcftools consensus?
Thanks,
Pawel | You can use the BBMap package like this:
filterbyname.sh in=file1.fasta names=file2.fasta exclude out=file1_filtered.fasta
cat file1_filtered.fasta file2.fasta > combined.fasta
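
If you don't have BBMap handy, here is a minimal Biopython sketch of the same header-based filtering (my own illustration, assuming exact matches on the FASTA record IDs):

    from Bio import SeqIO

    exclude = {rec.id for rec in SeqIO.parse("file2.fasta", "fasta")}
    kept = (rec for rec in SeqIO.parse("file1.fasta", "fasta") if rec.id not in exclude)
    SeqIO.write(kept, "file1_filtered.fasta", "fasta")
    # then concatenate file1_filtered.fasta and file2.fasta as above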
| biostars | {"uid": 198558, "view_count": 1780, "vote_count": 1} |
I am trying to assemble a pooled metagenomic dataset with 762,909,004 reads. Will using tools such as BBNorm to downsize the coverage in order to speed up the assembly significantly affect the final assemblies? | To answer your question first: normalizing it down to 100x should have no ill effect, and it may even help reduce errors if you have very high coverage. Several times I have obtained better assemblies from 60x- or 80x-normalized data than from the original datasets. Still, I suggest that you assemble from your original data so you have a baseline. I have assembled about 350 million reads with [**megahit**][1] in about 10 hours - you should be able to do it with your dataset in a couple of days. It is always possible the original data is of such quality that it yields the best assembly.
[1]: https://github.com/voutcn/megahit | biostars | {"uid": 478313, "view_count": 607, "vote_count": 1} |
Good morning,
I'm a student of bioinformatics at the University of Bologna.
I'm working on a personal idea and I'm using MUSTANG to perform multiple structural alignment.
To run the algorithm, .pdb files with just one chain are required, so I'm looking for tools or a suggestion on how to edit all the files in a clever and fast way.
Until now I have written some command lines in Python, but I think they're not specific enough to correctly edit all the files; in fact I often have to check all the structures manually again.
Could someone help me?
Thanks in advance!
Dade. |
```
"""
Extract a single chain from a PDB file
"""
from __future__ import print_function
import Bio.PDB
import Bio.PDB.PDBIO
import sys
import argparse


class ChainSelect(Bio.PDB.Select):
    def __init__(self, target_chain):
        self.target_chain = target_chain

    def accept_chain(self, chain):
        if chain.get_id() == self.target_chain:
            return 1
        else:
            return 0


def main():
    argparser = argparse.ArgumentParser(description="Extract chain from a PDB file")
    argparser.add_argument('infile', help="Path to input file (PDB)")
    argparser.add_argument('chain', help="Chain to extract")
    argparser.add_argument('outfile', help="Path to output file (PDB)")
    args = argparser.parse_args()

    pdbparser = Bio.PDB.PDBParser()
    io = Bio.PDB.PDBIO()
    with open(args.infile, 'r') as infile:
        struct = pdbparser.get_structure(args.infile, infile)
    io.set_structure(struct)
    with open(args.outfile, 'w') as outfile:
        io.save(outfile, ChainSelect(args.chain))
    return 0


if __name__ == "__main__":
    sys.exit(main())
```
Run it with something like:

    python extract_chain.py 1XXX.pdb A 1XXXA.pdb
to extract just chain A from 1XXX.pdb. | biostars | {"uid": 186022, "view_count": 5529, "vote_count": 1} |
Hi all,

This question could be considered a follow-up to this discussion:
http://www.biostars.org/post/show/3407/how-to-extract-reads-from-bam-that-overlap-with-specific-regions/#3414
What I need is to extract reads from a [BAM](http://samtools.sourceforge.net/SAM1.pdf) file that fall **only within a given region** (not just overlapping the given region), the region being given as a GFF or BED file. Overlapping reads can be extracted by several methods (as in the discussion mentioned, or with [BEDTools](https://code.google.com/p/bedtools/)). The idea is to be reasonably sure of excluding 5' UTRs in the process of detecting intergenic transcripts. I saw a tool in [BamUtil](http://genome.sph.umich.edu/wiki/BamUtil) called "writeRegion" which would pretty much do what I want; somehow I could not get it running for my dataset.
I was wondering if you might have an R or some other solution for this.
Thanks in advance
Abi
| You can extract the mappings of a [SAM](http://samtools.sourceforge.net/SAM1.pdf)/BAM file by reference and region with [samtools](http://samtools.sourceforge.net/). For example:

    samtools view -b input.bam "Chr10:18000-45500" > output.bam

That would output all reads in Chr10 between 18,000 and 45,500 bp. (Note the `-b` flag so the output is BAM rather than SAM text; the input BAM must be coordinate-sorted and indexed first.)
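
Since the question asks for reads falling entirely inside the region rather than merely overlapping it, here is a minimal pysam sketch of that extra filter (a hypothetical illustration, not from the thread; `fetch` uses 0-based half-open coordinates and yields overlapping reads):

    import pysam

    start, end = 18000, 45500
    with pysam.AlignmentFile("input.bam", "rb") as inbam, \
         pysam.AlignmentFile("output.bam", "wb", template=inbam) as outbam:
        for read in inbam.fetch("Chr10", start, end):
            # keep only reads fully contained in the region
            if read.reference_start >= start and read.reference_end is not None \
                    and read.reference_end <= end:
                outbam.write(read)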
| biostars | {"uid": 48719, "view_count": 127893, "vote_count": 30} |
I am trying to modify the location of features within a GenBank file. I know `feature.type` will give gene/CDS and `feature.qualifiers` will then give "db_xref"/"locus_tag"/"inference" etc. Is there a `feature.` object which will allow me to access the location (eg: `[5240:7267](+)`) directly?
This URL gives a bit more info, though I can't figure out how to use it for my purpose: http://biopython.org/DIST/docs/api/Bio.SeqFeature.SeqFeature-class.html#location_operator
Essentially, I want to modify the following bit of a GenBank file:
gene 5240..7267
/db_xref="GeneID:887081"
/locus_tag="Rv0005"
/gene="gyrB"
CDS 5240..7267
/locus_tag="Rv0005"
/inference="protein motif:PROSITE:PS00177"
...........................
to
gene 5357..7267
/db_xref="GeneID:887081"
/locus_tag="Rv0005"
/gene="gyrB"
CDS 5357..7267
/locus_tag="Rv0005"
/inference="protein motif:PROSITE:PS00177"
.............................
Note the changes from **5240** to **5357**
So far I have the following python script:
    from Bio import SeqIO

    gb_file = "mtbtomod.gb"
    gb_record = SeqIO.parse(open(gb_file, "r+"), "genbank")
    rvnumber = 'Rv0005'
    newstart = 5357
    final_features = []
    for record in gb_record:
        for feature in record.features:
            if feature.type == "gene":
                if feature.qualifiers["locus_tag"][0] == rvnumber:
                    if feature.location.strand == 1:
                        pass  # Amend feature location from current to 'newstart'
                    else:
                        pass  # do the reverse for the complementary strand
            final_features.append(feature)
        record.features = final_features
        with open("test.gb", "w") as test:
            SeqIO.write(record, test, "genbank")
*Rv0005* is just an example of a locus_tag I need to update. I have about 600 more locations to update per GenBank file, and about 10-20 GenBank files to process (with more to come) | Where is the question? Do you get an error message?
Well I do get one: when I do something like:
    feature.location.start.position = newstart
I get an error:
seqfeature AttributeError can't set attribute
And it doesn't matter whether the GenBank file was opened read-only as you did, or in write mode.
I suppose that you need to create a new genbank file, and copy all the attributes.
**Edit 1**:
Well modifying the start is possible after all.
```
from Bio import SeqIO
from Bio import SeqFeature

start_pos = SeqFeature.AfterPosition(newstart)
end_pos = SeqFeature.BeforePosition(feature.location.end.position)
my_location = SeqFeature.FeatureLocation(start_pos, end_pos)
feature.location = my_location
```
But that does not solve the saving problem yet.
**Edit 2**:
Useful links:
- https://www.biostars.org/p/57549/
- [Dealing with GenBank files in Biopython][1]
- [Parsing Genbank files with Biopython][2]
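
Putting the pieces together, here is a minimal sketch of the whole edit (my own illustration, not from the thread; it uses `ExactPosition`, and remember that GenBank coordinates are 1-based while Biopython locations are 0-based):

    from Bio import SeqIO
    from Bio.SeqFeature import FeatureLocation, ExactPosition

    records = []
    for record in SeqIO.parse("mtbtomod.gb", "genbank"):
        for feature in record.features:
            if feature.qualifiers.get("locus_tag", [None])[0] == "Rv0005":
                old = feature.location
                # GenBank position 5357 is Biopython start 5356 (0-based)
                feature.location = FeatureLocation(ExactPosition(5356), old.end, strand=old.strand)
        records.append(record)
    SeqIO.write(records, "test.gb", "genbank")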
[1]: http://www2.warwick.ac.uk/fac/sci/moac/people/students/peter_cock/python/genbank/
[2]: http://wilke.openwetware.org/Parsing_Genbank_files_with_Biopython.html | biostars | {"uid": 106108, "view_count": 4653, "vote_count": 1} |
I'm trying to add a qualifier to a new feature and then save as a genbank file. I'm quite new to python and appreciate the help.
DF posted a great guide a few years ago for making GenBank files that I have been following: https://www.biostars.org/p/57549/
But I am stuck adding a qualifier to a new feature. Here is his guide; I would like to add a qualifier after step 4. Thanks.
```
################ B: Make a SeqFeature ################
# 1. Create a start location and end location for the feature
# Obviously this can be AfterPosition, BeforePosition etc.,
# to handle ambiguous or unknown positions
from Bio import SeqFeature
my_start_pos = SeqFeature.ExactPosition(2)
my_end_pos = SeqFeature.ExactPosition(6)
# 2. Use the locations do define a FeatureLocation
from Bio.SeqFeature import FeatureLocation
my_feature_location = FeatureLocation(my_start_pos,my_end_pos)
# 3. Define a feature type as a text string
# (you can also just add the type when creating the SeqFeature)
my_feature_type = "CDS"
# 4. Create a SeqFeature
from Bio.SeqFeature import SeqFeature
my_feature = SeqFeature(my_feature_location,type=my_feature_type)
#how would you add a qualifier here with key = note and value = test?
# **my_feature = SeqFeature(my_feature_location,type=my_feature_type,qualifier=....)
# 5. Append your newly created SeqFeature to your SeqRecord
my_sequence_record.features.append(my_feature)
#optional: print the SeqRecord to STDOUT in genbank format, with your new feature added.
#print "\nThis bit is the SeqRecord, printed out in genbank format, with a feature added.\n"
#print(my_sequence_record.format("gb"))
``` | Got it after I thought about it. If anyone needs this in the future -
```
notes = {"note": "test"}
my_feature = SeqFeature(my_feature_location, type=my_feature_type, qualifiers=notes)
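# Note: parsed GenBank records store qualifier values as lists,
# e.g. qualifiers={"note": ["test"]}; using a list is the safer convention.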
``` | biostars | {"uid": 125642, "view_count": 3683, "vote_count": 1} |
In [the user manual of velvet][1], there is a section:
> 4.2.2 The stats.txt file
>
> This file is a simple tabbed-delimited description of the nodes. The column names are pretty much self-explanatory. Note however that node lengths are given in k-mers. To obtain the length in nucleotides of each node you simply need to add k - 1, where k is the word-length used in velveth.
I don't understand the paragraph. Let's say k==10, each node in the graph would have a length of 10. Why do we have to add k-1 to convert it? The length in nucleotides should also be k! In my example, each node would have a length in nucleotides of 10 (==k).
[1]: http://www.ebi.ac.uk/~zerbino/velvet/Manual.pdf | <p>The nodes in the stats.txt file refer to the final graph after merging unambiguous paths (parts of the De Bruijn graph that do not split/merge) into single nodes. This no longer a De Bruin graph. And, now the lengths of the new nodes is the number of nodes from the De Bruin graph that went into that node in the final graph. But each node in the De Bruijn graph only represents one base (the overlap between that kmer and the previous one), so to derive the sequence length, you have to add k-1. There should really be a drawing to explain this, may one day I'll make one (as I need to explain this during my course on assembly).</p>
| biostars | {"uid": 144313, "view_count": 2817, "vote_count": 1} |
For example, I have the following already in the [BAM](http://samtools.sourceforge.net/SAM1.pdf) header:

    @RG ID:110131_SN107_0398_A81DDCABXX_LANE2 PL:ILLUMINA LB:P0007 SM:tumor
    @RG ID:110131_SN107_0398_A81DDCABXX_LANE4 PL:ILLUMINA LB:P0007 SM:tumor
    @RG ID:110131_SN107_0398_A81DDCABXX_LANE6 PL:ILLUMINA LB:P0007 SM:tumor
    @RG ID:110131_SN107_0398_A81DDCABXX_LANE8 PL:ILLUMINA LB:P0007 SM:tumor
    @RG ID:110131_SN107_0399_B81CYUABXX_LANE2 PL:ILLUMINA LB:P0007 SM:tumor
    @RG ID:110131_SN107_0399_B81CYUABXX_LANE4 PL:ILLUMINA LB:P0007 SM:tumor
    @RG ID:110131_SN107_0399_B81CYUABXX_LANE6 PL:ILLUMINA LB:P0007 SM:tumor
    @RG ID:110131_SN107_0399_B81CYUABXX_LANE8 PL:ILLUMINA LB:P0007 SM:tumor
I want to make it:

    @RG ID:110131_SN107_0398_A81DDCABXX_LANE2 PL:ILLUMINA LB:tumor_P0007 SM:tumor
    @RG ID:110131_SN107_0398_A81DDCABXX_LANE4 PL:ILLUMINA LB:tumor_P0007 SM:tumor
    @RG ID:110131_SN107_0398_A81DDCABXX_LANE6 PL:ILLUMINA LB:tumor_P0007 SM:tumor
    @RG ID:110131_SN107_0398_A81DDCABXX_LANE8 PL:ILLUMINA LB:tumor_P0007 SM:tumor
    @RG ID:110131_SN107_0399_B81CYUABXX_LANE2 PL:ILLUMINA LB:tumor_P0007 SM:tumor
    @RG ID:110131_SN107_0399_B81CYUABXX_LANE4 PL:ILLUMINA LB:tumor_P0007 SM:tumor
    @RG ID:110131_SN107_0399_B81CYUABXX_LANE6 PL:ILLUMINA LB:tumor_P0007 SM:tumor
    @RG ID:110131_SN107_0399_B81CYUABXX_LANE8 PL:ILLUMINA LB:tumor_P0007 SM:tumor
Thanks.
| `samtools view -H mybamfile.bam | sed -e 's/LB:/LB:tumor_/' | samtools reheader - mybamfile.bam > mybamfile.reheadered.bam` | biostars | {"uid": 8988, "view_count": 15755, "vote_count": 4} |
I am trying to map reads with `rsem-calculate-expression` using the STAR aligner in a loop; however, I am getting an error that the STAR temp folder is not deleted after the first run, which blocks the next run. What am I doing wrong?
1. I have changed the permissions on the `../test_results` folder
2. I am running on the same number of nodes as the threads requested.
This is my code:
for prefix in $(ls *.fastq.gz | rev | cut -c 12-| rev | uniq)
do
rsem-calculate-expression --star \
--star-path /share/pkg/star/2.7.0e/bin \
--star-gzipped-read-file \
-p 4 \
--paired-end \
"${prefix}R1.fastq.gz" "${prefix}R2.fastq.gz" \
../genome_indices/rsem-star/rsem-star \
../test_results2/"${prefix}res"
done
This is the error part of the output
Expression Results are written!
1000000 alignment lines are loaded!
2000000 alignment lines are loaded!
3000000 alignment lines are loaded!
4000000 alignment lines are loaded!
5000000 alignment lines are loaded!
Bam output file is generated!
Time Used for EM.cpp : 0 h 02 m 50 s
rm -rf ../test_results/G20P1sc-C05-res.temp
rm: cannot remove `../test_results/G20P1sc-C05-res.temp/.nfs000000047fe2b1c700000f07': Device or resource busy
rm: cannot remove `../test_results/G20P1sc-C05-res.temp/.nfs00000004809fdb5b00000f06': Device or resource busy
rm: cannot remove `../test_results/G20P1sc-C05-res.temp/.nfs0000000480b151e000000f09': Device or resource busy
rm: cannot remove `../test_results/G20P1sc-C05-res.temp/.nfs000000048022bb5100000f08': Device or resource busy
Fail to delete the temporary folder!
"rm -rf ../test_results/G20P1sc-C05-res.temp" failed! Plase check if you provide correct parameters/options for the pipeline!
/share/pkg/star/2.7.0e/bin/STAR --genomeDir ../genome_indices/rsem-star --outSAMunmapped Within --outFilterType BySJout --outSAMattributes NH HI AS NM MD --outFilterMultimapNmax 20 --outFilterMismatchNmax 999 --outFilterMismatchNoverLmax 0.04 --alignIntronMin 20 --alignIntronMax 1000000 --alignMatesGapMax 1000000 --alignSJoverhangMin 8 --alignSJDBoverhangMin 1 --sjdbScore 1 --runThreadN 4 --genomeLoad NoSharedMemory --outSAMtype BAM Unsorted --quantMode TranscriptomeSAM --outSAMheaderHD \@HD VN:1.4 SO:unsorted --outFileNamePrefix ../test_results/G20P1sc-C08-res.temp/G20P1sc-C08-res --readFilesCommand zcat --readFilesIn G20P1sc-C08-R1.fastq.gz G20P1sc-C08-R2.fastq.gz
EXITING because of fatal ERROR: could not make temporary directory: ../test_results/G20P1sc-C08-res.temp/G20P1sc-C08-res_STARtmp/
SOLUTION: (i) please check the path and writing permissions
(ii) if you specified --outTmpDir, and this directory exists - please remove it before running STAR
Jun 01 13:30:25 ...... FATAL ERROR, exiting
"/share/pkg/star/2.7.0e/bin/STAR --genomeDir ../genome_indices/rsem-star --outSAMunmapped Within --outFilterType BySJout --outSAMattributes NH HI AS NM MD --outFilterMultimapNmax 20 --outFilterMismatchNmax 999 --outFilterMismatchNoverLmax 0.04 --alignIntronMin 20 --alignIntronMax 1000000 --alignMatesGapMax 1000000 --alignSJoverhangMin 8 --alignSJDBoverhangMin 1 --sjdbScore 1 --runThreadN 4 --genomeLoad NoSharedMemory --outSAMtype BAM Unsorted --quantMode TranscriptomeSAM --outSAMheaderHD \@HD VN:1.4 SO:unsorted --outFileNamePrefix ../test_results/G20P1sc-C08-res.temp/G20P1sc-C08-res --readFilesCommand zcat --readFilesIn G20P1sc-C08-R1.fastq.gz G20P1sc-C08-R2.fastq.gz" failed! Plase check if you provide correct parameters/options for the pipeline!
| I found out that the `.nfs000000047fe2b1c700000f07`-type files are the problem. (These appear when a file deleted on an NFS mount is still held open by a running process.)
Once I directed the output to a new folder, it solved the issue :) | biostars | {"uid": 382608, "view_count": 2098, "vote_count": 1} |
Hey all, I have a data frame with 5 samples: A, B, C, D, E.
A is the parent (reference) sample and the rest of the samples are from patients. Each row represents a miRNA, and the value for that row in each column is the background-subtraction value of that miRNA in that sample. I want to perform an ANOVA test in R. I am a bit confused about how I should do it: with the parent and one patient sample at a time (A & B, A & C, and so on)? Secondly, most of the ANOVA examples I saw on Google and YouTube have one column with the data and a second column with the group for each value, for example:
```
Weight Loss   Diet
1.2           A
22.3          A
5.4           C
33.5          B
...

My data (wide table, columns A-E rejoined):

                                           A         B         C          D         E
hsa-miR-199a-3p, hsa-miR-199b-3p          NA  13.13892  5.533703   25.67405        NA
hsa-miR-365a-3p, hsa-miR-365b-3p    15.70536  52.86558 18.467540  223.51424  31.93503
hsa-miR-3689a-5p, hsa-miR-3689b-5p        NA  21.41597  5.964772         NA  24.26073
hsa-miR-3689b-3p, hsa-miR-3689c      9.58696  44.56490 10.102051   13.26785        NA
hsa-miR-4520a-5p, hsa-miR-4520b-5p  18.06865  28.06991        NA         NA        NA
hsa-miR-516b-3p, hsa-miR-516a-3p          NA  10.77471  8.039662         NA        NA
```
How should I do this for my data?
Thanks in Advance
Best
Adnan | To transform your data into miRNA/group/value format, use `melt` from the `reshape` package. After performing the ANOVA you can do a post-hoc t-test using the `pairwise.t.test` function; use for example `p.adjust.method="holm"` for multiple-testing correction. The ANOVA will tell you whether group and value are dependent, while a post-hoc t-test of group A versus the others will tell you which groups differ from the parent. | biostars | {"uid": 117667, "view_count": 3396, "vote_count": 1} |
Hi all,
As you can see below, I have two columns of interest. I want to replace the values smaller than 0.500000 in the A_Freq column with the values from the M.F column in the same rows. What is the best approach?
For example: replace 0.312500 in the third row of the A_Freq column with 0.687500 from the same row in the M.F column.
CHROM_POS A_Freq M.F N_Chr
1 CM009840.1_1096 0.812500 0.187500 16.25000
2 CM009840.1_1177 0.611111 0.388889 12.22222
3 CM009840.1_1276 0.312500 0.687500 6.25000
4 CM009840.1_1295 0.277778 0.722222 5.55556
5 CM009840.1_1471 0.250000 0.750000 5.00000
6 CM009840.1_1518 0.875000 0.125000 17.50000
7 CM009840.1_1527 0.222222 0.777778 4.44444
8 CM009840.1_1533 0.777778 0.222222 15.55556
9 CM009840.1_1630 0.250000 0.750000 5.00000
10 CM009840.1_1639 0.000000 1.000000 0.00000
11 CM009840.1_1711 0.500000 0.500000 10.00000
12 CM009840.1_1972 0.250000 0.750000 5.00000
13 CM009840.1_2030 0.142857 0.857143 2.85714
14 CM009840.1_2101 0.375000 0.625000 7.50000
15 CM009840.1_2690 0.687500 0.312500 13.75000
16 CM009840.1_2849 0.142857 0.857143 2.85714
17 CM009840.1_3013 0.312500 0.687500 6.25000
18 CM009840.1_3042 0.714286 0.285714 14.28572
19 CM009840.1_3062 0.250000 0.750000 5.00000
20 CM009840.1_3128 0.250000 0.750000 5.00000
Best Regard
Modtafa | With R:
select <- df$A_Freq < 0.5
df[select,"A_Freq"] <- df[select,"M.F"]
Or, maybe more elegantly:
df$A_Freq <- ifelse(df$A_Freq < 0.5, df$M.F, df$A_Freq) | biostars | {"uid": 344116, "view_count": 25799, "vote_count": 2} |
I know that many pigmentation traits (skin, hair and eye color) are highly heritable. For instance, I assume that the heritability of eye color should be high.
I've been trying to find articles estimating heritability of these traits either based on twin studies or using more modern methods based on GWAS SNP data, for instance calculated using GCTA or LDAK, but couldn't find any.
| Heritability of Skin Color
1) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC212702/
2) https://genepi.qimr.edu.au/contents/p/staff/CV018.pdf (h2 ranging from 0.37 to 0.83)
Eye color will probably be harder, since it is usually not quantitatively measured.
| biostars | {"uid": 217509, "view_count": 1593, "vote_count": 1} |
Helo,
In a VCF file, there are GT/PL fields for the genotype and its likelihood values. If 2 alleles are possible (reference allele and alternative allele), the value looks like this:

    0/1:56,0,80

The score 56 corresponds to the reference homozygote, 0 to the heterozygote, and 80 to the alternative homozygote.
My question: if there are more than 2 alleles (say 0 for the reference and 1, 2 for the alternates), the PL will consist of 6 scores, corresponding to:

1. reference homozygous (0/0)
2. alt 1 homozygous (1/1)
3. alt 2 homozygous (2/2)
4. ref and alt 1 heterozygous (0/1)
5. ref and alt 2 heterozygous (0/2)
6. alt 1 and alt 2 heterozygous (1/2)

My question is: what is the order in the actual VCF file? I just don't know the order of the scores and their corresponding meanings. Below is an actual example of one line of my VCF data.
1 226548932 . ACGGCGGCGGCGGCGGCGGCGGTGGCGGCGGCGG ACGGCGGCGGCGGTGGCGGCGGCGG,ACGGCGGCGGCGGCGGCGGTGGCGGCGGCGG 39.049 . INDEL;IDV=1;IMF=1;DP=9;VDB=0.0225004;SGB=-1.15236;MQSB=0.900802;MQ0F=0;ICB=0.153846;HOB=0.0555556;AC=1,1;AN=12;DP4=4,2,1,1;MQ=60 GT:PL ./.:0,0,0,0,0,0 0/0:0,3,60,3,60,60 0/0:0,3,60,3,60,60 ./.:0,0,0,0,0,0 0/1:60,3,0,60,3,60 0/0:0,3,60,3,60,60 0/0:0,3,60,3,60,60 0/2:50,56,132,0,81,78
Look at the GT/PL list below (I have 8 samples):
1. Sample 1 : ./.:0,0,0,0,0,0
2. Sample 2 : 0/0:0,3,60,3,60,60
3. Sample 3 : 0/0:0,3,60,3,60,60
4. Sample 4 : ./.:0,0,0,0,0,0
5. Sample 5 : 0/1:60,3,0,60,3,60
6. Sample 6 : 0/0:0,3,60,3,60,60
7. Sample 7 : 0/0:0,3,60,3,60,60
8. Sample 8 : 0/2:50,56,132,0,81,78
Here is another interesting result:
1. Sample 1: 1/1:26,12,9,26,12,26
2. Sample 2: 0/1:0,3,5,3,5,5
3. Sample 3: 1/1:26,12,9,26,12,26
4. Sample 4: 1/2:45,45,45,6,6,0
5. Sample 5: 1/1:20,3,0,20,3,20
6. Sample 6: ./.:0,0,0,0,0,0
7. Sample 7: ./.:0,0,0,0,0,0
8. Sample 8: 1/1:26,12,9,26,12,26
So, if anyone knows how to interpret the scores, please teach me, and if possible maybe you can explain the general concept. I tried reading the VCF documentation, but I don't think it is written there.
| > My question is what is the order in the actual VCF file?
This info is present in [VCF specification][1], not easy to find though. **Section 1.4.2**
**PL** : the phred-scaled genotype likelihoods rounded to the closest integer (**and otherwise defined precisely as
the GL field**) (Integers)
**GL** : genotype likelihoods comprised of comma separated floating point log10-scaled likelihoods for all possible
genotypes given the set of alleles defined in the REF and ALT fields. In presence of the GT field the same
ploidy is expected and the canonical order is used; without GT field, diploidy is assumed. If A is the allele in
REF and B,C,... are the alleles as ordered in ALT, the ordering of genotypes for the likelihoods is given by:
F(j/k) = (k*(k+1)/2)+j. **In other words, for biallelic sites the ordering is: AA,AB,BB; for triallelic sites the
ordering is: AA,AB,BB,AC,BC,CC, etc**. For example: GT:GL 0/1:-323.03,-99.29,-802.53 (Floats)
So the order in PL is the same as GL, which follows **AA,AB,BB,AC,BC,CC**, for tri-allelic sites.
> So, if anyone knows how to interpret the score, please teach me and if
> it is possible, maybe you can explain the general consept.
This concept is very well explained in the following GATK document.
http://gatkforums.broadinstitute.org/gatk/discussion/1268/what-is-a-vcf-and-how-should-i-interpret-it
If you understand the [Phred scale][2], it should be easy to follow. In case of difficulty, let us know.
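
A quick Python check of that ordering formula (a toy illustration of the spec's F(j/k) = k*(k+1)/2 + j, not code from the thread):

    alleles = ["A", "B", "C"]            # REF = A; ALT = B, C (a triallelic site)
    n = len(alleles)
    order = [None] * (n * (n + 1) // 2)
    for k in range(n):
        for j in range(k + 1):
            order[k * (k + 1) // 2 + j] = alleles[j] + alleles[k]
    print(order)                         # ['AA', 'AB', 'BB', 'AC', 'BC', 'CC']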
[1]: https://samtools.github.io/hts-specs/VCFv4.2.pdf
[2]: https://en.wikipedia.org/wiki/Phred_quality_score | biostars | {"uid": 246722, "view_count": 8248, "vote_count": 1} |
Hi!
I could/should have asked this question on the Bioconductor forum, but the answers there are usually (just) over my head.
I was wondering how to use a table of "expected counts" from RSEM to obtain DEGs using DESeq2/edgeR. The expected counts are from UCSC Xena-processed GTEx data.
There seems to be more than one proposed workflow for this kind of study. Before delving deeper into calculations, I wanted to know which approach is more often used:
1. Rounding the expected count:
https://www.biostars.org/p/382643/
2. As the tximport vignette on Bioconductor explains:
https://bioconductor.org/packages/release/bioc/vignettes/tximport/inst/doc/tximport.html#rsem
However, in the vignette the starting txi object is built differently from my data (I still have not checked the structure of the resulting R object, so I might create it manually; I am **waiting** for confirmation that this method is preferable to the simpler method 1 above).
txi <- tximport(files, type = "rsem", txIn = FALSE, txOut = FALSE)
dds <- DESeqDataSetFromTximport(txi, sampleTable, ~condition)
| Definitely use tximport if you can.
If I remember correctly, the `txi` object is a `list` with three slots: `$abundance` contains a matrix of TPMs, with one row per transcript and one column per sample, `$counts` contains the same matrix layout for estimated counts, and `$length` contains the matching matrix of effective lengths. This shouldn't be too hard to create.
You can then use the `tximport` functions for collapsing transcript counts to gene counts, and for creating a DESeq dataset object or edgeR object from that collapsed `txi`. The reason to go from transcript counts rather than gene counts is that `tximport` uses the transcript counts to create a weighted effective gene length for use as an offset in the DE model. This protects against splicing changes making counts from the same gene incomparable between samples, because the effective gene length differs.
I seem to remember that RSEM is able to do something similar, but I can't quite be sure.
| biostars | {"uid": 423237, "view_count": 2572, "vote_count": 3} |
The package upsetr was installed with conda. It **works** in my **terminal**
conda activate upsetr
conda deactivate
I was trying to invoke conda in a **Makefile**, something like:
SHELL=/bin/bash
test:
conda activate upsetr && conda deactivate
but it doesn't work:
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
I can reproduce this error by piping the command into bash
$ echo 'conda activate upsetr && conda deactivate' | bash
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Why does it fail?
| I have learned quite a few techniques above; let me share what I do to avoid sourcing `.bashrc` on every system. There is another conda-specific script at `~/miniconda3/etc/profile.d/conda.sh` that seems more appropriate for this purpose.
The shell script below runs in whatever environment you pass to it as a parameter and it sources the `conda.sh` script.
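# Usage: bash run_in_env.sh <env-name>   (the script file name is up to you)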
# Conda prefix
ENV=$1
# Load conda specific commands.
source ~/miniconda3/etc/profile.d/conda.sh
# Activate the environment.
conda activate $ENV
# Print conda prefix
echo PREFIX:$CONDA_PREFIX
Of course, conda may change the content of `conda.sh` at any time, so we need to keep that in mind. | biostars | {"uid": 9515156, "view_count": 1729, "vote_count": 1} |
Project PRJEB99111 has 147 samples. I want to download the metadata (age, sex, disease status, etc) of each sample, not fastq. The only way I can download the metadata is by downloading the xml file of each sample accession one by one - is there a way to bulk download all 147 metadata files? I can work with xml files if I have to.
You can view the metadata for a specific sample accession by clicking on the "attributes" tab. Here is an example for one sample: [https://www.ebi.ac.uk/ena/data/view/SAMEA104228123][1]
[1]: https://www.ebi.ac.uk/ena/data/view/SAMEA104228123 | With the following XSLT stylesheet:
https://gist.github.com/lindenb/04ed876dc6581f33f19682968ae37bb5
$ wget -q -O - "https://www.ebi.ac.uk/ena/data/warehouse/filereport?accession=PRJEB99111&result=read_run&fields=study_accession,sample_accession,secondary_sample_accession,experiment_accession,run_accession,tax_id,scientific_name,instrument_model,library_layout,fastq_ftp,fastq_galaxy,submitted_ftp,submitted_galaxy,sra_ftp,sra_galaxy,cram_index_ftp,cram_index_galaxy&download=txt" | grep -v sample_accession | cut -f 3 | awk '{printf("https://www.ebi.ac.uk/ena/data/view/%s&display=xml\n",$0);}' | while read U; do wget -O - -q "$U" | xsltproc transform.xsl - ; done
ERS1887136|age|61
ERS1887136|age_units|years
ERS1887136|body_habitat|UBERON:feces
ERS1887136|body_product|UBERON:feces
ERS1887136|body_site|UBERON:feces
ERS1887136|collection_site|UCSF
ERS1887136|collection_timestamp|2013-10-08
ERS1887136|day_in_timeseries|Missing: Not provided
ERS1887136|disease_course|RRMS
ERS1887136|disease_state|MS
ERS1887136|dna_extracted|TRUE
ERS1887136|elevation|124
ERS1887136|env_biome|urban biome
ERS1887136|env_feature|human-associated habitat
ERS1887136|env_material|feces
ERS1887136|env_package|human-gut
ERS1887136|flare|No
ERS1887136|geo_loc_name|USA:CA:San Francisco
ERS1887136|height|Missing: Not provided
ERS1887136|height_units|Missing: Not provided
ERS1887136|host_common_name|human
ERS1887136|host scientific name|Homo sapiens
ERS1887136|host_subject_id|34
ERS1887136|host_taxid|9606
ERS1887136|household|H1004
ERS1887136|investigation_type|mimarks-survey
ERS1887136|latitude|37.76
ERS1887136|life_stage|adult
ERS1887136|longitude|-122.46
ERS1887136|physical_specimen_location|UCSF
ERS1887136|physical_specimen_remaining|FALSE
ERS1887136|repeated_sequencing|1
ERS1887136|sample_type|stool
ERS1887136|sequencing_set|2
ERS1887136|sex|female
ERS1887136|sinai_unmarked_rep|Missing: Not provided
ERS1887136|submission_number|1
(...)
| biostars | {"uid": 279582, "view_count": 7957, "vote_count": 2} |
I am aligning contigs against a reference genome and would like to import the output of Nucmer-[MUMmer][1] as a track in GBrowse. MUMmer uses its own idiosyncratic output format (the delta format), and to my surprise I was unable to find any working parser for MUMmer in BioPerl or BioPython.
I found some old requests, like
- http://www.biopython.org/pipermail/biopython-dev/2009-May/005971.html
- https://redmine.open-bio.org/issues/2701
but seemingly nothing of this code ever made it. Does anyone know more about it?
[1]: http://mummer.sourceforge.net/ | The Debian package for MUMmer comes with the `delta2maf` program, which does exactly what it claims to do. If you are on a different system, that program comes from the Mugsy suite. | biostars | {"uid": 185384, "view_count": 7604, "vote_count": 1} |
I am working on the wheat genome. I want to do a comparative genome analysis of 3 varieties of wheat. I have sequence files from Illumina 1.9 in FASTQ format. I checked the quality of the reads with the FastQC tool. The GC content is not in the normal range (47-49%). What is the normal %GC value for RNA-seq reads?
The other question is that the k-mers are also not in the expected range. How can I correct this? For trimming, an adapter sequence file is required, but I don't have this file. Is it possible to fix these two issues? If yes, how? Can I skip the trimming step and go straight to mapping?
In this file all parameter values pass except the k-mers and GC content. Is there any need to trim? If yes, how?
| One expects many failed FastQC modules in RNA-seq datasets. GC content should be similar across samples, but otherwise ignore a "Fail" in FastQC there. Similarly, you expect enriched k-mers. You should not attempt to correct this; it's already correct.
You can trim reads with `Trim Galore!`, which has the default adapters all built in. Having said that, it's quicker to just use STAR for alignment, in which case you don't need to bother trimming adapters. | biostars | {"uid": 288243, "view_count": 2575, "vote_count": 1} |
Hi,
I recently started using genomes from UCSC, but it seems like they only have soft-masked and hard-masked versions. Obviously I do not want masking when aligning RNA-seq, so I just wanted to check whether HISAT2 treats lower-case sequences like upper-case ones, allowing mapping to the entire genome regardless of repetitive sequences.
I could not find this information in the documentation; sorry if I missed it!
Thank you in advance.
| If you take a look at the HISAT2 [source code](https://github.com/infphilo/hisat2/blob/master/ref_read.cpp#L148) (which nobody should expect you to do) it appears that all FASTA characters are converted to their uppercase representation when reading the reference file:
# ref_read.cpp
while(c != -1 && c != '>') {
if(rparms.nsToAs && asc2dnacat[c] >= 2) c = 'A';
uint8_t cat = asc2dnacat[c];
-> int cc = toupper(c);
...
| biostars | {"uid": 260391, "view_count": 3061, "vote_count": 2} |
I have DNase-seq data for mouse embryonic stem cells (mESCs) and mouse embryonic fibroblasts (MEFs). By their nature, stem cells have a more globally accessible chromatin structure than somatic cells. I created a plot comparing the TSS coverage (normalised to RPM) of stem-cell-specific genes and found that MEFs had much higher coverage. Is it possible that because there is less accessible chromatin in the MEFs I am seeing higher coverage simply because it is a less complex library? Are there any normalisation methods which take into account the accessibility of the genome?

| Are the reads you are normalizing to the reads from the entire run, or only the reads within regions around TSSs? You want the former, not the latter, since the latter will implicitly assume that total signal is the same between your two samples (when you know you should get more total signal from mESCs than MEFs). One way to normalize would be to identify 'background'/closed regions and try to normalize signal to that. That is what [DBChIP](http://www.bioconductor.org/packages/release/bioc/html/DBChIP.html) does for ChIP-seq; I'm not sure if differential DNase-seq methods have been developed yet or what modifications they would require vs differential ChIP-seq.
| biostars | {"uid": 152023, "view_count": 3839, "vote_count": 1} |
I thought to put this to the community:
Are there any proven ways of relating lists of genes to particular tissues, organelles, or processes? It cannot be as simple as performing enrichment against Gene Ontology or KEGG, can it? The risk of performing a basic enrichment is that you then 'cherry-pick' the findings.
Another route is the 'tissue-specific' one: utilising datasets like FANTOM5 and GTEx to say that, e.g., these genes are definitively expressed in the CNS or mitochondria.
Kevin | Yeah, enrichment analysis has a bad tendency to be misused pretty frequently for just the reason you describe. If you show all of the tissues/organelles/pathways, it's less of an issue, as it's much more believable if the top 10 hits or whatever are all related. But if you harp on a singular outlier (e.g. golgi bodies when most of the other hits are related to the ECM/cell membrane), it becomes much more apparent that you're "cherry picking". This is especially true given the nature of enrichment analysis where 3-5 genes can be considered a significant enrichment depending on the term and the number of genes attached to it. In general, if you're just trying to show a general association with a given process/pathway/tissue-specificity, I think just showing the top x hits from your enrichment analysis is usually enough to get the point across, even if every single term/tissue isn't exactly what you want. Beyond much more than that, you'd best start tying in some experimental validations.
As for other options, the tissue-specific route is a pretty good one, though more labor and time intensive. As you mention, GTEx has a wide variety, and other resources like the [Human Tissue Atlas](https://www.proteinatlas.org/humanproteome/tissue) and [JENSEN Tissue Database](https://tissues.jensenlab.org/Search) give more experimental protein data. The [Human Cell Atlas](https://www.proteinatlas.org/humanproteome/cell) provides protein-validated cellular localization data that's pretty handy. | biostars | {"uid": 375726, "view_count": 793, "vote_count": 1} |
I have a FASTQ dataset where I'm trying to find the average base quality score. I found this old link that helped somewhat (https://www.biostars.org/p/47751/).
Here is my script (I'm trying to stick to awk, bioawk or python):
bioawk -c fastx '{print ">"$name; print meanqual($qual)}' XXX.cln.fastq.gz
I was expecting a single average output for the entire set; instead I got a huge dump of data with many rows looking like this:
>FCD23GWACXX:2:1101:3183:2494
36.45
I assume that the 36.45 is my average Phred quality score for that sequence? I was hoping for an average score for the entire file. Could such a script be written? | bioawk is to some extent still awk-based (though with -c you get a different read mode, 'per entry' rather than 'per line').
You can easily extend this command line to get the desired result (note that after `paste - -` the per-read mean is in field 2):
bioawk -c fastx '{print ">"$name; print meanqual($qual)}' XXX.cln.fastq.gz | paste - - | awk '{sum += $2} END {print sum/NR}'
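
If you'd rather stay in plain Python, here is a minimal sketch (my own illustration, assuming Phred+33 encoding and standard 4-line FASTQ records; unlike the pipeline above, it computes the per-base mean over the whole file rather than a mean of per-read means):

    import gzip

    total, bases = 0, 0
    with gzip.open("XXX.cln.fastq.gz", "rt") as fq:
        for i, line in enumerate(fq):
            if i % 4 == 3:                      # every 4th line is the quality string
                qual = line.rstrip("\n")
                total += sum(ord(c) - 33 for c in qual)
                bases += len(qual)
    print(total / bases)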
Either should do the trick. | biostars | {"uid": 295932, "view_count": 6620, "vote_count": 1} |
In the paper [1], the following passage talks about the degenerative nature of the "ChIP-seq" dataset. After googling for a while, I still cannot understand the meaning of "degenerative nature". Does it mean: only part of the binding sites are represented by the PWMs, and a large number of binding sites still cannot be detected by ChIP-seq, so scanning based on the PWMs should give more new TFBSs? Is that right?
> Although the PWMs were derived from the corresponding ChIP-seq dataset, due to their **degenerative nature**, we still expected to obtain PWM hits that did not overlap with the ChIP-seq peaks.
[1] Gong, W., et al. (2015). "Inferring dynamic gene regulatory networks in cardiac differentiation through the integration of multi-dimensional data." BMC Bioinformatics 16: 74. | The sentence is probably not referring to the 'degenerative nature' of ChIP-seq but of the motifs from which the PWMs are derived. At least that's my interpretation. I think what the authors want to say is that PWMs obtained from ChIP peaks are expected to have hits even in regions not containing peaks. However, motif hits are expected elsewhere in the genome, besides ChIP peaks, for reasons other than their **degenerative nature**. For a recent review see: Slattery, M., Zhou, T., Yang, L., Dantas Machado, A. C., Gordân, R., & Rohs, R. (2014). Absence of a simple code: how transcription factors read the genome. *Trends in Biochemical Sciences*, 39(9), 381-399. doi:10.1016/j.tibs.2014.07.002 (http://www.cell.com/trends/biochemical-sciences/abstract/S0968-0004(14)00121-2)
| biostars | {"uid": 161999, "view_count": 1764, "vote_count": 3} |
Hello Everyone,
We have recently generated two de novo transcriptome assemblies for two different but related species. These new transcripts seem quite good on the basis of quality measurements, completeness, and alignment with the previously sequenced genome and annotation. We were able to pick up novel genes and previously unannotated transcripts. In order to pick up the alternatively spliced transcripts, we are currently running a PASA (Program to Assemble Spliced Alignments) annotation.
After the PASA annotation, I made a comparison in IGB between the Trinity transcripts, the already existing annotation, and the PASA annotation. I found a case where a Trinity transcript is fully supported by the previously curated annotation as well as the RNA-seq data, but has no PASA annotation. The PASA annotation shows only one fragment; the rest of the parts are not even present in the valid and failed .gff3 files generated by PASA [Figure]. This leads to a few questions for which I don't have any answer:
1. Why is there this difference in the PASA annotation when manual curation and RNA-seq depth evidence are already present? And which one is correct?
2. PASA uses blat and gmap for the alignment of transcripts to the genome, and we have also used blat to align the transcripts to the genome. Then why is there a difference between the two blat alignments?
**Data used for transcriptome assembly** = 100 bp paired-end reads, non-strand-specific
**Attached figure description**:-
- Dark Blue colour = New assembled transcript annotation [this is the annotation generated by aligning assembled transcripts with the genome using blat].
- Orange = existing curated annotation.
- Red = PASA annotation
- Blue colours = valid alignment annotations for blat and gmap, respectively.
- Read Depth = Green for control and Dark red = Knock-out
Figure: (image not available; see [1])
I am very much looking forward to your replies. Any suggestions/views would be very helpful.
Many Thanks
Reema,
PS: I am also posting this on SEQanswers. [1]: http://www.compbio.dundee.ac.uk/user/rsingh/Figure.jpeg | Hi Reema,
[1]: http://www.compbio.dundee.ac.uk/user/rsingh/Figure.jpeg | Hi Reema,
Our senior software developer was able to look over your .gff3 file. The issue has to do with how IGB currently displays .gff3 data. It's something we should be able to fix with a small patch.
The .bed file does display correctly, so if you have a choice in the output, go with the .bed file for now.
Thank you for helping us identify this issue!
Nowlan | biostars | {"uid": 133348, "view_count": 6216, "vote_count": 2} |
Hi all,
I am annotating my new plant genome and am working with MAKER and its very detailed tutorial (http://weatherby.genetics.utah.edu/MAKER/wiki/index.php/MAKER_Tutorial_for_WGS_Assembly_and_Annotation_Winter_School_2018). I have read a few really helpful posts about MAKER here as well, but I still have some questions.
1. SNAP training. How do you actually know that the training is sufficient and you can run your final MAKER run? I have tried running it several times and the number of genes differs every time. The graph is actually somewhat sinusoidal: the number of genes goes up and down... So when do you stop? How do you know that SNAP is trained? Do you wait for a plateau? How many times did you do the training, and why?
2. My genome has an unusually high repeat content. This is why I decided to create its own repeat library with RepeatModeler. The question is: where in the options file do I add this repeat library?
Thanks a lot for your help,
Alex | You can specify a custom repeat library (in FASTA format) with `rmlib` in the Repeat Masking section of the `maker_opts.ctl` file:
#-----Repeat Masking (leave values blank to skip repeat masking)
model_org=all #select a model organism for RepBase masking in RepeatMasker
rmlib=repeatlibrary.fa #provide an organism specific repeat library in fasta format for RepeatMasker
repeat_protein=/opt/maker/data/te_proteins.fasta #provide a fasta file of transposable element proteins
rm_gff= #pre-identified repeat elements from an external GFF3 file
prok_rm=0 #forces MAKER to repeatmask prokaryotes (no reason to change this), 1 = yes, 0 = no
softmask=1 #use soft-masking rather than hard-masking in BLAST (i.e. seg and dust filtering)
This is an example Repeat Masking section
You might also consider running ProtExcluder on the output of RepeatModeler
http://weatherby.genetics.utah.edu/MAKER/wiki/index.php/Repeat_Library_Construction-Basic
# Run blastx then ProtExcluder to exclude known protein sequences from the RepeatModeler library
/usr/bin/blastx -num_threads 75 -db /genetics/elbers/maker/uniprot_sprot.fasta -evalue 1e-6 \
-query repeatlibrary.fa -out repeatlibrary.fa.blast
/opt/ProtExcluder1.1/ProtExcluder.pl -f 50 repeatlibrary.fa.blast repeatlibrary.fa
# output of ProtExcluder is "temp"
# rename temp to whatever you desire
mv temp repeatlibrary.fa2
| biostars | {"uid": 347385, "view_count": 2528, "vote_count": 4} |
Hi,
I used datasets from the ENCODE consortium for my package development. Because the actual peak files are rather big, I can't ship them with my package: the package produced by R CMD build must be less than 4 MB on disk, so I have to use a rather small peak file as example data. In the ENCODE sample datasets, each peak file contains around 100,000 peaks. How can I edit a rather big BED file in order to keep a particular chromosome? Are there any handy tools for editing peak files? Thanks in advance :)
Best regards,
Jurat | If you have GNU Parallel installed, you can use this with [BEDOPS `bedextract`][1] to very quickly split a BED file by chromosome:
$ bedextract --list-chr input.bed | parallel "bedextract {} input.bed > input.{}.bed"
You can then use my [`sample` utility][2] or GNU `shuf` to uniformly sample without replacement:
$ sample -k ${SAMPLE_SIZE} input.chrN.bed > input.chrN.sample.bed
Or:
$ shuf --head-count=${SAMPLE_SIZE} input.chrN.bed > input.chrN.sample.bed
[1]: http://bedops.readthedocs.io/en/latest/content/reference/set-operations/bedextract.html
[2]: https://github.com/alexpreynolds/sample | biostars | {"uid": 225732, "view_count": 1839, "vote_count": 1} |
Currently, when calling variants, you have to call variants on all people simultaneously, which allows rare variant positions to be checked and called in every person without such a variant.
My question is: why wouldn't it make more sense to just determine the variant at every genomic position (3.2 billion) and be done with it? You could merge this dataset with others easily; moreover, there would be no need to do joint variant calling, etc. It is very frustrating to see such an inefficiency becoming the default. | You are correct, but you have conflated two issues - variant calling and warehousing. First, I don't think you will see joint genotyping being routinely done in 5 years - single-sample calling with instrument-specific training sets is where things are headed. Secondly, the bigger groups are moving or have already moved to variant warehousing rather than VCFs - TileDB, Google Variant Transforms, GenomicsDB - and these want gVCFs as input. | biostars | {"uid": 9501659, "view_count": 985, "vote_count": 1} |
Hi,
Suppose I have a BAM file and a VCF file containing variant calling results. I want to extract only the reads (with their mates) that support a variant allele in the VCF. It would be nice to get those reads in BAM format. I tried googling for tools to do this and found VariantBam, but it reports both the reads supporting and those not supporting the variant.
Thanks
| I quickly wrote something: http://lindenb.github.io/jvarkit/Biostar322664.html
Caveat: BAM files must be sorted with Picard **SortSam** (queryname order), variants are loaded into memory, and only SNPs are considered.
$ java -jar picard.jar SortSam I=src/test/resources/S1.bam O=query.bam SO=queryname
$ java -jar dist/biostar322664.jar -V src/test/resources/S1.vcf.gz query.bam
(...)
RF02_358_926_2:0:0_2:1:0_83 83 RF02 857 60 70M = 358 -569 GACGTGAACTATATAATTAAAATGGACAGAAATCTGCCATCAACAGCTAGATATATAAGACCTAATTTAC 2222222222222222222222222222222222222222222222222222222222222222222222 RG:Z:S1 NM:i:3 AS:i:55 XS:i:0
RF02_362_917_2:0:0_2:1:0_6f 147 RF02 848 60 70M = 362 -556 ATAAGGAATCACGTTAACTATATACTTAAAATGGACTGAAATCTGCCATCAACAGCTAGATATATAAGAC 2222222222222222222222222222222222222222222222222222222222222222222222 RG:Z:S1 NM:i:3 AS:i:55 XS:i:0
(...)
it's not fully tested; tell me if something looks wrong.
| biostars | {"uid": 322664, "view_count": 3647, "vote_count": 3} |
Hi,
I have a barplot like this:
![enter image description here][1]
[1]: https://imgur.com/pizrVlq.jpg
I created it with this code:
toplot.N <- data.frame( set=c("FLCN", "All SNPs", "eQTL from 103 genes","All eQTL"),
FDR =c(FDR1FI,FDR2FI,FDR3FI,FDR4FI))
p<-ggplot(toplot.N, aes(x=set, y=FDR, fill=set)) + labs(y = "pi_1")+
geom_bar(stat="identity")+theme(axis.title.x = element_blank())+
geom_hline(yintercept=0.05, linetype="dashed", color = "red")
p
How would I change this so that the order of bars is: "All SNPs","All eQTL","eQTL from 103 genes","FLCN"
Thanks
Ana
| We can change the order of the factor levels before plotting:
toplot.N$set <- factor(toplot.N$set,
levels = c("All SNPs", "All eQTL", "eQTL from 103 genes", "FLCN"))
| biostars | {"uid": 386512, "view_count": 11224, "vote_count": 1} |
Hi,
Can anyone tell me the name of software for performing alignment and constructing a phylogenetic tree of a whole genome? Thanks in advance.
| I am not aware of an easy way to construct reliable species trees based on complete genomes. The general approach that you need to take is to pick one or more genes based on which to base your phylogeny. This could be either 16S rRNA, all ribosomal-protein-coding genes, or other highly conserved genes that are universally present and rarely subject to gene duplications or lateral gene transfer.
Once you have picked the genes, you need to make a multiple sequence alignment(s). You need to do this for each of the genes that you want to use for your phylogeny. For this I would tend to use either [muscle][1] or [mafft][2]. After that I would use [Gblocks][3] to extract the conserved blocks in the alignment(s) in order to not use potentially misaligned parts as the basis for tree building.
If you decided to use multiple genes as the basis for your phylogeny, you now have to make a big decision, namely whether to go for a concatenated alignment approach or a supertree approach. In the first case, you would concatenate all of the multiple alignments and use the resulting big alignment as input for a phylogenetic tree reconstruction program, for example [PhyML][4]. In the second case, you would use such a program to make a separate tree for each of the genes of interest, and subsequently use one of several supertree programs to derive a consensus tree based on these. If you went for just using a single gene as the basis for your tree, you obviously just build a tree for that one gene and you are done.
I hope this helps, although it is certainly very far from a "push of a button" solution.
[1]: http://www.drive5.com/muscle/
[2]: http://mafft.cbrc.jp/alignment/software/
[3]: http://molevol.cmima.csic.es/castresana/Gblocks.html
[4]: http://atgc.lirmm.fr/phyml/ | biostars | {"uid": 1930, "view_count": 23632, "vote_count": 8} |
Why is it so difficult to make things in ggplot2? I like the way it helps with customisation, but the learning curve is steep nevertheless.
Here is my sample data frame `df`:

    gene               HSC       CMP
    ENSG00000158292.6  1.8102636 2.456869
    ENSG00000162496.6  2.6796705 6.203838
    ENSG00000117115.10 3.4509115 5.555739
    ENSG00000159423.14 3.6809277 5.063446
    ENSG00000053372.4  5.7089974 6.851090
If I plot a boxplot I can simply write `boxplot(df[,-1],col=c("red","blue"))`
and I get a boxplot, but when I try the same with ggplot2 I have a difficult time:
ex <- melt(df, id.vars=c("HSC", "CMP"))
ggplot(data = ex,
aes(x = CMP, y = HSC)) +
geom_boxplot()
I get a single boxplot; what I want is a boxplot for each of HSC and CMP, as I got with the simple base R boxplot.
Any help or suggestion would be highly appreciated with my ggplot2 code |
Devon got there before me but as he mentioned the id.vars needs to be set to 'gene'
Here's a boxplot with scatterplot overlay for anyone else arriving here from Google.
I do agree that ggplot can be difficult to work with. Many functions are redundant in the sense that they do the same thing as others but have different names, and conflicts frequently arise. That said, if you can master ggplot, then you can produce very nice graphics for publications.
require(reshape2)
require(ggplot2)
ex <- melt(df, id.vars=c("gene"))
colnames(ex) <- c("gene","group","exprs")
ggplot(data=ex, aes(x=group, y=exprs)) +
geom_boxplot(position=position_dodge(width=0.5), outlier.shape=17, outlier.colour="red", outlier.size=0.1, aes(fill=group)) +
#Choose which colours to use; otherwise, ggplot2 choose automatically
#scale_color_manual(values=c("red3", "white", "blue")) + #for scatter plot dots
scale_fill_manual(values=c("red", "royalblue")) + #for boxplot
#Add the scatter points (treats outliers same as 'inliers')
geom_jitter(position=position_jitter(width=0.3), size=3.0, colour="black") +
#Set the size of the plotting window
theme_bw(base_size=24) +
#Modify various aspects of the plot text and legend
theme(
legend.position="none",
legend.background=element_rect(),
plot.title=element_text(angle=0, size=14, face="bold", vjust=1),
axis.text.x=element_text(angle=45, size=14, face="bold", hjust=1.10),
axis.text.y=element_text(angle=0, size=14, face="bold", vjust=0.5),
axis.title=element_text(size=14, face="bold"),
#Legend
legend.key=element_blank(), #removes the border
legend.key.size=unit(1, "cm"), #Sets overall area/size of the legend
legend.text=element_text(size=12), #Text size
title=element_text(size=12)) + #Title text size
#Change the size of the icons/symbols in the legend
guides(colour=guide_legend(override.aes=list(size=2.5))) +
#Set x- and y-axes labels
xlab("Stem cell class") +
ylab("Expression") +
#ylim(0, 0) +
ggtitle("My plot")
<a href="https://ibb.co/isw956"><img src="https://preview.ibb.co/niQ2Q6/boxscatter.png" alt="boxscatter" border="0"></a>
| biostars | {"uid": 284326, "view_count": 8510, "vote_count": 3} |
Hello!
I currently have a 255k assembled-contig transcriptome (de novo, Trinity) and I want to retrieve the identifiers and sequences, in FASTA format, of the contigs that had hits in BLASTx (around 50%). Is there any way to filter using the BLASTx output, a script and the transcriptome itself? Or should I look into parsing the BLASTx output so that it contains the sequences?
Thanks in advance! | Many people have encountered the problem before. See their posts:
https://www.biostars.org/p/157091/
https://www.biostars.org/p/108335/
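If you ran BLASTx with tabular output, a minimal command-line sketch would be (file names are placeholders; it assumes `-outfmt 6` output where column 1 is the query/contig ID, plus an installed copy of seqtk):

    cut -f1 blastx_hits.tsv | sort -u > hit_ids.txt
    seqtk subseq Trinity.fasta hit_ids.txt > contigs_with_hits.fasta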
There is also a guide here:
http://sfg.stanford.edu/BLAST.html | biostars | {"uid": 192316, "view_count": 1957, "vote_count": 1} |
I have 4 control and 4 treated samples of RNA-seq data. For generating a gene co-expression network, should I include the normalised counts of the control samples in the gene expression matrix? | Yes, you would tend to include the whole group when making a coexpression network. You need larger sample numbers to make a reliable network anyway, so you can only achieve that by including all of your samples. | biostars | {"uid": 322902, "view_count": 957, "vote_count": 1}
I am looking at RNA-seq data, which I have little experience in. I notice that for many genes, there are reliable alignments (i.e. with high mapping quality) to introns. I understand that some of them are due to unannotated transcripts, but in many regions, this does not seem to be the major cause. The intronic read hits do not seem to be purely caused by alignment artifacts, either, because the pattern is tissue specific (though this is not compelling evidence). Another possible explanation is that this observation is due to noisy transcripts (Pickrell et al, 2010), but this seems to be a big effect: for some long genes, there are far more intronic hits than exonic hits.
I guess those who study RNA-seq data must have noticed the intronic hits for years. What is the cause of the large amount of intronic read hits? Is it caused by alignment/library prep artifacts or noisy transcription? Are there papers addressing this? Thanks.
EDIT: my conclusion. I was looking at ERR030882 from Illumina BodyMap (brain). The sample was processed with oligo-dT. I am using the GENCODE exon annotations, including all the pseudogenes, lincRNA and known processed transcripts, totalling ~112Mbp. The initial analysis reveals ~80% of bases mapped to exons. Nonetheless, if I only look at read pairs with insert size larger than 311bp (~10% of the original data), 98.2% of these spliced read pairs are mapped to known exons, suggesting that the vast majority of the intronic and intergenic read pairs are unspliced. It is possible that some unspliced pairs come from unknown single-exon transcripts with an intact polyA tail, but contamination seems the leading cause overall.
| Two simple reasons:
1) Genomic DNA has contaminated the RNA-Seq sample, likely at the mRNA isolation step. This would look like sequence data from both strands of the intron.
2) There is unspliced mRNA in the sample. This would give data for the strand that encodes the gene, but within that intron.
There could be other explanations, but these are two principal ones that come to mind.
| biostars | {"uid": 42890, "view_count": 40098, "vote_count": 43} |
Hi,
For the standalone version, can you answer the following please:
1. How does OMA deal with jobs that do not finish in one submission, i.e., on clusters, jobs can only run for a certain amount of wall time? To 'continue' the job, do I simply resubmit it, and OMA detects where it was from the Cache?
Thank you,
R | In the Cache/AllAll/ directory, all files that are gzipped represent job chunks that have successfully completed and will be used. The files that are not gzipped represent job chunks that were being processed when your job died. You can delete these and they will be restarted the next time you run OMA.
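A minimal shell sketch of that cleanup (the Cache path is an assumption; adjust it to your setup):

    # remove unfinished, non-gzipped job chunks so OMA restarts them cleanly
    find Cache/AllAll/ -type f ! -name '*.gz' -delete
    # then resubmit the same OMA job; completed (*.gz) chunks are reused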
| biostars | {"uid": 164299, "view_count": 2106, "vote_count": 1} |
Hi all,
In the article by Miwa: https://www.ncbi.nlm.nih.gov/pubmed/15816807
they mention a polymorphism, M429V, of the gene ABCG8. I need the rs number of this SNP but I couldn't find it. Could someone provide it?
Thank you in advance. | Using [backlocate][1], it looks like the variant would be in the BED interval 2:44100998-44101000.
#User.Gene AA1 petide.pos.1 AA2 transcript.name transcript.id transcript.strand transcript.AA index0.in.rna wild.codon potential.var.codons base.in.rna chromosome index0.in.genomic exon messages extra.user.data
ABCG8 Met 429 Val ABCG8 ENST00000272286 + M 1284 ATG GTG A 2 44100998 ENST00000272286.Exon9 . .
ABCG8 Met 429 Val ABCG8 ENST00000272286 + M 1285 ATG GTG T 2 44100999 ENST00000272286.Exon9 . .
    ABCG8 Met 429 Val ABCG8 ENST00000272286 + M 1286 ATG GTG G 2 44101000 ENST00000272286.Exon9 . .
This position would be https://www.ncbi.nlm.nih.gov/snp/rs147194762
[1]: http://lindenb.github.io/jvarkit/BackLocate.html | biostars | {"uid": 425422, "view_count": 769, "vote_count": 1} |
Hi everyone,
I searched Biostars for a BAM/SAM-to-FASTA conversion method and found that the EMBOSS and Picard tools can do this (https://www.biostars.org/p/6970/). I am wondering whether there is any Perl script to accomplish this. I should note that my data come from strand-specific sequencing, so the command `samtools view filename.bam | awk '{OFS="\t"; print ">"$1"\n"$10}'` may not be very suitable.
I appreciate any of your solutions. Thanks a lot! | For future record: samtools versions >1.3 can convert bam to fasta directly via `samtools fasta`
samtools --help
Program: samtools (Tools for alignments in the SAM format)
Version: 1.4 (using htslib 1.4)
Usage: samtools <command> [options]
Commands:
[...]
-- File operations
[...]
fastq converts a BAM to a FASTQ
fasta converts a BAM to a FASTA
[...]
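For example (file names are placeholders):

    samtools fasta input.bam > output.fasta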
| biostars | {"uid": 129763, "view_count": 67028, "vote_count": 5} |
Dear all,
Please could you advise: what is a typical range of AF (allele fraction) of somatic mutations in cancer samples for considering them clonal or sub-clonal? Thank you ...
-- bogdan
| In addition to Kevin's answer, [Quantification of subclonal selection in cancer from bulk sequencing data][1] and references therein may be another useful read.
If you want some sensible cutoff for some rough analysis or filtering, I would say 20% AF is about right, but again, as Kevin says it depends...
[1]: https://www.ncbi.nlm.nih.gov/pubmed/29808029 | biostars | {"uid": 339074, "view_count": 1678, "vote_count": 2} |
I am trying to interpret the component graphs from my Trinity run. I rendered a couple of graphs of the components (using c*.graph.out files) from a Trinity assembly and noticed that some components had a structure where the root node (-1 node) is in the middle of a "linear" sequence of nodes.
I uploaded my ipython notebook here with 3 of the graphs I rendered:
http://nbviewer.ipython.org/github/damiankao/trinity-visualization/blob/master/trinity_vis.ipynb
The first component (c445) looks normal to me, with a root node (in red) that connects to one linear sequence and eventually splits into two branches, followed by a merge that could possibly indicate isoforms.
But the second and third component graphs showed the root node in the middle of a "linear" region. Furthermore, for the second component graph, the probable paths were the two "arms" of the root node.
There are no shared k-1-mers between the nodes on either side of the root node in the second and third component. How does the bundling of contigs work in this case? Is it putting these contigs together based on paired-end reads? And what exactly is the -1 root node? | I just got a response from the Trinity mailing list via Brian Haas:
> The -1 is the root node for the de Bruijn graph. A way you can end up with multiple 'arms' in the graph like that is if there are multiple inchworm contigs that are clustered together based on paired-read links (from the bowtie alignment step). This way, they end up having the same 'component' number in the accession string (ie. (`c\d+`) of the `c\d+_g\d+_i\d+` accession naming format. This often happens when transcripts for a given gene are fragmented. | biostars | {"uid": 104656, "view_count": 2498, "vote_count": 2} |
I have a total-RNA TruSeq Illumina stranded library (human). My goal is to find novel (and non-novel) non-coding transcripts in my data (experimental vs control).
After a LOT of Google-fu and asking questions on this website, this is the methodology that I am currently using -
1. Align the FASTQ files with STAR to hg38
2. Assemble transcripts for each sample, merge transcripts from all samples (to get a unified transcriptome that represents all the samples), and estimate transcript abundances - all using Stringtie (protocol paper - https://www.nature.com/articles/nprot.2016.095#procedure)
3. Use tximport to infer integer counts from the Stringtie transcript abundances and export it to DESeq2.
I wanted to know whether this methodology makes sense, and whether there is any step for which a better method exists.
I hope my question is not too broad, given that I do specify the exact pipeline I am employing :)
| Hi, you can follow the pipeline given below for the complete analysis:
1. Align reads to the genome using STAR (already completed).
2. Run StringTie to assemble transcripts with default parameters (which filter out transcripts below 1 FPKM).
3. Merge all per-sample transcriptome assemblies into one merged assembly (stringtie --merge).
4. Extract the protein-coding genes in GTF format from the Ensembl GTF.
5. Download the already known non-coding RNAs (Ensembl + LNCipedia).
6. Merge all known non-coding annotations into one GTF file (e.g. using cuffmerge).
7. Run gffcompare of the protein-coding and known non-coding GTFs against your assembled, merged GTF file.
8. Extract the classes "u", "i" and "x" with a bash script (see the sketch after this list; check the gffcompare class codes for more details).
9. Now extract the MSTRG IDs and pull the corresponding GTF records using grep or something similar.
10. Extract a FASTA file for the novel non-coding transcripts using gffread.
11. Filter out transcripts shorter than 200 nt and single-exon transcripts.
12. Use CPAT and PLEK to predict the coding potential of the novel transcripts from their sequences.
13. Filter the transcripts further using NCBI BLAST against known proteins, homology-based (use only 3 frames).
14. This will be your final list of transcripts.
15. Re-run StringTie with the -e -B options to get counts as FPKM, coverage and TPM.
16. Run DESeq2 for differential expression.
17. Annotate the novel transcripts by looking at nearby genes with the BEDOPS closest-features tool.
18. Further assess significance by computing the correlation between protein-coding genes and your novel transcripts.
19. Do gene enrichment analysis of the nearby and correlated protein-coding genes to understand the mechanism.
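A minimal sketch for the class-code filtering in step 8, assuming gffcompare wrote a `.tmap` file in which column 3 is the class code and column 5 is the query transcript ID (file names are placeholders):

    awk '$3 == "u" || $3 == "i" || $3 == "x"' gffcmp.merged.gtf.tmap \
        | cut -f5 | sort -u > novel_candidate_ids.txt
    grep -Ff novel_candidate_ids.txt merged.gtf > novel_candidates.gtf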
| biostars | {"uid": 408179, "view_count": 2058, "vote_count": 2} |
Is there a tool like bedtools shuffle which I can use to randomly shuffle a BAM file? Or will I have to convert my BAM into a BED and then shuffle it? Thanks.
| One option is to use <a href="http://bedops.readthedocs.org/en/latest/content/reference/file-management/conversion/bam2bed.html">*bam2bed*</a> in conjunction with <a href="https://github.com/alexpreynolds/sample">*sample*</a>:
$ bam2bed < reads.bam > reads.bed
$ sample -k 1234 reads.bed > random_sample.bed
One advantage of `sample` over GNU `shuf` is that `shuf` loads everything into memory before shuffling, while `sample` uses reservoir sampling on line offsets so that the memory overhead is much, much lower. Memory usage can be an issue for very large input files. | biostars | {"uid": 151102, "view_count": 7404, "vote_count": 2} |
I am trying to make a co-occurrence network graph for my presence/absence data of genes per genome, but I am unsure how to go about it. I'm hoping to end up with something like the first image below,
where each gene is linked to another gene if they are both present in the same genomes, with a larger circle possibly used to indicate a higher-frequency gene. I originally tried the *widyr* and *tidygraph* packages, but I suspect my data is not compatible (see second image), as it has the BGCs as rows and the individual genomes as columns. I am examining the presence/absence pattern of each gene pair to determine whether it represents a coincident relationship; basically, whether gene *i* and gene *j* are observed together or apart in the input genomes more often than would be expected by chance.
1) Are there any suggestions on what packages/code I could use that would work with my data set, or how I could adapt my data set to work with these packages?
2) Are there any statistical tests that would be also recommended specifically to assure that there is a coincident or not type relationship?
<a href="https://ibb.co/RHcrqLS"><img src="https://i.ibb.co/s3qxZLP/co-occurrence-network-example.png" alt="co-occurrence-network-example" border="0"></a><br /><a target='_blank' href='https://imgbb.com/'>Network Layout</a><br />
# Example of data set
# rows = genes
# cols = genomes
set.seed(2222)
df <- matrix(sample(c(TRUE, FALSE), 50, replace = TRUE), 5)
colnames(df) <- letters[1:10]
Thanks in advance
| To address question 1, I would suggest to use the R [igraph package][1]. There's an [excellent tutorial here][2].
Starting from a binary matrix A that can be considered as the adjacency matrix of the graph, you can do something like:
library(igraph)
G <- graph_from_adjacency_matrix(A)
plot(G)
Here you have a bipartite graph and your matrix is not square so it is not an adjacency matrix but can be considered an incidence matrix. You can expand it to a full adjacency matrix and use the above or you can do:
G <- graph_from_incidence_matrix(A)
Then you just need to style the graph to your liking.
EDIT: Re-reading the question, I see you mean co-occurrence in question 2. There are a number of R packages from different fields that can do co-occurrence analysis from binary matrices such as:
[EcoSimR][3] from ecology (see the [co-occurrence analysis vignette][4]) or [quanteda][5] from text analysis ([tutorial][6]).
[1]: https://igraph.org/r/
[2]: https://kateto.net/network-visualization
[3]: https://CRAN.R-project.org/package=EcoSimR
[4]: https://cran.r-project.org/web/packages/EcoSimR/vignettes/CoOccurrenceVignette.html
[5]: https://CRAN.R-project.org/package=quanteda
[6]: https://tm4ss.github.io/docs/Tutorial_5_Co-occurrence.html | biostars | {"uid": 422065, "view_count": 4995, "vote_count": 1} |
I am using the following command to align the reads with HISAT2 for my project:
[memona@farooq hisat2-2.1.0]$ ./hisat2 –p 64 --max-intronlen 10000 –x /data/memona/hisat2-2.1.0/hisat_index -1 /data/memona/SRR959590_A_1P.fq -2 /data/memona/SRR959590_A_2P.fq –S /data/memona/hisatresult/hisat_align.sam
and getting this error:
Warning: Output file '64' was specified without -S. This will not work in future HISAT 2 versions. Please use -S instead.
Extra parameter(s) specified: "–x", "/data/memona/hisat2-2.1.0/hisat_index", "–S", "/data/memona/hisatresult/hisat_align.sam"
Note that if <mates> files are specified using -1/-2, a <singles> file cannot
also be specified. Please run bowtie separately for mates and singles.
Error: Encountered internal HISAT2 exception (#1)
Command: /data/memona/hisat2-2.1.0/hisat2-align-s --wrapper basic-0 --max-intronlen 10000 -1 /data/memona/SRR959590_A_1P.fq -2 /data/memona/SRR959590_A_2P.fq –p 64 –x /data/memona/hisat2-2.1.0/hisat_index –S /data/memona/hisatresult/hisat_align.sam
(ERR): hisat2-align exited with value 1
-p 64 is not a file but the number of threads I'm using. Further, I want all the output files in the hisatresult directory, as specified on the command line.
Kindly help me to resolve the issue. | The `-` in `-p` is an em-dash, not a hyphen, in your command. I imagine you copied and pasted from Word or something similar that "autocorrected" it for you. Just retype the command manually with plain hyphens and that should resolve the error. | biostars | {"uid": 310776, "view_count": 2649, "vote_count": 1}
Hi all,
I have a few questions about the `MaSuRCA` assembly program and I would greatly appreciate it if anyone could clarify these doubts:
1. Do I have to run `trimmomatic` or any other filtering/trimming tool before I run the assembly? I am asking this because the program already has an inbuilt error-correction step.
2. How do I specify `overlapping libraries`? The `sr-config` file asks to provide an input library as 1) two_letter_prefix 2) mean 3) stdev 4) fastq 1 5) fastq 2, but a `negative mean` is used to specify `paired-end` reads that are `outies (RF)`, and it won't accept `0` as the insert size. Also, is the mean the insert size or the total size `(read length * 2 + insert size)`?
3. Do I have to combine similar library types into a single file before putting them in the `sr-config` file, or is it OK to specify each library separately?
Thanks very much,
| 1. No need to run trimmomatic; the inbuilt error-correction step has adapter trimming, error correction, etc.
2. Read [the quick start guide][1] e.g.
> PE = aa 180 20 /data/fwd_reads.fastq /data/rev_reads.fastq
The 'mean' and 'stdev' parameters are the library insert average length and standard deviation.
> "JUMP are assumed to be outties <---.--->. If there are any jump libraries that are innies, such as longjump, specify them as JUMP and specify NEGATIVE mean."
So negative mean is for longjump library, not for paired-end reads.
3. No, you can specify each library on its own line, e.g. add a new line; suppose a 250 bp PE library with 30 bp stdev:
PE = bb 250 30 /data/fwd_reads_1.fastq /data/rev_reads_1.fastq
Good luck!
[1]: ftp://ftp.genome.umd.edu/pub/MaSuRCA/MaSuRCA_QuickStartGuide.pdf | biostars | {"uid": 106274, "view_count": 4371, "vote_count": 1} |
I have an EMBL file ready for ENA submission, but it is failing validation on the translation table. The table should be 5 and it is set in the file, but the validator seems to expect translation table 1. I can't see what addition would make it pick that up. I added XXX so you don't know the species. The error seems to point at something on the first line; any ideas?
ID XXX; XXX; circular; genomic DNA; XXX; XXX; 15307 BP.
XX
AC XXX;
XX
AC * _Mitochondria
XX
PR Project:PRJEB11111;
XX
DT 01-May-2020 (Rel. 133, Created)
XX
DE XXX
XX
KW .
XX
OS XXX
XX
RN [1]
RP 1-15307
RG XXX
RT ;
RL Submitted (01-MAY-2020) to the INSDC.
XX
FH Key Location/Qualifiers
FH
FT source 1..15307
FT /mol_type="genomic DNA"
FT /organelle="mitochondrion"
FT /organism="XXX"
FT gene complement(12989..13705)
FT /locus_tag="cox2"
FT mRNA complement(12989..13705)
FT /locus_tag="cox2"
FT CDS complement(12989..13705)
FT /locus_tag="cox2"
FT /transl_table=5
Error:
ERROR: organism classified. Submitted /transl_table "5" conflicts with translation table "1" recruited from taxonomy. Please check submitted /transl_table, /organelle and /organism for agreement. Contact us if necessary. [ line: 1 of MT3.embl.gz]
| I think it is inconsistent: in the flat file the sequence is called Mitochondria (cf. the `AC *` line; the name is extracted from the FASTA file by EMBLmyGFF3), while in your chromosome list file it is called Mt. You should replace Mt with Mitochondria, then try to validate again. | biostars | {"uid": 435707, "view_count": 1234, "vote_count": 1}
A Manhattan plot is created in GWAS studies to visualize SNP positions and their logarithmic p-values.
Can somebody please show, with the help of a simple numerical example for 2 or 3 chromosomes, how this plot is made?
**Update**:
Actually, I am confused about how the data is processed so that each SNP has a different p-value. Take, for the sake of argument, the following scenario:
Chromosomes=23
Controls=500
Subjects=800
SNPs=20000 per chromosome, that is, each chromosome has 20000 SNPs
Now my confusion is how those dots are made. How is it possible that, at a given position on the horizontal axis representing a particular SNP on a specific chromosome, we could have multiple dots going up or down? That is my confusion.
regards | You seem to have some issues reading what was said:
- Each chromosome has multiple SNPs
- For each SNP you perform one statistical test
- Each SNP has one position and one p-value
- Each SNP has only one dot in a Manhattan plot
- It looks like there are lots of dots stacked on each other, but that is just because you are testing many, many SNPs. | biostars | {"uid": 324643, "view_count": 12915, "vote_count": 2}
Hi friends
I plotted this box-whisker plot for TCGA HTSeq data in R.
I want to have half of them as red and half as blue (control vs treatment groups). Or is there any better way to make the boxplot?
How can I do that?
I just used simple code:
    data <- read.table("mydata.csv", header=TRUE, row.names = 1, sep=",")
library(DESeq2)
mat <- as.matrix(data)
log <- rlog(mat)
boxplot(log)
my plot:
https://ibb.co/xXsHtC0 | maybe this one? A dodged box plot?
I used "ToothGrowth" public data for this demonstration:
data("ToothGrowth")
library(ggplot2)
ggplot(ToothGrowth, aes(x = as.factor(dose), y = len)) +
geom_boxplot(aes(fill = supp), position = position_dodge(0.9)) +
scale_fill_manual(values = c("#09E359", "#E31009")) +
theme_bw()
![enter image description here][1]
Also, a dodged violin plot would be a very informative way to present expression data. Personally, I would rather use a split violin plot for expression data when I have many cases to plot. The code for creating a split violin plot in Python can be found on my [GitHub,][2] which I used to plot top DE genes from RNA-seq.
[1]: https://raw.githubusercontent.com/hamidghaedi/file_sharing/main/box_demo.JPG
[2]: https://github.com/hamidghaedi/RNA-seq-differential-expression/blob/master/violin_plot_DE.ipynb
[3]: https://i.stack.imgur.com/tYQWX.png | biostars | {"uid": 469559, "view_count": 7350, "vote_count": 2} |
I have a network of miRNAs and genes that correlate in my experiment (based on expression levels). The edges link each miRNA to the gene that it correlates with. None of these miRNAs and genes have any known direct interactions according several databases of experimentally validated gene miRNA interactions. It is possible that these correlations exist because of genes which were not measured in my experiment E.g. miR-1 interacts with geneX (not measured in my experiment) which interacts with geneY(measured) leading to a correlation between miR-1 and gene-Y in my network.
I want to create a new network that tries to link (e.g. through gene-gene interactions, perhaps even transcription factors) each miRNA with its correlating gene.
I thought cytoscape may be able to help but I have never used it before and am struggling to find a plug-in which does what I want.
I have a list of experimentally validated gene miRNA interactions and I presume it is possible to get a list of gene-gene interactions e.g. from an interactome database. Does anyone have any ideas on how can I integrate this information with my network so that I can link each miRNA with its correlating gene?
Many thanks | Greetings, and welcome to Cytoscape. I think there are a couple of plugins that might be useful. What I would suggest is to use your list of genes and then probe one of the databases to explore how those genes might be related. For example, you could use the stringApp, add your list of genes, but then also add a number of high confidence edges that might bring in interacting genes that links them. If you want to view your miRNA interactions in the same network, Cytoscape has an excellent "merge networks" tool that will allow you to merge your original network with the STRING network. You might also want to look at the new IntActApp as an alternative using the IntAct database in lieu of the STRING database.
-- scooter | biostars | {"uid": 477516, "view_count": 781, "vote_count": 1} |
I am preparing admixture analyses involving the Neanderthal and Denisovan genomes, and I have downloaded extended (= generally non-vcftools-friendly) VCF files (http://cdna.eva.mpg.de/neandertal/altai/AltaiNeandertal/VCF/ and http://cdna.eva.mpg.de/denisova/VCF/hg19_1000g/). The files were originally made with GATK, but the authors greatly modified the files; thus, they aren't standard VCFs.
I've got them down to just the sites that I have modern data for, so they are only a fraction as massive as the original files.
A few hundred sites have been marked LowQual and have been ejected using `grep -v`; however, the LowQual flag reflects more than just genotype quality (i.e., LowQual sites have GQs as high as 59, while for the Neanderthal 8,767 of 511,858 non-LowQual sites have GQ<60).
I was wondering what would be a good GQ cut-off to use for the non-LowQual sites.
Also, any suggestions on how to filter them? (Remember I cannot use VCFtools or GATK or anything similar due to the non-standard formatting)
Here is an example line, GQ is in the subsequent code block
1 5031561 rs7518523 A G 909.02 . AC=2;AF=1.00;AN=2;DP=24;Dels=0.00;FS=0.000;HRun=0;HaplotypeScore=0.9665;MQ=39.43;MQ0=0;QD=37.88;1000gALT=G;AF1000g=0.34;AFR_AF=0.70;AMR_AF=0.28;ASN_AF=0.17;EUR_AF=0.27;UR;TS=HPGOMC;TSseq=A,G,A,G,G,A;CAnc=A;GAnc=A;OAnc=G;bSC=987;mSC=0.000;pSC=0.007;GRP=-2.27;Map20=1 GT:DP:GQ:PL:A:C:G:T:IR 1/1:24:72.23:942,72,0:0,0:0,0:10,14:0,0:0
```
GT 1/1
DP 24
GQ 72.23
PL 942,72,0
A 0,0
C 0,0
G 10,14
T 0,0
IR 0
``` | Since I didn't get any suggestions, I sought out how this dataset has been used recently in the literature.
I found sources by Qin and Stoneking (dx.doi.org/10.1093/molbev/msv141) and Lazaridis et al. (dx.doi.org/10.1038/nature13673) (the former cites the latter), which suggest filtering out the LowQual sites as well as sites with GQ < 30 and QUAL < 50.
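A minimal awk sketch of that filter, assuming GQ is the third field of the `GT:DP:GQ:PL:...` FORMAT string shown in the example line (file names are placeholders):

    awk -F'\t' '/^#/ {print; next}
         $7 != "LowQual" {split($10, f, ":"); if (f[3] + 0 >= 30 && $6 + 0 >= 50) print}' \
         in.vcf > filtered.vcf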
Just thought to pass this along for anyone else using these datasets. | biostars | {"uid": 148245, "view_count": 10705, "vote_count": 5} |
Hi,
I am interested in building an updated annotation of the mouse olfactory receptor genes. They have a tissue specific expression obviously, with various 3' isoforms, so the current annotation is not very good. Hand annotation using IGV to visualize my data is not an option with this gene family ;)
I have mapped RNA-Seq paired-end reads from olfactory epithelia to the mm10 genome with STAR, and then I used a new tool called [IsoSCM][1] to obtain a more precise annotation (as a GTF file). It is a de novo approach.
My problem is that I can't easily (it seems) merge this new annotation with the 'official' one (mm10.gtf), which has less precise 5' and 3' (as seen in IGV) but includes Ensembl Gene IDs which I would very much like to have. I am including examples below of both syntaxes.
I would like to know if there is a tool that allows to add the Ensembl Gene IDs to a de novo transcriptome according to the features' position in the genome. I think someone already had to answer this type of question....
Otherwise, I can code simple stuff such as sorting features by their position in the genome and finding reasonable rules for merging (a feature from the new genome which has the same start or end position in mm10.gtf could inherit the mm10.gtf gene ID). Determining the feature type (5', CDS, exon, 3') is a bit more complex but possible.
Here is the mm10.gtf annotation:
```
[cyril@synapse ~]$ head -20 mm10.gtf
chr1 mm10_ensGene stop_codon 134202951 134202953 0.000000 - . gene_id "ENSMUST00000086465"; transcript_id "ENSMUST00000086465";
chr1 mm10_ensGene CDS 134202954 134203590 0.000000 - 1 gene_id "ENSMUST00000086465"; transcript_id "ENSMUST00000086465";
chr1 mm10_ensGene exon 134199223 134203590 0.000000 - . gene_id "ENSMUST00000086465"; transcript_id "ENSMUST00000086465";
chr1 mm10_ensGene CDS 134234015 134234355 0.000000 - 0 gene_id "ENSMUST00000086465"; transcript_id "ENSMUST00000086465";
chr1 mm10_ensGene start_codon 134234353 134234355 0.000000 - . gene_id "ENSMUST00000086465"; transcript_id "ENSMUST00000086465";
chr1 mm10_ensGene exon 134234015 134234412 0.000000 - . gene_id "ENSMUST00000086465"; transcript_id "ENSMUST00000086465";
chr1 mm10_ensGene exon 134235228 134235431 0.000000 - . gene_id "ENSMUST00000086465"; transcript_id "ENSMUST00000086465";
```
And the one I obtain with IsoSCM:
```
[cyril@synapse ~]$ head -20 isoSCM.gtf
chr4_JH584293_random sol exon 10822 11047 0.0 + . locus_id "locus.00000000";type "5p_exon"
chr4_JH584293_random sol exon 11191 11251 0.0 + . locus_id "locus.00000000";type "internal_exon"
chr4_JH584293_random sol exon 12077 12246 0.0 + . locus_id "locus.00000000";type "internal_exon"
chr4_JH584293_random sol exon 12375 12489 0.0 + . locus_id "locus.00000000";type "internal_exon"
chr4_JH584293_random sol exon 12643 12675 0.0 + . locus_id "locus.00000000";type "internal_exon"
chr4_JH584293_random sol exon 12754 12920 0.0 + . locus_id "locus.00000000";type "internal_exon"
chr4_JH584293_random sol exon 13477 13640 0.0 + . locus_id "locus.00000000";type "internal_exon"
chr4_JH584293_random sol exon 14817 14989 0.0 + . locus_id "locus.00000000";type "3p_exon"
chr4_JH584293_random sol exon 15884 15933 0.0 + . locus_id "locus.00000001";type "5p_exon"
```
Thanks for your help!
[1]: https://github.com/shenkers/isoscm | Use [cuffcompare][1] to compare your de novo gtf with mm10 gtf. Then look for [class code][2] j, which I think stands for novel isoform. You can play around.
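A minimal sketch of that comparison (file names are placeholders, the exact `.tmap` output name may differ depending on your cuffcompare version, and it assumes column 3 of the `.tmap` file holds the class code):

    cuffcompare -r mm10.gtf -o cuffcmp isoSCM.gtf
    # keep novel isoforms of known genes (class code "j")
    awk '$3 == "j"' cuffcmp.isoSCM.gtf.tmap > novel_isoforms.tmap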
[1]: http://cole-trapnell-lab.github.io/cufflinks/cuffcompare/
[2]: http://cole-trapnell-lab.github.io/cufflinks/cuffcompare/ | biostars | {"uid": 135612, "view_count": 2273, "vote_count": 1} |
How can I remove duplicated variants from a VCF file? I googled and searched the Biostars history but did not find a way to do it. | Use [vcfuniq][1] or [bcftools][2] norm (with the `-d` option) to remove duplicates.
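For example, a bcftools sketch (file names are placeholders; `-d all` collapses all duplicate records):

    bcftools norm -d all in.vcf.gz -Oz -o dedup.vcf.gz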
[1]: https://github.com/vcflib/vcflib#vcfuniq
[2]: https://samtools.github.io/bcftools/bcftools.html | biostars | {"uid": 264584, "view_count": 15837, "vote_count": 5} |
Hi.
I have a txt file containing mutiple fasta sequences, and I'd like to replace Nth character in each sequence.
If I want to replace 10th character of each sequence in the example below, which linux command can I use?
> sample1
TATCCGATGCGACGTGCAGCG
> sample2
CTAGCGTAGTGTCGACTGCAT
> sample3
GACTGACGTGACGTAGTCGAC
Thank you!
| Assuming one-line sequences in your FASTA-formatted input, replace `X` with your character of interest:
$ awk '{ \
if ($0 ~ /^>/) { \
print $0; \
} \
else { \
printf("%s%c%s\n", substr($0, 1, 9), "X", substr($0, 11, length($0) - 10)); \
} \
}' in.fa > out.fa
If you have multi-line sequences in your FASTA input, you need to render them into single-line sequences before using this one-liner. See the following Biostars question (and answer) for a solution: https://www.biostars.org/p/9262/ | biostars | {"uid": 186433, "view_count": 6848, "vote_count": 1} |
I have a BED file and want to calculate the average PhastCons score over its regions; is there a tool that can do this? I tried using cmotifs, but it submits jobs and at times does not produce timely results.
| You could use <a href="http://bedops.readthedocs.org/en/latest/content/reference/statistics/bedmap.html#score-operations">*bedmap --mean*</a> here, to map PhastCon signal onto your <a href="http://bedops.readthedocs.org/en/latest/content/reference/file-management/sorting/sort-bed.html">sorted</a> regions.
First, grab PhastCon data from UCSC and convert WIG to a compressed form of BED called <a href="http://bedops.readthedocs.org/en/latest/content/reference/file-management/compression/starch.html">Starch</a>, using a tool called <a href="http://bedops.readthedocs.org/en/latest/content/reference/file-management/conversion/wig2bed.html">*wig2starch*</a>:
```
$ rootPath="http://hgdownload.cse.ucsc.edu/goldenpath/hg19/phastCons100way/hg19.100way.phastCons"
$ for i in `seq 1 22` X Y; \
do \
echo "getting PhastCon data for chromosome chr${i}..."; \
wigFn="chr${i}.phastCons100way.wigFix"; \
url="${rootPath}/${wigFn}.gz"; \
wget -qO- ${url} | gunzip -c - | wig2starch - > ${wigFn}.starch; \
done
```
We could have used `wig2bed` instead of `wig2starch`, but note that the final set of PhastCons files could take between 10-20 GB compressed, and uncompressed BED files could require about 10x more disk space. It takes a little longer to make Starch files, but it will take less disk space. I'd recommend some kind of compression, if you plan to do repeated map calculations.
Second, map PhastCon signal files to your <a href="http://bedops.readthedocs.org/en/latest/content/reference/file-management/sorting/sort-bed.html">sorted</a> regions file by chromosome:
```
$ for i in `seq 1 22` X Y; \
do \
echo "mapping signal for chromosome chr${i}..."; \
phastConFn="chr${i}.phastCons100way.wigFix.starch"; \
bedmap --echo --mean --chrom chr${i} regions.bed ${phastConFn} > regions_with_avg_phastcons.${i}.bed; \
done
```
Third (optionally), union the per-chromosome results to a single file, using <a href="http://bedops.readthedocs.org/en/latest/content/reference/set-operations/bedops.html#everything-u-everything">*bedops --everything*</a>:
$ bedops --everything regions_with_avg_phastcons.*.bed > regions_with_avg_phastcons.bed
Doing the *bedmap* step with `--chrom` allows parallelization with grid job schedulers or GNU Parallel, splitting the tasks by chromosome. This could reduce calculation time down to that required for the largest chromosome of regions.
One thing I would note is that the average value calculated is over scores for mapped elements. If you have gaps of scores along your mapped region, those gaps would not contribute to the calculation of the mean signal.
I do not think you can interpolate PhastCon conservation signal between a gap, but it has been a while since I have worked with this particular dataset - its authors could probably confirm or negate this.
It might generally be useful to use *bedmap* with the <a href="http://bedops.readthedocs.org/en/latest/content/reference/statistics/bedmap.html#score-operations">*--skip-unmapped*</a> option to filter out results for unmapped regions. | biostars | {"uid": 129981, "view_count": 5150, "vote_count": 3} |
Is anyone aware of genome-wide CRISPR/Cas9 screens where gene expression is quantified? Meaning, after targeting a gene, what is the average gene expression change for all genes (microarray, RNAseq, etc.)?
Thanks in advance | There are some papers, done in specific cell types and disease models:
https://www.nature.com/articles/nature23477?_ga=2.135383345.140789132.1527763165-716992680.1527763162
https://www.cell.com/cell-reports/pdf/S2211-1247(18)30387-5.pdf
https://www.ncbi.nlm.nih.gov/pubmed/29038160
| biostars | {"uid": 310063, "view_count": 1414, "vote_count": 1} |
Hello,
Is there a script somewhere that I can use to convert the `FRG` files back to paired-end `fastq` files?
If not, what info can I use to find the pairs? First 2 records for my `FRG` file are as follows:
```
{FRG
act:A
acc:aa202
rnd:1
sta:G
lib:aa
pla:0
loc:0
src:
.
seq:
GCACATGTGCATAGATTTCACGGACCTCAATCGCGCCTGTCCGAAAGATGACTTCCCTTTGCCCCGGATAGATCAACTGGTCGATTCTACGGCCGGCTGCGAAGCGATGAGTTTCTTGGATGCTTATTCTGGCTACCACCAGATTAGCAT
.
qlt:
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
.
hps:
.
clr:0,150
}
{FRG
act:A
acc:aa203
rnd:1
sta:G
lib:aa
pla:0
loc:0
src:
.
seq:
TTGCCCTTGAAAGGTCTTTGTCATGAGTCTCTGGAAGGTTGCTCCGGCGTTTTTCAAGCCGAATGGCATTCGGACGTAGTAGAAAGTACCCACAGGAGTGATGAAACTAGTCTTCTCTTCATCTTCCGGGATCATGCTAATCTGGTGGTA
.
qlt:
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
.
hps:
.
clr:0,150
}
```
Thanks for any help!
EDIT: I can easily parse the FRG file to FASTQ, but I still don't know how to find the pairs. If any of you have experience working with FRG files/Celera Assembler, please share any insights. Thanks. | There is a conversion script in Celera:
`celera/src/AS_GPK/frg-to-fastq.pl`
Judging from the output file names, it is PE aware.

    my $outAname = "$outPrefix.1.fastq";
    my $outBname = "$outPrefix.2.fastq";
    my $outIname = "$outPrefix.i.fastq";
    my $outSname = "$outPrefix.s.fastq";

There is also one in the AMOS package (not sure if this does PE, though):
`amos/bin/frg2fastq`
| biostars | {"uid": 134670, "view_count": 2116, "vote_count": 1} |
> Several very commonly used annotation databases for human genomes are additionally provided below. In general, users can use -downdb -webfrom annovar in ANNOVAR directly to download these databases. To view of full list of databases (and their size and last changed date) **prepared by ANNOVAR developers, use avdblist keyword in -downdb operation**
Ok so I tried this
perl annotate_variation.pl -downdb avdblist
And this
perl annotate_variation.pl -downdb -webfrom annovar avdblist
And this
perl annotate_variation.pl -downdb -webfrom avdblist
And other variations and I get no list
Why is it so hard to give an example of the exact command to use?
| Thanks to natasha.sernova I scrolled down a little bit and found the answer (and someone just as frustrated as me)
This works:
$ perl annotate_variation.pl -webfrom annovar -downdb avdblist -buildver hg19 .
![ANNOVAR needs better documentation][1]
[1]: http://i.imgur.com/Y5OWXoA.png | biostars | {"uid": 196985, "view_count": 10445, "vote_count": 4} |
I am planning to use [bam2fastx][1] tool for converting the BAM files from PacBio sequencing to fastq. (I prefer a command line tool over SMRT link). However, I cannot find any manual for this tool.
The help says that there is an option called `--split-barcodes` for demultiplexing. However, there is no information about the format to provide the barcodes in. My sequencing data contains ~120 asymmetrically PCR-barcoded samples. How do I split the data into multiple files using this tool?
[1]: https://github.com/pacificbiosciences/bam2fastx/ | Documentation for the commands can be found in the official [SMRTanalysis guide][1]. I don't see any way to specify index sequences so it must be recognizing official PacBio indexes. There is [one more page here][2].
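A hedged usage sketch, using only the tool and flag named in the question (the per-barcode output naming is an assumption):

    bam2fastq -o mysample --split-barcodes mysample.subreads.bam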
[1]: https://www.pacb.com/wp-content/uploads/SMRT_Tools_Reference_Guide_v600.pdf
[2]: https://devhub.io/repos/PacificBiosciences-bam2fastx | biostars | {"uid": 387385, "view_count": 2269, "vote_count": 1} |
<p>Hi everyone,</p>
<p>I have a number of Human Heart RNASeq samples. I have generated a mpileup file on each of them using a bed file containing a list of positions for which I want the mpileup to be made. I have included the -B option to turn off the BAQ computation in mpileup.</p>
<p>Single sample mpileup:</p>
<pre><code>samtools mpileup -uf hg19.fa -B -l snps_chr17.bed sample1.bam > sample1.mpileup
</code></pre>
<p>Multiple sample mpileup (all samples):</p>
<pre><code>samtools mpileup -uf hg19.fa -B -l snps_chr17.bed -b bam_files_list.bam > all_samples.mpileup
</code></pre>
<p>Now, I want to output the <strong><em>frequency of nucleotides A, T, G and C at each of those positions</em></strong>. There is a perl script <a href='https://github.com/riverlee/pileup2base/blob/master/pileup2baseindel.pl'>pileup2baseindel</a> which generates <em>exactly</em> the output that I want, only that it is incorrect (inconsistent with what is seen in IGV). This program takes a single/multi-sample <a href='https://github.com/riverlee/pileup2base/blob/master/test.mpileup'>mpileup file</a> and produces the following <a href='https://github.com/riverlee/pileup2base/blob/master/sample1.txt'>output</a>. But like I said, the output is incorrect.</p>
<p>I have searched a lot but can't seem to find any program that does the same. Does anyone know any tool/script that would give me such an output with/without taking the mpileup file as an input. Suggestions on alternatives/options are welcomed.</p>
<p>[UPDATE] There was a bug in the perl script and I have updated the code, it works fine now. I have contacted the author and as soon as the author responds, I will send it to him so that he can update it. I would still like to get suggestions and try out other programs. </p>
<p>[UPDATE] Thanks everyone who helped me with this problem. Each of your suggestions worked but I accepted <a href='/u/117/'>Chris Miller</a>'s answer recommending the use of <a href='https://github.com/genome/bam-readcount'>bam-readcount</a> because of its easy of use and the elaborate output that it generates. It directly takes, as input, a <a href='http://samtools.sourceforge.net/SAM1.pdf'>bam</a> file as well as a list of positions in bed format for which you want the nucleotide frequency. So you don't need to create an intermediate mpileup file - as in <a href='https://github.com/riverlee/pileup2base'>pileup2baseindel</a> or run the program against positions that you are not interested in - as in <a href='https://github.com/alimanfoo/pysamstats'>pysamstats</a>.</p>
<p>Thanks!</p>
| Another option is just to run [bam-readcount][1], giving it the bams and specified positions. It's more straightforward than creating an mpileup and parsing that ugly string.
[1]: https://github.com/genome/bam-readcount
| biostars | {"uid": 95700, "view_count": 28778, "vote_count": 9} |
We are trying to search for variants in dbSNP but running into difficulties where dbSNP uses a human genome build other than hg19. We have a large number of variants we are examining, and this process needs to be fully automated. We have the chromosome, hg19 base position, and allele. Is there a way to determine the genome build used in dbSNP from results, we are using the entrez eutils docsum.
For example, searching '1[Chromosome] AND 69511[Base Position] AND snp[SNP_CLASS]' gives us this docsum for rs75062661.
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=snp&id=75062661
However, searching '1[Chromosome] AND 906272[Base Position] AND snp[SNP_CLASS]' doesn't give us rs28507236, but using the GRCH38 position '1[Chromosome] AND 970893[Base Position] AND snp[SNP_CLASS]' does
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=snp&id=28507236
The CHRPOS field in the first link is related to the GRCh37 genome build, while the CHRPOS field in the second link is related to GRCh38. Is there a way to tell which genome build that field is referring to? We need to search by genomic coordinate, however, right now the only way we can think of to search dbSNP by genomic coordinate it to lift over the coordinates to GRCh38 from GRCH37 and try them both. Even then, we wouldn't know for sure which genome build the resulting SNP is part of. Is there a way to search by build in dbSNP? Or even just to determine which genome build the CHRPOS field is referring to in the dbSNP results? | dbSNP is using GRCh38 positions in both cases. However, in your first example, the coordinates for GRCh37 and GRCh38 happen to be identical. You can see that by looking on [this page][1]. Note that the HGVS notation for both the NT accessions (which represent chromosome 1 versions) are identical.
[1]: http://www.ncbi.nlm.nih.gov/snp/?term=1%5BChromosome%5D%20AND%2069511%5BBase%20Position%5D%20AND%20snp%5BSNP_CLASS%5D | biostars | {"uid": 133203, "view_count": 1988, "vote_count": 2} |
I am trying to create a transcript database (TxDb) using the GenomicFeatures package. I downloaded a GFF3 file from NCBI.
Code:
orf <- GenomicFeatures::makeTxDbFromGFF("orf.gff3",format="auto")
I get the following output:
Orf
TxDb object:
Db type: TxDb
Supporting package: GenomicFeatures
Data source: mouse.gff3
Organism: NA
Taxonomy ID: NA
miRBase build ID: NA
Genome: NA
transcript_nrow: 0
exon_nrow: 0
cds_nrow: 0
Db created by: GenomicFeatures package from Bioconductor
Creation time: 2019-05-29 22:32:09 -0500 (Wed, 29 May 2019)
GenomicFeatures version at creation time: 1.32.2
RSQLite version at creation time: 2.1.1
DBSCHEMAVERSION: 1.2
Link to the genome : https://www.ncbi.nlm.nih.gov/nuccore/AY386263.1
As you can see, there are no genes in this database. Can anyone help with this, please?
| Hi [lokraj2003][1],
You can add `gene` features to the gff3 file that you downloaded from [https://www.ncbi.nlm.nih.gov/nuccore/AY386263.1][2],
then re-load it again using the same function.
Like this (it can be any method you like to re-format the original gff3, here for example `awk` with focus on creating `gene` lines, adding `Parent` to each `CDS`, and I leave other detail parsing to you):
```
$ cat orf.gff3 \
| awk 'BEGIN{FS=OFS="\t"} $3!="CDS"{print $0} $3=="CDS"{GENE=$0; gsub("\t0\t", "\t\.\t", GENE); gsub("CDS", "gene", GENE); gsub("cds", "gene", GENE); gsub(";product=.*", "", GENE); print GENE; ID=$9; gsub(".*;protein_id=", "", ID); print $0 ";Parent=gene-" ID}' \
> orf_re.gff3
$ head orf_re.gff3
##sequence-region AY386263.1 1 137241
##species https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=10258
AY386263.1 Genbank region 1 137241 . + . ID=AY386263.1:1..137241;Dbxref=taxon:10258;country=USA: Iowa;gbkey=Src;genome=genomic;isolate=ORFA;isolation-source=nasal secretions of a lamb at the Iowa Ram Test Station during an outbreak in 1982%2C then passaged in ovine fetal turbinate cells;mol_type=genomic DNA;strain=OV-IA82
AY386263.1 Genbank gene 2409 2858 . - . ID=gene-AAR98099.1;Dbxref=NCBI_GP:AAR98099.1;Name=AAR98099.1;gbkey=gene
AY386263.1 Genbank CDS 2409 2858 . - 0 ID=cds-AAR98099.1;Dbxref=NCBI_GP:AAR98099.1;Name=AAR98099.1;gbkey=CDS;product=ORF001 hypothetical protein;protein_id=AAR98099.1;Parent=gene-AAR98099.1
```
And using that gff3 you'll get:
```
> orf <- GenomicFeatures::makeTxDbFromGFF("orf_re.gff3", format = "auto")
Import genomic features from the file as a GRanges object ... OK
Prepare the 'metadata' data frame ... OK
Make the TxDb object ... OK
> orf
TxDb object:
# Db type: TxDb
# Supporting package: GenomicFeatures
# Data source: orf_re.gff3
# Organism: NA
# Taxonomy ID: NA
# miRBase build ID: NA
# Genome: NA
# transcript_nrow: 130
# exon_nrow: 130
# cds_nrow: 130
# Db created by: GenomicFeatures package from Bioconductor
# Creation time: 2019-05-30 16:57:03 +0200 (Thu, 30 May 2019)
# GenomicFeatures version at creation time: 1.34.8
# RSQLite version at creation time: 2.1.1
# DBSCHEMAVERSION: 1.2
```
Hope it helps. :-)
[1]: https://www.biostars.org/u/46248/
[2]: https://www.ncbi.nlm.nih.gov/nuccore/AY386263.1 | biostars | {"uid": 382165, "view_count": 1422, "vote_count": 1} |
Hi all,
I have been trying to use Mutect to compare results from Varscan and other tools. To run MuTect, pre-processing from GATK and Picard tools is necessary.
1\. **Mapped reads using BWA.**
2\. **Convert to sorted BAM using PICARD**
```
java -Xmx4g \
-Djava.io.tmpdir=/tmp \
-jar SortSam.jar \
SO=coordinate \
INPUT=Trimmed_ERR361938_trimmed_bwa.sam \
OUTPUT=Test.bam \
VALIDATION_STRINGENCY=LENIENT \
CREATE_INDEX=true
```
3\. **Mark Duplicates using PICARD**
```
java -Xmx4g \
-Djava.io.tmpdir=/tmp \
-jar picard-tools-1.119/MarkDuplicates.jar \
INPUT=Test.bam \
OUTPUT=input.marked.bam \
METRICS_FILE=metrics.txt \
VALIDATION_STRINGENCY=LENIENT \
CREATE_INDEX=true
```
4\. **Realign along INDEL using GATK**
```
java -Xmx4g \
-jar GenomeAnalysisTK.jar \
-T RealignerTargetCreator \
-R /steno-internal/chirag/data/indexGenome/hg19/bwa/hg19.fa \
-o input.bam.list \
-I input.marked.bam
```
**NOW I GET ERROR**
```
##### ERROR
##### ERROR MESSAGE: SAM/BAM file input.marked.bam is malformed: SAM file doesn't have any read groups defined in the header. The GATK no longer supports SAM files without read groups
##### ERROR
```
There is this script which should fix this, but I am not sure of some of the parameter used here,
java -jar ~/unixTools/picard-tools-1.119/AddOrReplaceReadGroups.jar
These parameters need to be used
- RGLB=String
- LB=String Read Group Library Required.
- RGPU=String
- PU=String Read Group platform unit (eg. run barcode) Required.
- RGSM=String
- SM=String Read Group sample name Required.
How do I get information on these parameters, as I am analyzing many published reads.
Are there some other ways to fix this step.
Thanks in advance! | Hi,
SAM/BAM files need a bit of preprocessing before Picard and GATK can work on them. I'm unable to recall all the steps off the top of my head, but this should help you solve the first problem by adding read groups:
https://www.biostars.org/p/47487/
Remember, dummy read group names will suffice to bypass this error.
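A typical invocation might look like this (a sketch with dummy read-group values; RGID, RGLB, RGPL, RGPU and RGSM are the standard AddOrReplaceReadGroups options):

    java -jar ~/unixTools/picard-tools-1.119/AddOrReplaceReadGroups.jar \
        INPUT=input.marked.bam \
        OUTPUT=input.marked.rg.bam \
        RGID=1 RGLB=lib1 RGPL=illumina RGPU=unit1 RGSM=sample1 \
        CREATE_INDEX=true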
| biostars | {"uid": 115819, "view_count": 24229, "vote_count": 6} |
Hi
I am trying to convert snp coordinates to rsIDs by using the flag `--update-name`, I am not sure if I am using it correctly, my command is:
plink1.9 --bfile files --update-name snps_coordinates_rs_IDs [1] [2] --out Test_update_name
I am using [1] [2] to designate the columns.
The snp_coordinates file looks like this; the first column has coordinates and the second has rsIDs:
1:48899:G:A rs974664996
1:114531:G:C rs918310516
1:133360:T:C rs974682326
1:541281:GGGCCAACCACAGGACA:G rs752188509
1:564782:T:C rs879089824
1:636747:C:T rs974667799
My .bim file has coordinates which I want to convert to rsIDs. I am getting this output
> Error: Invalid --update-name column number.
Am I missing column names?
Please assist
| 1. Remove the square brackets from your command line. See the "Interpreting our flag usage summaries" section under https://www.cog-genomics.org/plink/1.9/general_usage .
2. Swap the order of the two numbers: "2 1" instead of "1 2". The [--update-name usage summary][1] is "--update-name <filename> [new ID col. number] [old ID col.] [skip]"; the new-ID column number comes BEFORE the old-ID column number. Or you could just remove the column numbers entirely, since "2 1" corresponds to the default behavior (this is mentioned in the first paragraph of the --update-name documentation).
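Putting both fixes together, the corrected command would be:

    plink1.9 --bfile files --update-name snps_coordinates_rs_IDs 2 1 --out Test_update_name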
[1]: https://www.cog-genomics.org/plink/1.9/data#update_map | biostars | {"uid": 9468537, "view_count": 2740, "vote_count": 1} |
Hello,
When I download release 95 repeats soft masked file "ftp://ftp.ensembl.org/pub/release-95/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna_sm.toplevel.fa.gz" it is ~1 Gb, however if I decompress it, it becomes 54 Gb.
This is curious as the same soft repeat masked mouse genome decompressed is 2.7 Gb. Any idea why the human genome is so large and if there is any tool to reformat the fasta file into a smaller one?
Thanks, A | There are huge numbers of haplotypes in the human GRCh38. In the toplevel DNA sequence files these are represented as whole chromosomes, where most of the sequence is Ns and only the haplotype region is actual sequence. This means that the compressed files aren't huge, as they just need to encode how many Ns there are, but decompressed they are massive, with all the Ns represented. | biostars | {"uid": 396993, "view_count": 1508, "vote_count": 2}
I have a list of some 200 plant common names that I need to find the respective scientific names for. I could just google every one of them but if possible I want to use a software package that can automate the task (for the sake of reproducibility and my own sanity).
I thought of writing a python script that queries google (for spelling correction and context dependent search) and then grab the supplied Wikipedia URL and then try to find the scientific name with 'beautifulsoup' but turns out that slaps you with captcha pretty fast.
Is there a package/tool/method that comes closest to being an industry standard? | Using [**EntrezDirect**][1]:
$ esearch -db taxonomy -query "thale cress" | esummary | xtract -pattern DocumentSummary -element CommonName,ScientificName
thale cress Arabidopsis thaliana
$ esearch -db taxonomy -query "zebra fish" | esummary | xtract -pattern DocumentSummary -element CommonName,ScientificName
zebra fish Girella zebra
zebrafish Danio rerio
[1]: http://bit.ly/entrez-direct
| biostars | {"uid": 9518170, "view_count": 441, "vote_count": 2} |
Hi all,
I have to run the same program for multiple files.
I would like to know if I can do that using a loop?
Many thanks, friends!
Files: SRR10345445_1.fastq SRR10345445_2.fastq SRR10345446_1.fastq SRR10345446_2.fastq
TrimmomaticPE -threads 30 \
SRR10345445_1.fastq SRR10345445_2.fastq \
SRR10345445_1_PE.fastq SRR10345445_1_SR.fastq SRR10345445_2_PE.fastq SRR10345445_2_SR.fastq \
HEADCROP:12 ILLUMINACLIP:TruSeq3-PE-2.fa:2:30:10:2:keepBothReads \
SLIDINGWINDOW:4:20 LEADING:5 TRAILING:5 MINLEN:40
TrimmomaticPE -threads 30 \
SRR10345446_1.fastq SRR10345446_2.fastq \
SRR10345446_1_PE.fastq SRR10345446_1_SR.fastq SRR10345446_2_PE.fastq SRR10345446_2_SR.fastq \
HEADCROP:12 ILLUMINACLIP:TruSeq3-PE-2.fa:2:30:10:2:keepBothReads \
SLIDINGWINDOW:4:20 LEADING:5 TRAILING:5 MINLEN:40
| You just need to loop over one filename, pair the second file to it, and then feed these into your command. There will be more practical examples elsewhere on the forum, but the general form will be this:
for R1 in /path/to/files/*_1.fastq ; do
R2="${R1%_1.fastq}_2.fastq"
    TrimmomaticPE -threads 30 "$R1" "$R2" \
        "${R1%.*}_PE.fastq" "${R1%.*}_SR.fastq" "${R2%.*}_PE.fastq" "${R2%.*}_SR.fastq" \
        HEADCROP:12 ILLUMINACLIP:TruSeq3-PE-2.fa:2:30:10:2:keepBothReads \
        SLIDINGWINDOW:4:20 LEADING:5 TRAILING:5 MINLEN:40
done
Haven't tested this, so there might be syntax errors. | biostars | {"uid": 9480761, "view_count": 1395, "vote_count": 1} |
Hello
I wish to submit my custom gene sequences of different cultivars of Oryza sativa. I was using BankIt to submit my sequences with the help of this tutorial at [Bankit][1].
Bankit accepts gene start and end locations and also the corresponding feature names of known features.
I want to submit sequences of genes for which I have found variations in the form of SNPs. How do I submit these entire gene sequences using Bankit?
[1]: https://www.ncbi.nlm.nih.gov/WebSub/html/help/feature-table.html | See the following links:
Complete Genome Submission Guide
https://www.ncbi.nlm.nih.gov/genbank/genomesubmit/
How to: Submit sequence data to NCBI
https://www.ncbi.nlm.nih.gov/guide/howto/submit-sequence-data/
Submitting Sequences using Specific NCBI Submission Tools
https://www.ncbi.nlm.nih.gov/books/NBK53709/
Formatting your Submission
https://www.ncbi.nlm.nih.gov/books/NBK53702/
| biostars | {"uid": 219356, "view_count": 1543, "vote_count": 1} |
Hi,
I am trying to visualise my overlapped ChIP-seq peak regions, which I analysed with the Homer mergePeaks function. I have got one venn info file and a "result" file. I would like to use that venn info file and visualise it, but when I looked for visualisation libraries or programs, I did not find a method that meets my expectations.
The primary problem is that my data is big (relatively :) ). I have 19 datasets in one condition group and 9 datasets in the healthy one. I have read in a Biostars thread that making a Venn diagram for more than 3 datasets would not be smart.
I am trying to find overlapped regions of transcription factors; that's why I want to know which transcription factor sites are most common.
I am mediocre at Python coding and R.
Please don't link me to the https://www.biostars.org/p/77362/ and https://www.biostars.org/p/66091/ threads (I have already read them 0192308 times), and if you think Homer is not the best tool for finding the overlaps, please feel free to advise others. (Yes, I do know monkseq)
Thank you very much for your help.
Best regards,
Tunc | Another alternative is a "binary heatmap". It scales better than a Venn diagram, though combinatorial binding of 19 factors is going to be complex no matter what way you look at it.
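If you want to roll your own in R first, here is a minimal sketch — `peak_sets` is assumed to be a named list of GRanges objects, one per dataset, that you have already imported:

    library(GenomicRanges)
    library(pheatmap)
    # union of all peak intervals across the 28 datasets
    all_peaks <- reduce(unlist(GRangesList(peak_sets)))
    # binary matrix: rows = merged peak regions, columns = datasets
    mat <- sapply(peak_sets, function(gr) as.integer(overlapsAny(all_peaks, gr)))
    pheatmap(mat, show_rownames = FALSE)

With many thousands of peaks, row clustering can be slow; setting `cluster_rows = FALSE` and sorting the matrix by row sums is a cheaper alternative.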
Here's a working example: https://gist.github.com/daler/07eb1a95f1e4639f22bd | biostars | {"uid": 164054, "view_count": 6777, "vote_count": 2} |
I have a bam file that I would like sorted karyotypically (not lexicographically), but my contigs are not matching the reference file provided by GATK. Getting the reference file that was originally used for alignment and realigning my sample are unavailable options.
My reference uses "1,2,3,...,X,Y,MT" notation but my bam file uses "chr1,chr2,chr3,...chrX,chrY,chrM" notation. Is there a way to remove the chr prefix and change chrM to MT in my bam file? Can I get by with just revising the header without messing with the reads in the bam file? Thank you for your help!
| If all your bams are this way, it's probably easier to change your reference to match the bams. (That's just changing a few lines of a fasta and rebuilding an index or two).
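For example, a minimal sketch — this assumes plain ">1 ... >MT" headers with no trailing descriptions, so check your fasta first:

    sed -e 's/^>/>chr/' -e 's/^>chrMT/>chrM/' ref.fa > ref_chr.fa
    samtools faidx ref_chr.fa
    samtools dict ref_chr.fa > ref_chr.dict    # or Picard CreateSequenceDictionary; GATK wants a dictionary too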
If you do end up needing to reformat your bams, there is good advice in previous threads:
- https://www.biostars.org/p/119295/
- https://www.biostars.org/p/13462/ | biostars | {"uid": 174420, "view_count": 6510, "vote_count": 2} |
Hi,
I have a table like this;
6 6:29002062:rs7755402 0 29002062 G A
6 6:29004091:rs9468471 0 29004091 A G
6 6:29006250:rs9468473 0 29006250 A G
6 6:29006493:rs9461499 0 29006493 C A
6 6:29006844:rs7743837 0 29006844 G A
I want to remove everything before "rs" in the second column. I know I can use egrep,
egrep -o "(rs\S+)" file | cut -d " " -f 2 > newfile
However, then I`m left with only the rs string;
rs7755402
rs9468471
rs9468473
rs9461499
rs7743837
rs6919044
rs41424052
rs6924824
rs6456886
rs6456887
But I actually want the other columns too.
Any help is greatly appreciated! | sed 's/\t\b.*:rs/\trs/' file > newfile
Explanation:
    s/ = substitution
    \t\b = look for a tab, then a word boundary
    .*:rs/\trs = .* matches any characters, so you match everything up to :rs and replace it with a tab and rs (i.e. \trs) | biostars | {"uid": 244515, "view_count": 2479, "vote_count": 1}
Dear all,
I have paired-end RNA-seq data (Illumina) from a parasite and I would like to do de novo assembly with Trinity. I have the reference genome of my host organism, so I can map my data to the host and remove contamination from the fastq files.
My plan is:
 1. Map my paired-end FASTQ files with bwa/bowtie/novoalign to the host reference genome
2. Remove hits from fastq files (cleaning contaminations)
3. For the rest of FASTQ files use TRINITY for De-Novo transcript assembly
My question is:
May I use aligners (bwa etc.) to align the raw fastq files to host DNA and then remove the contaminants from the fastq files? I ask because my data are from an RNA-seq project, NOT DNA.
How can I remove the sequences that align to host DNA from the raw fastq files (the cleaning process)?
Or if you have any other advice on how to prepare data for the Trinity pipeline, I will appreciate it.
Thank you so much for any comment and sharing your experience. | I agree with Devon
I would do it in following ways:
 1. Map the fastq files with tophat2
 2. Convert the unmapped.bam file to fastq (bamToFastq) and remap with tophat2, this time providing the junctions you got from the first run with the option `-j` (if you have replicates, merge the junctions first).
 3. The unmapped.bam from this run can be converted to fastq.
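A rough sketch of those three steps — index and file names are placeholders, `bed_to_juncs` ships with TopHat to convert the first run's junctions.bed, and the unmapped BAM may need name-sorting before pair extraction:

    tophat2 -o run1 host_index reads_1.fastq reads_2.fastq
    bed_to_juncs < run1/junctions.bed > run1.juncs
    tophat2 -o run2 -j run1.juncs host_index reads_1.fastq reads_2.fastq
    bedtools bamtofastq -i run2/unmapped.bam -fq parasite_1.fastq -fq2 parasite_2.fastq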
I think this fastq contains the reads you are looking for. | biostars | {"uid": 120756, "view_count": 5773, "vote_count": 1}
Hi guys,
I am trying to use the Dedupe.sh tool from BBTools developed by Brian Bushnell (https://www.biostars.org/u/14684/) to find overlaps in my *de novo* assembled contigs. The input file is a big fasta file containing all contigs generated from different assemblers (MIRA, ABySS, SOAPdenovo, SPAdes), with the sequence headers modified to contain just a sequential counter like >1, >2, ... and so on. I am using the overlap graph generated by Dedupe.sh to merge the contigs in the overlaps using a Perl script.
The problem is that if I try to run Dedupe.sh on the merged assemblies file, I get the following errors at runtime. I made a few edits to the program output pasted below for clarity:
1. Only showing first 20-something bases of sequences
2. Separated errors in two different blocks, as the first block of error repeated itself for quite a while
Here is the output of the Dedupe.sh run:
Command:
~/software/bbmap/dedupe.sh in=../../a6_best_assembly_v1.fasta out=a6_best_assembly_v1_DR.fasta outd=a6_best_assembly_v1_duplicates.fasta pattern=a6_best_assembly_v1_cluster% dot=a6_best_assembly_v1_overlapgraph.dot arc am ac fo c mcs=2 pc=t minidentity=95 mo=100 pto=t ngn=f sq=f ple=t overwrite=t
Output:
java -Djava.library.path=/home/pgbsilva/software/bbmap/jni/ -ea -Xmx68519m -Xms68519m -cp /home/pgbsilva/software/bbmap/current/ jgi.Dedupe in=../../a6_best_assembly_v1.fasta out=a6_best_assembly_v1_DR.fasta outd=a6_best_assembly_v1_duplicates.fasta
pattern=a6_best_assembly_v1_cluster% dot=a6_best_assembly_v1_overlapgraph.dot arc am ac fo c mcs=2 pc=t minidentity=95 mo=100 pto=t ngn=f sq=f ple=t overwrite=t
Executing jgi.Dedupe [in=../../a6_best_assembly_v1.fasta, out=a6_best_assembly_v1_DR.fasta, outd=a6_best_assembly_v1_duplicates.fasta,
pattern=a6_best_assembly_v1_cluster%, dot=a6_best_assembly_v1_overlapgraph.dot, arc, am, ac, fo, c, mcs=2, pc=t, minidentity=95, mo=100, pto=t, ngn=f, sq=f, ple=t, overwrite=t]
Initial:
Memory: max=68853m, free=68135m, used=718m
Found 0 duplicates.
Finished exact matches. Time: 0.197 seconds.
Memory: max=68853m, free=58795m, used=10058m
Found 0 contained sequences.
Finished containment. Time: 0.168 seconds.
Memory: max=68853m, free=63082m, used=5771m
Removed 0 invalid entries.
Finished invalid removal. Time: 0.002 seconds.
Memory: max=68853m, free=63082m, used=5771m
**First error block**
Exception in thread "Thread-72" java.lang.AssertionError:
type=FORWARD, len=567, subs=299, edits=0 (175032, length=12932, start1=12365, stop1=12931) (229892, length=1324, start2=0, stop2=566)
>1
ATTCCTTGAGTTTTTCTTCCAACCATTTTACTAACATTTTAATTTCTGCTCTCCTATTTTCAGTTATTGAGATTTTTTGCCTGGTGTTTCTGTTTATGGCCTTCTAATTTTGTTCCATGAATGCAATAAGTTCTCCT **(sequence continues)**
>2
TTTCTTCACAGAATTGGAAAAAACTACTTTAAAGTTCATATGGAACCAAAAAAGAGCCCGCATTGCCAAGTCAATCCTAAGCCAAAAGAACAAAGCTGGAGGCATCACACTACCTGACTTCAAACTATACTACAAGG **(sequence continues)**
at jgi.Dedupe$Overlap.<init>(Dedupe.java:3989)
at jgi.Dedupe$Unit.makeOverlapReverse(Dedupe.java:5295)
at jgi.Dedupe$Unit.makeOverlap(Dedupe.java:4856)
at jgi.Dedupe$HashThread.findOverlaps(Dedupe.java:3410)
at jgi.Dedupe$HashThread.processRead(Dedupe.java:3274)
at jgi.Dedupe$HashThread.processReadOuter(Dedupe.java:3152)
at jgi.Dedupe$HashThread.run(Dedupe.java:3085)
Exception in thread "Thread-63" java.lang.AssertionError:
type=FORWARD, len=714, subs=491, edits=0 (175050, length=11792, start1=11078, stop1=11791) (210803, length=1497, start2=0, stop2=713)
>1
ACCAGCATATACAGAGACCAAATCAATACAAATAGCAAAGTTAGTATAACATGCTAGTTTTGAAATGATTAATATGTAATATGTTTTTGGAAATTATTAGTTGATTTATTCCTTTACTCACAAATATTTATTCAGT **(sequence continues)**
>2
CCGTTTTAGGCGCAACAGACCAACCAGACCAGAATGGATTCATCCATACTAAGTGCCATGTAATCAAACTGACTCATACGGACCAGTTTTCCAAAAAACCTGAAGTAGAATGAAAGGAATATAAAGGAAGATACAG **(sequence continues)**
at jgi.Dedupe$Overlap.<init>(Dedupe.java:3989)
at jgi.Dedupe$Unit.makeOverlapReverse(Dedupe.java:5295)
at jgi.Dedupe$Unit.makeOverlap(Dedupe.java:4856)
at jgi.Dedupe$HashThread.findOverlaps(Dedupe.java:3410)
at jgi.Dedupe$HashThread.processRead(Dedupe.java:3274)
at jgi.Dedupe$HashThread.processReadOuter(Dedupe.java:3152)
at jgi.Dedupe$HashThread.run(Dedupe.java:3085)
Exception in thread "Thread-53" java.lang.AssertionError:
type=FORWARD, len=263, subs=28, edits=0 (221070, length=1188, start1=0, stop1=262) (223564, length=1064, start2=801, stop2=1063)
>1
AAGTGGAGCTGGCTTGGAAAGAATAGGGAAACGGGTGCAACTCCCGTGCGGTTACGCCGCTGTAACAAGTGACGAAGGCTTTATCTATAGCCACTGTCGCACCTGCCTCTTATACACAGCTGACGCTGCCGACGA **(sequence continues)**
>2
AGAAGACCTGCTTTTTCATGCTCATCACTCCCATGTAAATCGGGAGACTGTCTCGCTAAAGACAGGATGCTGTCTTTTATACACAGCTGACGCTGCCGACGACGCCTCTAGTTTATTCGTCTGTTGTCGCTCACA **(sequence continues)**
at jgi.Dedupe$Overlap.<init>(Dedupe.java:3989)
at jgi.Dedupe$Unit.makeOverlapReverse(Dedupe.java:5295)
at jgi.Dedupe$Unit.makeOverlap(Dedupe.java:4856)
at jgi.Dedupe$HashThread.findOverlaps(Dedupe.java:3410)
at jgi.Dedupe$HashThread.processRead(Dedupe.java:3274)
at jgi.Dedupe$HashThread.processReadOuter(Dedupe.java:3152)
at jgi.Dedupe$HashThread.run(Dedupe.java:3085)
Exception in thread "Thread-69" java.lang.AssertionError:
type=FORWARDRC, len=912, subs=77, edits=0 (188, length=1866, start1=954, stop1=1865) (211768, length=1176, start2=1175, stop2=264)
>1
TGTCGCTACCGCGATAGGGCAAAAAGTCCTAAAGTTTAGTAAGTGTTTGCTTGGAACACTTTTTCATGAGCCCTTTAATAAGGGGCAGTGGAAGAAATTCATGTAGAGCTCCTTTTTTTTGCATCAATAGGCAA **(sequence continues)**
>2
ATAAAGCGAAAGAGAGCGCTTTTTTTTCAGCGTCTAAATTCTTCGTATGATTTCCCTCACATAGTTAGCGAAATCCATTTCCAATGCACTGCATTTGGAAATTTTTTGCCTATTGATGCAAAAAAAAGGAGCTC **(sequence continues)**
at jgi.Dedupe$Overlap.<init>(Dedupe.java:3989)
at jgi.Dedupe$Unit.makeOverlapForwardRC(Dedupe.java:5229)
at jgi.Dedupe$Unit.makeOverlap(Dedupe.java:4851)
at jgi.Dedupe$HashThread.findOverlaps(Dedupe.java:3410)
at jgi.Dedupe$HashThread.processRead(Dedupe.java:3274)
at jgi.Dedupe$HashThread.processReadOuter(Dedupe.java:3152)
at jgi.Dedupe$HashThread.run(Dedupe.java:3085)
Exception in thread "Thread-59" java.lang.AssertionError:
type=FORWARD, len=623, subs=363, edits=0 (174723, length=12656, start1=12033, stop1=12655) (20470, length=9394, start2=0, stop2=622)
>1
GTTTGCGAAACTAAAGACAAAAGAAATGCCATAAAAATATCTTCTAGATGACAAAGTTGTGCCTTTTGGAGTTGCATTTTAACACATCGAAACCACTACACACATACACGGGAACTGCACAATTGGGTAAATA **(sequence continues)**
>2
ATGTGCAAGTTTGTTACATGGGTATACATGTGCTATGTTGGTTTGTTGCACCTATTAACTCATCACTTACATTGGGTATTTCTCCTAATGCTATCCTTCCTCCAGCCCCCCACCCCATGACAGGCCCCAGTGT **(sequence continues)**
at jgi.Dedupe$Overlap.<init>(Dedupe.java:3989)
at jgi.Dedupe$Unit.makeOverlapReverse(Dedupe.java:5295)
at jgi.Dedupe$Unit.makeOverlap(Dedupe.java:4856)
at jgi.Dedupe$HashThread.findOverlaps(Dedupe.java:3410)
at jgi.Dedupe$HashThread.processRead(Dedupe.java:3274)
at jgi.Dedupe$HashThread.processReadOuter(Dedupe.java:3152)
at jgi.Dedupe$HashThread.run(Dedupe.java:3085)
Exception in thread "Thread-57" java.lang.AssertionError:
type=FORWARD, len=210, subs=15, edits=0 (66196, length=2881, start1=2671, stop1=2880) (67503, length=1435, start2=0, stop2=209)
>1
GCTCTTTGGAATGCCAGACGCAGTGGCATGTACCTGTAGACCCACCTACAAGGTGGGCTGTTGTGGGCTGTAGTGTGCTGTGATTTTGCCTGTGATACCCACTGCCCTCCAGCCTAGGCAACATAGTGAGA **(sequence continues)**
>2
GCCACAGTTTCTTAATCCAGTCTATCACTGATGGACATTTGGGTTGGTTCCAAGTCTTTGCTATTGTGAATAGTGCCGCAATAAACATACGTGTGCATGTGTCTTTATAGCAGCATGATTTATAATCCTTT **(sequence continues)**
at jgi.Dedupe$Overlap.<init>(Dedupe.java:3989)
at jgi.Dedupe$Unit.makeOverlapReverse(Dedupe.java:5295)
at jgi.Dedupe$Unit.makeOverlap(Dedupe.java:4856)
at jgi.Dedupe$HashThread.findOverlaps(Dedupe.java:3410)
at jgi.Dedupe$HashThread.processRead(Dedupe.java:3274)
at jgi.Dedupe$HashThread.processReadOuter(Dedupe.java:3152)
at jgi.Dedupe$HashThread.run(Dedupe.java:3085)
This set of errors continues for a few more times. The program continues running and outputs this:
Found 241 overlaps.
Finished finding overlaps. Time: 0.109 seconds.
Memory: max=68853m, free=62391m, used=6462m
Overlaps: 540, length: 1025664
Counted overlaps. Time: 0.003 seconds.
Memory: max=68853m, free=62391m, used=6462m
Clusters: 8104 (114 of at least size 2)
Size Range Clusters Reads Bases
1 7990 7990 24600288
2 55 110 562883
3-4 42 145 665875
5-8 15 91 432193
9-16 1 15 48450
17-32 1 17 128562
Largest: 17
Finished making clusters. Time: 0.012 seconds.
Memory: max=68853m, free=62391m, used=6462m
Removed 0 invalid entries.
Finished invalid removal. Time: 0.001 seconds.
Memory: max=68853m, free=62391m, used=6462m
**Second set of errors:**
Exception in thread "Thread-79" java.lang.AssertionError
at jgi.Dedupe$Overlap.flip(Dedupe.java:4126)
at jgi.Dedupe$ClusterThread.canonicize(Dedupe.java:2858)
at jgi.Dedupe$ClusterThread.canonicizeNeighbors(Dedupe.java:2723)
at jgi.Dedupe$ClusterThread.canonicizeClusterBreadthFirst(Dedupe.java:2660)
at jgi.Dedupe$ClusterThread.run(Dedupe.java:2080)
Exception in thread "Thread-91" java.lang.AssertionError
at jgi.Dedupe$Overlap.flip(Dedupe.java:4126)
at jgi.Dedupe$ClusterThread.canonicize(Dedupe.java:2858)
at jgi.Dedupe$ClusterThread.canonicizeNeighbors(Dedupe.java:2723)
at jgi.Dedupe$ClusterThread.canonicizeClusterBreadthFirst(Dedupe.java:2660)
at jgi.Dedupe$ClusterThread.run(Dedupe.java:2080)
Exception in thread "Thread-90" Exception in thread "Thread-78" java.lang.AssertionError
at jgi.Dedupe$Overlap.flip(Dedupe.java:4126)
at jgi.Dedupe$ClusterThread.canonicize(Dedupe.java:2858)
at jgi.Dedupe$ClusterThread.canonicizeNeighbors(Dedupe.java:2723)
at jgi.Dedupe$ClusterThread.canonicizeClusterBreadthFirst(Dedupe.java:2660)
at jgi.Dedupe$ClusterThread.run(Dedupe.java:2080)
java.lang.AssertionError
at jgi.Dedupe$Overlap.flip(Dedupe.java:4126)
at jgi.Dedupe$ClusterThread.canonicize(Dedupe.java:2858)
at jgi.Dedupe$ClusterThread.canonicizeNeighbors(Dedupe.java:2723)
at jgi.Dedupe$ClusterThread.canonicizeClusterBreadthFirst(Dedupe.java:2660)
at jgi.Dedupe$ClusterThread.run(Dedupe.java:2080)
Exception in thread "Thread-98" java.lang.AssertionError
at jgi.Dedupe$ClusterThread.canonicize(Dedupe.java:2864)
at jgi.Dedupe$ClusterThread.canonicizeNeighbors(Dedupe.java:2723)
at jgi.Dedupe$ClusterThread.canonicizeClusterBreadthFirst(Dedupe.java:2660)
at jgi.Dedupe$ClusterThread.run(Dedupe.java:2080)
Exception in thread "Thread-81" java.lang.AssertionError
at jgi.Dedupe$Overlap.flip(Dedupe.java:4126)
at jgi.Dedupe$ClusterThread.canonicize(Dedupe.java:2858)
at jgi.Dedupe$ClusterThread.canonicizeNeighbors(Dedupe.java:2723)
at jgi.Dedupe$ClusterThread.canonicizeClusterBreadthFirst(Dedupe.java:2660)
at jgi.Dedupe$ClusterThread.run(Dedupe.java:2080)
The second block of errors also repeats a few more times.
Found 6 multijoins (4178 bases).
Experienced 0 multijoin removal failures.
Flipped 118 reads and 162 overlaps.
Found 0 clusters (0 overlaps) with contradictory orientation cycles.
Found 1 clusters (2 overlaps) with remaining cycles.
After processing clusters:
Clusters: 8019 (29 of at least size 2)
Size Range Clusters Reads Bases
1 7990 7990 24600288
2 29 79 377770
Largest: 7
Finished processing. Time: 0.016 seconds.
Memory: max=68853m, free=61728m, used=7125m
Input: 8368 reads 26438251 bases.
Duplicates: 0 reads (0.00%) 0 bases (0.00%) 0 collisions.
Containments: 0 reads (0.00%) 0 bases (0.00%) 121936 collisions.
Overlaps: 241 reads (2.88%) 512832 bases (1.94%) 1492 collisions.
Result: 8368 reads (100.00%) 26438251 bases (100.00%)
Printed output. Time: 0.161 seconds.
Memory: max=68853m, free=61673m, used=7180m
Time: 0.685 seconds.
Reads Processed: 8368 12.22k reads/sec
Bases Processed: 26438k 38.60m bases/sec
I am not experienced in Java, so I have no idea what the errors mean, but it seems that for some of the threads something happened during overlap detection between two contigs and that is causing the error. For the second set, the error happens during the canonization step of Dedupe.sh. I don't know if it happens because of the first one or if it is an unrelated incident. Also, every time I run the program with this dataset, it generates a different number of clusters.
Have any of you encountered these errors while running Dedupe.sh? Any help would be fantastic! | Thanks for the detailed error report. I have a couple of ideas about this, but it might be a little difficult to test. You're getting different (and incorrect) output each time because it's crashing; it yields deterministic output when there are no crashes.
First - can you try running with the flag "-da" added and see what happens?
Second - it's likely that "minidentity=95" is the cause of the instability. Can you try running without that flag and see what happens? If so, there may be some ways to work around it. | biostars | {"uid": 231422, "view_count": 2769, "vote_count": 2} |
Hi All,
I'm trying to speed up a BLASTP call as part of a bigger RBH workflow to detect orthologs, and I'm in the process of testing different approaches with a 100K sequence database and a 1107 sequence query (real database will be 350K, queries will differ in size). My function splits the query into separate files and processes them separately using python multiprocessing (Process or Pool), and I'm also looking at combining that with BLASTP's `-num_threads` parameter to increase speed further. I'm very knew to parallelisation in general (both threading and multiprocessing) but am keen to know more!
I posted these questions together as they all relate to the same code, are generally continuing on from each other and I can accept multiple answers (unlike stack overflow), but please let me know if you'd suggest posting them separately. I'd be grateful for any answers, doesn't have to cover every point in one go :D
***Question 1*** - I'm running local BLASTP and was wondering about the details of the `num_threads` parameter. Am I right in thinking that (as the name suggests) it spreads the workload across multiple threads on a single CPU, so is kind of analogous to Python's `threading` module (as opposed to the `multiprocessing` module, which spreads tasks across separate CPUs)? I've heard BLAST only goes above 1 thread when it 'needs to', but I'm not clear on what this actually means - what determines if it needs more threads? Does it depend on the input query size? Are threads split at a specific step in the BLASTP program?
***Question 2*** - To check I have the right ideas conceptually: if the above is correct, would I be right to say that BLAST itself is I/O bound (hence the threading), which makes sense as it's processing thousands of sequences in the query, so lots of input? But if you want to call BLAST in a workflow script (e.g. using Python's `subprocess` module), then the call is CPU bound if you set `num_threads` to a high number, as it spreads the work across multiple threads in a single CPU, which takes a lot of the CPU power? Or does the fact that BLASTP is not taking full advantage of the threading mean that the CPU is not actually getting fully utilised, so a call will still be input/output bound independent of `num_threads`? If that's correct, then maybe I could use threading to process the split queries separately rather than multiprocessing...
***Question 3*** - Are there any suggestions for how to get the best core and thread parameters for general use across different machines without relying on individual benchmarking (I want it to work on other peoples machines with as little tuning and optimisation as possible). Is it just **cores = as many cores as you have** (ie `multiprocessing.cpu_count()`) and **threads = cores + 1** (defined by the BLASTP parameter `num_threads`)? Would this still be true on machines with more/less cores?
***Question 4*** - for benchmarking, how do external programs affect multiprocessing speed - would scrolling the web with 100 tabs open impact multiprocessing speed by increasing the work done by one of the CPUs, taking away resources from one of the processes running my script? If the answer is yes, what's the best way to benchmark this kind of thing? I'm including this question to give context on my benchmarking questions below (i.e. the numbers I am throwing around may be crap). I tried to include graphs of the numbers but they won't copy in, however I found a post explaining how to add pics so if they are helpful I can add them in.
***Question 5*** - Perhaps a more general question, I'm only splitting my query into 4 processes so would have thought `multiprocessing.Process` would be better (vs `multiprocessing.Pool`, which seems the preferred choice if you have lots of processes). But this isn't the case in my benchmarks, for multiprocessing using `blastP_paralellised_process` and `blastP_paralellised_pool` - any idea why? Timewise the `process` to `pool` 'seconds' ratio hovers around 1 with no obvious pattern for all `num_threads` (1-9) and `core` (1-5) combinations.
***Question 6*** - why does increasing the numbers of cores used to process `number of cores` * `split BLASTP queries` not result in obvious speed improvements? I would expect this with cores set >4, as my pc is a 4-core machine, but there seems to be little difference between processing 1/4 query files across 4 cores vs processing 1/2 query files across 2 cores. Is my assumption for **Question 2** incorrect? There is a little bit of slowdown for running on a single core and a dramatic increase for 1 core with 2 and 1 threads (1618 seconds and 2297 seconds), but for 2-5 cores with 1-9 threads the time for each blastP run is around 1000 seconds with some small random fluctuations (eg 4 cores 1 thread is 1323 seconds, but the other multicore single thread runs are normal timewise relative to the baseline of the other values).
I've copied my code below. I've not included functions like `fasta_split` etc., as both they and BLASTP seem to be working (in the sense that I'm getting xml results files that I haven't started parsing yet but that look OK when I open them in Notepad) and I don't want to add 100 lines of unnecessary code and comments. Also, they're used in the same way for both `blastP_paralellised_process` and `blastP_paralellised_pool`, so I don't think they are causing the time differences. Please let me know if including these would help though!
def blastP_paralellised_process(evalue_user, query_in_path, blastp_exe_path, results_out_path, db_user, num_cores, thread_num):
#function to split fasta query in 1 txt file per core
filenames_query_in_split=fasta_split(query_in_path, num_cores)
#function to construct result names for blastp parameter 'out'
filenames_results_out_split=build_split_filename(results_out_path, num_cores)
#copy a makeblastdb database given as iinput. generate one database per core.
#Change name of file to include 'copy' and keep original database directory for quality control.
delim=db_user.rindex('\\')
db_name=db_user[delim::]
db_base=db_user[:delim]
databases=copy_dir(db_base, num_cores)#1 db per process or get lock
#split blastp params across processes.
processes=[]
for file_in_real, file_out_name, database in zip(filenames_query_in_split, filenames_results_out_split, databases):
#'blastP_subprocess' is a blast specific subprocess call that sets the environment to have
#env={'BLASTDB_LMDB_MAP_SIZE':'1000000'} and has some diagnostic error management.
blastP_process=Process(target=blastP_subprocess,
args=(evalue_user,
file_in_real,
blastp_exe_path,
file_out_name,
database+db_name,
thread_num))
blastP_process.start()
processes.append(blastP_process)
#let processes all finish
for blastP_process in processes:
blastP_process.join()
def blastP_paralellised_pool(evalue_user, query_in_path, blastp_exe_path, results_out_path, db_user, num_cores, thread_num):
####as above####
filenames_query_in_split=fasta_split(query_in_path, num_cores)
filenames_results_out_split=build_split_filename(results_out_path, num_cores)
delim=db_user.rindex('\\')
db_name=db_user[delim::]
db_base=db_user[:delim]
databases=copy_dir(db_base, num_cores)
################
#build params for blast
params_new=list(zip(
[evalue_user]*num_cores,
filenames_query_in_split,
[blastp_exe_path]*num_cores,
filenames_results_out_split,
[database+db_name for database in databases],
[thread_num]*num_cores))
#feed each param to a worker in pool
with Pool(num_cores) as pool:
blastP_process=pool.starmap(blastP_subprocess, params_new)
if __name__ == '__main__':
#make blast db
makeblastdb_exe_path=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Executables\NCBI\blast-2.10.1+\bin\makeblastdb.exe'
input_fasta_path=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Precomputed_files\fasta_sequences_SMCOG_efetch_only.txt'
db_outpath=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_queries\DEMgenome_old\database\smcog_db'
db_type_str='prot'
start_time = time.time()
makeblastdb_subprocess(makeblastdb_exe_path, input_fasta_path, db_type_str, db_outpath)
print("--- makeblastdb %s seconds ---" % (time.time() - start_time))
#get blast settings
evalue_user= 0.001
query_user=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_queries\DEMgenome_old\genome_1_vicky_3.txt'
blastp_exe_path=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Executables\NCBI\blast-2.10.1+\bin\blastp.exe'
out_path=r'C:\Users\u03132tk\.spyder-py3\ModuleMapper\Backend\Intermediate_files\BLASTP_results\blastresults_genome_1_vicky_3.xml'#zml?
num_cores=os.cpu_count()
#benchmarking
for num_cores in range(1,6)[::-1]:
print()
for num_threads in range (1,10)[::-1]:
start_time = time.time()
blastP_paralellised_process(evalue_user, query_user, blastp_exe_path, out_path, db_outpath, num_cores, num_threads)
end_time=time.time()
print(f"blastP process\t{end_time - start_time} seconds\t{num_cores} cores\t{num_threads} threads\treplicate {replicate}" )
start_time = time.time()
blastP_paralellised_pool(evalue_user, query_user, blastp_exe_path, out_path, db_outpath, num_cores, num_threads)
end_time=time.time()
print(f"blastP pool\t{end_time - start_time} seconds\t{num_cores} cores\t{num_threads} threads\treplicate {replicate}" )
print()
| Q1: Generally, I think you are correct in your interpretation between threading and multiprocessing. For BLAST, `num_threads` also has to do with spreading the workload of aligning across the threads you've allotted and query sequences you've provided, e.g. if you're querying `nr`, more threads will increase the throughput simply by expanding how many sequences can be aligned at once. Nevertheless, threading and multiprocessing are more nuanced than that and don't scale linearly sometimes when you think they should, so there is typically an asymptote of performance gains when increasing threads... beyond the asymptote you will likely decrease throughput.
Q2: I'm not sure I follow everything you're saying, but let's say that you subprocessed a bunch of BLAST searches with one thread allotted per search; the number of concurrent BLAST queries you are running would scale with the subprocessing, and the speed of each of those queries should scale, to a point, with the threading. However, you may be starting to unnecessarily convolute things here - why not just compile your queries into a single fasta and search with one BLAST search that uses all available threads, instead of relying on yourself to implement the subprocessing optimally? BLAST is a significantly tested program - probably one of the best tested, if not the best, of all bioinformatic software - I trust their implementation of multithreading over my own - anything well written at the C level will surely be faster than Python. The only exception I can think of here is if you are reaching the peak threading improvement for BLAST; then it may make sense to start multiprocessing the workflow, with each subprocess allocating the optimal threads.
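For example, rather than subprocessing, a single call along these lines (file names are placeholders; the e-value matches the one in your script):

    blastp -query all_queries.fasta -db my_db -evalue 0.001 -num_threads 8 -outfmt 5 -out results.xml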
Q3: For this you will have to turn to the literature or user forums to find actual graphs on how BLAST scales with multithreading.
Q4: If you are spreading your workflow across all your CPUs and maximizing CPU usage within the workflow then of course you are going to be cutting into performance once you start performing other computer processes. It is best practice, in my experience, to leave 1 to 2 CPU cores free during these analyses if you aren't using a HPC so that you don't interfere with system processes and potentially crash your system. With respect to many tabs, you may run into RAM issues before CPU problems.
Q5: No clue, I simply use `pool` so that I can scale in whichever way I choose. Most scripts I've reviewed do the same.
Q6: Again, if at all possible, leave the threading to the program that's been deployed around the globe and used for a couple decades haha. Multithreading in BLAST is tailored toward the operation, multiprocessing by you is opening a blackbox of different things that can affect performance. Unless you're at the asymptote of performance increase v threads, then prefer BLAST's implementation of multithreading over multiprocessing in my opinion. | biostars | {"uid": 487527, "view_count": 1993, "vote_count": 1} |
Hi everyone
recently I downloaded some SRA files (12 files) from NCBI using this command:
prefetch ERR4172779
All of the SRA files were downloaded properly, but when I try to convert them to fastq files using this command:
fastq-dump --split-3 ERR4172779
it shows this:
2020-08-12T11:42:02 fastq-dump.2.9.4 err: file invalid while opening directory within file system module - failed to resolve column 'READ' idx '1'
2020-08-12T11:42:02 fastq-dump.2.9.4 err: file invalid while opening directory within file system module - failed to resolve column 'READ' idx '1'
2020-08-12T11:42:02 fastq-dump.2.9.4 err: file invalid while opening directory within file system module - failed ERR4172779.sra
==================================================
An error occurred during processing.
A report was generated into the file '/home/mohammad/ncbi_error_report.xml'.
If the problem persists, you may consider sending the file
to '[email protected]' for assistance.
what's the problem??
| Looking at the ENA archive [here](https://www.ebi.ac.uk/ena/browser/view/ERR4172779), it seems fastq files are not available for this accession. However, they provide BAM files.
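If you go that route, here is a sketch for getting fastq back out of the BAM — copy the actual BAM URL from the ENA record, it is deliberately not reproduced here:

    wget "<BAM URL from the ENA record>"
    samtools sort -n -o ERR4172779.namesorted.bam ERR4172779.bam
    samtools fastq -1 ERR4172779_1.fastq -2 ERR4172779_2.fastq ERR4172779.namesorted.bam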
(As a side note, in my opinion it is better to work with ENA than SRA; see also this answer of mine at https://bioinformatics.stackexchange.com/a/14121/339) | biostars | {"uid": 454947, "view_count": 1205, "vote_count": 1}
I generated TPM counts from fastq data using salmon. This leaves me with the NM_transcript IDs. I would like to generate the gene symbols from the transcript IDs . Biomart does not recognize transcripts, NCBI Datasets produces an error when I run the entire transcriptome. I have been exploring tximport and tximeta, however, I have run into numerous issues particularly with tximeta not detecting my ref file. Any advice would be greatly appreciated.
Update: I now have tximport and tximeta running; however, they create S4 objects and I am unsure how to make these readable
| tximeta was able to compile all my quant.sf files and summarize to gene level | biostars | {"uid": 476347, "view_count": 1355, "vote_count": 1} |
How many reads per sample give me good coverage for soybean RNA sequencing, with a 1.1 Gb genome length?
I will use an Illumina rRNA-depleted plant stranded mRNA library prep for the cDNA library, and I would like to discover rarely expressed genes.
The genome of soybean has been sequenced.
| Sequencing depth is usually taken to be the expected mean coverage at all loci over the target sequences; in the case of RNA-seq experiments this assumes all transcripts have a similar level of expression.
For researchers with a fixed budget, a critical design question is often whether to increase the sequencing depth at the cost of reduced sample numbers, or to increase the sample size with limited coverage for each sample.
Necessary coverage is determined by the type of study, gene expression level, size of reference genome, published literature, and best practice defined by the scientific community.
C = LN / G
C : coverage
L : read length
N : number of read
G : haploid genome length
On the HiSeq 2500 high output run mode, a single flow cell for 2x100 bp reads gives ~4 billion paired-end reads (claimed by Illumina, so expect a bit lower), i.e. 0.5 billion per lane.
C = 2 x100 x 500 000 000 / 1 100 000 000 = 91 X
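Rearranged for planning, N = C x G / L. For example, to target 30X on this genome with 2x100 bp reads: N = 30 x 1 100 000 000 / 200 = 165 million read pairs.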
So you should have about 91X coverage per lane (~3000$) that you now need to divide into conditions and replicates. If you haven't heard about randomization, replication, blocking and sampling, now is a good time to do so. | biostars | {"uid": 191031, "view_count": 2073, "vote_count": 1}
Is there a better way of downloading the human genome reference sequence in fasta format than downloading it from the UCSC site? The [BWA](http://bio-bwa.sourceforge.net/) protocol asks for an index to be created from the human genome reference multi-fasta, so I want to get this.
Thanks
[Edited for clarification in response to answers and comments:]
| [The version][1] used by the 1000 genomes project is recommended. The mitochondrial genome in the g1k version is the most widely used [rCRS][2]. The chromosomes and contigs are concatenated, so it is less likely to make mistakes (people frequently concatenate all sequences including different haplotypes from the same region).
We have seen a lot of complications caused by different chromosome names (chr1 vs. 1) or different ordering (chr2 before chr10 or after). It is true that which b37 version to use does not matter too much, but converging to something close to a standard would save a lot of unnecessary work for everyone.
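For example — at the time of writing the b37 fasta in that directory is named as below, but double-check on the FTP site:

    wget ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/technical/reference/human_g1k_v37.fasta.gz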
[1]: ftp://ftp-trace.ncbi.nih.gov/1000genomes/ftp/technical/reference/
[2]: http://en.wikipedia.org/wiki/Cambridge_Reference_Sequence | biostars | {"uid": 1796, "view_count": 117703, "vote_count": 49} |
Hi all,
I have a fasta file of an assembled genome of a bacterial strain. I would like to identify the species of this strain (or the closest one possible) using the 16S rRNA sequence.
I tried downloading multiple 16S rRNA sequences from the NCBI database and performing blastn analyses, but I don't think I am using the most efficient method.
Do you have any recommendations on what tool to use to identify the species of a particular strain using 16S rRNA?
Thank you in advance for your help!
Audrey | Try this:
Download the [bacterial SSU][1] model from RFAM: https://rfam.org/family/RF00177#tabview=tab9
Then use the .cm file with the tool cmsearch of the [Infernal suite][2] to locate the SSU in the genome, extract the genome sequence at the
best scoring coordinates and submit to NCBI Blast or blast against SILVA. You can repeat that with the LSU as well if you like. If you want to blast the rest of the genome to gain further confidence, then you can use the taxon obtained by SSU search to restrict the blast taxon range and thereby speed up the blast search.
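Something like this, for instance — file names are placeholders, and the faidx coordinates stand in for whatever the top hit reports:

    cmsearch --tblout ssu_hits.tbl RF00177.cm assembly.fasta
    # take the contig/start/end of the top-scoring hit from ssu_hits.tbl, then:
    samtools faidx assembly.fasta "contig_12:53700-55240" > ssu_16S.fasta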
Hope this works out for you.
[1]: https://rfam.org/family/RF00177
[2]: http://eddylab.org/infernal/ | biostars | {"uid": 491178, "view_count": 1382, "vote_count": 1} |
How do I use the birch function in R? The birch package has been removed from CRAN, and it shows errors while installing from the archive. I also want to compare the performance of birch and k-means clustering. Please, can anyone help? Thanks in advance.
| Package 'birch' was removed from the CRAN repository.
Formerly available versions can be obtained from [the archive][1].
Archived on 2014-05-30 as long-standing memory access errors leading to segfaults were ignored.
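If you still want to try it, archived source versions install like this — the version number below is a placeholder, so pick a real tarball from the archive:

    url <- "http://cran.r-project.org/src/contrib/Archive/birch/birch_x.y-z.tar.gz"
    install.packages(url, repos = NULL, type = "source")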
> Therefore, your performance comparison might be hampered by implementation errors. You will most likely not get any meaningful performance data with respect to the algorithm itself, instead it will be influenced heavily by the implementation problems of birch. If you want a fair comparison you should either fix the package (might be hard) or look for a different implementation.
[1]: http://cran.r-project.org/src/contrib/Archive/birch | biostars | {"uid": 133153, "view_count": 4251, "vote_count": 1} |
I have a very basic question. I have created a local (MySQL) database on my system. I would like to know what would be the best option for a front end. I have used HTML before and want to try out something new. I might also add some Perl code in between for some computational work. Kindly help me in this regard.
 | Hi,
So it depends on what you want to do; personally I prefer Python and find it very easy to build HTML interfaces to databases very quickly (it's very easy to get something working quickly):
For example, Pylons has a very simple way of building templates for web pages, and it uses SQLAlchemy (an object-relational mapper) to access the database side.
http://pylonshq.com/
or
http://www.djangoproject.com/
Both have relatively good AJAX support. I know you may want to use Perl ... but I thought I'd suggest that there are some very easy Python frameworks available for this. | biostars | {"uid": 1524, "view_count": 8372, "vote_count": 4}
The topGO package appears to have a function `showSigOfNodes`, which plots the GO graph and highlights the enriched terms. I have done my GO enrichment analysis with GOseq, and that package doesn't provide a plot function like that. Is there any way the function from topGO can be made to accept the data from GOseq?
If not, what else should be used? I know REVIGO and it's nice but it shows graphs based on semantic similarity, not the GO hierarchy.
Is there anything else available in R/Bioconductor? | If you just want to view the GO hierarchy for a set of provided GO IDs, you can try combining the function `makeGOGraph` from the AnnotationDbi package with `plotGOTermGraph` from GOstats.
---
UPDATE: 20170510
```r
    library(RamiGO)  # getAmigoTree is provided by the RamiGO package
    goIDs <- c("GO:0051130","GO:0019912","GO:0005783")
color <- c("lightblue","red","yellow")
pv <- c(0.0001,0.03,1e-10)
## get results
pp <- getAmigoTree(goIDs=goIDs,color=color,filename="example",picType="png",pvalues=pv)
```
The graph can be retrieved from: http://pan.baidu.com/s/1c2lEkxy | biostars | {"uid": 134159, "view_count": 4994, "vote_count": 1}
I'm sorry if this is a repeated question, but I continue to have doubts.
I created a blastdb like this:
makeblastdb -in input.fasta -dbtype nucl -title test_DB -parse_seqids -out test_DB
But I can't understand how to add the taxids in order to get the same result as if I used the full nt database.
Can someone clarify this for me? Thanks | your command should look like:
makeblastdb -in input.fasta -dbtype nucl -title test_DB -parse_seqids -taxid_map taxidmapfile -out test_DB
The taxidmap file is a text file consisting of two columns.
You can download the taxonomy id information here:
ftp://ftp.ncbi.nih.gov/pub/taxonomy/accession2taxid/nucl_gb.accession2taxid.gz
You need to unpack it and you can make a taxidmap file by doing (something like):
```
sed '1d' nucl_gb.accession2taxid | awk '{print $2" "$3}' > taxidmapfile
``` | biostars | {"uid": 419191, "view_count": 2385, "vote_count": 3} |
I have some older VCF files that don't have the contig length set in the VCF header. This means that Picard and some other tools that are very strict with the VCF spec won't accept them.
The contig entries in the header should be
##contig=<ID=1,length=195471971>
##contig=<ID=2,length=182113224>
but they are
##contig=<ID=1>
##contig=<ID=2>
I know that I can manually fix this by doing the following steps.
- unzipping the file
- extracting the header
- lookup the contig lenghts in a fasta.fa.fai file
- adding the lenght to the contigs records in the header with vim
- re-header with bcftools
- bgzip and tabix the the re-headered vcf file
In my hands this works but if you make a slight VIM copy paste error you will have spend a lot of time reheadering and bgzipping a large VCF file for nothing.
Therefore I would like to have a more robust automatic solution where I just give the VCF file and the reference genome file and the header is automatically fixed and a new bgzipped VCF file is written out.
Is there a tool that can add the contig lenghts to the VCF header and write out a new bgzipped VCF file? | ## Use bcftools and a reference fasta index
You can use the following [bcftools command][1] to add contig info into your VCF / BCF files:
bcftools reheader --fai ref.fa.fai -Oz input.vcf.gz > output.vcf.gz
It will also remove unused contigs and generally fix issues.
[1]: https://samtools.github.io/bcftools/bcftools.html#reheader | biostars | {"uid": 198660, "view_count": 8062, "vote_count": 4} |
Hi,
I am trying to parse a MEME-formatted motif file using the Bio.motifs object.
>>> list(Motif.parse(open("test.meme"),"MEME"))
...
AttributeError: type object 'Motif' has no attribute 'parse'
Does anyone know if there is a replacement function for this?
Thanks.
J.
| [Looking at the docs][1], your usage looks off
>>> from Bio import motifs
>>> with open("Motif/alignace.out") as handle:
... for m in motifs.parse(handle, "AlignAce"):
... print(m.consensus)
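For your MEME file the same lowercase `motifs.parse` call should work — an untested sketch:

    >>> with open("test.meme") as handle:
    ...     meme_motifs = motifs.parse(handle, "MEME")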
[1]: http://biopython.org/DIST/docs/api/Bio.motifs-module.html
So motifs, not Motif? | biostars | {"uid": 183654, "view_count": 1703, "vote_count": 1} |
Hi everyone,
Sometimes I am asked to mention the most exciting moment or part of my research work. I often say that I was excited the first time I learned how to work with the command line in Linux. However, I think there might be a specific purpose behind asking this question. What would you reply if you were asked this?
Thank you very much
| I believe that is a common job interview question; in general, it is used to check with the candidate what his/her expectations are and what he/she considers important in his/her career. It also depends on the background: a biologist entering bioinformatics will be excited about CS stuff, and vice-versa. So, there is no correct answer, just be sincere.
In my particular case, I am constantly excited, but I can highlight:
- Learning the command line
- Learning to code
- The first time I use an HPC
- The first paper in a high-impact journal | biostars | {"uid": 282918, "view_count": 1088, "vote_count": 1} |
Hello everyone, I am trying to reproduce the lncRNA set from the paper "Portraying breast cancer with lncRNAs" by Olivier et al. Here I have come to the reproducing step:
![enter image description here][1]
I don't know whether my code is doing this right; can someone give me your comments?
# load RAW files
RAW_files = list.files(path = "./RAWsets", pattern = "_RAW.tar")
# RNA.list = list of RAW file after processed by Frozen Robust Multiarray Analysis
RNA.list = list()
# iterate through the .CEL files
for (RAWstr in RAW_files) {
print(RAWstr)
# extract those raw .tar files
RAWdirectory = paste("./RAWsets", substr(RAWstr, 1, nchar(RAWstr) - 8), sep = "/")
tardir = paste("./RAWsets", RAWstr, sep = "/")
untar(tardir, exdir = RAWdirectory, verbose = F)
# extract those extracted .CEL files
cels = list.files(RAWdirectory, pattern = "CEL.gz", full.names = T)
sapply(cels, gunzip)
# read all cel files to data.raw variable
data.raw = ReadAffy(filenames = substr(cels, 1, nchar(cels)-3), verbose = F, cdfname = "hgu133plus2cdf")
# frozen robust multiarray analysis
data.matrix = frma(data.raw)
# add to RNA.list as described above
tmp = list(data.matrix)
names(tmp) = substr(RAWstr, 1, nchar(RAWstr) - 8)
RNA.list = append(RNA.list, tmp)
}
### Combine into 1 matrix of all data sets
RNAdata = as.matrix(exprs(RNA.list[[1]]))
for (i in 2:length(RNA.list)) {
RNAdata = CombineMatrix(RNAdata, as.matrix(exprs(RNA.list[[i]])))
}
# normalize between arrays on a data of fRMA before
RNAdata_scaled = normalizeBetweenArrays(RNAdata)
# because samples names are GSMxxxxxx.CEL => erase .CEL extension
colnames(RNAdata_scaled) = substr(colnames(RNAdata_scaled), 1, 9)
# load data from table S1, S2, S3, S7
load("tables_set.RData")
# mutual index between total data and tableS1 which have clinical information
tableS1 = tableS1[match(colnames(RNAdata_scaled), tableS1[, 1]), ]
# compute batch data using tumor vs normal tissue as covariate
batch = as.numeric(lapply(tableS1[, 3], function(str) if (str == "normal") return(1) else return(2)))
# applied Combat algorithm in sva library with default parameters to adjust data for batch effects
RNAdata_scaled = ComBat(dat = RNAdata_scaled, batch = batch)
Your help is really appreciated!
    [1]: https://image.ibb.co/h3Mvmk/Screenshot_from_2017_08_28_11_23_14.png | Hey, it seems that what you are setting as batch is in reality the covariate value (which is defined as the condition or "variable of interest" in the linear model). This is the code for applying ComBat if you specify both the batches and the conditions that you have in your dataset:
sample <- Pheno$Sample.Name
batch <- Pheno$Batch
condition <- Pheno$Covariate.1
pdata <- data.frame(sample, batch, condition)
modmatrix <- model.matrix(~as.factor(condition), data=pdata)
combat_data_matrix <- ComBat(dat=yourmatrix, batch=batch, mod=modmatrix) | biostars | {"uid": 269555, "view_count": 2500, "vote_count": 1} |
Hi. I am trying to conceptualise how I could do this, and whether it makes sense, and looking for advice.
I have 4 RNAseq bioreps from a batch of samples, which corresponds to some DNA work that I am doing. I intend to use the RNAseq to improve my TSS positions by using the correct splice variants, with generally better accuracy than the publicly downloadable gtf file.
This is fine for the independent bioreps (I have done this using cufflinks for each sample and have 4 gtf files), but I am wondering whether I can use the four together to build a consensus for greater accuracy.
Question:
**Would I be better to just merge the 4 bams and call TSSs from the whole dataset, or is there a way to interpret them together which would have better accuracy in generating the final GTF annotation? Is there a standard practice for this step?**
What I can't understand is how differences between the bioreps would be dealt with. I imagine minor differences could end up giving 4 models for each gene, and this wouldn't help.
Thanks | Are these 5'RNA-Seq datasets? I would do them both ways.
1. Generate four TSS lists from four replicates
2. Merge them and generate a single list.
Also, try ranking each TSS locus with a parameter (e.g. how much enrichment you found and in which rep), then generate high-, medium- and low-confidence lists of TSS and cross-compare. You can also use additional scores like the presence of conserved TATA boxes, CpG islands and GC strength.
Copying an excerpt from the Homer suite documentation:
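For the merging step itself, a bedtools sketch — one BED of TSS calls per replicate is assumed, and the -d window plus using the merge count as a rough support score are my choices, not gospel:

    cat rep1_tss.bed rep2_tss.bed rep3_tss.bed rep4_tss.bed | sort -k1,1 -k2,2n > all_tss.bed
    # column 4 ~ how many calls (roughly, how many reps) support each merged locus
    bedtools merge -d 50 -c 1 -o count -i all_tss.bed > merged_tss.bed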
http://homer.salk.edu/homer/ngs/tss/index.html
> **Introduction to Transcriptional Initiation at Metazoan Promoters**
>
> To understand the analysis of 5'RNA data, it is worth taking a moment to highlight that there are multiple 'types' of promoters in living organisms. First of all, there are different RNA polymerases including RNA polymerase I (rRNA), II (mRNA, lncRNA, miRNA), III (tRNA), IV (plant specific), viral polymerases, etc., and each polymerase has different mechanisms of transcriptional initiation that may vary between distantly related organisms. Also be aware that different RNA polymerases may generate RNAs with different covalent modifications that may or may not be present in your 5' RNA sequencing, depending on how the experiment was performed. By and large, most researchers are interested in RNA polymerase II transcripts (mRNA) and as a result most 5'RNA methods focus on the identification of
>
> **RNAs containing a 7-methylguanosine cap protecting their 5' end.**
>
> With respect to RNA polymerase II initiation sites, there are two generally recognized 'types' of TSS. Sharp (or Focused) TSS initiate transcription from a single nucleotide (or +/- 2 nt) and resemble the promoters found in molecular biology textbooks. They often contain well-defined core-promoter elements such as the TATA box and usually initiate transcription from a purine preceded by a pyrimidine (PyPu, i.e. CA, with the A being the initiating nucleotide).
>
> The other, more common TSS is a broad (or dispersed) TSS. These promoters initiate transcription from several different sites within a large area (often 50-100 nt in size). These promoters usually lack core promoter elements (no TATA box), but each individual initiation site DOES normally still initiate on a purine preceded by a pyrimidine (PyPu).
>
> **False TSS - be careful of artifacts**
>
> A quick note about artifacts in 5'RNA-Seq data: Most 5' RNA-Seq methodologies work by enriching for 5' cap-protected RNA, which means that most of the sequence data describes 5' RNA ends, but a fraction of it may be noise from random RNA-Seq fragments (again, a lot like ChIP-Seq). In particular, highly expressed RNAs may yield "5'RNA-Seq" reads along the whole body of the gene, giving the appearance of alternative TSS which are likely false positives. Because of this, I would highly recommend using traditional RNA-Seq as a "background" when analyzing 5' RNA-Seq data. This approach (described below) may remove several real TSS from the results, but it is also likely to remove a large number of false positives and clean up your analysis.
>
> Trans-splicing of transcripts (where the 5' end of one transcript is added to the front of another) and recapping (where a transcript is cleaved and a new cap placed on the truncated product) are two phenomena you may want to think carefully about when analysing 5' RNA-Seq data. Trans-splicing will create false negatives and recapping will create false positives. In certain organisms, such as C. elegans, trans-splicing is very common, making 5'GRO-Seq a much better assay for identifying TSS than 5'RNA-Seq (i.e. measuring the 5' RNA ends before they have a chance to trans-splice). In other organisms (e.g. mouse, human, fly, etc.) it appears to be rare. The degree to which transcripts are 'recapped' is a matter of debate because it can be hard to distinguish them from true alternative TSS or noise in the 5' RNA-seq assay. | biostars | {"uid": 153546, "view_count": 2927, "vote_count": 2}
I have been asked to recommend introductory books and resources to R and Bioconductor. My problem is just, I never read a book to learn R or Bioconductor, so I have no experience with this and cannot recommend one. I am interested mainly in introductory books, possibly targeting various groups of readers (computer scientists, molecular biologists, (bio-)statisticians); any recommendation appreciated.
For example, I used the following resources:
- The [R-manuals](http://cran.r-project.org/manuals.html), especially [the R intro](http://cran.r-project.org/doc/manuals/R-intro.html)
- There are also [a lot of contributed documents there](http://cran.r-project.org/other-docs.html) on the R web site, but I didn't use them.
- If a package from Bioconductor interests me, I read the package vignette.
- I read the Bioconductor mailing list; that helps to see what other people use.
- I have the "[Venables, Ripley. S Programming](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-98966-2)" book, which is hardly introductory.
Which books did you find helpful or completely useless to learn R/Bioconductor? For example: [R Programming for Bioinformatics](http://www.bioconductor.org/pub/RBioinf/) looks promising, anybody read it?
Or do you share my reluctance towards R-books and prefer online resources?
| ["Statistics Using R with Biological Examples"](http://cran.r-project.org/doc/contrib/Seefeld_StatsRBio.pdf) by Kim Seefeld helped me a lot
also ["Applied Statistics for Bioinformatics using R"](http://cran.r-project.org/doc/contrib/Krijnen-IntroBioInfStatistics.pdf) by Wim P. Krijnen
| biostars | {"uid": 539, "view_count": 23484, "vote_count": 73} |
When extracting reads aligned to a certain region, what is the difference between using samtools view and bamtools filter region?
Here is the code and the counts of reads recovered. I don't understand why the numbers of reads recovered are so different between the two approaches. Any help??
```
samtools index IV75_realigned_reads.bam
samtools view -hb IV75_realigned_reads.bam 3R:15,000,000-15,100,000 > Control_region.bam
bamtools count -in Control_region.bam
# 28034
bamtools filter -region 3R:15,000,000-15,100,000 -in IV75_realigned_reads.bam -out test.bam
bamtools count -in test.bam
# 7523775
``` | As far as I can read in the bamtools manual, regions are defined with two dots rather than a hyphen. You should try (and don't play with fire — remove the commas):
3R:15000000..15100000
> REGION string format: A proper REGION string can be formatted like any of the following
> examples (where 'chr1' is the name of a reference (not its ID) and the number is any valid integer position
> within that reference):
> -region chr1
> only alignments on (entire) reference 'chr1'
> -region chr1:500
> only alignments overlapping the region starting at chr1:500 and continuing to the end of chr1
> -region chr1:500..1000
> only alignments overlapping the region starting at chr1:500 and continuing to chr1:1000
> -region chr1:500..chr3:750
> only alignments overlapping the region starting at chr1:500 and continuing to chr3:750.
| biostars | {"uid": 106861, "view_count": 6376, "vote_count": 1} |
I have been asked to recommend introductory books and resources to R and Bioconductor. My problem is just, I never read a book to learn R or Bioconductor, so I have no experience with this and cannot recommend one. I am interested mainly in introductory books, possibly targeting various groups of readers (computer scientists, molecular biologists, (bio-)statisticians); any recommendation appreciated.
For example, I used the following resources:
- The [R-manuals](http://cran.r-project.org/manuals.html), especially [the R intro](http://cran.r-project.org/doc/manuals/R-intro.html)
- There are also [a lot of contributed documents there](http://cran.r-project.org/other-docs.html) on the R web site, but I didn't use them.
- If a package from Bioconductor interests me, I read the package vignette.
- I read the Bioconductor mailing list; that helps to see what other people use.
- I have the "[Venables, Ripley. S Programming](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-98966-2)" book, which is hardly introductory.
Which books did you find helpful or completely useless to learn R/Bioconductor? For example: [R Programming for Bioinformatics](http://www.bioconductor.org/pub/RBioinf/) looks promising, anybody read it?
Or do you share my reluctance towards R-books and prefer online resources?
| I have R in a Nutshell on my desk and I use it at least weekly if not daily. It's well indexed with a lot of great examples and helps me save time by not searching online. The BioC chapter is decent, but quite short. I also bought Data Mashups in R, which I found to be a great purchase and a fun way to learn about GIS capabilities and the Yahoo API in addition to R. | biostars | {"uid": 539, "view_count": 23484, "vote_count": 73}
Dear all,
I was wondering if there is any tool to calculate PSI from BAM files. Basically, what I want is this:
![PSI junction reads: inclusion junctions a and b, skipping junction c](http://bioinformatics.oxfordjournals.org/content/29/2/273/F1.large.jpg)
I have gtf file with all the exons and I want to calculate for each exon (lets take exon filled with grey for example) how many reads mapped to junction of that exon (a and b in figure) and how many reads skip that exon (c in figure). So if we have, let's say, 10k exons in the gtf, the output should contain 10k rows with a,b and c as columns.
I used [MATS][1] and [bam2ssj][2] but did not get what I was expecting. I have gtf file with approx. 300k exons but MATS gives the output only for 3k exons. bam2ssj is calculating the values for introns rather than exons.
Any suggestions?
Thanks in advance.
[1]: http://rnaseq-mats.sourceforge.net/user_guide.htm
[2]: https://github.com/pervouchine/bam2ssj | <p>I just tried to write something for this: https://github.com/lindenb/jvarkit/wiki/Biostar103303</p>
<p>I don't have any data to test though.</p>
| biostars | {"uid": 103303, "view_count": 18213, "vote_count": 4} |
Hi,
Here is the `flagstat` output of my bam file:
37750740 + 352032 in total (QC-passed reads + QC-failed reads)
0 + 0 secondary
0 + 0 supplementary
1965916 + 0 duplicates
20118207 + 184214 mapped (53.29% : 52.33%)
37750740 + 352032 paired in sequencing
18875370 + 176016 read1
18875370 + 176016 read2
15175066 + 138352 properly paired (40.20% : 39.30%)
16901306 + 153876 with itself and mate mapped
3216901 + 30338 singletons (8.52% : 8.62%)
1538902 + 13814 with mate mapped to a different chr
561025 + 5046 with mate mapped to a different chr (mapQ>=5)
I extracted the **properly-paired reads** with `samtools view -bf 0x2 1.bam > 1pp.bam` and again cross-checked with `flagstat`:
15175098 + 138352 in total (QC-passed reads + QC-failed reads)
0 + 0 secondary
0 + 0 supplementary
930902 + 0 duplicates
15175066 + 138352 mapped (100.00% : 100.00%)
15175098 + 138352 paired in sequencing
7587550 + 69176 read1
7587548 + 69176 read2
15175066 + 138352 properly paired (100.00% : 100.00%)
15175066 + 138352 with itself and mate mapped
0 + 0 singletons (0.00% : 0.00%)
2 + 0 with mate mapped to a different chr
2 + 0 with mate mapped to a different chr (mapQ>=5)
How can I extract reads that are both **properly paired and QC-passed**, rather than all properly-paired reads?
Kindly guide.
Thanks
| This should do it:
samtools view -b -f 2 -F 524 1.bam > 1pp.bam
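Here `-f 2` requires the proper-pair bit, while `-F 524` (= 0x4 + 0x8 + 0x200) excludes unmapped, mate-unmapped and QC-failed records. On a reasonably recent samtools you can decode such masks yourself (expected output shown as a comment):

    samtools flags 524
    # 0x20c  524  UNMAP,MUNMAP,QCFAIL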
You appear to have unmapped entries with assigned chromosomes. Presumably you used bwa, since it's the only tool I know of that produces things like that. | biostars | {"uid": 257094, "view_count": 8160, "vote_count": 2} |
As the title says, I'd like to know if there's a set convention for matching DNA/RNA/AA letter symbols (that is, ACGT for DNA, ACGU for RNA, and a lot more letters for amino acids) with properly defined colors (for example, DNA letters might be <span style="background-color:Green">A</span>: green, <span style="background-color:Blue">C</span>: blue, <span style="background-color:Yellow">G</span>: yellow/black and <span style="background-color:Red">T</span>: red)? The idea is that such colors would then be used in sequence displays for better recognition of the characters. Thanks!
You might consider using the same colors as JalView: http://www.jalview.org/help/html/colourSchemes/
Here are the available options for nucleotides in Geneious:
<img alt="" src="http://i.imgur.com/kuYCD2l.png" />
And here are the available options for amino acids in Geneious:
<img alt="" src="http://i.imgur.com/zhxauCR.png" /> | biostars | {"uid": 171056, "view_count": 11329, "vote_count": 1} |
I am looking for interactions between a protein (ACTB) and calcium and I want to use the paxtoolsr package. Is this possible? What code is necessary? | Here is code that would get you interactions that include actin or calcium from Pathway Commons:
library(paxtoolsr)
# READ AND FILTER PATHWAY COMMONS SIF INTERACTION TABLE ----
# With the file PathwayCommons12.reactome.hgnc.txt.gz downloaded from
# https://www.pathwaycommons.org/archives/PC2/v12/ and uncompressed
# NOTE: Either download directly or use downloadPc2()
# Read file
dat <- readSifnx("PathwayCommons12.reactome.hgnc.txt")
# Pathway Commons uses CHEBI identifiers for small molecules, chemicals, drugs, metabolites, etc.
# IDs:
# * Calcium: "CHEBI:29108" from https://www.ebi.ac.uk/chebi/searchId.do?chebiId=CHEBI:29108
# * Actins from https://www.genenames.org/data/genegroup/#!/group/929
ca2 <- "CHEBI:29108"
actins <- c("ACTA1", "ACTA2", "ACTB", "ACTBL2", "ACTC1", "ACTG1", "ACTG2")
# Subset the SIF interaction table to those entries with either calcium or actin
# Use idsBothParticipants=TRUE if it must be both source and target nodes must come from ids parameter
sif <- filterSif(dat$edges, ids=c(ca2, actins), idsBothParticipants=FALSE)
sif_ids_both <- filterSif(dat$edges, ids=c(ca2, actins), idsBothParticipants=TRUE)
write.table(sif, "calcium_actin_interactions.txt", sep="\t", quote=FALSE, row.names=FALSE)
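# NOTE: sif_ids_both above keeps only rows where BOTH endpoints are in ids,
# so it is the table to inspect if you want direct calcium <-> actin pairs only
head(sif_ids_both)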
# > head(sif)
# # A tibble: 6 × 7
# PARTICIPANT_A INTERACTION_TYPE PARTICIPANT_B INTERACTION_DATA_SO… INTERACTION_PUB… PATHWAY_NAMES MEDIATOR_IDS
# <chr> <chr> <chr> <chr> <chr> <chr> <chr>
# 1 ACTA2 in-complex-with CALD1 Reactome NA Smooth Muscl… http://path…
# 2 ACTA2 in-complex-with ITGA1 Reactome NA Smooth Muscl… http://path…
# 3 ACTA2 in-complex-with ITGB5 Reactome NA Smooth Muscl… http://path…
# 4 ACTA2 in-complex-with LMOD1 Reactome NA Smooth Muscl… http://path…
# 5 ACTA2 in-complex-with MYH11 Reactome NA Smooth Muscl… http://path…
# 6 ACTA2 in-complex-with MYL10 Reactome NA Smooth Muscl… http://path…
| biostars | {"uid": 9554423, "view_count": 176, "vote_count": 1} |
Hello,
I'm really new in the field, and maybe my question is a little bit naive.
I'm trying to annotate my SNPs by using VEP. The reason to do this is to find the nonsense SNPs on each genome, the synonymous and non-Synonymous.
The organism that I'm working on is sheep.
The thing that is a little bit confusing to me is the tag system that Ensembl uses.
For example, the "synonymous_variant" is for the synonymous SNPs, but I'm not so sure about the non-synonymous and the nonsense. I'm taking the "coding_sequence_variant" and "stop_gained", respectively. Am I right?
Also, I cannot identify the CNVs. Is there any particular tag for this?
A second issue I faced is that for some gene IDs there is no information about the name of the gene (symbol tag). Is there any way to somehow use a list of these IDs to find the names of these genes?
Thank you very much in advance and I'm really sorry for the questions "bombing".
The tag system is based on the [Sequence Ontology][1] (SO) consequence terms. "Non-synonymous" is not an SO term; the term you want is missense_variant (coding_sequence_variant is a much broader term covering any variant that falls in coding sequence). Check the SO definitions and a diagram showing the location of variants on the [Calculated consequence variants][2] page. Nonsense variants are indeed known as stop_gained. If you annotate CNVs (larger insertions or deletions, for example), you will get the same kind of SO consequence terms; these are some of the consequences for [copy_number_variation according to SO][3]. As for your second issue, I'd guess there is simply no gene name for the sheep gene, but you will still have the Ensembl stable ID, e.g. ENSOARG00000005819. If you sent some examples, it'd be easier to help.
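Once you have VEP output, you can pull out these classes with the filter_vep script that ships with VEP (a minimal sketch; `vep_output.txt` stands in for your actual VEP results file):

    # nonsense variants
    filter_vep -i vep_output.txt --filter "Consequence is stop_gained" -o nonsense.txt
    # non-synonymous (missense) variants
    filter_vep -i vep_output.txt --filter "Consequence is missense_variant" -o missense.txt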
[1]: http://www.sequenceontology.org/
[2]: http://www.ensembl.org/info/genome/variation/predicted_data.html#consequences
[3]: http://www.sequenceontology.org/browser/current_svn/term/SO:0001742 | biostars | {"uid": 202928, "view_count": 2874, "vote_count": 1} |
Hi all,
I'm running about 50 proteins of a particular species (sub_proteins.fa) against all proteins of that species (proteome.fa) using hmmer. I have done this using phmmer with the following command:
phmmer --tblout results.table sub_proteins.fa proteome.fa
However, I noticed that in the resulting file (results.table) there are multiple lines where the same protein is being compared to itself (I guess because the 50 query proteins are also found in the proteome file). See below for an example:
# target name accession query name accession E-value score bias E-value score bias exp reg clu ov env dom rep inc description of target
#------------------- ---------- -------------------- ---------- --------- ------ ----- --------- ------ ----- --- --- --- --- --- --- --- ---
YLL_767 - YLL_767 - 8.1e-177 580.6 1.4 1e-175 582.3 1.4 1.0 1 0 0 1 1 1 1
Is there any way to prevent the hmmer programs from doing this? The reason I am concerned is that after this step I am going to combine the query and significant targets into a model that will then be used to search against other species' proteomes... will having redundant proteins affect my results?
Any help is greatly appreciated! | There is no way to prevent the program from detecting identical matches. In most instances that's the exact purpose of these programs!
As to whether that will affect your downstream results, it depends on what you mean exactly by `I am going to be combining the query and significant targets together into a model`. If you mean that you plan to build a hidden Markov model from your query + identified sequences, then it doesn't matter that some sequences in your alignment will be identical. The model-building procedure automatically down-weights identical (and even very similar) sequences, so the net effect will be as if you had no duplicates.
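The experiment described below boils down to a single command (a minimal sketch assuming HMMER 3; `dups.sto` is a hypothetical Stockholm alignment that repeats the same sequence several times):

    hmmbuild dups.hmm dups.sto
    # the summary table printed to stdout includes both nseq (raw
    # sequence count) and eff_nseq (effective count after weighting)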
If you want to convince yourself that this is the case, make an alignment of two identical sequences - or 10 identical sequences, for that matter. When you run `hmmbuild` on that alignment, it will print a summary saying how many sequences are in the alignment (`nseq`), followed by how many effective sequences (`eff_nseq`) there are. No matter how many identical sequences you put in your alignment, `eff_nseq` will be 1 at most, and more likely smaller than 1. That means that even though the alignment has X sequences, the HMM-building program will act as if it has seen only one of them. Even for large alignments of non-identical sequences, say 2000 of them, `eff_nseq` will rarely go into double digits unless your sequences are truly diverse. | biostars | {"uid": 9461363, "view_count": 709, "vote_count": 3} |