INSTRUCTION | RESPONSE | SOURCE | METADATA
---|---|---|---|
Hi all
This question could be considered as a follow-up to this discussion:
http://www.biostars.org/post/show/3407/how-to-extract-reads-from-bam-that-overlap-with-specific-regions/#3414
What I need is to extract reads from a [bam](http://samtools.sourceforge.net/SAM1.pdf) file that fall **only within a given region** (not merely overlapping the given region), the region being in the form of a gff file or bed file. Overlapping reads can be extracted by several methods (as in the discussion mentioned, or with [BEDTools](https://code.google.com/p/bedtools/)). The idea is to be reasonably sure of excluding 5' UTRs in the process of detecting intergenic transcripts. I saw a tool in BamUtil (http://genome.sph.umich.edu/wiki/BamUtil) called "writeRegion" which would pretty much do what I want, but somehow I could not get it running on my dataset.
Was wondering if you guys might have an "R" or some other solution for this.
Thanks in advance
Abi
| I think the options have changed since this reply was written
if you want the default headerless sam file then use the command as per Damian Kao
samtools view input.bam "Chr10:18000-45500" > output.sam
if it's a bam output that you want you will need a '-b'
samtools view -b input.bam "Chr10:18000-45500" > output.bam | biostars | {"uid": 48719, "view_count": 127893, "vote_count": 30} |
Hi,
I have a CWL defined that needs to return a set of files (with all the filename extensions), so I have defined my outputs as:
outputs:
mapped_out:
type: File
outputBinding:
glob: $(inputs.sample).bam
secondaryFiles:
- .bai
- .bas
- .md5
- .met
- .maptime
I've tried a couple of variations of the json:
{
...
"mapped_out": {
"path": "/tmp/mapped.bam",
"class": "File"
},
...
}
This yielded one file provisioned to /tmp/mapped.bam
This version (based on [alea-createGenome.cwl][1] & [alea-alignReads-job.json][2]) didn't stage anything:
{
...
"mapped_out": "/tmp/mapped",
...
}
Everything seems to have completed on the cwltool side:
Final process status is success
{
"mapped_out": {
"checksum": "sha1$53bb0c4abb07013393891cb50a3feec4c6381304",
"basename": "insilico_21.bam",
"location": "file:///home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam",
"path": "/home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam",
"secondaryFiles": [
{
"checksum": "sha1$ef6f2cf70e11d7d0be17b79dfb02eb1277e43b41",
"basename": "insilico_21.bam.bai",
"location": "file:///home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.bai",
"path": "/home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.bai",
"class": "File",
"size": 1370120
},
{
"checksum": "sha1$4bf5068040c0e2a350aa21fa299f6567230bfbeb",
"basename": "insilico_21.bam.bas",
"location": "file:///home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.bas",
"path": "/home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.bas",
"class": "File",
"size": 1973
},
{
"checksum": "sha1$4a60424144f5283c4e9cf74deb214597cac8bae8",
"basename": "insilico_21.bam.md5",
"location": "file:///home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.md5",
"path": "/home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.md5",
"class": "File",
"size": 32
},
{
"checksum": "sha1$63139bed16686c6be0dd5469342af1dac8795260",
"basename": "insilico_21.bam.met",
"location": "file:///home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.met",
"path": "/home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.met",
"class": "File",
"size": 1521
},
{
"checksum": "sha1$39f641f432b510034fb96b3e73569f5fc1824521",
"basename": "insilico_21.bam.maptime",
"location": "file:///home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.maptime",
"path": "/home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.maptime",
"class": "File",
"size": 279
}
],
"class": "File",
"size": 42245405
}
}
Any help gratefully received.
Thanks,
Keiran
[1]: https://github.com/common-workflow-language/workflows/blob/master/tools/alea-createGenome.cwl
[2]: https://github.com/common-workflow-language/workflows/blob/master/test/alea-alignReads-job.json | Ah, I think I understand the confusion here. Apologies since I think we created it.
1) Dockstore input JSON can (optionally) include output parameters in order to provision files to locations like S3, icgc-storage, ftp. This is an artifact of Dockstore's beginnings in the pan-cancer project where we always wrote workflows that look like "download from GNOS/S3 -> do processing -> upload to GNOS/S3"
In other words, you should be able to do this to upload bamstats_report, an output to s3:
$ cat sample_configs.json
{
"bam_input": {
"class": "File",
"path": "https://s3.amazonaws.com/oconnor-test-bucket/sample-data/NA12878.chrom20.ILLUMINA.bwa.CEU.low_coverage.20121211.bam"
},
"bamstats_report": {
"class": "File",
"path": "s3://oicr.temp/bamstats.zip"
}
}
dockstore tool launch --entry quay.io/collaboratory/dockstore-tool-bamstats:1.25-6_1.0 --json sample_configs.json
And you should be able to do this to just leave the results in place on your local host
$ cat sample_configs2.json
{
"bam_input": {
"class": "File",
"path": "https://s3.amazonaws.com/oconnor-test-bucket/sample-data/NA12878.chrom20.ILLUMINA.bwa.CEU.low_coverage.20121211.bam"
}
}
$ dockstore tool launch --entry quay.io/collaboratory/dockstore-tool-bamstats:1.25-6_1.0 --json sample_configs2.json
This is a red herring though.
2) It looks like Dockstore has a bug/missing feature where we probably missed that output parameters (in the CWL) can also specify secondary files. While the secondary files look like they're being generated properly coming out of cwltool (in /home/ubuntu/./datastore/launcher-ccd381b4-c475-4770-b88b-bebd2b06439c/outputs/insilico_21.bam.*) , they aren't being moved further along to /tmp/mapped.* as we would have expected.
We're adding this as an issue https://github.com/ga4gh/dockstore/issues/544 | biostars | {"uid": 227168, "view_count": 2435, "vote_count": 1} |
Hello,
I'm trying to assemble contigs from a set of paired end reads, denoted as SRR960028_1.fasta and SRR960028_2.fasta. I'm running ABySS on my university's HPC facility. When I run ABySS it terminates early. The line of code itself (within the PBS file) is:
abyss-pe name=abyss_test1 k=63 in='SRR960028_1.fastq SRR960028_2.fastq' v=-v
The tail of the error file looks like this:
Mapped 272979576 of 273907308 reads (99.7%)
Mapped 247491841 of 273907308 reads uniquely (90.4%)
Read 273907308 alignments
Mateless 273907308 100%
Unaligned 0
Singleton 0
FR 0
RF 0
FF 0
Different 0
Total 273907308
abyss-fixmate: error: All reads are mateless. This can happen when first and second read IDs do not match.
error: 'abyss_test1-3.hist': No such file or directory
make: *** [abyss_test1-3.dist] Error 1
make: *** Deleting file `abyss_test1-3.dist'
I've seen the abyss-fixmate error pop up in threads here before, but most of the threads about the error seem to have 0 reads in the "Mateless" or the "Total" read section, whereas I have a number. I've also opened the .fasta files and they definitely contain reads. I've seen a few threads that recommend denoting the lines within the .fasta files with a /1 or a /2, but I was under the impression that denoting the files themselves as reads1.fa and reads2.fa would suffice for ABySS (or at least according to the ABySS manual, unless I'm incorrect).
The only thing in the output file is this:
abyss-map -v -j40 -l40 SRR960028_1.fastq SRR960028_2.fastq abyss_test1-3.fa \
|abyss-fixmate -v -l40 -h abyss_test1-3.hist \
|sort -snk3 -k4 \
|DistanceEst -v -j40 -k63 -l40 -s1000 -n10 -o abyss_test1-3.dist abyss_test1-3.hist
The output files generated by ABySS from the run included abyss_test1-1.fa, abyss_test1-2.fa, abyss_test1-3.fa and a abyss_test1-unitigs.fa file. I've checked the head and tail of the files, and they appear to contain contigs.
I'm reluctant to use these files for any analysis because I'm not sure how ABySS assembled them - does anybody know how ABySS assembled them?
Does anybody have any clue as to why ABySS is terminating early, and how I can fix it?
Thanks in advance! | Hi @jozs2019,
It is most likely a read naming issue that is preventing the reads from being properly paired by ABySS. (I can confirm this if you post the first 10 lines of your read 1 and read 2 files. You can use a gist if you like.)
ABySS requires that the FASTQ IDs (i.e. the first whitespace separated word of the lines beginning with `@`) for reads 1 and read 2 are either identical or have an identical prefix followed by `/1` and `/2`. (See https://github.com/bcgsc/abyss/wiki/ABySS-Users-FAQ#4-my-abyss-assembly-fails-and-i-get-an-error-that-says-abyss-fixmate-error-all-reads-are-mateless-this-can-happen-when-first-and-second-read-ids-do-not-match).
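If your read IDs are the newer Illumina/Casava 1.8+ style (e.g. `@SRR960028.1 1 length=100`), one minimal fix is to rewrite them with `/1` and `/2` suffixes before rerunning. A sketch with awk (the header layout here is an assumption, so check your own files first):

    # keep only the first word of each header line and append /1 (use /2 for the read 2 file)
    awk '{ if (NR % 4 == 1) { split($0, a, " "); print a[1] "/1" } else { print } }' SRR960028_1.fastq > fixed_1.fastq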
Your results up to `test-3.fa` should be fine, because those first steps of the pipeline don't make any use of the read pairing information. But for the sake of cleanliness and reproducibility, you may want to do a complete rerun of the pipeline after fixing the read IDs. | biostars | {"uid": 247711, "view_count": 3122, "vote_count": 1} |
Hi all,
The documentation for Samtools is minimal at best. I'm still confused on the concept of a clipped read.
1. What is a clipped read? How is it different from a deletion?
2. What is a Soft clip? If the sequence is present in the reference is it different from a mismatch?
3. What is a Hard clip?
Say if I wanted to calculate base pair coverage, would I include soft clipped bases because 'they are present in the <seq>?'
If someone can provide an example such as
```
REF: AGTCG GATCG GTACG
Read: AGTCG xxxCG GTACG
```
That would be even more awesomer
One last question:
I found a ' * ' as a CIGAR string, what does that mean?
Thanks | Hard masked bases do not appear in the SEQ string, soft masked bases do.
So, if your cigar is: `10H10M10H` then the SEQ will only be 10 bases long.
if your cigar is `10S10M10S` then the SEQ and base-quals will be 30 bases long.
In the case of soft-masking, even though the SEQ is present, it is not used by variant callers and not displayed when you view your data in a viewer. In either case, masked bases should not be used in calculating coverage.
Both of these maskings are different from deletions. Masking simply means the part of the read can not be aligned to the genome (simplified, but a reasonable assumption for most cases, I think). A deletion means that a stretch of genome is not present in the sample and therefore not in the reads.
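A small worked example in the spirit of the one requested (illustrative only):

    REF:   AGTCGGATCGGTACG
    read1:   tcGGATCGGTA    CIGAR: 2S9M -> SEQ has 11 bases, soft-clipped "tc" is kept
    read2:     GGATCGGTA    CIGAR: 2H9M -> SEQ has 9 bases, the clipped bases are gone

As for the last question: a CIGAR of `*` simply means the CIGAR is unavailable; typically the read is unmapped.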
I'm not sure when H is used instead of the S and vice-versa. I would like to know that. | biostars | {"uid": 109333, "view_count": 49476, "vote_count": 29} |
I'm trying to perform a thorough review of Hi-C and RNAi developments and have found myself looking at publication A (2011) and publication B (2015). B doesn't cite A explicitly, but I'm sure there are papers linking the two and I'd like to find them. Are there existing utilities to help with this? I've used [NCBI E-utilities](http://www.ncbi.nlm.nih.gov/home/api.shtml) in the past but would prefer not to rewrite an application that already exists.
| I played with neo4j and pubmed in 2010: http://plindenbaum.blogspot.fr/2010/02/path-from-egonwillighagen-to-jandot.html but it required loading a bunch of articles.
| biostars | {"uid": 163058, "view_count": 1714, "vote_count": 1} |
I just assembled a plasmid using Illumina 2x250 PE. I am pretty confident that the assembly is fine.
I checked by mapping the raw data back to the assembly. In general I have very high coverage (sometimes more than 1000x). Also, I have very few unmapped pairs.
But I have some regions where the coverage drops to 10-20 fold. This looks concerning, but I still think my assembly is fine because:
1. In these areas I also see barely any unmapped pairs
2. I did not see more mismatches from the reads to the assembly sequence than in other regions
Now my questions:
1. What are the properties of sequences where Illumina gives lower coverage? Any paper? Or software to check?
2. How else can I check if my assembly is fine in these low coverage areas?
Check GC content. Illumina library prep can have problems with both high- and low-GC regions. | biostars | {"uid": 159481, "view_count": 2098, "vote_count": 1}
PDB:4DMN is an HIV-1 integrase structure with a bound ligand (PDB:0L9). I want to bring this ligand into alignment with another, reference structure, PDB:3AO1.
I can do alignments of the two structures, and I can identify the ligand's heteroatoms by checking the residue.get_full_id() tags. But there are obviously no CA atoms associated with the ligand to use as references for `Superimposer` in the alignment. So I figure I need to do it in two steps: first align the two structures, then apply the transform found by `Superimposer` to the ligand.
But I don't know how to do the second bit. Thanks for any hints,
Rik
| Hi there,
Cross-posting from the Biopython mailing lists.
Superimposer() will give you the rotation/translation matrix you need to superimpose the two structures. Then you just need to apply them selectively to the atoms you want (with Superimposer.apply).
This script posted on [github][1] will do this and an additional sequence alignment step necessary when the two proteins are not exactly the same, in order to get matching atoms to perform the superimposition on.
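For the second bit specifically, a minimal sketch of the idea (assumes `fixed_ca` and `moving_ca` are matched lists of CA atoms picked from the two parsed structures):

    from Bio.PDB import Superimposer

    sup = Superimposer()
    sup.set_atoms(fixed_ca, moving_ca)       # rotation/translation computed from the protein CA atoms
    sup.apply(moving_structure.get_atoms())  # apply it to all atoms, ligand heteroatoms included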
[1]: https://gist.github.com/JoaoRodrigues/e3a4f2139d10888c679eb1657a4d7080 | biostars | {"uid": 195014, "view_count": 3559, "vote_count": 1} |
Hi everyone.
I find some protocols recommend using
```
pbmc <- ScaleData(pbmc, vars.to.regress = "percent.mt")
```
to regress out cell cycle influence or other "noise". But I am confused about why we cannot simply drop these genes in the `feature selection` step. For example, we could drop these genes from the highly variable gene set, thus protecting our later PCA and cluster analysis from their influence.
Best wishes
Guandong Shang | I still always advise people not to regress out cell cycle status or info. These can be biologically interesting and a big part of many phenotypes. Differing proportions of cycling cells between conditions or samples is both low hanging fruit and something that's easily validated at the bench. I really don't get why people try to remove that, as it's very simple to label populations as "cycling monocytes" and "monocytes" (or whatever) and create supersets as necessary.
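A sketch of the labeling route with Seurat's built-in scoring (assumes the `cc.genes` list shipped with Seurat and a Seurat v3+ object):

    # score the S and G2M programs per cell and set the inferred phase as the identity
    pbmc <- CellCycleScoring(pbmc, s.features = cc.genes$s.genes,
                             g2m.features = cc.genes$g2m.genes, set.ident = TRUE)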
I'd prefer that Seurat drop that part of their cell cycle vignette, I have yet to see a case where it's actually helpful. | biostars | {"uid": 9533437, "view_count": 479, "vote_count": 1} |
Hi,
I just downloaded the IGB genomics viewer and would like to use the WS220 version of the C. elegans reference genome. I thought that this version corresponded to the ce10 build according to UCSC:
> This directory contains the Oct. 2010 (WS220/ce10) assembly of the
> C. elegans genome (ce10, Washington University School of Medicine
> GSC and Sanger Institute WS220), as well as repeat annotations and
> GenBank sequences.
Within IGB, when I select the genome version "C_elegans_Oct_2010", the title within the browser window is listed as "C. elegans Oct 2010 (WS140/ce10)".
So now I am confused. Does this genome version within IGB correspond to the WS220 version or the WS140 version of the C. elegans reference genome? Thank you so much for your help! | Hi smpyonteck,
The C. elegans genome version in IGB, "C_elegans_Oct_2010", is the most recent version, i.e. WS220/ce10. The title within the browser, "WS140/ce10", is a typo in our synonym file.
Thank you for finding this!
Nowlan
| biostars | {"uid": 137318, "view_count": 2236, "vote_count": 1} |
So I have a sam file with extended CIGAR format like this:
17H87=1D12=1D2=3D32=1D4=2D5=2D13=....
And I want to convert it to a sam file containing regular CIGAR format that will look like this:
17S83M1D14M1D4M3D31M1D2M2D6M...
(they do not represent the same reads... I want only to visualize the problem)
Does anyone know of some software/script that will convert a sam/bam with extended CIGAR to a sam/bam with regular CIGAR?
It seems like a standard problem that might already have a solution, so I want to ask here before solving it the hard way (coding). | I've quickly written one: https://github.com/lindenb/jvarkit/wiki/Biostar234081
$ cat toy.sam
@SQ SN:ref LN:45
@SQ SN:ref2 LN:40
r001 163 ref 7 30 1M2X5=4I4M1D3M = 37 39 TTAGATAAAGAGGATACTG*XX:B:S,12561,2,20,112
$ java -jar dist/biostar234081.jar toy.sam
@HD VN:1.5 SO:unsorted
@SQ SN:ref LN:45
@SQ SN:ref2 LN:40
r001 163 ref 7 30 8M4I4M1D3M = 37 39 TTAGATAAAGAGGATACTG*XX:B:S,12561,2,20,112
| biostars | {"uid": 234081, "view_count": 2828, "vote_count": 1} |
Dear all,
I am testing TAXAassign and getting the following error:
[esther@localhost testes]$ ~/Softwares/TAXAassign-master/TAXAassign.sh -t 70 -m 60 -a "60,70,80,95,95,97" -f test.fasta
[2017-05-02 17:10:09] TAXAassign v0.4. Copyright (c) 2013 Computational Microbial Genomics Group, University of Glasgow, UK
[2017-05-02 17:10:09] Using ~/ncbi-blast-2.6.0+-src/c++/ReleaseMT/bin/blastn
[2017-05-02 17:10:09] Using ~/Softwares/TAXAassign-master/scripts/blast_concat_taxon.py
[2017-05-02 17:10:09] Using ~/Softwares/TAXAassign-master/scripts/blast_gen_assignments.pl
[2017-05-02 17:10:09] Blast against NCBI's nt database with minimum percent ident of 60%, maximum of 10 reference sequences, and evalue of 0.0001 in blastn.
BLAST Database error: No alias or index file found for nucleotide database [~ncbi-blast-2.6.0+-src/c++/ReleaseMT/bin/nt] in search path [~/testes::]
I followed all steps here (https://github.com/umerijaz/TAXAassign). When I run update_blastdb.pl nt the tar.gz files are saved at: ncbi-blast-2.6.0+-src/c++/ReleaseMT/bin
I created the folder nt inside ncbi/bin and moved the tar.gz files there. I also tried to untar. Not working yet.
Any idea?
Kind regards | Have you tried setting the BLASTDB variable to the location where the nt files are located? | biostars | {"uid": 250604, "view_count": 1281, "vote_count": 1}
Hello,
I have aligned RNA-seq data to the human genome, and I used FastQC for pre-processing QC. Can I use FastQC for post-alignment QC also? Or is there a better way of doing it?
Thank you in advance. | RSeQC is used a lot, but I find [QoRTs](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4506620/) a bit more user-friendly, especially if you have numerous samples.
In addition, feeding the results of a) FastQC, b) STAR (or whatever aligner you've used), and c) featureCounts to [MultiQC](http://multiqc.info/docs/) is already quite useful.
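Running it is a one-liner once those outputs exist. A sketch, where the directory layout is an assumption:

    # scan a results directory for FastQC/STAR/featureCounts logs and write a single report
    multiqc results/ -o multiqc_report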
The typical things you want to look out for:
* at least 80% alignment rate
* not too many intronic/intergenic reads
* even gene body coverage
| biostars | {"uid": 273499, "view_count": 7306, "vote_count": 4} |
Hi to all.
I have a really basic question. Studying ChIP coupled with q-PCR or sequencing, I see different levels of a certain protein across a genomic region. I cannot really understand how we can have different protein levels. I thought that the binding of a protein to a region is binary: it occurs or not. But, as I understand it, we can have different concentrations/quantities of protein bound to a region? And if this is the case, how do we finally get DNA fragments in ChIP-qPCR, i.e. how can we measure this binding from the quantity of fragments?
Thank you very much | While a protein is either bound or not to a particular chromosome in a particular cell, you're not doing ChIP qPCR on single cells, but rather on a bulk of cells. Heck, even within a single cell you can have different amounts of binding due to the cell's ploidy. | biostars | {"uid": 230844, "view_count": 1180, "vote_count": 1} |
Hi,
I just started to work with single end reads, which are already trimmed for adapter sequences and quality.
Do I have to trim the reads now to the same length, e.g. 100nt, for mapping them with STAR?
Is there a negative effect, if I don't? | If they are already trimmed for adapters and quality, don't trim more. Trimming will make sequences shorter, and shorter sequences tend to map more to multiple locations.
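If you do end up wanting the hard length filter described below, it is a one-liner; a sketch assuming BBMap's `reformat.sh`:

    # drop reads shorter than 70 bp after trimming
    reformat.sh in=reads.fastq out=filtered.fastq minlength=70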
What is the length range of your reads? I generally keep reads only within a certain range, and discard the shorter reads. For example, for a 100bp dataset, I keep reads from 70-100bp after trimming, and discard the rest. | biostars | {"uid": 301761, "view_count": 4513, "vote_count": 2} |
I have a Salmonella-treated vs KD condition. I want something like the figure below, but I am not sure how the linear regression can be done.
Gene SCRSAL1 KDSAL3
CXCL1 10.8273907686 11.2751328718
CXCL2 8.001064472 8.5753884341
CXCL3 8.1965832765 8.7086794002
CXCL5 8.0973357585 8.6242736829
CXCL6 9.9053940183 10.8940613053
CXCL8 13.1127454083 13.4402538026
CXCL10 7.2590758038 7.8202130661
CCL20 9.0933584064 9.9218889828
CCL22 7.3643976273 7.7911788897
CCL28 8.3723507486 8.6786344048
IL36G 4.3423919715 4.5673548181
This is my dataset; the kind of figure I am trying to make is this: ![enter image description here][1]
Any suggestion or help would be highly appreciated
[1]: https://i.imgur.com/72t9hkq.png | With perfect data, if you plot gene levels for knockdown vs control, any gene above the 45˚ line is upregulated in the knockdown compared to control, the farther away from the line, the stronger the effect. A linear regression will help in dealing with imperfect data. To help decide on which genes are upregulated, you can calculate (and plot) a confidence interval on each side of the regression line.
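A minimal base-R sketch of that idea (assumes `df` holds the log2 values with the column names from your table):

    fit <- lm(KDSAL3 ~ SCRSAL1, data = df)
    plot(df$SCRSAL1, df$KDSAL3, xlab = "control (log2)", ylab = "knockdown (log2)")
    abline(fit)             # regression line
    abline(0, 1, lty = 2)   # the 45-degree line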
For how to do this in R, see this [Rstudio notebook][1].
[1]: https://rpubs.com/aaronsc32/regression-confidence-prediction-intervals | biostars | {"uid": 293056, "view_count": 925, "vote_count": 1} |
I have several datasets in vcf format (about 1GB per file) and I want to extract SNP info for each sample. I usually use python, so I tried pyvcf, but it works slowly on such large datasets. Are there any better ways to do that, especially in python? Thanks in advance
| A trick is to compress the VCF file with bgzip, and index it with tabix. Both these tools are downloadable from the [tabix](http://sourceforge.net/projects/samtools/files/tabix/) home page.
See also [this page](http://vcftools.sourceforge.net/perl_module.html) for more documentation. Follow the instructions there:
bgzip my_file.vcf
tabix -p vcf my_file.vcf.gz
I am not sure if pyvcf supports compressed and indexed files. According to the [source code](http://pyvcf.readthedocs.org/en/latest/_modules/vcf/parser.html), it seems so.
EDIT: yes, it seems that pyvcf supports compressed and indexed VCF files. See the example in the [pyvcf documentation](http://pyvcf.readthedocs.org/en/latest/INTRO.html):
>>> vcf_reader = vcf.Reader(filename='vcf/test/tb.vcf.gz')
>>> # fetch all records on chromosome 20 from base 1110696 through 1230237
>>> for record in vcf_reader.fetch('20', 1110695, 1230237):
... print record
Record(CHROM=20, POS=1110696, REF=A, ALT=[G, T])
Record(CHROM=20, POS=1230237, REF=T, ALT=[None]) | biostars | {"uid": 101444, "view_count": 10008, "vote_count": 1} |
Hello folks!
I want to mask a genome with a particular repeat library using RepeatMasker.
Then I want to cross the coordinates of the repeats with those of gene annotations to find overlaps between them and study associations and stuff.
I'm only starting to consider feasible ways to do that, so any input would be great.
Thanks!
| Sounds like a job for [BedTools][1]. You should be able to make a GFF from the RepeatMasker output and use `bedtools intersect` to find overlaps.
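A sketch of the intersect step (file names are assumptions; RepeatMasker's `.out` table can be converted first, e.g. with its own `-gff` option or BEDOPS' `rmsk2bed`):

    # report each repeat/gene overlap plus the number of overlapping bases (-wo)
    bedtools intersect -a repeats.gff -b genes.gff -wo > repeat_gene_overlaps.txt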
[1]: http://bedtools.readthedocs.org/en/latest/ | biostars | {"uid": 164800, "view_count": 2600, "vote_count": 1} |
Dear all,
I have paired-end RNA-seq data (Illumina) from a parasite and I would like to do de novo assembly with TRINITY. I have the reference genome of my host organism, so I can map my data to the host and remove the contamination from the fastq files.
My plan is:
1. Map my paired-end FASTQ files to a host reference genome with bwa/bowtie/novoalign
2. Remove hits from fastq files (cleaning contaminations)
3. For the rest of FASTQ files use TRINITY for De-Novo transcript assembly
My question is:
May I use aligners (bwa etc.) to align the raw fastq files to host DNA and then remove the contaminants from the fastq files? The question arises because my data are from an RNA-seq project, NOT DNA.
How can I remove the sequences from the raw fastq files that align to the host DNA (the cleaning process)?
Or if you have any other advice on how to prepare data for the TRINITY pipeline, I would appreciate it.
Thank you so much for any comment and sharing your experience. | If you have RNAseq data, you'd be better off sticking with an aligner intended for spliced alignments (e.g. STAR). Most of these have an option to place unmapped reads/pairs in new fastq file(s), which you could then feed to Trinity or any other assembler (i.e., step #2 will be done for you). I don't have any advice on good assemblers; hopefully others will chime in with feedback there.
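A sketch of that route with STAR (index and file names are assumptions):

    # align to the HOST genome; unmapped (putative parasite) pairs go to Unmapped.out.mate1/2
    STAR --genomeDir host_index --readFilesIn R1.fastq R2.fastq --outReadsUnmapped Fastx
    # then, e.g.: Trinity --seqType fq --left Unmapped.out.mate1 --right Unmapped.out.mate2 --max_memory 50G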
| biostars | {"uid": 120756, "view_count": 5773, "vote_count": 1} |
I am attempting to call mutations from raw RNA-Seq of single prostate circulating tumor cells ([paper](http://science.sciencemag.org/content/349/6254/1351.abstract)) without matched normals.
From cursory research I found that tumor-only calling without a matched normal calls 117x more variants than matched tumor/normal pairs ([Appistry](http://www.appistry.com/2014/mutect-tumor-samples-vs-tumornormal-matched-pairs/)). I also found an interesting study that (successfully) discriminated somatic and germline mutations without matched normals by using a *virtual normal* (i.e., normal samples from unrelated individuals) ([paper](https://www.ncbi.nlm.nih.gov/pubmed/26209359)).
My question is, are blood tumor samples treated any differently than tissue tumor samples with respect to calling variants? If there are no normal blood samples available, is filtering against the EVS dataset my best option? Thank you for your time and help.
| Some of this has been covered in previous questions on here (dig around a little), but the gist is this
- you will never get rid of all germline mutations without a matched normal.
- make sure your variant caller is appropriate and doesn't bias you towards mutations at 50%/100% VAF, as some germline callers can
- a panel of normals can be very helpful in filtering out both common population variants and sequencing artifacts. Every individual contains private rare mutations that will ultimately be indistinguishable from a somatic hit, though.
- yes, use EVS (or maybe even the non-TCGA-derived part of ExAC) to remove common variants.
- RNAseq complicates things even further, because you're dealing with an enhanced error rate. | biostars | {"uid": 174101, "view_count": 3404, "vote_count": 2} |
I have a `data.frame` with 48 columns. the columns are samples in triplicates, the rows are genes. The values are expression values.
For each group of triplicates I would like to calculate the averaged expression per gene, resulting in a new `data.frame` with 16 columns and for each row the average of the three triplicates for each group.
can this be done with the `tidyverse` tools?
thanks
a small example table of 6x9 is here
dput(head(normCounts[,1:9]))
structure(c(0, 4.89997034943019, 2.4499851747151, 0, 46.5497183195869,
14.6999110482906, 0.998187766715749, 1.9963755334315, 0, 0.998187766715749,
55.898514936082, 7.98550213372599, 0, 1.57112407949228, 0, 1.57112407949228,
53.4182187027374, 4.71337223847683, 0, 1.25548317693578, 0, 0,
52.7302934313026, 10.0438654154862, 0, 0, 0, 0, 66.3962127189125,
23.2386744516194, 2.18533123780511, 3.27799685670766, 0, 0, 65.5599371341532,
9.83399057012298, 0, 0, 0, 0, 74.1086143860152, 18.9580176336318,
0, 0, 0, 0, 66.8826789069951, 13.376535781399, 0, 0, 0, 0, 50.7776960416371,
13.0791035258762), .Dim = c(6L, 9L), .Dimnames = list(c("ENSMUSG00000103147",
"ENSMUSG00000102269", "ENSMUSG00000096126", "ENSMUSG00000102735",
"ENSMUSG00000098104", "ENSMUSG00000102175"), c("Sample_1", "Sample_2",
"Sample_3", "Sample_4", "Sample_5", "Sample_6", "Sample_7", "Sample_8",
"Sample_9"))) | It will be a lot simpler to first make a group key table.
library("tidyverse")
groups <- tibble(
sample=colnames(normCounts),
group=rep(seq(1, ncol(normCounts)/3), each=3)
)
> groups
# A tibble: 9 x 2
sample group
<chr> <int>
1 Sample_1 1
2 Sample_2 1
3 Sample_3 1
4 Sample_4 2
5 Sample_5 2
6 Sample_6 2
7 Sample_7 3
8 Sample_8 3
9 Sample_9 3
Then you can pivot your count data to long format, join the groups, and group_by/summarize to get the means.
mean_exp <- normCounts %>%
as_tibble(rownames="gene") %>%
pivot_longer(starts_with("Sample"), names_to="sample", values_to="counts") %>%
left_join(groups, by="sample") %>%
group_by(gene, group) %>%
summarize(mean_count=mean(counts))
> head(mean_exp)
# A tibble: 6 x 3
# Groups: gene [2]
gene group mean_count
<chr> <int> <dbl>
1 ENSMUSG00000096126 1 0.817
2 ENSMUSG00000096126 2 0
3 ENSMUSG00000096126 3 0
4 ENSMUSG00000098104 1 52.0
5 ENSMUSG00000098104 2 61.6
6 ENSMUSG00000098104 3 63.9
You can pivot back to a wider format if you want too.
mean_exp_wider <- pivot_wider(mean_exp, names_from=group, values_from=mean_count)
> mean_exp_wider
# A tibble: 6 x 4
# Groups: gene [6]
gene `1` `2` `3`
<chr> <dbl> <dbl> <dbl>
1 ENSMUSG00000096126 0.817 0 0
2 ENSMUSG00000098104 52.0 61.6 63.9
3 ENSMUSG00000102175 9.13 14.4 15.1
4 ENSMUSG00000102269 2.82 1.51 0
5 ENSMUSG00000102735 0.856 0 0
6 ENSMUSG00000103147 0.333 0.728 0
| biostars | {"uid": 468575, "view_count": 1159, "vote_count": 1} |
Hello everyone,
I got a few flanking sequences around a bunch of SNPs. Since they were exported from biomart, the variant allele is shown as a _:
>2|28804|28804|rs544393009|dbSNP|C/T
TGCAT_TCAGC
>2|29358|29358|rs943169482|dbSNP|G/GAAAA
CCCAA_AAGCC
>2|28396|28396|rs781239227|dbSNP|T/C
AATGC_CTTGG
Can some experts help me replace the _ in each fasta sequence using the alleles from its header? That is, each fasta input will be turned into 2 or 3 flanking sequences:
>2|28804|28804|rs544393009|dbSNP|C
TGCATCTCAGC
>2|28804|28804|rs544393009|dbSNP|T
TGCATTTCAGC
And that is what I am desperately seeking for!!!
Thanks in advance!
| That's not that hard by matching the SNPs and flanking sequences using a regular expression.
To make it easy to process, let's linearize the FASTA sequences,
i.e. converting to tabular format. For example:
$ seqkit fx2tab seqs.fa
2|28804|28804|rs544393009|dbSNP|C/T TGCAT_TCAGC
From here we can follow the original answer with little modification. Note that the 1 or 2 prefixed to the alleles is used to keep the order; it will be removed in the end.
# FASTA -> tabular
# replace 'C/T\tTGCAT_TCAGC' with '1C\tTGCATCTCAGC'
# tabular -> FASTA
$ seqkit fx2tab seqs.fa | \
perl -pe 's/(\w+)\/(\w+)\t(\w+)_(\w+)/1$1\t$3$1$4/' | \
seqkit tab2fx > seqs.allel1.fa
# FASTA -> tabular
# replace 'C/T\tTGCAT_TCAGC' with '2T TGCATTTCAGC'
# tabular -> FASTA
$ seqkit fx2tab seqs.fa | \
perl -pe 's/(\w+)\/(\w+)\t(\w+)_(\w+)/2$2\t$3$2$4/' | \
seqkit tab2fx > seqs.allel2.fa
Then sort records in the two files by sequence ID in alphabetical order
# sort by ID
# remove the 1/2 before alleles
$ seqkit sort seqs.allel*.fa |\
perl -pe 's/\|\d(\w+)$/\|$1/'
>2|28396|28396|rs781239227|dbSNP|T
AATGCTCTTGG
>2|28396|28396|rs781239227|dbSNP|C
AATGCCCTTGG
>2|28804|28804|rs544393009|dbSNP|C
TGCATCTCAGC
>2|28804|28804|rs544393009|dbSNP|T
TGCATTTCAGC
>2|29358|29358|rs943169482|dbSNP|G
CCCAAGAAGCC
>2|29358|29358|rs943169482|dbSNP|GAAAA
CCCAAGAAAAAAGCC
<hr/>
#### Original answer:
That's not that hard by matching the SNPs and flanking sequences using a regular expression.
# replace 'C/T TGCAT_TCAGC' with 'C TGCATCTCAGC'
$ perl -pe 's/(\w+)\/(\w+) (\w+)_(\w+)/$1 $3$1$4/' seqs.fa > seqs.allel1.fa
# replace 'C/T TGCAT_TCAGC' with 'T TGCATTTCAGC'
$ perl -pe 's/(\w+)\/(\w+) (\w+)_(\w+)/$2 $3$2$4/' seqs.fa > seqs.allel2.fa
If the order between the two alleles is not important, just sort them by sequence ID in alphabetical order:
$ seqkit sort seqs.allel*.fa
>2|28804|28804|rs544393009|dbSNP|C TGCATCTCAGC
ACTGN
>2|28804|28804|rs544393009|dbSNP|T TGCATTTCAGC
ACTGN
>2|29358|29358|rs943169482|dbSNP|G CCCAAGAAGCC
actgn
>2|29358|29358|rs943169482|dbSNP|GAAAA CCCAAGAAAAAAGCC
actgn
If the order is important, it can be done with a trick.
# replace 'C/T TGCAT_TCAGC' with '1C TGCATCTCAGC'
$ perl -pe 's/(\w+)\/(\w+) (\w+)_(\w+)/1$1 $3$1$4/' seqs.fa > seqs.allel1.fa
# replace 'C/T TGCAT_TCAGC' with '2T TGCATTTCAGC'
$ perl -pe 's/(\w+)\/(\w+) (\w+)_(\w+)/2$2 $3$2$4/' seqs.fa > seqs.allel2.fa
# remove the 1/2 before allels after sort
$ seqkit sort seqs.allel*.fa | perl -pe 's/\|\d(\w+) /\|$1 /'
>2|28804|28804|rs544393009|dbSNP|C TGCATCTCAGC
ACTGN
>2|28804|28804|rs544393009|dbSNP|T TGCATTTCAGC
ACTGN
>2|29358|29358|rs943169482|dbSNP|G CCCAAGAAGCC
actgn
>2|29358|29358|rs943169482|dbSNP|GAAAA CCCAAGAAAAAAGCC
actgn
| biostars | {"uid": 272783, "view_count": 1662, "vote_count": 1} |
Hi all,
I was wondering if the order of records in a bamfile produced by bwa aln/mem/sampe is guaranteed to be the same as in the fastq files that were used as input.
I checked the bwa manual but the only thing I found is this: "Repetitive read pairs will be placed randomly" (part of sampe description).
If anyone has an idea, I'd welcome your feedback!
Best,
~Lina | Yes, the same. | biostars | {"uid": 116300, "view_count": 3930, "vote_count": 3} |
I have a list of about 800 KEGG IDs for my genes of interest. I want to plot those genes and get a nice KEGG mapped plot for each of my 800 genes. How can I do this with an R package? Thanks | You may use the `pathview` package:
~~1) use `kegg.gsets()` to download the KEGG pathways of your species.~~
~~2) map your gene ids to the pathways - I use `ids2indices()` from limma.~~
3) use `pathview()` to download KEGG pathway figures and color your genes on them.
Example:
library(pathview)
test.uid <- c("5494737" , "5495078", "5495093", "5494418", "5495039")
pv.out <- pathview(gene.data = test.uid, pathway.id = "ssl00190", species = "ssl",
out.suffix = "kegg.get.all", kegg.native = T) | biostars | {"uid": 274813, "view_count": 3087, "vote_count": 1} |
Hi,
I am a beginner in RNA-seq. Now I am doing some analysis on paired-end data. For filtering the raw data, I use Trimmomatic to classify the data into two parts: unpaired reads and paired reads.
So, I don't understand what the unpaired reads are. Since every read should have its partner, and I always get the same number of reads in Reads 1 and Reads 2, why do unpaired reads exist?
Thanks for your reply.
Yunlong | When a trimming program trims data, one of the reads (assuming you are using paired-end data) may become short and fail a criterion you have set (e.g. minimum length 25 bp). At that point the read is removed by the trimming program. Let us say that was from the R2 file. A trimming program should remove the corresponding R1 read from the R1 file (even though that read may have passed) when the R2 read is dropped. If it does not do that, you are left with an unpaired read in the R1 file.
Aligners expect reads to be in the same order in R1/R2 files. If they are not, then you can get strange results (e.g. discordant mapping). Presence of unpaired reads in the main sequence files (as @mastal points out, Trimmomatic should collect them in separate files) generally signifies improper use of a trimming program (or using a trimmer that is not paired-end aware). If that happens, the `repair.sh` tool from the BBMap suite can be used to remove those unpaired reads and bring the R1/R2 files back in sync. | biostars | {"uid": 192788, "view_count": 7717, "vote_count": 3}
Hi, I'm having trouble with this task:
Write a perl script that will generate a new output file (“task1 output.txt”) which contains
the sequence name, length, and GC-content for each sequence. There should be a header
line which identifies the contents of columns (so the first line in the output file should be
“SeqName Length GC-Content” or something similar). The GC-content of a sequence is
defined as the percentage of bases that are G or C (from 0% to 100%), and a high GC-content
is associated with coding sequences.
>Seq1
ACGT
Then your output file should look like:
SeqName Length GC-Content
Seq1 4 50
I can do the in and out for the file handles, but I'm confused as to what to put in my while loop, and how it will know what to match in the file? | Like mentioned, GC content is the percentage of bases that are G or C in the sequence. The percentage is calculated quite easily using basic math once you obtain the count of bases that are G or C and the total sequence length for each sequence. Your loop will iterate through the file, executing its code block for each sequence it finds.
I cannot - and I hope others do not too - provide code. That would cripple learning. | biostars | {"uid": 181438, "view_count": 2074, "vote_count": 2} |
Hi everybody,
I have a list of recurrent mutations from a WGS dataset. I would like to know on which exons they sit (the exon number for a certain transcript).
If there are multiple transcripts at a certain position, annovar puts out a list with all nonsynonymous SNVs and I can just pick the transcript I want. My only problem is the following: If there is for example a stoploss for any of the transcripts at that genomic position, ANNOVAR doesn't output the nonsynonymous SNVs anymore, but only a list of mutation of that category, which has a higher precedence.
Is there any way to change that? Can I either change the precedence, or tell the software to output the effects on all transcripts or anything similar?
Best regards, and thanks a lot in advance,
Gero
|
Hello, all that you need to add is the `--separate` command line parameter:
#1, view input:
cat test.ann
3 38182727 38182727 A C
3 38182316 38182316 A G
3 38182641 38182641 T C
#2, annotate:
perl annotate_variation.pl -out ex1 -build hg19 test.ann /Programs/annovar/humandb/ --separate
#3, view output:
cat ex1.exonic_variant_function
line1 nonsynonymous SNV MYD88:NM_001172568:exon4:c.A745C:p.T249P,MYD88:NM_002468:exon5:c.A880C:p.T294P,MYD88:NM_001172567:exon5:c.A904C:p.T302P, 3 38182727 38182727 A C
line2 nonsynonymous SNV MYD88:NM_001172568:exon3:c.A617G:p.K206R,MYD88:NM_001172569:exon3:c.A571G:p.N191D,MYD88:NM_002468:exon4:c.A752G:p.K251R,MYD88:NM_001172566:exon2:c.A436G:p.N146D,MYD88:NM_001172567:exon4:c.A776G:p.K259R, 3 38182316 38182316 A G
line3 stoploss MYD88:NM_001172569:exon4:c.T613C:p.X205R,MYD88:NM_001172566:exon3:c.T478C:p.X160R, 3 38182641 38182641 TC
line3 nonsynonymous SNV MYD88:NM_001172568:exon4:c.T659C:p.L220P,MYD88:NM_002468:exon5:c.T794C:p.L265P,MYD88:NM_001172567:exon5:c.T818C:p.L273P, 3 38182641 38182641 T C
| biostars | {"uid": 320096, "view_count": 1779, "vote_count": 1} |
Hi!
I'm starting to use kallisto to do transcript-level expression quantification.
I have some questions:
**1)** Does `kallisto` infer the strandness of the input data just like `salmon` does (`--libType A`)? I guess the answer is no.
**2)** On the other hand, `kallisto` has these two options:
--fr-stranded Strand specific reads, first read forward
--rf-stranded Strand specific reads, first read reverse
Are these options only working for PE data?
**3)** Regarding the fragment length estimation when using SE datasets:
-l, --fragment-length=DOUBLE Estimated average fragment length
-s, --sd=DOUBLE Estimated standard deviation of fragment length
(default: -l, -s values are estimated from paired
end data, but are required when using --single)
What does `DOUBLE` mean? Do we have to specify the double of the number calculated?
Thank you in advance
| 1. No
2. They should work for SE data too (never tried, though). You probably want --rf-stranded for anything remotely recent.
3. An example of a double is `200.0` or `123.4`. That is, any number with a decimal point. The documentation there should really be changed, since I don't expect those without C/C++/etc. programming experience to know that "double" means "double precision floating point value" (or what that even means)). | biostars | {"uid": 252823, "view_count": 8452, "vote_count": 1} |
Can anyone give a definitive answer: **what are paired and unpaired reads from Trimmomatic**? What kind of sequences are in the unpaired reads? Please see [the question link and my comments][1]. Many, many thanks! Here are my comments:
> "Hi Genomax, I see your reply for this question, but I still do not
> understand what is unpaired reads or unpaired.fastq file? based on
> your answer, my understanding is that for a Paired End sequencing,
> generally, the types of sequences in the R1 file is equal to that in
> the R2 file. Here, we do not care about the number of each sequence.
> for instance, if one sequence cannot pass the QC (set in
> trimmomatic)in R1 file, but this sequence pass the QC in R2 file,
> however, this sequence in both R1 and R2 file will be classified into
> unpaired reads/.fastq file, which means all the copies in R1 and R2
> files also will be classified into the unpaired reads. Or another
> understanding is that one sequence exist in both R1 and R2 file, but
> one copy in either R1 or R2 cannot pass the QC, this copy will be
> classified into unpaired fastq file/reads. (I think the second view
> might right). if so, some guys also mentioned using the unpaired for
> alignment/mapping with BWA, whether these under-QC sequences should be
> dealt with trimmomatic again with a strict set? are they useful?
> Finally, what kind of sequences are in unpaired.fastq file? please
> give some examples. my email is [email protected] Thanks a lot!"
[1]: https://www.biostars.org/p/192788/ | > my understanding is that for a Paired End sequencing, generally, the
> types of sequences in the R1 file is equal to that in the R2 file.
I don't know what you mean by that. There should only be one type/format of sequences, fastq. Number of reads in `R1/R2` files will be identical when the sequence comes off the sequencer.
> Here, we do not care about the number of each sequence.
Some may not (if you are willing to accept files that are out of sync in terms of the order of `R1/R2` reads). Aligners will produce odd results if you use such a file.
> if one sequence cannot pass the QC (set in trimmomatic)in R1 file, but
> this sequence pass the QC in R2 file, however, this sequence in both
> R1 and R2 file will be classified into unpaired reads/.fastq file,
> which means all the copies in R1 and R2 files also will be classified
> into the unpaired reads.
That is the reason you should trim reads together and use a trimming program that is PE aware. There should be only one copy of sequence for each cluster coordinate.
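For reference, a sketch of a PE-aware Trimmomatic call producing the paired and unpaired outputs discussed here (file names and settings are placeholders):

    java -jar trimmomatic.jar PE R1.fastq R2.fastq \
        R1.paired.fq R1.unpaired.fq R2.paired.fq R2.unpaired.fq \
        SLIDINGWINDOW:4:20 MINLEN:36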
| biostars | {"uid": 347399, "view_count": 4494, "vote_count": 1} |
I recently used BLAT to generate a large number of alignments, but the input files were at the chromosome level, not individual genes. I realize BLAST and filtering the subject by taxonomic ID is an option, but BLAT seems to be a stricter option. The alignments are 60bp with no gaps. What would be the best way to find the gene associated with each of these regions, if it exists?
| As @AndreiR has said...
Download a bed file of all the genes of interest (from the ucsc table browser you can download ready-made bed files)
then make a bed file from your alignments:
then install [bedtools](https://code.google.com/p/bedtools/) and:
intersectBed -a alignments.bed -b refseq.bed -wo > intersections.txt
This should give you all the info you need. | biostars | {"uid": 71230, "view_count": 3088, "vote_count": 1} |
I have a biopython program that uses newick NHX extensions to pass information to the "R" tree plotting program ggtree. I am certain this used to work, but with the current version of biopython, it does not seem possible to write arbitrary comments in newick output.
(1) Am I confused, has BioPython phylo.write('newick') never supported NHX comments?
(2) Is there an option for phylo.write() to write out the comment string?
| This is occurring because of a typo in the BioPython Phylo/NewickIO.py code (line 87), where the word "comment" is mis-spelled "coment". Fixing the typo fixes the problem. I have opened an issue on GitHub.
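Once patched, a minimal round-trip sketch (the tree and comment string here are made up):

    from io import StringIO
    from Bio import Phylo

    tree = Phylo.read(StringIO("(A,B)C;"), "newick")
    tree.root.comment = "&&NHX:S=human"   # the NHX payload travels in the clade's comment
    Phylo.write(tree, "out.nhx", "newick")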
| biostars | {"uid": 477713, "view_count": 469, "vote_count": 1} |
I just want to make it clear.
I need to calculate FPKM. I use this formula:
**Normalized = [(raw_read_count)(10^9)] / [(gene_length)(XXXX)],**
*XXXX = the count of all reads that are aligned to protein-coding genes in that alignment.*
How should I calculate XXXX? Is it just the sum of **all** raw_read_counts after htseq-count (e.g. in R it would be XXXX <- sum(column_with_raw_read_counts))?
Thanks! | I've got an answer from the GDC portal:
> 1. Download GTF files used in HTSeq analyses: https://gdc.cancer.gov/about-data/data-harmonization-and-generation/gdc-reference-files (GDC.h38 GENCODE v22 GTF)
> 2. Extract only protein-coding gene IDs: less gencode.v22.annotation.gtf | grep "\tgene\t" | grep protein_coding | cut -f9 | cut -f2 -d '"' > EnsembleIDsPCG.txt
> 3. Use resulting list to extract only protein-coding values from counts file: less CountFile.txt | grep -Ff ProteinCodingGeneList.txt > CountOnlyProt.txt
> 4. Sum the values of "CountOnlyProt.txt" and that will give you your denominator value.
My problem was that I had counted reads for all genes, when I should have counted only protein-coding ones.
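With that denominator in hand, the formula from the question is a one-liner in R (sketch; `counts` and `lengths` are assumed per-gene vectors, and `N` is the protein-coding total from step 4):

    fpkm <- counts * 1e9 / (lengths * N)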
P.S. thanks to GDC support team! | biostars | {"uid": 237844, "view_count": 15533, "vote_count": 1} |
Hi,
I need the .dat file for the UniProt Metazoa. .dat has the EMBL format, which I need. But I don't see it here:
**https://www.uniprot.org/taxonomy/33208#**
I can download the clade specific .dat files from here: **https://www.uniprot.org/downloads**. But I don't see the metazoan .dat file here.
Can anybody help?
Cheers!! | Go to this page: https://www.uniprot.org/uniprot/?query=reviewed:yes%20taxonomy:33208 and click on the Download button. If you choose `Text` from the drop-down list for Format, you will get the data in EMBL format. | biostars | {"uid": 357012, "view_count": 1278, "vote_count": 1} |
Hello everyone
MuTect2 from GATK3 discarded `87.33%` of the reads. This happened while preparing a panel of normals.
I will use sp1.vcf in the PON creation!
sp1.vcf: sp1.bam
java -jar ${GATK}/GenomeAnalysisTK.jar \
-T MuTect2 \
-R ${hg38}.fasta \
-I:tumor sp1.bam \
--dbsnp ${DBSNP} \
--cosmic ${COSMIC} \
--artifact_detection_mode \
-o sp1.vcf
**Does this seem okay to you? Is there anything I can do to fix non primary reads `(42.77%)` and duplicate reads `(44.56%)`?**
INFO 00:57:17,188 MicroScheduler - 158885515 reads were filtered out during the traversal out of approximately 181931389 total reads (87.33%)
INFO 00:57:17,190 MicroScheduler - -> 9210 reads (0.01% of total) failing BadCigarFilter
INFO 00:57:17,192 MicroScheduler - -> 81066454 reads (44.56% of total) failing DuplicateReadFilter
INFO 00:57:17,193 MicroScheduler - -> 0 reads (0.00% of total) failing FailsVendorQualityCheckFilter
INFO 00:57:17,195 MicroScheduler - -> 0 reads (0.00% of total) failing MalformedReadFilter
INFO 00:57:17,197 MicroScheduler - -> 0 reads (0.00% of total) failing MappingQualityUnavailableFilter
INFO 00:57:17,198 MicroScheduler - -> 77809851 reads (42.77% of total) failing NotPrimaryAlignmentFilter
INFO 00:57:17,200 MicroScheduler - -> 0 reads (0.00% of total) failing UnmappedReadFilter
------------------------------------------------------------------------------------------
Done. ------------------------------------------------------------------------------------------
**This is how the STAR log summary looks:**
UNIQUE READS:
Uniquely mapped reads number | 29463720
Uniquely mapped reads % | 72.35%
Average mapped length | 271.00
Number of splices: Total | 21158459
Number of splices: Annotated (sjdb) | 21067563
Number of splices: GT/AG | 20952149
Number of splices: GC/AG | 126813
Number of splices: AT/AC | 5622
Number of splices: Non-canonical | 73875
Mismatch rate per base, % | 0.48%
Deletion rate per base | 0.02%
Deletion average length | 1.27
Insertion rate per base | 0.01%
Insertion average length | 1.84
MULTI-MAPPING READS:
Number of reads mapped to multiple loci | 9507878
% of reads mapped to multiple loci | 23.35%
Number of reads mapped to too many loci | 355263
% of reads mapped to too many loci | 0.87%
UNMAPPED READS:
% of reads unmapped: too many mismatches | 0.00%
% of reads unmapped: too short | 2.76%
% of reads unmapped: other | 0.67%
CHIMERIC READS:
Number of chimeric reads | 0
% of chimeric reads | 0.00%
| If you are sure you want to retain non-primary and duplicate reads, I think you can add to MuTect2 the options `--disable_read_filter NotPrimaryAlignmentFilter --disable_read_filter DuplicateReadFilter` (not tested). See some docs [here.][1]
[1]: https://software.broadinstitute.org/gatk/documentation/tooldocs/3.8-0/org_broadinstitute_gatk_tools_walkers_cancer_m2_MuTect2.php | biostars | {"uid": 298993, "view_count": 1963, "vote_count": 1} |
Hi,
We have experimentally found that a given gene X causes some phenotype. To start exploring the underlying mechanism, we decided to use TCGA data, as it's freely available. So, within the cancer patients, can we simply divide the samples into two groups based on gene X expression?
The first thing that came to my mind was to Z-normalise gene X, take the [-inf, -1.96] and [+1.96, +inf] patients, and do differential gene expression analysis.
Please keep in mind that this is only for preliminary data, so we aim just to hand-wave and explore.
Any idea would make my day!
Best regards,
Tunc. | Do I understand correctly that you are dividing the samples by the expression of the gene (rather than observing the expression of the gene in different groups)?
If so, the first thing I would do is make a histogram of the log TPM and see if it is bimodal. Then if it is, you will know where the cutoff is. If it's not bimodal, it may be just a noisy gene. I would do it in GTEx for the normal tissue as well. | biostars | {"uid": 296610, "view_count": 1143, "vote_count": 1}
I used the 9th column of the `.bam` file to get the insert size of ATAC-seq data.
```
samtools view bam/${FILE}"_mm10.bam" | awk '$9>0' | cut -f 9 | sort | uniq -c | sort -b -k2,2n > insert_size_plotting/${FILE}".txt"
```
But I found all of the inserts are less than 500bp.
From the insert size plot, I can see there should be inserts longer than 500bp, so why doesn't bowtie2 report >500bp inserts?
![enter image description here][1]
Thanks
[1]: /media/images/c6891381-0797-4efd-8f9f-5d375f85 | By default, bowtie2 expects a maximum fragment length of 500 bp for paired-end alignments. To adjust this, change the `-X` or `--maxins` parameter.
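A sketch of the adjusted call (index and file names are placeholders):

    # allow fragments up to 1000 bp instead of the 500 bp default
    bowtie2 -X 1000 -x mm10_index -1 R1.fastq.gz -2 R2.fastq.gz -S out.sam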
http://bowtie-bio.sourceforge.net/bowtie2/manual.shtml | biostars | {"uid": 9528915, "view_count": 451, "vote_count": 1} |
I have been asked to recommend introductory books and resources for R and Bioconductor. My problem is that I have never read a book to learn R or Bioconductor, so I have no experience with this and cannot recommend one. I am mainly interested in introductory books, possibly targeting various groups of readers (computer scientists, molecular biologists, (bio-)statisticians); any recommendation is appreciated.
For example, I used the following resources:
- The [R manuals](http://cran.r-project.org/manuals.html), especially [the R intro](http://cran.r-project.org/doc/manuals/R-intro.html)
- There are also [a lot of contributed documents](http://cran.r-project.org/other-docs.html) on the R web site, but I didn't use them.
- If a package from Bioconductor interests me, I read the package vignette.
- I read the Bioconductor mailing list; that helps to see what other people use.
- I have the "[Venables, Ripley. S Programming](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-98966-2)" book, which is hardly introductory.
Which books did you find helpful or completely useless for learning R/Bioconductor? For example, [R Programming for Bioinformatics](http://www.bioconductor.org/pub/RBioinf/) looks promising; anybody read it?
Or do you share my reluctance towards R-books and prefer online resources?
| [R Programming for Bioinformatics (Chapman & Hall/CRC Computer Science & Data Analysis)](http://www.amazon.com/Programming-Bioinformatics-Chapman-Computer-Analysis/dp/1420063677)
[Bioinformatics and Computational Biology Solutions Using R and Bioconductor (Statistics for Biology and Health)](http://www.amazon.com/Bioinformatics-Computational-Solutions-Bioconductor-Statistics/dp/0387251464)
[Quick-R](http://www.statmethods.net/)
[cran: TheRGuide](http://cran.r-project.org/doc/contrib/Owen-TheRGuide.pdf)
[cran: An Introduction to R](http://cran.r-project.org/doc/manuals/R-intro.pdf)
[Google's R Style Guide](http://google-styleguide.googlecode.com/svn/trunk/google-r-style.html)
[A Beginner's Guide to R](http://www.springer.com/statistics/computanional+statistics/book/978-0-387-93836-3)
[R Tutorial Series: R Beginner's Guide and R Bloggers Updates](http://www.r-bloggers.com/r-tutorial-series-r-beginners-guide-and-r-bloggers-updates/)
| biostars | {"uid": 539, "view_count": 23484, "vote_count": 73}
I have a list of gene names in a file:
```
CHRNB2
EGR2
GCK
KRT14
LMNA
FGF3
TK2
ABCC8
```
How can I map them to UniProt IDs?
**P.S.** I tried UniProt "ID mapping" (from "GENEID" to "UNIPROTKB AC"), but it couldn't map them.
Please suggest what to do. Thanks! | Use Mygene.info. You can do batch requests via POST, or you can use the live API to do batch requests as well.
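For a scripted alternative, a sketch with the `mygene` Python client (assumes `pip install mygene`):

    import mygene

    mg = mygene.MyGeneInfo()
    # batch query: gene symbols in, UniProt accessions (among other fields) out
    res = mg.querymany(["CHRNB2", "EGR2", "GCK", "KRT14"], scopes="symbol",
                       fields="symbol,entrezgene,uniprot", species="human")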
Here's how via the live API:
Click on the "Try API live!", select "gene query service". Click on "post"
For "q" put in your gene names separated by a comma.
For "scopes" type "symbol" (without the quotation marks)
For "fields", use "symbol,entrezgene,uniprot" and any other parameter of interest
Click "try it" when done.
Result will be in the response body. | biostars | {"uid": 105329, "view_count": 23540, "vote_count": 2} |
Can anyone share a good GATK4 workflow? I couldn't find anything that explains the process in a crisp and easy way. | Completely agree - there's a real lack of clarity when it comes to documentation of some tools and options. And it's only by trawling through dozens of forum posts on any number of issues that you can begin to troubleshoot common problems. Example workflows won't resolve those headaches, but the following resources may help you get started.
1. IBM White Paper - I found this most useful because I didn't want a .wdl cromwell workflow. https://www.ibm.com/downloads/cas/ZJQD0QAL
2. NIH HPC has a good GATK4 tutorial/workflow with detailed explanations and examples that I wish had been available when I made my workflow - https://hpc.nih.gov/training/gatk_tutorial/
3. Broad github .wdl scripts - Not ideal if you don't want to use .wdl, but they can still clarify some things. Making them even less ideal, the Broad regularly deems these scripts as 'outdated'. https://github.com/gatk-workflows/broad-prod-wgs-germline-snps-indels and also https://github.com/gatk-workflows/gatk4-basic-joint-genotyping/blob/main/gatk4-basic-joint-genotyping.wdl
4. Workflow using Nextflow for the variant calling - https://github.com/gencorefacility/variant-calling-pipeline-gatk4
| biostars | {"uid": 9471704, "view_count": 2026, "vote_count": 2} |
(I also asked this question on the Bioconductor support site [here][1].)
I'm stumped. I'm trying to plot a few transcripts at the same time, given transcript names and a TxDb. These are examples of approaches I've tried:
# ------------------------------------------------------------------------------
# Setup:
library(TxDb.Hsapiens.UCSC.hg19.knownGene)
library(Gviz)
txdb <- TxDb.Hsapiens.UCSC.hg19.knownGene
# ------------------------------------------------------------------------------
# Try 1:
gr <- GenomicFeatures::exons(
txdb,
vals = list(tx_name = c("uc001aaa.3", "uc010nxq.1")),
columns = list("EXONNAME", "TXNAME", "GENEID"))
track <- Gviz::GeneRegionTrack(gr)
Gviz::plotTracks(track)
# Creates a plot, but doesn't show transcript grouping
![enter image description here][2]
# ------------------------------------------------------------------------------
# Try 2
gr <- GenomicFeatures::transcripts(
txdb,
vals = list(tx_name = c("uc001aaa.3", "uc010nxq.1")),
columns = list("EXONNAME", "TXNAME", "GENEID"))
track <- Gviz::GeneRegionTrack(gr)
Gviz::plotTracks(track)
# Creates a plot, but has no exon/intron information
![enter image description here][3]
# ------------------------------------------------------------------------------
# Try 3
gr <- exonsBy(txdb, by = "tx", use.names=TRUE)[c("uc001aaa.3", "uc010nxq.1")]
track <- Gviz::GeneRegionTrack(gr)
# Error in .fillWithDefaults(DataFrame(chromosome = as.character(seqnames(range)), :
# Number of elements in argument 'feature' is invalid
None of these work for me. I want to display exon/intron-structures and have arrows between exons. Does anyone have suggestions?
[1]: https://support.bioconductor.org/p/80221/
[2]: http://s8.postimg.org/dv9mopwhh/try1.png
[3]: http://s23.postimg.org/hi1ta61pn/try2.png | I found a solution that worked! Typical to find it shortly after asking the question, but here it is:
# ------------------------------------------------------------------------------
# Try 4
    # GRangesList with one GRanges of exons per transcript
    gr <- exonsBy(txdb, by = "tx", use.names=TRUE)[c("uc001aaa.3", "uc010nxq.1")]
    # unlist to a single GRanges; the element names carry the transcript IDs
    gr <- unlist(gr)
    # use those names to group exons by transcript
    elementMetadata(gr)$transcript <- names(gr)
    track <- Gviz::GeneRegionTrack(gr)
    Gviz::plotTracks(track)
![enter image description here][1]
Using exonsBy gives a GRangesList with one GRanges object for each transcript. Unlisting the GRangesList creates a named (important!) GRanges object with all exons. Using those names to set the $transcript metadata column solved the problem.
[1]: http://s16.postimg.org/ong5l77w5/try4.png | biostars | {"uid": 184091, "view_count": 4603, "vote_count": 1} |
I have a dataframe in R and I want to plot a subset of the data as a line graph in ggplot2. I have 8 different variables, with no guarantee that all 8 will appear in the subset I want to plot. How can I force ggplot to assign variable A to a particular colour code (#B35806) and H to #542788? I tried assigning this in the dataframe itself (a column holding #B35806 wherever A is present) and calling that in ggplot, but it did not help:
geom_line(aes(color=Line_color))
Advice would be very useful to this R noob. | I found a solution
    p + scale_colour_manual(values = c("A" = "#B35806", "H" = "#542788"))
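For context, a minimal self-contained sketch (data and variable names are made up): entries in `values` for levels absent from the plotted subset are simply ignored, so the colour mapping stays stable however the data is subset:

    library(ggplot2)
    df <- data.frame(x = rep(1:5, 2),
                     y = c(1:5, 5:1),
                     var = rep(c("A", "H"), each = 5))
    ggplot(df, aes(x, y, colour = var)) +
      geom_line() +
      scale_colour_manual(values = c("A" = "#B35806", "H" = "#542788"))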
<p>Hi there,</p>
<p>I have run TopHat successfully, including the fusions.out output. But I am unable to understand the directory structure, i.e. how to create it; although I have gone through the tutorial, I always end up with errors in TopHat-Fusion.</p>
<p>The created directories are:</p>
<pre><code>./blast_human/human_genomic*, nt* and other_genomic*
tophat_sample11 (containing the output files from TopHat2)
hg19
ensGene.txt
refGene.txt
mcl.txt
./blast/blastall and blastn
</code></pre>
<p>whenever I ran the command:</p>
<pre><code>./tophat-fusion-post -o ./FUSIONOUT --num-fusion-reads 1 --num-fusion-pairs 2 --num-fusion-both 5 hg19
</code></pre>
<p>the error is:</p>
<pre><code>[Mon Jan 21 16:00:55 2013] Beginning TopHat-Fusion post-processing run (v2.0.6)
[Mon Jan 21 16:00:55 2013] Extracting 23-mer around fusions and mapping them using Bowtie
samples updated
Traceback (most recent call last):
File "./tophat-fusion-post", line 2091, in ?
sys.exit(main())
File "./tophat-fusion-post", line 2059, in main
map_fusion_kmer(bwt_idx_prefix, params, sample_updated)
File "./tophat-fusion-post", line 315, in map_fusion_kmer
subprocess.call(cmd, stdout=open(output_dir + 'fusion_seq.bwtout', 'w'), stderr=open('/dev/null', 'w'))
File "/usr/lib64/python2.4/subprocess.py", line 412, in call
return Popen(*args, **kwargs).wait()
File "/usr/lib64/python2.4/subprocess.py", line 542, in __init__
errread, errwrite)
File "/usr/lib64/python2.4/subprocess.py", line 975, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
</code></pre>
<p>No log file is generated.</p>
<p>Thanking in advance!</p>
| <p>The error arose because Bowtie wasn't found in the PATH, so the PATH needs to be extended:</p>
<pre><code>export PATH=$PATH:/path/to/bowtie</code></pre>
<p>It worked for me...</p>
| biostars | {"uid": 61444, "view_count": 3829, "vote_count": 1} |
Hi,
We have a lot of bams from data that was sequenced and aligned before we received it. All of the bams are missing the read group (@RG) in the reads even though the read group is defined in the header. We ran `bamaddrg` (https://github.com/ekg/bamaddrg) to add the read groups to all reads and it appeared to run fine. The problem is that running Picard's MarkDuplicates now throws the following error on the bam with read groups added:
```
Exception in thread "main" htsjdk.samtools.SAMFormatException: SAM validation error: ERROR: Record 1642900, Read name HS2000-1005_167:8:1103:3541:88508, bin field of BAM record does not equal value computed based on alignment start and end, and length of sequence to which read is aligned
at htsjdk.samtools.SAMUtils.processValidationErrors(SAMUtils.java:452)
at htsjdk.samtools.BAMFileReader$BAMFileIterator.advance(BAMFileReader.java:643)
at htsjdk.samtools.BAMFileReader$BAMFileIterator.next(BAMFileReader.java:628)
at htsjdk.samtools.BAMFileReader$BAMFileIterator.next(BAMFileReader.java:598)
at htsjdk.samtools.SamReader$AssertingIterator.next(SamReader.java:514)
at htsjdk.samtools.SamReader$AssertingIterator.next(SamReader.java:488)
at picard.sam.MarkDuplicates.buildSortedReadEndLists(MarkDuplicates.java:413)
at picard.sam.MarkDuplicates.doWork(MarkDuplicates.java:177)
at picard.cmdline.CommandLineProgram.instanceMain(CommandLineProgram.java:183)
at picard.sam.MarkDuplicates.main(MarkDuplicates.java:161)
```
We do not get the error on the original bam. Another post suggested the option `VALIDATION_STRINGENCY=LENIENT`, which allows MarkDuplicates to proceed, but there are a fair number of reads that get this error. I don't know how these are being treated, but for our study we cannot ignore these reads. Perhaps we could MarkDuplicates before adding read groups, but I'm concerned about unknown consequences. Reindexing does not solve the problem. I also posted an issue on the bamaddrg github page. Hopefully the author will be able to make a suggestion.
Questions:
1. Does anyone know why we would get this error after adding read groups? There are no other differences in the reads as far as I can tell.
2. The error states that the bin is calculated based on alignment start and end. These values did not change! So why would the calculated bin change?
3. I came across one solution to convert the bam > sam > bam so the bins are recalculated, but we have 1000 whole genomes. That seems a bit ridiculous. Any other suggestions?
Thanks in advance! Let me know if I can clarify anything. | This might be a bug in the BamTools library, e.g. treating some edge case incorrectly.
For some reason, it always recalculates the bin number upon writing a record to the output file, and [here][1] is the place where it may change.
Now regarding your 3rd question: the Picard source code contains a [tool][2] specifically intended for fixing bin numbers, but it's not included in the distribution; you have to compile it yourself.
But I would suggest an alternative for this particular situation: modify Picard tools a little. MarkDuplicates doesn't use bin values at all, so wrong values won't affect the result. Fixing them requires adding a single line in htsjdk/src/java/htsjdk/samtools/BAMFileReader.java after the line `++this.samRecordIndex;`:
`mNextRecord.setIndexingBin(mNextRecord.computeIndexingBin());`
That should do the trick, as it forces all tools to recompute bin number on the fly.
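In context, the patched section would look roughly like this (a sketch; per the stack trace above, the line lives in the BAM record iterator, with surrounding code abridged):

    // htsjdk/src/java/htsjdk/samtools/BAMFileReader.java
    ++this.samRecordIndex;
    // force the bin to be recomputed from the alignment coordinates,
    // overriding whatever (possibly wrong) value was stored in the file:
    mNextRecord.setIndexingBin(mNextRecord.computeIndexingBin());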
I've uploaded the jars built with this patch [here][3].
[1]: https://github.com/pezmaster31/bamtools/blob/2d7685d2aeedd11c46ad3bd67886d9ed65c30f3e/src/api/internal/bam/BamWriter_p.cpp#L216-L218
[2]: https://github.com/samtools/htsjdk/blob/master/src/java/htsjdk/samtools/FixBAMFile.java
[3]: https://docs.google.com/uc?id=0BweRYMNLZaglUjNWaHFEWWg2S3c&export=download | biostars | {"uid": 110004, "view_count": 4927, "vote_count": 1} |
Dear friends, hi (*I'm not a native English speaker!*)
I want to search for orthologs locally using EggNOG in my **fish** transcriptome samples.
First I downloaded [fiNOG][1], the fish-specific data from [EggNOG][2],
then built an HMMER database (i.e. `cat fiNOG_hmm/*.hmm > fishDB.hmmer`) and ran `hmmpress fishDB.hmmer`.
Then I intend to run: `hmmscan --cpu 24 '/home/fiNOG_hmm/fishDB.hmmer' '/home/Transcriptome.fasta'`
**My question:** Can I use my transcriptome assembly .fasta file **directly** in this command, or must I convert (translate) it into protein in advance (and which tool is best for this job? *[TransDecoder][3]?*)?
Thank you
[1]: http://eggnogdb.embl.de/download/eggnog_4.5/data/fiNOG/
[2]: http://eggnogdb.embl.de/#/app/seqscan
[3]: https://transdecoder.github.io/ | There is now a better resource for functional annotation using eggNOG orthology. Check these links:
*Fast genome-wide functional annotation through orthology assignment by eggNOG-mapper*
- http://eggnog-mapper.embl.de (online tool)
- http://biorxiv.org/content/early/2016/09/22/076331 (method description and benchmark)
- https://github.com/jhcepas/eggnog-mapper (eggnog-mapper tool. Use the option --translate if using nucleotide seqs) | biostars | {"uid": 217105, "view_count": 4200, "vote_count": 2} |
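Based on the usage described in that README, a typical invocation for nucleotide transcripts would look roughly like this (a sketch - check `emapper.py --help` for the exact flags of your version):

    python emapper.py -i Transcriptome.fasta --translate --output transcriptome_annot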
Hi everyone,
we have a new Bio-Linux system in the lab (Bio-Linux 8, which is based on Ubuntu 14.04 LTS) and I am struggling to install the program [Vicuna][1]. I am wondering if anyone could help me sort it out. I think the major problem is the dependency on ncbi_tools++.
I have followed the installation instructions for external users and downloaded [the NCBI toolkit][2], then configured it specifying the path, and finally ran `make` and `make install`.
The problem is that every time I try to run `make` after configuring ncbi_tools++, I get various make errors. And if I try to ignore these errors and install Vicuna, the program will not run (I gave it a try anyway).
I have tried to install the program both in bash and zsh environments, as well as installing it in different locations, without much luck.
I know Bio-Linux already ships a version of the ncbi_tools:
> ncbi-tools-x11 6.1.20120620 NCBI libraries for biology applications (X-based utilities)
But I am not sure if this is creating a conflict with what I am trying to install.
If anyone could help me it would be much appreciated.
Any advice will be welcome
Regards
F.
[1]: http://www.broadinstitute.org/scientific-community/science/projects/viral-genomics/vicuna
[2]: ftp://ftp.ncbi.nih.gov/toolbox/ncbi_tools++/CURRENT | Let's put on the magic hat and see if we can guess what the problem is without seeing the errors...
If make works, but make install doesn't, then very likely it's a permissions problem.
The other common issue is missing headers.
That's about as far as the crystal ball will take us. If you post the errors from make, this should get a lot easier :)
I want to count the number of variants called on each chromosome for each sample in a multi-sample VCF file.
Any help would be really appreciated.
Thanks
Ram | Split your VCF file by sample and count how many times each chromosome name appears in each file:
    FILE=yourfile.vcf
    for sample in `bcftools query -l $FILE`
    do
        # keep only sites where this sample carries at least one non-reference allele
        bcftools view -c1 -H -s $sample -o ${sample}.vcf $FILE
        # tally the chromosome column (sorting first makes uniq -c robust to unsorted input)
        cut -f1 ${sample}.vcf | sort | uniq -c > ${sample}.count
    done
<p>I have a few motifs whose locations on a given DNA sequence have been identified using MATCH (from TRANSFAC). MATCH reports the locations of these matrices numerically, as in "motif 1 236 (+)" (for example, motif 1 is present at location 236). I want to get a graphical output by feeding in the length of the DNA sequence and the positions of the matrices, so that it's easy to present and to compare two different sequences with almost similar motifs.</p>
<p>Thanks in advance for any suggestions</p>
| <p>Have you tried the features and annotations options in <a href="http://www.jalview.org">Jalview</a></p>
| biostars | {"uid": 54669, "view_count": 4475, "vote_count": 9} |
Hi,
Is there a way to force GATK HaplotypeCaller to output all genomic positions in the VCF, not only variant ones?
Example here :
Top : Raw bam file
Middle : -bamout using : `gatk HaplotypeCaller -R $genome -O test.vcf -I in.bam -ERC GVCF --max-alternate-alleles 3 -bamout test.bam -L interval.bed --output-mode EMIT_ALL_ACTIVE_SITES`
Bottom : -bamout using `gatk HaplotypeCaller -R $genome -O test.vcf -I in.bam -ERC GVCF --max-alternate-alleles 3 -bamout test.bam -L interval.bed --output-mode EMIT_ALL_CONFIDENT_SITES`
You can see that regions without variant sites are not processed by HaplotypeCaller, even with EMIT_ALL_ACTIVE_SITES and EMIT_ALL_CONFIDENT_SITES.
Any idea how to tune HaplotypeCaller to output all sites with enough coverage?
gatk version : 4.1.7.0
![enter image description here][1]
[1]: https://i.ibb.co/Dtk8WwK/igv.png
Thanks | `gatk HaplotypeCaller -R {reference} -I {bam_path} -L {bed_path} -ERC BP_RESOLUTION -O {out_vcf}`
Maybe look at BP_RESOLUTION: with `-ERC BP_RESOLUTION`, HaplotypeCaller emits a reference-confidence record for every position in the intervals instead of banding non-variant sites into GVCF blocks.
FASTQC, for example, doesn't seem to have a publication associated with it. How would you cite it? | Just to say, thank you for wanting to cite tools. From our investigations, we've found that over 2/3 of the papers that mention Ensembl do not cite our papers. No idea how many more than that use our stuff and don't even bother to mention us. It's as if people think that building a bioinformatic database or tool is not real science, so doesn't need acknowledgement. In science, citations are currency, and using someone's work without citing them is essentially theft. | biostars | {"uid": 180392, "view_count": 41874, "vote_count": 12} |
I am trying to run the command `readVcf` in R, but it says the function is not found.
I have already installed the package "VariantAnnotation". I don't know if that helps.
Does anyone have any idea? | Have you loaded it? `library('VariantAnnotation')`
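If loading still fails, a minimal sketch to (re)install from Bioconductor and use the function (the file name is a placeholder):

    if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
    BiocManager::install("VariantAnnotation")
    library(VariantAnnotation)
    vcf <- readVcf("my.vcf", "hg19")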
Hello everyone!
I would like to know if there is a way to get all the IDs from BioSample. I already tried a link that I saw in another post that works for BioProject (https://www.ncbi.nlm.nih.gov/bioproject/browse/), but it doesn't work for BioSample.
I also tried to download the summary of BioSample using this example from BioProject (ftp://ftp.ncbi.nlm.nih.gov/bioproject/summary.txt).
Also, can this be solved programmatically, i.e. using Eutils or EDirect?
What I really want to know is, simply, how many IDs exist and how I can get a list of them.
Thank you very much! | Using my tool XsltStream (http://lindenb.github.io/jvarkit/XsltStream.html) and the NCBI BioSample XML dump:
https://gist.github.com/lindenb/7d7397f6b11d140ed9c407e3e91a4c92 | biostars | {"uid": 280581, "view_count": 2882, "vote_count": 2} |
<p>Hi :)<br />
<br />
A few weeks ago I was looking for a tool that would help me get "DNA composition" statistics for my sequencing. Something that would give me a dataset with which I could ask questions about GC bias, over-represented sequences, motifs, etc. There are tools to answer each question specifically, but I was looking for something more general on which many analyses could be built. This led me to k-mer counting, and all the downstream tools which leverage k-mer result files.<br />
<br />
k-mer counting tools are pretty cool, but all the ones I tried had some drawbacks, like high RAM requirements and very long run-times (although the latest bloom-filter-based tools seem to mitigate this somewhat), but most importantly requiring a specific k-mer size to be chosen. I really wanted all mers in the dataset so I could look at 'GC' and 'GCG' and 'GCCGACGGACGAC' without having to re-run any analyses. I couldn't find a tool like this after a brief search, so gave up and wrote my own in NumPy based on suffix arrays.<br />
<br />
Two weeks later I have a functional program in the sense that I get results, but before I invest any time making it usable for others, I thought I should investigate further whether there are tools which already do this. Making a suffix array was a nice learning experience for me, so I haven't lost anything if such tools already exist - and if they do I'd love to compare performance characteristics - but if not I might consider tidying up the code and writing proper documentation. Does anyone know of such tools?</p>
<br />
Thank you so much, and happy Diwali :)</p>
| For fixed k-mers, use KMC2 or DSK, especially when you have a fast disk. They are faster and use less memory than Jellyfish.
If you don't want to use a fixed k-mer size, the right data structure is a suffix array, an FM-index, or something equivalent. You can query any sequence up to the read length. Nonetheless, for a particular k-mer size, these are usually not as efficient as k-mer-based methods.
<p>Dear lazyweb,</p>
<p>do you know any tool/script to visualize the different outputs of GATK <a href='http://www.broadinstitute.org/gatk/gatkdocs/org_broadinstitute_sting_gatk_walkers_annotator_DepthOfCoverage.html'>DepthOfCoverage</a> ?</p>
<p>P.</p>
| <p>I wrote these scripts some time ago:</p>
<p><a href="https://github.com/johandahlberg/Scripts/blob/master/plottingLociVsCoverage.R">https://github.com/johandahlberg/Scripts/blob/master/plottingLociVsCoverage.R</a></p>
<p><a href="https://github.com/johandahlberg/Scripts/blob/master/plottingCumulativeCoverage.R">https://github.com/johandahlberg/Scripts/blob/master/plottingCumulativeCoverage.R</a></p>
<p>R really isn't my strong suit, but at least it's something, and they served my purpose at the time. I would be happy to accept any pull requests to improve on them, though.</p>
| biostars | {"uid": 61165, "view_count": 6349, "vote_count": 1} |
Hi
I am new to Python and want to see how this can be done in Python (I can do it in R). I have a text file `myfile.txt` with one column and thousands of rows, as shown below. I want to convert this to FASTA format (`result.fasta`), as shown below. How can I do this in Python?
`myfile.txt`
ATGTGTGGTTTTCCCCC
ATTGGCGGGGTTTTTCAGGGG
ATGGGGGGGCCCCCCCCAAAAAA
TTGGTGGGGGGGGGGGGAA
`result.fasta`
>1
ATGTGTGGTTTTCCCCC
>2
ATTGGCGGGGTTTTTCAGGGG
>3
ATGGGGGGGCCCCCCCCAAAAAA
>4
TTGGTGGGGGGGGGGGGAA | #!/usr/bin/env python
    n = 0
    with open('myfile.txt', 'r') as f:
        for line in f:
            n += 1  # sequential record number used as the FASTA header
            print('>' + str(n) + '\n' + line.strip())
Hi List,
Is there any way to get RNAfold-predicted secondary structures in text format? It produces PDF and PNG files, but is it possible to get something like the structures in this figure: [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3787264/figure/F3/][1]
Bade
[1]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3787264/figure/F3/ | I wrote a Perl module that outputs ASCII text of **miRNA precursor secondary structures**.
RNA::HairpinFigure - Draw hairpin-like text figure from RNA sequence and its secondary structure in dot-bracket notation.
http://search.cpan.org/dist/RNA-HairpinFigure/lib/RNA/HairpinFigure.pm#synopsis
>hsa-mir-92a-1 MI0000093 Homo sapiens miR-92a-1 stem-loop
CUUUCUACACAGGUUGGGAUCGGUUGCAAUGCUGUGUUUCUGUAUGGUAUUGCACUUGUCCCGGCCUGUUGAGUUUGG
..(((...((((((((((((.(((.(((((((((((......)))))))))))))).)))))))))))).))).....
---CU UAC C U UU
UUC ACAGGUUGGGAU GGU GCAAUGCUGUG U
||| |||||||||||| ||| |||||||||||
GAG UGUCCGGCCCUG UCA CGUUAUGGUAU G
GGUUU --U U - GU
| biostars | {"uid": 202531, "view_count": 2085, "vote_count": 1} |
Hey guys,
I have an accession number; how can I convert the SRA file to FASTA on Windows? | You can use [SRAtools][1] for this purpose. I think it is available for both Windows and Linux. Once installed, you can download/convert SRA files as follows:
```
fastq-dump myfile.sra #for single end
fastq-dump --split-files myfile.sra #if paired end
fastq-dump --fasta 0 myfile.sra #FASTA output instead of FASTQ (0 = unwrapped sequence lines)
```
[1]: http://www.ncbi.nlm.nih.gov/books/NBK158900/ | biostars | {"uid": 141361, "view_count": 3334, "vote_count": 2} |
Is it possible to use scikit-allel to generate per sample genotype counts per gene (from gff)?
I've loaded a gff file:
geneset = allel.FeatureTable.from_gff3('~/geneset.gtf', attributes = ["gene_id","gene_name"])
Converted to pandas table following tutorial:
def geneset_to_pandas(geneset):
"""Life is a bit easier when a geneset is a pandas DataFrame."""
items = []
for n in geneset.dtype.names:
v = geneset[n]
# convert bytes columns to unicode (which pandas then converts to object)
if v.dtype.kind == 'S':
v = v.astype('U')
items.append((n, v))
return pandas.DataFrame.from_dict(dict(items))
geneset = geneset_to_pandas(geneset)
Loaded vcf file and converted to genotype array:
    callset = allel.read_vcf('called.vcf.gz')
gt = allel.GenotypeArray(callset['calldata/GT'])
Started iterating through geneset to create variables for use with query (of vcf file), but am having trouble aggregating the genotype counts by the positions in the geneset:
for index,row in geneset_Homo_sapiens_GRCh37_75.iterrows():
end = row['end']
start = row['start']
chrom = row['seqid']
gene_name = row['gene_name']
        for i, (var_chrom, var_pos) in enumerate(zip(callset['variants/CHROM'], callset['variants/POS'])):
if (var_chrom == chrom and var_pos >= start and var_pos <= end):
print var_chrom, var_pos, gene_name, gt[i] | Here's a [gist with a worked example using human data][1]. A few extracts...
To load data from a GFF3 file into a pandas DataFrame you can now use the scikit-allel gff3_to_dataframe() function, e.g.:
geneset = allel.gff3_to_dataframe('GRCh38_latest_genomic.gff.gz', attributes=['Name'])
# select only gene records
genes = geneset[geneset['type'] == 'gene']
# select only genes on Chromosome 22
genes_chr22 = genes[genes.seqid == 'NC_000022.11']
To extract genotypes for a given sample and a given gene, it's necessary to obtain the variant start and stop indices corresponding to the first and last variants within the gene. If the data are for a single chromosome, this can be done using a SortedIndex on the variant positions.
If you have previously parsed the VCF out into Zarr format grouped by chromosome as described [here](http://alimanfoo.github.io/2017/06/14/read-vcf.html), assuming you are working with a single chromosome (e.g., Chromosome 22), this will set up a SortedIndex for all variants in the chromosome:
zarr_path = 'ALL.phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes.zarr'
import zarr
callset = zarr.open(zarr_path, mode='r')
chrom = '22'
pos = allel.SortedIndex(callset[chrom]['variants/POS'])
This will load genotypes for a given sample over all variants in the chromosome:
# pick an arbitrary sample to work with
sample_idx = 42 # N.B., this is the 43rd sample, zero-based indexing
# load genotypes for the sample
gv = allel.GenotypeVector(callset[chrom]['calldata/GT'][:, sample_idx])
This will then iterate over genes and compute genotype counts for each gene:
# setup some arrays to hold per-gene genotype counts for our sample of interest
import numpy as np
n_genes = len(genes_chr22)
n_hom_ref = np.zeros(n_genes, dtype=int)
n_het = np.zeros(n_genes, dtype=int)
n_hom_alt = np.zeros(n_genes, dtype=int)
n_variants = np.zeros(n_genes, dtype=int)
# iterate over genes
for i, (_, gene) in enumerate(genes_chr22.iterrows()):
try:
# locate data for this gene - this maps genomic coordinates onto variant start and stop indices
loc_gene = pos.locate_range(gene.start, gene.end)
except KeyError:
# no data for the gene, leave counts as zero
pass
else:
# extract genotypes for the gene
gv_gene = gv[loc_gene]
# compute genotype counts
n_hom_ref[i] = gv_gene.count_hom_ref()
n_het[i] = gv_gene.count_het()
n_hom_alt[i] = gv_gene.count_hom_alt()
# also store number of variants in the gene
n_variants[i] = loc_gene.stop - loc_gene.start
[1]: http://nbviewer.jupyter.org/gist/alimanfoo/6e6b7854a735907e2837f306adf8680b | biostars | {"uid": 335077, "view_count": 2788, "vote_count": 1} |
Go to ucsc genome browser, and enter the coordinate chr9:131,939,104-131,939,143 (hg19). It's part of the gene IER5, going from right to left. If you take a look at the DNA sequence CAG, you will see below that it codes for V (Valine). But, doesn't CAG code for Gln, and GAC code for Asp? Why does CAG correspond to Val on the browser?
Am I missing something obvious? | Your gene is mapped on the reverse strand (3' -> 5'), so the codon is not CAG but the complement: GTC (valine).
<p>Hi everyone; this is my first question on the forum.</p>
<p>How can I compare if two fasta files contain the same sequence headers? </p>
<p>Does any BioPython module exist for doing this?</p>
<p>Thanks in advance,
peixe </p>
| <p>In BioPython you could do the following:</p>
<pre><code>from Bio import SeqIO

def get_ids(fname):
    reader = SeqIO.parse(fname, 'fasta')
    ids = map(lambda x: x.id, reader)
    return set(ids)

s1 = get_ids('f1.fasta')
s2 = get_ids('f2.fasta')

print 'Common:', s1 & s2
print 'In f1 and not f2:', s1 - s2
print 'In f2 and not f1:', s2 - s1
</code></pre>
<p>prints for my testcase:</p>
<pre><code>Common: set(['a', 'c', 'b'])
In f1 and not f2: set(['x'])
In f2 and not f1: set(['y'])
</code></pre> | biostars | {"uid": 10162, "view_count": 11103, "vote_count": 8} |
Hi!
Do you know how I can filter out supplementary alignments from a BAM file? I was reviewing http://broadinstitute.github.io/picard/explain-flags.html and I am aware that the flag for this kind of alignment is "2048". However, depending on other features (e.g. paired read, second in pair, etc.), the flag can vary.
So, I am not sure how to filter out these alignments. | `samtools view -F 2048 -bo filtered.bam original.bam`
You don't have to care about what other flags are set; the `-F` option will filter out any entry with bit 2048 set.
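To sanity-check the result, you can count the supplementary alignments in each file (`-c` counts records, `-f` *requires* the bit instead of excluding it):

    samtools view -c -f 2048 original.bam   # typically > 0
    samtools view -c -f 2048 filtered.bam   # should be 0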
I want to combine data from two different tables, like the ones below, using R.
Table 1
X1
a;b
c
d;e
Table 2
Y1
a uuu uu
b vvv vv
c xxx xx
d yyy yy
e zzz zz
Output Table 3
X1 Y2
a;b uuu|uu<\br>vvv|vv
c xxx|xx
d;e yyy|yy<\br>zzz|zz | Here is the dplyr way. Make sure you provide column names as given in `t1` and `t2`:
    library(tidyverse)

    t1 <- tibble(X1 = c("a;b", "c", "d;e"))
    t2 <- tibble(Y1 = letters[1:5],
                 Y2 = c("uuu", "vvv", "xxx", "yyy", "zzz"),
                 Y3 = c("uu", "vv", "xx", "yy", "zz"))

    t3 <- t1 %>%
      # one row per ";"-separated key
      mutate(mm = map(X1, ~ tibble(value = strsplit(.x, ";")[[1]]))) %>%
      mutate(nn = map(mm, function(df) {
        # look up each key in t2 and paste its columns together with "|"
        col_binds <- left_join(df, t2, by = c("value" = "Y1")) %>%
          unite(col = "comb", sep = "|", Y2:Y3)
        # collapse back to one row per original key set
        col_binds %>%
          summarise(value = paste0(value, collapse = ";"),
                    times = paste0(comb, collapse = "<\br>"))
      })) %>%
      select(nn) %>%
      unnest(cols = nn) %>%
      dplyr::rename(X1 = value, Y2 = times)
## output
t3
# A tibble: 3 x 2
X1 Y2
<chr> <chr>
1 a;b "uuu|uu<\br>vvv|vv"
2 c xxx|xx
3 d;e "yyy|yy<\br>zzz|zz"
| biostars | {"uid": 343805, "view_count": 2895, "vote_count": 1} |
Hi!
I am trying to run the `drugInteractions` function in the maftools pipeline, but it seems that it is simply not there.
Here is the message I am getting:
> library("maftools")
> druggable = drugInteractions(maf = STES_maf, fontSize = 0.75)
Error in drugInteractions(maf = laml, fontSize = 0.75) :
could not find function "drugInteractions"
According to the authors, this function compiles info from the [DGI-db][1]. So I checked whether this function was borrowed from DGI, but I had no success.
Did anybody face the same issue? Would anybody know how to fix it? Not ideal, but recommending an alternative package for this kind of analysis would also help.
Thanks a lot in advance!
[1]: http://www.dgidb.org/ | Problem solved!
There is no logic behind it, really. But it worked.
Well, simply install it directly from GitHub:

    devtools::install_github('PoisonAlien/maftools')

I had tried that on the day I posted the issue here, but it didn't work. I did it again today, and it worked. :| (Most likely the version installed from Bioconductor predates the function, while the GitHub development version includes it.)
In any case, the author was very responsive! Very good feedback time. Thanks! | biostars | {"uid": 354416, "view_count": 868, "vote_count": 1} |
Hi everybody!
Recently I became familiar with a new Gene Ontology plotting library in R called "GOplot". There is a tutorial on the developer's site, but from the very first step I have some problems with it. I link the tutorial here so you can review it: [http://wencke.github.io/][1]
1) There is no function in R like:
install.github('wencke/wencke.github.io')
2) If you skip this part, you go through making data frames. I don't know how to make the mentioned data frames in the **toy example** part of the tutorial for my microarray analysis. I would really appreciate it if you could help me with the second problem especially.
Thanks a lot.
[1]: http://wencke.github.io/ | 1. You have a typo, try: `devtools::install_github("wencke/wencke.github.io")`, note that you need to have [devtools package][1] installed.
2. The data is within the package; you load it as noted in the tutorial with `data(EC)`. It is a list of data frames.
> **The toy example**
> GOplot comes with a manually compiled data set....
[1]: https://cran.r-project.org/package=devtools | biostars | {"uid": 318129, "view_count": 4151, "vote_count": 1} |
Hi everyone,
I have a VCF file with genotypes like 0/0, 0/1, etc. After filtering for quality, I need to impute the missing values using the BEAGLE software, so that my variants are in floating-point (dosage) format, in order to combine them with phenotypic data for further analysis. Apart from BEAGLE, I would not mind any other software that is easier to use.
Could someone help me with a script and guide me through the imputation, please? | Which imputation program are you going to use?
Dear All
I have a combined VCF file of a few individuals. Instead of the usual chr1, chr2 names for chromosomes, this VCF has chromosome information like
gi|996703411|ref|NW_015379183.1|, gi|996703411|ref|NW_015379175.1
Here `NW_015379183.1` corresponds to a specific chromosome, and the same is true for its positions. If I have the chromosome numbers for all names of the `gi|996703411|ref|NW_015379183.1|` sort, how can I replace the chromosome names with the original names? | Use `bcftools annotate` (https://samtools.github.io/bcftools/bcftools.html)
with
> --rename-chrs file
> rename chromosomes according to the map in file, with "old_name new_name\n" pairs separated by whitespaces, each on a separate line. | biostars | {"uid": 299110, "view_count": 10581, "vote_count": 1} |
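For example (the mapping file contents are an assumption based on your names above):

    # chr_map.txt, one "old new" pair per line, e.g.:
    # gi|996703411|ref|NW_015379183.1|   chr1
    bcftools annotate --rename-chrs chr_map.txt in.vcf.gz -Oz -o renamed.vcf.gz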
<p>When I convert a SOAP Paired end output to a SAM file using the soap2sam.pl script(http://soap.genomics.org.cn/down/soap2sam.tar.gz) my SAM file headers turn out to be empyty.</p>
<p>Is there a way I can add headers to the SAM file ?</p>
| You can add the headers using samtools view:
samtools view -bT your_reference.fasta your_input.sam > your_input.bam
Use `samtools view -H your_input.bam` to check whether the header was added. | biostars | {"uid": 56476, "view_count": 5236, "vote_count": 1} |
My understanding of SAM files and the format is fairly good, but there are some things I haven't quite grasped. I'm not sure how obvious you all may find these questions, but they're what have come to mind.
I'm interested in recovering the original sequenced read after some alignment has been done. I'd like to know which pieces of the read/read segments I need. How do I know what the full read sequence was that came off the sequencer? Can I reconstruct it by connecting the read segments in the same template together? If so, what is the template? It's not the *read* is it? The SAM format doesn't suggest that it is; it says the template is some DNA/RNA fragment.
Here are some questions:
- "What is the difference between the read I get from sequencing and the read segments I see in a SAM file?"
- Intuition tells me that read segments are *mapped portions of the larger read*, but are they arbitrarily segmented in the SAM presentation?
- Are segments contiguous? Can they also be non-contiguous?
- Can I reconstruct the full read from the multiple read segments?
- How does *template* correspond to a sequence *read*?
I'm very grateful for any clarification I can get on these questions. | The given read from your query file (fastq file) can match to multiple locations in genome. You can check this with NH flag in sam file. The reads can also overlap each other as they are sequenced from DNA fragments and you can find this by comparing the mapping co-ordinates in sam file.
The aligner takes only the read sequence from the FASTQ file for mapping to the reference. If you want to use a contiguous sequence (contig), you need to assemble it first and then map it to the reference. As the contig will be longer, you need to be careful about which aligner you use.
I might be misunderstanding the following idea:
Starting at the variant of interest, the region is extended out ±0.1cM, using HapMap fine-scale estimates of recombination rates.
From the HapMap you get file containing the following:
```
chrom start stop Rate_cM.Mb Avg_cM Gen_map_cM
Chr1 45413 72433 2.27700294070856 0.061526896460886 0
Chr1 72434 78031 2.42987467531016 0.0136024384323863 0.061526896460886
Chr1 78032 227743 2.43386172133186 0.364378306024035 0.0751293348932723
```
If a variant falls into the 45413-72433 range, then to calculate the ±0.1 cM region around it, I proceed as follows:
72411-45413 = 27009 bp; this region spans an average of 0.0615 cM, meaning that 0.1 cM corresponds to 43917 bp (43917 = 27009*0.1/0.0615). Is that right?
Thank you in advance. | <p>Not quite. Remember that the recombination rate changes with position, so the expected recombination over the next 43917 bases isn't going to be 0.1 cM. If we're interested in +0.1cM from 72411, then getting to 72433 gets us 4.844534e-05cM. Getting to 78031 adds an additional 1.36e-2, bringing us to 0.01365088. That means we need another 0.08634912cM. The last entry has an average rate of 2.434e-6cM/base, so dividing gives us an additional ~35478 bases. That puts us at 113509, meaning that the total width is 41098 bases.</p>
<p>The distance also won't be symmetric (i.e., -0.1cM probably won't be 41098 bases in the other direction).</p>
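If you want to automate this, a rough R sketch of the same interval-walking logic (column names as in your table; the upstream direction is analogous):

    cm_window_up <- function(map, pos, target = 0.1) {
      i <- which(map$start <= pos & map$stop >= pos)        # interval containing the variant
      acc <- (map$stop[i] - pos) * map$Rate_cM.Mb[i] / 1e6  # cM to the end of that interval
      while (acc < target && i < nrow(map)) {               # add whole intervals until we overshoot
        i <- i + 1
        acc <- acc + (map$stop[i] - map$start[i]) * map$Rate_cM.Mb[i] / 1e6
      }
      # interpolate back inside interval i to land exactly on +target cM
      map$stop[i] - (acc - target) / (map$Rate_cM.Mb[i] / 1e6)
    }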
| biostars | {"uid": 133430, "view_count": 1387, "vote_count": 1} |
Hello folks,
I would like to use a gene name (*e.g. DOCK2*) or its ENSEMBL ID as input, and get as output the reported GWAS traits to the respective gene. I did give a look into the [GWAS Catalog API](https://www.ebi.ac.uk/gwas/docs/api) webpage, but I didn't find a way to use gene as input. Also, I looked for R and Python libraries for doing that, and I couldn't find either (maybe I didn't search well).
I know it's possible to do that for associations, studies, SNPs, and efoTraits. For instance, I could just use https://www.ebi.ac.uk/gwas/rest/api/singleNucleotidePolymorphisms/rs4918943 url, where it returns info for the SNP "rs4918943". I would like something similar for a gene ID, or a library that does it.
It would be great to input a gene and receives any output where I could retrieve the GWAS trais related to such gene.
Thanks in advance! | I didn't find the usage for it, but I figured out how to query it by using the following URL:
```
https://www.ebi.ac.uk/gwas/api/search/downloads?q=ensemblMappedGenes:{gene}&pvalfilter=&orfilter=&betafilter=&datefilter=&genomicfilter=&genotypingfilter[]=&traitfilter[]=&dateaddedfilter=&facet=association&efo=true
```
where we should replace `{gene}` with the gene symbol (*e.g.* TP53).
**Example using R:**
```r
# Required library # could be readr::read_delim() as well
library(data.table)
# Set the function
genesymbol2gwas <- function(gene){
url <- paste0(
"https://www.ebi.ac.uk/gwas/api/search/downloads?q=ensemblMappedGenes:", gene,
"&pvalfilter=&orfilter=&betafilter=&datefilter=&genomicfilter=&genotypingfilter[]=&traitfilter[]=&dateaddedfilter=&facet=association&efo=true"
)
return(fread(url))
}
# Use the function for gene "TP53"
genesymbol2gwas("TP53")
```
**Example using Python:**
```python
# Load library to read the result table as a dataframe
import pandas as pd
# Define the function
def genesymbol2gwas(gene):
url = "https://www.ebi.ac.uk/gwas/api/search/downloads?q=ensemblMappedGenes:{}&pvalfilter=&orfilter=&betafilter=&datefilter=&genomicfilter=&genotypingfilter[]=&traitfilter[]=&dateaddedfilter=&facet=association&efo=true"
return pd.read_csv(url.format(gene), sep='\t')
# Use the function for gene "TP53"
genesymbol2gwas("TP53")
```
Hope it helps someone in a similar situation =) | biostars | {"uid": 438387, "view_count": 1470, "vote_count": 1} |
Hi all,
I am using shapeit to prephase the genotypes, followed by using minimac to do the imputations (with 1000g phase1 v3 as the reference panel)
Now, when I phase the genotypes, I would like to use the genetic map from 1000G instead of the HapMap genetic map that is mentioned in the SHAPEIT manual.
Could someone direct me to the right source? Also, it would be great if you could correct me if my reasoning is wrong.
Thanks | <p>Have you checked the Impute2 1000 Genomes phase 3 reference data?</p>
<p>https://mathgen.stats.ox.ac.uk/impute/1000GP_Phase3.html</p>
| biostars | {"uid": 170418, "view_count": 4896, "vote_count": 1} |
Hi,
I am trying to visualise my overlapped chip-seq peak regions which I analysed with Homer mergePeaks function. I have got one venn info file and a "result" file. I would like to use that venn info file then visualise it but when I looked for visualisation libraries or programs, I did not find a method which merits my expectations.
the primary problem is my data is big. ( relatively :) ). I have 19 datasets in one conditions group and 9 datasets in healthy one. I have read making venn diagram for more than 3 datasets would not be smart on biostar tread.
I am trying to find overlapped regions of transcriptional factors that why I wanna know which transcriptional factors sites are most common.
I am a python coding and R mediocre.
Please don't link me to https://www.biostars.org/p/77362/ and https://www.biostars.org/p/66091/ threads I have already read them 0192308 times), also if you think homer is not the best tool for finding the overlaps, please feel free to advice others. (Yes, I do know monkseq)
Thank you very much for your help.
Best regards,
Tunc | EDIT: I made a script that can parse the venn.txt output of HOMER mergePeaks for comparisons of 2 to 5 peak files (bed files) and automatically create Venn Diagrams. All you need to do is pass it a sample ID (e.g. "ABC") and the venn.txt file output by HOMER, it will create the plot in the same directory as the venn.txt file. This uses R and the VennDiagram package. Script is located here: https://github.com/stevekm/Bioinformatics/blob/master/HOMER_mergePeaks_multiVenn/multi_peaks_Venn.R
EDIT2: I also posted an implementation of this with Upset plots, which allows for >5 comparison categories, here: https://www.biostars.org/p/192217/
EDIT3: scripts have been moved [here](https://github.com/stevekm/Bioinformatics/tree/a2a052029980369545085aadbd478e32c8ba6213/HOMER_mergePeaks_pipeline)
---
In my experience it is easier to just count the number of entries (lines) in each of the bed files output by HOMER mergePeaks and pass these values to R for plotting, instead of trying to parse the venn.txt file. This is the script I am using for this purpose (including the bash mergePeaks commands). You should be able to easily modify it to add more entries
#!/bin/bash
# BED files with the peaks to overlap
tmp_outH3K4ME3="peaks_H3K4ME3.bed"
tmp_outH3K27AC="peaks_H3K27AC.bed"
# a sample ID
tmp_sampleID="ABC"
# HOMER mergePeaks
mergePeaks "$tmp_outH3K4ME3" "$tmp_outH3K27AC" -prefix mergepeaks -venn mergepeaks_venn
# the mergePeaks file outputs names:
tmp_mergeH3K4ME3="mergepeaks_${tmp_outH3K4ME3}"
tmp_mergeH3K27AC="mergepeaks_${tmp_outH3K27AC}"
# count the number of unique peaks
num_H3K4ME3=$(tail -n +2 $tmp_mergeH3K4ME3 | wc -l)
echo "num_H3K4ME3 is $num_H3K4ME3"
num_H3K27AC=$(tail -n +2 $tmp_mergeH3K27AC | wc -l)
echo "num_H3K27AC is $num_H3K27AC"
# count the number of peaks in common
num_overlap=$(tail -n +2 "mergepeaks_${tmp_outH3K4ME3}_${tmp_outH3K27AC}" | wc -l)
# plot the values in a pairwise venn in R
# # make sure the correct version of R is loaded:
module unload r
module load r/3.2.0
Rscript --slave --no-save --no-restore - "$tmp_sampleID" "$num_H3K4ME3" "$num_H3K27AC" "$num_overlap" <<EOF
## R code
# load packages
library('VennDiagram')
library('gridExtra')
# get script args, print them to console
args <- commandArgs(TRUE); cat("Script args are:\n"); args
SampleID<-args[1]
peaks_H3K4ME3<-as.numeric(args[2])
peaks_H3K27AC<-as.numeric(args[3])
peaks_overlap<-as.numeric(args[4])
# get filename for the plot PDF
plot_filename<-paste0(SampleID,"_peaks.pdf")
# make a Venn object, don't print it yet
venn<-draw.pairwise.venn(area1=peaks_H3K4ME3+peaks_overlap,area2=peaks_H3K27AC+peaks_overlap,cross.area=peaks_overlap,category=c('H3K4ME3','H3K27AC'),fill=c('red','blue'),alpha=c(0.3,0.3),cex=c(2,2,2),cat.cex=c(1.25,1.25),main=SampleID,ind=FALSE)
# print it inside a PDF file, with a title
pdf(plot_filename,width = 8,height = 8)
grid.arrange(gTree(children=venn), top=SampleID) #, bottom="subtitle")
dev.off()
EOF
| biostars | {"uid": 164054, "view_count": 6777, "vote_count": 2} |
I want to view the expression profiles of several genes in the UCSC Genome Browser. Does anyone know of a way or script to retrieve the PNG images automatically from the browser? | I posted a Python-based tool on Github called [soda.py][1] that creates a web-ready gallery of UCSC browser shots. You just give it your BED file of coordinates, build, and session ID, and you specify an output directory where PDF and PNG results get stored (as well as an `index.html` file that lets you browse through snapshots with a web browser).
If you want to do things by hand, you can do something like the following quick and dirty approach to get a nice PNG. You'll need ImageMagick `convert` installed in order to convert the PDF to PNG. You'll also need GNU `wget` to do web requests on the command line.
    #!/bin/bash
chrom="chr1"
chromStart=1234567
chromEnd=1234987
sessionID=1234
genomeBrowserURL="genome.ucsc.edu"
dumpURL="http://${genomeBrowserURL}/cgi-bin/cartDump"
postData="hgsid=${sessionID}&hgt.psOutput=on&cartDump.varName=position&cartDump.newValue=${chrom}%3A${chromStart}-${chromEnd}&submit=submit"
wgetOpts="--no-directories --recursive --convert-links -l 1 -A hgt_*.pdf"
wgetWaitOpts="--wait=1 --random-wait --tries=2 --timeout=100"
wget ${wgetWaitOpts} --post-data "${postData}" "${dumpURL}"
    url="http://${genomeBrowserURL}/cgi-bin/hgTracks?hgsid=${sessionID}" # hgTracks page for the session; this variable was undefined in the original (assumed URL)
    wget ${wgetOpts} ${wgetWaitOpts} "$url&position=${chrom}%3A${chromStart}-${chromEnd}" 2> fetch.log
mv hgt_*.pdf ${sessionID}.pdf
convert -density 300 ${sessionID}.pdf -background white -flatten ${sessionID}.png
You'd fill out `chrom`, `chromStart`, `chromEnd` and `sessionID`. Or use placeholders `$1` etc. and pass them in on the command line.
If you have a few regions to look at, this would just be a matter of modifying this approach to loop over their respective chromosome name and interval values, and naming the output files appropriately. (Or you can use [soda.py][1] for automation.)
[1]: https://github.com/alexpreynolds/soda | biostars | {"uid": 180195, "view_count": 4063, "vote_count": 1} |
<p>Hi, I'm a beginner with the MEGA software. I have 89 protein sequences for which I need to construct a <strong>phylogenetic tree</strong> using the <strong>bootstrap method with 1000 replications</strong> and the <strong>complete deletion</strong> data-set parameter. But I am not able to construct a tree because of <strong>3</strong> sequences whose protein length is <strong>much shorter</strong> compared to the other 86 sequences. I even tried <strong>deleting non-conserved regions</strong> in all protein sequences, but I am still not able to get a tree because the smaller proteins become smaller and smaller. Kindly help me solve this problem.</p>
| Regardless of the approach or program you are using, the input for any phylogenetic estimation approach is an alignment, i.e., an inference of homology. Therefore, by necessity, your sequences **must have a shared ancestry** to even begin to infer a phylogeny. If the sequences are shorter but homologous, a multiple sequence alignment (of nucleotides or amino acids or both via a translation alignment for protein-coding sequences) ought to resolve the sequences by introducing gaps - insertions or deletions. It sounds like you're not doing this; when you say
> The 3 short protein sequence are upregulated in abiotic stresses. Is it ok if I omit the sequence because they have role in abiotic stresses?
It suggests that your dataset may consist of multiple proteins, not the same protein across samples, which is a **completely inappropriate** input for phylogenetic techniques.
In other words, your workflow would be:
1. Construct a dataset of the same locus across all samples
2. Align the amino acids or nucleotides
3. Model selection for ML analysis or NJ distance corrections/uncorrected NJ/UPGMA/etc.
4. [If you decide to use a model: With an appropriate model, any likelihood (maximum likelihood or Bayesian) approach.]
5. Bootstrapping etc. for support.
If you do have sequences with a shared history, I would follow Istvan Albert's recommendation and remove the short sequences if they are truly unalignable. | biostars | {"uid": 119283, "view_count": 6902, "vote_count": 1} |
Probably a basic question but it's difficult to find information on the subject...
When viewing a GTF and a BED file in IGV, there seem to be differences in colour (the BED track can't be recoloured and just shows as black, while the GTF can be recoloured). Does anyone know the reason for this? I thought that the GTF and BED formats contained predominantly the same information, just in different layouts.
Thanks! | <p>BED was originally designed to contain a color specification for each row in column 9. In addition it can contain a number of track definitions in the header that can further specify the colors. Using these has fallen out of favor (thankfully, I might add) but IGV seems to stick with the stricter definition of reading the color from the rows themselves.</p>
| biostars | {"uid": 97868, "view_count": 4222, "vote_count": 2} |
In the last few weeks I have been using [agriGO][1] to perform GO enrichment analysis on a non-model organism dataset. In the last week or so I've noticed that the website hosting it is down. I know it is not just my connection, since https://isitup.org/bioinfo.cau.edu.cn also shows that it is down. Does anyone know what the status of agriGO is? Will it come back online, or is there a mirror?
Alternatively, I'm looking for a similar service that I could supply a background list of GO terms, and a regulated list of GO terms to perform enrichment analysis of.
[1]: http://bioinfo.cau.edu.cn/agriGO | I guess they turned off agriGO v1 (yours, paper from 2010) and left only agriGO v2 (paper from 2017) up? http://systemsbiology.cau.edu.cn/agriGOv2/
Especially for GO annotation I would always use the newest data possible - see for example https://www.nature.com/articles/s41598-018-23395-2 | biostars | {"uid": 319170, "view_count": 4885, "vote_count": 1} |
Hello, I am trying to find where to download human alpha satellite sequence. So far I have only been able to find one from 1987 paper: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC306152/pdf/nar00260-0438.pdf
I know that Karen H. Miga incorporated her Centromere Reference Models into GRCh38, so downloading her predicted sequences of centromeres would also be helpful - if you can point me where to download those. However, alpha satellite (or multiple since they are variable) would be the best. | Have you tried [RepBase](http://www.girinst.org/server/RepBase/index.php)? | biostars | {"uid": 163592, "view_count": 2961, "vote_count": 3} |
<p>Hi friends,</p>
<p>I am working on a population genetics project using microsatellite markers. Due to the large sample size and 12 microsatellite loci, I will need an efficient and solid method/tool to score the fragment analysis data (.fsa file type) from a Genetic Analyzer 3130. In total, I will have nearly 5000 files to be scored. I have done part of my pilot fragment analysis with a free software package, PeakScanner 1.0 (ABI); however, this software doesn't come with automation features, and thus I have to score the data manually, which is very time-consuming and inconvenient. In addition, we don't have the budget to purchase commercial software (such as GeneMapper) to do this. I am wondering if you can give me some suggestions for handling this situation.</p>
<p>OR</p>
<p>I can keep the question short, any substitution software package for ABI GeneMapper to score multiplex fragment analysis data (*.fsa)?</p>
<p>Thank you very much.</p>
| Update for this question:
There's a package called `Fragman` in R, released in 2015, that performs fragment analysis using FSA files: https://cran.r-project.org/web/packages/Fragman/
Hi,
I have SNP data for approximately 600,000 SNPs that I'll be using for eQTL analysis.
I've been advised to derive a pruned set of SNPs that are in approximate linkage disequilibrium (LD).
I've used the SNPRelate::snpgdsLDpruning function in R to do this, using an LD threshold of 0.2.
Is this an appropriate threshold to use?
This takes our 600,000 snps down to ~60,000; using a threshold of 0.1 leaves ~20,000 snps.
This is quite a big difference so I'm wondering if 0.1-0.2 is too stringent.
Is there a more-or-less standard threshold that is used for LD pruning?
Thank you. | The choice of optimal r^2 threshold for LD pruning is highly dependent on the population history of your study subjects. Unlike D', which is purely a measure of the non-random association of alleles at two or more loci, r^2 values are informed by allele frequencies as well. For example, for two polymorphic loci (A & B) in complete LD, one with 50% allele frequency and the other with 1%, the D' value would be 1, but r^2 would only be 0.01. This tells us that although these two loci are in complete LD with each other, allele B is so rare that allele A is almost never (98% of the time) observed on the same haplotype as B.
To circumvent such situations, minor allele frequency filtering is carried out prior to the pairwise LD calculation to remove rare alleles. Sometimes, if your study population is not a true representative of the extant population, this filtering step is not sufficient, since rare alleles can rapidly drift up to higher frequencies in structured or founder populations. Although I assume these scenarios are a stretch for your question, you would want to use more stringent r^2 values to avoid collinearity of effects among your pruned SNPs.
On the contrary, if you're dealing with a population with massive haplotype diversity (such as sub-Saharan Africans) and you would assume that your study population is not large enough to be a true representative of all haplotypes in the population you may want to use a more relaxed r^2 threshold for pruning.
Overall, as long as you can rationalise your choice of r^2 either thresholds are acceptable, but r^2 < 0.2 is the common practice for European populations. | biostars | {"uid": 450661, "view_count": 4549, "vote_count": 2} |
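If it helps to see the trade-off concretely, a small sketch with SNPRelate comparing pruned-set sizes across thresholds (assumes `mydata.gds` is your GDS file):

    library(SNPRelate)
    genofile <- snpgdsOpen("mydata.gds")
    for (t in c(0.1, 0.2, 0.5)) {
      # returns a list of retained SNP ids per chromosome
      set <- snpgdsLDpruning(genofile, ld.threshold = t)
      cat("LD threshold", t, "->", length(unlist(set)), "SNPs kept\n")
    }
    snpgdsClose(genofile)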
I am trying to annotate a variant file (generated using Strelka) from mouse WGS data. This is the command I used:
./vep -i /path/to/somatic.snvs.vcf \
--cache /data/shayantan/mus_musculus/ \
--species mus_musculus
The output variant file has no gene names. Why is this happening? Something wrong with my cache files?
EDIT (@Ram): Sample input VCF:
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT NORMAL TUMOR
chr1 3003110 . G T . LowEVS SOMATIC;QSS=17;TQSS=1;NT=ref;QSS_NT=17;TQSS_NT=1;SGT=GG->GT;DP=35;MQ=60.00;MQ0=0;ReadPosRankSum=1.95;SNVSB=3.58;SomaticEVS=0.80 DP:FDP:SDP:SUBDP:AU:CU:GU:TU 18:0:0:0:0,0:0,0:17,17:1,1 14:0:0:0:0,0:0,0:11,13:3,4
chr1 3035137 . G A . LowEVS SOMATIC;QSS=17;TQSS=2;NT=ref;QSS_NT=17;TQSS_NT=2;SGT=GG->AG;DP=70;MQ=40.40;MQ0=10;ReadPosRankSum=0.89;SNVSB=3.23;SomaticEVS=0.10 DP:FDP:SDP:SUBDP:AU:CU:GU:TU 27:0:0:0:3,7:0,0:24,28:0,0 27:0:0:0:4,6:0,0:23,29:0,0
chr1 3035168 . C T . LowEVS SOMATIC;QSS=8;TQSS=2;NT=ref;QSS_NT=8;TQSS_NT=2;SGT=CC->CT;DP=51;MQ=47.72;MQ0=3;ReadPosRankSum=1.78;SNVSB=2.68;SomaticEVS=0.08 DP:FDP:SDP:SUBDP:AU:CU:GU:TU 18:0:0:0:0,0:16,19:0,0:2,4 23:0:0:0:0,0:20,25:0,0:3,3
chr1 3035504 . C A . LowEVS SOMATIC;QSS=15;TQSS=2;NT=ref;QSS_NT=14;TQSS_NT=2;SGT=CC->AC;DP=59;MQ=51.03;MQ0=2;ReadPosRankSum=-1.19;SNVSB=2.71;SomaticEVS=0.09 DP:FDP:SDP:SUBDP:AU:CU:GU:TU 23:0:0:0:3,5:20,22:0,0:0,0 27:0:0:0:4,4:23,28:0,0:0,0
chr1 3043000 . G T . LowEVS SOMATIC;QSS=21;TQSS=1;NT=ref;QSS_NT=21;TQSS_NT=1;SGT=GG->GT;DP=53;MQ=46.60;MQ0=7;ReadPosRankSum=1.70;SNVSB=1.37;SomaticEVS=0.20 DP:FDP:SDP:SUBDP:AU:CU:GU:TU 20:0:0:0:0,0:0,0:18,24:2,3 22:0:0:0:0,0:0,0:18,22:4,4
| [Those variants are all intergenic][1]. There is no gene symbol because no genes are hit.
EDIT (@genomax) - Actual answer is further below in this chain at https://www.biostars.org/p/331209/#331504
[1]: http://www.ensembl.org/Mus_musculus/Tools/VEP/Ticket?tl=F9VQxvXxHkzscTq9 | biostars | {"uid": 331209, "view_count": 2702, "vote_count": 1} |
I'm trying to create a fasta file with all the viral sequences for a particular gene, with taxonomy information in the record description. So far so good, except that while I can see the general host information on the taxonomy page of each virus (For example this virus: http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=1221449 has "Host: plants" as part of its entry) that information is not part of the taxonomy database information I get when I do an efetch query using the taxonomy db id number. And I really want that host information! It's right there, taunting me. If anyone knows how to get at it, I'd really appreciate it.
Here is my query, in case it matters:
handle2 = Entrez.efetch(db="Taxonomy", id=taxid, retmode="xml")
Edit:
Based on what Neilfws wrote, I wrote up some python to scrape the ncbi taxonomy browser for virus host name, for Ruby is Greek to me. Here it is for any other poor saps who need to do this. Depending on the tax uid (and, one presumes, how frisky a PI was feeling when they entered in their sequence), the taxonomy browser sometimes takes you to a list of species links rather than the taxonomy entry, so this code accounts for that....usually.
```py
from bs4 import BeautifulSoup as BS
from urllib2 import urlopen
import re

for tax_id in listoftaxids:
    address = 'http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=' + tax_id
    page = urlopen(address)
    soup = BS(page)
    find_string = soup.body.form.find_all('td')
    find = 0
    for i in find_string:
        for match in re.findall('Host:\s' + r'<\/em>' + '(.*?)' + r'<', str(i)):
            print match
            find += 1
    if find == 0:
        spec_link = soup.body.form.find_all('a', attrs={'title': 'species'})
        for i in spec_link:
            newaddress = 'http://www.ncbi.nlm.nih.gov' + i.get('href')
            newpage = urlopen(newaddress)
            soup1 = BS(newpage)
            find_string = soup1.body.form.find_all('td')
            for i in find_string:
                for match in re.findall('Host:\s' + r'<\/em>' + '(.*?)' + r'<', str(i)):
                    print match
                    find += 1
            if find == 0:
                print 'SERIOUSLY???'
``` | I'm pretty sure that *Host* is not returned in the XML of an Entrez query. You can get the same XML that efetch returns by visiting a URL like [this one][1] and selecting Send to -> File -> format -> XML, and that does not contain the host.
So all I can suggest is scraping the web page. Which is prone to failure of course, should the HTML change. Currently, there is a single table cell in which information, including the Host, is separated by line breaks. This does not make for easy parsing using *e.g.* XPath.
I came up with this (rough and ready, no error checks or tests) using Nokogiri for Ruby; I'm sure there's something similar in Python.
```
#!/usr/bin/ruby
require 'nokogiri'
require 'open-uri'
def get_host(uid)
  url = "http://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=Info&lvl=3&lin=f&keep=1&srchmode=1&unlock&id=" + uid.to_s
  doc = Nokogiri::HTML.parse(open(url).read)
  data = doc.xpath("//td").collect { |x| x.inner_html.split("<br>") }.flatten
  data.each do |e|
    puts $1 if e =~ /Host:\s+<\/em>(.*?)$/
  end
end
get_host(ARGV[0])
```
Save that as *e.g.* *taxhost.rb*, then supply the taxonomy UID as first argument to the script.
```
$ ruby taxhost.rb 12249
plants
$ ruby taxhost.rb 12721
vertebrates
$ ruby taxhost.rb 11709
vertebrates| human
```
[1]: http://www.ncbi.nlm.nih.gov/taxonomy/?term=txid12249%5BSubtree%5D&report=docsum | biostars | {"uid": 144577, "view_count": 5648, "vote_count": 2} |
Hi
Can we sort BAM files according to read name? Since the normal sorting happens on coordinates, can anyone tell me how to sort a BAM file by read name? I am trying to run HT-Seq count on paired-end SAM files but am receiving warnings, so I have to sort the BAM by read name, create a SAM from it, and then run HT-Seq. | See Picard's SortSam command below. You can sort a sam or bam file by queryname or read name.
http://picard.sourceforge.net/command-line-overview.shtml#SortSam | biostars | {"uid": 78318, "view_count": 21941, "vote_count": 3} |
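
For example (a sketch; exact flags depend on your Picard/samtools versions, and input.bam / namesorted.bam are placeholder names):

    # Picard, using the old one-jar-per-tool style from the page above
    java -jar SortSam.jar INPUT=input.bam OUTPUT=namesorted.bam SORT_ORDER=queryname

    # or with samtools (1.x syntax); -n sorts by read name
    samtools sort -n -o namesorted.bam input.bam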
Hello,
I used RidgePlot to make the plot below. What is the definition of "Identity", and what does "identity" mean here? Is it possible to remove "Identity"?
RidgePlot(Plasma, features = c("UCP2"), group.by="tech", ncol = 3)
Thank you in advance for your great help!
Please see attached file
Thank you!
Best,
Yue
<a href="https://imgbb.com/"><img src="https://i.ibb.co/34jC86Z/Screenshot-from-2020-08-27-03-58-17.png" alt="Screenshot-from-2020-08-27-03-58-17" border="0" /></a> | Hi,
`Identity` is the identity of the cells in scRNA-seq data. It's not specific to this plot; it applies to scRNA-seq data in general. The `identity` is any factor/categorical variable that describes each cell, such as the sample, replicate, condition, or cell cluster to which the cell belongs.
In your case it seems to be `condition` or `sample`. But you can set the `identity` to cell clusters/populations/types, or to any other variable that annotates each cell with a factor/categorical value.
You can see the `identity` of your cells by typing:
Idents(pbmc) # assuming that pbmc is your seurat object
You can set the `identity` of your cells to another variable in your seurat object by doing:
Idents(pbmc) <- "replicate"
Where `replicate` is a variable in your seurat object - "pbmc" - annotating each cell as `rep1` or `rep2` (see the example here: https://satijalab.org/seurat/v3.0/interaction_vignette.html ).
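
For instance, a small sketch tying this back to your plot, assuming `tech` is a metadata column in your `Plasma` object (as in your RidgePlot call):

    Idents(Plasma) <- "tech"              # set the identity to the "tech" variable
    RidgePlot(Plasma, features = "UCP2")  # the y-axis now shows the "tech" groups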
I hope this answers your question,
António | biostars | {"uid": 457845, "view_count": 1490, "vote_count": 1} |
Hi all,
I have RNA-seq data for 3 conditions (control, breast, endometrial). Some of the samples where collected and sequenced in two different batches. I want to correct for batch effects using EdgeR and GLMs. I did read the EdgeR vignette, and specifically section 4.5 but I'm still a bit confused with the contrasts matrix and if I'm interpreting the results correctly.
This is my code:
    d$samples$group = relevel(d$samples$group, ref="control")  # relevel first, so "control" becomes the intercept
    design = model.matrix(~group+batch, data=d$samples)
rownames(design) = colnames(d)
d <- estimateGLMCommonDisp(d, design, verbose=TRUE)
d <- estimateGLMTrendedDisp(d, design)
d <- estimateGLMTagwiseDisp(d, design)
fit = glmFit(d, design)
lrt = glmLRT(fit, contrast=c(0,0,-1,0))
This is my design matrix
intercept groupbrc groupendo batch2
sample.1 1 0 1 0
sample.2 1 0 1 0
sample.3 1 0 1 0
sample.4 1 0 0 0
sample.5 1 1 0 0
sample.6 1 1 0 0
sample.7 1 1 0 1
sample.8 1 1 0 1
Basically what I want to do is pairwise comparisons between the treatments (brc vs normal, endo vs normal, brc vs endo) while accounting for batch effects at the same time. I understand that the (intercept) corresponds to the normal condition, but what I don't understand is what the last column (batch2) means, and whether I should include it in my contrasts. The contrasts I've used are the following; I get DE genes but I'm not sure if I'm accounting for batch effects correctly
Brc VS normal `lrt = glmLRT(fit, contrast=c(0,1,0,0))`
Endo VS Normal `lrt = glmLRT(fit, contrast=c(0,0,1,0))`
Brc VS Endo `lrt = glmLRT(fit, contrast=c(0,1,-1,0))`
Thank you! | Brc vs. normal: `lrt = glmLRT(fit, coef=2)`
Endo vs. normal: `lrt = glmLRT(fit, coef=3)`
I would do **Brc vs. Endo** the same as you did, since you do want to contrast two model coefficients. There's no need to use a contrast when you're just wanting to test whether a model coefficient is itself significant. | biostars | {"uid": 102036, "view_count": 6454, "vote_count": 6} |
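
Putting it together, a minimal sketch of all three comparisons, assuming your design matrix (columns: intercept, groupbrc, groupendo, batch2):

    lrt.brc  <- glmLRT(fit, coef=2)                  # brc vs. normal
    lrt.endo <- glmLRT(fit, coef=3)                  # endo vs. normal
    lrt.b.e  <- glmLRT(fit, contrast=c(0,1,-1,0))    # brc vs. endo

Because the batch2 coefficient stays in the fitted model, batch is adjusted for automatically in every one of these comparisons; you never put it in a contrast unless you want to test the batch effect itself.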
Hello!
I am very new to biopython and I am trying to accomplish what I think is a simple task: I would like to remove sequences from a protein alignment that do not contain a particular residue at a specified position. I would like to be able to input a protein alignment in fasta format and then output a new alignment where all the sequences that do not meet my criteria are removed
For example: My input protein alignment contains sequences that have a mixture of residues at position 137. I would like to output a new alignment that contains only sequences that have either an arginine or a valine at position 137.
Just a bit of additional clarification: I am sequencing an amplicon of a functional gene and generating protein sequence alignments using RDP's fungene pipeline. I want to further screen the alignment by eliminating any sequences that do not contain a selection of conserved residues at various positions.
Thank you very much for your time.
-J | python script.py <file.fasta>
- read in the sequences with `from Bio import SeqIO` and store them in a dictionary. Print the sequences to a file.
- align the sequence file with a tool of your choice.
- read in the alignment using `from Bio import AlignIO`
- iterate through the alignment and check the residues you are interested in (see the sketch below).
- make a list of the sequences to keep or throw away. Delete your alignment file.
- print the remaining sequences and re-do the alignment.
Hope I haven't confused things.
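
For steps 3-6, here is a minimal sketch, assuming an aligned FASTA called `aln.fasta` and that position 137 refers to the 1-based alignment column:

```py
from Bio import AlignIO, SeqIO

alignment = AlignIO.read("aln.fasta", "fasta")
col = 137 - 1                       # convert to a 0-based index
wanted = {"R", "V"}                 # arginine or valine
kept = [rec for rec in alignment if str(rec.seq[col]).upper() in wanted]
SeqIO.write(kept, "filtered.fasta", "fasta")
print("kept %d of %d sequences" % (len(kept), len(alignment)))
```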
Hi, all!
I'm trying to do the pathway analysis from bovine RNA-seq data using 'gage' in R, but I couldn't find the 'kegg gene set' for bovine.
If anyone knows how to obtain it, please help me!
Thank you! | In the gage package, you can use the kegg.gsets function to generate KEGG pathway gene set data for ~3000 KEGG species, and the go.gsets function to generate GO gene set data for 19 major species.
Unlike the ones provided with gage and gageData, these gene set data are the most up-to-date ones.
For details, check:
?kegg.gsets
?go.gsets
The main vignette of gage also shows examples of these functions:
http://bioconductor.org/packages/release/bioc/vignettes/gage/inst/doc/gage.pdf
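
For example, a quick sketch for bovine (assuming internet access to KEGG; "bta" is the KEGG organism code for Bos taurus):

    library(gage)
    kg.bta <- kegg.gsets(species = "bta")
    kegg.gs <- kg.bta$kg.sets[kg.bta$sigmet.idx]  # keep signaling + metabolic pathways

You can then pass kegg.gs as the gsets argument of gage().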
| biostars | {"uid": 193695, "view_count": 1747, "vote_count": 1} |
What are ‘reads’ in mean reads per cell? | If this question is about 10x `cellranger` data then details can be found [at this link][1].
Mean Reads per Cell = The total number of sequenced reads divided by the estimated number of cells.
[1]: https://support.10xgenomics.com/single-cell-gene-expression/software/pipelines/latest/output/metrics | biostars | {"uid": 320932, "view_count": 3925, "vote_count": 2} |
Hi all,
I know this may sound simple, but I am really having trouble finding a way to extract reads that fall **outside** a given region. I tried the -U option but it doesn't work (or maybe I am not using it correctly). I also didn't find any example that uses the -U option of samtools view.
I am using this command:
$samtools view sample.bam 2:33050509-33154206 -U without-region.sam
This is paired-end data and I want to retain the other paired read that falls outside the region, so grepping for reads other than the one in my specified region won't work either.
Thanks in advance. | Hi all,
I found the answer to this issue; it's a very simple fix. Sorry for replying so late; I got busy with some other stuff and totally forgot to reply. Anyway, if one wants to select reads outside of a given region, you can use the -U option, but as I said earlier it doesn't work with the range specified on the command line. But if you give a bed file containing the region and then provide the -U option, it will work.
For example:
Create bedfile (my.bed):
2 33050509 33154206 LINC00486
and now use this bed file to extract the reads outside the regions in the given bed file:
$samtools view sample.bam -L my.bed -U without-bed > /dev/null
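
In other words, the two output streams are routed like this:

    # reads overlapping my.bed  -> stdout, which > /dev/null throws away
    # all remaining reads       -> the file "without-bed"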
To my knowledge, with this approach a read that has even a single base in the specified region is counted as within the region; the read just needs to overlap the start or end of the region (even if by a single base).
Let me know if you have any questions.
Thanks. | biostars | {"uid": 183921, "view_count": 3211, "vote_count": 1} |
<p>Hi guys, </p>
<p>I am not sure, in terms of sequencing, what is referred to as color space and nucleotide space. What is the difference between a normal reference genome and a color space genome? Does it have something to do with base calling, i.e. that the original data is retained without the conversion to textual nucleotide data? Which do we use in normal ChIP-seq and RNA-seq analysis?</p>
<p>There's a kind of similar question <a href="http://www.biostars.org/post/show/12075/color-space-data-in-solid/">here</a> but the answer is missing the link.</p>
<p>Thanks</p>
| <p>Short answer: color space refers to the native format of ABI SoLID technology. Color space is translated to nucleotide, or base space (same thing) so that it can be understood.</p>
<p>That technology is not growing in market share, so in the next few years it will become less common. ABI is putting most of their effort behind the Ion Torrent now.</p>
| biostars | {"uid": 44269, "view_count": 12662, "vote_count": 2} |
Hi,
Is it possible to output only the dosages from a dosage VCF file generated by minimac?
Input format is
##fileformat=VCFv4.1
##filedate=2017.7.5
##source=Minimac3
##contig=<ID=29>
##FORMAT=<ID=GT,Number=1,Type=String,Description="Genotype">
##FORMAT=<ID=DS,Number=1,Type=Float,Description="Estimated Alternate Allele Dosage : [P(0/1)+2*P(1/1)]">
##INFO=<ID=AF,Number=1,Type=Float,Description="Estimated Alternate Allele Frequency">
##INFO=<ID=MAF,Number=1,Type=Float,Description="Estimated Minor Allele Frequency">
##INFO=<ID=R2,Number=1,Type=Float,Description="Estimated Imputation Accuracy">
##INFO=<ID=ER2,Number=1,Type=Float,Description="Empirical (Leave-One-Out) R-square (available only for genotyped variants)">
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT 1242658141 1364665948 1242658615
29 11 Chr29:11 A G . PASS . GT:DS 0|0:0.193 0|0:0.193 0|0:0.193
Is there an option in vcftools to output the dosages for each genotype separately, without the genotype call? | bcftools query -f "%CHROM\t%POS[\t%DS]\n" minimac3.dose.vcf.gz
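
If you also want the variant ID and alleles, plus a header line of sample names, something like this should work (the -H flag prints a header; double-check against your bcftools version):

    bcftools query -H -f '%CHROM\t%POS\t%ID\t%REF\t%ALT[\t%DS]\n' minimac3.dose.vcf.gz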
| biostars | {"uid": 265400, "view_count": 6147, "vote_count": 1} |
How can I download all genes inside all gene sets in C6 (oncogenic signature gene sets, 189 gene sets) from the Molecular Signatures Database (MSigDB)? I want to get all genes inside the 189 gene sets into a data frame in R.
Thanks in advance | ```r
library(msigdbr)
library(dplyr)  # needed for %>% and distinct()
genesets = msigdbr(species = "Homo sapiens", category = "C6", subcategory = NULL)
View(genesets)
unique_genes <- genesets %>% distinct(gene_symbol)
View(unique_genes)
``` | biostars | {"uid": 9531140, "view_count": 957, "vote_count": 1} |
Hello everyone,
Is it possible to get the telomeric locations in a file for every chromosome? | [UCSC Table Browser][1]
Select
group: All Tables
table: gap
Then click the filter "create" button and set "type" to match "telomere". Then submit and get the output.
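
As a command-line alternative, the same table can be queried from UCSC's public MySQL server (a sketch assuming hg38; use -D hg19 etc. for other assemblies):

    mysql --user=genome --host=genome-mysql.soe.ucsc.edu -A -N -D hg38 \
      -e 'SELECT chrom, chromStart, chromEnd FROM gap WHERE type="telomere";'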
[1]: https://genome.ucsc.edu/cgi-bin/hgTables | biostars | {"uid": 261132, "view_count": 2149, "vote_count": 1} |
Hello everyone,
I have a question regarding my single-cell RNA-seq data. I have the following paired-end data in `fastq.gz` format.
Read1 (contains 6bp UMI, followed by 6bp cell barcode info and the rest is a polyT stretch):
@J00182:79:HV2WWBBXX:6:1101:11160:38873 1:N:0:ACAGTG
GAGAAGACAGTGTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTCTT
+
AAAFFJJJJJJJJFJJJJJJJJJJJJJJJJJJJJJJJJJ--A-----7AJJFAF-AJJJJJJJJJJJ<A<----A
Read2 is the normal read that I am using for mapping to the reference, and the corresponding paired-end mate of the above read looks like this:
@J00182:79:HV2WWBBXX:6:1101:11160:38873 2:N:0:ACAGTG
GCATACTTATTTCCAAACTTTTGGAAAAAGCATAATTTGACAAAAAAGAATACAATTTTTTGCTGTTTCAACCAC
+
A<<AFJFJJJJJJFJJJJJFJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ
Now I would like to append the cell barcode and UMI info from the read1 sequence in front of the header of my read2 in the following format- `@6bpCellbarcode_6bpUMI#Read2header` (with an underscore in between Cellbarcode and UMI and a hash between UMI and the rest of the header).
Example output-
@ACAGTG_GAGAAG#J00182:79:HV2WWBBXX:6:1101:11160:38873 2:N:0:ACAGTG
GCATACTTATTTCCAAACTTTTGGAAAAAGCATAATTTGACAAAAAAGAATACAATTTTTTGCTGTTTCAACCAC
+
A<<AFJFJJJJJJFJJJJJFJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ
`ACAGTG` is the cell barcode and `GAGAAG` is the UMI. Note that the order is flipped here in the output as Read1 first contains UMI and later the cell barcode while the output I need is vice versa.
Can someone please tell me how to do that?
as usual, thank you so much!
| Do yourself a favour and simply use `umi_tools extract`. The headers won't be formatted how you want, but will instead be formatted in a way more compatible with other tools. | biostars | {"uid": 337137, "view_count": 4193, "vote_count": 1} |
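
For your layout (read 1 = 6 bp UMI followed by 6 bp cell barcode), a sketch would be:

    umi_tools extract --bc-pattern=NNNNNNCCCCCC \
                      --stdin read1.fastq.gz --read2-in read2.fastq.gz \
                      --stdout read1.extracted.fastq.gz --read2-out read2.extracted.fastq.gz

Here N marks UMI positions, C marks cell-barcode positions, and the file names are placeholders; the barcode and UMI get appended to the read names of both mates (with underscore separators rather than your `_`/`#` scheme).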
Hello,
I want to ask a basic question. I read the README on the Ensembl FTP site for the hg38 reference, and it seems there are several types of DNA files: dna, dna_sm, and dna_rm. If I want to align some fastq files against Ensembl, which kind of file should I choose? I understand that I should merge the per-chromosome fasta files into one single fasta file, then create an index and run the alignment. Should I merge all of the files that I download from Ensembl, or should I choose only what I need? If I need to choose, how do I decide what I need based on my fastq? Thank you for your help and explanation. By the way, I have already merged all of the fasta files into one single file (including the ones with PATCH in the name) and I'm currently aligning my data. | The answer is to use the primary assembly: ftp://ftp.ensembl.org/pub/release-79/fasta/homo_sapiens/dna/Homo_sapiens.GRCh38.dna.primary_assembly.fa.gz (dna is the unmasked sequence, dna_sm is soft-masked and dna_rm is hard-masked; for alignment, the unmasked or soft-masked primary assembly is the usual choice, and the primary assembly already excludes the patches).
I have generated a gVCF (genomic VCF) with GATK using this command line:
gatk HaplotypeCaller -R GRCh38.p13.genome.fa.gz -I bam.bam -D dbsnp_146.hg38.vcf.gz -O genome.g.vcf.gz -ERC GVCF --output-mode EMIT_ALL_ACTIVE_SITES
I'm using GRCh38 patch 13, my BAM file is also based on it.
I want to analyze my genotype for rs4633: https://www.snpedia.com/index.php/Rs4633
As I understand from SNPedia, the reference allele is "C". As I have two alleles, one from my mother and one from my father, I can have either (C;C), (C;T) or (T;T).
In my gVCF I have the corresponding line:
CHROM POS ID REF ALT QUAL FILTER INFO FORMAT bar
chr22 19962712 rs4633 C T,<NON_REF> 1117.03 . DB;DP=29;ExcessHet=3.0103;MLEAC=2,0;MLEAF=1.00,0.00;RAW_MQandDP=95148,29
The reference allele in chromosome 22 at position 19962712 is "C". But what exactly does "T,<NON_REF>" mean in the ALT column? How should I interpret this? How can I extract the two alleles (father/mother) from this line and map them to the ones from SNPedia: (C;C), (C;T) or (T;T)?
To ask the question in a different way: how can I obtain both my alleles from the ALT column?
To add to the confusion, here is another example. I want to know my genotype for rs1801131. SNPedia says: https://www.snpedia.com/index.php/rs1801131
Possible alleles: (A;A), (A;C), (C;C)
Why the "letters" are different than rs4633? Is it because of A=T and G=C?
The line in the gVCF is:
CHROM POS ID REF ALT QUAL FILTER INFO FORMAT bar
chr1 11794419 rs1801131 T G,<NON_REF> 1193.60 . BaseQRankSum=0.306;DB;DP=54;ExcessHet=3.0103;MLEAC=1,0;MLEAF=0.500,0.00;MQRankSum=-4.833;RAW_MQandDP=151742,54;ReadPosRankSum=1.170 GT:AD:DP:GQ:PGT:PID:PL:PS:SB 0|1:19,35,0:54:99:0|1:11794400_G_A:1201,0,583,1258,688,1945:11794400:12,7,20,15
So reference allele "T" and "alternative" is "G,<NON_REF>". So what are my two alleles which I can map to the possible alleles from SNPedia: (A;A), (A;C), (C;C)?
My assumption is that the REF column is the allele from GRCh38 and ALT is somehow my own, but there should be two - one from mom and one from dad. What am I doing wrong?
| Two things to help you:
> But what exactly does "T,<NON_REF>" mean in the ALT column?
You have only made a `g.vcf` file with `HaplotypeCaller`. This is a file that is 'poised' to be genotyped, but the genotyping itself has not yet been performed. You need to use the `g.vcf` file as input to the genotyping step of GATK, using the [GenotypeGVCFs][1] tool. This will give you the genotypes that you are expecting to see.
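
For example, with the files from your HaplotypeCaller command (a sketch; adjust paths to your setup):

    gatk GenotypeGVCFs -R GRCh38.p13.genome.fa.gz \
        -V genome.g.vcf.gz -O genome.genotyped.vcf.gz

In the resulting VCF the `<NON_REF>` placeholder disappears, and the GT field gives you your two alleles (e.g. `0/1` means one REF and one ALT allele, i.e. C/T for rs4633).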
> Why the "letters" are different than rs4633? Is it because of A=T and G=C?
Yes. The MTHFR gene is transcribed on the reverse strand of DNA, and SNPedia is showing the alleles corresponding to the mRNA. In the reference sequence these alleles will be the complementary alleles. From SNPedia:
> rs1801131 is a SNP in the MTHFR gene, representing an A>C mutation at mRNA position 1298, resulting in a glu429-to-ala (E429A) substitution (hence this SNP is also known as A1298C or E429A).
[1]: https://gatk.broadinstitute.org/hc/en-us/articles/360036351612-GenotypeGVCFs | biostars | {"uid": 437718, "view_count": 1131, "vote_count": 1} |
I'm exploring the results from Cuffdiff:
1. `.gene_differential_expression_testing`
2. `.transcript_differential_expression_testing`
For some genes I see multiple entries in 2), with opposite log2 directions, such as:
```
TMEM51 chr1:15479027-15546974 A B OK 0.000151584 0.563561 11.8602
TMEM51 chr1:15479027-15546974 A B OK 0.741354 3.92E-05 -14.2062
TMEM51 chr1:15479027-15546974 A B OK 2.39194 0.460979 -2.37541
```
Moreover, the gene TMEM51 is missing in 1)... is that normal?
In 2), are these entries with the same gene name different isoforms? If yes, how can I tell whether an isoform is a new or a known one? | In your transcripts file, everything should have a TCONS ID. If there are two entries with the same gene name or gene ID, chances are they'll have different TCONS IDs, implying they're different isoforms of the gene.
If you're seeing the same gene name in the Genes differential expression file from cuffdiff, then that's more concerning, as everything should be collapsed down to an XLOC ID, or essentially a locus that encapsulates the gene.
If you want to know whether it's 'novel' under the Tuxedo method of detection, you should look at the class code it's been given; better yet, load up some of the alignments in IGV and judge for yourself. In my experience, I'd take what cufflinks is calling 'novel' with a bucket of salt.
<p>I am still new to bioinformatics and I have not yet fully understood the definition of contig. I have read a few explanations and what I understand is that contigs are fragments of the genome for which we are certain that the order of the bases is correct. Then, we make scaffolds out of the contigs and the goal is to get one scaffold to represent the entire genome.</p>
<p>Right now, I am trying to obtain the full reference genome in FASTA format of Streptococcus pneumoniae BR1064. I found <a href="http://www.ebi.ac.uk/ena/data/view/GCA_000203735#WGS%20Sets_GCA_000203735.2">this</a> at ENA, and in the top right category under "Send Feedback" it says "Genome Representation: full". From there, one can get over to the <a href="http://www.ebi.ac.uk/ena/data/view/AFBZ01000001-AFBZ01000245">assembly contigs</a>, and there are 245 contigs. Can I just put all these contigs together and obtain the full genome of the organism? If so, is there a particular way to do it? Should it just be in increasing numerical order?</p>
| You're right that contigs are just fragments of the genome, and that scaffolding is the next step in assembly. Usually, this is done using genetic maps and SNPs (or other markers) so that the contigs can be anchored along that genetic map. Here's one recent example with the *Brassica oleracea* genome: [The *Brassica oleracea* genome reveals the asymmetrical evolution of polyploid genomes][1]
Looking at the [publication][2] for your genome, it doesn't look like they performed scaffolding, maybe because there is nothing to use as a reference to scaffold against. Therefore, it is likely that the 245 contigs are just numbered by the order they fell out of the assembly program, an order which doesn't reflect the 'real' genome. In that case, I wouldn't concatenate them.
What do you want to do with the contigs? If it's just SNP calling or something like that, I would leave the sequences as contigs.
[1]: http://www.nature.com/ncomms/2014/140523/ncomms4930/full/ncomms4930.html#methods
[2]: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3133277/ | biostars | {"uid": 105901, "view_count": 4657, "vote_count": 1} |
In the Cancer Gene Census (CGC) dataset ([cgc link][1]), there is a column named "Role in cancer". The values of this attribute are 'fusion', 'oncogene', 'oncogene, fusion', 'oncogene, TSG', 'TSG, fusion', 'oncogene, TSG'.
What does 'fusion' mean? Does it mean a mixture, or a gene that sometimes acts as an OG and sometimes as a TSG?
[1]: http://cancer.sanger.ac.uk/census | I don't usually point to wikipedia, but this is not a bad description: https://en.wikipedia.org/wiki/Fusion_gene
| biostars | {"uid": 295804, "view_count": 1425, "vote_count": 1} |
I am looking for a *Pisum sativum* whole-proteome FASTA file. Where can I download it from UniProt?
Is there a way to download it manually, and is there an R package for this? | From [**NCBI**][1]
From [**Ensembl**][2]
UniProt does not seem to have this proteome at the moment
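
To grab the file from within R, base R's download.file() works (a sketch using the Ensembl Plants URL from link [2] below):

    url <- "https://ftp.ensemblgenomes.ebi.ac.uk/pub/plants/release-55/fasta/pisum_sativum/pep/Pisum_sativum.Pisum_sativum_v1a.pep.all.fa.gz"
    download.file(url, destfile = "Pisum_sativum.pep.all.fa.gz", mode = "wb")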
[1]: https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/024/323/335/GCF_024323335.1_CAAS_Psat_ZW6_1.0/GCF_024323335.1_CAAS_Psat_ZW6_1.0_protein.faa.gz
[2]: https://ftp.ensemblgenomes.ebi.ac.uk/pub/plants/release-55/fasta/pisum_sativum/pep/Pisum_sativum.Pisum_sativum_v1a.pep.all.fa.gz
Is there a way to convert from genome build to Ens.db version? (ex. grch38 -> EnsDb.Hsapiens.v87) | [A table from Ensembl][1] showing assemblies present in each release.
[1]: https://www.ensembl.org/info/website/archives/assembly.html | biostars | {"uid": 417789, "view_count": 464, "vote_count": 1} |
I am confused about whether padj and FDR are the same. I am in the middle of an analysis and unsure whether I have an FDR or not. I have seen some explanations but have not been able to get a clear answer. Please share your views.
Thanks | The "false-discovery rate" is the fraction of positives that are false positives at a given p-value threshold. It is a property of the threshold, not a property of a gene. So, if you do 1000 tests and get 100 positives at p<0.05, your FDR is 50% (as 50 false positives would be expected in 1000 tests at p<0.05).
Thus, technically speaking, it doesn't make sense to say a gene has an FDR. This is why many tools will use the term "adjusted p-value" or "q-value". The adjusted p-value is the FDR your experiment would have if you set the threshold at the p-value for this gene. Thus, in our example above, a gene with a p-value of 0.05 would have a padj/qvalue of 50% because if you set the threshold at 0.05, you would have a 50% FDR.
| biostars | {"uid": 462897, "view_count": 7367, "vote_count": 2} |
Hi all,
I downloaded hg19.p13 (GCF_000001405.25) fasta files from [here][1]. The release date is June 2013.
I need gene annotation files for my downstream analysis. When I checked the UCSC website, they have the first release, hg19 (GCA_000001405.1), released Feb 2009, plus all the associated files. If I use the UCSC annotation files for hg19.p13, will that be alright? I think it is good to use the latest release of a genome. Since hg19 has all the associated files, we decided to use hg19 instead of hg38. Is an annotation file available for hg19.p13?
My downstream pipeline includes both CNA (bowtie) and RNA (tophat, cufflinks, cuffdiff) analysis.
Any suggestions?
Thank you,
Deeps
[1]: http://www.ncbi.nlm.nih.gov/projects/genome/assembly/grc/human/data/index.shtml | Those patch numbers (p.13) don't affect the primary assembly. The sequence you actually use does not differ from hg19.p1 to hg19.p13.
That is why it's called a "freeze". Can you imagine the chaos that would ensue if the genomic sequence changed every few months?
https://www.biostars.org/p/45788/ | biostars | {"uid": 102238, "view_count": 2629, "vote_count": 2} |