INSTRUCTION | RESPONSE | SOURCE | METADATA
---|---|---|---|
Hi all,
I have been trying to use Mutect to compare results from Varscan and other tools. To run MuTect, pre-processing from GATK and Picard tools is necessary.
1\. **Mapped reads using BWA.**
2\. **Convert to sorted BAM using PICARD**
```
java -Xmx4g \
-Djava.io.tmpdir=/tmp \
-jar SortSam.jar \
SO=coordinate \
INPUT=Trimmed_ERR361938_trimmed_bwa.sam \
OUTPUT=Test.bam \
VALIDATION_STRINGENCY=LENIENT \
CREATE_INDEX=true
```
3\. **Mark Duplicates using PICARD**
```
java -Xmx4g \
-Djava.io.tmpdir=/tmp \
-jar picard-tools-1.119/MarkDuplicates.jar \
INPUT=Test.bam \
OUTPUT=Test.marked.bam \
METRICS_FILE=Test.dup_metrics.txt \
VALIDATION_STRINGENCY=LENIENT \
CREATE_INDEX=true
```
4\. **Realign along INDEL using GATK**
```
java -Xmx4g \
-jar GenomeAnalysisTK.jar \
-T RealignerTargetCreator \
-R /steno-internal/chirag/data/indexGenome/hg19/bwa/hg19.fa \
-o input.bam.list \
-I input.marked.bam
```
**NOW I GET ERROR**
```
##### ERROR
##### ERROR MESSAGE: SAM/BAM file input.marked.bam is malformed: SAM file doesn't have any read groups defined in the header. The GATK no longer supports SAM files without read groups
##### ERROR
```
There is this script which should fix this, but I am not sure about some of the parameters used here:
java -jar ~/unixTools/picard-tools-1.119/AddOrReplaceReadGroups.jar
These parameters need to be used (a sketch invocation with placeholder values follows the list):
- RGLB=String
- LB=String Read Group Library Required.
- RGPU=String
- PU=String Read Group platform unit (eg. run barcode) Required.
- RGSM=String
- SM=String Read Group sample name Required.
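For example, an invocation sketch (all read-group values below are only placeholders):
```
java -Xmx4g -jar ~/unixTools/picard-tools-1.119/AddOrReplaceReadGroups.jar \
    INPUT=input.marked.bam \
    OUTPUT=input.marked.rg.bam \
    RGID=run1 \
    RGLB=lib1 \
    RGPL=ILLUMINA \
    RGPU=unit1 \
    RGSM=sample1 \
    VALIDATION_STRINGENCY=LENIENT \
    CREATE_INDEX=true
```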
How do I get the information for these parameters, given that I am analyzing many published reads?
Are there other ways to fix this step?
Thanks in advance! | You have to specify the read group from the beginning using the `-R` option of bwa:
> `-R STR` Complete read group header line. '\t' can be used in STR and will be converted to a TAB in the output SAM. The read group ID will be attached to every read in the output. An example is '@RG\tID:foo\tSM:bar'. [null]
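For example, a minimal sketch along those lines (sample and library names are placeholders; the older `aln`/`samse`/`sampe` workflow takes the same string via its lowercase `-r` option):

    bwa mem -R '@RG\tID:foo\tSM:bar\tLB:lib1\tPL:ILLUMINA' hg19.fa reads.fastq > aligned_with_rg.sam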
Alternatively, you can add a read group to your existing BAMs using Picard AddOrReplaceReadGroups: http://broadinstitute.github.io/picard/command-line-overview.html#AddOrReplaceReadGroups | biostars | {"uid": 115819, "view_count": 24229, "vote_count": 6}
Hi,
I have BAM files produced by mapping SOLiD reads with the LifeScope tool. I would like to know the length of the reads that were mapped to the reference genome, based on the information present in the BAM files. Is there a tool that can give me these stats from BAM files?
Thanks!
| A little shorter version:
samtools view test.bam | awk '{print length($10)}' | head -1000 | sort -u
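If your samtools build has the `stats` subcommand, the full read-length distribution can also be pulled out directly (a sketch, not part of the original answer):

    samtools stats test.bam | grep ^RL | cut -f 2-    # columns: read length, count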
| biostars | {"uid": 65216, "view_count": 48209, "vote_count": 7} |
Hello,
I've been given some data to perform differential expression on, and in the process of QC-ing the resultant count data, I'm seeing that the library sizes have pretty big discrepancies between the 2 samples shown below. I know a good Illumina run generates between 10-40 million reads, but is it normal for such runs to produce starkly different total read counts like this? i.e.: is this an acceptable library size?
I have conducted PCA on this particular grouping and found that P70F20 is a significant outlier and removed it, so I'm also curious how much of that variability is potentially attributable to the library size? I believe DESeq uses TPM normalization, and that should control for this difference?
Any help is appreciated, I have never seen this magnitude of difference in a single grouping before. Fastqc was perfect as well, adapters were trimmed with cutadapt, alignment and counting was done using the Rsubread package in R.
Thanks!
![Barplot of Library Sizes, with anticipated 20 million reads as hline][1]
[1]: /media/images/15d331b0-717d-4da2-9642-6e235d87 | Differences in depth are not *per se* a problem. It is only a problem when depth is so low that many genes have zeros (dropouts) due to under-sequencing. Zeros will remain zeros, regardless of the normalization method. Usually you run PCA first to see whether this sample manifests as an outlier. If so, you can either remove it or downweight its influence. The latter is implemented in the limma package with the `voomWithQualityWeights` function. | biostars | {"uid": 9551775, "view_count": 556, "vote_count": 1}
Hi, I used "macs2" to call peaks from my data of ChIP-seq.
This is not my first time to use macs2, but still found myself not being able to grasp what "gappedPeak" stands for in the OUTPUT of macs2.
"NAME_peaks.narrowPeak" , "NAME_peaks.broadPeak" are quite intuitive,
"narrowPeak" means narrow peaks which is suitable for TFs
"broadPeak" means broad peaks which is suitable for histone modifications spanning wider ranges of genomic regions.
But how about "gappedPeak"?
In GitHub of macs2:
"NAME_peaks.gappedPeak is in BED12+3 format which contains both the broad region and narrow peaks."
it seems gappedPeak contains both categories (narrow & broad); if that is the case, where do the gaps come from?
and
https://genome.ucsc.edu/FAQ/FAQformat#format14
for ENCODE gapped peaks ( I assumed that those peaks are called using macs) it explained:
"regions of signal enrichment based on pooled, normalized (interpreted) data where the regions may be spliced or incorporate gaps in the genomic sequence"
"regions may be spliced or incorporate gaps" I could understand RNA being spliced, but for DNA?
Anyone could explain?
[Jun@host workingdirectory]$ less Histonemark_cellA_peaks.broadPeak
Chrom ChromStart ChromEnd name score strand signalValue pValue qValue
chr1 4775387 4776044 Histonemark_cellA_peak_1 41 . 3.22266 5.57941 4.13770
chr1 4847525 4848363 Histonemark_cellA_peak_2 38 . 3.03717 5.39983 3.82081
chr1 5073148 5073709 Histonemark_cellA_peak_3 31 . 3.02635 4.72286 3.10498
[Jun@host workingdirectory]$ less Histonemark_cellA_peaks.gappedPeak
Chrom ChromStart ChromEnd name score strand thickStart thickEnd itemRgb blockCount blockSizes blockStarts signalValue pValue qValue
chr1 4775387 4776044 Histonemark_cellA_peak_1 41 . 4775387 4776044 0 2 645,1 0,656 3.22266 5.57941 4.13770
Of course, to understand a file, it is always better to look inside it.
By looking inside the broadPeak and gappedPeak files, I realized that the key is to understand what "thickStart"/"thickEnd" mean.
Then I found a [post trying to address that][1],
but I still found myself unable to understand.
Especially the explanation "Thickstart and thickend are the left and the right boundaries of the coding sequence."
by [Ido Tamir][2] made me more confused. What does "boundaries of the coding sequence" mean in the context of ChIP-seq?
[1]: https://www.biostars.org/p/73452/
[2]: https://www.biostars.org/u/2259/ | GappedPeak is a representation of narrow peaks as blocks over a broad peak. To trick the visualisation tools, they use the same format as gene models, but use the narrow peak coordinates as exon coordinates and the broad peak coordinates as the coding-region coordinates. | biostars | {"uid": 242501, "view_count": 4969, "vote_count": 2}
Dear All,
I have a VCF file and I want to change part of the header:
```
##contig=<ID=1,length=195471971>
##contig=<ID=10,length=130694993>
##contig=<ID=11,length=122082543>
##contig=<ID=12,length=120129022>
##contig=<ID=13,length=120421639>
##contig=<ID=14,length=124902244>
##contig=<ID=15,length=104043685>
##contig=<ID=16,length=98207768>
##contig=<ID=17,length=94987271>
##contig=<ID=18,length=90702639>
##contig=<ID=19,length=61431566>
##contig=<ID=2,length=182113224>
##contig=<ID=3,length=160039680>
##contig=<ID=4,length=156508116>
##contig=<ID=5,length=151834684>
##contig=<ID=6,length=149736546>
##contig=<ID=7,length=145441459>
##contig=<ID=8,length=129401213>
##contig=<ID=9,length=124595110>
##contig=<ID=MT,length=16299>
##contig=<ID=X,length=171031299>
##contig=<ID=Y,length=91744698>
```
to
```
##contig=<ID=chr1,length=195471971>
##contig=<ID=chr10,length=130694993>
##contig=<ID=chr11,length=122082543>
##contig=<ID=chr12,length=120129022>
##contig=<ID=chr13,length=120421639>
##contig=<ID=chr14,length=124902244>
##contig=<ID=chr15,length=104043685>
##contig=<ID=chr16,length=98207768>
##contig=<ID=chr17,length=94987271>
##contig=<ID=chr18,length=90702639>
##contig=<ID=chr19,length=61431566>
##contig=<ID=chr2,length=182113224>
##contig=<ID=chr3,length=160039680>
##contig=<ID=chr4,length=156508116>
##contig=<ID=chr5,length=151834684>
##contig=<ID=chr6,length=149736546>
##contig=<ID=chr7,length=145441459>
##contig=<ID=chr8,length=129401213>
##contig=<ID=chr9,length=124595110>
##contig=<ID=chrM,length=16299>
##contig=<ID=chrX,length=171031299>
##contig=<ID=chrY,length=91744698>
```
and also change the `#CHROM` column correspondingly.
I am wondering if there are any tools or easy ways to achieve this. Thank you in advance. | I have **not tested/validated** the commands, but I think you need two steps: 1) change the header(s)
bcftools view --header-only $INPUT_FILE | sed 's/##contig=<ID=/##contig=<ID=chr/' | sed 's/##contig=<ID=chrMT/##contig=<ID=chrM/' > $OUTPUT_FILE
and 2) change the data field (as questioned by [@charlesberkn][1]).
bcftools view --no-header $INPUT_FILE | sed 's/^/chr/' | sed 's/^chrMT/chrM/' >> $OUTPUT_FILE
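A different route, sketched here on the assumption that your bcftools is recent enough to have the option: `bcftools annotate --rename-chrs` renames the contig header lines and the CHROM column in one pass, given a mapping file with one "old new" pair per line (e.g. "1 chr1", ..., "MT chrM"):

    bcftools annotate --rename-chrs chr_map.txt -Oz -o renamed.vcf.gz $INPUT_FILE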
PS: Thanks to Pierre for pointing toward the special handling of chrM(T)!
[1]: https://www.biostars.org/u/20941/ | biostars | {"uid": 160863, "view_count": 9280, "vote_count": 2} |
Hi all,
Is there a tool to calculate the combined length of transcripts which map to a given interval, for example as a function of bedtools or htseq? I can calculate the number of reads mapping to a particular gene using these tools, but not the length of the gene segment which has been covered. What I want to know is: which genes in a gff or bed file have alignments from my sorted bam file covering more than 100 bp of their entire length?
Thanks,
James | BEDOPS [*bedmap --bases* and *bedmap --bases-uniq*](http://bedops.readthedocs.org/en/latest/content/reference/statistics/bedmap.html#element-and-overlap-statistics) offer total and distinct counts of bases of map elements (*e.g.*, transcripts or reads) which overlap ("map" to) a reference interval (*e.g.*, a set of genes):
$ bedmap --echo --bases genes.bed reads.bed \
> total_read_base_count_over_genes.bed
$ bedmap --echo --bases-uniq genes.bed reads.bed \
> distinct_read_base_count_over_genes.bed
It's not clear which option you want, but the *--bases-uniq* option would effectively merge the overlapping reads into a contiguous region, and then give the length of the overlapping part of the contiguous region. The *--bases* option simply gives the total sum of the length of the overlapping part of each overlapping read.
Regardless of which option you pick, if you add *--delim '\t'*, you can easily pipe to *awk* to quickly filter mapped results, where genes have alignments covering 100 bp or more:
$ bedmap --echo --bases --delim '\t' genes.bed reads.bed \
| awk '$NF >= 100' - \
> genes_with_all_reads_a_total_of_100_bases_or_more_overlap.bed
If you need to convert BAM-formatted reads to BED, you could use BEDOPS [*bam2bed*](http://bedops.readthedocs.org/en/latest/content/reference/file-management/conversion/bam2bed.html) and pipe BED-formatted reads to *bedmap*:
$ bam2bed < reads.bam \
| bedmap --echo --bases --delim '\t' genes.bed - \
| awk '$NF >= 100' - \
> genes_with_all_reads_a_total_of_100_bases_or_more_overlap.bed | biostars | {"uid": 166585, "view_count": 1798, "vote_count": 2} |
I have paired-end RNA-Seq data - read1 and read2 - stored in the same fastq file.
I'd like to align the reads using TopHat.
Do I have to separate the data into two different files before running TopHat?
| If you have any pattern in the read name (like `1:N:####` or `/1` or `_1`, etc.) you could use [fastq-grep](http://homes.cs.washington.edu/~dcjones/fastq-tools/fastq-grep.html) to match the pattern and extract the R1 and R2 reads into two separate files.
Or a simple awk pattern match will do, something like:
zcat fastq.gz | paste - - - - | awk -F '\t' '$1 ~ /<R1 pattern here>/ { print $1"\n"$2"\n"$3"\n"$4 }' | gzip > Read_1.fastq.gz
zcat fastq.gz | paste - - - - | awk -F '\t' '$1 ~ /<R2 pattern here>/ { print $1"\n"$2"\n"$3"\n"$4 }' | gzip > Read_2.fastq.gz
But the mates need to be kept in matching order for this to work. | biostars | {"uid": 123686, "view_count": 5479, "vote_count": 2}
Hello,
Our lab would like to use [Homer][1] and install it under the following environment:
- Mac OS X 10.8.5
- Xcode Version 5.0.1 (5A2053)
- Xcode command line tools
- Homebrew
- Python 2.7.2 @ /usr/bin/python
It looks like Homer was successfully installed except for one package, seqlogo. The results of our troubleshooting are listed below. We attempted to contact technical support at Homer and Seqlogo but unfortunately never heard back.
How can we get Homer to see seqlogo? Thanks!
**Troubleshooting:**
```
perl configureHomer.pl -check
Current base directory for HOMER is /Users/Shared/homer/./
Checking for standard utilities and 3rd party software:
Checking for wget... /usr/local/bin/wget
Checking for cut... /usr/bin/cut
Checking for gcc... /usr/bin/gcc
Checking for zip... /usr/bin/zip
Checking for unzip... /usr/bin/unzip
Checking for make... /usr/bin/make
Checking for tar... /usr/bin/tar
Checking for gunzip... /usr/bin/gunzip
Checking for gzip... /usr/bin/gzip
Checking for gs... /usr/local/bin/gs
**Checking for seqlogo..The program seqlogo was not found but is required for making motif logos**
Checking for blat... /usr/local/bin/blat
All auxilary programs found.
```
```
$ seqlogo
Command not found
$ which weblogo
/usr/local/bin/weblogo
weblogo 3.4
```
webLogo installed like so
```
sudo pip install numpy
brew install pdf2svg
sudo pip install weblogo
```
`findMotifs.pl` returns help page (good sign, right?)
```
echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/administrator/Downloads/software/ngsplot/bin:/Users/Shared/homer/bin:/opt/X11/bin
```
**Edit 2014.09.24**
As suggested by Ying W, weblogo 2.8.2 must be used in order for seqlogo to be recognized by HOMER.
[1]: http://homer.salk.edu/homer/introduction/install.html | Follow the weblogo install directions found [here](http://homer.salk.edu/homer/introduction/install.html#Installing_3rd_Party_Software); you will need a specific version, which might not be the one installed by pip.
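A sketch of that manual install (assuming the 2.8.2 tarball is still available under weblogo.berkeley.edu/release/ and that it unpacks into a `weblogo/` directory; adjust paths to your system):

    wget http://weblogo.berkeley.edu/release/weblogo.2.8.2.tar.gz
    tar -xzf weblogo.2.8.2.tar.gz
    # the unpacked directory ships the seqlogo script; put it on your PATH
    ln -s "$(pwd)/weblogo/seqlogo" /usr/local/bin/seqlogo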
| biostars | {"uid": 113042, "view_count": 8556, "vote_count": 3} |
Hi,
I've downloaded a set of bams from Illumina's Platinum Genomes experiment (hosted at EBI) and would like to do due diligence and perform a checksum check, but I can't find any published checksums for these files.
Has anyone else downloaded the bam files from ftp://ftp.sra.ebi.ac.uk/vol1/ERA172/ERA172924/bam/ and run `md5sum` on them? These are the CEPH pedigree sequenced to 50x depth.
If so I'd be very grateful for the results to do a comparison (I can publish the checksum values I get once it's finished running).
SB | EBI are actually very good at storing their data and usually have the md5 checksums among the files.
To find the md5 number for your data: look at their data table for the accession number instead of raw FTP access: https://www.ebi.ac.uk/ena/data/view/ERA172924&display=html
Click on "select columns", then on "submitted md5" and voila -- md5 is there. There's a bunch of other metadata there as well.
NB: If you want to parse this from code, check out the [text view][1].
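For example, a sketch reusing the same filereport endpoint (`submitted_md5` is the field behind the "submitted md5" column; should the old warehouse URL stop responding, the same parameters work against the newer https://www.ebi.ac.uk/ena/portal/api/filereport endpoint):

    curl -s 'https://www.ebi.ac.uk/ena/data/warehouse/filereport?accession=ERA172924&result=read_run&fields=run_accession,submitted_ftp,submitted_md5'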
[1]: https://www.ebi.ac.uk/ena/data/warehouse/filereport?accession=ERA172924&result=read_run&fields=study_accession,secondary_study_accession,sample_accession,secondary_sample_accession,experiment_accession,run_accession,tax_id,scientific_name,instrument_model,library_layout,fastq_ftp,fastq_galaxy,submitted_ftp,submitted_galaxy | biostars | {"uid": 137254, "view_count": 2952, "vote_count": 1} |
Hi
I've done PCA on my gene expression data after DEG analysis. I can see that my case and control samples cluster distinctly, but **I want to report a p-value for this result**. It is worth noting that I have a CSV file with the sample coordinates for the different phenotypes (case vs control).
Thanks in advance | PCA is an exploratory data analysis method. It does not test a null hypothesis and generate a p-value.
If you want to compute a p-value, maybe you should try the pvclust package in R. It doesn't use PCA but hierarchical clustering, and it reports p-values for each sub-tree:
http://stat.sys.i.kyoto-u.ac.jp/prog/pvclust/ | biostars | {"uid": 279919, "view_count": 10560, "vote_count": 1} |
I got RNA-Seq data for several samples. I checked FastQC and the read quality seems good (HiSeq 2000). But the problem is that many reads are mapped to intronic regions, and those regions have no reference exons (RefSeq, Ensembl, GENCODE). We don't know what they are. We guess the problem happened in library preparation; the concentration was low. Now the data has come out and we can't re-sequence, so we want to remove the reads mapped to intronic regions. Is there a method to do that? Or does anyone have an idea about the intronic reads? Thanks.
| If you have a BED file of exonic regions, or a GTF, something like that, you can use [BEDTools](https://code.google.com/p/bedtools/) to filter your .bam for reads that fall in the desired coordinates, using intersectBed.
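For example, something along these lines (file names are placeholders):

    # keep only the reads that overlap the exon coordinates; BAM in, BAM out
    bedtools intersect -abam sample.bam -b exons.bed > sample.exonic.bam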
| biostars | {"uid": 73821, "view_count": 3976, "vote_count": 1} |
Hello,
Actually I am doing a tutorial (https://hakyimlab.github.io/psychencode/generate_weights.html) where they indicate that I need a dbSNP150 reference table containing "chromosome, position, ref, alt, rsid, and dbSNPBuildID" information for the hg19 version of the human genome. I am stuck at this step; can anyone help me and indicate where I can find this information? | Since you are working in R, you can often use annotation that is provided as part of Bioconductor. The latest release of Bioconductor, for example, contains a copy of dbSNP [version 150][1] and [even newer ones][2].
However, the annotation provided by Bioconductor refers to another reference genome build (hg38 instead of the long outdated version hg19). Older (ancient!) versions of Bioconductor might still contain hg19 coordinates, but better use [LiftOver][3] to convert the latest annotation back to the older reference genome build.
Alternatively, there is also [a hg19 annotation from dbSNP 135][4] in the current build.
[1]: https://bioconductor.org/packages/3.15/data/annotation/html/SNPlocs.Hsapiens.dbSNP150.GRCh38.html
[2]: https://bioconductor.org/packages/3.15/data/annotation/html/SNPlocs.Hsapiens.dbSNP155.GRCh38.html
[3]: https://bioconductor.org/packages/release/workflows/html/liftOver.html
[4]: https://bioconductor.org/packages/3.15/data/annotation/html/FDb.UCSC.snp135common.hg19.html | biostars | {"uid": 9526352, "view_count": 613, "vote_count": 1} |
I'm reading [this excellent paper][1] which describes a methodology to standardize variant benchmarking process. They say that the normal binary classification form (i.e., TP, FP, FN, and statistics derived from these) are not simple for variant calls. So they go on to describe how they do this, in tabular form.
It's also worth noting that this is how they define sensitivity and specificity:
sensitivity (the ability to detect variants that are known to be present or “absence of false negatives”)
and specificity (the ability to correctly identify the absence of variants or “absence of false positives”)
They do not use specificity, opting instead for precision because "precision is often a more useful metric than specificity due to the very large proportion of true negative positions in the genome."
![contingency table][2]
I'm having a hard time understanding why FP is the way it is. I would've thought that all the n/a's in the first column (e,g, "ref/var2" and ref/var3", "var1/var3" etc) would be FP as well.
It's also hard to decipher why the n/a's occur in the rest of the table?
This may have something to do with their comment:
"Note that we have chosen not to include true negatives (or consequently specificity) in our standardized definitions. This is due to the challenge in defining the number of true negatives, particularly for indels or around complex variants."
Are the n/a's representing true negatives? It is all so confusing.
[1]: https://www.biorxiv.org/content/biorxiv/early/2018/05/24/270157.full.pdf
[2]: https://www.biorxiv.org/content/biorxiv/early/2018/02/23/270157/T1.medium.gif | > It's also hard to decipher why the n/a's occur in the rest of the table?
I think it is because those scenarios can't happen. If you compare two individuals, and the Query has var1/var3, then there has to be at least a var2 in your comparison, otherwise you can't have var3.
If Query = GT:1/3 then the only scenarios for Truth are GT:1:2, in a vcf file. | biostars | {"uid": 380377, "view_count": 1645, "vote_count": 1} |
Hi all.
I have detected somatic mutation and would like to know its significance for clinical interpretation.
I used MutationMapper in cBioPortal and got a set of annotated mutations with a plot as shown below.
<img alt="lollipop plot example" src="http://www.cbioportal.org/images/lollipop_example.png" style="height:226px; width:297px" />
As you can see, there is a protein domain in the plot shown in yellow, and all the mutations are on this domain.
My question is below:
Do the grey segments that do not belong to the yellow domain have no meaning for clinically annotating mutations, so that I can safely discard them? | Your question is an important one without a definitive answer. In terms of clinical annotation, protein domain is just one of MANY, MANY potential features for prioritization. In practice, I would not discard mutations that are not in a known domain. | biostars | {"uid": 139260, "view_count": 3405, "vote_count": 1}
I've got a list of 237 Ensembl protein IDs (e.g. ENSP00000493027), and I'm trying to convert them to UniProt accession numbers so that I can retrieve their REST text entry (e.g. https://rest.uniprot.org/uniprotkb/A0A286YF28.txt).
Officially, UniProt says to do it [this way][1]:
import urllib.parse
import urllib.request
url = 'https://www.uniprot.org/uploadlists/'
params = {
'from': 'ACC+ID',
'to': 'ENSEMBL_ID',
'format': 'tab',
'query': 'P40925 P40926 O43175 Q9UM73 P97793'
}
data = urllib.parse.urlencode(params)
data = data.encode('utf-8')
req = urllib.request.Request(url, data)
with urllib.request.urlopen(req) as f:
response = f.read()
print(response.decode('utf-8'))
However, when I run their example code verbatim, I get:
urllib.error.HTTPError: HTTP Error 405: Not Allowed
Along with some tracebacks. What's going on? It happens when I pass my own data too. Never used UniProt programmatically before.
Thanks in advance for any help anyone can give.
[1]: https://www.uniprot.org/help/api_idmapping | Here is an example of how to access UniProt's REST API with Python 3 and the `requests` package (`pip install requests`).
import json
import requests
import time
URL = 'https://rest.uniprot.org/idmapping'
IDS = ['P40925', 'P40926', 'O43175', 'Q9UM73', 'P97793']
params = {
'from': 'UniProtKB_AC-ID',
'to': 'Ensembl_Protein',
'ids': ' '.join(IDS)
}
response = requests.post(f'{URL}/run', params)
job_id = response.json()['jobId']
job_status = requests.get(f'{URL}/status/{job_id}')
d = job_status.json()
# Make three attempts to get the results, refreshing the job status between tries
for i in range(3):
    if d.get('jobStatus') == 'FINISHED' or d.get('results'):
        job_results = requests.get(f'{URL}/results/{job_id}')
        results = job_results.json()
        for obj in results['results']:
            print(f'{obj["from"]}\t{obj["to"]}')
        break
    time.sleep(1)
    d = requests.get(f'{URL}/status/{job_id}').json()
Output:
P40925 ENSP00000233114.8
P40925 ENSP00000410073.2
P40925 ENSP00000446395.2
P40926 ENSP00000327070.5
P40926 ENSP00000408649.2
O43175 ENSP00000493175.1
O43175 ENSP00000493382.1
Q9UM73 ENSP00000373700.3
P97793 ENSMUSP00000083840
| biostars | {"uid": 9528992, "view_count": 699, "vote_count": 1} |
My experiment set up: 2 samples from WT mice, 2 samples from KO mice, all sequenced with 10x 3' scRNA-seq. The cell populations sorted for sequencing are the same.
Seurat offers the anchor transfer method to perform dimension reduction and clustering, but differential expression is still performed upon normalized, untransformed data matrices.
Therefore, I was wondering if there are methods available to perform batch correction that can be used in differential expression. I would appreciate it if you could share your experience in dealing with this type of situation.
Thanks a lot~ | I personally like to aggregate cells per genotype and cluster into pseudobulks and then simply include the batch information in the design, as we do in any normal RNA-seq setup, such as `~batch+pseudobulkCluster`. Since you have replicates per genotype, this then comes down to a normal 2 vs 2 comparison. That having been said, the few samples I analyzed were always prepared in a way that there was little batch effect: we processed one replicate of each genotype on the same day, without any inter-platform or inter-species comparisons, which would probably be beyond simply including batch as a covariate. But for me that strategy has worked out well so far. I personally use `sumCountsAcrossCells` from the scater package (assumes the SingleCellExperiment format) to get the pseudobulks and then edgeR or DESeq2 for the DE. | biostars | {"uid": 456987, "view_count": 944, "vote_count": 1}
A student and I are testing the mirDeep2 pipeline on the Drosophila genome. mirDeep2 uses Bowtie internally, but we got a relatively low mapping rate and therefore also fed the pipeline with an alignment generated by BWA. BWA yielded a larger fraction of aligned reads, and naively I would assume that this should also lead to a larger number of detected miRNAs, but the opposite seems to be true. Why could this be the case? We have used public data only; supporting information below.
Sample: SRR019717 (Drosophila melanogaster), downloaded from SRA
Reads were trimmed using Trimmomatic, min length 18.
Alignment rate:
Total = 5,265,951 reads after trimming
Aligned (Bowtie) = 3,090,394 (58.69%) reads
Aligned (BWA) = 4,644,269 (88.19%) reads
Result:
In species: 438
Novel (Bowtie): 25 miRNAs
Known (Bowtie): 105 miRNAs
In data (Bowtie, at least one read mapped back): 198
Novel (BWA): 33 miRNAs
Known (BWA): 79 miRNAs
In data (BWA): 135
Update: we calculated the overlap between bowtie and bwa aln alignments using bx-python:
Note: the observed overlap Bowtie <-> BWA is vastly asymmetric. A large proportion of Bowtie alignments is also covered at least once by BWA, but BWA also seems to cover a large number of locations that Bowtie does not cover. This might be explained by the hypothesis that BWA alignments are more uniformly spread out over the genome, while Bowtie might generate more localized stacks of reads.
bed_intersect_basewise.py SRR019717_trimmed_bowtie.bed SRR019717_trimmed_bwa.bed >~/baseoverlap.bed; wc -l ~/baseoverlap.bed
**215,130**
bed_intersect.py SRR019717_trimmed_bowtie.bed SRR019717_trimmed_bwa.bed >~/overlap.bed
**3,067,089**
bed_intersect.py SRR019717_trimmed_bwa.bed SRR019717_trimmed_bowtie.bed >~/overlap2.bed
**2,596,771**
----------
A precision-recall plot using the output of mirDeep2 at different cutoffs between -10..10, making somewhat arbitrary assumptions, that all novel miRNAs are false positives, and that there are
438 miRNA's known in mirBase as reported by mirDeep2.
![Precision recall plot based on mirDeep output][1]
[mirdeep2-bowtie.html][2]
[mirdeep2-bwa.html][3]
## Possible culprits ##
Edit: We have several candidates, but these need to be checked carefully:
- Absurd coverage: if there are really only ~500 miRNA of length ~100bp including precursor, even only 1M reads yield already > 500X coverage of the miRNA-ome. Maybe there is an upper limit for local coverage in the pipeline?
- Multi-mappers: More mapped reads might also yield more multi-mapped reads, maybe there is a filter "upper limit multi-mapping" in the pipeline, especially if multi-mapping is to protein coding transcripts?
- Pipeline possibly uses a specific feature of the SAM output of Bowtie?
- Most miRNAs in mirBase from Drosophila were possibly predicted using mirDeep with Bowtie?
[1]: https://s21.postimg.org/cqankzxlz/Precision_Recall_Plot.png
[2]: https://gist.github.com/mdondrup/c093390cc6dc6da67f0500aeb2f5ee25
[3]: https://gist.github.com/mdondrup/5e43825d3f751adbcabf191fac6c0dd0 | Think we are approaching a first solution to the mirDeep paradox.
## Differences in SAM output ##
Most likely the pipeline is adapted to the peculiarities of the output format of bowtie.
Look at the differences for a random multi-mapping read:
Bowtie:
SRR019717.35 16 2R_dna:chromosome_chromosome:BDGP6:2R:1:25286936:1_REF 5049582 255 25M * 0 0 AGCTTTCCGCTGCCAGGCATTCTTC ;;;;;;;;;;;;;;;;;;;;;;;;; XA:i:0 MD:Z:25 NM:i:0
SRR019717.35 16 2L_dna:chromosome_chromosome:BDGP6:2L:1:23513712:1_REF 1302172 255 25M * 0 0 AGCTTTCCGCTGCCAGGCATTCTTC ;;;;;;;;;;;;;;;;;;;;;;;;; XA:i:0 MD:Z:25 NM:i:0
SRR019717.35 16 2R_dna:chromosome_chromosome:BDGP6:2R:1:25286936:1_REF 5056566 255 25M * 0 0 AGCTTTCCGCTGCCAGGCATTCTTC ;;;;;;;;;;;;;;;;;;;;;;;;; XA:i:0 MD:Z:25 NM:i:0
BWA:
SRR019717.35 16 2R_dna:chromosome_chromosome:BDGP6:2R:1:25286936:1_REF 5056566 0 25M * 0 0 AGCTTTCCGCTGCCAGGCATTCTTC ;;;;;;;;;;;;;;;;;;;;;;;;; XT:A:R NM:i:0 X0:i:3X1:i:18 XM:i:0 XO:i:0 XG:i:0 MD:Z:25
While Bowtie outputs one line for each hit, BWA chooses one of the mapping positions and uses the X0 tag to report that there are multiple mappings. However, the average number of alignments per aligned read in Bowtie is 1.39; the maximum number of alignments per read is capped at 5, otherwise the read is reported as unaligned. In our BWA run, there was no cap on the number of alignments, but the pipeline has no way to find alternative alignments, reducing the coverage. If we assume that most real miRNAs come in 2-5 copies, these have their coverage massively reduced by counting only a single random alignment.
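A quick way to see how much is being collapsed (a sketch, assuming the standard bwa-aln X0 tag; the BAM name is a placeholder):

    samtools view SRR019717_trimmed_bwa.bam \
      | awk '{for(i=12;i<=NF;i++) if($i ~ /^X0:i:/){split($i,t,":"); print t[3]}}' \
      | sort -n | uniq -c     # count of reads per number of best hits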
**Other differences:**
- Mapping quality, in Bowtie it is always 255 per mapped read, in BWA it is 16.48 on average (here 0)
- Tags: BWA has more tags, but is lacking the `XA:i:` tag
**Conclusion**
One cannot use BWA as a drop-in replacement for Bowtie in mirDeep2! Other aligners might work, but this has to be checked.
| biostars | {"uid": 217254, "view_count": 3909, "vote_count": 4} |
My basic question is why aren't genome assemblers using an underlying [Hamiltonian path algorithm][1]?
My basic, high level understanding of genome assemblers is that they consider nodes in a graph created from short kmer sequences and connect a directed edge when the suffix of one node is the prefix of another. An Eulerian path is found in the resulting graph and this is what constitutes the final assembly. Since the genome has low entropy, redundant and repeated regions, methods like these are needed to resolve the ambiguity that comes with having read lengths that are smaller than the repeated regions.
I found [a Nature article][2] that I thought gave a nice overview. From the article:
> What's more, there is no known efficient algorithm for finding a Hamiltonian cycle in a large graph with millions (let alone billions) of nodes. ... the computational burden of this approach was so large that most next-generation sequencing projects have abandoned it.
The article goes on to mention that the Hamiltonian path problem is NP-Complete.
This confuses me because there are known almost sure polynomial time algorithms that find Hamiltonian paths in Erdos-Renyi random graphs. Further, there are many solvers that use a variety of heuristics to aid in search.
I understand that genome assemblers using Eulerian paths probably do a "good enough" job so it's probably a moot point but is there any real reason why finding Hamiltonian paths hasn't been used in genome assembly? Did researchers hear that it was NP-Complete and get scared away from the prospect? Is there any work to create an assembler that use an underlying Hamiltonian path algorithm?
[1]: https://en.wikipedia.org/wiki/Hamiltonian_path_problem
[2]: http://www.nature.com/nbt/journal/v29/n11/full/nbt.2023.html | In my view, reducing overlap assembly to the Hamilton Path problem is just an illusion. I could not find the full text of old literatures -- among the papers I know, such a formulation seems to first appear in the 2001 Euler paper, a paper that objects to OLC. Even if there are earlier papers on this formulation, the modern theory on OLC as is established by Myers et al has nothing to do with Hamilton Path reduction. It almost seems to me that the Hamilton reduction was introduced only to promote the Eulerian approach.
Anyway, on real data, there may be many dead ends, artifactual reads and missing or false overlaps. Strict Hamilton paths often don't make sense. In addition, we usually require very low misassembly rate. That is best achieved by breaking contigs whenever there are ambiguities -- rather than seeking "optimal" Hamilton Paths that may be sensitive to all kinds of errors. Furthermore, most heuristic overlap-based assemblers has average time complexity better than O(N^2). I guess those approximate Hamilton Path finders can't achieve this?
For long reads, OLC is still the king. To my limited knowledge, the vast majority (if not all) of large genomes sequenced with >=1kb reads were assembled with OLC-based assemblers. If you are interested in de novo assembly, I would recommend to read Myers et al's papers in 1995 (overlap graph), 2000 (celera assembler) and 2005 (string graph). These are the proven theory on OLC and are still used today for PacBio and nanopore assemblies. | biostars | {"uid": 157515, "view_count": 4013, "vote_count": 6} |
I have a dataset with a missing column header for the first column. How do I name it so that I can find it using
a$
![screenshot][1]
[1]: https://image.ibb.co/nNv7Xn/column_name.jpg
I used the following command but got an error:
a$miRNAs<-row.names(a)
Warning message:
In a$miRNAs <- row.names(a) : Coercing LHS to a list
How can I solve this? | Those are rownames, not a column. Since you have mentioned dplyr, we can convert them to a column using packages from the tidyverse; here is an example:
# example dataset with rownames
a <- mtcars[1:3, 1:3]
a
# mpg cyl disp
# Mazda RX4 21.0 6 160
# Mazda RX4 Wag 21.0 6 160
# Datsun 710 22.8 4 108
library(dplyr)
library(tibble)
a <- a %>%
rownames_to_column(var = "myName")
a
# myName mpg cyl disp
# 1 Mazda RX4 21.0 6 160
# 2 Mazda RX4 Wag 21.0 6 160
# 3 Datsun 710 22.8 4 108
| biostars | {"uid": 303025, "view_count": 14562, "vote_count": 3} |
It might be a very stupid question for many of you but, since it's my first variant calling, I didn't figure it out yet.
I have **mpileup**'ped two bam files from two samples, then I filtered the results with **vcfutils.pl** and called the genotypes with **bcftools** **call**. Now I have a **VCF** file containing what I want, but I managed to analyze differences only between the two samples and the reference (which is an assembly coming from a line that is *different* *from both* *samples*).
I have the variants between my two samples and the assembly, but what if I want to detect the differences **between** the two samples? I did it with awk / sed / cut and other command line tools, but is there maybe a better and more straightforward way to do that? Perhaps using bcftools? Up to now, I didn't find it.
Any suggestion appreciated!
EDIT:
I did it with bcftools gtcheck as well, that works fine. I am asking if there are **other** reliable tools to test!
EDIT 2:
As this post has many views now, I guess it will be useful for everyone to know that I used **bcftools isec** and it worked brilliantly. | depending on what you're looking for, like simple counts for instance, awk/perl solutions could be enough. if you need a deeper description, `bcftools stats` would give you some interesting details.
if your 2 samples are in a single vcf file I would suggest to split them first
bcftools view -Oz -c1 -s sample1 joined.vcf.gz > sample1.vcf.gz
bcftools view -Oz -c1 -s sample2 joined.vcf.gz > sample2.vcf.gz
and then compare them
bcftools stats sample1.vcf.gz sample2.vcf.gz > joined.stats.txt | biostars | {"uid": 224919, "view_count": 16415, "vote_count": 14} |
**Can I transform the first format below into the second one just with basic shell processing, awk, or sed on Linux? This is a toy example:**
This is the kind of text file I have: three columns, where col2 and col3 define a range that is left-closed and right-open,
chr1 0 2 0
chr1 2 6 1.5
chr2 0 3 0
chr2 3 10 2.1
Transfer to describe each position as:
chr1 0 0
chr1 1 0
chr1 2 1.5
chr1 3 1.5
chr1 4 1.5
chr1 5 1.5
chr2 0 0
chr2 1 0
chr2 2 0
chr2 3 2.1
...
chr2 9 2.1
Does someone have an idea how to solve this? Thanks!!! | awk '{B=int($2);E=int($3);for(i=B;i<E;++i) printf("%s\t%d\t%s\n",$1,i,$4);}' in.bed | biostars | {"uid": 325861, "view_count": 1209, "vote_count": 1}
I need to calculate the median read length in a fastq file; how can I do this with awk or bioawk? | A straight-up `awk` solution would be:
cat *.fq | awk 'NR % 4 == 2 { print length($1) } ' | sort -n | awk ' { a[i++]=$1; } END { print a[int(i/2)]; }'
I would do it with `bioawk` and `datamash`
cat *.fq | bioawk -c fastx '{ print(length($seq)) }' | datamash median 1
you can also implement a better median in awk as shown here:
* https://stackoverflow.com/questions/6166375/median-of-column-with-awk
that works like this:
cat *.fq | bioawk -c fastx '{ print(length($seq)) }' | sort -n | bioawk ' { a[i++]=$1; } \
END { x=int((i+1)/2); if (x < (i+1)/2) print (a[x-1]+a[x])/2; else print a[x-1]; }'
| biostars | {"uid": 9496524, "view_count": 900, "vote_count": 1} |
Hi Everyone,
We have commissioned RNA-seq and analysis by a company, which provided us with raw fastq files, BAM files, and a count matrix. They used hard clipping and Tophat for the alignment to GRCm38/Mm10. I have attempted to recreate their analysis with HISAT2 (same reference genome), using simply the default parameters and no separate trimming/clipping. I have used samtools to convert the SAM files to BAM files and compared the results from the company's analysis ("Tophat") with my own ("HISAT2") using IGV. The results are very confusing to me. The majority of genes I have (randomly) inspected look highly similar between both sets of BAM files. See this example gene (Tophat in blue, HISAT2 in red):
[Tnf][1]
So far, so good. However, there are also multiple instances where one analysis picked up good reads while the other did not. This is true in both directions. See these two example genes:
[Gapdh][2]
[Il12b][3]
And, finally, there are some genes in which one alignment just looks weirdly skewed. For instance:
[Ubc][4]
Does anyone know what might account for these differences? Or which alignment I should use for downstream analysis? I'd be grateful for any feedback!
Thomas
[1]: https://ibb.co/ksUQsS
[2]: https://ibb.co/b1emz7
[3]: https://ibb.co/jj9QsS
[4]: https://ibb.co/nO9mz7 | Apparently, this problem is inherent to IGV. When we ran the same files that showed no reads on a different operating system, everything looked just fine. There are, of course, still differences between the Tophat and the HISAT2 alignments, but nothing is missing altogether. I will try to switch to the UCSC Genome Browser, as suggested by WouterDeCoster.
Thank you all for your input! | biostars | {"uid": 309059, "view_count": 1788, "vote_count": 1} |
Hello All,
I have read counts from RNA-seq data in rows and columns. I want to quantile normalize them in R. I have the following code, which gives me the normalized values. However, the output is a matrix, and I want the output with row names and column names so that I can perform PCA on it.
data <- read.csv("data.csv",header=T)
head(data)
data_mat <- as.matrix(data[,-1])
head(data_mat)
data_norm <- normalize.quantiles(data_mat, copy = TRUE)
Could someone help me to get that? Thank you in advance. | Try this (note the extra line; also use `data.matrix`, not `as.matrix`):
data <- read.csv("data.csv",header=T)
head(data)
rownames(data) <- data[,1]
data_mat <- data.matrix(data[,-1])
head(data_mat)
data_norm <- normalize.quantiles(data_mat, copy = TRUE)
# restore row/column names (normalize.quantiles can return a matrix without dimnames)
dimnames(data_norm) <- dimnames(data_mat)
| biostars | {"uid": 296992, "view_count": 15080, "vote_count": 4} |
So I'm trying to understand the --shift and --extsize parameters in MACS2. I have inherited an ATAC-seq analysis that used the following MACS2 command.
macs2 callpeak -t ATAC_sample-1.bam --nomodel --shift -100 --extsize 200 -g 1.5e9 -f BAM
I'm trying to see how I can apply these principles to a Rscript I have that converts my raw BAM file to a coverage file? The full script can be looked at [here](http://rpubs.com/achitsaz/98857) but the main lines I'm looking at are:
aln <- as(aln, "GRanges") # Converts each read mapping to GRanges coord
aln <- resize(aln, 150) # Extended the reads to the fragment length (4 previous exp)
cov <- coverage(aln) # Get Coverages nucleotide
The definitions for extsize and shift can be seen in the [documentation](https://github.com/taoliu/MACS#--extsize) but I'll copy it for reference.
> --extsize
> >While '--nomodel' is set, MACS uses this parameter to extend reads in 5'->3' direction to fix-sized fragments. For example, if the size of binding region for your transcription factor is 200 bp, and you want to bypass the model building by MACS, this parameter can be set as 200. This option is only valid when --nomodel is set or when MACS fails to build model and --fix-bimodal is on.
>--shift
>>Note, this is NOT the legacy --shiftsize option which is replaced by --extsize! You can set an arbitrary shift in bp here. Please Use discretion while setting it other than default value (0). When --nomodel is set, MACS will use this value to move cutting ends (5') then apply --extsize from 5' to 3' direction to extend them to fragments. When this value is negative, ends will be moved toward 3'->5' direction, otherwise 5'->3' direction. Recommended to keep it as default 0 for ChIP-Seq datasets, or -1 * half of EXTSIZE together with --extsize option for detecting enriched cutting loci such as certain DNAseI-Seq datasets. Note, you can't set values other than 0 if format is BAMPE or BEDPE for paired-end data. Default is 0.
>Here are some examples for combining --shift and --extsize:
>>EXAMPLE 1: To find enriched cutting sites such as some DNAse-Seq datasets. In this case, all 5' ends of sequenced reads should be extended in both direction to smooth the pileup signals. If the wanted smoothing window is 200bps, then use '--nomodel --shift -100 --extsize 200'.
So to summarize, my question is: how should I resize my raw reads so that the coverage will reflect the --shift and --extsize used by MACS2? From reading the documentation it seems I should shift the reads 100 bp in both directions, because it extends the reads 200 bp 5' -> 3' (--extsize) and back 100 bp with (--shift). | You'll need to `endoapply()` a function that does the `shift()` according to the strand, since that's otherwise ignored. Then `resize(aln, 200)`. Alternatively, use `idx = which(strand(aln) == '+')` for each strand and directly adjust the start accordingly (then `resize()`). | biostars | {"uid": 207318, "view_count": 13697, "vote_count": 3}
Hello,
I have microarray data from the chip [HuGene-1_0-st] Affymetrix Human Gene 1.0 ST Array [transcript (gene) version], and I am trying to annotate the probe IDs in R using the hugene10sttranscriptcluster.db package and the following function:
annotatedTopTable <- function(topTab, anotPackage)
{
topTab <- cbind(PROBEID=rownames(topTab), topTab)
myProbes <- rownames(topTab)
thePackage <- eval(parse(text = anotPackage))
geneAnots <- AnnotationDbi::select(thePackage, myProbes, c("SYMBOL", "ENTREZID", "GENENAME"))
annotatedTopTab<- merge(x=geneAnots, y=topTab, by.x="PROBEID", by.y="PROBEID")
return(annotatedTopTab)
}
topAnnotated_Condition<- annotatedTopTable(topTab_Condition,
anotPackage="hugene10sttranscriptcluster.db")
topAnnotated_Condition
Is hugene10sttranscriptcluster.db the right package to use?
Thanks
|
Yes, for most use-cases, that package is correct. It will depend on how the original array data was processed.
You only need to do this, by the way:
require(hugene10sttranscriptcluster.db)
mapIds(
hugene10sttranscriptcluster.db,
keys = probes,
column = 'SYMBOL',
keytype = 'PROBEID')
select(
hugene10sttranscriptcluster.db,
keys = probes,
column = c('SYMBOL', 'ENTREZID', 'ENSEMBL'),
keytype = 'PROBEID')
Kevin
| biostars | {"uid": 9485622, "view_count": 1024, "vote_count": 1} |
Hi folks:
I am trying to get raw RNASeq data from ENA and wonder how to know the length of the sequences in those FASTQ files without downloading them?
In the query design I follow their [guide][1], and use the "`nominal_length>XXX"` in the query, but this filtering mechanism doesn't work...
Thanks!
[1]: http://www.ebi.ac.uk/ena/browse/search-rest | Get the count of total bases and number of reads from ENA, and do below calculation,
Read length = Total Bases/ number of reads | biostars | {"uid": 133763, "view_count": 2382, "vote_count": 1} |
<p>I want to find the coordinates of all occurrences of the sequence recognized by a restriction enzyme. I know that using EMBOSS I may do this, but this task seems perfectly fitted for short-read sequence alignment software. However, I didn't find any reference for this.</p>
<p>I used bwa for the task and quickly obtained some results. However, to be on the safe side I will like to ask is someone has done something similar or has some advice, perhaps I am stretching the use of bwa. </p>
<p>I tried the following:</p>
<pre><code>echo -e ">DpnII\bGATC" > DnpII.fa
bwa aln -N -n 0 -o 0 -e 0 -l 4 -k 0 dm3 DpnII.fa > DpnII.sai
bwa samse -n 100000000 dm3 DpnII.sai DpnII.fa > DpnII.sam
</code></pre>
<p>The results seem to match the right sites, also the number of sites (489570 for Drosophila) that I obtain are close to what I expect. </p>
<p>Thanks.</p>
| Here is an update for people landing on this question: The bwa method that I proposed does not work properly. I don't remember exactly why it was wrong but I stop using it. I think it was returning regions with NNNs.
My solution was to use Biopython to search for a pattern. This is quite fast and accurate. Here is the code that finds and sorts the results:
https://github.com/maxplanck-ie/HiCExplorer/blob/master/hicexplorer/findRestSite.py
[update]
Based on the comments by [Michael Dondrup][1] I fixed an error in the code.
[1]: https://www.biostars.org/u/55/ | biostars | {"uid": 17968, "view_count": 4233, "vote_count": 2} |
Hello all,
I am looking at the Level 3 CNV files on TCGA - the ones generated using SNP microarrays. I have a few questions:
1. How is 'segment mean' calculated and what is the exact biological interpretation?
2. For each patient I have two files called e.g. `....hg19.seg.txt` and `...nocnv_hg19.seg.txt`. What does each file contain, and which should I be using?
Thanks for any help,
Stephanie | [Here][1] is the best documentation (that I know of) for TCGA SNP-array based CNV data. Regd. your two questions:
1. CBS segmentation algorithm identifies regions in the genome that, in spite of noise, probably have a uniform underlying copy number. The "segment mean" of each region is reported in the level 3 file, and can be used as the estimated CN-ratio for the segment (a worked conversion to copy number is sketched below this list).
2. "nocnv" just means that germline CN variations are removed. In TCGA, they ran SNP arrays on normal tissue too.
[1]: http://software.broadinstitute.org/cancer/software/genepattern/affymetrix-snp6-copy-number-inference-pipeline | biostars | {"uid": 111417, "view_count": 10491, "vote_count": 5} |
Hi,
I wanted to download 10 abstracts for 10 specific mouse genes (for a total of 100 abstracts). I was thinking of searching Pubmed for it, so I put in my first gene (FOS) in the search bar. But there was no option (that I could find) for mentioning the species I want to restrict my search to. So the results were either for 'Humans' or 'All organisms'.
Since this is a pretty basic feature, I was hoping that Pubmed would allow it inherently. But it wasn't even mentioned in the 'Advanced' options. So, does anyone know if and how this can be done? | This thread piqued my interest since I wanted to write a script to do this for a web app. I'm not sure why the above examples don't work but here's how I got it going.
First you need the gene ID for Fos. You can do that just by searching:
https://www.ncbi.nlm.nih.gov/gene/14281
The Gene ID is the number at the end of the URL above.
You can put this directly into pubmed as the `&from_uid=` in this url:
https://www.ncbi.nlm.nih.gov/pubmed?LinkName=gene_pubmed&from_uid=14281
You can also use elink as suggested by Pierre. To do that just tack it onto this url on the `&id=`:
https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi?dbfrom=gene&dbto=pubmed&cmd=neighbor&retmode=xml&id=14281
Change 'gene' to 'protein' if you use protein accession numbers. This will return an XML like so:
<eLinkResult>
<LinkSet>
<DbFrom>gene</DbFrom>
<IdList>
<Id>14281</Id>
</IdList>
<LinkSetDb>
<DbTo>pubmed</DbTo>
<LinkName>gene_pubmed</LinkName>
<Link>
<Id>26762887</Id>
</Link>
<Link>
<Id>26143639</Id>
</Link>
etc...
The pub med hits are in `<LinkSetDb><Link><Id>`. You can extract these with a parsing tool or do it by hand for a few genes. Then put them into pubmed as a unique id:
https://www.ncbi.nlm.nih.gov/pubmed/?term=26762887
To put this in a program you could loop the gene names and paste them into either of the links above. For the eutils link you need to capture the url and parse it, extract the ids and paste them into a url.
| biostars | {"uid": 250368, "view_count": 1338, "vote_count": 1} |
I've got a bam file of paired-end illumina sequences mapped to a reference sequence and I was wondering if when I call the samtools view command if the results represent only reads that completely span over the region of interest or if it also included reads that intersect it by one or two base pairs.
If not are there any existing tools out there to *only* read thats completely span over the region of interest?
with this visual example of what I mean I would want to only include sequences like case_2 and exclude case_1:
[===================Region-of-Interest=====================]
[---------------------------------------------------------------------------------------------------------] case_1
[-------------------------------------------------------------------------------------------------------------------] case_2
Thanks! | [bedtools intersect][1] with -f 1.0 or -F 1.0 should do what you want. Something like:
bedtools intersect -F 1.0 -a intervarls.bed -b file.bam
I am not an expert on bedtools, though, so you may have to tweak / correct this command.
[1]: http://bedtools.readthedocs.io/en/latest/content/tools/intersect.html | biostars | {"uid": 308021, "view_count": 2181, "vote_count": 1} |
Hello,
I am looking at the human genome available through Bioconductor packages: NCBI GRCh38 and UCSC.hg19. And I do not get all the different sequence names I see. Could you help please?
In the UCSC.hg19, I do have chromosomes 1 to 22 + X and Y. But I also have chrM, chr1_gl000191_random, chr4_ctg9_hap1, chrUn_gl000212....and so on.
In the NCBI GRCh38, I can see sequences called MT, HSCHR1_CTG2_UNLOCALIZED, HSCHR3UN_CTG2, HSCHR2_RANDOM_CTG1....
What are those _random, _unlocalized, chrUn, _ctg9_hap1...please??
Should all those sequences be used when trying to align NGS reads to the genome for instance? or only a subset?
many thanks
Aurelie | chrM == MT == mitochondrial DNA. You should probably use this.
The `chrUn_*` sequences are unplaced contigs. So they may belong in the genome, but we don't know where. I personally use these, but I know that's not universal.
`chr*_unlocalized` and `chr??_*_random` are contigs that are known to belong to a specific chromosome (I've never looked into how that was determined) but haven't yet been integrated in. You'll want to use these.
The various `*hap*` chromosomes are alternate haplotypes. There aren't a lot of good ways to deal with these during alignment yet, so a lot of people don't use these. An upcoming version of BWA is supposed to handle these in a good way, see if you plan to use BWA keep an eye out for that update and then definitely use these. | biostars | {"uid": 114335, "view_count": 1634, "vote_count": 2} |
Hello everyone,
I would like to know if there are tools that, fed with ChIP-seq data for various histone marks, identify enhancers by the "right" methylation/acetylation combinations.
Until now, I found CSI-ANN, but I was wondering if there are other tools.
Thanks
| The search term you're looking for is "chromatin segmentation". For example, omicstools has [a list of chromatin segmentation tools][1]
Of these, [ChromHMM][2] is probably the most widely used in publications at the moment.
Also see other answers on biostars on chromatin segmentation and enhancers:
- https://www.biostars.org/p/185468
- https://www.biostars.org/p/17912
- https://www.biostars.org/p/175982
- https://www.biostars.org/p/142111
[1]: https://omictools.com/chromatin-segmentation-category
[2]: http://compbio.mit.edu/ChromHMM/ | biostars | {"uid": 200697, "view_count": 3393, "vote_count": 1} |
Hi everyone,
I would be very grateful if you could help me.
I want to download the sequences of all the exons for each human gene. I went to Ensembl BioMart and tried to do it for BRCA2 first (this was a random choice). First I selected the following attributes: Unspliced(gene), Exon start, Exon end, strand, Gene start. I thought that this information would be enough to 'cut out' all the exons. The first thing I noticed is that the number of exons is almost twice the number I see on the Wiki.
Then I tried to download the Exon sequences directly, but the number of sequences was again larger than the number of exons should be, and moreover several exon ids correspond to the same sequence.
I also tried to download the coding sequence and cDNA, but the length of both sequences is not consistent with the 'official' BRCA2 length!
This all drives me crazy and I have absolutely no idea what to do. All I want is to get, for each gene, the sequences of its exons. Help me please!
Thanks! | Download refGene.txt.gz from UCSC :
wget http://hgdownload.cse.ucsc.edu/goldenpath/hg19/database/refGene.txt.gz
Then keep only the unique Gene names (column 13) and extract the coordinates (chrom - exonStart - exonEnd) to a bed file. Columns 10 and 11 contain the exonStarts and exonEnds positions, separated by commas.
zcat refGene.txt.gz|sort -u -k13,13|cut -f3,10,11|awk 'BEGIN{OFS="\t"}{split($2,start,",");split($3,end,","); for(i=1;i<length(start);++i){print $1,start[i],end[i]}}' > exons.bed
Finally, you can get exons sequence using bedtools getFasta :
bedtools getfasta -fi hg19.fa -bed exons.bed -fo exon.fa
Pay attention to work only with chromosome names in the range 1-22, X, Y.
| biostars | {"uid": 218009, "view_count": 3310, "vote_count": 1} |
Hello All,
I am confused about RNA-seq normalization methods and when it is appropriate to use them. I would appreciate it if you could share your thoughts with me.
My understanding of TMM and TPM is that TMM is appropriate for between-sample/condition comparisons, as it accounts for RNA composition in addition to library size (e.g. it is used by edgeR for DE analysis), while a method like TPM is better for within-sample comparisons. But many online tools use TPM to illustrate gene expression levels across different tissue types (like the GTEx data portal). Isn't this wrong?
The reason that I got into this is that I was performing DE analysis using edgeR the other day to compare samples from a tumor to some normal tissues. Although edgeR reports a downregulation for some genes, an illustration of those gene's expression using TPM values shows kinda an upregulation effect. When I extracted and plotted the pseudo counts from edgeR (TMM normalized counts), I clearly can see the downregulation - but TPM doesn't agree! So I am confused now. Is it wrong to use TPM normalized counts for plotting gene expression?
I appreciate your time and thoughts on this.
Thanks,
| Online tools use TPMs for illustration because they can calculate them once and be done. This is convenient when people want to add samples over time and change groups and samples being visually compared. The results of that will not be as robust to outliers as normalized counts (produced with TMM or another method), but they're usually good enough for visualizations.
What you observed is due to the non-robustness of TPM that I mentioned earlier. TPM is one of those things that has its use, but if you're in a scenario where you can use properly normalized counts then that's usually preferable. As an aside, you could convert your normalized counts (or pseudo-counts) to TPMs and then you'd see the down-regulation. | biostars | {"uid": 317417, "view_count": 13590, "vote_count": 6} |
Dear community,
I ran phyml on a gene family to build a tree. Looking at the results, I'm a bit worried about the log-likelihood value: it's -754, which means the likelihood is almost zero! Does this mean that the program has little confidence in the estimated parameters or the tree topology? I was wondering if I'm understanding this incorrectly.
Thank you so much!!
```
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
--- PhyML 3.3.20190909 ---
http://www.atgc-montpellier.fr/phyml
Copyright CNRS - Universite Montpellier
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
. Sequence filename: exon3_wb_aligned_phy
. Data set: #1
. Initial tree: BioNJ
. Model of nucleotides substitution: GTR
. Number of taxa: 52
. Log-likelihood: -754.13849
. Unconstrained log-likelihood: -348.41766
. Composite log-likelihood: -6438.01266
. Parsimony: 119
. Tree size: 1.35199
. Discrete gamma model: Yes
- Number of classes: 4
- Gamma shape parameter: 1.901
- Relative rate in class 1: 0.28116 [freq=0.250000]
- Relative rate in class 2: 0.64406 [freq=0.250000]
- Relative rate in class 3: 1.06730 [freq=0.250000]
- Relative rate in class 4: 2.00748 [freq=0.250000]
. Nucleotides frequencies:
- f(A)= 0.37232
- f(C)= 0.24092
- f(G)= 0.17327
- f(T)= 0.21350
. GTR relative rate parameters :
A <-> C 0.82212
A <-> G 1.82689
A <-> T 0.53724
C <-> G 0.17829
C <-> T 2.00016
G <-> T 1.00000
. Instantaneous rate matrix :
[A---------C---------G---------T------]
-0.82453 0.25951 0.41474 0.15028
0.40104 -1.00102 0.04048 0.55950
0.89119 0.05628 -1.22720 0.27973
0.26208 0.63136 0.22702 -1.12046
. Run ID: none
. Random seed: 1625516914
. Subtree patterns aliasing: no
. Version: 3.3.20190909
. Time used: 0h0m4s (4 seconds)
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
Suggested citations:
S. Guindon, JF. Dufayard, V. Lefort, M. Anisimova, W. Hordijk, O. Gascuel
"New algorithms and methods to estimate maximum-likelihood phylogenies: assessing the performance of PhyML 3.0."
Systematic Biology. 2010. 59(3):307-321.
S. Guindon & O. Gascuel
"A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood"
Systematic Biology. 2003. 52(5):696-704.
oooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo
oooooooooooooooooooooooooooooooo
```
The principle of maximum likelihood is to choose the tree which makes the data most probable. As tree probabilities are usually tiny, especially for large datasets, we express them as ln(P), which is the log likelihood (LL). LL is a negative number (the log function is negative in the 0-1 range), and the best it can be is 0 (when the probability is 1).
Seems like you have a smallish tree, and the LL you obtained is appropriate. As of this writing I am monitoring an ongoing large tree that has `LL=-978001.783`, so you are golden. LL values are comparable for the same dataset (alignment), but not between different datasets. Higher LL is better.
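As a quick sanity check of why the raw probability is never reported directly, note that a log-likelihood of -754 corresponds to a probability of e^-754, far below what double precision can even represent:

```r
exp(-754)        # underflows to 0 in R
-754 / log(10)   # about -327.5, i.e. the likelihood is roughly 10^-328
```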
Hi community!,
I'm annotating variants with the VEP software and I'm finding some unexpected transcript data of the type:
- NM_014938.4_dupl16
- NM_001170637.2_dupl3
```
1 206516261 . C T 47 PASS CSQ=T|non_coding_transcript_exon_variant|MODIFIER|SRGAP2|23380|Transcript|NM_001170637.2_dupl3|mRNA|1/20||NM_001170637.2_dupl3.1:n.65C>T||65|||||||1||SNV|EntrezGene|||||||||||C|C||||||||||||||||||||||||||||||||||||||||||||||||||||||||||1:206516261-206516261|0.4996565||||,
T|missense_variant|MODERATE|SRGAP2|23380|Transcript|NM_001170637.3|protein_coding|1/20||NM_001170637.3:c.65C>T|NP_001164108.1:p.Arg289Trp|864|865|289|R/W|Cgg/Tgg|||1||SNV|EntrezGene||||||NP_001164108.1|||||C|C|OK|||||||||||||||||||||||||||||||||||0.63580||||T|T||||||||||2|||||||1:206516261-206516261|0.4996565||||,
T|missense_variant|MODERATE|SRGAP2|23380|Transcript|NM_001300952.1|protein_coding|1/18||NM_001300952.1:c.65C>T|NP_001287881.1:p.Arg289Trp|864|865|289|R/W|Cgg/Tgg|||1||SNV|EntrezGene||||||NP_001287881.1|||||C|C|OK|||||||||||||||||||||||||||||||||||0.63580||||T|T||||||||||2|||||||1:206516261-206516261|0.4996565||||,
T|non_coding_transcript_exon_variant|MODIFIER|SRGAP2|23380|Transcript|NM_015326.3_dupl3|mRNA|1/20||NM_015326.3_dupl3.1:n.65C>T||65|||||||1||SNV|EntrezGene||YES|||||||||C|C||||||||||||||||||||||||||||||||||||||||||||||||||||||||||1:206516261-206516261|0.4996565||||,
T|missense_variant|MODERATE|SRGAP2|23380|Transcript|NM_015326.4|protein_coding|1/20||NM_015326.4:c.65C>T|NP_056141.2:p.Arg289Trp|864|865|289|R/W|Cgg/Tgg|||1||SNV|EntrezGene||YES||||NP_056141.2|||||C|C|OK|||||||||||||||||||||||||||||||||||0.63580||||T|T||||||||||2|||||||1:206516261-206516261|0.4996565|||| GT:DP:VD:AD:AF:RD:ALD 0/1:9:3:6,3:0.3333:6,0:3,0
```
Searching on the VEP webpage or on the internet I can't find any reference to this kind of "dupl" suffix. Has anyone faced this? I don't know if they are alternative versions of the transcript, or why they are not listed as transcripts on their own.
Thanks in advance!
Cristian.
Edit: added an example of a variant with the VEP "dupl" annotation (NM_015326.3_dupl3)
Edit2: Using VEP ensembl version 91.1 with cache v91
We are investigating these. It looks like some RefSeq transcripts (eg NM_001170637.3) have been duplicated in Ensembl's other_features database with a lower version number and this dupl suffix (eg NM_001170637.2_dupl3). This has been propagated across to the VEP cache, which is why you're seeing them. We don't currently know why, but we believe that you can just ignore them from your analyses for now.
One of our projects used to query OMIM data as XML through NCBI's efetch utility, as described here for example:
[What is the best way to interact programmatically with OMIM?](http://biostar.stackexchange.com/questions/4194/what-is-the-best-way-to-interact-programmatically-with-omim)
However, it seems the service stopped functioning a few months ago. It now simply returns the following error:
> Database: omim - is not supported
I can find no mention of an update to the API on NCBI's website or anywhere else.
At the same time, the pages accessible directly on OMIM's website offer no link to structured data (XML or otherwise), and the downloadable file, while using some specific format to delimit fields, is still far from the flexibility of the former XML files (for example, it is impossible to retrieve metadata for each reference).
Is there currently any way to regain access to OMIM data in a structured, parsable format (XML...)?
FYI: this was just announced this morning via twitter:
https://twitter.com/#!/OmimOrg/status/196939511220015104
> @OmimOrg
> OMIM API is now open, see http://omim.org/api and http://omim.org/help/api #OmimOrg
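A minimal sketch of how the opened API can be queried from the command line is below; the exact endpoint and parameter names are my assumption, and `YOUR_KEY` is a placeholder, so check http://omim.org/api for the authoritative description and to register a key:

```
curl "https://api.omim.org/api/entry?mimNumber=100100&include=referenceList&format=xml&apiKey=YOUR_KEY"
```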
Hi guys,
I have an R programming question. I want to compare the genotype (`.GT` columns) with the given alleles (`.allele` columns) in dataframe `df1` and see if they are concordant or not. The rule is that if there is only one allele, the genotype should be 0/0; if there are two alleles (for example GA), the genotype should be 0/1, which is why there is a mismatch in the concordance column for GA. So A, T, G or C individually is 0/0, and any pair combination is 0/1. Based on this, I want to add a new concordance column next to every pair of compared columns, holding a match or mismatch result. This concordance column should be cbind-ed next to every compared pair of columns to give `Result` below. Could you please help me get this done? Thank you.
`df1`
```
1.allele 1.GT 2.allele 2.GT
A 0/0 A 0/0
GA 0/0 CT 0/1
C 0/0 G 0/0
```
`Result`
```
1.allele 1.GT 1.Concordance 2.allele 2.GT 2.Concordance
A 0/0 match A 0/0 match
GA 0/0 mismatch CT 0/1 match
C 0/0 match G 0/0 match
```
Data:
```r
df <- data.frame(allele1=c("A","AT","C"),GT1=c("0/0","0/0","0/0"),
allele2=c("AT","G","CG"),GT2=c("0/0","0/0","0/0"))
```
First I create a vector of correct translation. But I am assuming that there is always a 0/1, never a 1/0:
```r
library(stringr)
translation <- function(x) ifelse(str_length(df[,x])>1,"0/1","0/0")
correct <- sapply(c(1,3),translation)
```
Finally, mis/match:
```r
match.fun <- function(x) ifelse(df[,x] == correct[,(x/2)] , "match" , "mismatch" )
comparison <- sapply(c(2,4),match.fun)
cbind(df[,1:2],"Concordance1"=comparison[,1],df[,3:4],"Concordance2"=comparison[,2])
```
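For what it's worth, the final `cbind` on the example `df` above should give output along these lines (hand-derived, so please double-check it in a live R session):

```r
#   allele1 GT1 Concordance1 allele2 GT2 Concordance2
# 1       A 0/0        match      AT 0/0     mismatch
# 2      AT 0/0     mismatch       G 0/0        match
# 3       C 0/0        match      CG 0/0     mismatch
```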
Dear all
I followed some links here on Biostars to get the differential expression of my RNA-seq data for tumor vs control.
Then I got the pathways; I also called somatic mutations using the GATK pipeline.
I found some differentially expressed genes and found common somatic mutations in them, which could be interesting.
Then I analyzed the top pathways to see if they are related to cancers, but nothing interesting was found.
I am still trying to connect the pieces. Any suggestion on how I can conclude my results? What else can we do?
Thank you |
Just some ideas off the top of my head:
1. **Mutation-to-expression modelling:** For each mutation, test it's association to the expression of
differentially expressed genes (DEGs) in the mutation's 'vicinity'.
This can be as easy as building a linear regression model with
expression as the *y* (dependent) variable and mutation
`present`/`absent` as *x* (predictor). From this, you could derive
R-squared values and cross validated 'shrunk' R-squared values,
along with p-values. y variable would be continuous; x variable
would be categorical with mutation absent as reference/base level.
2. **Transcription factor binding sites:** Check for new TFBS (transcription factor binding sites) that may be
introduced as a result of each mutation. Look at databases like
JASPAR to do this - there are also other threads on biostars. There
are undoubtedly some mutations in your data that are going to
modulate expression of nearby genes. For an idea of mechanism, see
the wonderful study by Manour: <a
href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4720521/">An
Oncogenic Super-Enhancer Formed Through Somatic Mutation of a
Noncoding Intergenic Element</a>
3. **Histone binding regions**: Check for overlapping histone methylation (e.g. H3K27me3) and
acetylation (e.g. H3K27ac) binding regions - this data is available
from the UCSC, as far as I know. A mutation in such regions could
modify chromatin structure and alter expression.
4. **Transcription start sites:** Overlapping transcription start sites (TSS) - again,
available from UCSC I believe
5. ***In silico* prediction:** Use one of those functional / pathogenicity prediction tools. There
have been many tools released in recent years, including ones
tailored for cancer and somatic mutations. Take a quick look here:
https://www.biostars.org/p/286364/#286483
Noe that, technically, you could introduce all of the data from points 2-5 into the model mentioned in point 1. This would then be a robust way to assess the role of each mutation in relation to gene expression.
Finally, thinking just about the RNA-seq data, you could deconvolute it in order to identify immune cell-types that may be present in the tumour. This would give you an indication of the amount of immune cell infiltration, which is likely to differ across your tumors.
There are yet more ideas that I have not mentioned.
Kevin
| biostars | {"uid": 314312, "view_count": 890, "vote_count": 1} |
scRNA-seq novice here: We have four 10X scRNA-seq samples (wildtype and knockout condition) as n=2 each.
Each pair (so one WT and one KO) was produced on the same day respectively, same FACS sorting machine, same lab, same technician etc, so avoiding batch effects as much as we could.
For comparative analysis between the conditions I went through the `scran` / [OSCA][1] workflow and now aim to integrate the datasets. Essentially the choice is now to either merge the datasets without explicit batch correction via fastMNN (and only do per-sample depth correction via `multiBatchNorm` to ensure equal depth across the already normalized samples) or to apply fastMNN. I tested and visualized both approaches for every replicate independently, see below, and see quite different results.
Both replicates (if no fastMNN is applied) show a reproducible separation by condition (which we expect), so probably the influence of condition is greater than any batch effect. When applying fastMNN the two conditions lose this separation.
**Therefore my question**: Are there situations where batch correction masks interesting biological features? Given that we see reproducible separation by condition, could it be more meaningful to not apply fastMNN? If I combine the datasets and only correct for batch = day (so rep1 is one batch and rep2 is one batch) I manage to preserve the separation by condition. The tSNEs then pretty much look like the left panel in the plot below.
Comments and your experiences with this are appreciated.
![enter image description here][2]
[1]: https://osca.bioconductor.org/
[2]: https://i.ibb.co/M1kf9xM/Rplot.png
I would apply the mutual nearest neighbours correction for the exact reasons laid out by the developer here: https://osca.bioconductor.org/multi-sample-comparisons.html#sacrificing-differences
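For reference, a minimal sketch of that route with the `batchelor` package, assuming hypothetical, already log-normalized SingleCellExperiment objects `sce.wt` and `sce.ko` for the two samples being integrated:

```r
library(batchelor)
normed <- multiBatchNorm(sce.wt, sce.ko)       # put both batches on a comparable depth
mnn.out <- fastMNN(normed[[1]], normed[[2]])   # MNN-corrected low-dimensional space
reducedDim(mnn.out, "corrected")               # use these coordinates for tSNE/clustering
```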
> The denominator of this expression can be interpreted as a pseudo-reference sample obtained by taking the geometric mean across samples. Thus, each size factor estimate s_j is computed as the median of the ratios of the j-th sample's counts to those of the pseudo-reference.
But what if there are zero counts? The geometric mean will then be zero, and there is no division by 0. How does it handle genes with zero counts in some condition? Or is the assumption that if there are no reads in one condition, then the whole gene should be excluded?
If a gene has 0 counts in one sample, it is [excluded](https://github.com/Bioconductor-mirror/DESeq2/blob/release-3.1/man/estimateSizeFactorsForMatrix.Rd) from this computation. This is not the ideal solution, but usually the number of genes that have some counts in all samples is so high that the estimation of the size factors is good enough.
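A small R sketch of the behaviour described, roughly mirroring the median-of-ratios estimator with a hypothetical genes x samples count matrix `cts`:

```r
log.geo.means <- rowMeans(log(cts))   # becomes -Inf for any gene with a zero count
size.factors <- apply(cts, 2, function(x)
  exp(median((log(x) - log.geo.means)[is.finite(log.geo.means)])))
```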
<p>Hello all,</p>
<p>I have a very large list of NCBI gene IDs (such as, gi:47221249, ect). I am hoping to use this list to get the descriptions for each of the gene IDs. Using the GI above it would be "unnamed protein product [Tetraodon nigroviridis]".</p>
<p>Thus ending up with a file that has two columns, one with gene IDs and the other with the description for these IDs.</p>
<p>Would anyone know of a script/software already available to do a job such as this?</p>
<p>Thanks for the help!</p>
With [Entrez Direct](http://www.ncbi.nlm.nih.gov/books/NBK179288/) you can:
efetch -id 47221249 -db protein -format docsum | xtract -element Title
unnamed protein product [Tetraodon nigroviridis]
Or, if you have BLAST installed, you could fetch the latest nr database and query it with `blastdbcmd`.
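To turn this into the two-column file you describe, a simple (untested) loop over a hypothetical `gi_list.txt` with one GI per line could look like:

```
while read gi; do
  printf "%s\t%s\n" "$gi" "$(efetch -db protein -id "$gi" -format docsum | xtract -element Title)"
done < gi_list.txt > descriptions.tsv
```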
Hi all,
While working on a ChIP-Seq data set consisting of 16 samples I want to see the differences in peak height. To achieve this I first need a merged set of peak locations, so I was thinking of a tool which could merge all 16 of my peak files at once, e.g. bedtools merge / multiinter. The only thing is that I have the feeling this is not exactly what I want, and it becomes difficult to see if bedtools does a good job here.
I want to achieve a peak location in the following way:
A: start = 25 : end = 50
B: start = 30 : end = **65**
C: start = **20** : end = 45
MERGED: start = 20 : end = 65
Which tool/mode from bedtools can achieve this result? Any hints are very much appreciated. Thanks!
Sander
This should do it: concatenate the peak locations from all peak files, sort them and merge:
cat A B C .... | sort -k1,1 -k2,2n | mergeBed -i stdin > locations.bed
To know which files the peaks co-ordinates are merged from, you need to have an identifier in each file before merging.
Use
awk '{print $0"\tpeakFile-"NR}' A > A_id
This will add a new last column with the label "peakFile-1" incremented per row, which will be nice if you want to track later exactly which and how many peaks from which file were used for the current peak merge. I leave it to you to implement a loop to label all the files automatically. Once it's done, use the `collapse` operator from mergeBed:
cat A_id B_id C_id .... | sort -k1,1 -k2,2n | mergeBed -i stdin -o collapse -c 4
where `-c` gives the column number holding the IDs we just added.
Output:
chr1 20 65 peakFile-3, peakFile-1, peakFile-2
Enjoy!
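A sketch of the labelling loop left as an exercise above; the file names are hypothetical, and here the label is the file name itself rather than a per-row counter, which makes it easier to trace each merged interval back to its source file:

```
for f in sample*.narrowPeak; do
  awk -v id="$f" 'BEGIN{OFS="\t"}{print $1,$2,$3,id}' "$f" > "${f%.narrowPeak}_id.bed"
done
cat *_id.bed | sort -k1,1 -k2,2n | mergeBed -i stdin -c 4 -o collapse
```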
Hi,
I am working on a genome assembly and am currently trying to annotate and visualize the regions of consecutive Ns. I would like to see which regions of my newly assembled genome are gaps (NNNN...).
The way I tried to do it is with the letterFrequencyInSlidingView function of Biostrings, but it takes forever (more than half an hour) for one scaffold, and I have many of them. Later on, I make a dataframe out of the output matrix and try to plot it in order to see the regions where Ns are consecutive. Plotting it likewise takes a lot of time.
The command I use is:
Freq_N_758 <- sapply(chromium.assembly["758"], letterFrequencyInSlidingView, 1, "N")
I am sure that I am missing a very important point here and doing this the wrong way. What is the correct way to do it in R?
Many thanks,
Alex
Try something like this using the Biostrings Bioconductor package:
```
library(Biostrings)
x = DNAString("ACTGNNTTGGNNNNAACTGC")
y = maskMotif(x,'N')
z = as(gaps(y),"Views")
ranges(z)
as.data.frame(ranges(z))
```
The final output from above will be:
```
start end width
1 5 6 2
2 11 14 4
```
This will run nearly instantaneously for pretty much arbitrary sizes of sequence.
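To apply this over every scaffold of an assembly loaded as a DNAStringSet (e.g. the `chromium.assembly` object from the question), a sketch along these lines should remain fast:

```r
gap_list <- lapply(seq_along(chromium.assembly), function(i) {
  s <- chromium.assembly[[i]]   # one scaffold as a DNAString
  as.data.frame(ranges(as(gaps(maskMotif(s, "N")), "Views")))
})
names(gap_list) <- names(chromium.assembly)
```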
Hi,
I have received fastq files containing the reads from Illumina MiSeq. Since they are paired-end, there is an R1 and an R2 file for each sample. So I expected to find reads beginning with our forward primer in the R1 files, and reads beginning with our reverse primer in the R2 (or vice versa). However, I find both in both; i.e. about half of the reads in the R1 files begin with the forward primer, and half with the reverse primer; and same with the R2s.
I tried merging them, but this results in about half of the reads being reverse complemented, and this makes things more complicated downstream, so I would like them to all go in the same direction.
I thought of grepping for each of the primers, but because of ambiguities, and because some reads still have short tags at the beginning, I don't think it's going to work - plus I thought they weren't supposed to be mixed anyway...?
Maybe I don't understand this as well as I thought. Any ideas? Thanks.
Actually the reads are always mixed just the way you describe them. R1 may be forward or reverse. R2 may also be forward or reverse. You are only guaranteed that the pairs are complementary.
Depending on your requirements, you may indeed need to check which is which down the pipeline. Standard alignment utilities do that automatically.
Duplicate reads have first been removed using picard:
java -jar -Xmx3g picard/dist/picard.jar MarkDuplicates INPUT=input.bam OUTPUT=output.bam METRICS_FILE=output.dup_metrics CREATE_INDEX=TRUE VALIDATION_STRINGENCY=SILENT
When I run samtools flagstat on the output bamfile I get the following:
2182812 + 0 in total (QC-passed reads + QC-failed reads)
226710 + 0 duplicates
2176925 + 0 mapped (99.73%:-nan%)
2182812 + 0 paired in sequencing
1091406 + 0 read1
1091406 + 0 read2
2156992 + 0 properly paired (98.82%:-nan%)
2171322 + 0 with itself and mate mapped
5603 + 0 singletons (0.26%:-nan%)
9776 + 0 with mate mapped to a different chr
7030 + 0 with mate mapped to a different chr (mapQ>=5)
So presumably the file still contains 226710 duplicate reads.
If I filter out duplicates according to this table:
Flag Chr Description
0x0001 p the read is paired in sequencing
0x0002 P the read is mapped in a proper pair
0x0004 u the query sequence itself is unmapped
0x0008 U the mate is unmapped
0x0010 r strand of the query (1 for reverse)
0x0020 R strand of the mate
0x0040 1 the read is the first read in a pair
0x0080 2 the read is the second read in a pair
0x0100 s the alignment is not primary
0x0200 f the read fails platform/vendor quality checks
0x0400 d the read is either a PCR or an optical duplicate
using the command:
samtools view -F 400 output.bam | wc -l
I get 506072,
not: total reads (2182812) - duplicates (226710) = 1956102
My question is why does samtools flagstat indicate that there are still duplicates present after running picard tools, and why are these figures inconsistent when I attempt to filter out duplicates using samtools? I've been asked to remove duplicates for a project. At the moment I am very confused as to which method I should use given the inconsistencies in results. | Picard `MarkDuplicates` *marks* duplicates, rather than removing them. Instead, when it finds a duplicate read it sets the duplicate flag to true and then outputs it. To remove the duplicates you need to add
REMOVE_DUPLICATES=true
to the command line. | biostars | {"uid": 208897, "view_count": 10981, "vote_count": 2} |
Hi everyone!
Sorry for this noob question, but I just wanted to ask, what is the best practice in the field for publishing RNA sequencing data analysis scripts? Does it usually happen before the publication, or after? Currently I kept the repo as private and the paper is not published yet. | Generally, it doesn't really matter unless you're publishing a custom, novel method where the code is really an integral part of the publication. In such a case, you'd want to keep it private until publication. Otherwise, keeping your random DESeq2 scripts in a public repo isn't going to hurt you or draw much attention, though nobody will care if you keep it private until submission either.
Just be sure not to include any PHI or sample information in them. | biostars | {"uid": 9479724, "view_count": 815, "vote_count": 2} |
i have been trying to merge some ped/map files. i had removed or renamed the duplicated in the map file so i no longer get a duplication warning.
/home/chrystalla/plink-1.07-x86_64/plink --bfile snp_exc_1 --merge snp_exc_2.map snp_exc_2.ped --make-bed --out lets_merge
ERROR: Problem with MAP file line:
F102 PN12-4100 0 0 1 2 T T G A C C 0 0 C C 0 0 T C A A A T C C C C C C A G C C A A C C A A 0 0 A A G G G G A T C T T C G G C C C A T T A A G T G G A G C T C C A C A A A G G G A G G G A A T T T T C C T T T T G A T T A A A A C C T T G A G G A A T T G G 0 0
i try merging them after converting them into the binary format (bed etc) but the process is just killed
Analysis started...........
Reading genotype bitfile from [ more_exc_1.bed ]
Detected that binary PED file is v1.00 SNP-major mode
Using merge mode 1 : consensus call (default)
Killed
can you help me?
thanks | The MAP line generated in the error looks like a PED line. Either your files (.map .ped) are named incorrectly, or you need to change the order you read them into PLINK.
Your FID and IID should be alphanumeric. Removing the '-' may help? | biostars | {"uid": 232378, "view_count": 2882, "vote_count": 1} |
<p>I want to split my bam file into paired aligned reads and unpaired aligned reads for a much easier downstream analysis.</p>
<p>1) Is this possible to do in one run through the file with samtools?</p>
<p>2) If this is not possible with samtools directly, is it possible with the pysam wrapper library?</p>
<p>If not, perhaps just running the two filters in parallel should reduce IO due to caching...</p>
<p>Ps. I want the pairs in the paired file to be on contiguous lines, so that line 0 and 1 contain the first pair, line 2 and 3 contain the second pair and so on.</p>
<p>3) Is sorting the file by name enough to ensure that paired reads end up next to each other?</p>
<p>Sorry for these possibly dumb questions, I am a complete samtools/paired end reads newb.</p>
| <pre>
samtools view -f 4 -o umapped.bam -U mapped.bam in.bam</pre>
<p>http://www.htslib.org/doc/samtools-1.1.html</p>
<p>"-U FILE Write alignments that are <em>not</em> selected by the various filter options to <em>FILE</em>. When this option is used, all alignments (or all alignments intersecting the <em>regions</em> specified) are written to either the output file or this file, but never both. "</p>
| biostars | {"uid": 138662, "view_count": 1772, "vote_count": 1} |
My goal is to make PCA and correlation plots of my RNA-Seq BAM files. Some useful discussion on BioStars such as [this][1], have helped guide my steps.
In another post, responding to a question on library size normalization at this BioStars [post][2], user [ATpoint][3] indicates size factor calculation must be performed as follows:
## edgeR:: calcNormFactors
tmp.NormFactors <- calcNormFactors(object = raw.counts, method = c("TMM"), doWeighting = FALSE)
## raw library size:
tmp.LibSize <- colSums(raw.counts)
## calculate size factors:
SizeFactors <- tmp.NormFactors * tmp.LibSize / 1000000
In my analyses, I used `DESeq2` instead of `edgeR`, after importing SALMON quantification using tximport, using syntax instructions at [BioConductor][4], as follows:
library(DESeq2)
Design <- DataFrame((cbind(BiolRep, Genotype, TimePoints)))
dim(Design)
#[1] 144 3
rownames(Design) <- colnames(txi.salmon$counts)
design_formula <- ~ TimePoints * Genotype
dds <- DESeqDataSetFromTximport(txi.salmon, Design.df, design_formula)
NormValues <- estimateSizeFactorsForMatrix(counts(dds))
So my **1st question** is this:
To use DESeq2-based size Factors for converting BAM to BigWig, using bamCoverage of deepTools, I would still need to calculate `SizeFactors` as follows, rather than use just the (inverse of the) `NormValues`, am I right?
SizeFactors <- NormValues * LibSize / 1000000
And my **2nd question** is :
With `SizeFactors` calculated as above, I'd then have to use the **inverse of those values** to obtain my final **normalized BAM files** as inputs for use with deepTools, with the following syntax, am I right?
bamCoverage -b $BAM_IN -o $BigWig_OUT --normalizeUsing None --scaleFactor $(1/Size_factor) --effectiveGenomeSize $ACGTtotalCount
Could you please confirm or correct the approach I have indicated above? Thanks in advance!
[1]: http://%20https://www.biostars.org/p/349881/
[2]: https://bioconductor.riken.jp/packages/3.4/bioc/vignettes/tximport/inst/doc/tximport.html
[3]: https://www.biostars.org/u/25721/
[4]: https://bioconductor.riken.jp/packages/3.4/bioc/vignettes/tximport/inst/doc/tximport.html | Please use `1/calcNormFactors(object = raw.count)` as the scaling factor. Whether you use TMM or the default RLE is largely immaterial to me. Your `bamCoverage` command looks fine. | biostars | {"uid": 452905, "view_count": 1761, "vote_count": 2} |
Hi All,
I have a question regarding adapter trimming process of small RNA-seq data. The library for this dataset was prepared using NEBNext multiplex small RNA sample prep set for illumina (E7300S/L: https://www.neb.com/-/media/catalog/datacards-or-manuals/manuale7300.pdf). So I used `bbduk.sh` from BBtools(https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/bbduk-guide/) using the following command:
bbduk.sh -Xmx1g in=Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq out=/media/owner/7ef86942-96a5-48a7-a325-6c5e1aec7408/trimmed_files/bbmap_trimmed/clean_Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq ref=NEB-SE_5_and_3_Prime.fa ktrim=r k=23 mink=11 hdist=1 tpe tbo
The adapter file`NEB-SE_5_and_3_Prime.fa` contains both 5' and 3' adapters:
>NEB_sRNA_read_1
AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC
>NEB_sRNA_read_2
AGATCGGAA
So the problem I have is with the trimmed file- the trimmed file now got rid of first adapter:
cat clean_Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq | head -n 20000 | grep AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC
owner@owner-HP-Z840-Workstation[bbmap_trimmed]
but it is still showing the second adapter:
owner@owner-HP-Z840-Workstation[bbmap_trimmed] cat clean_Ago2_SsHV2L_1_CATGGC_L003_R1_001.fastq | head -n 1000 | grep AGATCGGAA
TTTCTCTGAGCACTCCTTAGTACAAGATCGGAAGAGCACACGTCGAACTC
AAATGTTCTGAGGACTGGTTCTAGATCGGAAGAGCACCGTCTGAACTCCA
GATGGGCCCCGGGTTCGATTCCCGGCGAACGCACCAGATCGGAAGAGCCA
TTGGACGTGTTATTTTCAGACAAGATCGGAAGAAGCACACGTCTGAACTC
Can someone please help me understand if I need to remove both of these adapters in order to perform downstream/expression analysis? I have been using btrim to trim adapters from RNAseq data (in this case I never had to provide adapter infile), but this is the first time I am doing it with bbmap (and also with trimmomatic) for smallRNAseq data. In case of smallRNAseq data, do we normally trim both 5' and 3' adapters and have both adapter sequences in infile fortrimming? Can someone please help me understand this process? Thank you for your help in advance.
| Hi:
I'm one of the developers of the NEBNext kits.
The reads you show seem to contain the sequence of the 3' adapter as expected for small RNA.
The 5' adapter sequence begins with a G (no A-tailing for this library type). The 5' adapter sequence should not be found in read 1, and I don't see it in the sequences you posted.
For our DNA Ultra*, RNA Ultra* and Small RNA methods, read 1 should always be trimmed with
AGATCGGAAGAGCACACGTCTGAACTCCAGTCAC
(note: this is not always true for other vendor's methods)
For the Small RNA kit, Read 2 (if present, not typically done with short inserts) should be trimmed with
GATCGTCGGACTGTAGAACTCTGAACGTGTAGATCTCGGTGGTCGCCGTATCATT
The sequences of oligos used in our kits are documented at the end of our manuals.
I recommend using a simple program like flexbar [1] for single end trimming.
For paired end reads, presence of true adapter sequence requires that the insert is shorter than read length.
In that scenario, both read 1 and read 2 contain information about the position of the adapter.
Use of an adapter trimmer that takes advantage of this additional signal is advisable.
E.g. seqprep[2], flexbar (-ap option)[1], etc.
[1] Roehr, J. T., Dieterich, C., & Reinert, K. (2017). Flexbar 3.0 - SIMD and multicore parallelization. Bioinformatics, 33(18), 2941–2942. http://doi.org/10.1093/bioinformatics/btx330
https://github.com/seqan/flexbar
[2] https://github.com/jstjohn/SeqPrep | biostars | {"uid": 327802, "view_count": 4500, "vote_count": 2} |
<p>Dear lazyweb,</p>
<p>I wonder how data are stored behind the <strong>Exac</strong> server http://exac.broadinstitute.org/ :</p>
<ul>
<li>SQL only ?</li>
<li>SQL+tabix/VCF ?</li>
<li>Gemini ?</li>
<li>...</li>
</ul>
<p>Thanks.</p>
<p>Pierre</p>
| <p>Pierre;</p>
<p>The source code is here:</p>
<p>https://github.com/konradjk/exac_browser</p>
<p>It looks like a MongoDB database and Python Flask webserver. It also builds on xBrowse:</p>
<p>https://github.com/xbrowse/xbrowse</p>
| biostars | {"uid": 129818, "view_count": 6727, "vote_count": 3} |
<p>Dear All,</p>
<p>I am using <a href='http://samtools.sourceforge.net/SAM1.pdf'>BAM</a> files for chip-seq analysis. The chromosome notation in a usual <a href='http://samtools.sourceforge.net/SAM1.pdf'>BAM</a> file is like: chr1. In my file the chromosomme notation is 1. Is there a way to change that in the <a href='http://samtools.sourceforge.net/SAM1.pdf'>BAM</a> file? For further analysis it is very important to change the notation.</p>
<p>Many thanks!</p>
<p>Greetz Lisanne</p>
| Edit: ~10 years later.
Some options:
**1)** See I wrote http://lindenb.github.io/jvarkit/ConvertBamChromosomes.html
**2)** You can use `samtools view` to dump your data with the header: http://samtools.sourceforge.net/samtools.shtml and replace the chromosomes with `sed` or `awk`. (**not tested** but it could be something like below):
samtools view -h file.bam |\
sed -e '/^@SQ/s/SN\:/SN\:chr/' -e '/^[^@]/s/\t/\tchr/2'|\
awk -F ' ' '$7=($7=="=" || $7=="*"?$7:sprintf("chr%s",$7))' |\
tr " " "\t"
**3)** You could also use [PICARD ReplaceSamHeader](https://gatk.broadinstitute.org/hc/en-us/articles/360036897572-ReplaceSamHeader).
**4)** If you need to work with **samtools** or the **gatk**, another hack is to create a 'mock' **faidx** index to the reference genome. See [my blog](http://plindenbaum.blogspot.com/2011/10/reference-genome-with-or-without-chr.html). | biostars | {"uid": 13462, "view_count": 37716, "vote_count": 15} |
I have two fasta files, with the same headers/names for the sequences but different sequences.
I would like to combine them into one file, so that each sequence has the same name but is a combination of both sequences.
My preferred language is bash script, but I'm open to other suggestions.
thanks. | assuming there are only twho lines per sequence (title/dna) and they are ordered the same way.
paste f1.fa f2.fa | sed -e 's/\t>.*//' -e 's/\t//' | biostars | {"uid": 231806, "view_count": 8418, "vote_count": 2} |
Hello All,
I have a large set of genes with differential expression p-values (from DEseq2) and directional (up or down-regulated) information. I was wondering if anyone knew of a pathway enrichment analysis program that could make use of all of this information rather than just the gene list? Sorry if I have missed an obvious option, but it seems like most of the programs available just use the gene list (like Enrichr) or want to do the entire differential expression calculation from read counts (like GSEA). Does anyone know of a midpoint between these options?
Thank you very much. | There is a nice review in [Plos Comp][1] about the different methods that are available for pathway of GO analysis. The methods (with only a list of significant genes) you are referring to are first generation. There are also second and even third generation. Read it in the paper, and you'll know what to look for.
[1]: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002375 | biostars | {"uid": 321269, "view_count": 1448, "vote_count": 1} |
Hello All,
Our lab is getting some interesting results whereby we have modules that have both positive and negative kme values in a signed network. If a higher power is used this goes away but the lower power already surpasses the R2 and k.means thresholds. Any insight would be helpful.
Best,
Duc | Thanks Kevin and all who have looked at this issue-- Peter Langfelder suggested bypassing use of TOM for the dissimilarity metric used to cluster, which can disagree sometimes substantially with expected module membership based on kME table (based on the experience with this and similar large multi-batch, batch-corrected, data). The batch variance removal prior to WGCNA blockwiseModules() may/may not play a role, which begame the focus of the above post. Regardless, adding the parameter to bypass TOM and use the adjacency matrix instead, TOMType="none" seems to have handled all abnormal assignments in a kME table of a similar network, and I suspect it will work for these data. Cf. [interactions on the bioconductor forum][1] for more of Peter's response.
Practical solution found -- I recommend Duc close this thread!
[1]: https://support.bioconductor.org/p/119506/#119531 | biostars | {"uid": 339326, "view_count": 3754, "vote_count": 1} |
I may be missing something in the html help page, but is there a way not to have any labels aside from changing the label size to 0? Thanks. | Sorry, I should have made this more intuitive. You just need to do:
lab = NA
Kevin
| biostars | {"uid": 415803, "view_count": 7442, "vote_count": 2} |
I would like to use regular expressions to identify a motif in an amino acid sequence. Part of the the motif is described as '2 or more out of XXXX are D or E'. I wonder if there is a way to specify this part directly with regular expressions instead of writing out all the alternatives or using a more iterative approach.
I'm actually using this in the find box of my editor (sublime text) as it accepts regex (not sure what extensions/definitions it goes to). Otherwise a perl version of regex is where I would implement this.
Thanks!
edit: changed title slightly
edit: changed question to include *or more*. | <p>I think I've figured it out now, using lookahead <code>(?=pattern) </code>to link two regular expressions like an <code>AND</code>:</p>
<p><s><code>(?=.?[DE]{1,4}.?[DE]{1,2}.?).{4}</code></s></p>
<p>The first part (in brackets) stipulates the pattern described by the following part must have at least 2 Ds or Es which may have other characters before, after or between them. The second part (following brackets) says the result must be four characters long.</p>
<p>EDIT: PLUS an alternate with two wildcard characters in the middle</p>
<p><code>(?=.?[DE]{1,4}.?[DE]{1,2}.?|[DE]..[DE]).{4}</code></p>
<p>I'm not sure how this would deal with overlapping motifs (I only came across regular expressions recently) but this is adequate for my needs now.</p>
| biostars | {"uid": 104868, "view_count": 2412, "vote_count": 1} |
Hi all,
Is there an API somewhere for fetching lists of accession numbers from NCBI that match some search criteria?
Context: I'd like to use their SRA Toolkit's 'prefetch' functionality (http://www.ncbi.nlm.nih.gov/books/NBK47540/#SRA_Download_Guid_B.The_SRA_Toolkit) to grab a bunch of sra files as part of a larger automated pipeline, but I don't want to have to cut and paste accession numbers one by one from a web-based search. | <p>You mean like <a href="http://www.ncbi.nlm.nih.gov/books/NBK179288/">Entrez Direct</a> or the less user friendly <a href="http://www.ncbi.nlm.nih.gov/books/NBK25501/">eUtils</a>?</p>
| biostars | {"uid": 106842, "view_count": 2621, "vote_count": 1} |
I am looking for a tool/script/pony to correct the REF column in a vcf file whenever that nucleotide doesn't match the reference genome, as supplied in fasta format.
It sounds like a common task but I could not find something. I did find `bcftools +fixref` but that only works for SNPs. My vcf files are from structural variants.
Cheers,
W | This is the python solution I came up with, using cyvcf (minimally requiring v0.10.2) and pyfaidx. It can use compressed vcf and fasta files.
https://gist.github.com/wdecoster/d7fa440a74afd4607bb321ae0986fccd | biostars | {"uid": 347588, "view_count": 3945, "vote_count": 2} |
Hi all,
I'm looking for a simple solution for renaming fasta headers.
I have this fasta header
>trpE___AA_HMM___6fa05435949258489b608db9e58e5ba38821f2f26fffe5755daff43abin_id:MALBOS1|source:AA_HMM|e_value:5.2e99|contig:MALBOS1_000000117228|gene_callers_id:113772|start:215745|stop:217260|length:1515
And I would like to rename it only like this
>MALBOS1_000000117228
That means, remove everything before the pattern "contig:" and after "|gene_callers_id"
Any ideas?
thanks
| Here's a solution using cut:
cut -d '|' -f 4 start.fasta | sed 's/contig:/>/' > end.fasta
| biostars | {"uid": 9546056, "view_count": 562, "vote_count": 1} |
Hi all,
I've just written a tool adding one or more **extra column** in a [VCF file](http://www.1000genomes.org/wiki/doku.php?id=1000_genomes:analysis:vcf3.3). The header now looks like this:
(...)
#CHROM POS ID REF ALT QUAL FILTER INFO MY_COL1 MY_COL2 FORMAT NA00001 NA00002 NA00003
(...)
Is there something in the **VCF** spec saying that another column can't be added ?
because when I used **VCFTOOLS**, it says:
vcftools --vcf file.vcf
(...)
Scanning file.vcf ...
Ninth Header entry should be FORMAT: MY_COL1
Currently scanning CHROM: 19
Currently scanning CHROM: 20
Currently scanning CHROM: X | Instead of inserting new columns which will screw up most tools, you should add your custom information at the **ANNO** column. This is what that field is designed for. With perl, it is very easy to extract the key-value pair there, e.g.:
perl -ane 'print "MYKEY=$1\n" if $F[7]=~/MYKEY=([^;]+)/'
Furthermore, VCF is not only used for SNPs, but also for INDELs and SVs. To make this format, various people from several major sequencing centers have joined the discussion. In my opinion, it is quite stable now. Small details may be changed in future, but not the number of columns. | biostars | {"uid": 994, "view_count": 7396, "vote_count": 2} |
Dear all,
I have pair-end RNA-seq data (Illumina) from parasite and I would like to do De-Novo assembly by TRINITY. I have reference genome of my host organism so I can map my data to host and remove from fastq contaminations.
My plan is:
1. Map with bwa/bowtie/novoaling my pair-end FASTQ files to a host reference genome
2. Remove hits from fastq files (cleaning contaminations)
3. For the rest of FASTQ files use TRINITY for De-Novo transcript assembly
My question is:
May I use aligners (bwa etc.) and align raw fastq files to host DNA and then remove contaminants from fastq files? Question is because my data are from RNA-seq project NOT DNA.
How can I remove the sequences from raw fastq files that align to host DNA (cleaning process)?
Or if you have any other advice how to prepare data to TRINITY pipeline I will appreciate it.
Thank you so much for any comment and sharing your experience. | If you would like a Galaxy solution, this filters by ID: http://toolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id / https://github.com/peterjc/pico_galaxy/tree/master/tools/seq_filter_by_id
This filters using a SAM/BAM mapping file: http://toolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_mapping / https://testtoolshed.g2.bx.psu.edu/view/peterjc/sample_seqs / https://github.com/peterjc/pico_galaxy/tree/master/tools/seq_filter_by_mapping | biostars | {"uid": 120756, "view_count": 5773, "vote_count": 1} |
Greetings,
I'm trying to get a sense of what people are using for local assembly. Ideally I'm looking for a tool that takes a BAM file and outputs a fasta for a region of interest.
EDIT:
I'm working with whole genome data. | Look no further than [Scalpel][1].
I've hacked out [the microassembler component of scalpel][2]. It does basically what you're describing except it produces an assembly graph from the BAM file in a particular region.
[1]: http://scalpel.sourceforge.net/
[2]: https://github.com/ekg/microassembler | biostars | {"uid": 126886, "view_count": 1967, "vote_count": 3} |
Dear all,
I am trying to install [Primer3][1] on an Ubuntu machine. Following the instructions, I moved into primer3-<release>/src and typed `make all` but i got:
/usr/local/lib/primer3/src$ make all
g++ -c -g -Wall -D__USE_FIXED_PROTOTYPES__ -O2 primer3_boulder_main.c
g++ -c -g -Wall -D__USE_FIXED_PROTOTYPES__ -O2 -o format_output.o format_output.c
g++ -c -g -Wall -D__USE_FIXED_PROTOTYPES__ -O2 -o read_boulder.o read_boulder.c
g++ -c -g -Wall -D__USE_FIXED_PROTOTYPES__ -O2 -o print_boulder.o print_boulder.c
g++ -c -g -Wall -D__USE_FIXED_PROTOTYPES__ -O2 -Wno-deprecated -o libprimer3.o libprimer3.c
g++ -c -g -Wall -D__USE_FIXED_PROTOTYPES__ -O2 -o p3_seq_lib.o p3_seq_lib.c
ar rv libprimer3.a libprimer3.o p3_seq_lib.o
ar: creating libprimer3.a
a - libprimer3.o
a - p3_seq_lib.o
ranlib libprimer3.a
g++ -c -g -Wall -D__USE_FIXED_PROTOTYPES__ -O2 -o dpal_primer.o dpal.c
dpal.c: In function ‘void print_align(const unsigned char*, const unsigned char*, int (*)[1600][3], int, int, const dpal_args*)’:
dpal.c:1036:5: warning: this ‘for’ clause does not guard... [-Wmisleading-indentation]
for(i=j;i<j+70;i++) fprintf(stderr, "%c",sy[i]); fprintf(stderr,"\n");
^~~
dpal.c:1036:54: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘for’
for(i=j;i<j+70;i++) fprintf(stderr, "%c",sy[i]); fprintf(stderr,"\n");
^~~~~~~
ar rv libdpal.a dpal_primer.o
ar: creating libdpal.a
a - dpal_primer.o
ranlib libdpal.a
g++ -c -g -Wall -D__USE_FIXED_PROTOTYPES__ -O2 -ffloat-store -o thal_primer.o thal.c
thal.c: In function ‘void thal(const unsigned char*, const unsigned char*, const thal_args*, thal_results*)’:
thal.c:429:16: error: ISO C++ forbids comparison between pointer and integer [-fpermissive]
if ('\0' == oligo_f) {
^~~~~~~
thal.c:434:16: error: ISO C++ forbids comparison between pointer and integer [-fpermissive]
if ('\0' == oligo_r) {
^~~~~~~
thal.c: In function ‘void tableStartATS(double, double (*)[5])’:
thal.c:1200:4: warning: this ‘for’ clause does not guard... [-Wmisleading-indentation]
for (i = 0; i < 5; ++i)
^~~
thal.c:1203:6: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘for’
atpS[0][3] = atpS[3][0] = atp_value;
^~~~
thal.c: In function ‘void tableStartATH(double, double (*)[5])’:
thal.c:1212:4: warning: this ‘for’ clause does not guard... [-Wmisleading-indentation]
for (i = 0; i < 5; ++i)
^~~
thal.c:1216:6: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘for’
atpH[0][3] = atpH[3][0] = atp_value;
^~~~
Makefile:192: recipe for target 'thal_primer.o' failed
make: *** [thal_primer.o] Error 1
what I did wrong?
thank you
[1]: https://sourceforge.net/projects/primer3/ | I finally was able to install it following the instructions in the github page here https://github.com/primer3-org/primer3
Installing
sudo apt-get install -y build-essential g++ cmake git-all
git clone https://github.com/primer3-org/primer3.git primer3
cd primer3/src
make
make test | biostars | {"uid": 306358, "view_count": 3731, "vote_count": 1} |
As the title says, Discovar De Novo (52488 - I think this is some version identifier) keeps saying that it can't allocate memory - it then reliably aborts. This is driving me up the wall because I'm queueing often for days for access to a 1 Tb compute node on the HPC.
**The details of my sequencing data:**
-------------------------------
A single PE library (Illumina HiSeq 2500, 2x250 bp, 500 bp insert). Originally ~ 120x depth but I have tried subsampling this to 50% of that and get the same error. Genome size is estimated at ~ 300 Mb.
**The node I'm running on:**
============================
- hardware type: x86_64
- cache size: 35840 KB
- cpu MHz: 2400.000
- cpu model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
- physical memory: 1007.57 GB
**The invocation I last used (this is the 50% subsampling I mentioned above):**
------------------------------------------------------------------------
DiscovarDeNovo READS=/scratch/genomicsocorg/mwhj1/Assemblies_2MPs/SC1702273-R3/20_50pc_R1.fastq,/scratch/genomicsocorg/mwhj1//Assemblies_2MPs/SC1702273-R3/20_50pc_R2.fastq OUT_DIR=/scratch/genomicsocorg/mwhj1/Assemblies_2MPs/Test4_20 MAX_MEM_GB=900 NUM_THREADS=28
**The error message (this has popped up at different points during assembly and is always the same):**
------------------------------------------------------------------------
> "Dang dang dang, we've got a problem. Attempt to allocate memory
> failed, memory usage before call = 38.87 GB."
Further up in the output log I can see, reliably, that peak memory usage always successfully gets into the 500 Gb range during steps prior to this error appearing.
Discovar De Novo suggested the following solutions:
> - Run without other competing processes (if that's the problem).
> - Run on a server having more memory, or reduce your input data amount.
> - Consider using the MAX_MEM_GB or MEMORY_CHECK options (if available).
I don't think 1 is an issue (but details of the top memory processes on the node, at the time of Discovar De Novo giving up are below) - IT services here agree that this is not the issue.
2 is not an option here as I am using our highest memory (1 Tb) nodes.
3 I have tried and does not seem to help.
Top memory processes on node, as reported in the output log, at the time of failure (I *think* all of these are from Discovar De Novo):
> .0. our_new_handler(), in RunTime.cc:586
> 1. __gnu_cxx::new_allocator<KmerRecord<200> >::allocate(...), in new_allocator.h:104
> 2. _Vector_base<KmerRecord<200>, allocator<KmerRecord<200> > >::_M_allocate(...), in stl_vector.h:168
> 3. void vector<KmerRecord<200>, allocator<KmerRecord<200> > >::_M_emplace_back_aux<KmerRecord<200> const&>(...), in vector.tcc:404
> 4. vector<KmerRecord<200>, allocator<KmerRecord<200> > >::push_back(...), in stl_vector.h:911
> 5. vec<KmerRecord<200>, allocator<KmerRecord<200> > >::push_back(...), in Vec.h:153
> 6. KmerParcelVec<200ul>::ParseReadKmersForParcelIDs(...), in KmerParcelsBuilder.cc:331
> 7. KmerParcelVec<200ul>::RunNextTask(...), in KmerParcelsBuilder.cc:408
> 8. KmerParcelVecVec<200ul>::RunTasks(...), in KmerParcelsBuilder.cc:516
> 9. ParcelProcessor<200ul>::operator()(unsigned long), in KmerParcelsBuilder.h:258
> 10. void KmerParcelsBuilder::BuildTemplate<200ul>(), in KmerParcelsBuilder.cc:578
> 11. KmerParcelsBuilder::Build(unsigned long), in KmerParcelsBuilder.cc:743 (discriminator 1)
> 12. void MakeAlignsPathsParallelX<2ul>(...), in MakeAlignsPathsParallelX.cc:210
> 13. base_vec_vec_to_mutmer_hits(...), in ReadsToPathsCoreX.cc:438 (discriminator 1)
> 14. ReadsToPathsCoreX(...), in ReadsToPathsCoreX.cc:743
> 15. ReadsToPathsCoreY(...), in ReadsToPathsCoreX.cc:796
**Details of my trials and tribulations:**
------------------------------------------
I have talked to IT services at my university about this issue. The first time it happened I only requested 350 Gb of memory from the scheduler - that job was allocated a 1 Tb node and hit this issue. I resubmitted it with a request for 1000 Gb of memory, and included Discovar De Novo's optional arguments MAX_MEM_GB (I put 1000) and MEMORY_CHECK - I got the same issue. IT services said they were confident that I was securing an entire 1 Tb node (the biggest we have here) with my scheduler options, and suggested giving a bit of overhead in the MAX_MEM_GB option, so I submitted the assembly again with MAX_MEM_GB=960. Discovar De Novo ran and checked the available memory and could only access 950 Gb, reduced my 960 figure to 950, then hit the same problem. I re-submitted it with MAX_MEM_GB=900 (and no MEMORY_CHECK option because I was worried about the assembler deciding to increase this figure if it could see more available - maybe this was a mistake) and got the same error. All of these attempts were using the full library and as Discovar De Novo's documentation says that it's designed for ~ 60x I subsampled my reads with seqtk to 50% of their original depth (using the same seed to keep read pairs together) and submitted that as an assembly. Same error.
**A plea for help from someone inexperienced:**
-----------------------------------------------
Am I doing something stupid? Do I need to interleave my fastq files or provide any extra options? If anyone has any help, advice, or words of support I would be extremely grateful - this is my first big project for my PhD and I want to tear into it - I'm utterly stuck though. If nobody can help specifically with Discovar De Novo, is there another assembler which anyone can suggest which would be suitable for assembling a single Illumina library? | For anyone looking at this post, who has the same problem: I have it fixed now.
I was pointed at a forked version of Discovar, developed by a group at the Earlham Institute in Norwich. They wanted to use Discovar De Novo to assemble a wheat genome and it just crashed, so they got into it and fiddled about and seem to have fixed this problem - I was a bit dubious at first due to the fact that I'm working with a small-ish haploid genome, whereas they wanted it to work with a big hexaploid genome, but fixing memory issues was the first thing they've done and I've had no problems using it.
https://github.com/bioinfologics/w2rap-contigger
http://bioinfologics.github.io/the-w2rap-contigger/
https://pdfs.semanticscholar.org/1d1e/3b1d6014dfbb4beb86c576cd85b5f7275150.pdf | biostars | {"uid": 267292, "view_count": 2449, "vote_count": 1} |
I have a bed file called `my.bed` with CHROM, START, and END Position. Can someone please explain me how I can use bcftools or command to extract the regions from `myvcf.vcf` file? | my preferred option when dealing with vcf files is [bcftools][1], which requires vcf indexing:
tabix -p vcf my.vcf
bcftools view -R my.bed my.vcf.gz
another perfectly valid alternative, as Devon has just pointed out, would be [bedtools intersect][2]:
bedtools intersect -a my.vcf -b my.bed
[1]: http://samtools.github.io/bcftools/bcftools.html#view
[2]: http://bedtools.readthedocs.org/en/latest/content/tools/intersect.html | biostars | {"uid": 188137, "view_count": 14552, "vote_count": 1} |
Hello,
I am trying to get haplotypes for one locus on chromosome19 and for that, I need phased data. However, prior to the phasing, I am supposed to split my PLINK files into separated files by chromosome. I found this script online but it doesn't work (see below). Can you please give me advice on how to do this? I am rather a newbie in the data processing so I am sorry if this is a too basic question.
Thank you!
#!/usr/bin/perl
# This script takes as input the base filename of binary pedfiles (*.bed,
# *.bim, *.fam) and a base output filename and splits up a dataset by
# chromosome. Useful for imputing to 1000 genomes.
chomp(my $pwd = `pwd`); my $help = "\nUsage: $0 <BEDfile base> <output base>\n\n"; die $help if @ARGV!=2;
$infile_base=$ARGV[0]; #base filename of inputs $outfile_base=$ARGV[1]; #base filename of outputs $plink_exec="plink
--nonfounders --allow-no-sex --noweb"; $chr=22; #last chromosome to write out
for (1..$chr) { print "Processing chromosome $_\n"; `$plink_exec
--bfile $infile_base --chr $_ --make-bed --out ${outfile_base}$_;` }
| I think the correct format is (in case some codes are commented out):
```
#!/usr/bin/perl
# This script takes as input the base filename of binary pedfiles (*.bed,
# *.bim, *.fam) and a base output filename and splits up a dataset by
# chromosome. Useful for imputing to 1000 genomes.
chomp( my $pwd = `pwd` );
my $help = "\nUsage: $0 <BEDfile base> <output base>\n\n";
die $help if @ARGV != 2;
$infile_base = $ARGV[0]; #base filename of inputs
$outfile_base = $ARGV[1]; #base filename of outputs
$plink_exec = "plink --nonfounders --allow-no-sex --noweb";
$chr = 22; #last chromosome to write out
for ( 1 .. $chr ) {
print "Processing chromosome $_\n";
`$plink_exec --bfile $infile_base --chr $_ --make-bed --out ${outfile_base}$_;`;
}
``` | biostars | {"uid": 387132, "view_count": 7421, "vote_count": 2} |
Hi everyone,
My lab is using gene expression data generated by Illumina Human HT-12 v3 Expression Beadchips. As advertised by the company, this products has 48000+ probes for 25000 genes. I have never used expression data before and would like to cluster genes based on their expression. The data has already been normalized and corrected for batch effects.
The current file format is:
ProbeID Sample1 Sample2
I would like to get the following format:
GeneID Sample1 Sample2
It seems that some genes have more probes than others. Moreover, there can be multiple transcripts for a given gene. I was wondering if someone could please give me a general idea about getting the desired format.
Thank you for your time. | Hi,
Its easier to do this in R. All you need is to convert ProbeID into the Gene name to which it is mapped.
```
> probeID=c("ILMN_1690170", "ILMN_2410826", "ILMN_1675640", "ILMN_1801246",
"ILMN_1658247", "ILMN_1740938", "ILMN_1657871", "ILMN_1769520",
"ILMN_1778401")
> library("illuminaHumanv4.db") #Get this library if you don't have
> data.frame(Gene=unlist(mget(x = probeID,envir = illuminaHumanv4SYMBOL)))
Gene
ILMN_1690170 CRABP2
ILMN_2410826 OAS1
ILMN_1675640 OAS1
ILMN_1801246 IFITM1
ILMN_1658247 OAS1
ILMN_1740938 APOE
ILMN_1657871 RSAD2
ILMN_1769520 UBE2L6
ILMN_1778401 HLA-B
``` | biostars | {"uid": 109248, "view_count": 17710, "vote_count": 2} |
Dear All,
I have a big embl file that has over 1000 IDs, and I want to split it based on IDs. Each ID should be a separate file containing all information of related ID, and should have the ID name.
How to do that?
| @OP: please post first embl record for better parsing.
Note: Before proceeding, create a test directory, copy the original embl (test.embl below) and run the script and check the output for random files.
OP embl ID line:
> ID scaffold00001; SV 1; linear; unassigned DNA; STD; UNC; 6279 BP.
based on OP ID line,
$ grep -i "ID" test.embl | awk '{gsub(";",""); print $2}' | parallel "awk '/{}/,/\/\//' test.embl > {}.embl"
Above script should create multiple embl files with ID as file name.
Example embl file from post https://www.biostars.org/p/147163/ (close to OP embl IDs):
$ cat test.embl
ID comp0_c0_seq1; SV 1; linear; unassigned DNA; STD; UNC; 205 BP.
XX
DE len=205 path=[1:0-135 1445:136-204]
XX
SQ Sequence 205 BP; 64 A; 54 C; 31 G; 56 T; 0 other;
GTATTGAACT GCAGAGCATT AAATGCTGCA ACTCAGTGCT TAGAATTCAT TAGATTCAGA 60
GCAACGAACC CTAAATACTG AGCTGTCCCA TTAAATACTC TGCAGTTCAA TACTTAGCAT 120
TCACCATTAA ACATAACACT TCCCGAGTTT CCACCATCCA TAAACAGCAG GCATTGTAAC 180
CTGTAGGCTC TCTCCACGGT TACCT 205
//
ID comp0_c0_seq2; SV 1; linear; unassigned DNA; STD; UNC; 205 BP.
XX
DE len=205 path=[4094:0-135 1445:136-204]
XX
SQ Sequence 205 BP; 59 A; 50 C; 35 G; 61 T; 0 other;
AGAGTATTAA ATGTTGCAGT TCAGTGCTTA AAATTTATTG GATTCAGAGA ATCTTCAAAT 60
TCAACGGACC CTAAACACTG AGCTGTCGCA TTAAATGCTC TGCAGTTCAA TGCTTAGCTT 120
TCACCATTAA GCATAGCACT TCCCGAGTTT CCACCATCCA TAAACAGCAG GCATTGTAAC 180
CTGTAGGCTC TCTCCACGGT TACCT 205
//
ID comp1_c0_seq1; SV 1; linear; unassigned DNA; STD; UNC; 244 BP.
XX
DE len=244 path=[3:0-88 875:89-243]
XX
SQ Sequence 244 BP; 71 A; 51 C; 63 G; 59 T; 0 other;
GCAGAATTTA AGGCTATGAA TCAGGAGGTT CATAATTCCT TAAGGAGGGG AGTATGATGC 60
GGAGCATCCA CGCTCACCTC CACTCCACCG CATTGTCTTC GAGCTGTGAC AGCCAGCGCA 120
TAATATTCAA GAGCTATTGA CAGGTGTTGA AACGCGGCAG CCTTGCATAC TATTGAAGGA 180
CCACGTTTCA TTATTGTGAT CTATAAGAAG ACAGCTGATG CGATCATGAG GAAGGAAGAA 240
GGCT 244
//
output:
$ ls *.embl
comp0_c0_seq1.embl comp0_c0_seq2.embl comp1_c0_seq1.embl test.embl
.
$ more comp0_c0_seq1.embl
ID comp0_c0_seq1; SV 1; linear; unassigned DNA; STD; UNC; 205 BP.
XX
DE len=205 path=[1:0-135 1445:136-204]
XX
SQ Sequence 205 BP; 64 A; 54 C; 31 G; 56 T; 0 other;
GTATTGAACT GCAGAGCATT AAATGCTGCA ACTCAGTGCT TAGAATTCAT TAGATTCAGA 60
GCAACGAACC CTAAATACTG AGCTGTCCCA TTAAATACTC TGCAGTTCAA TACTTAGCAT 120
TCACCATTAA ACATAACACT TCCCGAGTTT CCACCATCCA TAAACAGCAG GCATTGTAAC 180
CTGTAGGCTC TCTCCACGGT TACCT 205
//
$ more comp1_c0_seq1.embl
ID comp1_c0_seq1; SV 1; linear; unassigned DNA; STD; UNC; 244 BP.
XX
DE len=244 path=[3:0-88 875:89-243]
XX
SQ Sequence 244 BP; 71 A; 51 C; 63 G; 59 T; 0 other;
GCAGAATTTA AGGCTATGAA TCAGGAGGTT CATAATTCCT TAAGGAGGGG AGTATGATGC 60
GGAGCATCCA CGCTCACCTC CACTCCACCG CATTGTCTTC GAGCTGTGAC AGCCAGCGCA 120
TAATATTCAA GAGCTATTGA CAGGTGTTGA AACGCGGCAG CCTTGCATAC TATTGAAGGA 180
CCACGTTTCA TTATTGTGAT CTATAAGAAG ACAGCTGATG CGATCATGAG GAAGGAAGAA 240
GGCT 244
//
| biostars | {"uid": 280114, "view_count": 1782, "vote_count": 1} |
Hello, I am wondering, what tool/R-package can be used to draw this type of tree (preferably, programmatically)?
![enter image description here][1]
The image is taken from: https://www.ncbi.nlm.nih.gov/pubmed/26773003
[1]: http://ivanya.com/tmp/2016/1024.tree.png | Now it is supported by [ggtree](https://guangchuangyu.github.io/2016/11/align-genomic-features-with-phylogenetic-tree/).
| biostars | {"uid": 218591, "view_count": 3073, "vote_count": 2} |
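For a minimal example of what this looks like in code — a sketch only, with a made-up Newick file name and trait matrix; the linked post uses `facet_plot()` to align arbitrary genomic features, while `gheatmap()` is enough for a simple per-tip matrix:

```
library(ggtree)
library(ape)

tr <- read.tree("my_tree.nwk")                          # any Newick tree
p  <- ggtree(tr) + geom_tiplab(align = TRUE)            # tree with aligned tip labels

# a per-tip matrix of values; row names must match the tip labels
mat <- matrix(rnorm(length(tr$tip.label) * 3), ncol = 3,
              dimnames = list(tr$tip.label, paste0("trait", 1:3)))

gheatmap(p, mat, offset = 0.1, width = 0.3, colnames_angle = 45)
```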
In Biostars forum for differential gene expression analysis by LIMMA I found that most people suggest to set a threshold value of Log2FC > or = 2 to filter DEGs. When I analyzed the GEO dataset "GSE90594" - Agilent-039494 SurePrint G3 Human GE v2 8x60K Microarray 039381 platform for DEGs by LIMMA package, I found that very few genes only have a log 2FC value above 1.
But in the original paper "Study of gene expression alteration in male androgenetic alopecia: evidence of predominant molecular signalling pathways" (http://doi.wiley.com/10.1111/bjd.15577), the authors reported that they obtained 325 UP and 390 DOWN regulated DEGs. They have followed a different approach for obtaining DEGs and reported the DEGs with Fold change (FC) values. Most of the UP regulated genes are reported with the FC value 1.5 to 2 range (log2FC value 0.58 to 1.3 approx). Similarly for down regulated genes FC values of 0.5 to 0.8 (log2Fc value -1 to -0.3) are considered.
Is this right? Can we consider log2FC values above 0.6 for UP and below -0.6 for DOWN DEGs? Could someone please clarify this, irrespective of the p and FDR values? I consider a q-value of 0.05 for DEG selection.
| The fold change will tell you something of the size of the effect of differential expression, so it's more about biology than about statistics. If you are looking for genes with big changes, you'll pick a higher cut-off. But if a gene is significantly differentially expressed then it's already worth looking into it: subtle differences in gene expression can already have a substantial impact. It probably makes a big difference in which gene is affected, too. Some genes are more dosage-sensitive than others. | biostars | {"uid": 373655, "view_count": 17375, "vote_count": 3} |
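If you do decide on a fold-change cut-off in limma, it can either be applied after the fact or, more rigorously, built into the test itself with `treat()`. A sketch, assuming `fit` is your `lmFit()`/`contrasts.fit()` object (the 0.6 threshold is just the example value from the question):

```
library(limma)

# post-hoc filtering on adjusted p-value and log2 fold change
tt   <- topTable(eBayes(fit), coef = 1, number = Inf)
degs <- subset(tt, adj.P.Val < 0.05 & abs(logFC) > 0.6)

# cleaner alternative: formally test against the fold-change threshold
fit_treat <- treat(fit, lfc = 0.6)
topTreat(fit_treat, coef = 1, number = Inf)
```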
I have RNA-seq data with 2 conditions and 3 replicates per conditions.
I ran the [New Tuxedo pipeline][1] and also created some [read count tables][2] with prepDE.
I analysed differentially expressed genes with `Ballgown` and `DESeq2`.
With a treshold of 1 log2FoldChange and 0.01 padj in `DESeq2`: 14400 /32000 (45%) of DE genes
With a treshold of 0.01 pval in `Ballgown`: 3678/32000 (of DE genes), even with no fold change treshold, the number of DE genes is (very) lower. In ballgown, what is the difference between qval and pval ? Which one corresponds to padj in DESeq2 ?
I expect many DE genes as the conditions are very different biologically (testis vs ovary, same species).
Why do I have a so large difference between softwares?
[1]: https://www.nature.com/articles/nprot.2016.095
[2]: http://ccb.jhu.edu/software/stringtie/index.shtml?t=manual | *pval* is the nominal p-value. *qval* is the adjusted p-value, which are also known as q-values (not many people know this).
Ballgown may be using FPKM data when conducting the differential expression analysis. FPKM is not suitable for this purpose. Please confirm the type of normalisation that you used in Ballgown.
When you ran DESeq2, did you use the `lfcShrink()` function? - see the piece of code that I posted here: https://www.biostars.org/p/324128/#324305 | biostars | {"uid": 324916, "view_count": 3551, "vote_count": 1} |
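For reference, a minimal sketch of that DESeq2 side — assuming `dds` was built from the prepDE count matrix and that the condition contrast is the second coefficient (check `resultsNames(dds)`):

```
library(DESeq2)

dds <- DESeq(dds)
res <- results(dds, alpha = 0.01)                          # target padj of 0.01
res_shrunk <- lfcShrink(dds, coef = 2, type = "apeglm")    # shrunken log2 fold changes

summary(res)
sum(res$padj < 0.01 & abs(res_shrunk$log2FoldChange) > 1, na.rm = TRUE)
```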
Hi all, I am new to the BioInformatics, and quite a beginner in programming languages. Can anyone suggest me some sources where I can at least learn 50% of the scRNA seq data analysis? I am familiar with C language, and I know a little bit of molecular biology too. | Single-cell data are rather unpleasant as a beginner's topic due to the noisy and sparse nature of these data. Maybe better first analyze some bulk RNA-seq data to get familiar with R (see [here][1]), and then dive into the documentation of `Seurat` which is the jack-of-all-trades in terms of scRNA-seq analysis. For lowlevel processing `alevin` is a good choice.
[1]: https://www.bioconductor.org/packages/devel/workflows/vignettes/rnaseqGene/inst/doc/rnaseqGene.html | biostars | {"uid": 373386, "view_count": 4322, "vote_count": 7} |
Hello,
I don't know why I am getting some errors during my analysis
I uploaded an example of my data in
https://gist.github.com/anonymous/2c69ab500bfa94d0268a
I use the following command in R to load my data
data <- read.delim("path to your file /example.txt", header=FALSE)
However, when I look at the data with summary, head or other commands, it seems alright, but I cannot analyse it, since it gives errors like `all numeric variables`. For example, if you try to get the range of the example data, you will get such an error.
How do you normally import/load microarray data (in **txt format**), where each row represents a probe and each column a sample?
Thanks | Works perfectly for me:
```
> dat <- read.delim(file="Downloads/gist2c69ab500bfa94d0268a-ac4cd3d5b0d0764c2faae0e3fb0db8a39d75bb22/example.txt", row.names=1)
# your mistake was to set header=FALSE, and to omit
# row.names=1
> head(dat)
M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12
200645_at 0.0446 0.0744 -0.0340 0.0173 0.2280 0.0070 -0.0250 0.0644 -0.0253 -0.1230 -0.6251 0.0210
200690_at -0.0165 0.1121 -0.0959 0.0000 -0.4595 -0.0282 -0.1617 -0.0482 -0.2611 0.0223 -0.6129 0.1961
200691_s_at 0.0554 -0.0689 -0.0852 0.0702 0.0823 0.0361 -0.0306 -0.0076 -0.0340 -0.0198 -0.1823 -0.0681
200692_s_at 0.0000 -0.0505 -0.0508 -0.0159 -0.3041 -0.0684 -0.0644 -0.0175 0.0503 0.0546 -0.2141 -0.0216
200693_at 0.0608 0.0601 0.0115 0.0744 -0.0232 -0.1095 -0.0416 -0.0499 -0.0515 0.0303 -0.1153 0.0824
200694_s_at 0.0424 0.0957 0.0758 -0.0387 -0.0517 -0.0207 0.0328 -0.1392 0.0140 -0.1476 0.1382 0.0113
M13 M14 M15 M16 M17 M18
200645_at 0.1095 0.1527 0.0261 -0.2107 -0.0196 -0.2316
200690_at 0.2119 0.0122 -0.5495 0.1518 -0.2409 0.1610
200691_s_at 0.1219 -0.1615 -0.0729 -0.0696 0.0042 0.1239
200692_s_at 0.0440 -0.0811 0.0964 0.0211 -0.0325 0.1810
200693_at -0.0036 0.0575 0.0427 0.1104 -0.0216 0.0278
200694_s_at 0.2247 0.1489 0.0196 0.0883 -0.1848 0.1989
> range(dat)
[1] -20.091 25.652
``` | biostars | {"uid": 130044, "view_count": 10954, "vote_count": 1} |
What exactly are RMA-units I get after normalizing affymetrix data?
These are relative expression log2 units. But relative to what? What is RMA = 0? | See this link to a presentation:
http://www.ub.edu/stat/docencia/bioinformatica/microarrays/ADM/slides/2_PreprocessingMicroarrayData-2-Preprocessing%20and%20Normalization.pdf
And also this link to some discussion on the topic:
https://www.researchgate.net/post/What_exactly_is_the_unit_of_measure_in_microarray_experiments
" Michael B Black · ScitoVation
The raw data is effectively unitless as it is simply relative signal intensity. Flourescence is measured by a photo multiplier tube or charge-coupled device and signal scaled across the range of detection for the platform. The starting data is thus simple relative signal values without units, based effectively on simple counted photons by the PMT or CCD.
"
" Prafull Kumar Singh · University Medical Center Freiburg
Micro array chip contain multiple probes for each gene. After completion of the experiment a .dat is generated which is a image showing the intensity of each probe in a digital format. Now it is converted into .cell file which contains the real probe intensity numerical values.The higher the probe intensity higher is the expression of the gene. But its meaningless until you compare it with the intensity of same probe in control sample. These probe intensity values are preprocessed (Background correction, normalization, Perfect match correction and Summarization) using diff algorithms (RMA, Mas5). The normalized probe intensity values are further used for calculation of relative gene expression i.e treatment vs control to get the fold change. The positive and negative values indicates up-regulated and down-regulated genes in treatment group when compared to control."
etc...
An initial experiment reference:
http://biostatistics.oxfordjournals.org/content/4/2/249.full.pdf | biostars | {"uid": 214013, "view_count": 8181, "vote_count": 1} |
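In practical terms: RMA values are absolute log2-scale intensities after background correction, quantile normalisation and median-polish summarisation — they are not ratios, so RMA = 0 simply corresponds to a summarised intensity of 1 (2^0), and differences between samples are log2 fold changes. A small sketch with the affy package, assuming your CEL files are in the working directory:

```
library(affy)

raw  <- ReadAffy()       # reads the *.CEL files in the working directory
eset <- rma(raw)         # background correction + quantile normalisation + summarisation
m    <- exprs(eset)      # log2-scale expression values ("RMA units")

m[1, 1]                  # log2 intensity of probeset 1 in sample 1; 0 would mean intensity 1
2^(m[1, 1] - m[1, 2])    # fold change of probeset 1 between samples 1 and 2
```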
Hi,
This appears to be a simple problem that I am unable to solve. I have some data that looks like this:
CHROM POS REF ALT TYPE AF
chr1 1 A T MISSENSE 0.23
chr2 1 A T,G MISSENSE 0.17, 0.09
The above is dummy meaningless data, but it is representative of the problem at hand.
I'd like to `separate_rows` such that the `ALT` and `AF` are separated in a couples manner. Running `separate_rows` on the 2 columns would give me 4 rows, not 2. I'd like my output to be:
CHROM POS REF ALT TYPE AF
chr1 1 A T MISSENSE 0.23
chr2 1 A T MISSENSE 0.17
chr2 1 A G MISSENSE 0.09
Is there any way I can conserve this combination while separating the values out? I am really far out from the VCF to go back and split multi-allelics.
| You could use a Python script to do this easily:
#!/usr/bin/env python
import sys
headers = None
idx = 0
for line in sys.stdin:
elems = line.rstrip().split('\t')
if idx == 0:
headers = elems
sys.stdout.write(line)
else:
items = {x:y for x,y in zip(headers, elems)}
            alleles = [a.strip() for a in items['ALT'].split(',')]  # strip any space after the comma
            afs = [a.strip() for a in items['AF'].split(',')]
for ai in range(len(alleles)):
items['ALT'] = alleles[ai]
items['AF'] = afs[ai]
sys.stdout.write('{}\n'.format('\t'.join([items[x] for x in headers])))
idx += 1
For example:
$ ./split.py < variants.txt
CHROM POS REF ALT TYPE AF
chr1 1 A T MISSENSE 0.23
chr2 1 A T MISSENSE 0.17
chr2 1 A G MISSENSE 0.09
Write it out to a file and bring that back into R:
$ ./split.py < variants.txt > variants.split.txt | biostars | {"uid": 483199, "view_count": 770, "vote_count": 1} |
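If you would rather stay inside the tidyverse, I believe a single `separate_rows()` call that names both columns expands them in parallel (and errors if ALT and AF have different numbers of elements), which keeps the pairing you want — a sketch assuming your data frame is called `df`:

```
library(tidyr)

# ALT and AF are split in lock-step, so the pairs stay together;
# convert = TRUE turns AF back into a numeric column
separate_rows(df, ALT, AF, sep = ",\\s*", convert = TRUE)
```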
Hello everyone,
I am really new in this forum, so I hope you can help me in resolving my problem.
I am currently working on a project where I have to annotate some sequences. These sequences are the result of RNA-seq, and I have them in BAM format. Do you know some easy bioinformatics tools that can help me annotate that kind of sequence in order to get GFF files? So the input should be BAM files and the output GFF.
Just for information, i need a tool that can be executed in linux, and that can be looped for so many sequences for different species.
Thank you for your help | OK, so you want to annotate (predict genes) on a genomic sequence making use of RNAseq info.
The question is easily asked but I'm afraid you might be underestimating how a complex matter this can be. But is absolute a good and correct approach. Problem is that this often requires quite a bit of work and knowledge. There are (or you could) run most tools as a black box but then your results will be of lesser quality.
You might consider tools as: Maker, Braker, Augustus, ... (most of these use RNAseq as an evidence information but also require other inputs ). There will be others that might work with RNAseq only but none come to mind right now.
| biostars | {"uid": 371871, "view_count": 1840, "vote_count": 1} |
Hi,
I have been trying several commands I found online to split the columns into different files, but couldn't get it to work.
I have a 10-column Excel file (the first 2 columns are parameters, the remaining 8 columns are the data). I would like to split it into 8 separate files, each containing the first 2 parameter columns plus 1 data column.
Does anyone have an idea how to get around this?
Thank you. | Here is the code in R: Make sure that all for the files (excel files with .xls or .xlsx) are in the same format as in link furnished above (2nd column is mz, 5 is RT and 11th column onward samples). Code would create one excel file per sample and name of the excel file would be sample_"sample name".xls. It will have three columns: mz, RT and sample name.
library(readxl)
library(WriteXLS)
df=data.frame(read_xls("example.xls"))
for (i in 11:ncol(df)){
temp_df=df[,c(2,5,i)]
WriteXLS(temp_df, ExcelFileName = paste0("sample_",names(df)[i],".xls"))
}
if you have multiple xls files in the folder (and only xls files of interest):
setwd("~/Desktop/test") # change this to directory of interest
fn=list.files(pattern = "\\.xls") # lists files with .xls extension
library(readxl)
library(WriteXLS)
for (i in fn){
df=data.frame(read_xls(i))
for (j in 11:ncol(df)){
temp_df=df[,c(2,5,j)]
WriteXLS(temp_df, ExcelFileName = paste0(sub('\\.xls$', '', i) ,"_","sample_",names(df)[j],".xls"))
}
}
output files will have "xlsfilename_samplename.xls" | biostars | {"uid": 326422, "view_count": 1495, "vote_count": 1} |
Hello, I am looking at this heatmap and I do not understand why some of the tumours are grouped with the controls, seems as if the heatmap is 'moved to the left':
Top of the heatmap:
![Top of the heatmap][1]
(it is a long heatmap so i upload just a section)
Bottom of the heatmap:
![Bottom of the heatmap][2]
I tried to check on the phenotype of the samples but I found no correlations of the phenotype with this grouping.
There seem to be two tumour subgroups: one that is the biggest, and the small group of tumors that seem to be grouped with the controls. I did a t-test on the mean beta values for all the cpgs between these two groups, and it turns out there is significance among the means of these two tumour groups.
I am afraid when we publish a heatmap similar to this one, we will have trouble explaining this phenomenon. Any ideas or any opinions on this? Thank you!
[1]: https://i.imgur.com/v3oAPra.png
[2]: https://i.imgur.com/NgTGrB3.png | There can be many reasons, some not really relating to bioinformatics:
1. these tumours genuinely exhibit the 'normal-like' methylation
profile over these probes
2. these tumours have normal cell contamination
Some informatics reasons:
- your coding is incorrect and you have incorrectly assigned a normal
sample as a tumour
- you have scaled your data incorrectly
- you should consider the distance and linkage metric that you're using
By the way, for methylation, you could probably also perform the Wilcoxon Signed Rank test on the matched T-N pairs. If just a regular t-test, I would at least use the Mann Whitney (non-parametric) test.
You must also have an extra checkpoint: you should obtain the difference in mean β value between tumour and normal, i.e.:
difference in mean = mean β (tumour) - mean β (normal)
Then, use that as an extra cut-off in addition to the p-value.
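A small sketch of that extra checkpoint, assuming a β-value matrix `beta` (CpGs in rows, samples in columns) and two vectors of matched column names, `tumour_ids` and `normal_ids`, in the same pair order — the 0.2 cut-off on the difference in means is just an example:

```
delta_beta <- rowMeans(beta[, tumour_ids]) - rowMeans(beta[, normal_ids])

# paired Wilcoxon signed-rank test per CpG
pvals <- apply(beta, 1, function(x)
  wilcox.test(x[tumour_ids], x[normal_ids], paired = TRUE)$p.value)
padj <- p.adjust(pvals, method = "BH")

hits <- which(abs(delta_beta) > 0.2 & padj < 0.05)   # effect size + significance
```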
Kevin | biostars | {"uid": 358231, "view_count": 1195, "vote_count": 2} |
How can we get total number of SNPs in plink association output file as "qassoc" and "adjusted file" which has less than 0.05
P values? | From the plink manuals the pvalue column for [qassoc file][1] is the 9th. So we can use `awk` to keep the 1st row header, then filter on 9th column value, see example:
awk '(NR==1) || ($9 < 0.05) ' myfile.qassoc > myfile_subset.qassoc
To get the number of rows:
awk '$9 < 0.05' myfile.qassoc | wc -l
[1]: https://www.cog-genomics.org/plink2/formats#qassoc | biostars | {"uid": 331359, "view_count": 1726, "vote_count": 1} |
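The same count is also easy to get from R, if that is more convenient — this assumes the standard .qassoc header, where the p-value column is named P:

    qa <- read.table("myfile.qassoc", header = TRUE)
    sum(qa$P < 0.05, na.rm = TRUE)     # number of SNPs with P < 0.05
    subset(qa, P < 0.05)               # the SNPs themselves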
I am trying to convert a bigwig file to a wig file to run through sitepro. I tried using the ucsc tool bigWigToWig and I get a file that looks like this:
```
#bedGraph section chr1:0-3079820
chr1 0 3001030 0
chr1 3001030 3001060 2.41
chr1 3001060 3001230 4.83
chr1 3001230 3001270 2.41
chr1 3001270 3002330 0
chr1 3002330 3002380 4.83
chr1 3002380 3002540 9.65
chr1 3002540 3002590 4.83
chr1 3002590 3003210 0
```
However, sitepro is choking on this wig file. I normally use wig files generated by MACS, which have a much different formatting:
```
track type=wiggle_0 name="macs_output_treat_all" description="Extended tag pileup from MACS version 1.4.2 20120305 for every 10 bp"
variableStep chrom=chr1 span=10
3001071 1
3001081 2
3001091 2
3001101 2
3001111 2
3001121 2
3001131 2
3001141 2
```
Why is the ucsc bigWigToWig tool giving me a bedgraph file rather than a wig file? | <p>This guy experienced the same problem, and has an answer (and a script to go from bedGraph to wig) on his site:</p>
<p><a href="http://sebastienvigneau.wordpress.com/2014/01/10/bigwig-to-bedgraph-to-wig/">http://sebastienvigneau.wordpress.com/2014/01/10/bigwig-to-bedgraph-to-wig/</a></p>
| biostars | {"uid": 113824, "view_count": 4798, "vote_count": 2} |
Hey guys:
I am doing **RNA-seq** analysis and it seems that the quality of my reads is not desirable.
Below is **a typical fastqc report** for my data.
I have read many tutorial about fastqc, from my understanding, **it seems that the 1-10 bp are adaptor sequences**. But in the adaptor content section, there is no waining.
----------
- I am wandering if my understanding is right?
- Should I use **trimmomatic** to cut adaptor sequences?
![Failed per base sequence content][1]
[1]: https://lh3.googleusercontent.com/Xp-_0jmKsw8p2MVrOx_X-hJbHIiKBfUFZ700lNT5nv0MKlma6DSgYonxLAbkHzkYNEaWHszNcvI6RjCPhk6YghTHfI7dT1_2GUyzPr4I5MdEnee1_qTSB46nbvwTJFh_TEASRgS1MEGIawbdc7zZpnWWd_bFL_CeNB0-MBk6PePyflSrMjeLZFlFu-1onTqcypuSMKeYw5Wty0c-pgPSCHWIHkTxrg3d0Y-XUhztxFRETL2sFQQXjtSGmZ5PhguduitzrZcnOijNlVK5KK_0hZ5Aqd-3pf7OHjJbRXfP02dxda5HQOrALIo-_DYoHszlbMDSFeLwSQerz80bPd2CNEsLmE79KsYBSqjOIeg_aH8npy70IlZc2pT34_pZul8pZapn_Yb_QN2tiazcaPPeYJvC-uVttpkoepsoaXT7Oz5lbB1OBkCrx1Ye3dyzpAx_bGhvdnnIXhOFYKbI44Nr1mh1ns7up0uj6uukZDqFOvRAkEFRNeNPHuqZlxGFfnCY1qAuM3Kqy3c0UjhVdDmFlTNVnR6wICnRR8v4EL7HxCbdPAnZ7JlHHzpaPHtKTxlMG80d51jSwWVLPa1GnaveRBsjfp3Ek5NVZ8bhv5UxhQXhPKRi8L3XBZ6-bg0t3MGomxWRSaE54EN6HD2GV1NKXE8muzTGcw=w1407-h809-no | The adapter starts at the 3' end of your reads, not the 5' (unless its an adapter dimer - i.e. no insert).
This is the result of random priming in RNA-Seq. I think [Biases in Illumina transcriptome sequencing caused by random hexamer priming][1] is the first paper on this. This represents real sequences. After alignment you can check the error rate in the reads, its only marginally higher 5' than in the rest of the read. And the fastqc report of the aligned sequences should show the same pattern.
[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2896536/ | biostars | {"uid": 371979, "view_count": 6650, "vote_count": 1} |
Hi All,
I have a question regarding trimming of adapters in NGS data. I have previously analyzed RNAseq data using brtim (https://www.sciencedirect.com/science/article/pii/S0888754311001339) without using adapter sequence. I am now analyzing smallRNAseq data, and I am using adapter sequence as adapter.fa using BBduk.sh from BBmap tools. Could you please clarify under what circumstances one would need to provide/know adapter sequence and when it is not necessary? | One case is where there is a kit/prep specific adapter that is being used. This may require special handling of the downstream data based on the instructions included in the kit.
If you had paired end reads with enough pairs having short inserts then you can detect them by doing this: `bbmerge.sh in1=r1.fq in2=r2.fq outa=adapters.fa` | biostars | {"uid": 329749, "view_count": 3142, "vote_count": 1} |
How to embed known cell information into Seurat object? Using AddMetaData?
Is this possible?
I read this documentation: https://www.rdocumentation.org/packages/Seurat/versions/3.1.4/topics/AddMetaData
However I need some clarification on how to go about doing this, please?
I have cell labels that are all aligned with the expression matrix such as this:
"12wks_fetal pancreas cell_acinar" "19wks_fetal pancreas cell_beta" "12wks_fetal pancreas cell_ductal" "12wks_fetal pancreas cell_beta" "12wks_fetal pancreas cell_acinar" "22wks_fetal pancreas cell_acinar" "12wks_fetal pancreas cell_acinar" "12wks_fetal pancreas cell_alpha" "12wks_fetal pancreas cell_ductal" "12wks_fetal pancreas cell_acinar" "14wks_fetal pancreas cell_endocrine.progenitor..."
I used these cell labels as cell names (column names) for the expression matrix. However that's not useful for seeing which cell is from what time point 12wks, 19wks, 14 wks, etc. on the UMAP for example.
Would there be a way to label the cells using the AddMetaData function so that when I visualize the data using UMAP, they are color coded based on the time of collection (ie. 12wks, 19wks, 22wks, etc.) or would there be a way I could also filter by both time of collection and/or cell type?
Is this possible in Seurat? How would I approach this?
I would really appreciate anyone's help.
Very Respectfully,
Pratik
| Hope I got your question correctly, you can do everything to cell information via `Ident` function. Follow the manual to change the `Idents` of your cells. By this, you can replace the `active.ident` with what you want.
| biostars | {"uid": 460935, "view_count": 24192, "vote_count": 1} |
**Hi everyone**
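To make that concrete for your case: you can parse the time point and the cell type out of your cell labels, attach them with `AddMetaData()`, and then colour or subset by either one. A sketch, assuming a Seurat (v3+) object `seu` and a character vector `cell_labels` in the same order as the columns of your expression matrix:

```
library(Seurat)

# "12wks_fetal pancreas cell_acinar" -> timepoint "12wks", celltype "acinar"
timepoint <- sub("_.*", "", cell_labels)
celltype  <- sub(".*_", "", cell_labels)
names(timepoint) <- names(celltype) <- colnames(seu)

seu <- AddMetaData(seu, metadata = timepoint, col.name = "timepoint")
seu <- AddMetaData(seu, metadata = celltype,  col.name = "celltype")

DimPlot(seu, reduction = "umap", group.by = "timepoint")   # colour UMAP by collection time
DimPlot(seu, reduction = "umap", group.by = "celltype")

Idents(seu) <- "celltype"                                  # make cell type the active identity
beta_12wks <- subset(seu, subset = celltype == "beta" & timepoint == "12wks")
```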
I would like to take a second opinion regarding a variant with DP =17. The variant is in a very high GC content area so the low coverage is reasonable.
I do hard filtering because I have few samples for VQSR, I thinks it is okay according to GATK hard filtering, the values are as follows:
AC=1;AF=0.500;AN=2;BaseQRankSum=0.892;ClippingRankSum=0.000;DP=17;ExcessHet=3.0103;FS=10.843;
MLEAC=1;MLEAF=0.500;MQ=60.00;MQRankSum=0.000;QD=2.81;ReadPosRankSum=-1.534;
SOR=2.795 GT:AD:DP:GQ:PL 0/1:14,3:17:76:76,0,506
I am worried because **AD is 14,3** which means only 3 reads only support the ALT? is it safe to accept it as a het variant. The variant is very interesting based on our data, so I don't wanna discard without a strong reason.
**Any comments, how do you think? with many many thanks**
| I cannot provide a detailed answer given the information but a few comments.
There are 2 most likely models:
1) The site is homozygous reference
Therefore, the 3 reads are potentially a) mismapped, b) weird duplicates or c) sequencing errors.
a) Did you use a mappability filter? i recommend Heng Li's method lh3lh3.users.sourceforge.net/snpable.shtml
b) Did you use rmdup?
c) if both a) and b) are satisfied, then they are sequencing errors, assuming a base quality of 38, the probability of 3 seq. errors occurring independently is:
(10^((-1*38)/10))^3 = 3.981072e-12
so one chance in 250 billion.
2) The site is heterozygous reference
It is possible that this is a genuine het site and you haven't sampled the other base, this can happen with probability:
dbinom(3,17,prob=0.5) = 0.005187988
so 1 chance in ~200 (assuming no mismappings or errors).
This is reflected in your PL field, where the homo ref model has a score of 76, the het model a score of 0 (most likely) and the homo alt a score of 506. While the het model is more likely than the homo ref, it is not astronomically more likely. However, the homo alt is so unlikely that it can be safely discarded.
| biostars | {"uid": 310806, "view_count": 1269, "vote_count": 2} |
Hi,
I've recently started using Ensembl for help in designing a gene panel for NGS.
Each time I select a gene on Ensembl it comes up with different available transcripts for that gene.
What defines a transcript? Is it literally just a different version of the gene in different individuals (but if so, how come some of the transcripts appear to be so different in terms of bp length and amino acid number)
Any help in explaining would be most gratefully received!
| This is more of a comment, possibly a rant. In my opinion the concept of gene is obsolete and we would be better off if we ditched the concept of "gene" altogether.
Genes made sense when it was thought that there were these discrete units (genes) which produced each a single transcript and a single protein. However, a single gene can produce multiple transcripts and these can be very different one from the other (see for example the table of transcripts of the [Actin gene][1], there are several coding and non coding transcripts). So when we say gene X has a mutation it is not clear what we are referring to. How many transcripts are affected by this mutation? Are they coding or pseudogenes? In my opinion it would be simpler to think in "transcript space" and forget about genes.
Another way if seeing this is to consider that while transcripts exist, genes don't exist. When you do mRNA extraction you isolate transcripts, not genes, you can't isolate genes. You could have a restriction enzyme that cuts left and right of a "gene", but this is also not true. You have an enzyme that cuts at positions A and B, which happen to include a bunch of transcripts. If the definition of that gene (and transcripts) change, the enzyme still cuts there because it doesn't "see" genes, it sees DNA sequence.
I think "genes" hang around because they make some statements simpler ("Mutation at position A hits gene X" as opposed to "Mutation at position A hits an intron of transcripts X, an exon of transcript Y, and a UTR of Z"), but they are at best incomplete statements.
Any thoughts?
[1]: http://www.ensembl.org/Homo_sapiens/Gene/Splice?db=core;g=ENSG00000075624;r=7:5527151-5563784 | biostars | {"uid": 244850, "view_count": 27920, "vote_count": 8} |
Hi All,
I have MiSeq PE250 reads for a viral vector sample. I trimmed the raw reads with Trim Galore (length >= 200 and Q >= 30). I aligned these reads using both `BWA MEM` and `Bowtie2 --local` and found a `AAA -> TTT` variant in both alignment files at different frequencies `9%` and `71%` respectively. I further analysed reads containing variant and found that the reads with variant had base quality call score less than 30 for TTT bases.
To check the effect of trimming on variant call and coverage, I further trimmed the trim galore trimmed reads to `remove all reads which had even a single base call quality less than 30`. I repaired these reads using `bbmap repair.sh` and aligned again using bowtie2. For this alignment the above variant was found at `<10%` variant frequency. This was consistent with variant frequency reported with BWA-MEM. The coverage was also affected with almost `6.7%` of the bases with `coverage less than 10X` for the super trimmed reads vs `0.1%` for trim galore trimmed only reads.
I used freebayes for variant calling with same parameters (Min Coverage 10, Min Alternate Fraction 0.01, Min Alternate Count 4) for both BWA and Bowtie2 aligned reads.
1. Why was there such a large difference for variant frequency for Bowtie2 and BWA MEM?
2. The coverage difference between trim galore trimmed reads and super trimmed reads is vast. Should I still use super trimmed reads (Trim Galore + Further trimming) as it gives correct variant frequency.
The fastqc result for the trim galore trimmed reads and super trimmed reads were as follows:
**Trim Galore Trimmed Reads**
<a href="http://www.freeimagehosting.net/commercial-photography/"><img src="https://i.imgur.com/PPIIqEv.png" alt="Commercial Photography"></a>
**Super Trimmed Reads**
<a href="http://www.freeimagehosting.net/commercial-photography/"><img src="https://i.imgur.com/RxK6AK4.png" alt="Commercial Photography"></a>
The fastqc per base faultily fails for the super trimmed reads indicating a wrong Illumian Phred Score encoding version. Why did this happen?
Thanks! | >The fastqc per base faultily fails for the super trimmed reads indicating a wrong Illumian Phred Score encoding version. Why did this happen?
Because with this line:
```
awk '{unit=unit $0 ORS} NR%4==0{if (/^[?@ABCDEFGHIJK]+$/) printf "%s", unit; unit=""}' trimmed.R1.fq > super_trimmed.R1.fq
```
you eliminated any reads containing any bases with a score below `?`, which leads `FastQC` to the assumption that you're looking at files where the Phred scores have been encoded the Illumina1.3-way
 | biostars | {"uid": 372614, "view_count": 2109, "vote_count": 1} |
I have an ExpressionSet, combArr.eset, that contains Affymetrix Human Genome U133 Plus 2.0 Array gene expression data. I also have a matrix, exprs_combArr.eset, that contains the expression data for combArr.eset. The row names for exprs_combArr.eset are the hgu133plus2 probe IDs, and the column names are the Gene Expression Omnibus sample IDs.
I'd like to change the probe IDs in exprs_combArr.eset to their corresponding gene symbols, but I'm not sure how to do this.
I've retrieved the hgu133plus2SYMBOL object from the hgu122plus2.db package, which contains the mappings between manufacturer identifiers and gene abbreviations, but I'm not sure how to use this to change the probe IDs in exprs_combArr.eset.
I'm new to R and Bioconductor, and programming in general, so any help would be appreciated.
Thank you!
Haha, this came up in a bioinformatics clinic I did on Friday. I've written a rather terse version of how to add gene-level annotations to an Affymetrix ExpressionSet [here][1]. You basically have to find an appropriate annotation package (probably hgu133plus2.db) and add that info to the featureData slot of your ExpressionSet (don't monkey with the probe-ID rownames).
[1]: http://biolearnr.blogspot.co.uk/2017/05/bfx-clinic-getting-up-to-date.html | biostars | {"uid": 254040, "view_count": 10389, "vote_count": 2} |
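The gist of that post, as a sketch — assuming your ExpressionSet is `combArr.eset` with hgu133plus2 probe IDs as its featureNames:

```
library(Biobase)
library(hgu133plus2.db)
library(AnnotationDbi)

fData(combArr.eset)$SYMBOL <- mapIds(hgu133plus2.db,
                                     keys = featureNames(combArr.eset),
                                     column = "SYMBOL",
                                     keytype = "PROBEID",
                                     multiVals = "first")

# a gene-symbol-labelled copy of the expression matrix
# (some symbols will be NA or duplicated, so keep the probe IDs in the eset itself)
mat <- exprs(combArr.eset)
rownames(mat) <- fData(combArr.eset)$SYMBOL
```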
Hello,
I know the following question have been asked many times by now, but the suggested solutions were not helpful.
I have a set of taxIDs and I want to convert them to taxonomies, but I do not want to download any extra databases. Is there any simple way to do this via terminal?
| Another solution using [Entrez Direct][1]
efetch -db taxonomy -id 387462,2594474 -format xml | \
xtract -pattern Taxon -first TaxId -element Taxon -block "*/Taxon" \
-unless Rank -equals "no rank" -tab "," -sep "_" -element Rank,ScientificName
387462 superkingdom_Eukaryota,kingdom_Metazoa,phylum_Mollusca,class_Gastropoda,subclass_Caenogastropoda,order_Littorinimorpha,superfamily_Stromboidea,family_Strombidae,genus_Strombus
2594474 superkingdom_Bacteria,phylum_Firmicutes,class_Clostridia,order_Clostridiales,family_Peptococcaceae,genus_Dehalobacter
[1]: https://www.ncbi.nlm.nih.gov/books/NBK179288/
| biostars | {"uid": 423201, "view_count": 3269, "vote_count": 1} |
Hi,
I encountered a strange issue while reading in a data table from `txt` format. If I read it from `txt` with `read.table` it does not include all rows, but if I convert to `csv` and read it with `read.csv` it's perfect. Does someone know the issue, or is it my code?
[Here][1] is the file.
```r
test <- read.table("./Annotations/all_genes_pombase.txt",
+ header=T,
+ sep="\t",
+ row.names=1,
+ stringsAsFactors = F)
> dim(test)
[1] 4533 7
> str(test)
'data.frame': 4533 obs. of 7 variables:
$ name : chr "SPAC1002.01" "pom34" "gls2" "taf11" ...
$ chromosome : chr "I" "I" "I" "I" ...
$ description : chr "conserved fungal protein " "nucleoporin Pom34 " "glucosidase II alpha subunit Gls2 " "transcription factor TFIID complex subunit Taf11 (predicted) " ...
$ feature_type: chr "protein_coding" "protein_coding" "protein_coding" "protein_coding" ...
$ strand : int 1 1 -1 -1 -1 -1 -1 -1 -1 -1 ...
$ start : int 1798347 1799061 1799915 1803624 1804548 1807270 1807996 1809480 1811408 1813740 ...
$ end : int 1799015 1800053 1803141 1804491 1806797 1807781 1809433 1811361 1813805 1815796 ...
```
[1]: https://drive.google.com/file/d/0Bxl0rZon0OuzR1VzN3cyMlhkUUk/view?usp=sharing | Use the argument `quote = ""` inside read.table.
read.table("your_file", quote="", other.arguments)
**Explanation**:
Your data has a single quote on 59th line (`( pyridoxamine 5'-phosphate oxidase (predicted)`). Then there is another single quote, which complements the single quote on line 59, is on line 137 `(5'-hydroxyl-kinase activity...)`. Everything within quote will be read as a single field of data, and quotes can include the newline character also. That's why you lose the lines in between. **`quote = ""` disables quoting altogether**.
There are other more instances where this 'quoting' happens again. One way to know how many fields read.table sees in every row is by using `count.fields`
num.fields = count.fields("all_genes_pombase.txt", sep="\t")
Now look at the variable `num.fields`, there will be a lot of NAs, the lines which are not read correctly by `read.table`
**The problem doesn't arise with read.csv** because the [quoting defaults][1] are different in read.table and read.csv, due to some reason really unknown to me!
read.table: quote = "\"'"
read.csv: quote = "\""
PS: The best way to avoid the reading file nuisance of *read.table* is to use *fread()* from data.table package. The side benefit is that it's blazing fast for large files and it guesses the field separator automatically. See my earlier post: https://www.biostars.org/p/221009/#221032
[1]: http://stat.ethz.ch/R-manual/R-devel/library/utils/html/read.table.html | biostars | {"uid": 221983, "view_count": 21347, "vote_count": 3} |
I have a genotype vcf file for which I want to do trimming such that I have one variant with MAF > 5% every 100kb.
Does anyone have suggestions to do this smartly using bcftools or vcftools ?
Thanks Kiran | using **VcfFilterJdk** http://lindenb.github.io/jvarkit/VcfFilterJdk.html
private String prevContig=null;
private int prevPos=-1;
private final int distance= 100_000;
public Object apply(final VariantContext variant) {
final double AF = variant.getAttributeAsDoubleList("AF",1.0).stream().mapToDouble(Double::doubleValue).min().orElse(1.0);
if(AF>0.05) return false;
if(!variant.getContig().equals(prevContig) || variant.getStart()> prevPos )
{
prevContig = variant.getContig();
prevPos = variant.getEnd() + distance;
return true;
}
return false;
}
**usage**:
```
$ wget -q -O - "ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/release/20130502/ALL.chr14.phase3_shapeit2_mvncall_integrated_v5a.20130502.genotypes.vcf.gz" | gunzip -c | java -jar dist/vcffilterjdk.jar -f jeter.code --body
(...)
#CHROM POS ID REF ALT QUAL FILTER INFO
14 19000017 rs375700886 C T 100 PASS AA=.|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=8633;EAS_AF=0;EUR_AF=0;NS=2504;SAS_AF=0.001;VT=SNP
14 19100025 rs201348429 G A 100 PASS AA=G|||;AC=5;AF=0.000998403;AFR_AF=0;AMR_AF=0;AN=5008;DP=35531;EAS_AF=0.004;EUR_AF=0;NS=2504;SAS_AF=0.001;VT=SNP
14 19200040 rs543063896 G T 100 PASS AA=g|||;AC=2;AF=0.000399361;AFR_AF=0;AMR_AF=0;AN=5008;DP=18098;EAS_AF=0;EUR_AF=0.001;NS=2504;SAS_AF=0.001;VT=SNP
14 19300074 rs531199478 T A 100 PASS AA=-|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=16207;EAS_AF=0;EUR_AF=0;NS=2504;SAS_AF=0.001;VT=SNP
14 19400095 rs560944058 A G 100 PASS AA=.|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=32464;EAS_AF=0;EUR_AF=0;NS=2504;SAS_AF=0.001;VT=SNP
14 19500295 rs557566396 T A 100 PASS AA=T|||;AC=5;AF=0.000998403;AFR_AF=0;AMR_AF=0.0043;AN=5008;DP=65422;EAS_AF=0;EUR_AF=0.002;NS=2504;SAS_AF=0;VT=SNP
14 19600309 rs549731335 A G 100 PASS AA=A|||;AC=1;AF=0.000199681;AFR_AF=0.0008;AMR_AF=0;AN=5008;DP=38613;EAS_AF=0;EUR_AF=0;NS=2504;SAS_AF=0;VT=SNP
14 19700430 rs549302478 T C 100 PASS AA=T|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=34929;EAS_AF=0;EUR_AF=0.001;NS=2504;SAS_AF=0;VT=SNP
14 19800530 rs572589774 G A 100 PASS AA=.|||;AC=3;AF=0.000599042;AFR_AF=0.0008;AMR_AF=0;AN=5008;DP=26775;EAS_AF=0;EUR_AF=0.002;NS=2504;SAS_AF=0;VT=SNP
14 19900580 rs557052698 G C 100 PASS AA=G|||;AC=2;AF=0.000399361;AFR_AF=0.0015;AMR_AF=0;AN=5008;DP=37564;EAS_AF=0;EUR_AF=0;NS=2504;SAS_AF=0;VT=SNP
14 20000898 rs532972399 G GT 100 PASS AA=?|T|TT|unsure;AC=3;AF=0.000599042;AFR_AF=0.0023;AMR_AF=0;AN=5008;DP=30057;EAS_AF=0;EUR_AF=0;NS=2504;SAS_AF=0;VT=INDEL
14 20100978 rs534676846 C T 100 PASS AA=c|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=12705;EAS_AF=0.001;EUR_AF=0;NS=2504;SAS_AF=0;VT=SNP
14 20201045 rs577045445 T C 100 PASS AA=.|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=26164;EAS_AF=0.001;EUR_AF=0;NS=2504;SAS_AF=0;VT=SNP
14 20301077 rs546456970 C A 100 PASS AA=.|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=30763;EAS_AF=0.001;EUR_AF=0;NS=2504;SAS_AF=0;VT=SNP
14 20401244 rs542803776 C A 100 PASS AA=c|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=12769;EAS_AF=0.001;EUR_AF=0;NS=2504;SAS_AF=0;VT=SNP
14 20501262 rs143468540 C T 100 PASS AA=C|||;AC=9;AF=0.00179712;AFR_AF=0.0068;AMR_AF=0;AN=5008;DP=20808;EAS_AF=0;EUR_AF=0;NS=2504;SAS_AF=0;VT=SNP
14 20601283 rs536911573 G T 100 PASS AA=G|||;AC=1;AF=0.000199681;AFR_AF=0;AMR_AF=0;AN=5008;DP=23461;EAS_AF=0;EUR_AF=0;NS=2504;SAS_AF=0.001;VT=SNP
(...)
```
| biostars | {"uid": 365303, "view_count": 938, "vote_count": 1} |
<p>I find the way to get the identity of hsp ,but no alignment.One alignment can have multiple hsp, that means the score(identity) of hsp is no equal to alignment. when i do blast in webpage, i always get scores and identities of alignments. Is that any logical problem i made, who can explain this?</p>
| <p>BLAST is a local alignment tool - it reports segments of the sequence, called HSPs, that align and produce the best scores. Each HSP in a match is a segment of query matching to a segment of the subject producing a high score. You can either use the meaningful maximum score or use a score totaled across all HSPs.</p>
<p>BLAST web UI has the same option - for identity percentage in case of multiple HSPs, it gives you the minimum identity% of the HSPs.</p>
| biostars | {"uid": 118504, "view_count": 3508, "vote_count": 1} |
Hi, everyone,
As my title describes, where can I download *Arabidopsis* gene expression data in batch? I know some sites, for example the eFP browser and the AtGenExpress Visualization Tool (AVT), but they only browse one gene at a time. I would like to get a global gene expression map. Could anyone help? Thank you very much!
If you are thinking about array-based gene expression, you could download in batch at [Araport][1]. There are at least two ways to do so:
1. If you don't have a specific experiment in mind, use this pre-defined template (Click [Here][2]). You could select the expression values of a list of genes. There are some demo lists available for you to get started. In order to use a costumed list you need to create a list by uploading the gene IDs (AGIs) [here][3]. The output of this query result is a interactive table that allows you to filter the experiment type, expression ration, etc.
2. If you are interested in a particular experiment condition, use this template instead (Click [here][4]). There is a drop-down menu allowing you to choose a specific experimental setting as well constrain the expression signal, ratio, and p-value. The result table would display all the genes met the criteria in that experiment.
Keep in mind that you would need to register and log in to permanently save the result. I hope this has addressed your question.
[1]: https://www.araport.org
[2]: https://apps.araport.org/thalemine/template.do?name=Gene_Expression&scope=all
[3]: https://apps.araport.org/thalemine/bag.do
[4]: https://apps.araport.org/thalemine/template.do?name=ExperimentCondition_gene&scope=all | biostars | {"uid": 157428, "view_count": 2386, "vote_count": 1} |
Hi,
I have two groups of fastq files, and I need to merge the R1 reads of files that share the same name at the beginning with the R1 reads of the other files.
For example, Soil-13 appears in two R1 read files.
I have multiple paired-end read files like this that I need to merge into one; similarly, I need to do this for the R2 reads.
In the end I want to have two Soil-13 fastq files, one for R1 reads and the other for R2 reads. I need to do this for multiple files, like the ones below.
Soil-13_S4_L001_R1_001.fastq
Soil-13_S4_L001_R2_001.fastq
Soil-15_S5_L001_R1_001.fastq
Soil-15_S5_L001_R2_001.fastq
Soil-13_S62_L001_R1_001.fastq
Soil-13_S62_L001_R2_001.fastq
Soil-15_S72_L001_R1_001.fastq
Soil-15_S72_L001_R2_001.fastq
Kind Regards
| You can just concatenate them with cat:
cat Soil-13*_R1_*.fastq > Soil-13_R1_001.fastq
cat Soil-13*_R2_*.fastq > Soil-13_R2_001.fastq
etc.
You can generate a for loop to run through all libraries (bash):
    for l in $(ls *_R1_*.fastq | cut -d "_" -f 1 | sort | uniq); do cat ${l}*_R1_*.fastq > ${l}_R1_001.fastq && cat ${l}*_R2_*.fastq > ${l}_R2_001.fastq; done
| biostars | {"uid": 371355, "view_count": 1976, "vote_count": 2} |
Assume we identify - by RNA-seq, tiling arrays, by prediction - possible candidate regions for non-coding, small RNAs. I wish to verify and predict the function of as many RNAs as possible by computation before going to the lab. One could use eg. Rfam to find similar sequences, after that we are left with more than 90% that have no match. One could predict the 2D structure using eg. [RNAfold][1], compare that using [RNAforester][2]. But that does not get me even close to a function prediction. Do you have experience with other tools or a better computational pipeline that gets more information out of the ncRNA candidates, possibly even something specific to bacteria.
[1]: http://www.tbi.univie.ac.at/~ivo/RNA/RNAfold.html
[2]: http://bibiserv.techfak.uni-bielefeld.de/rnaforester/ | How are you doing your comparison to Rfam? If you're just running rfam_scan.pl or the CMs then this may not give you the results you really want. See the recent [paper by Kolbe & Eddy][1] to hear more about the limitations of CMs on truncated sequences. Could explain some of your lack of sensitivity vs Rfam. However there is a terrifying number of ncRNAs not yet covered by Rfam.
I'm not sure clustering RNAfold predictions with RNAforester will tell you much either. [Locarna][2] and [CMfinder][3] have been used previously to cluster many ncRNA predictions.
[1]: http://www.ncbi.nlm.nih.gov/pubmed/19304875
[2]: http://www.ncbi.nlm.nih.gov/pubmed/20444875
[3]: http://www.ncbi.nlm.nih.gov/pubmed/16357030 | biostars | {"uid": 1662, "view_count": 6071, "vote_count": 12} |
Hi All,
I have a table with six columns (in the following).
![enter image description here][1]
[1]: http://uupload.ir/files/xm5g_taible.png
So, I want to delete SNPs that are less than number 10 in the SNP column.
What is the best idea?
Best Regard
Mostafa | awk !
a command line such as the one below should do the trick:
awk ' $4 >= 10' <table-file> > new_file
and if you want to (additionally) apply calculations on a certain column:
awk ' $4 >= 10' <table-file> | awk '$2=$2+2000' > new_file
| biostars | {"uid": 337161, "view_count": 1292, "vote_count": 1} |
<p>I am writing a script to get SNP positions from dbSNP by using NCBI E-Utils. When I use esearch and efetch to retrieve the positions, I can only access positions from the newest genome build. Since I want to use GRCh37, this causes problem. I was wondering if you knew of a way of asking for a specific assembly in E-Utils.</p>
| dbSNP does not support searches for older assembly coordinates.
Depending on exactly what you are trying to do, you can probably use lift-over or <a href="https://www.ncbi.nlm.nih.gov/genome/tools/remap">remap</a> to first convert your coordinates. | biostars | {"uid": 100566, "view_count": 1893, "vote_count": 1} |
Hello all,
I am new to RNA-seq, and would like your input to see if my results are real or not. I am trying to see if there are any differentially expressed genes between knockout and wild-type strain with 3 animals for each genotype. In my gene_exp.diff output from cuffdiff the most significant p-value is 5.00E-05 and I have 58 genes with this value, and all of them have the exact same q-value of 0.0150853. I am a bit skeptical that all of these genes can have the exact same p and q values....
Could it be that they actually have a lower significant value, but cuffdiff cuts them all off at a p-value of 5.00E-05 and q-value of 0.0150853?
I have included a sample of my results below & I am using cufflinks 2.2.1
I would appreciate any insight into this phenomenon.
Cheers,
Yuka
```
value_1 value_2 log2(fold_change) test_stat p_value q_value significant
89.2154 58.9422 -0.597993 -2.83732 5.00E-05 0.0150853 yes
1.11605 0.637795 -0.807238 -2.87108 5.00E-05 0.0150853 yes
3.28464 2.19358 -0.582446 -2.43504 5.00E-05 0.0150853 yes
133.831 3.77205 -5.14892 -10.0845 5.00E-05 0.0150853 yes
439.124 707.344 0.687783 3.06388 5.00E-05 0.0150853 yes
46.5894 20.684 -1.17148 -4.54961 5.00E-05 0.0150853 yes
4.29346 0.487185 -3.1396 -5.51822 5.00E-05 0.0150853 yes
3.98649 2.53882 -0.650961 -2.34521 5.00E-05 0.0150853 yes
2.43337 7.40836 1.6062 3.70451 5.00E-05 0.0150853 yes
3.09364 1.61501 -0.937765 -2.6357 5.00E-05 0.0150853 yes
22.7107 34.5983 0.607328 2.75673 5.00E-05 0.0150853 yes
1.62867 0.384298 -2.0834 -3.16599 5.00E-05 0.0150853 yes
85.5877 40.7053 -1.07218 -3.79772 5.00E-05 0.0150853 yes
7.59285 4.52762 -0.74589 -3.00281 5.00E-05 0.0150853 yes
0.655639 1.15038 0.811131 3.2065 5.00E-05 0.0150853 yes
16.8931 9.04287 -0.901585 -3.10804 5.00E-05 0.0150853 yes
2.13982 1.15785 -0.886043 -2.26105 5.00E-05 0.0150853 yes
51.0117 70.4272 0.465303 2.23548 5.00E-05 0.0150853 yes
16.4426 4.60742 -1.8354 -6.30806 5.00E-05 0.0150853 yes
1.70664 3.06055 0.842634 2.63646 5.00E-05 0.0150853 yes
25898.9 11368.1 -1.1879 -2.67136 5.00E-05 0.0150853 yes
31.0799 43.4487 0.483332 2.38779 5.00E-05 0.0150853 yes
1.34841 0.854922 -0.657398 -2.31314 5.00E-05 0.0150853 yes
8.65166 2.89657 -1.57863 -6.03995 5.00E-05 0.0150853 yes
84.8254 128.85 0.603121 3.00014 5.00E-05 0.0150853 yes
7.3413 4.40212 -0.737838 -3.1716 5.00E-05 0.0150853 yes
35.9883 52.4989 0.544761 2.75034 5.00E-05 0.0150853 yes
5.09062 3.22742 -0.657457 -2.46845 5.00E-05 0.0150853 yes
3.54614 2.26065 -0.649511 -2.44616 5.00E-05 0.0150853 yes
5.88631 2.35884 -1.31929 -2.90402 5.00E-05 0.0150853 yes
1.63451 0.707828 -1.20739 -3.0934 5.00E-05 0.0150853 yes
7.81814 12.558 0.68371 2.81042 5.00E-05 0.0150853 yes
73.4685 110.3 0.586236 2.84624 5.00E-05 0.0150853 yes
3.8276 2.46574 -0.634417 -2.42448 5.00E-05 0.0150853 yes
22.13 14.2849 -0.631518 -2.55388 5.00E-05 0.0150853 yes
10.0575 5.06492 -0.989659 -3.61907 5.00E-05 0.0150853 yes
4.56879 2.70737 -0.754918 -2.5256 5.00E-05 0.0150853 yes
5.17134 3.10769 -0.734697 -2.88216 5.00E-05 0.0150853 yes
10.8373 6.79769 -0.672893 -2.30828 5.00E-05 0.0150853 yes
1.13635 0.563153 -1.0128 -2.01683 5.00E-05 0.0150853 yes
45.5072 76.8825 0.756562 3.63131 5.00E-05 0.0150853 yes
1.01017 0.112521 -3.16633 -4.40682 5.00E-05 0.0150853 yes
1.3256 0 #NAME? #NAME? 5.00E-05 0.0150853 yes
3.47888 0 #NAME? #NAME? 5.00E-05 0.0150853 yes
76.2555 33.5566 -1.18424 -4.46786 5.00E-05 0.0150853 yes
11.6164 19.7901 0.768611 3.5777 5.00E-05 0.0150853 yes
4.70657 3.21184 -0.551276 -2.49339 5.00E-05 0.0150853 yes
4.45358 1.18755 -1.90698 -5.88499 5.00E-05 0.0150853 yes
179.376 122.177 -0.554005 -2.59482 5.00E-05 0.0150853 yes
1.88054 1.23137 -0.610883 -2.60224 5.00E-05 0.0150853 yes
51.2297 34.7039 -0.561882 -2.35145 5.00E-05 0.0150853 yes
1.55802 5.58375 1.84152 4.20437 5.00E-05 0.0150853 yes
1.15313 0 #NAME? #NAME? 5.00E-05 0.0150853 yes
6.63245 12.3112 0.89236 3.63541 5.00E-05 0.0150853 yes
4.33681 7.43317 0.777344 2.41337 5.00E-05 0.0150853 yes
57.6986 11.8767 -2.2804 -4.98563 5.00E-05 0.0150853 yes
6.55491 2.75186 -1.25217 -2.86793 5.00E-05 0.0150853 yes
25.3647 16.1568 -0.650683 -2.48292 5.00E-05 0.0150853 yes
``` | <p>If I remember correctly, cuffdiff uses (in recent versions like yours) a kind of permutation test (<strong>edit</strong>: sampling procedure) to assess the significance of DE. This is why it has got "bins" of qvalues and the lowest one is still not so low. I'm anyway not sure of this.</p>
<p>Whatever the cause of this is, it is a common situation, so it doesn't mean you did something wrong.</p>
<p><strong>edit:</strong> just found a very partial comment of the authours on this topic: see the last point of the description of 2.1.0 <a href="http://cole-trapnell-lab.github.io/cufflinks/releases/v2.1.0/">release</a></p>
| biostars | {"uid": 175148, "view_count": 4273, "vote_count": 1} |