Q: Is there someone I should speak with about study design? I have read recent papers about pooling samples and other strategies. My impression is that whether I pool or not, about 5 patients in each group is enough to balance out interpatient SNPs. Do you think there is any value to including a computational biologist in deciding on study design?

A: Getting some statistical expertise involved during the design stage is certainly a good idea. You can contact John Tobias at jtobias@pennmedicine.upenn.edu.

Five replicates per group would be a good place to start if you can afford that many arrays. Individual genes are regulated by a variety of processes whose variance may be much greater or smaller than the average variance due to SNPs. Using microarray assays alone, you will need to run however many replicates it takes to estimate expression levels with enough statistical confidence to support a strong conclusion about any given gene on your differential expression candidate list.
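
As a rough way to gauge what a given number of replicates can detect, here is a minimal power-calculation sketch in Python. It assumes a simple two-sample t-test on log2-scale expression; the fold change and per-gene standard deviation are placeholder values you would replace with your own estimates.

    # Rough per-gene sample size estimate for a two-group comparison.
    # Assumes log2-scale expression compared with a two-sample t-test;
    # the fold change and SD below are placeholders, not measured values.
    from statsmodels.stats.power import TTestIndPower

    log2_fold_change = 1.0           # a two-fold change on the log2 scale
    per_gene_sd = 0.7                # assumed within-group SD on the log2 scale
    effect_size = log2_fold_change / per_gene_sd   # Cohen's d

    n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                              alpha=0.05,
                                              power=0.8,
                                              alternative='two-sided')
    print(f"replicates needed per group: {n_per_group:.1f}")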

Pooling hides information about biological variance that may itself be informative. It is better to replicate at the level of individual biological units so that each gene's profile is preserved in its genomic context; an outlier can then be examined relative to all the other genes that may have contributed to the unusual expression. Of course, pooling is sometimes unavoidable, as when a single sample is too small to yield enough RNA for the assay.
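
To illustrate what pooling discards, here is a small sketch with hypothetical numbers, comparing the per-gene spread you can estimate from individual replicates with the single value a pooled sample would approximate.

    import numpy as np

    # Hypothetical log2 expression values for one gene in five individual patients
    individual = np.array([8.1, 8.3, 11.9, 8.0, 8.2])

    # Individual replicates reveal the spread and the outlier patient
    print("mean:", individual.mean(), "SD:", individual.std(ddof=1))

    # A pooled sample approximates only the average; the outlier is invisible
    print("pooled estimate:", individual.mean())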


Q: I previously analyzed my Affymetrix data in Microarray Suite v4, and some of the genes on my "most wanted list" (2x up) have now lost their signal upon reanalysis with MAS v5 (when I reanalyzed the data from my old chips with v5, they were called 'absent'). Do you know what the difference between the versions is?

A: There are two Affymetrix documents that explain the differences between the two MAS versions. Both are posted in the Affy Users section of www.med.upenn.edu/microarr. If genes went from present to absent, it may be due to new ways of handling background signal from the mismatch probes. Compare the number of probe pairs used in the analysis for both versions, and check the p-value for detection in MAS 5. You could also adjust the analysis parameters to accept less stringent thresholds and see how that affects your calls.
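
If you want to see systematically which probe sets changed call between the two analyses, a sketch along these lines may help. It assumes both metrics sheets were exported as tab-delimited text; the file and column names are hypothetical, so adjust them to match your exports.

    import pandas as pd

    # Hypothetical file and column names; adjust to match your exported metrics sheets.
    mas4 = pd.read_csv("mas4_metrics.txt", sep="\t")
    mas5 = pd.read_csv("mas5_metrics.txt", sep="\t")

    merged = mas4[["Probe Set", "Abs Call"]].merge(
        mas5[["Probe Set", "Detection", "Detection p-value"]], on="Probe Set")

    # Cross-tabulate the calls to see how many probe sets moved Present -> Absent
    print(pd.crosstab(merged["Abs Call"], merged["Detection"]))

    # Probe sets Present in MAS 4 but Absent in MAS 5, with the MAS 5 detection p-value
    lost = merged[(merged["Abs Call"] == "P") & (merged["Detection"] == "A")]
    print(lost[["Probe Set", "Detection p-value"]])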


Q: We are trying to validate some of the genes that were found to be upregulated in the Affy experiments. The major problem is false positives - several genes called upregulated by the Affy software analysis do not show the same result by RT-PCR. Based on your experience, what is the approximate rate of false positives in Affy analysis? How do we overcome this problem? Could we co-hybridize our experimental and control samples on the same chip - is this possible with the Affy chips?

A: The false positive rate will mostly depend on the experimental design and biological variation. Some things to check include:

Variation of samples within a treatment - for a given candidate gene, what is the standard deviation around the mean expression level within each treatment? For the best differentially expressed candidates, the expression level error bars of the two treatments will not overlap (see the sketch after this list).

Array-to-array normalization - global scaling is usually robust, but may not be appropriate if a large proportion of the assayed genes are really changing. Do you get the same results if you normalize to a spiked control, or an internal control gene known to not be affected?

Change p-value - for pairwise comparisons, focus less on the magnitude of change and more on the change call confidence. Is the candidate consistently called as changed in various pairwise combinations?

Method of confirmation - some (probably small) number of genes will give different results when assayed by oligomer probes, RT-PCR, and/or northern blots. Check your RT-PCR primers or northern probes against the probe and target sequences at Netaffx, and BLAST all of these against the latest GenBank and Unigene builds to see if gene family members or homologous sequences might contribute signal.
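
For the first check above, here is a minimal sketch. It assumes a tab-delimited table of signal values with one column per array, plus a mapping of array names to treatments; the file, column, and array names are placeholders.

    import pandas as pd

    # Hypothetical layout: rows are probe sets, columns are arrays,
    # and 'groups' maps each array column to its treatment.
    signals = pd.read_csv("signals.txt", sep="\t", index_col="Probe Set")
    groups = {"ctl_1": "control", "ctl_2": "control", "ctl_3": "control",
              "trt_1": "treated", "trt_2": "treated", "trt_3": "treated"}

    means = signals.T.groupby(groups).mean().T
    sds = signals.T.groupby(groups).std().T

    # Candidates whose error bars (mean +/- SD) do not overlap between treatments
    sep = (means["treated"] - sds["treated"]) > (means["control"] + sds["control"])
    sep |= (means["control"] - sds["control"]) > (means["treated"] + sds["treated"])
    print(signals.index[sep])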

Affymetrix published one early paper using competitive (two-channel) hybridization on a GeneChip. We wouldn't be able to try this without retrofitting the scanner; Affymetrix doesn't support two-dye detection, probably to maintain 1 sample = 1 chip and sell more arrays. Certainly the next step for many genomic-scale profiling projects is to print a custom array of cDNA or oligomer probes on glass slides, which can be used for competitive hybridization. A good number of probes can be produced for a couple of thousand dollars, allowing you to print more than enough arrays of your candidate genes to fully characterize their expression profiles.


Q: Which file from Affymetrix do I load into GeneSpring? Do I redo normalizations in GeneSpring or use what I have from MAS?

A: Export the metrics sheet from an absolute analysis (no baseline chip) to Excel and save it as a tab-delimited text file. Run the GeneSpring autoloader and, when prompted, point to that .txt file. GeneSpring will automatically detect this as Affymetrix data and apply a default set of normalizations and interpretations. We generally use scaling to a target of 150 in MAS and then accept the per-chip normalization in GeneSpring. This "extra" normalization does no harm, but you may wish to compare a number of normalization methods in both MAS and GeneSpring to see how they affect your data. See additional tips in the GeneSpring Users section of https://pathbio.med.upenn.edu/pbr/portal/mpf/.
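
If you want to see what different normalizations do to your chips before committing to one, here is a small sketch that reads several exported metrics sheets and compares them before and after an illustrative per-chip normalization (dividing each chip by its median signal). The file and column names are placeholders, and the median method is just one example, not necessarily what GeneSpring applies by default.

    import pandas as pd

    # Placeholder file names; one exported metrics sheet per chip, saved as
    # tab-delimited text with "Probe Set" and "Signal" columns.
    chips = {name: pd.read_csv(name + ".txt", sep="\t", index_col="Probe Set")["Signal"]
             for name in ["chip_A", "chip_B", "chip_C"]}
    signals = pd.DataFrame(chips)

    # How similar are the chips after MAS scaling to a 150 target?
    print(signals.mean())      # per-chip mean signal
    print(signals.median())    # per-chip median signal

    # An illustrative per-chip normalization: divide each chip by its median
    normalized = signals / signals.median()
    print(normalized.describe())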


Q: How can I find out if the following genes are on the mouse chip:

A: Use the Netaffx tools from our link in the Affymetrix Users section. Registration is free but required to access the analysis toolset. You can query any GeneChip probe set by gene name, keywords, BLAST sequence similarity and more. Remember that some sets include more than one microarray.


Q:
1. The documentation for the RNA isolation kit I am using says that some downstream applications require a DNase digestion to remove any contaminating DNA. Is this step necessary?

2. Should we perform a denaturation step (a 65°C treatment)?

3. My recollection from our discussion is that you require 5 µg of sample RNA. Is this correct? What volume should it be in? (The final elution step of the kit I'm using leaves the sample in 70-80 µl of buffer - should we EtOH precipitate this and bring it up in a smaller volume?)

A:
1. No DNase is necessary unless you think there is significant contamination, enough to affect nucleic acid quantitation. First strand synthesis is from a polyT primer using reverse transcriptase, and amplification is by in vitro transcription, so there is little chance of genomic DNA contributing to the product.

2. No need to denature; denaturation occurs during primer hybridization.

3. The minimum for a standard Affy sample is 5 µg at 0.5 µg/µl, i.e., about 10 µl. Five micrograms in 70-80 µl of buffer is only about 0.07 µg/µl, well below that target, so the eluate will need to be concentrated. You may want to add Ambion Superase-In or a similar RNase inhibitor to the eluate. If you elute in a buffer, EtOH precipitate with RNase-free glycogen or PelletPaint as carrier, wash, and resuspend in water; don't let the pellet dry out completely. You could also use a Microcon3 or similar concentrator. If you elute in water, just partially lyophilize to reduce the volume.

The A260/280 should be 1.7 or better. Also run a regular agarose gel to check size distribution and intensities of ribosomal bands. Bring the OD readings and gel picture with your samples.


Q: We are looking at a gene named X, which showed a change between two groups of cells, with a signal log ratio of 1.4 and a "D" call. However, one sample showed a signal value of 52.1 with an "A" call, while the control cells showed 143.6, also with an "A".

How should I interpret values that show a change but are called "A" in both samples? Are these genes actually not expressed? Is the change meaningless? Isn't a signal value of 143.6 too high to be judged "A"? There are other genes in the data with signal values lower than 143.6 that are called "P".

A: We routinely disregard changes called for genes that are not detected in either sample. The Affy software attempts to detect changes for all genes, but as you've concluded this is meaningless unless the RNA is accurately detected in at least one sample. Signal values, and signal log ratios, must be judged in the context of their associated p-values. A gene might have one or two probes with high fluorescence and ten probes with no fluorescence; this could produce a signal value of 143 (or higher) but a poor p-value and an A call because the signal is not consistent across the probe set for that gene. Conversely, all probes for a gene might show a low but consistent signal above background, producing a signal value of 30 and a good p-value and thus a P call.

Similarly, signal log ratios are evaluated for consistent changes across all probes for that gene. A large change may fail the significance test and get a poor p-value and an NC call, while a more consistent but smaller magnitude change has a good p-value and thus an I or D (or MI, MD) call.

When interpreting your results, try sorting on the p-values before considering the magnitude of change.
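
That kind of filtering and sorting is easy to script on the exported comparison metrics. A minimal sketch follows; the file and column names are hypothetical, so adjust them to match your export.

    import pandas as pd

    # Hypothetical column names from a comparison-analysis export
    data = pd.read_csv("comparison_metrics.txt", sep="\t")

    # Keep genes called Present in at least one of the two samples
    detected = data[(data["Detection"] == "P") | (data["Baseline Detection"] == "P")]

    # Rank the remaining genes by change p-value before looking at magnitude
    ranked = detected.sort_values("Change p-value")
    print(ranked[["Probe Set", "Signal Log Ratio", "Change", "Change p-value"]].head(20))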


Q: I've installed MAS, and while it seems to be a great program, I can't figure out whether or not I'm able to "average" the 3 replicates for each treatment. Is there some way to do this?

Also, could you give me some feel for what situations call for scaling? When would normalization not be sufficient/optimal?

A: As you have found, MAS is unable to deal with replicates. There are a couple of options: export the metrics output to Excel, put all the Signal columns from the replicates on one sheet, calculate the average, and then use the average Signal to run comparisons in Excel. Another way is to load the metrics into GeneSpring, which allows you to designate replicates for averaging before further analysis.
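
If you would rather script the averaging than do it by hand in Excel, here is a minimal sketch. It assumes each replicate's metrics sheet was saved as tab-delimited text with "Probe Set" and "Signal" columns; the file names are placeholders.

    import pandas as pd

    # Placeholder file names for three replicate metrics exports per treatment
    control_files = ["ctl_rep1.txt", "ctl_rep2.txt", "ctl_rep3.txt"]
    treated_files = ["trt_rep1.txt", "trt_rep2.txt", "trt_rep3.txt"]

    def mean_signal(files):
        # Read each replicate's Signal column, indexed by probe set, and average
        cols = [pd.read_csv(f, sep="\t", index_col="Probe Set")["Signal"] for f in files]
        return pd.concat(cols, axis=1).mean(axis=1)

    control = mean_signal(control_files)
    treated = mean_signal(treated_files)

    # Simple fold change on the averaged signals
    ratio = treated / control
    print(ratio.sort_values(ascending=False).head(20))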

Scaling is our standard normalization because it is more robust than relying on a single set of internal control probes or on spiked controls added during hybridization. It is appropriate when the samples being assayed do not have a large proportion of genes undergoing changes. The average signal from all features is calculated, and a scaling factor is applied to every feature to adjust that mean to an arbitrary target; 150 works well for us. A related method is to normalize to the total signal from all features rather than the mean. This does the same thing as scaling to the mean for samples within an experiment, but makes comparisons to later experiments more difficult. If many genes really are different, say when comparing muscle vs. brain tissue (an extreme example), scaling would artificially remove real differences, and normalization to an internal or spiked control would then be appropriate.
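
As a concrete illustration of global scaling, here is a short sketch. The file and column names are placeholders, and MAS actually uses a trimmed mean rather than the plain mean shown here, so treat this as the idea rather than the exact calculation.

    import pandas as pd

    # One chip's Signal values, indexed by probe set (placeholder file name)
    signal = pd.read_csv("chip_A.txt", sep="\t", index_col="Probe Set")["Signal"]

    TARGET = 150                       # arbitrary target intensity
    scale_factor = TARGET / signal.mean()
    scaled = signal * scale_factor     # every feature multiplied by the same factor

    print("scale factor:", round(scale_factor, 3))
    print("scaled mean:", round(scaled.mean(), 1))   # now equal to the target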


Last Updated on 05/04/2015