Human Mobility Networks, Travel Restrictions, and the Global Spread of 2009 H1N1 Pandemic

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

After the emergence of the H1N1 influenza in 2009, some countries responded with travel-related controls during the early stage of the outbreak in an attempt to contain or slow down its international spread. These controls along with self-imposed travel limitations contributed to a decline of about 40% in international air traffic to/from Mexico following the international alert. However, no containment was achieved by such restrictions and the virus was able to reach pandemic proportions in a short time. When gauging the value and efficacy of mobility and travel restrictions it is crucial to rely on epidemic models that integrate the wide range of features characterizing human mobility and the many options available to public health organizations for responding to a pandemic. Here we present a comprehensive computational and theoretical study of the role of travel restrictions in halting and delaying pandemics by using a model that explicitly integrates air travel and short-range mobility data with high-resolution demographic data across the world and that is validated by the accumulation of data from the 2009 H1N1 pandemic. We explore alternative scenarios for the 2009 H1N1 pandemic by assessing the potential impact of mobility restrictions that vary with respect to their magnitude and their position in the pandemic timeline. We provide a quantitative discussion of the delay obtained by different mobility restrictions and the likelihood of containing outbreaks of infectious diseases at their source, confirming the limited value and feasibility of international travel restrictions. These results are rationalized in the theoretical framework characterizing the invasion dynamics of the epidemics at the metapopulation level.
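
The qualitative finding that travel restrictions delay but do not contain an outbreak can be illustrated with a toy two-patch SIR metapopulation model. This is a drastic simplification of the paper's global model, and every parameter value below is an illustrative assumption:

```python
# Toy two-patch SIR metapopulation: patch A is seeded, patch B receives
# infectious travellers at rate `travel_rate`. All parameters are assumptions.
def arrival_day(travel_rate, beta=0.5, gamma=0.25, n=1_000_000, days=400):
    """First day on which patch B has accumulated more than 100 cases."""
    sA, iA, rA = n - 10.0, 10.0, 0.0   # patch A seeded with 10 cases
    sB, iB, rB = float(n), 0.0, 0.0    # patch B initially disease-free
    dt = 0.1                           # Euler step, in days
    for day in range(days):
        for _ in range(int(1 / dt)):
            inf_A = beta * sA * iA / n          # new infections in A
            inf_B = beta * sB * iB / n          # new infections in B
            flow = travel_rate * (iA - iB)      # net infectious travellers A -> B
            d_iA = inf_A - gamma * iA - flow
            d_iB = inf_B - gamma * iB + flow
            rA += dt * gamma * iA
            rB += dt * gamma * iB
            sA -= dt * inf_A
            sB -= dt * inf_B
            iA += dt * d_iA
            iB += dt * d_iB
        if iB + rB > 100:
            return day + 1
    return days
```

In this toy setting, cutting the travel coupling tenfold delays arrival in patch B by roughly ln(10)/(beta − gamma), about a week, but never prevents it, mirroring the delay-without-containment result discussed above.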


Posted in Computer Science | Comments Off

History Shaped the Geographic Distribution of Genomic Admixture on the Island of Puerto Rico

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

Contemporary genetic variation among Latin American populations reflects population migrations shaped by complex historical, social and economic factors. Consequently, admixture patterns may vary across geographic regions, from countries down to neighborhoods. We examined the geographic variation of admixture across the island of Puerto Rico and the degree to which it could be explained by historic and social events. We analyzed a census-based sample of 642 Puerto Rican individuals who were genotyped for 93 ancestry informative markers (AIMs) to estimate African, European and Native American ancestry. Socioeconomic status (SES) data and geographic location were obtained for each individual. There was significant geographic variation of ancestry across the island. In particular, African ancestry demonstrated a decreasing east-to-west gradient that was partially explained by historical factors linked to the colonial sugar plantation system. SES also demonstrated a parallel decreasing cline from east to west. However, at a local level, SES and African ancestry were negatively correlated. European ancestry was strongly negatively correlated with African ancestry and therefore showed complementary patterns. By contrast, Native American ancestry showed little variation across the island and across individuals and appears to have played little social role historically. The observed geographic distributions of SES and genetic variation relate to historical social events and mating patterns, and have substantial implications for the design of studies in the recently admixed Puerto Rican population. More generally, our results demonstrate the importance of incorporating social and geographic data with genetics when studying contemporary admixed populations.


Posted in Computer Science | Comments Off

Model of Yield Response of Corn to Plant Population and Absorption of Solar Energy

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

Biomass yield of agronomic crops is influenced by a number of factors, including crop species, soil type, applied nutrients, water availability, and plant population. This article focuses on the dependence of biomass yield (Mg ha−1 and g plant−1) on plant population (plants m−2). The analysis includes data from the literature for three independent studies with the warm-season annual corn (Zea mays L.) grown in the United States. Data are analyzed with a simple exponential mathematical model which contains two parameters, viz. Ym (Mg ha−1) for maximum yield at high plant population and c (m2 plant−1) for the population response coefficient. This analysis leads to a new parameter called the characteristic plant population, xc = 1/c (plants m−2). The model is shown to describe the data rather well for the three field studies. In one study, measurements were made of solar radiation at different positions in the plant canopy. The coefficient of absorption of solar energy was assumed to be the same as c, which provides a physical basis for the exponential model. The three studies showed no definitive peak in yield with plant population, but generally exhibited an asymptotic approach to maximum yield with increased plant population. Values of xc were very similar for the three field studies with the same crop species.
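
The two-parameter model described above plausibly takes the asymptotic form Y(x) = Ym(1 − exp(−c·x)); the sketch below uses this assumed form with made-up parameter values, not values fitted to the cited studies:

```python
import math

# Assumed form of the two-parameter exponential yield model; ym and c are
# illustrative values, not fitted to the three field studies.
def area_yield(x, ym, c):
    """Biomass yield per unit area (Mg/ha) at plant population x (plants/m^2)."""
    return ym * (1.0 - math.exp(-c * x))

def plant_yield(x, ym, c):
    """Yield per plant (area yield divided by population) falls as x rises."""
    return area_yield(x, ym, c) / x

ym, c = 20.0, 0.15   # Mg/ha and m^2/plant, illustrative
xc = 1.0 / c         # characteristic plant population (plants/m^2)
```

At x = xc the crop reaches a fraction 1 − 1/e (about 63%) of the maximum yield Ym, which gives the characteristic population its interpretation; the curve rises asymptotically toward Ym with no peak, matching the behavior described above.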


Posted in Computer Science | Comments Off

An Enhanced Probabilistic LDA for Multi-Class Brain Computer Interface

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

Background

There is growing interest in signal processing and machine learning methods that may turn the brain computer interface (BCI) into a new communication channel. A variety of classification methods have been used to convert brain signals into control commands. However, most of these methods produce only uncalibrated values and uncertain results.

Methodology/Principal Findings

In this study, we present a probabilistic method, “enhanced BLDA” (EBLDA), for multi-class motor imagery BCI, which uses Bayesian linear discriminant analysis (BLDA) with probabilistic output to improve classification performance. EBLDA builds a new classifier by enlarging the training dataset with test samples classified with high probability. It is based on the hypothesis that unlabeled samples with high probability provide valuable information that enhances the learning process and yields a classifier with refined decision boundaries. To investigate the performance of EBLDA, we first used carefully designed simulated datasets to study how EBLDA works, and then adopted a real BCI dataset for further evaluation. The current study shows that: 1) probabilistic information can improve BCI performance for subjects with a high kappa coefficient; and 2) with supplementary training samples drawn from high-probability test samples, EBLDA classifies significantly better than BLDA, especially for small training datasets, because it refines the decision boundary by shifting it with the support of information from the test samples.
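
The self-training idea behind EBLDA can be sketched with a plain two-class Gaussian LDA standing in for the Bayesian variant; the data, confidence thresholds, and classifier below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lda(X, y):
    """Two-class LDA with a shared covariance matrix; returns weights and bias."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])   # regularized pooled covariance
    w = np.linalg.solve(cov, m1 - m0)
    b = -0.5 * (m0 + m1) @ w
    return w, b

def proba(X, w, b):
    """Probability of class 1 via a logistic squashing of the LDA score."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Toy data: two Gaussian classes in 2-D, with a deliberately small training set.
X_tr = np.vstack([rng.normal(-1, 1, (10, 2)), rng.normal(1, 1, (10, 2))])
y_tr = np.array([0] * 10 + [1] * 10)
X_te = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y_te = np.array([0] * 100 + [1] * 100)

# Step 1: train on the small labeled set only (the plain-BLDA stand-in).
w, b = fit_lda(X_tr, y_tr)

# Step 2: add test samples classified with high probability to the training set.
p = proba(X_te, w, b)
confident = (p < 0.1) | (p > 0.9)
X_aug = np.vstack([X_tr, X_te[confident]])
y_aug = np.concatenate([y_tr, (p[confident] > 0.5).astype(int)])

# Step 3: refit to obtain the refined decision boundary (the EBLDA idea).
w2, b2 = fit_lda(X_aug, y_aug)
```

With the enlarged training set, the refitted boundary typically moves toward the boundary a larger labeled sample would give, which is the boundary shift described in finding 2 above.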

Conclusions/Significance

The proposed EBLDA could potentially reduce training effort, making it valuable for realizing effective online BCI systems, especially multi-class ones.


Posted in Computer Science | Comments Off

Modelers’ Perception of Mathematical Modeling in Epidemiology: A Web-Based Survey

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

Background

Mathematical modeling in epidemiology (MME) is being used increasingly. However, there are many uncertainties in terms of definitions, uses and quality features of MME.

Methodology/Principal Findings

To delineate the current status of these models, a 10-item questionnaire on MME was devised. Administered as an anonymous internet-based survey, the questionnaire was completed by 189 scientists who had published in the domain of MME. A small minority (18%) of respondents claimed to have in mind a concise definition of MME. Some techniques were identified by the researchers as characterizing MME (e.g. Markov models), while others at the same level of mathematical sophistication were not (e.g. Cox regression). The researchers' opinions also diverged on the potential applications of MME, which was perceived as highly relevant for providing insight into complex mechanisms but less relevant for identifying causal factors. The quality criteria cited were those of good science in general and were not related to the size or nature of the public health problems addressed.

Conclusions/Significance

This study shows that perceptions of the nature, uses and quality criteria of MME diverge, even within the community of authors who have published in this domain. Nevertheless, MME is an emerging discipline in epidemiology, and this study underlines that it is associated with specific areas of application and specific methods. As the discipline develops, it would likely benefit from a framework providing recommendations and guidance at each step of a study, from design to reporting.


Posted in Computational biology | Comments Off

Considering Transposable Element Diversification in De Novo Annotation Approaches

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

Transposable elements (TEs) are mobile, repetitive DNA sequences that are almost ubiquitous in prokaryotic and eukaryotic genomes. They have a large impact on genome structure, function and evolution. With the recent development of high-throughput sequencing methods, many genome sequences have become available, making possible comparative studies of TE dynamics at an unprecedented scale. Several methods have been proposed for the de novo identification of TEs in sequenced genomes. Most begin with the detection of genomic repeats, but the subsequent steps for defining TE families differ. High-quality TE annotations are available for the Drosophila melanogaster and Arabidopsis thaliana genome sequences, providing a solid basis for the benchmarking of such methods. We compared the performance of specific algorithms for the clustering of interspersed repeats and found that only a particular combination of algorithms detected TE families with good recovery of the reference sequences. We then applied a new procedure for reconciling the different clustering results and classifying TE sequences. The whole approach was implemented in a pipeline using the REPET package. Finally, we show that our combined approach highlights the dynamics of well-defined TE families by identifying structural variations among their copies. This approach makes it possible to annotate TE families and to study their diversification in a single analysis, improving our understanding of TE dynamics at the whole-genome scale and across diverse species.
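
The reconciliation of different clustering results can be sketched with a toy consensus rule in which two repeat copies join the same family only when both clusterings agree; this is an illustrative simplification, not the REPET package's actual algorithm, and the repeat names are made up:

```python
# Two repeat copies join the same consensus family only if both clusterings
# grouped them together; repeat names r1..r5 are invented for illustration.
def reconcile(clustering_a, clustering_b):
    """Intersect two flat clusterings given as lists of member lists."""
    label_a = {m: i for i, fam in enumerate(clustering_a) for m in fam}
    label_b = {m: i for i, fam in enumerate(clustering_b) for m in fam}
    families = {}
    for m in sorted(label_a):
        # A consensus family is identified by its pair of original labels.
        families.setdefault((label_a[m], label_b[m]), []).append(m)
    return sorted(families.values())

a = [["r1", "r2", "r3"], ["r4", "r5"]]
b = [["r1", "r2"], ["r3", "r4", "r5"]]
consensus = reconcile(a, b)   # r3 is split off, where the two methods disagree
```

Disagreements between methods thus surface as singleton or fragmented families, which is where manual curation or further classification steps would focus.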


Posted in Computational biology | Comments Off

Landscape Mapping of Functional Proteins in Insulin Signal Transduction and Insulin Resistance: A Network-Based Protein-Protein Interaction Analysis

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

The prevalence of type 2 diabetes has increased rapidly in recent years throughout the world. Disruption of the insulin signal transduction mechanism, known as insulin resistance, is one of the primary causes associated with type 2 diabetes. The signaling mechanism involves several proteins, including 7 major functional proteins: INS, INSR, IRS1, IRS2, PIK3CA, Akt2, and GLUT4. Using these 7 principal proteins, a multiple sequence alignment was created and pairwise similarity scores between the sequences were computed. We constructed a phylogenetic tree and annotated it with node and distance information. In addition, we generated sequence logos and ultimately developed the protein-protein interaction network. Even this small insulin signal transduction protein set reveals a complex network of interactions among the functional proteins.
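
The cascade among the seven proteins can be sketched as a directed adjacency list following the canonical insulin signalling order (INS activates INSR, which recruits IRS1/IRS2, and so on down to GLUT4); the paper's actual interaction network may contain additional edges:

```python
# Directed adjacency list for the seven functional proteins, following the
# canonical insulin signalling order; real PPI networks may have extra edges.
ppi = {
    "INS":    ["INSR"],
    "INSR":   ["IRS1", "IRS2"],
    "IRS1":   ["PIK3CA"],
    "IRS2":   ["PIK3CA"],
    "PIK3CA": ["Akt2"],
    "Akt2":   ["GLUT4"],
    "GLUT4":  [],
}

def downstream(graph, start):
    """Return every protein reachable from `start` by a depth-first walk."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

A traversal like this makes the disruption argument concrete: a lesion at any upstream node (for example INSR) cuts off everything downstream of it, including the GLUT4 glucose transporter.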


Posted in Computational biology | Comments Off

Identification of Candidate Small-Molecule Therapeutics to Cancer by Gene-Signature Perturbation in Connectivity Mapping

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

Connectivity mapping is a recently developed technique for discovering the underlying connections between different biological states based on gene-expression similarities. The sscMap method has been shown to provide enhanced sensitivity in mapping meaningful connections leading to testable biological hypotheses and in identifying drug candidates with particular pharmacological and/or toxicological properties. Challenges remain, however, as to how to prioritise the large number of discovered connections in an unbiased manner such that the success rate of any follow-up investigation can be maximised. We introduce a new concept, gene-signature perturbation, which tests whether an identified connection is stable under systematic minor changes (perturbations) to the gene signature. We applied the perturbation method to three independent datasets obtained from the GEO database: acute myeloid leukemia (AML), cervical cancer, and breast cancer treated with letrozole. We demonstrate that the perturbation approach helps to identify meaningful biological connections that suggest the most relevant candidate drugs. In the case of AML, we found that the prevalent compounds were retinoic acids and PPAR activators. For cervical cancer, our results suggested that potential drugs are likely to involve the EGFR pathway; and with the breast cancer dataset, we identified candidates involved in prostaglandin inhibition. Thus the gene-signature perturbation approach adds real value to the whole connectivity mapping process, allowing for increased specificity in the identification of possible therapeutic candidates.
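
The perturbation idea can be sketched as a leave-one-gene-out stability check on a toy connection score; the scoring rule here is a simplified stand-in for sscMap's statistic, and all gene names and values are made up:

```python
# A signature maps genes to expected direction of regulation (+1 up, -1 down);
# a drug profile maps genes to observed expression change. All values invented.
signature = {"geneA": +1, "geneB": +1, "geneC": -1, "geneD": +1, "geneE": -1}
drug_profile = {"geneA": 2.1, "geneB": 0.8, "geneC": -1.5, "geneD": 0.3, "geneE": -0.9}

def connection_score(sig, profile):
    """Signed agreement between signature directions and drug-induced changes."""
    return sum(direction * profile[g] for g, direction in sig.items())

def stable_under_perturbation(sig, profile):
    """True if dropping any single gene never flips the sign of the score."""
    full = connection_score(sig, profile)
    for g in sig:
        reduced = {k: v for k, v in sig.items() if k != g}
        if connection_score(reduced, profile) * full <= 0:
            return False
    return True
```

Connections that survive every perturbed signature are the ones worth prioritising for follow-up, which is the specificity gain the paragraph above describes.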


Posted in Computational biology | Comments Off

Humanization and Characterization of an Anti-Human TNF-α Murine Monoclonal Antibody

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

A murine monoclonal antibody, m357, showing highly neutralizing activity against human tumor necrosis factor (TNF-α), was chosen for humanization by a variable-domain resurfacing approach. The non-conserved surface residues in the framework regions of both the heavy- and light-chain variable regions were identified via a molecular model of m357 built by computer-assisted homology modeling. By replacing these critical surface residues with their human counterparts, a humanized version, h357, was generated. The humanized h357 IgG1 was then stably expressed in a mammalian cell line, and the purified antibody maintained high antigen-binding affinity, comparable to that of the parental m357, in a soluble TNF-α neutralization bioassay. Furthermore, h357 IgG1 can mediate antibody-dependent cell-mediated cytotoxicity and complement-dependent cytotoxicity upon binding to cells bearing the transmembrane form of TNF-α. In a mouse model of collagen antibody-induced arthritis, h357 IgG significantly inhibited disease progression when given by intraperitoneal injection of 50 µg/mouse once daily for 9 consecutive days. These results provide a basis for the development of h357 IgG for therapeutic use.


Posted in Computational biology | Comments Off

Mayday SeaSight: Combined Analysis of Deep Sequencing and Microarray Data

Written by Scott Christley et al. on January 31, 2011 – 8:00 am -

Recently emerged deep sequencing technologies offer new high-throughput methods to quantify gene expression, epigenetic modifications and DNA-protein binding. From a computational point of view, these data are very different from those produced by the established microarray technology, providing a new perspective on the samples under study and complementing microarray gene expression data. Software offering integrated analysis of data from different technologies is of growing importance as new data emerge in systems biology studies. MAYDAY is an extensible platform for visual data exploration and interactive analysis that provides many methods for dissecting complex transcriptome datasets. We present MAYDAY SEASIGHT, an extension that integrates data from different platforms, such as deep sequencing and microarrays. It offers methods for computing expression values from mapped reads and raw microarray data, for background correction and normalization, and for linking microarray probes to genomic coordinates. It is now possible to use MAYDAY's wealth of methods to analyze sequencing data and to combine data from different technologies in a single analysis.
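
As one example of computing expression values from mapped reads, the sketch below implements the standard RPKM measure; whether SEASIGHT uses exactly this formula is an assumption here, and the gene names and counts are illustrative:

```python
# Minimal RPKM calculation from mapped read counts (one of several expression
# measures a tool like MAYDAY SEASIGHT could derive; gene names illustrative).
def rpkm(counts, lengths_bp, total_mapped_reads):
    """Reads Per Kilobase of transcript per Million mapped reads."""
    return {
        gene: counts[gene] * 1e9 / (lengths_bp[gene] * total_mapped_reads)
        for gene in counts
    }

counts = {"geneA": 5000, "geneB": 200}     # mapped reads per gene
lengths = {"geneA": 2000, "geneB": 500}    # transcript lengths in bp
expr = rpkm(counts, lengths, total_mapped_reads=10_000_000)
```

Normalizing by both transcript length and library size is what makes such values comparable to (log-scale) microarray intensities within one combined analysis.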


Posted in Computational biology | Comments Off