Archive for the ‘Computer Science’ Category
Development of Immune-Specific Interaction Potentials and Their Application in the Multi-Agent-System VaccImm
Written by Scott Christley et al. on August 17, 2011 – 9:00 pm. By Anna Lena Woelke, Joachim von Eichborn, Manuela S. Murgueitio, Catherine L. Worth, Filippo Castiglione, Robert Preissner
Peptide vaccination in cancer therapy is a promising alternative to conventional methods. However, the parameters for this personalized treatment are difficult to access experimentally. In this respect, in silico models can help to narrow down the parameter space or to explain certain phenomena at a systems level. Herein, we develop two empirical interaction potentials specific to B-cell and T-cell receptor complexes and validate their applicability in comparison to a more general potential. The interaction potentials are applied to the model VaccImm which simulates the immune response against solid tumors under peptide vaccination therapy. This multi-agent system is derived from another immune system simulator (C-ImmSim) and now includes a module that enables the amino acid sequence of immune receptors and their ligands to be taken into account. The multi-agent approach is combined with approved methods for prediction of major histocompatibility complex (MHC)-binding peptides and the newly developed interaction potentials. In the analysis, we critically assess the impact of the different modules on the simulation with VaccImm and how they influence each other. In addition, we explore the reasons for failures in inducing an immune response by examining the activation states of the immune cell populations in detail. In summary, the present work introduces immune-specific interaction potentials and their application to the agent-based model VaccImm which simulates peptide vaccination in cancer therapy.
Tags: computer, news, science
Posted in Computer Science | Comments Off
Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models
Written by Scott Christley et al. on August 17, 2011 – 9:00 pm. By Mario Rojas Q., David Masip, Alexander Todorov, Jordi Vitria
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of research in fields as diverse as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings to make interaction more natural and to improve system performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) the prediction of perceived facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of face appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
Tags: computer, news, science
Posted in Computer Science | Comments Off
Landscape Ecotoxicology of Coho Salmon Spawner Mortality in Urban Streams
Written by Scott Christley et al. on August 17, 2011 – 9:00 pm. By Blake E. Feist, Eric R. Buhle, Paul Arnold, Jay W. Davis, Nathaniel L. Scholz
In the Pacific Northwest of the United States, adult coho salmon (Oncorhynchus kisutch) returning from the ocean to spawn in urban basins of the Puget Sound region have been prematurely dying at high rates (up to 90% of the total runs) for more than a decade. The current weight of evidence indicates that coho deaths are caused by toxic chemical contaminants in land-based runoff to urban streams during the fall spawning season. Non-point source pollution in urban landscapes typically originates from discrete urban and residential land use activities. In the present study we conducted a series of spatial analyses to identify correlations between land use and land cover (roadways, impervious surfaces, forests, etc.) and the magnitude of coho mortality in six streams with different drainage basin characteristics. We found that spawner mortality was most closely and positively correlated with the relative proportion of local roads, impervious surfaces, and commercial property within a basin. These and other correlated variables were used to identify unmonitored basins in the greater Seattle metropolitan area where recurrent coho spawner die-offs may be likely. This predictive map indicates a substantial geographic area of vulnerability for the Puget Sound coho population segment, a species of concern under the U.S. Endangered Species Act. Our spatial risk representation has numerous applications for urban growth management, coho conservation, and basin restoration (e.g., avoiding the unintentional creation of ecological traps). Moreover, the approach and tools are transferable to areas supporting coho throughout western North America.
Tags: computer, news, science
Posted in Computer Science | Comments Off
Efficient Replication of over 180 Genetic Associations with Self-Reported Medical Data
Written by Scott Christley et al. on August 17, 2011 – 9:00 pm. By Joyce Y. Tung, Chuong B. Do, David A. Hinds, Amy K. Kiefer, J. Michael Macpherson, Arnab B. Chowdry, Uta Francke, Brian T. Naughton, Joanna L. Mountain, Anne Wojcicki, Nicholas Eriksson
While the cost and speed of generating genomic data have come down dramatically in recent years, the slow pace of collecting medical data for large cohorts continues to hamper genetic research. Here we evaluate a novel online framework for obtaining large amounts of medical information from a recontactable cohort by assessing our ability to replicate genetic associations using these data. Using web-based questionnaires, we gathered self-reported data on 50 medical phenotypes from a generally unselected cohort of over 20,000 genotyped individuals. Of a list of genetic associations curated by NHGRI, we successfully replicated about 75% of the associations that we expected to (based on the number of cases in our cohort and reported odds ratios, and excluding a set of associations with contradictory published evidence). Altogether we replicated over 180 previously reported associations, including many for type 2 diabetes, prostate cancer, cholesterol levels, and multiple sclerosis. We found significant variation across categories of conditions in the percentage of expected associations that we were able to replicate, which may reflect systematic inflation of the effects in some initial reports, or differences across diseases in the likelihood of misdiagnosis or misreport. We also demonstrated that we could improve replication success by taking advantage of our recontactable cohort, offering more in-depth questions to refine self-reported diagnoses. Our data suggest that online collection of self-reported data from a recontactable cohort may be a viable method for both broad and deep phenotyping in large populations.
Tags: computer, news, science
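A replication check of this kind reduces, per SNP, to a 2×2 allele-count table and a confidence interval on the odds ratio. The sketch below is a generic Wald interval on the log odds ratio with invented counts, not the authors' pipeline:

```python
import math

# Hypothetical replication check for one SNP: a 2x2 table of allele counts
# (cases vs. controls, risk allele vs. other allele), the odds ratio, and a
# 95% Wald confidence interval on the log scale. Counts are invented.
def odds_ratio_ci(case_risk, case_other, ctrl_risk, ctrl_other):
    or_ = (case_risk * ctrl_other) / (case_other * ctrl_risk)
    se = math.sqrt(1 / case_risk + 1 / case_other
                   + 1 / ctrl_risk + 1 / ctrl_other)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(240, 160, 180, 220)
replicated = lo > 1.0  # interval excludes OR = 1 in the reported direction
print(round(or_, 3), replicated)  # → 1.833 True
```

A study replicates, in this simplified sense, when the interval excludes 1 on the same side as the originally reported odds ratio.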
Posted in Computer Science | Comments Off
OASIS: Online Application for the Survival Analysis of Lifespan Assays Performed in Aging Research
Written by Scott Christley et al. on August 15, 2011 – 9:00 pm. By Jae-Seong Yang, Hyun-Jun Nam, Mihwa Seo, Seong Kyu Han, Yonghwan Choi, Hong Gil Nam, Seung-Jae Lee, Sanguk Kim
Background: Aging is a fundamental biological process. Characterization of genetic and environmental factors that influence lifespan is a crucial step toward understanding the mechanisms of aging at the organism level. To capture the different effects of genetic and environmental factors on lifespan, appropriate statistical analyses are needed.
Methodology/Principal Findings: We developed an online application for survival analysis (OASIS) that helps researchers conduct the various statistical tasks involved in analyzing survival data in a user-friendly manner. OASIS provides standard survival analysis results, including Kaplan-Meier estimates and mean/median survival times, from censored survival data. OASIS also provides various statistical tests, including comparisons of mean survival time, of the overall survival curve, and of survival rates at specific time points. To visualize survival data, OASIS generates survival and log cumulative hazard plots that enable researchers to easily interpret their experimental results. Furthermore, we provide statistical methods that can analyze variances among survival datasets. In addition, users can analyze the proportional effects of risk factors on survival.
Conclusions/Significance: OASIS provides a platform that is essential to facilitate efficient statistical analyses of survival data in the field of aging research. The web application and a detailed description of the algorithms are accessible at http://sbi.postech.ac.kr/oasis.
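For readers unfamiliar with the core estimate OASIS reports, here is a minimal Kaplan-Meier sketch in Python over toy data (OASIS itself adds the tests, plots, and variance analyses described above):

```python
# Minimal Kaplan-Meier estimator (toy data; not the OASIS implementation).
def kaplan_meier(times, events):
    """times: observation times; events: 1 = death observed, 0 = censored.
    Returns a list of (time, survival probability) pairs at event times."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        at_t = sum(1 for tt, e in data if tt == t)
        if deaths:  # the survival curve only steps down at observed deaths
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= at_t  # censored and dead subjects both leave the risk set
        i += at_t
    return curve

# Example: five subjects; the one at day 12 is censored (lost to follow-up)
print(kaplan_meier([5, 8, 8, 12, 20], [1, 1, 1, 0, 1]))
# → [(5, 0.8), (8, 0.4), (20, 0.0)]
```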
Tags: computer, news, science
Posted in Computer Science | Comments Off
Strategy to Find Molecular Signatures in a Small Series of Rare Cancers: Validation for Radiation-Induced Breast and Thyroid Tumors
Written by Scott Christley et al. on August 11, 2011 – 9:00 pm. By Nicolas Ugolin, Catherine Ory, Emilie Lefevre, Nora Benhabiles, Paul Hofman, Martin Schlumberger, Sylvie Chevillard
Methods of classification using transcriptome analysis for case-by-case tumor diagnosis can be limited by tumor heterogeneity and by masked information in the gene expression profiles, especially when the number of tumors is small. We propose a new strategy, EMts_2PCA, based on: 1) The identification of a gene expression signature with a great potential for discriminating subgroups of tumors (EMts stage), which includes: a) a learning step, based on an expectation-maximization (EM) algorithm, to select sets of candidate genes whose expressions discriminate two subgroups; b) a training step to select, from the sets of candidate genes, those with the highest potential to classify training tumors; c) the compilation of genes selected during the training step, and standardization of their levels of expression to finalize the signature. 2) The predictive classification of independent prospective tumors, according to the two subgroups of interest, by the definition of a validation space based on a two-step principal component analysis (2PCA). The present method was evaluated by classifying three series of tumors, and its robustness, in terms of tumor clustering and prediction, was further compared with that of three classification methods (Gene expression bar code, Top-scoring pair(s), and a PCA-based method). Results showed that EMts_2PCA was very efficient in tumor classification and prediction, with scores always better than those obtained by the most common methods of tumor clustering. Specifically, EMts_2PCA permitted identification of highly discriminating molecular signatures to differentiate post-Chernobyl thyroid or post-radiotherapy breast tumors from their sporadic counterparts, which had previously been unclassified or classified with errors.
Tags: computer, news, science
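The EM learning step rests on standard mixture-model machinery. As a generic illustration only (not the authors' EMts implementation), a two-component one-dimensional Gaussian mixture fitted by EM can split samples into two subgroups on a single gene's expression:

```python
import math
import random

# Generic EM sketch: fit a two-component 1-D Gaussian mixture to one gene's
# expression values. Genes whose two fitted components separate cleanly
# would be candidates for a discriminating signature. Data are simulated.
def em_two_gaussians(xs, iters=200):
    mu1, mu2 = min(xs), max(xs)               # spread the initial means apart
    s1 = s2 = (max(xs) - min(xs)) / 4 or 1.0
    w = 0.5                                   # mixing weight of component 1
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        resp = []
        for x in xs:
            p1 = w * math.exp(-((x - mu1) / s1) ** 2 / 2) / s1
            p2 = (1 - w) * math.exp(-((x - mu2) / s2) ** 2 / 2) / s2
            resp.append(p1 / (p1 + p2))
        # M-step: re-estimate means, spreads, and weight from responsibilities
        n1 = sum(resp)
        n2 = len(xs) - n1
        mu1 = sum(r * x for r, x in zip(resp, xs)) / n1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / n2
        s1 = max(1e-3, math.sqrt(sum(r * (x - mu1) ** 2
                                     for r, x in zip(resp, xs)) / n1))
        s2 = max(1e-3, math.sqrt(sum((1 - r) * (x - mu2) ** 2
                                     for r, x in zip(resp, xs)) / n2))
        w = n1 / len(xs)
    return mu1, mu2

# Two simulated expression subgroups (low around 2.0, high around 6.0)
random.seed(0)
expr = ([random.gauss(2.0, 0.3) for _ in range(30)]
        + [random.gauss(6.0, 0.3) for _ in range(30)])
m_low, m_high = em_two_gaussians(expr)
```

In the paper's setting this step is repeated per candidate gene, and the separation quality feeds the subsequent training and 2PCA stages.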
Posted in Computer Science | Comments Off
A Bayesian Method for Evaluating and Discovering Disease Loci Associations
Written by Scott Christley et al. on August 10, 2011 – 9:00 pm. By Xia Jiang, M. Michael Barmada, Gregory F. Cooper, Michael J. Becich
Background: A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can concern a million SNPs and may soon concern billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need.
Methodology/Findings: We introduce the Bayesian network posterior probability (BNPP) method, which addresses these difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model, and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed based on the likelihoods of all competing hypotheses. The BNPP can not only be used to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. The results of experiments using simulated and real data sets are presented. Our results concerning simulated data sets indicate that the BNPP exhibits both better evaluation and discovery performance than does a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional findings emerge.
Conclusions/Significance: We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations.
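The normalization at the heart of such a method is plain Bayes: each candidate model's posterior is its prior-weighted likelihood normalized over all competing hypotheses. A sketch with made-up scores (BNPP obtains real likelihoods from a Bayesian network scoring criterion applied to DAG models):

```python
# Posterior over competing hypotheses from their likelihoods. The likelihood
# values below are invented for illustration only.
def posterior_probabilities(likelihoods, priors=None):
    hyps = list(likelihoods)
    if priors is None:  # uniform prior over the competing models
        priors = {h: 1.0 / len(hyps) for h in hyps}
    joint = {h: likelihoods[h] * priors[h] for h in hyps}
    z = sum(joint.values())  # normalizing constant over all hypotheses
    return {h: joint[h] / z for h in hyps}

# Competing hypotheses: no association, SNP1 alone, SNP1 and SNP2 jointly.
scores = {"null": 1e-6, "SNP1": 4e-6, "SNP1+SNP2": 5e-6}
post = posterior_probabilities(scores)
print(post)
```

Unlike a p-value, the result is directly comparable across hypotheses: here the joint two-locus model would carry about half of the posterior mass.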
Tags: computer, news, science
Posted in Computer Science | Comments Off
Integrated Mapping of Establishment Risk for Emerging Vector-Borne Infections: A Case Study of Canine Leishmaniasis in Southwest France
Written by Scott Christley et al. on August 9, 2011 – 9:00 pm. By Nienke Hartemink, Sophie O. Vanwambeke, Hans Heesterbeek, David Rogers, David Morley, Bernard Pesson, Clive Davies, Shazia Mahamdallie, Paul Ready
Background: Zoonotic visceral leishmaniasis is endemic in the Mediterranean Basin, where the dog is the main reservoir host. The disease's causative agent, Leishmania infantum, is transmitted by blood-feeding female sandflies. This paper reports an integrative study of canine leishmaniasis in a region of France spanning the southwest Massif Central and the northeast Pyrenees, where the vectors are the sandflies Phlebotomus ariasi and P. perniciosus.
Methods: Sandflies were sampled in 2005 using sticky traps placed uniformly over an area of approximately 100 by 150 km. High- and low-resolution satellite data for the area were combined to construct a model of the sandfly data, which was then used to predict sandfly abundance throughout the area on a pixel-by-pixel basis (resolution of c. 1 km). Using literature- and expert-derived estimates of other variables and parameters, a spatially explicit R0 map for leishmaniasis was constructed within a Geographical Information System. R0 is a measure of the risk of establishment of a disease in an area, and it also correlates with the amount of control needed to stop transmission.
Conclusions: To our knowledge, this is the first analysis that combines a vector abundance prediction model, based on remotely-sensed variables measured at different levels of spatial resolution, with a fully mechanistic process-based temperature-dependent R0 model. The resulting maps should be considered as proofs-of-principle rather than as ready-to-use risk maps, since validation is currently not possible. The described approach, based on integrating several modeling methods, provides a useful new set of tools for the study of the risk of outbreaks of vector-borne diseases.
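For orientation, a per-pixel R0 computation of this general kind looks roughly like the Ross-Macdonald-style sketch below. Every parameter value here is a placeholder, and the paper's actual R0 expression is temperature-dependent and parameterized from literature and expert estimates:

```python
import math

# Illustrative per-pixel R0 for a vector-borne disease, Ross-Macdonald style.
# All parameter names and values are placeholders, not those of the paper.
def r0(vector_host_ratio, biting_rate, p_vector_to_host, p_host_to_vector,
       vector_mortality, incubation_days, host_recovery_rate):
    # probability a vector survives the parasite's incubation period
    survives_incubation = math.exp(-vector_mortality * incubation_days)
    return (vector_host_ratio * biting_rate ** 2
            * p_vector_to_host * p_host_to_vector * survives_incubation
            / (vector_mortality * host_recovery_rate))

# An R0 "map" is this formula evaluated per pixel of predicted sandfly
# abundance (a 2x2 toy grid here, instead of ~1 km satellite-derived pixels).
abundance = [[2.0, 5.0], [0.5, 8.0]]
risk_map = [[r0(m, 0.3, 0.5, 0.1, 0.2, 7.0, 0.05) for m in row]
            for row in abundance]
```

Because R0 scales linearly with the vector-to-host ratio in this form, the spatial pattern of the map follows the predicted abundance surface directly.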
Tags: computer, news, science
Posted in Computer Science | Comments Off
Automatic Compilation from High-Level Biologically-Oriented Programming Language to Genetic Regulatory Networks
Written by Scott Christley et al. on August 5, 2011 – 9:00 pm. By Jacob Beal, Ting Lu, Ron Weiss
Background: The field of synthetic biology promises to revolutionize our ability to engineer biological systems, providing important benefits for a variety of applications. Recent advances in DNA synthesis and automated DNA assembly technologies suggest that it is now possible to construct synthetic systems of significant complexity. However, while a variety of novel genetic devices and small engineered gene networks have been successfully demonstrated, the regulatory complexity of synthetic systems that have been reported recently has somewhat plateaued due to a variety of factors, including the complexity of biology itself and the lag in our ability to design and optimize sophisticated biological circuitry.
Methodology/Principal Findings: To address the gap between DNA synthesis and circuit design capabilities, we present a platform that enables synthetic biologists to express desired behavior using a convenient high-level biologically-oriented programming language, Proto. The high-level specification is compiled, using a regulatory-motif-based mechanism, to a gene network, optimized, and then converted to a computational simulation for numerical verification. Through several example programs we illustrate the automated process of biological system design with our platform, and show that our compiler optimizations can yield significant reductions in the number of genes and in the latency of the optimized engineered gene networks.
Conclusions/Significance: Our platform provides a convenient and accessible tool for the automated design of sophisticated synthetic biological systems, bridging an important gap between DNA synthesis and circuit design capabilities. Our platform is user-friendly and features biologically relevant compiler optimizations, providing an important foundation for the development of sophisticated biological systems.
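To give a flavor of what motif-based compilation means, here is a toy lowering of a boolean specification onto invented regulatory motifs. The motif names, signal names, and the whole scheme are hypothetical; Proto's real compiler targets genetic parts and then emits a simulation for numerical verification:

```python
# Toy "motif-based" compiler: lower a nested boolean expression to a list of
# abstract regulatory motifs, inventing a fresh signal per subexpression.
# Everything here is illustrative, not Proto's actual target representation.
def compile_expr(expr, motifs=None, fresh=None):
    if motifs is None:
        motifs, fresh = [], [0]
    if isinstance(expr, str):      # a primitive input signal: nothing to emit
        return expr, motifs
    op, *args = expr
    ins = [compile_expr(a, motifs, fresh)[0] for a in args]
    fresh[0] += 1
    out = f"s{fresh[0]}"           # fresh intermediate signal name
    if op == "not":                # NOT -> a single repressor motif
        motifs.append(("repress", ins[0], out))
    elif op == "and":              # AND -> joint activation of the output
        motifs.append(("activate_joint", ins[0], ins[1], out))
    elif op == "or":               # OR -> either input activates the output
        motifs.append(("activate", ins[0], out))
        motifs.append(("activate", ins[1], out))
    else:
        raise ValueError(f"unknown op: {op}")
    return out, motifs

out, network = compile_expr(("not", ("and", "IPTG", "aTc")))
print(out, network)
# → s2 [('activate_joint', 'IPTG', 'aTc', 's1'), ('repress', 's1', 's2')]
```

An optimizer of the kind the abstract describes would then rewrite such motif lists, for example merging redundant intermediate signals to reduce gene count and latency.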
Tags: computer, news, science
Posted in Computer Science | Comments Off
Unraveling Spurious Properties of Interaction Networks with Tailored Random Networks
Written by Scott Christley et al. on August 5, 2011 – 9:00 pm. By Stephan Bialonski, Martin Wendler, Klaus Lehnertz
We investigate interaction networks derived from multivariate time series with methods frequently employed in diverse scientific fields such as biology, quantitative finance, physics, the earth and climate sciences, and the neurosciences. Mimicking experimental situations, we generate time series of finite length and varying frequency content from independent stochastic processes. Using the correlation coefficient and the maximum cross-correlation, we estimate interdependencies between these time series. Judged by the clustering coefficient and the average shortest path length, the unweighted interaction networks derived by thresholding the interdependence values possess non-trivial topologies compared to Erdős–Rényi networks, which would indicate small-world characteristics. These topologies reflect the mostly unavoidable finiteness of the data, which limits the reliability of typically used estimators of signal interdependence. We propose random networks that are tailored to the way interaction networks are derived from empirical data. Through an exemplary investigation of multichannel electroencephalographic recordings of epileptic seizures – known for their complex spatial and temporal dynamics – we show that such random networks help to distinguish network properties of interdependence structures related to seizure dynamics from those spuriously induced by the applied methods of analysis.
Tags: computer, news, science
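The pitfall itself can be reproduced in a few lines (a sketch, not the authors' analysis pipeline): threshold pairwise correlations of independent, finite-length noise, and edges appear anyway, purely from estimator variance:

```python
import random

# Independent finite-length noise series still yield nonzero sample
# correlations, so thresholding produces edges with no true interaction.
# All parameters below are arbitrary choices for illustration.
def corr(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n_nodes, length, threshold = 20, 50, 0.25
series = [[random.gauss(0, 1) for _ in range(length)] for _ in range(n_nodes)]
edges = {(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
         if abs(corr(series[i], series[j])) > threshold}
print(len(edges), "edges among", n_nodes, "mutually independent signals")
```

The paper's tailored random networks are built to share exactly this kind of finite-sample edge structure, so that genuine interdependence stands out against it.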
Posted in Computer Science | Comments Off
