ERIC Educational Resources Information Center
Kaya, Osman Nafiz; Dogan, Alev; Gokcek, Nur; Kilic, Ziya; Kilic, Esma
2007-01-01
The purpose of this study was to investigate the effects of the multiple intelligences (MI) teaching approach on 8th-grade students' achievement in and attitudes toward science. This study used a pretest-posttest control-group experimental design. While the experimental group (n=30) was taught a unit on acids and bases using the MI teaching approach, the…
Metabolic pathways as possible therapeutic targets for progressive multiple sclerosis.
Heidker, Rebecca M; Emerson, Mitchell R; LeVine, Steven M
2017-08-01
Unlike relapsing-remitting multiple sclerosis, there are very few therapeutic options for patients with progressive forms of multiple sclerosis. While immune mechanisms are key participants in the pathogenesis of relapsing-remitting multiple sclerosis, the mechanisms underlying the development of progressive multiple sclerosis are less well understood. Putative mechanisms behind progressive multiple sclerosis have been put forth: insufficient energy production via mitochondrial dysfunction, activated microglia, iron accumulation, oxidative stress, activated astrocytes, Wallerian degeneration, apoptosis, etc. Furthermore, repair processes such as remyelination are incomplete. Experimental therapies that strive to improve metabolism within neurons and glia, e.g., oligodendrocytes, could act to counter inadequate energy supplies and/or support remyelination. Most experimental approaches have been examined as standalone interventions; however, it is apparent that the biochemical steps being targeted are part of larger pathways, which are further intertwined with other metabolic pathways. Thus, the potential benefits of a tested intervention, or of an established therapy, e.g., ocrelizumab, could be undermined by constraints on upstream and/or downstream steps. If correct, then this argues for a more comprehensive, multifaceted approach to therapy. Here we review experimental approaches to support neuronal and glial metabolism, and/or promote remyelination, which may have the potential to lessen or delay progressive multiple sclerosis.
Bioinformatics approaches to predict target genes from transcription factor binding data.
Essebier, Alexandra; Lamprecht, Marnie; Piper, Michael; Bodén, Mikael
2017-12-01
Transcription factors regulate gene expression and play an essential role in development by maintaining proliferative states, driving cellular differentiation and determining cell fate. Transcription factors are capable of regulating multiple genes over potentially long distances making target gene identification challenging. Currently available experimental approaches to detect distal interactions have multiple weaknesses that have motivated the development of computational approaches. Although an improvement over experimental approaches, existing computational approaches are still limited in their application, with different weaknesses depending on the approach. Here, we review computational approaches with a focus on data dependency, cell type specificity and usability. With the aim of identifying transcription factor target genes, we apply available approaches to typical transcription factor experimental datasets. We show that approaches are not always capable of annotating all transcription factor binding sites; binding sites should be treated disparately; and a combination of approaches can increase the biological relevance of the set of genes identified as targets. Copyright © 2017 Elsevier Inc. All rights reserved.
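The simplest target-assignment rule that such reviews compare against is nearest-gene annotation: map each binding site to the closest transcription start site (TSS), leaving distal sites unannotated. A minimal sketch of that rule, with made-up gene coordinates and a hypothetical distance cutoff, also illustrates why some binding sites end up with no target at all:

```python
# Nearest-TSS target assignment (toy example; gene names, coordinates, and the
# distance cutoff are invented for illustration, not taken from the review).

TSS = {"GeneA": 1_000, "GeneB": 50_000, "GeneC": 400_000}
MAX_DISTANCE = 100_000  # bp; hypothetical cutoff for a plausible interaction

def assign_target(site_pos):
    """Return the gene with the closest TSS, or None if no TSS is near enough."""
    gene, tss = min(TSS.items(), key=lambda kv: abs(kv[1] - site_pos))
    return gene if abs(tss - site_pos) <= MAX_DISTANCE else None

sites = [2_500, 48_000, 240_000, 390_000]
print([assign_target(s) for s in sites])
# -> ['GeneA', 'GeneB', None, 'GeneC']
```

The site at 240,000 bp stays unannotated under this rule, which is exactly the kind of gap that motivates combining several annotation approaches.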
Experimental studies of systematic multiple-energy operation at HIMAC synchrotron
NASA Astrophysics Data System (ADS)
Mizushima, K.; Katagiri, K.; Iwata, Y.; Furukawa, T.; Fujimoto, T.; Sato, S.; Hara, Y.; Shirai, T.; Noda, K.
2014-07-01
Multiple-energy synchrotron operation providing carbon-ion beams with various energies has been used for scanned particle therapy at NIRS. An energy range from 430 to 56 MeV/u and about 200 steps within this range are required to vary the Bragg peak position for effective treatment. The treatment also demands the slow extraction of beam with highly reliable properties, such as spill, position and size, for all energies. We propose an approach to generating multiple-energy operation meeting these requirements within a short time. In this approach, the device settings at most energy steps are determined without manual adjustments by using systematic parameter tuning depending on the beam energy. Experimental verification was carried out at the HIMAC synchrotron, and its results proved that this approach can greatly reduce the adjustment period.
Boyd, Philip W; Collins, Sinead; Dupont, Sam; Fabricius, Katharina; Gattuso, Jean-Pierre; Havenhand, Jonathan; Hutchins, David A; Riebesell, Ulf; Rintoul, Max S; Vichi, Marcello; Biswas, Haimanti; Ciotti, Aurea; Gao, Kunshan; Gehlen, Marion; Hurd, Catriona L; Kurihara, Haruko; McGraw, Christina M; Navarro, Jorge M; Nilsson, Göran E; Passow, Uta; Pörtner, Hans-Otto
2018-06-01
Marine life is controlled by multiple physical and chemical drivers and by diverse ecological processes. Many of these oceanic properties are being altered by climate change and other anthropogenic pressures. Hence, identifying the influences of multifaceted ocean change, from local to global scales, is a complex task. To guide policy-making and make projections of the future of the marine biosphere, it is essential to understand biological responses at physiological, evolutionary and ecological levels. Here, we contrast and compare different approaches to multiple driver experiments that aim to elucidate biological responses to a complex matrix of ocean global change. We present the benefits and the challenges of each approach with a focus on marine research, and guidelines to navigate through these different categories to help identify strategies that might best address research questions in fundamental physiology, experimental evolutionary biology and community ecology. Our review reveals that the field of multiple driver research is being pulled in complementary directions: the need for reductionist approaches to obtain process-oriented, mechanistic understanding and a requirement to quantify responses to projected future scenarios of ocean change. We conclude the review with recommendations on how best to align different experimental approaches to contribute fundamental information needed for science-based policy formulation. © 2018 John Wiley & Sons Ltd.
USDA-ARS?s Scientific Manuscript database
A number of recent soil biota studies have deviated from the standard experimental approach of generating a distinct data value for each experimental unit (e.g. Yang et al., 2013; Gundale et al., 2014). Instead, these studies have mixed together soils from multiple experimental units (i.e. sites wi...
Multiple Source DF (Direction Finding) Signal Processing: An Experimental System
The MUltiple SIgnal Characterization (MUSIC) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of…the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been
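As a rough illustration of the signal-subspace idea behind MUSIC (a generic textbook sketch, not this report's experimental system), the pseudospectrum for a simulated uniform linear array can be computed from the noise-subspace eigenvectors of the sample covariance:

```python
import numpy as np

def music_spectrum(X, n_sources, grid_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array (ULA).
    X: (n_sensors, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_sources]   # noise-subspace eigenvectors
    k = np.arange(n_sensors)
    P = np.empty(len(grid_deg))
    for i, theta in enumerate(grid_deg):
        a = np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta)))  # steering vector
        P[i] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2  # peaks where a ⟂ noise subspace
    return P

# Simulated data: two uncorrelated sources at -20 and 35 degrees, 8-element ULA
rng = np.random.default_rng(0)
true_deg = np.array([-20.0, 35.0])
k = np.arange(8)
A = np.exp(2j * np.pi * 0.5 * np.outer(k, np.sin(np.deg2rad(true_deg))))
S = rng.standard_normal((2, 2000)) + 1j * rng.standard_normal((2, 2000))
noise = 0.1 * (rng.standard_normal((8, 2000)) + 1j * rng.standard_normal((8, 2000)))
X = A @ S + noise

grid = np.linspace(-90, 90, 361)               # 0.5-degree scan grid
P = music_spectrum(X, n_sources=2, grid_deg=grid)
maxima = [i for i in range(1, len(P) - 1) if P[i] > P[i - 1] and P[i] > P[i + 1]]
est = sorted(float(grid[i]) for i in sorted(maxima, key=lambda i: P[i])[-2:])
print(est)  # expected close to [-20.0, 35.0]
```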
Sheahan, Linda; While, Alison; Bloomfield, Jacqueline
2015-12-01
The teaching and learning of clinical skills is a key component of nurse education programmes. The clinical competency of pre-registration nursing students has raised questions about the proficiency of teaching strategies for clinical skill acquisition within pre-registration education. This study aimed to test the effectiveness of teaching clinical skills using a multiple intelligences teaching approach (MITA) compared with the conventional teaching approach. A randomised controlled trial was conducted. Participants were randomly allocated to an experimental group (MITA intervention) (n=46) and a control group (conventional teaching) (n=44) to learn clinical skills. The setting was one Irish third-level educational institution; participants were all first-year nursing students (n=90). The experimental group was taught using MITA delivered by the researcher, while the control group was taught by a team of six experienced lecturers. Participant preference for learning was measured by the Index of Learning Styles (ILS). Participants' multiple intelligence (MI) preferences were measured with a multiple intelligences development assessment scale (MIDAS). All participants were assessed using the same objective structured clinical examination (OSCE) at the end of semester one and semester two. MI assessment preferences were measured by a multiple intelligences assessment preferences questionnaire. The MITA intervention was evaluated using a questionnaire. The strongest preference on the ILS for both groups was the sensing style. The highest MI was interpersonal intelligence. Participants in the experimental group had higher scores in all three OSCEs (p<0.05) at Time 1, suggesting that MITA had a positive effect on clinical skill acquisition. Most participants favoured practical examinations, followed by multiple-choice questions, as methods of assessment. MITA was evaluated positively.
The study findings support the use of MITA for clinical skills teaching and advance the understanding of how MI teaching approaches may be used in nursing education. Copyright © 2015 Elsevier Ltd. All rights reserved.
Experimental design matters for statistical analysis: how to handle blocking.
Jensen, Signe M; Schaarschmidt, Frank; Onofri, Andrea; Ritz, Christian
2018-03-01
Nowadays, evaluation of the effects of pesticides often relies on experimental designs that involve multiple concentrations of the pesticide of interest or multiple pesticides at specific comparable concentrations and, possibly, secondary factors of interest. Unfortunately, the experimental design is often more or less neglected when analysing data. Two data examples were analysed using different modelling strategies. First, in a randomized complete block design, mean heights of maize treated with a herbicide and one of several adjuvants were compared. Second, translocation of an insecticide applied to maize as a seed treatment was evaluated using incomplete data from an unbalanced design with several layers of hierarchical sampling. Extensive simulations were carried out to further substantiate the effects of different modelling strategies. It was shown that results from suboptimal approaches (two-sample t-tests and ordinary ANOVA assuming independent observations) may be both quantitatively and qualitatively different from the results obtained using an appropriate linear mixed model. The simulations demonstrated that the different approaches may lead to differences in coverage percentages of confidence intervals and type 1 error rates, confirming that misleading conclusions can easily happen when an inappropriate statistical approach is chosen. To ensure that experimental data are summarized appropriately, avoiding misleading conclusions, the experimental design should duly be reflected in the choice of statistical approaches and models. We recommend that author guidelines should explicitly point out that authors need to indicate how the statistical analysis reflects the experimental design. © 2017 Society of Chemical Industry.
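The gap between a naive analysis and a block-aware one can be shown with a small simulation (hypothetical effect sizes; this is not the paper's maize data). In a randomized complete block design with two treatments, the treatment-effect point estimate is the same either way, but the standard error assumed under independence is badly inflated by block-to-block variation:

```python
import math
import random
import statistics

random.seed(42)

def simulate_rcbd(n_blocks=8, block_sd=3.0, noise_sd=1.0, effect=1.0):
    """One randomized complete block experiment: two treatments, one plot per block each."""
    data = []
    for _ in range(n_blocks):
        u = random.gauss(0, block_sd)  # shared block effect
        data.append((u + random.gauss(0, noise_sd),            # control plot
                     effect + u + random.gauss(0, noise_sd)))  # treated plot
    return data

def standard_errors(data):
    """SE of the treatment effect: naive (independence assumed) vs block-aware (paired)."""
    n = len(data)
    control = [c for c, t in data]
    treated = [t for c, t in data]
    diffs = [t - c for c, t in data]
    se_naive = math.sqrt(statistics.variance(control) / n + statistics.variance(treated) / n)
    se_block = statistics.stdev(diffs) / math.sqrt(n)
    return se_naive, se_block

# Average over many simulated experiments: ignoring the blocking grossly inflates
# the standard error, distorting confidence intervals and tests.
runs = [standard_errors(simulate_rcbd()) for _ in range(2000)]
mean_naive = statistics.mean(r[0] for r in runs)
mean_block = statistics.mean(r[1] for r in runs)
print(round(mean_naive, 2), round(mean_block, 2))  # naive SE is several times larger
```

A full analysis would use a linear mixed model as the paper recommends; the paired contrast above is the simplest block-respecting special case.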
Laing, Nigel G
2008-01-01
A multiplicity of experimental approaches to therapy for genetic muscle diseases is currently being investigated. These include replacement of the missing gene, manipulation of the gene message, repair of the mutation, upregulation of an alternative gene and pharmacological interventions targeting a number of systems. A number of these approaches are in clinical trials. There is considerable anticipation that perhaps more than one of the approaches will finally prove of clinical benefit, but there are many voices of caution. No matter which approaches might ultimately prove effective, there is a consensus that for most benefit to patients it will be necessary to start treatment as early as possible. A consensus is also developing that the only way to do this is to implement population-based newborn screening to identify affected children shortly after birth. Population-based newborn screening is currently practised in very few places in the world, and it brings with it implications for prevention rather than cure of genetic muscle diseases.
2014-09-30
beaked whales, and shallow-diving mysticetes, with a focus on humpback whales. …obtained via large-aperture vertical array techniques (for humpback whales). APPROACH: The experimental approach used by this project uses data…m depth. The motivation behind these multiple deployments is that multiple techniques can be used to estimate humpback whale call position, and
Ferber, Julia; Schneider, Gudrun; Havlik, Linda; Heuft, Gereon; Friederichs, Hendrik; Schrewe, Franz-Bernhard; Schulz-Steinel, Andrea; Burgmer, Markus
2014-01-01
To improve the synergy of established methods of teaching, the Department of Psychosomatics and Psychotherapy, University Hospital Münster, developed a web-based e-learning tool using video clips of standardized patients. The effect of this blended-learning approach was evaluated. A multiple-choice test was performed by a naive cohort (without the e-learning tool) and an experimental cohort (with the tool) of medical students to test the groups' expertise in psychosomatics. In addition, participants' satisfaction with the new tool was evaluated (numeric rating scale of 0-10). The experimental cohort was more satisfied with the curriculum and more interested in psychosomatics. Furthermore, the experimental cohort scored significantly better on the multiple-choice test. The new tool proved to be an important addition to the classical curriculum as a blended-learning approach that improves students' satisfaction and knowledge in psychosomatics.
Multiple electron processes of He and Ne by proton impact
NASA Astrophysics Data System (ADS)
Terekhin, Pavel Nikolaevich; Montenegro, Pablo; Quinto, Michele; Monti, Juan; Fojon, Omar; Rivarola, Roberto
2016-05-01
A detailed investigation of multiple electron processes (single and multiple ionization, single capture, transfer-ionization) of He and Ne is presented for proton impact at intermediate and high collision energies. Exclusive absolute cross sections for these processes have been obtained by calculating transition probabilities in the independent-electron and independent-event models as a function of impact parameter, in the framework of the continuum distorted wave-eikonal initial state theory. A binomial analysis is employed to calculate exclusive probabilities. The comparison with available theoretical and experimental results shows that exclusive probabilities are needed for a reliable description of the experimental data. The developed approach can be used to obtain the input database for modeling multiple electron processes of charged particles passing through matter.
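The binomial analysis has a compact form: for N equivalent, independent electrons with single-electron probability p(b) at impact parameter b, the exclusive probability that exactly q electrons undergo the process is C(N,q) p^q (1-p)^(N-q), and integrating 2πb P_q(b) over b gives the cross section. A sketch with a hypothetical Gaussian p(b), not the paper's CDW-EIS probabilities:

```python
import math

def exclusive_probability(p, n_electrons, q):
    """Binomial (independent-electron) probability that exactly q of
    n_electrons undergo the process, given single-electron probability p."""
    return math.comb(n_electrons, q) * p**q * (1 - p) ** (n_electrons - q)

def cross_section(p_of_b, n_electrons, q, b_max=10.0, steps=2000):
    """sigma_q = 2*pi * integral of b * P_q(b) db (trapezoidal rule, arbitrary units)."""
    h = b_max / steps
    total = 0.0
    for i in range(steps + 1):
        b = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * b * exclusive_probability(p_of_b(b), n_electrons, q)
    return 2 * math.pi * h * total

# Hypothetical single-electron ionization probability with Gaussian b-dependence
p_single = lambda b: 0.4 * math.exp(-b * b / 4.0)
sigma_1 = cross_section(p_single, n_electrons=2, q=1)  # single ionization (He-like, N=2)
sigma_2 = cross_section(p_single, n_electrons=2, q=2)  # double ionization
print(sigma_1 > sigma_2)  # double ionization is rarer -> True
```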
Nwagu, Evelyn N; Ezedum, Chuks E; Nwagu, Eric K N
2015-09-01
The rising incidence of drug abuse among youths in Nigeria is a source of concern for health educators. This study was carried out on primary six pupils to determine the effect of a Multiple Intelligences Teaching Approach Drug Education Programme (MITA-DEP) on pupils' acquisition of drug refusal skills. A programme of drug education based on the Multiple Intelligences Teaching Approach (MITA) was developed. An experimental group was taught using this programme while a control group was taught using the same programme but developed based on the Traditional Teaching Approach. Pupils taught with the MITA acquired more drug refusal skills than those taught with the Traditional Teaching Approach. Urban pupils taught with the MITA acquired more skills than rural pupils. There was no statistically significant difference in the mean refusal skills of male and female pupils taught with the MITA. © The Author(s) 2014.
Cognitive Models: The Missing Link to Learning Fraction Multiplication and Division
ERIC Educational Resources Information Center
de Castro, Belinda V.
2008-01-01
This quasi-experimental study aims to streamline cognitive models on fraction multiplication and division that contain the most worthwhile features of other existing models. Its exploratory nature and its approach to proof elicitation can be used to help establish its effectiveness in building students' understanding of fractions as compared to…
Representation and presentation of requirements knowledge
NASA Technical Reports Server (NTRS)
Johnson, W. L.; Feather, Martin S.; Harris, David R.
1992-01-01
An approach to representation and presentation of knowledge used in ARIES, an experimental requirements/specification environment, is described. The approach applies the notion of a representation architecture to the domain of software engineering and incorporates a strong coupling to a transformation system. It is characterized by a single highly expressive underlying representation, interfaced simultaneously to multiple presentations, each with notations of differing degrees of expressivity. This enables analysts to use multiple languages for describing systems and have these descriptions yield a single consistent model of the system.
Nature plus nurture: the triggering of multiple sclerosis.
Wekerle, Hartmut
2015-01-01
Recent clinical and experimental studies indicate that multiple sclerosis develops as consequence of a failed interplay between genetic ("nature") and environmental ("nurture") factors. A large number of risk genes favour an autoimmune response against the body's own brain matter. New experimental data indicate that the actual trigger of this attack is however provided by an interaction of brain-specific immune cells with components of the regular commensal gut flora, the intestinal microbiota. This concept opens the way for new therapeutic approaches involving modulation of the microbiota by dietary or antibiotic regimens.
ERIC Educational Resources Information Center
McKay, Mary M.; Gopalan, Geetha; Franco, Lydia; Dean-Assael, Kara; Chacko, Anil; Jackson, Jerrold M.; Fuss, Ashley
2011-01-01
This article presents preliminary outcomes associated with an experimental, longitudinal study of a Multiple Family Group (MFG) service delivery approach set within 13 urban outpatient clinics serving children and their families living in inner-city, primarily African American and Latino communities. Specifically, this article focuses on parent…
A reproducible approach to high-throughput biological data acquisition and integration
Rahnavard, Gholamali; Waldron, Levi; McIver, Lauren; Shafquat, Afrah; Franzosa, Eric A.; Miropolsky, Larissa; Sweeney, Christopher
2015-01-01
Modern biological research requires rapid, complex, and reproducible integration of multiple experimental results generated both internally and externally (e.g., from public repositories). Although large systematic meta-analyses are among the most effective approaches both for clinical biomarker discovery and for computational inference of biomolecular mechanisms, identifying, acquiring, and integrating relevant experimental results from multiple sources for a given study can be time-consuming and error-prone. To enable efficient and reproducible integration of diverse experimental results, we developed a novel approach for standardized acquisition and analysis of high-throughput and heterogeneous biological data. This allowed, first, novel biomolecular network reconstruction in human prostate cancer, which correctly recovered and extended the NFκB signaling pathway. Next, we investigated host-microbiome interactions. In less than an hour of analysis time, the system retrieved data and integrated six germ-free murine intestinal gene expression datasets to identify the genes most influenced by the gut microbiota, which comprised a set of immune-response and carbohydrate metabolism processes. Finally, we constructed integrated functional interaction networks to compare connectivity of peptide secretion pathways in the model organisms Escherichia coli, Bacillus subtilis, and Pseudomonas aeruginosa. PMID:26157642
Nair, Ajay K; Sasidharan, Arun; John, John P; Mehrotra, Seema; Kutty, Bindu M
2016-01-01
The present study describes the development of a neurocognitive paradigm: "Assessing Neurocognition via Gamified Experimental Logic" (ANGEL), for performing the parametric evaluation of multiple neurocognitive functions simultaneously. ANGEL employs an audiovisual sensory motor design for the acquisition of multiple event related potentials (ERPs)-the C1, P50, MMN, N1, N170, P2, N2pc, LRP, P300, and ERN. The ANGEL paradigm allows assessment of 10 neurocognitive variables over the course of three "game" levels of increasing complexity ranging from simple passive observation to complex discrimination and response in the presence of multiple distractors. The paradigm allows assessment of several levels of rapid decision making: speeded up response vs. response-inhibition; responses to easy vs. difficult tasks; responses based on gestalt perception of clear vs. ambiguous stimuli; and finally, responses with set shifting during challenging tasks. The paradigm has been tested using 18 healthy participants from both sexes and the possibilities of varied data analyses have been presented in this paper. The ANGEL approach provides an ecologically valid assessment (as compared to existing tools) that quickly yields a very rich dataset and helps to assess multiple ERPs that can be studied extensively to assess cognitive functions in health and disease conditions.
Forging our understanding of lncRNAs in the brain.
Andersen, Rebecca E; Lim, Daniel A
2018-01-01
During both development and adulthood, the human brain expresses many thousands of long noncoding RNAs (lncRNAs), and aberrant lncRNA expression has been associated with a wide range of neurological diseases. Although the biological significance of most lncRNAs remains to be discovered, it is now clear that certain lncRNAs carry out important functions in neurodevelopment, neural cell function, and perhaps even diseases of the human brain. Given the relatively inclusive definition of lncRNAs (transcripts longer than 200 nucleotides with essentially no protein-coding potential), this class of noncoding transcript is both large and very diverse. Furthermore, emerging data indicate that lncRNA genes can act via multiple, non-mutually exclusive molecular mechanisms, and specific functions are difficult to predict from lncRNA expression or sequence alone. Thus, the different experimental approaches used to explore the role of a lncRNA might each shed light upon distinct facets of its overall molecular mechanism, and combining multiple approaches may be necessary to fully illuminate the function of any particular lncRNA. To understand how lncRNAs affect brain development and neurological disease, in vivo studies of lncRNA function are required. Thus, in this review, we focus our discussion upon a small set of neural lncRNAs that have been experimentally manipulated in mice. Together, these examples illustrate how studies of individual lncRNAs using multiple experimental approaches can help reveal the richness and complexity of lncRNA function in both neurodevelopment and diseases of the brain.
The SAGE Model of Social Psychological Research.
Power, Séamus A; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-05-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed.
Behavioral Modeling of Adversaries with Multiple Objectives in Counterterrorism.
Mazicioglu, Dogucan; Merrick, Jason R W
2018-05-01
Attacker/defender models have primarily assumed that each decision-maker optimizes the cost of the damage inflicted and its economic repercussions from their own perspective. Two streams of recent research have sought to extend such models. One stream suggests that it is more realistic to consider attackers with multiple objectives, but this research has not included the adaptation of the terrorist with multiple objectives to defender actions. The other stream builds on experimental studies showing that decision-makers deviate from optimal rational behavior. In this article, we extend attacker/defender models to incorporate multiple objectives that a terrorist might consider in planning an attack. This includes the tradeoffs that a terrorist might consider and their adaptation to defender actions. However, we must also consider experimental evidence of deviations from the rationality assumed in the commonly used expected utility model in determining such adaptation. Thus, we model the attacker's behavior using multiattribute prospect theory to account for the attacker's multiple objectives and deviations from rationality. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. We discuss the problems with implementing such an approach, but argue that research in this area must continue to avoid misrepresenting terrorist behavior in determining optimal defensive actions. © 2017 Society for Risk Analysis.
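The prospect-theory machinery such models build on rests on Tversky and Kahneman's value function: concave for gains, convex and steeper for losses relative to a reference point. A minimal single-attribute sketch, using the commonly cited parameter estimates purely for illustration (this is not the authors' multiattribute model):

```python
# Standard prospect-theory value function; ALPHA/BETA/LAMBDA are the widely
# quoted Tversky-Kahneman estimates, used here only as illustrative defaults.
ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

def pt_value(x):
    """Value of outcome x relative to the reference point 0:
    concave for gains, convex and loss-averse (steeper) for losses."""
    return x**ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

# Losses loom larger than equal-sized gains, so a symmetric gamble has
# negative prospect-theory value -- the kind of deviation from expected
# utility that changes the predicted attacker response to defender actions.
print(pt_value(10) + pt_value(-10) < 0)  # -> True
```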
ERIC Educational Resources Information Center
Scannell, Dale P.; Haugh, Oscar M.
The purpose of the study was to compare the effectiveness with which composition skills could be taught by the traditional theme-assignment approach and by an experimental method using weekly multiple-choice composition tests in lieu of theme writing. The weekly tests were based on original but typical first-draft compositions and covered problems…
FireProt: Energy- and Evolution-Based Computational Design of Thermostable Multiple-Point Mutants.
Bednar, David; Beerens, Koen; Sebestova, Eva; Bendl, Jaroslav; Khare, Sagar; Chaloupkova, Radka; Prokop, Zbynek; Brezovsky, Jan; Baker, David; Damborsky, Jiri
2015-11-01
There is great interest in increasing proteins' stability to enhance their utility as biocatalysts, therapeutics, diagnostics and nanomaterials. Directed evolution is a powerful but experimentally strenuous approach. Computational methods offer attractive alternatives. However, due to the limited reliability of predictions and potentially antagonistic effects of substitutions, only single-point mutations are usually predicted in silico, experimentally verified and then recombined in multiple-point mutants. Thus, substantial screening is still required. Here we present FireProt, a robust computational strategy for predicting highly stable multiple-point mutants that combines energy- and evolution-based approaches with smart filtering to identify additive stabilizing mutations. FireProt's reliability and applicability were demonstrated by validating its predictions against 656 mutations from the ProTherm database. We demonstrate that the thermostability of the model enzymes haloalkane dehalogenase DhaA and γ-hexachlorocyclohexane dehydrochlorinase LinA can be substantially increased (ΔTm = 24°C and 21°C) by constructing and characterizing only a handful of multiple-point mutants. FireProt can be applied to any protein for which a tertiary structure and homologous sequences are available, and will facilitate the rapid development of robust proteins for biomedical and biotechnological applications.
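FireProt's actual pipeline is not reproduced here, but the core idea of filtering single-point predictions and combining the survivors under an additivity assumption can be caricatured in a few lines. All thresholds, positions and ΔΔG values below are invented for illustration:

```python
# Toy sketch (not the real FireProt pipeline): keep single-point mutations
# predicted to stabilize (ddG below a cutoff), drop mutations at positions
# close enough to risk antagonistic interactions, and combine the survivors
# into one multiple-point mutant assuming additivity of ddG values.

STABILITY_CUTOFF = -1.0  # kcal/mol; hypothetical threshold
MIN_SEPARATION = 2       # residues; hypothetical anti-interaction filter

candidates = [           # (position, mutation, predicted ddG) -- made-up values
    (14, "A14V", -1.8),
    (15, "G15P", -1.2),  # too close to position 14 -> filtered out below
    (57, "S57F", -2.4),
    (80, "K80R", -0.3),  # not stabilizing enough -> filtered out
    (131, "D131E", -1.1),
]

stabilizing = [c for c in candidates if c[2] <= STABILITY_CUTOFF]
stabilizing.sort(key=lambda c: c[2])  # most stabilizing first

selected = []
for pos, name, ddg in stabilizing:
    if all(abs(pos - p) >= MIN_SEPARATION for p, _, _ in selected):
        selected.append((pos, name, ddg))

combined_ddg = sum(ddg for _, _, ddg in selected)  # additivity assumption
print([name for _, name, _ in selected], round(combined_ddg, 1))
# -> ['S57F', 'A14V', 'D131E'] -5.3
```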
Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?
Haro, Juan; Ferré, Pilar
2018-06-01
It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these inconsistent findings may be due to the approach employed to select ambiguous words across studies. To address this issue, we conducted three LDT experiments in which we varied the measure used to classify ambiguous and unambiguous words. The results suggest that multiple unrelated meanings facilitate word recognition. In addition, we observed that the approach employed to select ambiguous words may affect the pattern of experimental results. This evidence has relevant implications for theoretical accounts of ambiguous word processing and representation.
Pasquali, Matias; Serchi, Tommaso; Planchon, Sebastien; Renaut, Jenny
2017-01-01
The two-dimensional difference gel electrophoresis method is a valuable approach for proteomics. The method, using cyanine fluorescent dyes, allows the co-migration of multiple protein samples in the same gel and their simultaneous detection, thus reducing experimental and analytical time. Compared to traditional post-staining 2D-PAGE protocols (e.g., colloidal Coomassie or silver nitrate), 2D-DIGE provides faster and more reliable gel matching, limits the impact of gel-to-gel variation, and also offers a good dynamic range for quantitative comparisons. By the use of internal standards, it is possible to normalize for experimental variations in spot intensities and gel patterns. Here we describe the experimental steps we follow in our routine 2D-DIGE procedure, which we then apply to multiple biological questions.
NASA Astrophysics Data System (ADS)
Shrivastava, Prashant Kumar; Pandey, Arun Kumar
2018-06-01
Inconel-718 is in high demand in different industries due to its superior mechanical properties. Traditional cutting methods face difficulties in cutting these alloys due to their low thermal potential, lower elasticity and high chemical compatibility at elevated temperature. The challenges of machining and/or finishing unusual shapes and/or sizes in these materials are also faced by traditional machining. Laser beam cutting may be applied for miniaturization and ultra-precision cutting and/or finishing by appropriate control of different process parameters. This paper presents multi-objective optimization of the kerf deviation, kerf width and kerf taper in the laser cutting of Inconel-718 sheet. Second-order regression models have been developed for the different quality characteristics using the data obtained through experimentation. The regression models have been used as objective functions for multi-objective optimization based on a hybrid approach of multiple regression analysis and genetic algorithm. The comparison of optimization results to experimental results shows an improvement of 88%, 10.63% and 42.15% in kerf deviation, kerf width and kerf taper, respectively. Finally, the effects of different process parameters on the quality characteristics have also been discussed.
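The hybrid regression-GA idea can be sketched as follows. The quadratic response surfaces and every coefficient below are invented for illustration (they are not the paper's fitted models), and the three objectives are scalarized with equal weights, which is one common way to reduce a multi-objective problem to a single fitness value:

```python
import random

random.seed(1)

# Hypothetical second-order models of the three kerf responses as functions of
# two coded laser parameters x1 (power), x2 (cutting speed), each in [-1, 1].
def kerf_deviation(x1, x2): return 0.08 + 0.03*x1 - 0.02*x2 + 0.04*x1*x1 + 0.01*x1*x2
def kerf_width(x1, x2):     return 0.30 + 0.05*x1 + 0.02*x2 + 0.03*x2*x2
def kerf_taper(x1, x2):     return 0.50 - 0.04*x1 + 0.06*x2 + 0.05*x1*x1 + 0.02*x2*x2

def fitness(ind):
    x1, x2 = ind
    # equal-weight scalarization of the three responses (all to be minimized)
    return kerf_deviation(x1, x2) + kerf_width(x1, x2) + kerf_taper(x1, x2)

def clamp(v):
    return max(-1.0, min(1.0, v))

# Simple generational GA: truncation selection, blend crossover, Gaussian mutation
pop = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness)
    parents = pop[:10]
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        w = random.random()
        children.append([clamp(w*a[i] + (1-w)*b[i] + random.gauss(0, 0.1)) for i in range(2)])
    pop = parents + children

best = min(pop, key=fitness)
print([round(v, 2) for v in best], round(fitness(best), 3))
```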
ERIC Educational Resources Information Center
Zhou, Bo; Konstorum, Anna; Duong, Thao; Tieu, Kinh H.; Wells, William M.; Brown, Gregory G.; Stern, Hal S.; Shahbaba, Babak
2013-01-01
We propose a hierarchical Bayesian model for analyzing multi-site experimental fMRI studies. Our method takes the hierarchical structure of the data (subjects are nested within sites, and there are multiple observations per subject) into account and allows for modeling between-site variation. Using posterior predictive model checking and model…
Lima, Estevao; Rolanda, Carla; Correia-Pinto, Jorge
2009-05-01
An isolated transgastric port raises serious limitations in performing complex natural orifice translumenal endoscopic surgery (NOTES) procedures in the urology field. In an attempt to overcome these limitations, several solutions have been advanced, such as the hybrid approach (adding a single abdominal port access) or the pure NOTES combined approach (joining multiple natural orifice ports). To review the current state of experimental and clinical results of multiple ports in NOTES, a literature search of PubMed was performed, seeking publications from January 2002 to 2008 on NOTES. In addition, we looked at pertinent abstracts of annual meetings of the American Urological Association, the European Association of Urology, and the World Congress of Endourology from 2007. Multiple ports of entry seem to be necessary, mainly for moderately complex procedures. Thus, we could find studies using the hybrid approach (combination of transgastric or transvaginal access with a single transabdominal port), or using the pure NOTES combined approach (transgastric and transvesical, transvaginal and transcolonic, or transgastric and transvaginal). There is still limited experience in humans using these approaches, and no comparative studies exist to date. It is predictable that for moderately complex procedures we will need multiple ports, so the transvaginal-transabdominal (hybrid) approach is the most appealing, whereas in a pure NOTES perspective, the transgastric-transvesical approach seems to be the preferred approach. We are waiting for new equipment and instruments that are more appropriate for these novel techniques.
Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurt Derr; Milos Manic
Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is exploration of an unknown environment with the goal of finding a target(s) at an unknown location(s) using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS weighting factor to guide robots for locating target(s) in high risk environments. The approach was developed and analyzed on multiple robot single and multiple target search. The approach was further enhanced by the multi-robot-multi-target search in noisy environments. The experimental results demonstrated how the availability of radio frequency signal can significantly affect robot search time to reach a target.
Palmprint authentication using multiple classifiers
NASA Astrophysics Data System (ADS)
Kumar, Ajay; Zhang, David
2004-08-01
This paper investigates the performance improvement of palmprint authentication using multiple classifiers. The proposed methods for personal authentication using palmprints can be divided into three categories: appearance-, line-, and texture-based. A combination of these approaches can be used to achieve higher performance. We propose to simultaneously extract palmprint features from PCA, line detectors, and Gabor filters and to combine their corresponding matching scores. This paper also investigates the comparative performance of simple combination rules and a hybrid fusion strategy for achieving performance improvement. Our experimental results on a database of 100 users demonstrate the usefulness of such an approach over those based on individual classifiers.
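Score-level fusion of the kind described can be sketched as below. The matcher names, raw scores, and weights are hypothetical, and a simple weighted sum rule stands in for the paper's combination rules; the key steps are score normalization followed by weighted combination.

```python
def min_max_normalize(scores):
    # Map raw matcher scores to [0, 1] so that scores from different
    # matchers become comparable before fusion.
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(score_lists, weights):
    # Weighted sum rule over per-matcher normalized scores;
    # one fused score per enrolled user, higher = better match.
    normalized = [min_max_normalize(s) for s in score_lists]
    n_users = len(score_lists[0])
    return [sum(w * ns[u] for w, ns in zip(weights, normalized))
            for u in range(n_users)]

# Hypothetical matching scores for 4 enrolled users from three matchers
# (appearance/PCA, line detector, Gabor filter) -- illustrative values.
pca_scores   = [0.62, 0.88, 0.45, 0.51]
line_scores  = [12.0, 30.0, 9.0, 15.0]
gabor_scores = [0.30, 0.75, 0.20, 0.33]

fused = fuse_scores([pca_scores, line_scores, gabor_scores], [0.3, 0.3, 0.4])
best_match = max(range(len(fused)), key=fused.__getitem__)
```

A product rule or trained (hybrid) fusion would replace the weighted sum in the same place.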
The SAGE Model of Social Psychological Research
Power, Séamus A.; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-01-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed. PMID:29361241
Modeling of Receptor Tyrosine Kinase Signaling: Computational and Experimental Protocols.
Fey, Dirk; Aksamitiene, Edita; Kiyatkin, Anatoly; Kholodenko, Boris N
2017-01-01
The advent of systems biology has convincingly demonstrated that the integration of experiments and dynamic modelling is a powerful approach to understand cellular network biology. Here we present experimental and computational protocols that are necessary for applying this integrative approach to the quantitative studies of receptor tyrosine kinase (RTK) signaling networks. Signaling by RTKs controls multiple cellular processes, including the regulation of cell survival, motility, proliferation, differentiation, glucose metabolism, and apoptosis. We describe methods of model building and training on experimentally obtained quantitative datasets, as well as experimental methods of obtaining quantitative dose-response and temporal dependencies of protein phosphorylation and activities. The presented methods make possible (1) both the fine-grained modeling of complex signaling dynamics and identification of salient, coarse-grained network structures (such as feedback loops) that bring about intricate dynamics, and (2) experimental validation of dynamic models.
Departures From Optimality When Pursuing Multiple Approach or Avoidance Goals
2016-01-01
This article examines how people depart from optimality during multiple-goal pursuit. The authors operationalized optimality using dynamic programming, which is a mathematical model used to calculate expected value in multistage decisions. Drawing on prospect theory, they predicted that people are risk-averse when pursuing approach goals and are therefore more likely to prioritize the goal in the best position than the dynamic programming model suggests is optimal. The authors predicted that people are risk-seeking when pursuing avoidance goals and are therefore more likely to prioritize the goal in the worst position than is optimal. These predictions were supported by results from an experimental paradigm in which participants made a series of prioritization decisions while pursuing either 2 approach or 2 avoidance goals. This research demonstrates the usefulness of using decision-making theories and normative models to understand multiple-goal pursuit. PMID:26963081
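The dynamic-programming notion of optimality used above can be illustrated with a small sketch: a two-goal pursuit in which the agent prioritizes one goal per stage and each prioritization succeeds with some probability. The success probabilities and reward structure below are assumptions for illustration, not the authors' experimental parameters.

```python
from functools import lru_cache

P = (0.7, 0.7)  # assumed per-stage success probability for each goal

@lru_cache(maxsize=None)
def value(t, d1, d2):
    # Expected number of goals completed by the deadline when the agent
    # prioritizes optimally; d1, d2 are steps remaining on each goal,
    # t is the number of stages left.
    if t == 0:
        return (d1 == 0) + (d2 == 0)
    # Option A: prioritize goal 1 (its remaining distance shrinks on success)
    a = P[0] * value(t - 1, max(0, d1 - 1), d2) + (1 - P[0]) * value(t - 1, d1, d2)
    # Option B: prioritize goal 2
    b = P[1] * value(t - 1, d1, max(0, d2 - 1)) + (1 - P[1]) * value(t - 1, d1, d2)
    return max(a, b)
```

Comparing `a` and `b` at each state yields the optimal prioritization; human choices can then be scored against it, as in the article's paradigm.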
Richardson, Miles
2017-04-01
In ergonomics there is often a need to identify and predict the separate effects of multiple factors on performance. A cost-effective fractional factorial approach to understanding the relationship between task characteristics and task performance is presented. The method has been shown to provide sufficient independent variability to reveal and predict the effects of task characteristics on performance in two domains. The five steps outlined are: selection of performance measure, task characteristic identification, task design for user trials, data collection, regression model development and task characteristic analysis. The approach can be used for furthering knowledge of task performance, theoretical understanding, experimental control and prediction of task performance. Practitioner Summary: A cost-effective method to identify and predict the separate effects of multiple factors on performance is presented. The five steps allow a better understanding of task factors during the design process.
NASA Astrophysics Data System (ADS)
Naderi, D.; Pahlavani, M. R.; Alavi, S. A.
2013-05-01
Using the Langevin dynamical approach, the neutron multiplicity and the anisotropy of the angular distribution of fission fragments in heavy-ion fusion-fission reactions were calculated. We applied one- and two-dimensional Langevin equations to study the decay of a hot excited compound nucleus. The influence of the level-density parameter on the neutron multiplicity and the anisotropy of the angular distribution of fission fragments was investigated. We used a level-density parameter based on the liquid drop model with two different parameterizations, the Bartel approach and the Pomorska approach. Our calculations show that the anisotropy and neutron multiplicity are affected by the level-density parameter and the neck thickness. The calculations were performed for the 16O+208Pb and 20Ne+209Bi reactions. The results obtained with the two-dimensional Langevin equations and the level-density parameter based on the approach of Bartel and co-workers are in better agreement with experimental data.
Mingo, Janire; Erramuzpe, Asier; Luna, Sandra; Aurtenetxe, Olaia; Amo, Laura; Diez, Ibai; Schepens, Jan T. G.; Hendriks, Wiljan J. A. J.; Cortés, Jesús M.; Pulido, Rafael
2016-01-01
Site-directed mutagenesis (SDM) is a powerful tool to create defined collections of protein variants for experimental and clinical purposes, but effectiveness is compromised when a large number of mutations is required. We present here a one-tube-only standardized SDM approach that generates comprehensive collections of amino acid substitution variants, including scanning- and single site-multiple mutations. The approach combines unified mutagenic primer design with the mixing of multiple distinct primer pairs and/or plasmid templates to increase the yield of a single inverse-PCR mutagenesis reaction. Also, a user-friendly program for automatic design of standardized primers for Ala-scanning mutagenesis is made available. Experimental results were compared with a modeling approach together with stochastic simulation data. For single site-multiple mutagenesis purposes and for simultaneous mutagenesis in different plasmid backgrounds, combination of primer sets and/or plasmid templates in a single reaction tube yielded the distinct mutations in a stochastic fashion. For scanning mutagenesis, we found that a combination of overlapping primer sets in a single PCR reaction allowed the yield of different individual mutations, although this yield did not necessarily follow a stochastic trend. Double mutants were generated when the overlap of primer pairs was below 60%. Our results illustrate that one-tube-only SDM effectively reduces the number of reactions required in large-scale mutagenesis strategies, facilitating the generation of comprehensive collections of protein variants suitable for functional analysis. PMID:27548698
A multiplicative regularization for force reconstruction
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, which can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
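One way such a parameter-free iteration can look is sketched below, assuming a simple scheme in which the effective Tikhonov parameter is recomputed from the current residual at each step. This is an illustrative analogue of the idea, not the paper's exact multiplicative functional.

```python
import numpy as np

def solve_multiplicative(A, b, n_iter=30):
    # Iterative scheme in the spirit of multiplicative regularization:
    # the effective Tikhonov parameter alpha is recomputed from the
    # current residual at every iteration, so no regularization
    # parameter has to be chosen beforehand.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        alpha = np.linalg.norm(A @ x - b) ** 2 / (np.linalg.norm(x) ** 2 + 1e-30)
        x = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)
    return x

# Synthetic validation: recover a known vector from noise-free data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = rng.standard_normal(5)
x_rec = solve_multiplicative(A, A @ x_true)
```

With noisy data, `alpha` settles at a nonzero value proportional to the residual level, which is how the amount of regularization adjusts itself during the iterations.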
Multiple rotation assessment through isothetic fringes in speckle photography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angel, Luciano; Tebaldi, Myrian; Bolognini, Nestor
2007-05-10
The use of different pupils for storing each speckled image in speckle photography is employed to determine multiple in-plane rotations. The method consists of recording a four-exposure specklegram where the rotations are done between exposures. This specklegram is then optically processed in a whole field approach rendering isothetic fringes, which give detailed information about the multiple rotations. It is experimentally demonstrated that the proposed arrangement permits the depiction of six isothetics in order to measure either six different angles or three nonparallel components for two local general in-plane displacements.
NASA Astrophysics Data System (ADS)
Chen, Zhangqi; Liu, Zi-Kui; Zhao, Ji-Cheng
2018-05-01
Diffusion coefficients of seven binary systems (Ti-Mo, Ti-Nb, Ti-Ta, Ti-Zr, Zr-Mo, Zr-Nb, and Zr-Ta) at 1200 °C, 1000 °C, and 800 °C were experimentally determined using three Ti-Mo-Nb-Ta-Zr diffusion multiples. Electron probe microanalysis (EPMA) was performed to collect concentration profiles at the binary diffusion regions. Forward simulation analysis (FSA) was then applied to extract both impurity and interdiffusion coefficients in Ti-rich and Zr-rich part of the bcc phase. Excellent agreements between our results and most of the literature data validate the high-throughput approach combining FSA with diffusion multiples to obtain a large amount of systematic diffusion data, which will help establish the diffusion (mobility) databases for the design and development of biomedical and structural Ti alloys.
Beamspace fast fully adaptive brain source localization for limited data sequences
NASA Astrophysics Data System (ADS)
Ravan, Maryam
2017-05-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach that can significantly reduce the required sample support while still processing all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace in each region. The FFA approach is then applied to the beamspaced data of each region. The goal of this modification is to reduce the sensitivity to correlation between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited data scenario, simulation results with multiple deep and cortical sources as well as experimental results are compared with the regular FFA and widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method can localize different types of multiple correlated brain sources at low signal-to-noise ratios more accurately with limited data.
NASA Astrophysics Data System (ADS)
Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun
2015-05-01
Evidenced by the three rounds of G-protein coupled receptor (GPCR) Dock competitions, improving homology modeling methods for helical transmembrane proteins including the GPCRs, based on templates of low sequence identity, remains an eminent challenge. Current approaches addressing this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated in the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using bovine rhodopsin and the human protease-activated receptor 1 as templates, and improved model quality was demonstrated compared to the homology models generated by standard single-template and multiple-template methods. This method of "refined restraints first, modeling next" provides a fast and complementary alternative to the current modeling approaches. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to modeling of all helical transmembrane proteins.
An Approach to Economic Dispatch with Multiple Fuels Based on Particle Swarm Optimization
NASA Astrophysics Data System (ADS)
Sriyanyong, Pichet
2011-06-01
Particle Swarm Optimization (PSO), a stochastic optimization technique, shows superiority to other evolutionary computation techniques in terms of shorter computation time, easy implementation with high-quality solutions, stable convergence characteristics, and independence from initialization. For these reasons, this paper proposes the application of PSO to the Economic Dispatch (ED) problem, which arises in the operational planning of power systems. In this study, the ED problem is categorized according to the characteristics of its cost function: the ED problem with a smooth cost function and the ED problem with multiple fuels. Taking multiple fuels into account makes the problem more realistic. The experimental results show that the proposed PSO algorithm is more efficient than the previous approaches under consideration as well as highly promising for real-world applications.
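A minimal PSO sketch for economic dispatch with multiple fuels is given below. The unit data, fuel cost coefficients, demand level, and penalty handling are all illustrative assumptions, not the paper's test system; the multiple-fuel aspect appears as the minimum over per-fuel quadratic cost curves.

```python
import random

# Illustrative two-unit dispatch with two fuel options per unit.
# Each tuple (a, b, c) defines a quadratic fuel cost a*P^2 + b*P + c.
FUELS = {
    0: [(0.008, 7.0, 120.0), (0.006, 8.2, 100.0)],
    1: [(0.009, 6.5, 150.0), (0.005, 8.8, 90.0)],
}
LIMITS = {0: (50.0, 250.0), 1: (50.0, 250.0)}   # output bounds (MW)
DEMAND = 400.0                                   # total load (MW)

def unit_cost(u, p):
    # Multiple-fuel cost: cheapest fuel option at this output level.
    return min(a*p*p + b*p + c for a, b, c in FUELS[u])

def total_cost(ps):
    penalty = 1e4 * abs(sum(ps) - DEMAND)        # demand-balance penalty
    return sum(unit_cost(u, p) for u, p in enumerate(ps)) + penalty

def pso(n_particles=30, n_iter=200, seed=7):
    rng = random.Random(seed)
    dim = len(LIMITS)
    pos = [[rng.uniform(*LIMITS[d]) for d in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=total_cost)
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = LIMITS[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if total_cost(pos[i]) < total_cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=total_cost)
    return gbest

dispatch = pso()
```

Real ED formulations add valve-point effects, ramp limits, and fuel-region boundaries, but they slot into `unit_cost` and `total_cost` without changing the PSO loop.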
Energy dependence of strangeness production and event-by-event fluctuations
NASA Astrophysics Data System (ADS)
Rustamov, Anar
2018-02-01
We review the energy dependence of strangeness production in nucleus-nucleus collisions and contrast it with the experimental observations in pp and p-A collisions at LHC energies as a function of the charged particle multiplicities. For the high multiplicity final states the results from pp and p-Pb reactions systematically approach the values obtained from Pb-Pb collisions. In statistical models this implies an approach to the thermodynamic limit, where differences of mean multiplicities between various formalisms, such as Canonical and Grand Canonical Ensembles, vanish. Furthermore, we report on event-by-event net-proton fluctuations as measured by STAR at RHIC/BNL and by ALICE at LHC/CERN and discuss various non-dynamical contributions to these measurements, which should be properly subtracted before comparison to theoretical calculations on dynamical net-baryon fluctuations.
Simultaneous nano-tracking of multiple motor proteins via spectral discrimination of quantum dots.
Kakizuka, Taishi; Ikezaki, Keigo; Kaneshiro, Junichi; Fujita, Hideaki; Watanabe, Tomonobu M; Ichimura, Taro
2016-07-01
Simultaneous nanometric tracking of multiple motor proteins was achieved by combining multicolor fluorescent labeling of target proteins and imaging spectroscopy, revealing dynamic behaviors of multiple motor proteins at the sub-diffraction-limit scale. Using quantum dot probes of distinct colors, we experimentally verified the localization precision to be a few nanometers at temporal resolution of 30 ms or faster. One-dimensional processive movement of two heads of a single myosin molecule and multiple myosin molecules was successfully traced. Furthermore, the system was modified for two-dimensional measurement and applied to tracking of multiple myosin molecules. Our approach is useful for investigating cooperative movement of proteins in supramolecular nanomachinery.
Experimental verification of nanofluid shear-wave reconversion in ultrasonic fields.
Forrester, Derek Michael; Huang, Jinrui; Pinfield, Valerie J; Luppé, Francine
2016-03-14
Here we present the verification of shear-mediated contributions to multiple scattering of ultrasound in suspensions. Acoustic spectroscopy was carried out with suspensions of silica of differing particle sizes and concentrations in water to find the attenuation at a broad range of frequencies. As the particle sizes approach the nanoscale, commonly used multiple scattering models fail to match experimental results. We develop a new model, taking into account shear mediated contributions, and find excellent agreement with the attenuation spectra obtained using two types of spectrometer. The results determine that shear-wave phenomena must be considered in ultrasound characterisation of nanofluids at even relatively low concentrations of scatterers that are smaller than one micrometre in diameter.
Bersini, Simone; Gilardi, Mara; Arrigoni, Chiara; Talò, Giuseppe; Zamai, Moreno; Zagra, Luigi; Caiolfa, Valeria; Moretti, Matteo
2016-01-01
The generation of functional, vascularized tissues is a key challenge for both tissue engineering applications and the development of advanced in vitro models analyzing interactions among circulating cells, endothelium and organ-specific microenvironments. Since vascularization is a complex process guided by multiple synergic factors, it is critical to analyze the specific role that different experimental parameters play in the generation of physiological tissues. Our goals were to design a novel meso-scale model bridging the gap between microfluidic and macro-scale studies, and to perform high-throughput screening of the effects of multiple variables on the vascularization of bone-mimicking tissues. We investigated the influence of endothelial cell (EC) density (3-5 Mcells/ml), cell ratio among ECs, mesenchymal stem cells (MSCs) and osteo-differentiated MSCs (1:1:0, 10:1:0, 10:1:1), culture medium (endothelial, endothelial + angiopoietin-1, 1:1 endothelial/osteo), hydrogel type (100% fibrin, 60% fibrin + 40% collagen), and tissue geometry (2 × 2 × 2 and 2 × 2 × 5 mm³). We optimized the geometry and oxygen gradient inside the hydrogels through computational simulations, and we analyzed microvascular network features including total network length/area and vascular branch number/length. In particular, we employed the "Design of Experiments" statistical approach to identify key differences among experimental conditions. We combined the generation of 3D functional tissue units with fine control over the local microenvironment (e.g. oxygen gradients), and developed an effective strategy to enable the high-throughput screening of multiple experimental parameters. Our approach allowed us to identify synergic correlations among critical parameters driving microvascular network development within a bone-mimicking environment, and could be translated to any vascularized tissue.
Adverse outcome pathways (AOPs) are conceptual frameworks that portray causal and predictive linkages between key events at multiple scales of biological organization that connect molecular initiating events and early cellular perturbations (e.g., initiation of toxicity pathways)...
MODELING A MIXTURE: PBPK/PD APPROACHES FOR PREDICTING CHEMICAL INTERACTIONS.
Since environmental chemical exposures generally involve multiple chemicals, there are both regulatory and scientific drivers to develop methods to predict outcomes of these exposures. Even using efficient statistical and experimental designs, it is not possible to test in vivo a...
Structured plant metabolomics for the simultaneous exploration of multiple factors.
Vasilev, Nikolay; Boccard, Julien; Lang, Gerhard; Grömping, Ulrike; Fischer, Rainer; Goepfert, Simon; Rudaz, Serge; Schillberg, Stefan
2016-11-17
Multiple factors act simultaneously on plants to establish complex interaction networks involving nutrients, elicitors and metabolites. Metabolomics offers a better understanding of complex biological systems, but evaluating the simultaneous impact of different parameters on metabolic pathways that have many components is a challenging task. We therefore developed a novel approach that combines experimental design, untargeted metabolic profiling based on multiple chromatography systems and ionization modes, and multiblock data analysis, facilitating the systematic analysis of metabolic changes in plants caused by different factors acting at the same time. Using this method, target geraniol compounds produced in transgenic tobacco cell cultures were grouped into clusters based on their response to different factors. We hypothesized that our novel approach may provide more robust data for process optimization in plant cell cultures producing any target secondary metabolite, based on the simultaneous exploration of multiple factors rather than varying one factor each time. The suitability of our approach was verified by confirming several previously reported examples of elicitor-metabolite crosstalk. However, unravelling all factor-metabolite networks remains challenging because it requires the identification of all biochemically significant metabolites in the metabolomics dataset.
Experimental confirmation of multiple community states in a marine ecosystem.
Petraitis, Peter S; Methratta, Elizabeth T; Rhile, Erika C; Vidargas, Nicholas A; Dudgeon, Steve R
2009-08-01
Small changes in environmental conditions can unexpectedly tip an ecosystem from one community type to another, and these often irreversible shifts have been observed in semi-arid grasslands, freshwater lakes and ponds, coral reefs, and kelp forests. A commonly accepted explanation is that these ecosystems contain multiple stable points, but experimental tests confirming multiple stable states have proven elusive. Here we present a novel approach and show that mussel beds and rockweed stands are multiple stable states on intertidal shores in the Gulf of Maine, USA. Using broad-scale observational data and long-term data from experimental clearings, we show that the removal of rockweed by winter ice scour can tip persistent rockweed stands to mussel beds. The observational data were analyzed with Anderson's discriminant analysis of principal coordinates, which provided an objective function to separate mussel beds from rockweed stands. The function was then applied to 55 experimental plots, which had been established in rockweed stands in 1996. Based on 2005 data, all uncleared controls and all but one of the small clearings were classified as rockweed stands; 37% of the large clearings were classified as mussel beds. Our results address the establishment of mussels versus rockweeds and complement rather than refute the current paradigm that mussel beds and rockweed stands, once established, are maintained by site-specific differences in strong consumer control.
Kirschner, Denise E; Linderman, Jennifer J
2009-04-01
In addition to traditional and novel experimental approaches to study host-pathogen interactions, mathematical and computer modelling have recently been applied to address open questions in this area. These modelling tools not only offer an additional avenue for exploring disease dynamics at multiple biological scales, but also complement and extend knowledge gained via experimental tools. In this review, we outline four examples where modelling has complemented current experimental techniques in a way that can or has already pushed our knowledge of host-pathogen dynamics forward. Two of the modelling approaches presented go hand in hand with articles in this issue exploring fluorescence resonance energy transfer and two-photon intravital microscopy. Two others explore virtual or 'in silico' deletion and depletion as well as a new method to understand and guide studies in genetic epidemiology. In each of these examples, the complementary nature of modelling and experiment is discussed. We further note that multi-scale modelling may allow us to integrate information across length (molecular, cellular, tissue, organism, population) and time (e.g. seconds to lifetimes). In sum, when combined, these compatible approaches offer new opportunities for understanding host-pathogen interactions.
Post-Stall Aerodynamic Modeling and Gain-Scheduled Control Design
NASA Technical Reports Server (NTRS)
Wu, Fen; Gopalarathnam, Ashok; Kim, Sungwan
2005-01-01
A multidisciplinary research effort that combines aerodynamic modeling and gain-scheduled control design for aircraft flight at post-stall conditions is described. The aerodynamic modeling uses a decambering approach for rapid prediction of post-stall aerodynamic characteristics of multiple-wing configurations using known section data. The approach is successful in bringing to light multiple solutions at post-stall angles of attack right during the iteration process. The predictions agree fairly well with experimental results from wind tunnel tests. The control research was focused on actuator saturation and flight transition between low and high angles of attack regions for near- and post-stall aircraft using advanced LPV control techniques. The new control approaches maintain adequate control capability to handle high angle of attack aircraft control with stability and performance guarantees.
Applying Agrep to r-NSA to solve multiple sequences approximate matching.
Ni, Bing; Wong, Man-Hon; Lam, Chi-Fai David; Leung, Kwong-Sak
2014-01-01
This paper addresses the approximate matching problem in a database consisting of multiple DNA sequences, where the proposed approach applies Agrep to a new truncated suffix array, r-NSA. The construction time of the structure is linear in the database size, and indexing a substring in the structure takes constant time. The number of characters processed in applying Agrep is analysed theoretically, and the theoretical upper bound closely approximates the empirical number of characters, which is obtained by enumerating the characters in the actual structure built. Experiments are carried out using (synthetic) random DNA sequences, as well as (real) genome sequences including the Hepatitis-B virus and the X-chromosome. Experimental results show that, compared to the straightforward approach that applies Agrep to the multiple sequences individually, the proposed approach solves the matching problem in much shorter time. The speed-up of our approach depends on the sequence patterns, and for highly similar homologous genome sequences, which are the common cases in real-life genomes, it can be up to several orders of magnitude.
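Agrep accelerates the classic approximate-matching recurrence with bit-parallelism; the plain dynamic-programming form of that recurrence (Sellers' algorithm) is sketched below for clarity. The example sequences are illustrative, not the paper's datasets.

```python
def approx_find(text, pattern, k):
    # Sellers' algorithm: one DP column per text character; returns end
    # positions in `text` where `pattern` occurs as a substring with
    # edit distance <= k. Agrep speeds up this same recurrence with
    # bit-parallel operations.
    m = len(pattern)
    prev = list(range(m + 1))   # cost column before any text is read
    hits = []
    for j, tc in enumerate(text):
        cur = [0]               # 0: a match may start at any text position
        for i, pc in enumerate(pattern):
            cur.append(min(prev[i] + (pc != tc),  # match / substitution
                           prev[i + 1] + 1,       # gap in pattern
                           cur[i] + 1))           # gap in text
        if cur[m] <= k:
            hits.append(j)
        prev = cur
    return hits
```

Running this against a concatenated multi-sequence database is the straightforward baseline the paper improves on via the r-NSA structure.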
NASA Astrophysics Data System (ADS)
Joshi, Aditya; Lindsey, Brooks D.; Dayton, Paul A.; Pinton, Gianmarco; Muller, Marie
2017-05-01
Ultrasound contrast agents (UCA), such as microbubbles, enhance the scattering properties of blood, which is otherwise hypoechoic. The multiple scattering interactions of the acoustic field with UCA are poorly understood due to the complexity of the multiple scattering theories and the nonlinear microbubble response. The majority of bubble models describe the behavior of UCA as single, isolated microbubbles suspended in infinite medium. Multiple scattering models such as the independent scattering approximation can approximate phase velocity and attenuation for low scatterer volume fractions. However, all current models and simulation approaches only describe multiple scattering and nonlinear bubble dynamics separately. Here we present an approach that combines two existing models: (1) a full-wave model that describes nonlinear propagation and scattering interactions in a heterogeneous attenuating medium and (2) a Paul-Sarkar model that describes the nonlinear interactions between an acoustic field and microbubbles. These two models were solved numerically and combined with an iterative approach. The convergence of this combined model was explored in silico for 0.5 × 10⁶ microbubbles ml⁻¹, and for 1% and 2% bubble concentrations by volume. The backscattering predicted by our modeling approach was verified experimentally with water tank measurements performed with a 128-element linear array transducer. An excellent agreement in terms of the fundamental and harmonic acoustic fields is shown. Additionally, our model correctly predicts the phase velocity and attenuation measured using through transmission and predicted by the independent scattering approximation.
Statistical strategies for averaging EC50 from multiple dose-response experiments.
Jiang, Xiaoqi; Kopp-Schneider, Annette
2015-11-01
In most dose-response studies, repeated experiments are conducted to determine the EC50 value for a chemical, requiring EC50 estimates to be averaged over a series of experiments. Two statistical strategies, mixed-effects modeling and the meta-analysis approach, can be applied to estimate the average behavior of EC50 values over all experiments by considering the variabilities within and among experiments. We investigated these two strategies in two common cases of multiple dose-response experiments: (a) complete and explicit dose-response relationships are observed in all experiments, and (b) they are observed only in a subset of experiments. In case (a), the meta-analysis strategy is a simple and robust method to average EC50 estimates. In case (b), all experimental data sets can first be screened using the dose-response screening plot, which allows visualization and comparison of multiple dose-response experimental results. As long as more than three experiments provide information about complete dose-response relationships, the experiments that cover incomplete relationships can be excluded from the meta-analysis strategy of averaging EC50 estimates. If only two experiments contain complete dose-response information, the mixed-effects model approach is suggested. We subsequently provide a web application for non-statisticians to implement the proposed meta-analysis strategy of averaging EC50 estimates from multiple dose-response experiments.
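In its simplest fixed-effect form, the meta-analysis strategy described above reduces to an inverse-variance weighted average. A minimal sketch, assuming hypothetical EC50 estimates and standard errors (averaging on the log scale is a common convention, not a detail stated in the abstract):

```python
import math

def meta_average_ec50(log_ec50, se):
    """Fixed-effect (inverse-variance) meta-analysis of log-EC50 estimates.

    log_ec50 : per-experiment EC50 estimates on the natural-log scale
    se       : corresponding standard errors
    Returns the pooled EC50 (back-transformed) and its pooled standard error.
    """
    weights = [1.0 / s ** 2 for s in se]           # precision weights
    pooled_log = sum(w * x for w, x in zip(weights, log_ec50)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return math.exp(pooled_log), pooled_se

# Hypothetical EC50 estimates (µM) from three dose-response experiments
ec50 = [1.2, 0.9, 1.5]
se = [0.10, 0.15, 0.20]  # standard errors of the log-EC50 estimates
pooled, pooled_se = meta_average_ec50([math.log(x) for x in ec50], se)
```

The pooled estimate lies between the individual estimates and has a smaller standard error than any single experiment, which is the point of averaging over replicates.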
NASA Astrophysics Data System (ADS)
Ahmad, Kashif; Conci, Nicola; Boato, Giulia; De Natale, Francesco G. B.
2017-11-01
Over the last few years, the number of digital photos produced per year has grown rapidly. This growth poses challenges for the organization and management of multimedia collections, and one viable solution consists of arranging the media on the basis of the underlying events. However, album-level annotation and the presence of irrelevant pictures in photo collections make event-based organization of personal photo albums a challenging task. To tackle these challenges, in contrast to conventional approaches relying on supervised learning, we propose a pipeline for event recognition in personal photo collections relying on a multiple-instance learning (MIL) strategy. MIL is a modified form of supervised learning and fits well for such applications with weakly labeled data. The experimental evaluation of the proposed approach is carried out on two large-scale datasets: a self-collected dataset and a benchmark dataset. On both, our approach significantly outperforms the existing state of the art.
Adaptive learning and control for MIMO system based on adaptive dynamic programming.
Fu, Jian; He, Haibo; Zhou, Xinmin
2011-07-01
Adaptive dynamic programming (ADP) is a promising research field for the design of intelligent controllers, which can both learn on-the-fly and exhibit optimal behavior. Over the past decades, several generations of ADP design have been proposed in the literature, which have demonstrated many successful applications in various benchmarks and industrial applications. While much of the existing research focuses on multiple-input-single-output systems with steepest descent search, in this paper we investigate a generalized multiple-input-multiple-output (GMIMO) ADP design for online learning and control, which is more applicable to a wide range of practical real-world applications. Furthermore, an improved weight-updating algorithm based on recursive Levenberg-Marquardt methods is presented and embodied in the GMIMO approach to improve its performance. Finally, we test the performance of this approach on a practical complex system, namely, the learning and control of the tension and height of the looper system in a hot strip mill. Experimental results demonstrate that the proposed approach can achieve effective and robust performance.
NASA Astrophysics Data System (ADS)
Bevilacqua, R.; Lehmann, T.; Romano, M.
2011-04-01
This work introduces a novel control algorithm for close proximity multiple spacecraft autonomous maneuvers, based on hybrid linear quadratic regulator/artificial potential function (LQR/APF), for applications including autonomous docking, on-orbit assembly and spacecraft servicing. Both theoretical developments and experimental validation of the proposed approach are presented. Fuel consumption is sub-optimized in real-time through re-computation of the LQR at each sample time, while performing collision avoidance through the APF and a high level decisional logic. The underlying LQR/APF controller is integrated with a customized wall-following technique and a decisional logic, overcoming problems such as local minima. The algorithm is experimentally tested on a four spacecraft simulators test bed at the Spacecraft Robotics Laboratory of the Naval Postgraduate School. The metrics to evaluate the control algorithm are: autonomy of the system in making decisions, successful completion of the maneuver, required time, and propellant consumption.
A support vector machine based control application to the experimental three-tank system.
Iplikci, Serdar
2010-07-01
This paper presents a support vector machine (SVM) approach to generalized predictive control (GPC) of multiple-input multiple-output (MIMO) nonlinear systems. The higher generalization potential of SVM algorithms, together with their avoidance of getting stuck in local minima, motivated us to employ them for modeling MIMO systems. Based on the SVM model, detailed and compact formulations for calculating predictions and gradient information, which are used in the computation of the optimal control action, are given in the paper. The proposed MIMO SVM-based GPC method has been verified on an experimental three-tank liquid level control system. Experimental results have shown that the proposed method can handle the control task successfully for different reference trajectories. Moreover, a detailed discussion of data gathering, model selection and the effects of the control parameters is given in the paper.
Metainference: A Bayesian inference method for heterogeneous systems.
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.
Guo, Yi; Lebel, R Marc; Zhu, Yinghua; Lingala, Sajan Goud; Shiroishi, Mark S; Law, Meng; Nayak, Krishna
2016-05-01
To clinically evaluate a highly accelerated T1-weighted dynamic contrast-enhanced (DCE) MRI technique that provides high spatial resolution and whole-brain coverage via undersampling and constrained reconstruction with multiple sparsity constraints. Conventional (rate-2 SENSE) and experimental DCE-MRI (rate-30) scans were performed 20 minutes apart in 15 brain tumor patients. The conventional clinical DCE-MRI had voxel dimensions 0.9 × 1.3 × 7.0 mm³ and FOV 22 × 22 × 4.2 cm³; the experimental DCE-MRI had voxel dimensions 0.9 × 0.9 × 1.9 mm³ and broader coverage, 22 × 22 × 19 cm³. Temporal resolution was 5 s for both protocols. Time-resolved images and blood-brain barrier permeability maps were qualitatively evaluated by two radiologists. The experimental DCE-MRI scans showed no loss of qualitative information in any of the cases, while achieving substantially higher spatial resolution and whole-brain spatial coverage. Average qualitative scores (from 0 to 3) were 2.1 for the experimental scans and 1.1 for the conventional clinical scans. The proposed DCE-MRI approach provides clinically superior image quality with higher spatial resolution and coverage than currently available approaches. These advantages may allow comprehensive permeability mapping in the brain, which is especially valuable in the setting of large lesions or multiple lesions spread throughout the brain.
NASA Astrophysics Data System (ADS)
Kar, Somnath; Choudhury, Subikash; Muhuri, Sanjib; Ghosh, Premomoy
2017-01-01
Satisfactory description of data by hydrodynamics-motivated models, as has been reported recently by experimental collaborations at the LHC, confirms "collectivity" in high-multiplicity proton-proton (pp) collisions. Notwithstanding this, a detailed study of high-multiplicity pp data in other approaches or models is essential for better understanding of the specific phenomenon. In this study, the focus is on a pQCD-inspired multiparton interaction (MPI) model, including a color reconnection (CR) scheme as implemented in the Monte Carlo code PYTHIA8, tune 4C. The MPI with color reconnection reproduces the dependence of the mean transverse momentum ⟨pT⟩ on the charged-particle multiplicity Nch in pp collisions at the LHC, providing an alternate explanation for the signature of "hydrodynamic collectivity" in pp data. It is, therefore, worth exploring how this model responds to other related features of high-multiplicity pp events. This comparative study with recent experimental results demonstrates the limitations of the model in explaining some of the prominent features of the final-state charged particles up to the intermediate-pT (pT < 2.0 GeV/c) range in high-multiplicity pp events.
ℓp-norm multikernel learning approach for stock market price forecasting.
Shao, Xigao; Wu, Kun; Liao, Bifeng
2012-01-01
Linear multiple kernel learning models have been used for predicting financial time series. However, ℓ1-norm multiple kernel support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓp-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ1-norm multiple kernel support vector regression model.
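Interleaved ℓp-norm MKL optimization alternates between solving the support vector problem for fixed kernel weights and updating the weights in closed form. A sketch of the standard closed-form weight update (in the style of Kloft et al.; the per-kernel norms below are hypothetical, not taken from the paper):

```python
def lp_mkl_weights(w_norms, p):
    """One closed-form kernel-weight update for lp-norm MKL (1 <= p < inf).

    w_norms : per-kernel norms ||w_m|| from the current SVR/SVM solution
    p       : exponent of the lp-norm constraint on the kernel weights
    Returns weights beta satisfying sum(beta_m ** p) == 1, so larger
    per-kernel norms receive proportionally larger mixture weights.
    """
    num = [nm ** (2.0 / (p + 1.0)) for nm in w_norms]
    denom = sum(nm ** (2.0 * p / (p + 1.0)) for nm in w_norms) ** (1.0 / p)
    return [n / denom for n in num]

# Hypothetical per-kernel norms from an intermediate SVR solution
beta = lp_mkl_weights([0.8, 0.3, 1.4], p=2.0)
```

Setting p = 1 recovers the sparse ℓ1-norm mixture the abstract argues against; p > 1 spreads weight over more kernels, which is the intended robustness gain.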
Sparse reconstruction localization of multiple acoustic emissions in large diameter pipelines
NASA Astrophysics Data System (ADS)
Dubuc, Brennan; Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
A sparse reconstruction localization method is proposed, which is capable of localizing multiple acoustic emission events occurring closely in time. The events may be due to a number of sources, such as the growth of corrosion patches or cracks. Such acoustic emissions may yield localization failure if a triangulation method is used. The proposed method is implemented both theoretically and experimentally on large diameter thin-walled pipes. Experimental examples are presented, which demonstrate the failure of a triangulation method when multiple sources are present in this structure, while highlighting the capabilities of the proposed method. The examples are generated from experimental data of simulated acoustic emission events. The data corresponds to helical guided ultrasonic waves generated in a 3 m long large diameter pipe by pencil lead breaks on its outer surface. Acoustic emission waveforms are recorded by six sparsely distributed low-profile piezoelectric transducers instrumented on the outer surface of the pipe. The same array of transducers is used for both the proposed and the triangulation method. It is demonstrated that the proposed method is able to localize multiple events occurring closely in time. Furthermore, the matching pursuit algorithm and the basis pursuit denoising approach are each evaluated as potential numerical tools in the proposed sparse reconstruction method.
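Matching pursuit, one of the two sparse-reconstruction tools evaluated, greedily selects the dictionary atom most correlated with the current residual. A toy sketch with a hypothetical orthonormal dictionary (in the paper, atoms would instead encode helical guided-wave arrivals from candidate source locations):

```python
def matching_pursuit(atoms, y, n_iter=10):
    """Greedy matching pursuit: approximate y as a sparse combination of
    unit-norm dictionary atoms (each atom given as a list of samples)."""
    residual = list(y)
    coef = [0.0] * len(atoms)
    for _ in range(n_iter):
        # correlation of each atom with the current residual
        corrs = [sum(a * r for a, r in zip(atom, residual)) for atom in atoms]
        k = max(range(len(atoms)), key=lambda m: abs(corrs[m]))
        coef[k] += corrs[k]
        residual = [r - corrs[k] * a for r, a in zip(residual, atoms[k])]
    return coef, residual

# Hypothetical dictionary of three orthonormal atoms; the signal is a
# sparse combination of two of them, which MP recovers exactly
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [2.0, 0.0, -3.0]
coef, residual = matching_pursuit(atoms, y)
```

With correlated (non-orthogonal) atoms, as in real guided-wave dictionaries, matching pursuit only approximates the sparse solution, which is why the paper also evaluates basis pursuit denoising.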
Experimental Design and Primary Data Analysis Methods for Comparing Adaptive Interventions
Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Greg; Waxmonsky, Jim; Yu, Jihnhee; Murphy, Susan
2013-01-01
In recent years, research in the area of intervention development is shifting from the traditional fixed-intervention approach to adaptive interventions, which allow greater individualization and adaptation of intervention options (i.e., intervention type and/or dosage) over time. Adaptive interventions are operationalized via a sequence of decision rules that specify how intervention options should be adapted to an individual’s characteristics and changing needs, with the general aim to optimize the long-term effectiveness of the intervention. Here, we review adaptive interventions, discussing the potential contribution of this concept to research in the behavioral and social sciences. We then propose the sequential multiple assignment randomized trial (SMART), an experimental design useful for addressing research questions that inform the construction of high-quality adaptive interventions. To clarify the SMART approach and its advantages, we compare SMART with other experimental approaches. We also provide methods for analyzing data from SMART to address primary research questions that inform the construction of a high-quality adaptive intervention. PMID:23025433
Surrogate 239Pu(n, fxn) and 241Pu(n, fxn) average fission-neutron-multiplicity measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, J. T.; Alan, B. S.; Akindele, O. A.
2017-09-26
We have constructed a new neutron-charged-particle detector array called NeutronSTARS. It has been described extensively in LLNL-TR-703909 [1] and Akindele et al. [2]. We have used this new neutron-charged-particle array to measure the 241Pu and 239Pu fission-neutron multiplicity as a function of equivalent incident-neutron energy from 100 keV to 20 MeV. The experimental approach, detector array, data analysis, and results are summarized in the following sections.
An Experimental Approach to Mathematical Modeling in Biology
ERIC Educational Resources Information Center
Ledder, Glenn
2008-01-01
The simplest age-structured population models update a population vector via multiplication by a matrix. These linear models offer an opportunity to introduce mathematical modeling to students of limited mathematical sophistication and background. We begin with a detailed discussion of mathematical modeling, particularly in a biological context.…
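The linear update the abstract describes can be sketched with a Leslie projection matrix; the fecundity and survival rates below are hypothetical, chosen only to illustrate the matrix-vector update:

```python
# Leslie (age-structured) projection matrix: row 0 holds per-class
# fecundities, the subdiagonal holds survival probabilities.
LESLIE = [
    [0.0, 1.5, 1.0],   # offspring per individual in age classes 0, 1, 2
    [0.6, 0.0, 0.0],   # survival from age class 0 to 1
    [0.0, 0.8, 0.0],   # survival from age class 1 to 2
]

def project(matrix, population):
    """Advance the population vector one time step: n(t+1) = L n(t)."""
    return [sum(a * n for a, n in zip(row, population)) for row in matrix]

pop = [100.0, 50.0, 20.0]          # individuals in each age class
pop_next = project(LESLIE, pop)    # -> [95.0, 60.0, 40.0]
```

Iterating `project` shows the long-run growth rate and stable age distribution emerge from the dominant eigenvalue and eigenvector of the matrix, which is the usual entry point for classroom analysis of these models.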
ERIC Educational Resources Information Center
Hu, Qian
2011-01-01
Most of challenges facing today's government cannot be resolved without collaborative efforts from multiple non-state stakeholders, organizations, and active participation from citizens. Collaborative governance has become an important form of management practice. Yet the success of this inclusive management approach depends on whether government…
Set membership experimental design for biological systems.
Marvel, Skylar W; Williams, Cranos M
2012-03-21
Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. 
This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.
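The interval-analysis machinery underlying such bounded-error propagation can be sketched with a minimal interval class; the decay model and bounds below are hypothetical, not taken from the paper:

```python
class Interval:
    """Closed interval [lo, hi] for bounded-error arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # interval product: min/max over all endpoint combinations
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# Propagate an uncertain decay rate and state through one Euler step
# of dx/dt = -k*x (a toy stand-in for a biological kinetics model)
neg_k = Interval(-0.12, -0.08)   # bounded-error parameter -k
x = Interval(0.9, 1.1)           # uncertain state measurement
dt = Interval(0.5, 0.5)          # fixed step size
x_next = x + neg_k * x * dt      # guaranteed enclosure of x(t + dt)
```

The width of `x_next` is the kind of set-based uncertainty measure that the design framework compares across candidate measurement time points.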
A regenerative approach to the treatment of multiple sclerosis.
Deshmukh, Vishal A; Tardif, Virginie; Lyssiotis, Costas A; Green, Chelsea C; Kerman, Bilal; Kim, Hyung Joon; Padmanabhan, Krishnan; Swoboda, Jonathan G; Ahmad, Insha; Kondo, Toru; Gage, Fred H; Theofilopoulos, Argyrios N; Lawson, Brian R; Schultz, Peter G; Lairson, Luke L
2013-10-17
Progressive phases of multiple sclerosis are associated with inhibited differentiation of the progenitor cell population that generates the mature oligodendrocytes required for remyelination and disease remission. To identify selective inducers of oligodendrocyte differentiation, we performed an image-based screen for myelin basic protein (MBP) expression using primary rat optic-nerve-derived progenitor cells. Here we show that among the most effective compounds identified was benztropine, which significantly decreases clinical severity in the experimental autoimmune encephalomyelitis (EAE) model of relapsing-remitting multiple sclerosis when administered alone or in combination with approved immunosuppressive treatments for multiple sclerosis. Evidence from a cuprizone-induced model of demyelination, in vitro and in vivo T-cell assays and EAE adoptive transfer experiments indicated that the observed efficacy of this drug results directly from an enhancement of remyelination rather than immune suppression. Pharmacological studies indicate that benztropine functions by a mechanism that involves direct antagonism of M1 and/or M3 muscarinic receptors. These studies should facilitate the development of effective new therapies for the treatment of multiple sclerosis that complement established immunosuppressive approaches.
NASA Astrophysics Data System (ADS)
Wang, H.; Jing, X. J.
2017-07-01
This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
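Both reviewed approaches build on ECOC decoding: each class is assigned a codeword, and a test sample goes to the class whose row is nearest in Hamming distance to the binary classifiers' outputs. A minimal sketch with a hypothetical code matrix (the paper's methods additionally weight classifiers or model bit-inversion errors probabilistically):

```python
def ecoc_decode(codeword, code_matrix):
    """Decode a vector of binary-classifier outputs (+1/-1) to the index of
    the class whose ECOC row has the smallest Hamming distance."""
    def hamming(row):
        return sum(1 for a, b in zip(row, codeword) if a != b)
    return min(range(len(code_matrix)), key=lambda c: hamming(code_matrix[c]))

# Hypothetical 4-class, 6-bit error-correcting output code
CODE = [
    [+1, +1, +1, +1, +1, +1],
    [+1, -1, -1, +1, -1, -1],
    [-1, +1, -1, -1, +1, -1],
    [-1, -1, +1, -1, -1, +1],
]
# Outputs of the six binary classifiers, with one bit flipped by a
# misclassification; the redundant code still recovers class 1
predicted = ecoc_decode([+1, -1, -1, -1, -1, -1], CODE)   # -> 1
```

The error-correcting property comes from the minimum Hamming separation between rows: a code with separation d tolerates roughly (d - 1) / 2 flipped binary decisions.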
NASA Astrophysics Data System (ADS)
Suminar, Iin; Muslim, Liliawati, Winny
2017-05-01
The purpose of this research was to identify students' written arguments embedded in scientific inquiry investigation and their argumentation skill using argument-based inquiry integrated with a multiple representation approach. This research used a quasi-experimental method with the nonequivalent pretest-posttest control group design. The sample was 10th grade students at a high school in Bandung, in two classes: 26 students in the experiment class and 26 students in the control class. The experiment class used argument-based inquiry integrated with the multiple representation approach, while the control class used argument-based inquiry alone. The study used an argumentation worksheet and an argumentation test. The argumentation worksheet encouraged students to formulate research questions, design experiments, observe experiments and explain the data as evidence, construct claims and warrants, embed multiple modes of representation, and reflect. The argumentation test includes problems that ask students to explain the evidence, warrants, and backings supporting each claim. The results show that the argumentation skill of the experiment class students was better than that of the control class students.
Multiple chiral topological states in liquid crystals from unstructured light beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loussert, Charles; Brasselet, Etienne, E-mail: e.brasselet@loma.u-bordeaux1.fr
2014-02-03
It is shown experimentally that unstructured light beams can generate a wealth of distinct metastable defect structures in thin films of chiral liquid crystals. Various kinds of individual chiral topological states are obtained as well as dimers and trimers, which correspond to the entanglement of several topological unit cells. Self-assembled nested assemblies of several metastable particle-like topological states can also be formed. Finally, we propose and experimentally demonstrate an opto-electrical approach to generate tailor-made architectures.
Mayhew, Terry M; Lucocq, John M
2011-03-01
Various methods for quantifying cellular immunogold labelling on transmission electron microscope thin sections are currently available. All rely on sound random sampling principles and are applicable to single immunolabelling across compartments within a given cell type or between different experimental groups of cells. Although methods are also available to test for colocalization in double/triple immunogold labelling studies, so far, these have relied on making multiple measurements of gold particle densities in defined areas or of inter-particle nearest neighbour distances. Here, we present alternative two-step approaches to codistribution and colocalization assessment that merely require raw counts of gold particles in distinct cellular compartments. For assessing codistribution over aggregate compartments, initial statistical evaluation involves combining contingency table and chi-squared analyses to provide predicted gold particle distributions. The observed and predicted distributions allow testing of the appropriate null hypothesis, namely, that there is no difference in the distribution patterns of proteins labelled by different sizes of gold particle. In short, the null hypothesis is that of colocalization. The approach for assessing colabelling recognises that, on thin sections, a compartment is made up of a set of sectional images (profiles) of cognate structures. The approach involves identifying two groups of compartmental profiles that are unlabelled and labelled for one gold marker size. The proportions in each group that are also labelled for the second gold marker size are then compared. Statistical analysis now uses a 2 × 2 contingency table combined with the Fisher exact probability test. Having identified double labelling, the profiles can be analysed further in order to identify characteristic features that might account for the double labelling. 
In each case, the approach is illustrated using synthetic and/or experimental datasets and can be refined to correct observed labelling patterns to specific labelling patterns. These simple and efficient approaches should be of more immediate utility to those interested in codistribution and colocalization in multiple immunogold labelling investigations.
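The colabelling comparison rests on a 2 × 2 contingency table and the Fisher exact probability test. A minimal one-sided sketch using hypothetical profile counts (in practice `scipy.stats.fisher_exact` or an equivalent statistics package would be used):

```python
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """One-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    the hypergeometric probability of a table at least as extreme
    (first cell equal to a or larger) with the margins held fixed."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        if 0 <= col1 - x <= row2:
            p += comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    return p

# Hypothetical counts: among profiles labelled vs unlabelled for marker 1,
# how many are also labelled for marker 2
p_value = fisher_exact_one_sided(8, 2, 3, 7)
```

A small p-value indicates that profiles labelled for one marker are labelled for the other more often than fixed margins alone would predict, i.e. evidence of genuine colabelling.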
Motion compensation via redundant-wavelet multihypothesis.
Fowler, James E; Cui, Suxia; Wang, Yonghui
2006-10-01
Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.
A Hough Transform Global Probabilistic Approach to Multiple-Subject Diffusion MRI Tractography
Aganj, Iman; Lenglet, Christophe; Jahanshad, Neda; Yacoub, Essa; Harel, Noam; Thompson, Paul M.; Sapiro, Guillermo
2011-01-01
A global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced in this work. The proposed framework tests candidate 3D curves in the volume, assigning to each one a score computed from the diffusion images, and then selects the curves with the highest scores as the potential anatomical connections. The algorithm avoids local minima by performing an exhaustive search at the desired resolution. The technique is easily extended to multiple subjects, considering a single representative volume where the registered high-angular resolution diffusion images (HARDI) from all the subjects are non-linearly combined, thereby obtaining population-representative tracts. The tractography algorithm is run only once for the multiple subjects, and no tract alignment is necessary. We present experimental results on HARDI volumes, ranging from simulated and 1.5T physical phantoms to 7T and 4T human brain and 7T monkey brain datasets. PMID:21376655
Low, Diana H P; Motakis, Efthymios
2013-10-01
Binding free energy calculations obtained through molecular dynamics simulations reflect intermolecular interaction states through a series of independent snapshots. Typically, the free energies of multiple simulated series (each with slightly different starting conditions) need to be estimated. Previous approaches carry out this task by moving averages at certain decorrelation times, assuming that the system comes from a single conformation description of binding events. Here, we discuss a more general approach that uses statistical modeling, wavelets denoising and hierarchical clustering to estimate the significance of multiple statistically distinct subpopulations, reflecting potential macrostates of the system. We present the deltaGseg R package that performs macrostate estimation from multiple replicated series and allows molecular biologists/chemists to gain physical insight into the molecular details that are not easily accessible by experimental techniques. deltaGseg is a Bioconductor R package available at http://bioconductor.org/packages/release/bioc/html/deltaGseg.html.
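The wavelet-denoising step in this pipeline can be illustrated with a one-level Haar transform and a soft threshold. This is a minimal pure-Python stand-in for the denoising deltaGseg performs in R; the threshold value is a hypothetical parameter:

```python
def haar_denoise(series, threshold):
    """One-level Haar transform of a free-energy series, soft-threshold
    the detail coefficients, then invert. Small fluctuations (below the
    threshold) are suppressed while level shifts between candidate
    macrostates survive."""
    n = len(series) - len(series) % 2
    approx = [(series[i] + series[i + 1]) / 2 for i in range(0, n, 2)]
    detail = [(series[i] - series[i + 1]) / 2 for i in range(0, n, 2)]
    soft = [max(abs(d) - threshold, 0.0) * (1 if d >= 0 else -1)
            for d in detail]
    out = []
    for a, d in zip(approx, soft):
        out += [a + d, a - d]
    return out + list(series[n:])  # pass through a trailing odd sample
```

Hierarchical clustering of the denoised segments (e.g. of segment means) would then identify candidate macrostates, mirroring the workflow the abstract describes.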
Automated optimal coordination of multiple-DOF neuromuscular actions in feedforward neuroprostheses.
Lujan, J Luis; Crago, Patrick E
2009-01-01
This paper describes a new method for designing feedforward controllers for multiple-muscle, multiple-DOF motor system neural prostheses. The design process is based on experimental measurement of the forward input/output properties of the neuromechanical system and numerical optimization of stimulation patterns to meet muscle coactivation criteria, thus resolving the muscle redundancy (i.e., overcontrol) and coupled-DOF problems inherent in neuromechanical systems. We designed feedforward controllers to control the isometric forces at the tip of the thumb in two directions during stimulation of three thumb muscles as a model system. We tested the method experimentally in ten able-bodied individuals and one patient with spinal cord injury. Good control of isometric force in both DOFs was observed, with rms errors less than 10% of the force range in seven experiments and statistically significant correlations between the actual and target forces in all ten experiments. Systematic bias and slope errors were observed in a few experiments, likely due to neuromuscular fatigue. Overall, the tests demonstrated the ability of a general design approach to satisfy both control and coactivation criteria in multiple-muscle, multiple-axis neuromechanical systems, and the approach is applicable to a wide range of neuromechanical systems and stimulation electrodes.
Natarajan, R; Nirdosh, I; Venuvanalingam, P; Ramalingam, M
2002-07-01
The QPPR approach has been used to model cupferrons as mineral collectors. Separation efficiencies (Es) of these chelating agents have been correlated with property parameters, namely log P, log Koc, the substituent constant sigma, and Mulliken and ESP-derived charges, using multiple regression analysis. Es of substituted cupferrons in the flotation of a uranium ore could be predicted within experimental error from either log P or log Koc together with an electronic parameter. However, when a halo, methoxy or phenyl substituent was para to the chelating group, the experimental Es was greater than the predicted value. Inclusion of a Boolean-type indicator parameter significantly improved the predictive power. The approach has been extended to 2-aminothiophenols that were used to float a zinc ore, and the correlations were found to be reasonably good.
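The kind of regression used here, a linear model augmented with a 0/1 indicator column, can be sketched with a stdlib-only ordinary least squares solver. The design-row layout `[1, logP, sigma, is_para]` mirrors the model described, but the helper and the data in the test are illustrative assumptions:

```python
def least_squares(X, y):
    """Solve the normal equations (X^T X) beta = X^T y by Gaussian
    elimination with partial pivoting: a pure-stdlib stand-in for a
    regression package. Each row of X is a design row such as
    [1, logP, sigma, is_para], where the last 0/1 entry is a Boolean
    indicator (e.g. for a para substituent)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for i in range(k):                      # forward elimination
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k                        # back substitution
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][c] * beta[c]
                              for c in range(i + 1, k))) / A[i][i]
    return beta
```

The fitted coefficient on the indicator column quantifies the systematic offset the authors observed for para-substituted compounds.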
Multiplicative noise removal via a learned dictionary.
Huang, Yu-Mei; Moisan, Lionel; Ng, Michael K; Zeng, Tieyong
2012-11-01
Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient for image recovery. Following this idea, in this paper we propose to learn a dictionary from the logarithmically transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that, in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.
Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu
2018-03-10
This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.
Kurita, Takashi; Sueda, Keiichi; Tsubakimoto, Koji; Miyanaga, Noriaki
2010-07-05
We experimentally demonstrated coherent beam combining using optical parametric amplification with a nonlinear crystal pumped by a random-phased multiple-beam array of the second harmonic of a Nd:YAG laser at a 10-Hz repetition rate. In the proof-of-principle experiment, the phase jump between two pump beams was precisely controlled by a motorized actuator. For the demonstration of multiple-beam combining, a random phase plate was used to create random-phased beamlets as a pump pulse. Far-field patterns of the pump, the signal, and the idler indicated that spatially coherent signal beams were obtained in both cases. This approach allows scaling of the intensity of optical parametric chirped pulse amplification up to the exawatt level while maintaining diffraction-limited beam quality.
Experimental and numerical study of shock-driven collapse of multiple cavity arrays
NASA Astrophysics Data System (ADS)
Betney, Matthew; Anderson, Phillip; Tully, Brett; Doyle, Hugo; Hawker, Nicholas; Ventikos, Yiannis
2014-10-01
This study presents a numerical and experimental investigation of the interaction of a single shock wave with multiple air-filled spherical cavities. The 5 mm diameter cavities are cast in a hydrogel, and collapsed by a shock wave generated by the impact of a projectile fired from a single-stage light-gas gun. Incident shock pressures of up to 1 GPa have been measured, and the results compared to simulations conducted using a front-tracking approach. The authors have previously studied the collapse dynamics of a single cavity. An important process is the formation of a high-speed transverse jet, which impacts the leeward cavity wall and produces a shockwave. The speed of this shock has been measured using schlieren imaging, and the density has been measured with a fibre optic probe. This confirmed the computational prediction that the produced shock is of a higher pressure than the original incident shock. When employing multiple cavity arrays, the strong shock produced by the collapse of one cavity can substantially affect the collapse of further cavities. With control over cavity placement, these effects may be utilised to intensify collapse. This intensification is experimentally measured via analysis of the optical emission.
ℓp-Norm Multikernel Learning Approach for Stock Market Price Forecasting
Shao, Xigao; Wu, Kun; Liao, Bifeng
2012-01-01
Linear multiple kernel learning models have been used for predicting financial time series. However, ℓ1-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓp-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and an interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ1-norm multiple support vector regression model. PMID:23365561
ERIC Educational Resources Information Center
Phan, Huy P.; Ngu, Bing H.
2017-01-01
In social sciences, the use of stringent methodological approaches is gaining increasing emphasis. Researchers have recognized the limitations of cross-sectional, non-manipulative data in the study of causality. True experimental designs, in contrast, are preferred as they represent rigorous standards for achieving causal flows between variables.…
ERIC Educational Resources Information Center
McConeghy, Kevin; Wing, Coady; Wong, Vivian C.
2015-01-01
Randomized experiments have long been established as the gold standard for addressing causal questions. However, experiments are not always feasible or desired, so observational methods are also needed. When multiple observations on the same variable are available, a repeated measures design may be used to assess whether a treatment administered…
A new multiple air beam approach for in-process form error optical measurement
NASA Astrophysics Data System (ADS)
Gao, Y.; Li, R.
2018-07-01
In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial-frequency form error. Optical measurement methods are non-contact and possess the high precision required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation at the workpiece surface. However, coolant forms an opaque barrier when optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant arriving from any direction and of large thickness, i.e. a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed approach, a new in-process form error optical measurement system is developed, and its coolant removal capability and performance are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation of up to 0.3011 µm even under a large amount of coolant (a coolant layer 15 mm thick), corresponding to a relative uncertainty (2σ) of up to 4.35% with the workpiece surface deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply and air velocity, the proposed approach improves on the previous single air beam approach by factors of 3.3, 1.3 and 5.3, respectively. These results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.
Arpaia, P; Cimmino, P; Girone, M; La Commara, G; Maisto, D; Manna, C; Pezzetti, M
2014-09-01
An evolutionary approach to centralized multiple-fault diagnostics is extended to distributed transducer networks monitoring large experimental systems. Given a set of anomalies detected by the transducers, each instance of the multiple-fault problem is formulated as several parallel communicating sub-tasks running on different transducers, and thus solved one by one in spatially separated parallel processes. A micro-genetic algorithm merges the evaluation-time efficiency arising from a small population distributed on parallel synchronized processors with the effectiveness of centralized evolutionary techniques due to an optimal mix of exploitation and exploration. In this way, the holistic view and effectiveness of evolutionary global diagnostics are combined with the reliability and efficiency benefits of distributed parallel architectures. The proposed approach was validated (i) by simulation at CERN, on the case study of a cold box for enhancing the cryogenics diagnostics of the Large Hadron Collider, and (ii) by experiments, within the framework of the industrial research project MONDIEVOB (Building Remote Monitoring and Evolutionary Diagnostics), co-funded by the EU and the company Del Bo srl, Napoli, Italy.
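A micro-genetic algorithm of the kind described, a tiny elitist population that restarts from random individuals whenever it converges, can be sketched in a few lines. This is an illustrative toy on a bit-string fitness, not the authors' diagnostic encoding:

```python
import random

def micro_ga(fitness, n_bits, pop_size=5, generations=200, seed=1):
    """Minimal micro-genetic algorithm: a very small population evolved
    with tournament selection, uniform crossover and elitism, and no
    mutation; instead, new genetic material enters via random restarts
    whenever the population converges. This is the exploitation /
    exploration mix the abstract credits to the micro-GA."""
    rng = random.Random(seed)
    rand_ind = lambda: [rng.randint(0, 1) for _ in range(n_bits)]
    pop = [rand_ind() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        children = []
        for _ in range(pop_size - 1):
            p1 = max(rng.sample(pop, 2), key=fitness)   # tournament
            p2 = max(rng.sample(pop, 2), key=fitness)
            children.append([rng.choice(g) for g in zip(p1, p2)])
        pop = children + [best]                         # elitism
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
        if all(ind == pop[0] for ind in pop):           # converged?
            pop = [best] + [rand_ind() for _ in range(pop_size - 1)]
    return best
```

With `fitness=sum` (the one-max problem), the elitist best monotonically climbs toward the all-ones string across restarts.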
Metainference: A Bayesian inference method for heterogeneous systems
Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele
2016-01-01
Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yi, E-mail: yiguo@usc.edu; Zhu, Yinghua; Lingala, Sajan Goud
Purpose: To clinically evaluate a highly accelerated T1-weighted dynamic contrast-enhanced (DCE) MRI technique that provides high spatial resolution and whole-brain coverage via undersampling and constrained reconstruction with multiple sparsity constraints. Methods: Conventional (rate-2 SENSE) and experimental DCE-MRI (rate-30) scans were performed 20 minutes apart in 15 brain tumor patients. The conventional clinical DCE-MRI had voxel dimensions 0.9 × 1.3 × 7.0 mm{sup 3}, FOV 22 × 22 × 4.2 cm{sup 3}, and the experimental DCE-MRI had voxel dimensions 0.9 × 0.9 × 1.9 mm{sup 3}, and broader coverage 22 × 22 × 19 cm{sup 3}. Temporal resolution was 5 smore » for both protocols. Time-resolved images and blood–brain barrier permeability maps were qualitatively evaluated by two radiologists. Results: The experimental DCE-MRI scans showed no loss of qualitative information in any of the cases, while achieving substantially higher spatial resolution and whole-brain spatial coverage. Average qualitative scores (from 0 to 3) were 2.1 for the experimental scans and 1.1 for the conventional clinical scans. Conclusions: The proposed DCE-MRI approach provides clinically superior image quality with higher spatial resolution and coverage than currently available approaches. These advantages may allow comprehensive permeability mapping in the brain, which is especially valuable in the setting of large lesions or multiple lesions spread throughout the brain.« less
Some intriguing aspects of multiparticle production processes
NASA Astrophysics Data System (ADS)
Wilk, Grzegorz; Włodarczyk, Zbigniew
2018-04-01
Multiparticle production processes provide valuable information about the mechanism of the conversion of the initial energy of the projectiles into a number of secondaries, through measurements of their multiplicity distributions and their distributions in phase space. They therefore serve as a reference point for more involved measurements. Distributions in phase space are usually investigated using the statistical approach, which is very successful in general but fails for small colliding systems, small multiplicities, and at the edges of the allowed phase space, where underlying dynamical effects competing with the statistical distributions take over. We discuss an alternative approach, which applies to the whole phase space without detailed knowledge of the dynamics. It is based on a modification of the usual statistics, generalizing it to a superstatistical form. We particularly stress the scaling and self-similar properties of such an approach, which manifest themselves as log-periodic oscillations and as oscillations of temperature caused by sound waves in hadronic matter. Concerning the multiplicity distributions, we discuss in detail the oscillatory behavior of the modified combinants apparently observed in experimental data.
Multifractal characteristics of multiparticle production in heavy-ion collisions at SPS energies
NASA Astrophysics Data System (ADS)
Khan, Shaista; Ahmad, Shakeel
Entropy, dimensions and other multifractal characteristics of the multiplicity distributions of relativistic charged hadrons produced in ion-ion collisions at SPS energies are investigated. The analysis of the experimental data is carried out in terms of the phase-space bin-size dependence of the multiplicity distributions, following Takagi's approach. A second method is also followed to study multifractality which is not tied to the bin width and/or the detector resolution, but instead involves the multiplicity distribution of charged particles in full phase space in terms of information entropy and its generalization, Rényi's order-q information entropy. The findings reveal the presence of multifractal structure, a remarkable property of the fluctuations. Nearly constant values of the multifractal specific heat c estimated by the two methods of analysis indicate that the parameter c may be used as a universal characteristic of particle production in high-energy collisions. The results obtained from the analysis of the experimental data agree well with the predictions of the Monte Carlo model AMPT.
NASA Astrophysics Data System (ADS)
Kartikasari, A.; Widjajanti, D. B.
2017-02-01
The aim of this study is to explore the effectiveness of a problem-based learning approach based on multiple intelligences in developing students' achievement, mathematical connection ability, and self-esteem. This is an experimental study whose sample comprised 30 Grade X students of MIA III, MAN Yogyakarta III. The learning materials implemented covered trigonometry and geometry. For the purpose of this study, the researchers designed an achievement test made up of 44 multiple-choice questions, 24 on trigonometry and 20 on geometry. The researchers also designed a mathematical connection test consisting of 7 essay questions and a self-esteem questionnaire of 30 items. The learning approach was said to be effective if the proportion of students who achieved the KKM on the achievement test, and the proportions of students who achieved at least a high-category score on the mathematical connection test and on the self-esteem questionnaire, were greater than or equal to 70%. Based on hypothesis testing at the 5% significance level, it can be concluded that the problem-based learning approach based on multiple intelligences was effective in terms of students' achievement, mathematical connection ability, and self-esteem.
Narrow band imaging combined with water immersion technique in the diagnosis of celiac disease.
Valitutti, Francesco; Oliva, Salvatore; Iorfida, Donatella; Aloi, Marina; Gatti, Silvia; Trovato, Chiara Maria; Montuori, Monica; Tiberti, Antonio; Cucchiara, Salvatore; Di Nardo, Giovanni
2014-12-01
The "multiple-biopsy" approach in both the duodenum and the bulb is the best strategy to confirm the diagnosis of celiac disease; however, it increases the invasiveness of the procedure and is time-consuming. To evaluate the diagnostic yield of a single biopsy guided by narrow-band imaging combined with the water immersion technique in paediatric patients. Prospective assessment of the diagnostic accuracy of a narrow-band imaging/water immersion technique-driven biopsy approach versus the standard protocol in suspected celiac disease. The experimental approach correctly diagnosed 35/40 children with celiac disease, with an overall diagnostic sensitivity of 87.5% (95% CI: 77.3-97.7). An altered pattern on narrow-band imaging/water immersion technique endoscopic visualization was significantly associated with villous atrophy at guided biopsy (Spearman rho 0.637, p<0.001). Concordance of narrow-band imaging/water immersion technique endoscopic assessments between two operators was high (K: 0.884). The experimental protocol was highly time-saving compared with the standard protocol. An altered narrow-band imaging/water immersion technique pattern coupled with high anti-transglutaminase antibodies could allow a single guided biopsy to diagnose celiac disease. When no altered mucosal pattern is visible even with narrow-band imaging/water immersion technique, multiple bulbar and duodenal biopsies should be obtained.
Experimental design and quantitative analysis of microbial community multiomics.
Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis
2017-11-30
Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.
IR-IR Conformation Specific Spectroscopy of Na+(Glucose) Adducts
NASA Astrophysics Data System (ADS)
Voss, Jonathan M.; Kregel, Steven J.; Fischer, Kaitlyn C.; Garand, Etienne
2018-01-01
We report an IR-IR double resonance study of the structural landscape present in the Na+(glucose) complex. Our experimental approach involves minimal modifications to a typical IR predissociation setup, and can be carried out via ion-dip or isomer-burning methods, providing additional flexibility to suit different experimental needs. In the current study, the single-laser IR predissociation spectrum of Na+(glucose), which clearly indicates contributions from multiple structures, was experimentally disentangled to reveal the presence of three α-conformers and five β-conformers. Comparisons with calculations show that these eight conformations correspond to the lowest energy gas-phase structures with distinctive Na+ coordination.
A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set
Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong
2012-01-01
Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined in an experimental study using three MCDM methods, the well-known clustering algorithm k-means, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
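One classical MCDM method that can rank candidate cluster counts against several validity measures is TOPSIS. The sketch below is a generic pure-Python implementation; the criteria matrix, weights, and the choice of TOPSIS itself are illustrative assumptions, and the paper's exact MCDM methods may differ:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows, e.g. candidate numbers of clusters)
    scored on several criteria (columns, e.g. cluster-validity indices).
    benefit[j] is True when criterion j is better-when-larger.
    Returns a closeness score per alternative (higher is better)."""
    m, k = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply the weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(k)]
    R = [[weights[j] * matrix[i][j] / norms[j] for j in range(k)] for i in range(m)]
    ideal = [(max if benefit[j] else min)(R[i][j] for i in range(m)) for j in range(k)]
    anti = [(min if benefit[j] else max)(R[i][j] for i in range(m)) for j in range(k)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((R[i][j] - ideal[j]) ** 2 for j in range(k)))
        d_neg = math.sqrt(sum((R[i][j] - anti[j]) ** 2 for j in range(k)))
        scores.append(d_neg / (d_pos + d_neg) if d_pos + d_neg else 0.5)
    return scores
```

Selecting `max(range(len(scores)), key=scores.__getitem__)` then picks the preferred number of clusters among the alternatives.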
Guan, Fada; Johns, Jesse M; Vasudevan, Latha; Zhang, Guoqing; Tang, Xiaobin; Poston, John W; Braby, Leslie A
2015-06-01
Coincident counts can be observed in experimental radiation spectroscopy. Accurate quantification of the radiation source requires the detection efficiency of the spectrometer, which is often experimentally determined. However, Monte Carlo analysis can be used to supplement experimental approaches to determine the detection efficiency a priori. The traditional Monte Carlo method overestimates the detection efficiency as a result of omitting coincident counts caused mainly by multiple cascade source particles. In this study, a novel "multi-primary coincident counting" algorithm was developed using the Geant4 Monte Carlo simulation toolkit. A high-purity Germanium detector for ⁶⁰Co gamma-ray spectroscopy problems was accurately modeled to validate the developed algorithm. The simulated pulse height spectrum agreed well qualitatively with the measured spectrum obtained using the high-purity Germanium detector. The developed algorithm can be extended to other applications, with a particular emphasis on challenging radiation fields, such as counting multiple types of coincident radiations released from nuclear fission or used nuclear fuel.
Zimmer, Christoph
2016-01-01
Computational modeling is a key technique for analyzing models in systems biology. There are well-established methods for estimating the kinetic parameters in models based on ordinary differential equations (ODEs). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well-established approaches for experimental design, and even software tools. However, data from single-cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculating a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. The performance of the approach is evaluated in simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for application to realistically sized models.
Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition
Islam, Md. Rabiul
2014-01-01
The aim of this work is to propose a new feature and score fusion based iris recognition approach where voting method on Multiple Classifier Selection technique has been applied. Four Discrete Hidden Markov Model classifiers output, that is, left iris based unimodal system, right iris based unimodal system, left-right iris feature fusion based multimodal system, and left-right iris likelihood ratio score fusion based multimodal system, is combined using voting method to achieve the final recognition result. CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, recognition accuracy of the proposed system has been compared with existing N hamming distance score fusion approach proposed by Ma et al., log-likelihood ratio score fusion approach proposed by Schmid et al., and single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676
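The voting step that combines the four classifier outputs reduces, in its simplest form, to a plurality vote over the per-classifier identity decisions. A minimal sketch with hypothetical labels:

```python
from collections import Counter

def majority_vote(decisions):
    """Plurality vote over per-classifier identity decisions, e.g. from
    left-iris, right-iris, feature-fusion and score-fusion matchers.
    Ties are broken in favour of the earliest-listed classifier, since
    Counter preserves insertion order for equal counts."""
    return Counter(decisions).most_common(1)[0][0]
```

In the full system each decision would itself come from a Discrete Hidden Markov Model matcher, and the vote yields the final recognition result.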
Using constraints and their value for optimization of large ODE systems
Domijan, Mirela; Rand, David A.
2015-01-01
We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300
Multi-scale mechanics of granular solids from grain-resolved X-ray measurements
NASA Astrophysics Data System (ADS)
Hurley, R. C.; Hall, S. A.; Wright, J. P.
2017-11-01
This work discusses an experimental technique for studying the mechanics of three-dimensional (3D) granular solids. The approach combines 3D X-ray diffraction and X-ray computed tomography to measure grain-resolved strains, kinematics and contact fabric in the bulk of a granular solid, from which continuum strains, grain stresses, interparticle forces and coarse-grained elasto-plastic moduli can be determined. We demonstrate the experimental approach and analysis of selected results on a sample of 1099 stiff, frictional grains undergoing multiple uniaxial compression cycles. We investigate the inter-particle force network, elasto-plastic moduli and associated length scales, reversibility of mechanical responses during cyclic loading, the statistics of microscopic responses and microstructure-property relationships. This work highlights the fundamental insight into granular mechanics furnished by combined X-ray measurements and describes future directions in the field of granular materials that can be pursued with such approaches.
Assessing health risks of synthetic vitreous fibers: an integrative approach.
McClellan, R O
1994-12-01
This paper reviews a tiered approach to acquiring information from multiple experimental systems to understand and assess the potential human health risks of exposure to airborne synthetic fibers. The approach is grounded in the now widely accepted research-risk assessment-risk management paradigm. It involves the acquisition of information that will provide mechanistic linkages within the exposure-dose-response paradigm. It advocates the use of the inhalation route of exposure for developing relevant information for assessing human health risks and calls attention to serious problems encountered using nonphysiologic routes of administration to assess human health risks.
NASA Astrophysics Data System (ADS)
Xie, Qijie; Zheng, Bofang; Shu, Chester
2017-05-01
We demonstrate a simple approach for adjustable multiplication of optical pulses in a fiber using the temporal Talbot effect. Binary electrical patterns are used to control the multiplication factor in our approach. The input 10 GHz picosecond pulses are pedestal-free and are shaped directly from a CW laser. The pulses are then intensity modulated by different sets of binary patterns prior to entering a fiber of fixed dispersion. Tunable repetition-rate multiplication by factors of 2, 4, and 8 has been achieved, and pulse trains at up to 80 GHz have been experimentally generated. We also evaluate numerically the influence of the extinction ratio of the intensity modulator on the performance of the multiplied pulse train. In addition, the impact of the modulator bias on the uniformity of the output pulses has been analyzed through simulation and experiment, and good agreement is reached. Finally, we perform numerical simulations of the RF spectral characteristics of the output pulses. The insensitivity of the signal-to-subharmonic noise ratio (SSNR) to the laser linewidth shows that our multiplication scheme is highly tolerant to the incoherence of the input optical pulses.
He, Jianjun; Gu, Hong; Liu, Wenqi
2012-01-01
It is well known that an important step toward understanding the functions of a protein is to determine its subcellular location. Although numerous prediction algorithms have been developed, most of them focus on proteins with only one location. In recent years, researchers have begun to pay attention to subcellular localization prediction for proteins with multiple sites. However, almost all existing approaches fail to take into account the correlations among locations caused by proteins with multiple sites, which may be important information for improving the prediction accuracy for such proteins. In this paper, a new algorithm that can effectively exploit the correlations among locations is proposed using a Gaussian process model. In addition, the algorithm can realize an optimal linear combination of various feature extraction technologies and is robust to imbalanced data sets. Experimental results on a human protein data set show that the proposed algorithm is valid and achieves better performance than existing approaches.
A model for dynamic allocation of human attention among multiple tasks
NASA Technical Reports Server (NTRS)
Sheridan, T. B.; Tulga, M. K.
1978-01-01
The problem of multi-task attention allocation, with special reference to aircraft piloting, is discussed, along with the experimental paradigm used to characterize this situation and the experimental results obtained in the first phase of the research. A qualitative description of an approach to mathematical modeling and some results obtained with it are also presented to indicate which aspects of the model are most promising. Two appendices are given which (1) discuss the model in relation to graph theory and optimization and (2) specify the optimization algorithm of the model.
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
Vision navigation, which determines position and attitude by real-time processing of images collected from imaging sensors, can operate without a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is used to calculate the 3D navigation parameters of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in the plane and 1.8 m in height during a 5 min GPS outage over distances within 1500 m. PMID:26828496
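The linear road-segment index described in the second step can be caricatured in a few lines: geo-referenced images are keyed by their chainage (distance along the road segment), and the nearest stored image is retrieved by binary search. The class and method names are invented for illustration; the paper's GRID model additionally stores geo-referencing information and multi-segment topology.

```python
import bisect

class GridIndex:
    """Toy 1-D linear index over one road segment.

    Images are keyed by chainage in metres along the segment; retrieval
    finds the stored image closest to a query chainage in O(log n).
    """
    def __init__(self):
        self.chainages = []
        self.images = []

    def add(self, chainage, image_id):
        i = bisect.bisect(self.chainages, chainage)
        self.chainages.insert(i, chainage)
        self.images.insert(i, image_id)

    def nearest(self, chainage):
        i = bisect.bisect(self.chainages, chainage)
        cands = [j for j in (i - 1, i) if 0 <= j < len(self.images)]
        best = min(cands, key=lambda j: abs(self.chainages[j] - chainage))
        return self.images[best]

idx = GridIndex()
for ch, img in [(40.0, "img_a"), (120.0, "img_b"), (300.0, "img_c")]:
    idx.add(ch, img)
```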
ERIC Educational Resources Information Center
Wang, Tzu-Hua
2010-01-01
This research combines the idea of cake format dynamic assessment defined by Sternberg and Grigorenko (2001) with the "graduated prompt approach" proposed by Campione and Brown (1985, 1987) to develop a multiple-choice Web-based dynamic assessment system. This research adopts a quasi-experimental design to…
gPhysics--Using Smart Glasses for Head-Centered, Context-Aware Learning in Physics Experiments
ERIC Educational Resources Information Center
Kuhn, Jochen; Lukowicz, Paul; Hirth, Michael; Poxrucker, Andreas; Weppner, Jens; Younas, Junaid
2016-01-01
Smart Glasses such as Google Glass are mobile computers combining classical Head-Mounted Displays (HMD) with several sensors. Therefore, contact-free, sensor-based experiments can be linked with relating, near-eye presented multiple representations. We will present a first approach on how Smart Glasses can be used as an experimental tool for…
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializing it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
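The two-gate Hodgkin-Huxley kinetics the generic model is built from can be sketched as follows. All parameter values here (conductance, reversal potential, Boltzmann midpoints and slopes, time constants) are invented for illustration and are not the paper's fitted values.

```python
import math

def gate_inf(v, v_half, k):
    """Steady-state gate value: a Boltzmann function of membrane voltage."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / k))

def step_current(v, m, h, dt):
    """One forward-Euler step of a generic two-gate HH current
    I = g * m * h * (V - E), with first-order gate relaxation
    dx/dt = (x_inf(V) - x) / tau_x. Units: mV, ms, nS."""
    g, E = 1.0, -80.0                  # max conductance, reversal potential
    m_inf = gate_inf(v, -30.0, 5.0)    # activation (positive slope)
    h_inf = gate_inf(v, -60.0, -5.0)   # inactivation (negative slope)
    tau_m, tau_h = 1.0, 20.0           # gate time constants (ms)
    m += dt * (m_inf - m) / tau_m
    h += dt * (h_inf - h) / tau_h
    return g * m * h * (v - E), m, h

# clamp the membrane at -20 mV for 10 ms and watch the current develop
v, m, h = -20.0, 0.0, 1.0
for _ in range(100):                   # 100 steps of dt = 0.1 ms
    i_ion, m, h = step_current(v, m, h, 0.1)
```

In the paper's formulation the number of such currents and their gate parameters are user-defined and fitted to recorded action potential waveforms by multiobjective optimisation.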
NASA Astrophysics Data System (ADS)
Bonelli, Francesco; Tuttafesta, Michele; Colonna, Gianpiero; Cutrone, Luigi; Pascazio, Giuseppe
2017-10-01
This paper describes the most advanced results obtained in the context of fluid dynamic simulations of high-enthalpy flows using detailed state-to-state air kinetics. Thermochemical non-equilibrium, typical of supersonic and hypersonic flows, was modeled by using both the accurate state-to-state approach and the multi-temperature model proposed by Park. The accuracy of the two thermochemical non-equilibrium models was assessed by comparing the results with experimental findings, showing better predictions provided by the state-to-state approach. To overcome the huge computational cost of the state-to-state model, a multiple-node GPU implementation, based on an MPI-CUDA approach, was employed, and a comprehensive code performance analysis is presented. Both the pure MPI-CPU and the MPI-CUDA implementations exhibit excellent scalability. GPUs outperform CPUs, especially when the state-to-state approach is employed, showing speed-ups of a single GPU with respect to a single-core CPU larger than 100 for both one MPI process and multiple MPI processes.
A Hough transform global probabilistic approach to multiple-subject diffusion MRI tractography.
Aganj, Iman; Lenglet, Christophe; Jahanshad, Neda; Yacoub, Essa; Harel, Noam; Thompson, Paul M; Sapiro, Guillermo
2011-08-01
A global probabilistic fiber tracking approach based on the voting process provided by the Hough transform is introduced in this work. The proposed framework tests candidate 3D curves in the volume, assigning to each one a score computed from the diffusion images, and then selects the curves with the highest scores as the potential anatomical connections. The algorithm avoids local minima by performing an exhaustive search at the desired resolution. The technique is easily extended to multiple subjects, considering a single representative volume where the registered high-angular resolution diffusion images (HARDI) from all the subjects are non-linearly combined, thereby obtaining population-representative tracts. The tractography algorithm is run only once for the multiple subjects, and no tract alignment is necessary. We present experimental results on HARDI volumes, ranging from simulated and 1.5T physical phantoms to 7T and 4T human brain and 7T monkey brain datasets. Copyright © 2011 Elsevier B.V. All rights reserved.
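The curve-scoring principle used here (every data point votes for each candidate it supports, and the highest-scoring candidates are kept, avoiding local minima by exhaustive search) can be illustrated with the classical 2D line Hough transform. This is a deliberately simplified stand-in for the paper's 3D curve search over diffusion data; the grid resolution and point set below are arbitrary.

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Score line candidates (theta, rho) by accumulating one vote per
    point per orientation, then return the best-scoring candidate.
    Lines are parameterised as x*cos(theta) + y*sin(theta) = rho."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    return acc.most_common(1)[0]

# ten collinear points on y = x: the winner is theta near 135 deg, rho = 0
points = [(i, i) for i in range(1, 11)]
(t, rho_bin), votes = hough_lines(points)
```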
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
Noise Modeling From Conductive Shields Using Kirchhoff Equations.
Sandin, Henrik J; Volegov, Petr L; Espy, Michelle A; Matlashov, Andrei N; Savukov, Igor M; Schultz, Larry J
2010-10-09
Progress in the development of high-sensitivity magnetic-field measurements has stimulated interest in understanding the magnetic noise of conductive materials, especially of magnetic shields based on high-permeability materials and/or high-conductivity materials. For example, SQUIDs and atomic magnetometers have been used in many experiments with mu-metal shields, and additionally SQUID systems frequently have radio frequency shielding based on thin conductive materials. Typical existing approaches to modeling noise only work with simple shield and sensor geometries while common experimental setups today consist of multiple sensor systems with complex shield geometries. With complex sensor arrays used in, for example, MEG and Ultra Low Field MRI studies, knowledge of the noise correlation between sensors is as important as knowledge of the noise itself. This is crucial for incorporating efficient noise cancelation schemes for the system. We developed an approach that allows us to calculate the Johnson noise for arbitrary shaped shields and multiple sensor systems. The approach is efficient enough to be able to run on a single PC system and return results on a minute scale. With a multiple sensor system our approach calculates not only the noise for each sensor but also the noise correlation matrix between sensors. Here we will show how the algorithm can be implemented.
IR-IR Conformation Specific Spectroscopy of Na+(Glucose) Adducts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voss, Jonathan M.; Kregel, Steven J.; Fischer, Kaitlyn C.
2017-09-27
Here we report an IR-IR double resonance study of the structural landscape present in the Na+(glucose) complex. Our experimental approach involves minimal modifications to a typical IR predissociation setup and can be carried out via ion-dip or isomer-burning methods, providing additional flexibility to suit different experimental needs. In the current study, the single-laser IR predissociation spectrum of Na+(glucose), which clearly indicates contributions from multiple structures, was experimentally disentangled to reveal the presence of three α-conformers and five β-conformers. Comparisons with calculations show that these eight conformations correspond to the lowest-energy gas-phase structures with distinctive Na+ coordination.
Studying technology use as social practice: the untapped potential of ethnography
2011-01-01
Information and communications technologies (ICTs) in healthcare are often introduced with expectations of higher-quality, more efficient, and safer care. Many fail to meet these expectations. We argue here that the well-documented failures of ICTs in healthcare are partly attributable to the philosophical foundations of much health informatics research. Positivistic assumptions underpinning the design, implementation and evaluation of ICTs (in particular the notion that technology X has an impact which can be measured and reproduced in new settings), and the deterministic experimental and quasi-experimental study designs which follow from these assumptions, have inherent limitations when ICTs are part of complex social practices involving multiple human actors. We suggest that while experimental and quasi-experimental studies have an important place in health informatics research overall, ethnography is the preferred methodological approach for studying ICTs introduced into complex social systems. But for ethnographic approaches to be accepted and used to their full potential, many in the health informatics community will need to revisit their philosophical assumptions about what counts as research rigor. PMID:21521535
Ferguson, Adam R.; Popovich, Phillip G.; Xu, Xiao-Ming; Snow, Diane M.; Igarashi, Michihiro; Beattie, Christine E.; Bixby, John L.
2014-01-01
Abstract The lack of reproducibility in many areas of experimental science has a number of causes, including a lack of transparency and precision in the description of experimental approaches. This has far-reaching consequences, including wasted resources and slowing of progress. Additionally, the large number of laboratories around the world publishing articles on a given topic make it difficult, if not impossible, for individual researchers to read all of the relevant literature. Consequently, centralized databases are needed to facilitate the generation of new hypotheses for testing. One strategy to improve transparency in experimental description, and to allow the development of frameworks for computer-readable knowledge repositories, is the adoption of uniform reporting standards, such as common data elements (data elements used in multiple clinical studies) and minimum information standards. This article describes a minimum information standard for spinal cord injury (SCI) experiments, its major elements, and the approaches used to develop it. Transparent reporting standards for experiments using animal models of human SCI aim to reduce inherent bias and increase experimental value. PMID:24870067
Zimmer, Christoph
2016-01-01
Background Computational modeling is a key technique for analyzing models in systems biology. There are well-established methods for the estimation of kinetic parameters in models of ordinary differential equations (ODEs). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well-established approaches for experimental design and even software tools. However, data from single-cell experiments on signaling pathways in systems biology often show intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. Methods The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach to calculate a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of a recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for application to realistically sized models. PMID:27583802
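For a deterministic model with additive Gaussian noise, the Fisher information matrix reduces to an outer product of output sensitivities. The sketch below computes it by central finite differences on a toy exponential-decay model; it illustrates only the definition of the matrix, not the MSS evaluation of pseudo data for stochastic models, and the model and parameter values are invented.

```python
import math

def model(theta, times):
    """Toy observation model y(t) = a * exp(-k * t); a stand-in for the
    pathway simulations whose information content is being assessed."""
    a, k = theta
    return [a * math.exp(-k * t) for t in times]

def fisher_information(theta, times, sigma=0.1, eps=1e-6):
    """F_ij = (1/sigma^2) * sum_t (dy_t/dtheta_i)(dy_t/dtheta_j) for
    additive Gaussian noise of standard deviation sigma, with the
    sensitivities dy/dtheta taken by central finite differences."""
    n = len(theta)
    sens = []
    for i in range(n):
        up = list(theta); up[i] += eps
        dn = list(theta); dn[i] -= eps
        yu, yd = model(up, times), model(dn, times)
        sens.append([(u - d) / (2 * eps) for u, d in zip(yu, yd)])
    return [[sum(si * sj for si, sj in zip(sens[i], sens[j])) / sigma ** 2
             for j in range(n)] for i in range(n)]

F = fisher_information([2.0, 0.5], [0.0, 1.0, 2.0, 3.0])
```

Experimental design then amounts to choosing measurement times (or other controls) that make some scalar functional of F, such as its determinant, as large as possible.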
Statistical modelling of networked human-automation performance using working memory capacity.
Ahmed, Nisar; de Visser, Ewart; Shaw, Tyler; Mohamed-Ameen, Amira; Campbell, Mark; Parasuraman, Raja
2014-01-01
This study examines the challenging problem of modelling the interaction between individual attentional limitations and decision-making performance in networked human-automation system tasks. Analysis of real experimental data from a task involving networked supervision of multiple unmanned aerial vehicles by human participants shows that both task load and network message quality affect performance, but that these effects are modulated by individual differences in working memory (WM) capacity. These insights were used to assess three statistical approaches for modelling and making predictions with real experimental networked supervisory performance data: classical linear regression, non-parametric Gaussian processes and probabilistic Bayesian networks. It is shown that each of these approaches can help designers of networked human-automated systems cope with various uncertainties in order to accommodate future users by linking expected operating conditions and performance from real experimental data to observable cognitive traits like WM capacity. Practitioner Summary: Working memory (WM) capacity helps account for inter-individual variability in operator performance in networked unmanned aerial vehicle supervisory tasks. This is useful for reliable performance prediction near experimental conditions via linear models; robust statistical prediction beyond experimental conditions via Gaussian process models and probabilistic inference about unknown task conditions/WM capacities via Bayesian network models.
Fault Detection for Automotive Shock Absorber
NASA Astrophysics Data System (ADS)
Hernandez-Alcantara, Diana; Morales-Menendez, Ruben; Amezquita-Brooks, Luis
2015-11-01
Fault detection for automotive semi-active shock absorbers is challenging due to their nonlinear dynamics and the strong influence of disturbances such as the road profile. The first obstacle for this task is modeling the fault, which has been shown to be multiplicative in nature, whereas many of the most widespread fault detection schemes consider additive faults. Two model-based fault detection algorithms for semi-active shock absorbers are compared: an observer-based approach and a parameter identification approach. The performance of these schemes is validated and compared using an experimentally validated commercial vehicle model. Early results show that the parameter identification approach is more accurate, whereas the observer-based approach is less sensitive to parametric uncertainty.
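A parameter identification approach of the kind compared above can be sketched with an ordinary least-squares fit of a linear spring-damper model F = k*x + c*v: a persistent drop in the identified damping coefficient c relative to its nominal value would flag a degraded absorber. This toy linear model ignores the semi-active damper's nonlinear dynamics and is not the paper's validated vehicle model; all names and values are illustrative.

```python
import math

def identify_params(xs, vs, forces):
    """Least-squares fit of F = k*x + c*v via the 2x2 normal equations.

    xs, vs, forces: suspension deflection, deflection rate, and measured
    damper force samples. Returns the identified stiffness k and damping c.
    """
    sxx = sum(x * x for x in xs)
    svv = sum(v * v for v in vs)
    sxv = sum(x * v for x, v in zip(xs, vs))
    sxf = sum(x * f for x, f in zip(xs, forces))
    svf = sum(v * f for v, f in zip(vs, forces))
    det = sxx * svv - sxv * sxv          # nonzero if x and v are not collinear
    k = (sxf * svv - svf * sxv) / det
    c = (svf * sxx - sxf * sxv) / det
    return k, c

# synthetic noiseless data from a healthy damper with k = 2, c = 3
xs = [0.1 * i for i in range(1, 21)]
vs = [math.sin(i) for i in range(1, 21)]
forces = [2.0 * x + 3.0 * v for x, v in zip(xs, vs)]
k, c = identify_params(xs, vs, forces)
```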
An integrated experimental approach to treating young people who sexually abuse.
Longo, Robert E
2004-01-01
This article promotes the use of an integrated (holistic) approach to treating juvenile sexual offenders. An integrated model takes into account the fact that: (a) youth are resilient, (b) youth progress through various stages of development, (c) these stages are often arrested as a result of trauma, child abuse and neglect, and attachment disorders, (d) humanistic approaches and the therapeutic relationship are essential to the healing and recovery process, (e) youth learn and work with a variety of learning styles and multiple intelligences, (f) many traditional assessment and treatment approaches can be modified and blended with an integrated approach, and (g) the use of experiential treatments can have a positive and profound impact in treating youth with sexual behavior problems.
Huang, Xiaojing; Lauer, Kenneth; Clark, Jesse N.; ...
2015-03-13
We report an experimental ptychography measurement performed in fly-scan mode. With a visible-light laser source, we demonstrate a 5-fold reduction of data acquisition time. By including multiple mutually incoherent modes into the incident illumination, high quality images were successfully reconstructed from blurry diffraction patterns. Thus, this approach significantly increases the throughput of ptychography, especially for three-dimensional applications and the visualization of dynamic systems.
ERIC Educational Resources Information Center
Chi, Min; VanLehn, Kurt; Litman, Diane; Jordan, Pamela
2011-01-01
Pedagogical strategies are policies for a tutor to decide the next action when there are multiple actions available. When the content is controlled to be the same across experimental conditions, there has been little evidence that tutorial decisions have an impact on students' learning. In this paper, we applied Reinforcement Learning (RL) to…
Markov Chains For Testing Redundant Software
NASA Technical Reports Server (NTRS)
White, Allan L.; Sjogren, Jon A.
1990-01-01
Preliminary design developed for validation experiment that addresses problems unique to assuring extremely high quality of multiple-version programs in process-control software. Approach takes into account inertia of controlled system in sense it takes more than one failure of control program to cause controlled system to fail. Verification procedure consists of two steps: experimentation (numerical simulation) and computation, with Markov model for each step.
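The idea that the controlled system's inertia tolerates a single control-program failure can be captured by a small absorbing Markov chain: the system fails only after failures accumulate faster than the controller recovers. The three-state model and all transition probabilities below are purely illustrative, not taken from the validation experiment.

```python
def step(dist, P):
    """One step of a discrete-time Markov chain: row vector times matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# State 0: healthy. State 1: one control-program failure, absorbed by the
# controlled system's inertia (usually recovers). State 2: system failure
# (absorbing). Probabilities per control cycle are made up for illustration.
P = [[0.99, 0.01, 0.00],
     [0.90, 0.09, 0.01],
     [0.00, 0.00, 1.00]]

dist = [1.0, 0.0, 0.0]
for _ in range(1000):          # probability mass after 1000 control cycles
    dist = step(dist, P)
p_system_failure = dist[2]
```

A single program failure (state 1) almost always returns to health, so the system-failure probability grows far more slowly than the per-cycle program-failure rate, which is the property the validation experiment must quantify.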
Creating aperiodic photonic structures by synthesized Mathieu-Gauss beams
NASA Astrophysics Data System (ADS)
Vasiljević, Jadranka M.; Zannotti, Alessandro; Timotijević, Dejan V.; Denz, Cornelia; Savić, Dragana M. Jović
2017-08-01
We demonstrate a kind of aperiodic photonic structure realized using the interference of multiple Mathieu-Gauss beams. Depending on the beam configurations, i.e., their mutual distances, angles of rotation, or phase relations, we are able to observe different classes of such aperiodic optically induced refractive index structures. Our experimental approach is based on optical induction in a single parallel writing process.
Physiologically relevant organs on chips
Yum, Kyungsuk; Hong, Soon Gweon; Lee, Luke P.
2015-01-01
Recent advances in integrating microengineering and tissue engineering have generated promising microengineered physiological models for experimental medicine and pharmaceutical research. Here we review the recent development of microengineered physiological systems, or organs on chips, that reconstitute the physiologically critical features of specific human tissues and organs and their interactions. This technology uses microengineering approaches to construct organ-specific microenvironments, reconstituting tissue structures, tissue–tissue interactions and interfaces, and dynamic mechanical and biochemical stimuli found in specific organs, to direct cells to assemble into functional tissues. We first discuss microengineering approaches to reproduce the key elements of physiologically important, dynamic mechanical microenvironments, biochemical microenvironments, and microarchitectures of specific tissues and organs in microfluidic cell culture systems. This is followed by examples of microengineered individual organ models that incorporate the key elements of physiological microenvironments into single microfluidic cell culture systems to reproduce organ-level functions. Finally, microengineered multiple organ systems that simulate multiple organ interactions to better represent human physiology, including human responses to drugs, is covered in this review. This emerging organs-on-chips technology has the potential to become an alternative to 2D and 3D cell culture and animal models for experimental medicine, human disease modeling, drug development, and toxicology. PMID:24357624
Katul, Gabriel G; Porporato, Amilcare; Nikora, Vladimir
2012-12-01
The existence of a "-1" power-law scaling at low wavenumbers in the longitudinal velocity spectrum of wall-bounded turbulence was explained by multiple mechanisms; however, experimental support has not been uniform across laboratory studies. This letter shows that Heisenberg's eddy viscosity approach can provide a theoretical framework that bridges these multiple mechanisms and explains the elusiveness of the "-1" power law in some experiments. Novel theoretical outcomes are conjectured about the role of intermittency and very-large scale motions in modifying the k⁻¹ scaling.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
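A one-dimensional caricature of the surrogate idea: fit a zero-mean Gaussian process to a handful of evaluations of a limit-state function g, then estimate the failure probability cheaply on the surrogate mean instead of the expensive model. The limit state, kernel length scale, and fixed (non-adaptive) design below are all invented; the paper's main contribution, the sequential optimal design of sampling points around the failure boundary, is not shown.

```python
import math
import random

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel with length scale ell."""
    return math.exp(-((a - b) ** 2) / (2 * ell ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Hypothetical limit state: failure when g(x) > 0. In the paper's setting
# g would be an expensive computer model, evaluated only at design points.
g = lambda x: x - 0.7
train_x = [i / 10 for i in range(11)]
train_y = [g(x) for x in train_x]

# GP posterior mean: mu(x) = k(x, X) K^-1 y, with a small jitter on K.
K = [[rbf(a, b) + (1e-6 if i == j else 0.0) for j, b in enumerate(train_x)]
     for i, a in enumerate(train_x)]
alpha = solve(K, train_y)

def surrogate(x):
    return sum(a * rbf(xt, x) for a, xt in zip(alpha, train_x))

# Monte Carlo failure probability on the cheap surrogate, not on g itself.
random.seed(0)
samples = [random.random() for _ in range(4000)]
p_fail = sum(surrogate(s) > 0 for s in samples) / len(samples)
```

For x uniform on [0, 1] the true failure probability is 0.3, and the surrogate estimate lands close to it; the value of the adaptive design in the paper is getting comparable accuracy with far fewer model evaluations.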
Approximate solution of the multiple watchman routes problem with restricted visibility range.
Faigl, Jan
2010-10-01
In this paper, a new self-organizing map (SOM) based adaptation procedure is proposed to address the multiple watchman route problem with the restricted visibility range in the polygonal domain W. A watchman route is represented by a ring of connected neuron weights that evolves in W, while obstacles are considered by approximation of the shortest path. The adaptation procedure considers a coverage of W by the ring in order to attract nodes toward uncovered parts of W. The proposed procedure is experimentally verified in a set of environments and several visibility ranges. Performance of the procedure is compared with the decoupled approach based on solutions of the art gallery problem and the consecutive traveling salesman problem. The experimental results show the suitability of the proposed procedure based on relatively simple supporting geometrical structures, enabling application of the SOM principles to watchman route problems in W.
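The ring-adaptation idea can be caricatured with a plain SOM whose neurons form a closed ring that evolves toward a set of target points. This toy sketch omits everything that makes the watchman problem hard — the restricted visibility range, obstacles, shortest-path approximation, and coverage of the whole polygon — and just shows a ring of neuron weights being attracted to targets; all parameters are arbitrary.

```python
import math
import random

random.seed(1)
# six target points on a unit circle (stand-ins for locations the route must visit)
targets = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6)) for k in range(6)]
m = 24                                              # neurons on the ring
ring = [(0.1 * math.cos(2 * math.pi * i / m),
         0.1 * math.sin(2 * math.pi * i / m)) for i in range(m)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

sigma, lr = 4.0, 0.8                                # neighborhood width, learning rate
for epoch in range(60):
    for t in random.sample(targets, len(targets)):  # present targets in random order
        w = min(range(m), key=lambda i: dist(ring[i], t))   # winner neuron
        for i in range(m):
            d = min(abs(i - w), m - abs(i - w))     # circular index distance on the ring
            h = math.exp(-d * d / (2 * sigma * sigma))
            x, y = ring[i]
            ring[i] = (x + lr * h * (t[0] - x), y + lr * h * (t[1] - y))
    sigma *= 0.95                                   # anneal neighborhood and learning rate
    lr *= 0.99

# how far each target is from its nearest ring node (coverage quality)
worst = max(min(dist(n, t) for n in ring) for t in targets)
```

After annealing, the ring passes close to every target; reading the winners off in ring order yields a closed tour, which is the basic mechanism the paper builds on.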
NASA Technical Reports Server (NTRS)
Christoffersen, R.; Loeffler, M. J.; Dukes, C. A.; Keller, L. P.; Baragiola, R. A.
2016-01-01
The use of pulsed laser irradiation to simulate the short duration, high-energy conditions characteristic of micrometeorite impacts is now an established approach in experimental space weathering studies. The laser generates both melt and vapor deposits that contain nanophase metallic Fe (npFe⁰) grains with size distributions and optical properties similar to those in natural impact-generated melt and vapor deposits. There remains uncertainty, however, about how well lasers simulate the mechanical work and internal (thermal) energy partitioning that occurs in actual impacts. We are currently engaged in making a direct comparison between the products of laser irradiation and experimental/natural hypervelocity impacts. An initial step reported here is to use analytical SEM and TEM to attain a better understanding of how the microstructure and composition of laser deposits evolve over multiple cycles of pulsed laser irradiation.
NASA Astrophysics Data System (ADS)
Toman, Blaza; Nelson, Michael A.; Lippa, Katrice A.
2016-10-01
Chemical purity assessment using quantitative 1H-nuclear magnetic resonance spectroscopy is based on the ratios of the mass and signal intensity of the analyte species to those of chemical standards of known purity. As such, it is an example of a calculation using a known measurement equation with multiple inputs. Though multiple samples are often analyzed during purity evaluations in order to assess measurement repeatability, the uncertainty evaluation must also account for contributions from the inputs to the measurement equation. Furthermore, there may be other uncertainty components inherent in the experimental design, such as the independent implementation of multiple calibration standards. As such, the uncertainty evaluation is not purely bottom up (based on the measurement equation) or top down (based on the experimental design), but inherently contains elements of both. This hybrid form of uncertainty analysis is readily implemented with Bayesian statistical analysis. In this article we describe this type of analysis in detail and illustrate it using data from an evaluation of chemical purity and its uncertainty for a folic acid material.
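The "bottom-up" part of such an evaluation can be sketched as Monte Carlo propagation through the standard qNMR measurement equation, P_a = (I_a/I_s)·(n_s/n_a)·(M_a/M_s)·(m_s/m_a)·P_s. This is only the measurement-equation half, not the paper's full Bayesian analysis (which additionally pools repeatability and multiple-standard effects), and every numerical value below is hypothetical.

```python
import random
import statistics

random.seed(0)

def noisy(mu, rel_sd):
    """One draw of an input with a given relative standard uncertainty (values hypothetical)."""
    return random.gauss(mu, mu * rel_sd)

M_a, M_s = 441.4, 204.2          # molar masses of analyte and standard (treated as exact)
n_a, n_s = 2, 1                  # protons contributing to each quantified signal
draws = []
for _ in range(100000):
    I_a = noisy(1.973, 0.002)    # integrated signal areas
    I_s = noisy(1.000, 0.002)
    m_a = noisy(21.5, 0.001)     # weighed masses (mg)
    m_s = noisy(10.0, 0.001)
    P_s = noisy(0.998, 0.0005)   # purity of the calibration standard
    draws.append((I_a / I_s) * (n_s / n_a) * (M_a / M_s) * (m_s / m_a) * P_s)

P_mean = statistics.fmean(draws)
u_P = statistics.stdev(draws)    # standard uncertainty propagated from the inputs
```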
SAW based micro- and acousto-fluidics in biomedicine
NASA Astrophysics Data System (ADS)
Ramasamy, Mouli; Varadan, Vijay K.
2017-04-01
Protein association starts with random collisions of individual proteins. Multiple collisions and rotational diffusion bring the molecules into a favorable mutual orientation. The majority of protein associations are influenced by electrostatic interactions; Brownian dynamics and transient-complex theory have traditionally been used to describe electrostatic rate enhancement. Due to recent advances in interdisciplinary sciences, an array of molecular assembly methods is being studied. Protein nanostructural assembly and macromolecular crowding, derived from subsets of biochemistry, are used to study protein-protein interactions and protein self-assembly. This paper investigates how to enhance the protein self-association rate and how to bridge the gap between simulations and experimental results. The methods proposed here include electrostatic rate enhancement, macromolecular crowding, nanostructural protein assembly, microfluidics-based approaches, and magnetic-force-based approaches. Of these, the microfluidic and magnetic-force-based approaches seem best suited to protein assembly at a wider scale, and combining them may yield better results. Although these methods are conceptually strong, a wide range of experiments is required to prevent disagreement between theory and practice. This proposal intends to study theoretical and experimental methods to implement the aforementioned assembly strategies, and to conclude with an extensive analysis of experimental data addressing practical feasibility.
Accurate Simulation and Detection of Coevolution Signals in Multiple Sequence Alignments
Ackerman, Sharon H.; Tillier, Elisabeth R.; Gatti, Domenico L.
2012-01-01
Background While the conserved positions of a multiple sequence alignment (MSA) are clearly of interest, non-conserved positions can also be important because, for example, destabilizing effects at one position can be compensated by stabilizing effects at another position. Different methods have been developed to recognize the evolutionary relationship between amino acid sites, and to disentangle functional/structural dependencies from historical/phylogenetic ones. Methodology/Principal Findings We have used two complementary approaches to test the efficacy of these methods. In the first approach, we have used a new program, MSAvolve, for the in silico evolution of MSAs, which records a detailed history of all covarying positions, and builds a global coevolution matrix as the accumulated sum of individual matrices for the positions forced to co-vary, the recombinant coevolution, and the stochastic coevolution. We have simulated over 1600 MSAs for 8 protein families, which reflect sequences of different sizes and proteins with widely different functions. The calculated coevolution matrices were compared with the coevolution matrices obtained for the same evolved MSAs with different coevolution detection methods. In a second approach we have evaluated the capacity of the different methods to predict close contacts in the representative X-ray structures of an additional 150 protein families using only experimental MSAs. Conclusions/Significance Methods based on the identification of global correlations between pairs were found to be generally superior to methods based only on local correlations in their capacity to identify coevolving residues using either simulated or experimental MSAs. 
However, the significant variability in the performance of different methods with different proteins suggests that the simulation of MSAs that replicate the statistical properties of the experimental MSA can be a valuable tool to identify the coevolution detection method that is most effective in each case. PMID:23091608
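A common local-correlation baseline for detecting covarying alignment columns — the kind of statistic the globally corrected methods above improve on — is the mutual information between two positions. A minimal sketch on a toy alignment, with no phylogenetic, gap, or entropy corrections:

```python
import math
from collections import Counter

def column_mi(msa, i, j):
    """Mutual information (bits) between columns i and j of an alignment
    given as a list of equal-length sequence strings."""
    n = len(msa)
    ci = Counter(s[i] for s in msa)            # marginal counts, column i
    cj = Counter(s[j] for s in msa)            # marginal counts, column j
    cij = Counter((s[i], s[j]) for s in msa)   # joint counts
    mi = 0.0
    for (a, b), c in cij.items():
        pij = c / n
        mi += pij * math.log2(pij * n * n / (ci[a] * cj[b]))
    return mi

# toy alignment: columns 0 and 2 co-vary perfectly (A<->D, G<->H);
# columns 1 and 3 are fully conserved, so they carry no covariation signal
msa = ["ACDE", "ACDE", "GCHE", "GCHE", "ACDE", "GCHE"]
```

On this toy example, `column_mi(msa, 0, 2)` is 1 bit while `column_mi(msa, 0, 1)` is 0; on real alignments such raw MI must be corrected for phylogenetic relatedness, which is exactly the distinction between the local and global methods compared in the study.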
Integrated data analysis for genome-wide research.
Steinfath, Matthias; Repsilber, Dirk; Scholz, Matthias; Walther, Dirk; Selbig, Joachim
2007-01-01
Integrated data analysis is introduced as the intermediate level of a systems biology approach to analyse different 'omics' datasets, i.e., genome-wide measurements of transcripts, protein levels or protein-protein interactions, and metabolite levels aiming at generating a coherent understanding of biological function. In this chapter we focus on different methods of correlation analyses ranging from simple pairwise correlation to kernel canonical correlation which were recently applied in molecular biology. Several examples are presented to illustrate their application. The input data for this analysis frequently originate from different experimental platforms. Therefore, preprocessing steps such as data normalisation and missing value estimation are inherent to this approach. The corresponding procedures, potential pitfalls and biases, and available software solutions are reviewed. The multiplicity of observations obtained in omics-profiling experiments necessitates the application of multiple testing correction techniques.
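The simplest of the correlation analyses mentioned — pairwise Pearson correlation between, say, a transcript profile and a metabolite profile — must already cope with the missing values typical of different experimental platforms. A sketch using pairwise-complete observations (both profiles are hypothetical; real pipelines would normalise first and correct the resulting p-values for multiple testing):

```python
import math
import statistics

def pearson_pairwise(x, y):
    """Pearson correlation using only positions observed (non-None) in both profiles."""
    pairs = [(a, b) for a, b in zip(x, y) if a is not None and b is not None]
    xs, ys = zip(*pairs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in pairs)
    den = math.sqrt(sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys))
    return num / den

transcript = [1.0, 2.0, None, 4.0, 5.0]   # hypothetical profile with a missing value
metabolite = [2.1, 3.9, 6.2, 8.1, None]   # hypothetical profile from another platform
r = pearson_pairwise(transcript, metabolite)
```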
Multi-instance learning based on instance consistency for image retrieval
NASA Astrophysics Data System (ADS)
Zhang, Miao; Wu, Zhize; Wan, Shouhong; Yue, Lihua; Yin, Bangjie
2017-07-01
Multiple-instance learning (MIL) has been successfully utilized in image retrieval, but existing approaches often fail to select positive instances correctly from positive bags, which can result in low accuracy. In this paper, we propose a new image retrieval approach called multiple-instance learning based on instance consistency (MILIC) to mitigate this issue. First, we select potential positive instances in each positive bag by ranking the instance-consistency (IC) values of its instances. Then, we design a feature representation scheme based on the potential positive instances, which captures the relationship among bags and instances and converts each bag into a single instance. Finally, we can use a standard single-instance learning strategy, such as the support vector machine, for object-based image retrieval. Experimental results on two challenging data sets show the effectiveness of our approach in terms of accuracy and run time.
Stochastic molecular model of enzymatic hydrolysis of cellulose for ethanol production
2013-01-01
Background During cellulosic ethanol production, cellulose hydrolysis is achieved by the synergistic action of a cellulase enzyme complex consisting of multiple enzymes with different modes of action. Enzymatic hydrolysis of cellulose is one of the bottlenecks in the commercialization of the process due to low hydrolysis rates and the high cost of enzymes. A robust hydrolysis model that can predict hydrolysis profiles under various scenarios can act as an important forecasting tool to improve the hydrolysis process. However, multiple factors affecting hydrolysis, such as cellulose structure and complex enzyme-substrate interactions, make it difficult to develop mathematical kinetic models that can simulate hydrolysis in the presence of multiple enzymes with high fidelity. In this study, a comprehensive hydrolysis model based on a stochastic molecular modeling approach, in which each hydrolysis event is translated into a discrete event, is presented. The model captures the structural features of cellulose, enzyme properties (modes of action, synergism, inhibition), and, most importantly, the dynamic morphological changes in the substrate that directly affect enzyme-substrate interactions during hydrolysis. Results Cellulose was modeled as a group of microfibrils consisting of elementary fibril bundles, where each elementary fibril was represented as a three-dimensional matrix of glucose molecules. Hydrolysis of cellulose was simulated using a Monte Carlo simulation technique. Cellulose hydrolysis results predicted by model simulations agree well with experimental data from the literature. Coefficients of determination for model predictions against experimental values were in the range of 0.75 to 0.96 for Avicel hydrolysis by CBH I action. The model was able to simulate the synergistic action of multiple enzymes during hydrolysis. 
The model simulations captured the important experimental observations: the effects of structural properties, enzyme inhibition, and enzyme loadings on hydrolysis, and the degree of synergism among enzymes. Conclusions The model was effective in capturing the dynamic behavior of cellulose hydrolysis during the action of individual as well as multiple cellulases. Simulations were in qualitative and quantitative agreement with experimental data. Several experimentally observed phenomena were simulated without the need for any additional assumptions or parameter changes, confirming the validity of the stochastic molecular modeling approach for quantitatively and qualitatively describing cellulose hydrolysis. PMID:23638989
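The discrete-event flavor of such a model can be illustrated with a toy Monte Carlo in which endoglucanase-like events cut chains at random internal bonds and CBH-like events release cellobiose (two glucose units) from chain ends. This deliberately ignores the 3-D fibril matrix, adsorption, and inhibition that the actual model tracks; chain counts, lengths, and event probabilities are arbitrary.

```python
import random

random.seed(42)
chains = [100] * 50          # toy substrate: 50 chains of 100 glucose units each
soluble = 0                  # glucose units released into solution
history = []                 # hydrolysis progress curve

for step in range(2000):
    i = random.randrange(len(chains))
    if random.random() < 0.3 and chains[i] > 2:     # endoglucanase-like: internal cut
        cut = random.randint(1, chains[i] - 1)      # split one chain into two
        chains.append(chains[i] - cut)
        chains[i] = cut
    else:                                           # CBH-like: release cellobiose from an end
        release = min(2, chains[i])
        chains[i] -= release
        soluble += release
        if chains[i] == 0:                          # chain fully consumed
            chains.pop(i)
    history.append(soluble)
```

Mass is conserved by construction (released plus remaining units always equals the initial total), which is the kind of bookkeeping invariant a stochastic molecular model relies on.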
Wigner tomography of multispin quantum states
NASA Astrophysics Data System (ADS)
Leiner, David; Zeier, Robert; Glaser, Steffen J.
2017-12-01
We study the tomography of multispin quantum states in the context of finite-dimensional Wigner representations. An arbitrary operator can be completely characterized and visualized using multiple shapes assembled from linear combinations of spherical harmonics [A. Garon, R. Zeier, and S. J. Glaser, Phys. Rev. A 91, 042122 (2015), 10.1103/PhysRevA.91.042122]. We develop a general methodology to experimentally recover these shapes by measuring expectation values of rotated axial spherical tensor operators and provide an interpretation in terms of fictitious multipole potentials. Our approach is experimentally demonstrated for quantum systems consisting of up to three spins using nuclear magnetic resonance spectroscopy.
Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.
Han, Hu; K Jain, Anil; Shan, Shiguang; Chen, Xilin
2017-08-10
Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal vs. nominal and holistic vs. local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability.
Robust Pedestrian Tracking and Recognition from FLIR Video: A Unified Approach via Sparse Coding
Li, Xin; Guo, Rui; Chen, Chao
2014-01-01
Sparse coding is an emerging method that has been successfully applied to both robust object tracking and recognition in the vision literature. In this paper, we propose to explore a sparse coding-based approach toward joint object tracking-and-recognition and explore its potential in the analysis of forward-looking infrared (FLIR) video to support nighttime machine vision systems. A key technical contribution of this work is to unify existing sparse coding-based approaches toward tracking and recognition under the same framework, so that they can benefit from each other in a closed loop. On the one hand, tracking the same object through temporal frames allows us to achieve improved recognition performance through dynamical updating of template/dictionary and combining multiple recognition results; on the other hand, the recognition of individual objects facilitates the tracking of multiple objects (i.e., walking pedestrians), especially in the presence of occlusion within a crowded environment. We report experimental results on both the CASIA Pedestrian Database and our own collected FLIR video database to demonstrate the effectiveness of the proposed joint tracking-and-recognition approach. PMID:24961216
Heuristics for multiobjective multiple sequence alignment.
Abbasi, Maryam; Paquete, Luís; Pereira, Francisco B
2016-07-15
Aligning multiple sequences arises in many tasks in Bioinformatics. However, the alignments produced by the current software packages are highly dependent on the parameter settings, such as the relative importance of opening gaps with respect to the increase of similarity. Choosing only one parameter setting may introduce an undesirable bias in further steps of the analysis and give too simplistic interpretations. In this work, we reformulate multiple sequence alignment from a multiobjective point of view. The goal is to generate several sequence alignments that represent a trade-off between maximizing the substitution score and minimizing the number of indels/gaps in the sum-of-pairs score function. This trade-off gives the practitioner further information about the similarity of the sequences, from which she could analyse and choose the most plausible alignment. We introduce several heuristic approaches, based on local search procedures, that compute a set of sequence alignments, which are representative of the trade-off between the two objectives (substitution score and indels). Several algorithm design options are discussed and analysed, with particular emphasis on the influence of the starting alignment and neighborhood search definitions on the overall performance. A perturbation technique is proposed to improve the local search, which provides a wide range of high-quality alignments. The proposed approach is tested experimentally on a wide range of instances. We performed several experiments with sequences obtained from the benchmark database BAliBASE 3.0. To evaluate the quality of the results, we calculate the hypervolume indicator of the set of score vectors returned by the algorithms. The results obtained allow us to identify reasonably good choices of parameters for our approach. Further, we compared our method in terms of correctly aligned pairs ratio and columns correctly aligned ratio with respect to reference alignments. 
Experimental results show that our approaches can obtain better results than T-Coffee and Clustal Omega in terms of the first ratio.
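The hypervolume indicator used above to score the returned alignment sets is straightforward to compute in two dimensions once both objectives are cast as minimization (e.g., negated substitution score and gap count). A sketch with hypothetical score vectors and reference point:

```python
def hypervolume_2d(points, ref):
    """Area dominated by a 2-D point set w.r.t. a reference point
    (both objectives to be minimized)."""
    front, best_y = [], float("inf")
    for x, y in sorted(set(points)):          # ascending x; keep nondominated points
        if y < best_y:
            front.append((x, y))
            best_y = y
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        hv += (ref[0] - x) * (prev_y - y)     # rectangle contributed by this point
        prev_y = y
    return hv

# three hypothetical score vectors, one of them dominated, against reference (10, 10)
hv = hypervolume_2d([(1, 5), (4, 2), (5, 5)], (10, 10))
```

A larger hypervolume means the algorithm's set of alignments covers more of the objective space, which is how the different heuristic variants are compared.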
Yang, Zhihao; Lin, Yuan; Wu, Jiajin; Tang, Nan; Lin, Hongfei; Li, Yanpeng
2011-10-01
Knowledge about protein-protein interactions (PPIs) unveils the molecular mechanisms of biological processes. However, the volume of published biomedical literature on protein interactions is expanding rapidly, making it increasingly difficult for interaction database curators to detect and curate protein interaction information manually. We present a multiple kernel learning-based approach for automatic PPI extraction from biomedical literature. The approach combines feature-based, tree, and graph kernels, and combines their output with a ranking support vector machine (Ranking SVM). Experimental evaluations show that the features in the individual kernels are complementary and that the combined kernel with Ranking SVM achieves better performance than the individual kernels, an equal-weight combination, and an optimal-weight combination. Our approach achieves state-of-the-art performance on comparable evaluations, with a 64.88% F-score and 88.02% AUC on the AImed corpus. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
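The basic multiple-kernel step — combining precomputed Gram matrices into one kernel — is a weighted elementwise sum. In the sketch below the three kernel matrices and the weights are hypothetical; the paper's approach learns the combination and feeds it to a Ranking SVM rather than fixing the weights.

```python
def combine_kernels(kernels, weights):
    """Elementwise weighted sum of precomputed kernel (Gram) matrices."""
    n = len(kernels[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]

K_feat  = [[1.0, 0.2], [0.2, 1.0]]   # hypothetical feature-based kernel (2 sentences)
K_tree  = [[1.0, 0.6], [0.6, 1.0]]   # hypothetical tree kernel
K_graph = [[1.0, 0.4], [0.4, 1.0]]   # hypothetical graph kernel
K = combine_kernels([K_feat, K_tree, K_graph], [0.5, 0.3, 0.2])
```

A nonnegative weighted sum of positive semidefinite matrices is again a valid kernel, which is what makes this combination usable inside any SVM.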
Piezoelectric T-matrix approach and multiple scattering of electroacoustic waves in thin plates
NASA Astrophysics Data System (ADS)
Darabi, Amir; Ruzzene, Massimo; Leamy, Michael J.
2017-12-01
Metamaterial-enhanced harvesting (MEH) of wave energy in thin plates and other structures has appeared recently for powering small sensors and devices. To support continued MEH concept development, this paper proposes a fully coupled T-matrix formulation for analyzing scattering of incident wave energy from a piezoelectric patch attached to a thin plate. More generally, the T-matrix represents an input-output relationship between incident and reflected waves from inclusions in a host layer, and is introduced herein for a piezoelectric patch connected to an external circuit. The utility of a T-matrix formalism is most apparent in scenarios employing multiple piezoelectric harvesters, where it can be re-used with other T-matrices (such as those previously formulated for rigid, void, and elastic inclusions) in a multiple scattering context to compute the total wavefield and other response quantities, such as harvested power. Following development of the requisite T-matrix, harvesting in an example funnel-shaped metamaterial waveguide structure is predicted using the multiple scattering approach. Enhanced wave energy harvesting predictions are verified through comparisons to experimental results of a funnel-shaped waveguide formed by placing rigid aluminum inclusions in, and multiple piezoelectric harvesters on, a Lexan plate. Good agreement with predicted response quantities is noted.
Multiplex biomarker approach to cardiovascular diseases.
Adamcova, Michaela; Šimko, Fedor
2018-04-12
Personalized medicine is partly based on biomarker-guided diagnostics, therapy and prognosis, which is becoming an unavoidable concept in modern cardiology. However, the clinical significance of single biomarker studies is rather limited. A promising novel approach involves combining multiple markers into a multiplex panel, which could refine the management of a particular patient with cardiovascular pathology. Two principally different assay formats have been developed to facilitate simultaneous quantification of multiple antigens: planar array assays and microbead assays. These approaches may help to better evaluate the complexity and dynamic nature of pathologic processes and offer substantial cost and sample savings compared with traditional enzyme-linked immunosorbent assay (ELISA) measurements. However, a multiplex multimarker approach cannot become a generally disseminated method until analytical problems are solved and further studies confirming improved clinical outcomes are accomplished. These drawbacks underlie the fact that a limited number of systematic studies are available regarding the use of a multiplex biomarker approach in cardiovascular medicine to date. Our perspective underscores the significant potential of the use of the multiplex approach in a wider conceptual framework under the close cooperation of clinical and experimental cardiologists, pathophysiologists and biochemists so that the personalized approach based on standardized multimarker testing may improve the management of various cardiovascular pathologies and become a ubiquitous partner of population-derived evidence-based medicine.
Toward applied behavior analysis of life aloft
NASA Technical Reports Server (NTRS)
Brady, J. V.
1990-01-01
This article deals with systems at multiple levels, at least from cell to organization. It also deals with learning, decision making, and other behavior at multiple levels. Technological development of a human behavioral ecosystem appropriate to space environments requires an analytic and synthetic orientation, explicitly experimental in nature, dictated by scientific and pragmatic considerations, and closely approximating procedures of established effectiveness in other areas of natural science. The conceptual basis of such an approach has its roots in environmentalism which has two main features: (1) knowledge comes from experience rather than from innate ideas, divine revelation, or other obscure sources; and (2) action is governed by consequences rather than by instinct, reason, will, beliefs, attitudes or even the currently fashionable cognitions. Without an experimentally derived data base founded upon such a functional analysis of human behavior, the overgenerality of "ecological systems" approaches renders them incapable of ensuring the successful establishment of enduring space habitats. Without an experimentally derived functional account of individual behavioral variability, a natural science of behavior cannot exist. And without a natural science of behavior, the social sciences will necessarily remain in their current status as disciplines of less than optimal precision or utility. Such a functional analysis of human performance should provide an operational account of behavior change in a manner similar to the way in which Darwin's approach to natural selection accounted for the evolution of phylogenetic lines (i.e., in descriptive, nonteleological terms). Similarly, as Darwin's account has subsequently been shown to be consonant with information obtained at the cellular level, so too should behavior principles ultimately prove to be in accord with an account of ontogenetic adaptation at a biochemical level. 
It would thus seem obvious that the most productive conceptual and methodological approaches to long-term research investments focused upon human behavior in space environments will require multidisciplinary inputs from such wide-ranging fields as molecular biology, environmental physiology, behavioral biology, architecture, sociology, and political science, among others.
ERIC Educational Resources Information Center
Shadiev, Rustam; Wu, Ting-Ting; Huang, Yueh-Min
2017-01-01
In this study, we provide STR-texts to non-native English speaking students during English lectures to facilitate learning, attention, and meditation. We carry out an experiment to test the feasibility of our approach. Our results show that the participants in the experimental group both outperform those in the control group on the post-tests and…
ERIC Educational Resources Information Center
Lucyshyn, Joseph M.; Albin, Richard W.; Horner, Robert H.; Mann, Jane C.; Mann, James A.; Wadsworth, Gina
2007-01-01
This study examined the efficacy, social validity, and durability of a positive behavior support (PBS) approach with the family of a girl with autism and severe problem behavior. The study was conducted across a 10-year period beginning when the child was 5 years old. A multiple baseline across family routines design evaluated the functional…
ERIC Educational Resources Information Center
Cheremshynski, Christy; Lucyshyn, Joseph M.; Olson, Deborah L.
2013-01-01
The purpose of this study was to empirically investigate a family-centered approach to positive behavior support (PBS) that was designed to be culturally responsive to families of diverse linguistic and cultural backgrounds. A Japanese mother and a child with autism were the primary participants. Multiple research methods were used. A…
Bostrom, Mathias; O'Keefe, Regis
2009-01-01
Understanding the complex cellular and tissue mechanisms and interactions resulting in periprosthetic osteolysis requires a number of experimental approaches, each of which has its own set of advantages and limitations. In vitro models allow for the isolation of individual cell populations and have furthered our understanding of particle-cell interactions; however, they are limited because they do not mimic the complex tissue environment in which multiple cell interactions occur. In vivo animal models investigate the tissue interactions associated with periprosthetic osteolysis, but the choice of species and whether the implant system is subjected to mechanical load or to unloaded conditions are critical in assessing whether these models can be extrapolated to the clinical condition. Rigid analysis of retrieved tissue from clinical cases of osteolysis offers a different approach to studying the biologic process of osteolysis, but it is limited in that the tissue analyzed represents the end-stage of this process and, thus, may not reflect this process adequately. PMID:18612016
Okamoto, Takuma; Sakaguchi, Atsushi
2017-03-01
Generating acoustically bright and dark zones using loudspeakers is gaining attention as one of the most important acoustic communication techniques for such uses as personal sound systems and multilingual guide services. Although most conventional methods are based on numerical solutions, an analytical approach based on the spatial Fourier transform with a linear loudspeaker array has been proposed, and its advantages over conventional acoustic energy difference maximization have been shown in computer simulations. To establish the effectiveness of the proposal in actual environments, this paper experimentally validates the proposed approach with rectangular and Hann windows and compares it with three conventional methods: simple delay-and-sum beamforming, contrast maximization, and least-squares-based pressure matching, using an implemented linear array of 64 loudspeakers in an anechoic chamber. The results of both the computer simulations and the actual experiments show that the proposed approach with a Hann window controls the bright and dark zones more accurately than the conventional methods.
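The delay-and-sum baseline mentioned above reduces to computing one steering delay per array element. A minimal sketch for a 64-element linear array; the 4 cm element pitch is a hypothetical value, not taken from the paper.

```python
import math

C = 343.0            # speed of sound in air, m/s
N = 64               # loudspeakers in the linear array
PITCH = 0.04         # element spacing in metres (hypothetical)

def steering_delays(angle_deg):
    """Per-element delays (seconds) that steer a linear array's main lobe
    toward angle_deg measured from broadside."""
    s = math.sin(math.radians(angle_deg))
    raw = [n * PITCH * s / C for n in range(N)]
    lo = min(raw)
    return [d - lo for d in raw]   # shift so every delay is realizable (non-negative)

d = steering_delays(30.0)          # delay channel n by d[n] before summing the signals
```

Steering at broadside (0 degrees) gives all-zero delays; at 30 degrees the delay grows linearly across the array, with a maximum of 63 · 0.04 · sin(30°)/343 ≈ 3.67 ms across the aperture.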
Iron chelation and multiple sclerosis
Weigel, Kelsey J.; Lynch, Sharon G.; LeVine, Steven M.
2014-01-01
Histochemical and MRI studies have demonstrated that MS (multiple sclerosis) patients have abnormal deposition of iron in both gray and white matter structures. Data is emerging indicating that this iron could partake in pathogenesis by various mechanisms, e.g., promoting the production of reactive oxygen species and enhancing the production of proinflammatory cytokines. Iron chelation therapy could be a viable strategy to block iron-related pathological events or it can confer cellular protection by stabilizing hypoxia inducible factor 1α, a transcription factor that normally responds to hypoxic conditions. Iron chelation has been shown to protect against disease progression and/or limit iron accumulation in some neurological disorders or their experimental models. Data from studies that administered a chelator to animals with experimental autoimmune encephalomyelitis, a model of MS, support the rationale for examining this treatment approach in MS. Preliminary clinical studies have been performed in MS patients using deferoxamine. Although some side effects were observed, the large majority of patients were able to tolerate the arduous administration regimen, i.e., 6–8 h of subcutaneous infusion, and all side effects resolved upon discontinuation of treatment. Importantly, these preliminary studies did not identify a disqualifying event for this experimental approach. More recently developed chelators, deferasirox and deferiprone, are more desirable for possible use in MS given their oral administration, and importantly, deferiprone can cross the blood–brain barrier. However, experiences from other conditions indicate that the potential for adverse events during chelation therapy necessitates close patient monitoring and a carefully considered administration regimen. PMID:24397846
A framework for testing and comparing binaural models.
Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M
2018-03-01
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. These can best be resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi
2018-02-01
The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated strong scene recognition and semantic segmentation capabilities on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning, there has been no comprehensive evaluation of how well deep learning segments multiple organs across different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two deep learning approaches that used 2D and 3D deep convolutional neural networks (CNNs), with and without a pre-processing step. A conventional approach representing the state of the art in CT image segmentation without deep learning was also used for comparison. A dataset of 240 CT images covering different portions of the human body was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared to human annotations using the intersection-over-union (IU) ratio as the criterion. The experimental results showed mean IUs of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem in 3D CT images.
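The intersection-over-union criterion used in the evaluation above is straightforward to compute per organ from binary masks; a minimal sketch (the tiny masks are illustrative):

```python
import numpy as np

def intersection_over_union(pred, truth):
    """IU between a predicted and an annotated binary organ mask."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# per-organ IUs would then be averaged, as in the study's mean over 17 organs
ius = [intersection_over_union([1, 1, 0, 0], [0, 1, 1, 0]),
       intersection_over_union([1, 1], [1, 1])]
mean_iu = sum(ius) / len(ius)
```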
An Energy Efficient Cooperative Hierarchical MIMO Clustering Scheme for Wireless Sensor Networks
Nasim, Mehwish; Qaisar, Saad; Lee, Sungyoung
2012-01-01
In this work, we present an energy-efficient hierarchical cooperative clustering scheme for wireless sensor networks. Communication cost is a crucial factor in depleting the energy of sensor nodes. In the proposed scheme, nodes cooperate to form clusters at each level of the network hierarchy, ensuring maximal coverage and minimal energy expenditure with relatively uniform distribution of load within the network. Performance is enhanced by cooperative multiple-input multiple-output (MIMO) communication, ensuring energy efficiency for WSN deployments over large geographical areas. We test our scheme using TOSSIM and compare the proposed scheme with the cooperative MIMO (CMIMO) clustering scheme and the traditional multihop single-input single-output (SISO) routing approach. Performance is evaluated on the basis of the number of clusters, number of hops, energy consumption, and network lifetime. Experimental results show significant energy conservation and increase in network lifetime as compared to existing schemes. PMID:22368459
NASA Astrophysics Data System (ADS)
Guinn, Emily J.; Jagannathan, Bharat; Marqusee, Susan
2015-04-01
A fundamental question in protein folding is whether proteins fold through one or multiple trajectories. While most experiments indicate a single pathway, simulations suggest proteins can fold through many parallel pathways. Here, we use a combination of chemical denaturant, mechanical force and site-directed mutations to demonstrate the presence of multiple unfolding pathways in a simple, two-state folding protein. We show that these multiple pathways have structurally different transition states, and that seemingly small changes in protein sequence and environment can strongly modulate the flux between the pathways. These results suggest that in vivo, the crowded cellular environment could strongly influence the mechanisms of protein folding and unfolding. Our study resolves the apparent dichotomy between experimental and theoretical studies, and highlights the advantage of using a multipronged approach to reveal the complexities of a protein's free-energy landscape.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esteves, A. C. C., E-mail: a.c.c.esteves@tue.nl, E-mail: g.dewith@tue.nl; Lyakhova, K.; Riel, J. M. van
2014-03-28
Nowadays, many self-healing strategies are available for recovering mechanical damage of bulk polymeric materials. The recovery of surface-dependent functionalities on polymer films is, however, equally important and has been less investigated. In this work we study the ability of low surface energy cross-linked poly(ester urethane) networks containing perfluorinated dangling chains to self-replenish their surface after being submitted to repeated surface damage. For this purpose we used a combined experimental-simulation approach. Experimentally, the cross-linked films were intentionally damaged by cryo-microtoming to remove top layers and create new surfaces, which were characterized by water contact angle measurements and X-ray photoelectron spectroscopy. The same systems were simultaneously represented by a Dissipative Particle Dynamics simulation method, where the damage was modeled by removing the top film layers in the simulation box and replacing them with new "air" beads. The influence of different experimental parameters, such as the concentration of the low surface energy component and the molecular mobility span of the dangling chains, on the surface recovery is discussed. The combined approach reveals important details of the self-replenishing ability of damaged polymer films, such as the occurrence of multiple healing events, the self-replenishing efficiency, and the minimum "healing agent" concentration for a maximum recovery.
Cankorur-Cetinkaya, Ayca; Dias, Joao M L; Kludas, Jana; Slater, Nigel K H; Rousu, Juho; Oliver, Stephen G; Dikicioglu, Duygu
2017-06-01
Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257).
Power-efficient method for IM-DD optical transmission of multiple OFDM signals.
Effenberger, Frank; Liu, Xiang
2015-05-18
We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important.
A Recipe for Soft Fluidic Elastomer Robots
Marchese, Andrew D.; Katzschmann, Robert K.; Rus, Daniela
2015-01-01
This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes. PMID:27625913
A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs
Liu, Wan-Yu; Chou, Chun-Hung
2014-01-01
This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology so that more concurrent transmissions and suppressed overlapping channel interference can be achieved. Although the previous works have studied this joint problem, their linear programming models for the problem were not incorporated with some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990
Kent, Jack W
2016-02-03
New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought to reduce the multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate the power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.
Multiple Spectral-Spatial Classification Approach for Hyperspectral Data
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.
2010-01-01
A new multiple-classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies compared to previously proposed classification techniques.
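The unanimity rule for marker selection described above reduces to a simple array operation; a sketch, where the `-1` "no marker" code is an assumption of this illustration:

```python
import numpy as np

def consensus_markers(label_maps):
    """Keep a pixel as a marker only where every classifier assigns the
    same class; elsewhere emit -1 (no marker)."""
    stack = np.stack(label_maps)               # (n_classifiers, H, W)
    agree = np.all(stack == stack[0], axis=0)  # pixels with unanimous labels
    return np.where(agree, stack[0], -1)

a = np.array([[1, 2], [3, 0]])                 # classifier 1 label map
b = np.array([[1, 2], [0, 0]])                 # classifier 2 label map
markers = consensus_markers([a, b])            # [[1, 2], [-1, 0]]
```

The surviving markers would then seed the region-growing (minimum spanning forest) stage.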
Region-Based Prediction for Image Compression in the Cloud.
Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine
2018-04-01
With the increasing number of images stored in the cloud, external image similarities can be leveraged to compress images efficiently by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements over current image inter-coding solutions such as high efficiency video coding.
Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.
Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S
2010-01-01
Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Xue-Feng; Wu, Xing-Xin; Guo, Wen-Jie
2012-09-15
In the present paper, we aimed to examine the novel effects of cerebroside D, a glycoceramide compound, on murine experimental colitis. Cerebroside D significantly reduced the weight loss and mortality rate and alleviated the macroscopic and microscopic appearances of colitis induced by dextran sulfate sodium. This compound also decreased the levels of TNF-α, IFN-γ and IL-1β in intestinal tissue of mice with experimental colitis in a concentration-dependent manner, accompanied by a markedly increased serum level of IL-10. Cerebroside D inhibited proliferation and induced apoptosis of T cells activated by concanavalin A or anti-CD3 plus anti-CD28 antibodies. The compound did not show an effect on naive lymphocytes but prevented cells from entering S phase and G2/M phase during T cell activation. Moreover, treatment with cerebroside D led to apoptosis of activated T cells with the cleavage of caspases 3, 9, and 12 and PARP. These results showed multiple effects of cerebroside D against activated T cells, suggesting a novel approach to the treatment of colonic inflammation. Highlights: Cerebroside D, a glycoceramide compound, alleviated DSS-induced colitis; its mechanism involved multiple effects against activated T cells; it regulated cytokine profiles in mice with experimental colitis; it prevented T cells from entering S and G2/M phases during activation; and it led to apoptosis of activated T cells with the cleavage of caspases and PARP.
Predicting the synergy of multiple stress effects
NASA Astrophysics Data System (ADS)
Liess, Matthias; Foit, Kaarina; Knillmann, Saskia; Schäfer, Ralf B.; Liess, Hans-Dieter
2016-09-01
Toxicants and other, non-chemical, environmental stressors contribute to the global biodiversity crisis. Examples include the loss of bees and the reduction of aquatic biodiversity. Although non-compliance with regulations might be contributing, the widespread existence of these impacts suggests that, for example, the current approach to pesticide risk assessment fails to protect biodiversity when multiple stressors concurrently affect organisms. To quantify such multiple-stress effects, we analysed all applicable aquatic studies and found that the presence of environmental stressors increases individual sensitivity to toxicants (pesticides, trace metals) by a factor of up to 100. To predict this dependence, we developed the "Stress Addition Model" (SAM). With the SAM, we assume that each individual has a general stress capacity towards all types of specific stress that should not be exhausted. Experimental stress levels are transferred into general stress levels of the SAM using the stress-related mortality as a common link. These general stress levels of independent stressors are additive, with the sum determining the total stress exerted on a population. With this approach, we provide a tool that quantitatively predicts the highly synergistic direct effects of independent stressor combinations.
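The additive general-stress computation described above can be sketched numerically: map each stressor's observed mortality to a general stress level through the inverse CDF of a stress-capacity distribution, add the levels, and map the sum back to mortality. The Beta(3.2, 3.2) capacity distribution used here is an illustrative assumption of this sketch, not necessarily the published parameterisation:

```python
A, B = 3.2, 3.2   # illustrative shape parameters of the stress-capacity beta

def _kernel(t):
    return t ** (A - 1) * (1 - t) ** (B - 1)

def beta_cdf(x, n=1000):
    """CDF of the Beta(A, B) stress-capacity distribution (trapezoidal rule)."""
    x = min(max(x, 0.0), 1.0)
    if x == 0.0:
        return 0.0
    def integral(hi):
        h = hi / n
        s = 0.5 * (_kernel(0.0) + _kernel(hi))
        s += sum(_kernel(i * h) for i in range(1, n))
        return s * h
    return integral(x) / integral(1.0)

def beta_inv_cdf(p):
    """Mortality -> general stress level, by bisecting the CDF."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if beta_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sam_combined_mortality(mortalities):
    """Stress Addition Model sketch: convert each stressor's mortality to a
    general stress level, add the levels, and map the sum back to mortality."""
    total_stress = sum(beta_inv_cdf(m) for m in mortalities)
    return beta_cdf(total_stress)

combined = sam_combined_mortality([0.1, 0.1])   # two mild stressors
independent = 1 - (1 - 0.1) ** 2                # independent-action prediction: 0.19
```

Two 10%-mortality stressors combine to roughly 50% predicted mortality under this sketch, far above the independent-action prediction, which is the synergy the SAM is built to capture.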
The proximal-to-distal sequence in upper-limb motions on multiple levels and time scales.
Serrien, Ben; Baeyens, Jean-Pierre
2017-10-01
The proximal-to-distal sequence is a phenomenon that can be observed in a large variety of motions of the upper limbs in both humans and other mammals. The mechanisms behind this sequence are not completely understood and motor control theories able to explain this phenomenon are currently incomplete. The aim of this narrative review is to take a theoretical constraints-led approach to the proximal-to-distal sequence and provide a broad multidisciplinary overview of relevant literature. This sequence exists at multiple levels (brain, spine, muscles, kinetics and kinematics) and on multiple time scales (motion, motor learning and development, growth and possibly even evolution). We hypothesize that the proximodistal spatiotemporal direction on each time scale and level provides part of the organismic constraints that guide the dynamics at the other levels and time scales. The constraint-led approach in this review may serve as a first onset towards integration of evidence and a framework for further experimentation to reveal the dynamics of the proximal-to-distal sequence. Copyright © 2017 Elsevier B.V. All rights reserved.
Reconstruction From Multiple Particles for 3D Isotropic Resolution in Fluorescence Microscopy.
Fortun, Denis; Guichard, Paul; Hamel, Virginie; Sorzano, Carlos Oscar S; Banterle, Niccolo; Gonczy, Pierre; Unser, Michael
2018-05-01
The imaging of proteins within macromolecular complexes has been limited by the low axial resolution of optical microscopes. To overcome this problem, we propose a novel computational reconstruction method that yields isotropic resolution in fluorescence imaging. The guiding principle is to reconstruct a single volume from the observations of multiple rotated particles. Our new operational framework detects particles, estimates their orientation, and reconstructs the final volume. The main challenge comes from the absence of an initial template and of a priori knowledge about the orientations. We formulate the estimation as a blind inverse problem and propose a block-coordinate stochastic approach to solve the associated non-convex optimization problem. The reconstruction is performed jointly in multiple channels. We demonstrate that our method is able to reconstruct volumes with 3D isotropic resolution on simulated data. We also perform isotropic reconstructions from real experimental data of doubly labeled purified human centrioles. Our approach revealed the precise localization of the centriolar protein Cep63 around the centriole microtubule barrel. Overall, our method offers new perspectives for applications in biology that require the isotropic mapping of proteins within macromolecular assemblies.
Automatic Visual Tracking and Social Behaviour Analysis with Multiple Mice
Giancardo, Luca; Sona, Diego; Huang, Huiping; Sannino, Sara; Managò, Francesca; Scheggia, Diego; Papaleo, Francesco; Murino, Vittorio
2013-01-01
Social interactions are made of complex behavioural actions that might be found in all mammals, including humans and rodents. Recently, mouse models are increasingly being used in preclinical research to understand the biological basis of social-related pathologies or abnormalities. However, reliable and flexible automatic systems able to precisely quantify the social behavioural interactions of multiple mice are still missing. Here, we present a system built on two components: a module able to accurately track the position of multiple interacting mice from videos, regardless of their fur colour or light settings, and a module that automatically characterises social and non-social behaviours. The behavioural analysis is obtained by deriving a new set of specialised spatio-temporal features from the tracker output. These features are further employed by a learning-by-example classifier, which predicts, for each frame and for each mouse in the cage, one of the behaviours learnt from the examples given by the experimenters. The system is validated on an extensive set of experimental trials involving multiple mice in an open arena. In a first evaluation we compare the classifier output with the independent evaluation of two human graders, obtaining comparable results. Then, we show the applicability of our technique to multiple-mice settings, using up to four interacting mice. The system is also compared with a solution recently proposed in the literature that, similarly to ours, addresses the problem with a learning-by-examples approach. Finally, we further validated our automatic system to differentiate between C57B/6J (a commonly used reference inbred strain) and BTBR T+tf/J (a mouse model for autism spectrum disorders). Overall, these data demonstrate the validity and effectiveness of this new machine learning system in the detection of social and non-social behaviours in multiple (>2) interacting mice, and its versatility to deal with different experimental settings and scenarios. PMID:24066146
Systemic Problems: A perspective on stem cell aging and rejuvenation.
Conboy, Irina M; Conboy, Michael J; Rebo, Justin
2015-10-01
This review provides balanced analysis of the advances in systemic regulation of young and old tissue stem cells and suggests strategies for accelerating development of therapies to broadly combat age-related tissue degenerative pathologies. Many highlighted recent reports on systemic tissue rejuvenation combine parabiosis with a "silver bullet" putatively responsible for the positive effects. Attempts to unify these papers reflect the excitement about this experimental approach and add value in reproducing previous work. At the same time, defined molecular approaches, which are "beyond parabiosis" for the rejuvenation of multiple old organs represent progress toward attenuating or even reversing human tissue aging.
Guo, Wei-Li; Huang, De-Shuang
2017-08-22
Transcription factors (TFs) are DNA-binding proteins that have a central role in regulating gene expression. Identification of DNA-binding sites of TFs is a key task in understanding transcriptional regulation, cellular processes and disease. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) enables genome-wide identification of in vivo TF binding sites. However, it is still difficult to map every TF in every cell line owing to cost and biological material availability, which poses an enormous obstacle for integrated analysis of gene regulation. To address this problem, we propose a novel computational approach, TFBSImpute, for predicting additional TF binding profiles by leveraging information from available ChIP-seq TF binding data. TFBSImpute fuses the dataset to a 3-mode tensor and imputes missing TF binding signals via simultaneous completion of multiple TF binding matrices with positional consistency. We show that signals predicted by our method achieve overall similarity with experimental data and that TFBSImpute significantly outperforms baseline approaches, by assessing the performance of imputation methods against observed ChIP-seq TF binding profiles. Moreover, motif analysis shows that TFBSImpute performs better in capturing binding motifs enriched in observed data compared with baselines, indicating that the higher performance of TFBSImpute is not simply due to averaging related samples. We anticipate that our approach will constitute a useful complement to experimental mapping of TF binding, which is beneficial for further study of regulation mechanisms and disease.
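TFBSImpute itself completes a 3-mode tensor with positional consistency; as a much-simplified stand-in for the underlying idea, missing entries of a single low-rank binding matrix can be filled by iterated truncated SVD (hard-impute style). Everything here, including the rank and iteration count, is illustrative and not the authors' algorithm:

```python
import numpy as np

def svd_impute(M, mask, rank=2, n_iter=1000):
    """Fill entries where mask is False with a rank-`rank` approximation,
    alternating truncated SVD with re-imposing the observed entries."""
    X = np.where(mask, M, M[mask].mean())      # initialise gaps with the mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, M, low_rank)        # keep observed values fixed
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 6))   # exactly rank 2
mask = np.ones_like(M, dtype=bool)
mask[0, 0] = mask[3, 4] = mask[6, 2] = False            # hide three entries
recovered = svd_impute(M, mask)
```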
Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M
2015-11-01
This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
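The decomposition/recomposition step described above is, for a linear closed loop, a least-squares projection of the desired output onto the output primitives followed by reuse of the same coefficients on the input primitives. A minimal sketch; the convolution map `G` is only a hypothetical stand-in used to generate the primitive pairs:

```python
import numpy as np

def recompose_reference(u_lib, y_lib, y_desired):
    """Columns of u_lib / y_lib are stored reference-input / controlled-output
    primitive pairs.  Decompose y_desired in the output-primitive basis and
    rebuild the reference input with the same coefficients (exact when the
    closed loop is linear and time-invariant)."""
    coeffs, *_ = np.linalg.lstsq(y_lib, y_desired, rcond=None)
    return u_lib @ coeffs, coeffs

# toy linear closed loop: causal convolution with a decaying impulse response
T, n_prim = 50, 4
g = 0.8 ** np.arange(T)
G = np.array([[g[i - j] if i >= j else 0.0 for j in range(T)]
              for i in range(T)])
rng = np.random.default_rng(1)
u_lib = rng.normal(size=(T, n_prim))        # reference-input primitives
y_lib = G @ u_lib                           # corresponding learned outputs
y_new = y_lib @ np.array([0.5, -1.0, 2.0, 0.3])   # new task in their span
u_star, coeffs = recompose_reference(u_lib, y_lib, y_new)
```

Because the new trajectory lies in the span of the output primitives, the recomposed reference reproduces it without any further learning trials, which is the advantage the abstract highlights.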
Rebollar, Eria A; Antwis, Rachael E; Becker, Matthew H; Belden, Lisa K; Bletz, Molly C; Brucker, Robert M; Harrison, Xavier A; Hughey, Myra C; Kueneman, Jordan G; Loudon, Andrew H; McKenzie, Valerie; Medina, Daniel; Minbiole, Kevin P C; Rollins-Smith, Louise A; Walke, Jenifer B; Weiss, Sophie; Woodhams, Douglas C; Harris, Reid N
2016-01-01
Emerging infectious diseases in wildlife are responsible for massive population declines. In amphibians, chytridiomycosis caused by Batrachochytrium dendrobatidis, Bd, has severely affected many amphibian populations and species around the world. One promising management strategy is probiotic bioaugmentation of antifungal bacteria on amphibian skin. In vivo experimental trials using bioaugmentation strategies have had mixed results, and therefore a more informed strategy is needed to select successful probiotic candidates. Metagenomic, transcriptomic, and metabolomic methods, colloquially called "omics," are approaches that can better inform probiotic selection and optimize selection protocols. The integration of multiple omic data using bioinformatic and statistical tools and in silico models that link bacterial community structure with bacterial defensive function can allow the identification of species involved in pathogen inhibition. We recommend using 16S rRNA gene amplicon sequencing and methods such as indicator species analysis, the Kolmogorov-Smirnov Measure, and co-occurrence networks to identify bacteria that are associated with pathogen resistance in field surveys and experimental trials. In addition to 16S amplicon sequencing, we recommend approaches that give insight into symbiont function such as shotgun metagenomics, metatranscriptomics, or metabolomics to maximize the probability of finding effective probiotic candidates, which can then be isolated in culture and tested in persistence and clinical trials. An effective mitigation strategy to ameliorate chytridiomycosis and other emerging infectious diseases is necessary; the advancement of omic methods and the integration of multiple omic data provide a promising avenue toward conservation of imperiled species.
A comparative analysis of experimental selection on the stickleback pelvis.
Miller, S E; Barrueto, M; Schluter, D
2017-06-01
Mechanisms of natural selection can be identified using experimental approaches. However, such experiments often yield nonsignificant effects and imprecise estimates of selection due to low power and small sample sizes. Combining results from multiple experimental studies might produce an aggregate estimate of selection that is more revealing than individual studies. For example, bony pelvic armour varies conspicuously among stickleback populations, and predation by vertebrate and insect predators has been hypothesized to be the main driver of this variation. Yet experimental selection studies testing these hypotheses frequently fail to find a significant effect. We experimentally manipulated the length of threespine stickleback (Gasterosteus aculeatus) pelvic spines in a mesocosm experiment to test whether prickly sculpin (Cottus asper), an intraguild predator of stickleback, favours longer spines. The probability of survival was greater for stickleback with unclipped pelvic spines, but this effect was noisy and not significant. We used meta-analysis to combine the results of our mesocosm experiment with previously published experimental studies of selection on pelvic armour. We found evidence that fish predation indeed favours increased pelvic armour, with a moderate effect size. The same approach found little evidence that insect predation favours reduced pelvic armour. The causes of reduced pelvic armour in many stickleback populations remain uncertain. © 2017 European Society For Evolutionary Biology.
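The meta-analytic pooling described above can be sketched as a fixed-effect, inverse-variance combination of per-study selection estimates. The numbers below are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical selection estimates from several experiments, each with its
# standard error (illustrative values only).
estimates = np.array([0.40, 0.15, 0.55, 0.10])
std_errs  = np.array([0.30, 0.25, 0.40, 0.20])

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1/SE^2,
# so precise studies dominate the aggregate.
weights   = 1.0 / std_errs**2
pooled    = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
z = pooled / pooled_se

print(f"pooled estimate = {pooled:.3f} +/- {pooled_se:.3f} (z = {z:.2f})")
```

The pooled standard error is always smaller than that of any single study, which is why an aggregate can reach significance when individual experiments do not.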
Testing and analysis of flat and curved panels with multiple cracks
NASA Technical Reports Server (NTRS)
Broek, David; Jeong, David Y.; Thomson, Douglas
1994-01-01
An experimental and analytical investigation of multiple cracking in various types of test specimens is described in this paper. The testing phase comprised a flat unstiffened panel series and curved stiffened and unstiffened panel series. The test specimens contained various configurations of initial damage. Static loading was applied to these specimens until ultimate failure, while loads and crack propagation were recorded. These data provide the basis for developing and validating methodologies for predicting linkup of multiple cracks, progression to failure, and overall residual strength. The results from twelve flat coupon and ten full-scale curved panel tests are presented. In addition, an engineering analysis procedure was developed to predict multiple crack linkup. Reasonable agreement was found between predictions and actual test results for linkup and residual strength for both flat and curved panels. The results indicate that an engineering analysis approach has the potential to quantitatively assess the effect of multiple cracks on the arrest capability of an aircraft fuselage structure.
N7 logic via patterning using templated DSA: implementation aspects
NASA Astrophysics Data System (ADS)
Bekaert, J.; Doise, J.; Gronheid, R.; Ryckaert, J.; Vandenberghe, G.; Fenger, G.; Her, Y. J.; Cao, Y.
2015-07-01
In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCP). Insertion of DSA into IC fabrication is seriously considered for the 7 nm node. At this node the DSA technology could alleviate the costs of multiple patterning and limit the number of masks required per layer. At imec, multiple approaches for inserting DSA into the 7 nm node are considered. One of the most straightforward approaches for implementation would be for via patterning through templated DSA: a grapho-epitaxy flow using cylindrical phase BCP material, resulting in contact hole multiplication within a litho-defined pre-pattern. For implementation in 7 nm node via patterning, not only does the appropriate process flow need to be available, but DSA-aware mask decomposition is also required. In this paper, several aspects of the imec approach to implementing templated DSA are discussed, including experimental demonstration of density effect mitigation, DSA hole pattern transfer, double DSA patterning, and the creation of a compact DSA model. Using an actual 7 nm node logic layout, we derive DSA-friendly design rules in a logical way from a lithographer's viewpoint. A concrete assessment is provided of how DSA-friendly design could potentially reduce the number of Via masks for a place-and-routed N7 logic pattern.
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques, classical ray-tracing and the finite difference time domain method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
Physiologically relevant organs on chips.
Yum, Kyungsuk; Hong, Soon Gweon; Healy, Kevin E; Lee, Luke P
2014-01-01
Recent advances in integrating microengineering and tissue engineering have generated promising microengineered physiological models for experimental medicine and pharmaceutical research. Here we review the recent development of microengineered physiological systems, also known as "organs-on-chips", that reconstitute the physiologically critical features of specific human tissues and organs and their interactions. This technology uses microengineering approaches to construct organ-specific microenvironments, reconstituting tissue structures, tissue-tissue interactions and interfaces, and dynamic mechanical and biochemical stimuli found in specific organs, to direct cells to assemble into functional tissues. We first discuss microengineering approaches to reproduce the key elements of physiologically important, dynamic mechanical microenvironments, biochemical microenvironments, and microarchitectures of specific tissues and organs in microfluidic cell culture systems. This is followed by examples of microengineered individual organ models that incorporate the key elements of physiological microenvironments into single microfluidic cell culture systems to reproduce organ-level functions. Finally, we cover microengineered multiple organ systems that simulate multiple organ interactions to better represent human physiology, including human responses to drugs. This emerging organs-on-chips technology has the potential to become an alternative to 2D and 3D cell culture and animal models for experimental medicine, human disease modeling, drug development, and toxicology. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhao, Zhehuan; Yang, Zhihao; Luo, Ling; Wang, Lei; Zhang, Yin; Lin, Hongfei; Wang, Jian
2017-12-28
Automatic disease named entity recognition (DNER) is of utmost importance for the development of more sophisticated BioNLP tools. However, most conventional CRF-based DNER systems rely on well-designed features whose selection is labor intensive and time-consuming. Though most deep learning methods can solve NER problems with little feature engineering, they employ an additional CRF layer to capture the correlation information between labels in neighborhoods, which makes them more complicated. In this paper, we propose a novel multiple label convolutional neural network (MCNN) based disease NER approach. In this approach, instead of the CRF layer, a multiple label strategy (MLS), first introduced here, is employed. First, the character-level embedding, word-level embedding and lexicon feature embedding are concatenated. Then several convolutional layers are stacked over the concatenated embedding. Finally, the MLS strategy is applied to the output layer to capture the correlation information between neighboring labels. As shown by the experimental results, MCNN achieves state-of-the-art performance on both the NCBI and CDR corpora with little feature engineering, and the results demonstrate the effectiveness of the MLS strategy in capturing the correlation information between labels in the neighborhood.
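A minimal numerical sketch of the MCNN front end described above (embedding concatenation followed by a convolution over the token axis), using random toy embeddings in place of learned ones. Dimensions, filter counts and window size are arbitrary assumptions, not the paper's hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d_char, d_word, d_lex = 12, 8, 16, 4

# Per-token embeddings (hypothetical, randomly initialised for illustration).
char_emb = rng.normal(size=(seq_len, d_char))
word_emb = rng.normal(size=(seq_len, d_word))
lex_emb  = rng.normal(size=(seq_len, d_lex))

# Step 1: concatenate the three embeddings for each token.
x = np.concatenate([char_emb, word_emb, lex_emb], axis=1)   # shape (12, 28)

# Step 2: one 1D convolution over tokens (window = 3, 'same' padding, ReLU).
n_filters, window = 32, 3
w = rng.normal(size=(n_filters, window, x.shape[1])) * 0.1
x_pad = np.pad(x, ((1, 1), (0, 0)))
conv = np.stack([
    np.maximum(0.0, np.sum(x_pad[t:t + window] * w, axis=(1, 2)))
    for t in range(seq_len)
])                                                          # shape (12, 32)

print(conv.shape)
```

The real model stacks several such layers and feeds the final per-token features to the multiple-label output layer instead of a CRF.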
Micro-Doppler analysis of multiple frequency continuous wave radar signatures
NASA Astrophysics Data System (ADS)
Anderson, Michael G.; Rogers, Robert L.
2007-04-01
Micro-Doppler refers to Doppler scattering returns produced by non-rigid-body motion. Micro-Doppler gives rise to many detailed radar image features in addition to those associated with bulk target motion. Targets of different classes (for example, humans, animals, and vehicles) produce micro-Doppler images that are often distinguishable even by nonexpert observers. Micro-Doppler features have great potential for use in automatic target classification algorithms. Although the potential benefit of using micro-Doppler in classification algorithms is high, relatively little experimental (non-synthetic) micro-Doppler data exists. Much of the existing experimental data comes from highly cooperative targets (human or vehicle targets directly approaching the radar). This research involved field data collection and analysis of micro-Doppler radar signatures from non-cooperative targets. The data was collected using a low-cost X-band multiple frequency continuous wave (MFCW) radar with three transmit frequencies. The collected MFCW radar signatures contain data from humans, vehicles, and animals. The presented data includes micro-Doppler signatures previously unavailable in the literature, such as crawling humans and various animal species. The animal micro-Doppler signatures include deer, dog, and goat datasets. This research focuses on the analysis of micro-Doppler from non-cooperative targets approaching the radar at various angles, maneuvers, and postures.
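The kind of micro-Doppler signature discussed above can be illustrated with a synthetic example: a complex return with a constant bulk Doppler shift plus a weaker, sinusoidally modulated micro-motion component, analyzed with a short-time Fourier transform. All parameters are invented for illustration and do not describe the MFCW radar.

```python
import numpy as np

fs, dur = 1000.0, 2.0                       # sample rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)

# Bulk motion: constant approach speed -> constant 100 Hz Doppler shift.
bulk = np.exp(2j * np.pi * 100 * t)
# Micro-motion: oscillating limb -> Doppler swinging +/- 40 Hz at a 2 Hz rate.
micro = 0.3 * np.exp(2j * np.pi * (100 * t
        + (40 / (2 * np.pi * 2)) * np.sin(2 * np.pi * 2 * t)))
sig = bulk + micro

# Short-time Fourier transform: 128-sample Hann windows with 75% overlap.
win, hop = 128, 32
window = np.hanning(win)
frames = [sig[i:i + win] * window for i in range(0, len(sig) - win, hop)]
spec = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))  # time x freq

print(spec.shape)
```

In the spectrogram, the bulk return appears as a steady ridge near 100 Hz while the micro-motion traces a sinusoidal sideband around it; such time-frequency patterns are the features a classifier would exploit.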
NASA Astrophysics Data System (ADS)
Ashat, Ali; Pratama, Heru Berian
2017-12-01
Successful assessment of the size of the Ciwidey-Patuha geothermal field required integrated analysis of data from all aspects to determine the optimum capacity to be installed. Resource assessment involves significant uncertainty in subsurface information and multiple development scenarios for this field. Therefore, this paper applies an experimental design approach to geothermal numerical simulation of Ciwidey-Patuha to generate a probabilistic resource assessment. This process assesses the impact of the evaluated parameters on resources and the interactions between these parameters. The methodology successfully estimated the maximum resources with a polynomial function covering the entire range of possible values of the important reservoir parameters.
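The experimental-design workflow sketched in the abstract (sample the parameter space with a design, fit a polynomial proxy, then propagate uncertainty) might look as follows. The parameters and the placeholder "simulator" are hypothetical stand-ins, not the Ciwidey-Patuha reservoir model.

```python
import numpy as np
from itertools import product

# Hypothetical reservoir parameters in coded units (-1 .. +1): permeability,
# recharge rate, reservoir thickness -- a 2-level full factorial design.
X = np.array(list(product([-1.0, 1.0], repeat=3)))

# Placeholder "simulator" standing in for the numerical reservoir model.
def simulator(x):
    perm, rech, thick = x
    return 60 + 12 * perm + 8 * rech + 5 * thick + 3 * perm * rech  # MWe

y = np.array([simulator(x) for x in X])

def design_matrix(Z):
    # Intercept, three main effects, and one interaction term.
    return np.column_stack([np.ones(len(Z)), Z[:, 0], Z[:, 1], Z[:, 2],
                            Z[:, 0] * Z[:, 1]])

# Fit the polynomial proxy model by least squares.
coef, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Monte Carlo over the cheap proxy yields a probabilistic estimate;
# by convention P90 is the conservative (10th-percentile) value.
rng = np.random.default_rng(2)
samples = rng.uniform(-1, 1, size=(10000, 3))
pred = design_matrix(samples) @ coef
p90, p50, p10 = np.percentile(pred, [10, 50, 90])
print(coef.round(2), p90.round(1), p50.round(1), p10.round(1))
```

Because the proxy is milliseconds to evaluate, thousands of scenarios can be screened where only a handful of full simulations are affordable.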
A community computational challenge to predict the activity of pairs of compounds.
Bansal, Mukesh; Yang, Jichen; Karan, Charles; Menden, Michael P; Costello, James C; Tang, Hao; Xiao, Guanghua; Li, Yajuan; Allen, Jeffrey; Zhong, Rui; Chen, Beibei; Kim, Minsoo; Wang, Tao; Heiser, Laura M; Realubit, Ronald; Mattioli, Michela; Alvarez, Mariano J; Shen, Yao; Gallahan, Daniel; Singer, Dinah; Saez-Rodriguez, Julio; Xie, Yang; Stolovitzky, Gustavo; Califano, Andrea
2014-12-01
Recent therapeutic successes have renewed interest in drug combinations, but experimental screening approaches are costly and often identify only small numbers of synergistic combinations. The DREAM consortium launched an open challenge to foster the development of in silico methods to computationally rank 91 compound pairs, from the most synergistic to the most antagonistic, based on gene-expression profiles of human B cells treated with individual compounds at multiple time points and concentrations. Using scoring metrics based on experimental dose-response curves, we assessed 32 methods (31 community-generated approaches and SynGen), four of which performed significantly better than random guessing. We highlight similarities between the methods. Although the accuracy of predictions was not optimal, we find that computational prediction of compound-pair activity is possible, and that community challenges can be useful to advance the field of in silico compound-synergy prediction.
Scalable architecture for a room temperature solid-state quantum information processor.
Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D
2012-04-24
The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.
Ma, Songyun; Scheider, Ingo; Bargmann, Swantje
2016-09-01
An anisotropic constitutive model is proposed in the framework of finite deformation to capture several damage mechanisms occurring in the microstructure of dental enamel, a hierarchical bio-composite. It provides the basis for a homogenization approach for an efficient multiscale (in this case: multiple hierarchy levels) investigation of the deformation and damage behavior. The influence of tension-compression asymmetry and fiber-matrix interaction on the nonlinear deformation behavior of dental enamel is studied by 3D micromechanical simulations under different loading conditions and fiber lengths. The complex deformation behavior and the characteristics and interaction of three damage mechanisms in the damage process of enamel are well captured. The proposed constitutive model incorporating anisotropic damage is applied to the first hierarchical level of dental enamel and validated by experimental results. The effect of the fiber orientation on the damage behavior and compressive strength is studied by comparing micro-pillar experiments of dental enamel at the first hierarchical level in multiple directions of fiber orientation. A very good agreement between computational and experimental results is found for the damage evolution process of dental enamel. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
A Probabilistic Palimpsest Model of Visual Short-term Memory
Matthey, Loic; Bays, Paul M.; Dayan, Peter
2015-01-01
Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ. PMID:25611204
Experimental scattershot boson sampling
Bentivegna, Marco; Spagnolo, Nicolò; Vitelli, Chiara; Flamini, Fulvio; Viggianiello, Niko; Latmiral, Ludovico; Mataloni, Paolo; Brod, Daniel J.; Galvão, Ernesto F.; Crespi, Andrea; Ramponi, Roberta; Osellame, Roberto; Sciarrino, Fabio
2015-01-01
Boson sampling is a computational task strongly believed to be hard for classical computers, but efficiently solvable by orchestrated bosonic interference in a specialized quantum computer. Current experimental schemes, however, are still insufficient for a convincing demonstration of the advantage of quantum over classical computation. A new variation of this task, scattershot boson sampling, leads to an exponential increase in speed of the quantum device, using a larger number of photon sources based on parametric down-conversion. This is achieved by having multiple heralded single photons being sent, shot by shot, into different random input ports of the interferometer. We report the first scattershot boson sampling experiments, where six different photon-pair sources are coupled to integrated photonic circuits. We use recently proposed statistical tools to analyze our experimental data, providing strong evidence that our photonic quantum simulator works as expected. This approach represents an important leap toward a convincing experimental demonstration of the quantum computational supremacy. PMID:26601164
Pathway Towards Fluency: Using 'disaggregate instruction' to promote science literacy
NASA Astrophysics Data System (ADS)
Brown, Bryan A.; Ryoo, Kihyun; Rodriguez, Jamie
2010-07-01
This study examines the impact of Disaggregate Instruction on students' science learning. Disaggregate Instruction is the idea that science teaching and learning can be separated into conceptual and discursive components. Using randomly assigned experimental and control groups, 49 fifth-grade students received web-based science lessons on photosynthesis using our experimental approach. We supplemented quantitative statistical comparisons of students' performance on pre- and post-test questions (multiple choice and short answer) with a qualitative analysis of students' post-test interviews. The results revealed that students in the experimental group outscored their control group counterparts across all measures. In addition, students taught using the experimental method demonstrated an improved ability to write using scientific language as well as an improved ability to provide oral explanations using scientific language. This study has important implications for how science educators can prepare teachers to teach diverse student populations.
Application of homomorphism to secure image sharing
NASA Astrophysics Data System (ADS)
Islam, Naveed; Puech, William; Hayat, Khizar; Brouzet, Robert
2011-09-01
In this paper, we present a new approach for sharing images between l players by exploiting the additive and multiplicative homomorphic properties of two well-known public key cryptosystems, i.e., RSA and Paillier. Contrary to traditional schemes, the proposed approach employs secret sharing in a way that limits the influence of the dealer over the protocol and allows each player to participate with the help of his key-image. With the proposed approach, during the encryption step, each player encrypts his own key-image using the dealer's public key. The dealer encrypts the secret-to-be-shared image with the same public key, and then the l encrypted key-images plus the encrypted to-be-shared image are multiplied homomorphically to get another encrypted image. After this step, the dealer can safely get a scrambled image which corresponds to the addition or multiplication of the l + 1 original images (l key-images plus the secret image), because of the additive homomorphic property of the Paillier algorithm or the multiplicative homomorphic property of the RSA algorithm. When the l players want to extract the secret image, they do not need to use keys and the dealer has no role. Indeed, with our approach, to extract the secret image the l players need only subtract their own key-images, in no specific order, from the scrambled image. Thus, the proposed approach provides an opportunity to use operators like multiplication on encrypted images for the development of a secure privacy-preserving protocol in the image domain. We show that it is still possible to extract a visible version of the secret image with only l-1 key-images (when one key-image is missing) or when the l key-images used for the extraction differ from the l original key-images due, for example, to lossy compression. Experimental results and security analysis verify and prove that the proposed approach is secure from a cryptographic viewpoint.
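The additive homomorphic property of Paillier that the protocol relies on can be demonstrated with a toy implementation. The 16-bit primes below are deliberately insecure and for illustration only; a real deployment needs keys of 1024 bits or more.

```python
import math, random

# Toy Paillier cryptosystem (insecure 16-bit primes, illustration only).
p, q = 65003, 65011
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1                                  # standard generator choice

def encrypt(m):
    r = random.randrange(1, n)             # random blinding factor, gcd(r, n) == 1
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)    # precomputable constant
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts mod n.
m1, m2 = 123, 456                          # e.g. two pixel values
c = (encrypt(m1) * encrypt(m2)) % n2
print(decrypt(c))                          # -> 579
```

Applied pixel-wise, this is exactly why the product of the l + 1 encrypted images decrypts to the sum of the key-images and the secret image, which each player can then peel off by subtraction.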
Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.
Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl
2015-01-01
The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out. Copyright © 2014 Elsevier B.V. All rights reserved.
A perspectivist approach to theory construction.
McGuire, William J
2004-01-01
A perspectivist approach is taken to the theory-construction process in psychological research. This approach assumes that all hypotheses and theories are true, as all are false, depending on the perspective from which they are viewed, and that the purpose of research is to discover which are the crucial perspectives. Perspectivism assumes also that both the a priori conceptual phase of research and the a posteriori empirical phase have both discovery and testing functions. Topics discussed include how the perspectivist approach can improve methodology training and practice (particularly as regards theory construction); what researchers accept as theoretical explanations; the nature of mediational theories; how theories can be formalized, expressed in multiple modalities and for various scaling cases; and how experimental designs can be enriched by theory-guided mediational and interactional variables.
Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR
NASA Astrophysics Data System (ADS)
Sroka, Adam; Chan, Susan; Warburton, Ryan; Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele; Lee, Stephen T.
2016-05-01
The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging. This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD) receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is shown. We then exercise the model to explore the limits placed on system design by available laser sources and detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally, the model is used to examine potential extensions to the experimental system that may allow for increased localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or multiple emitters.
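The basic geometry behind such time-resolved path reconstruction can be sketched by summing the segment lengths of a hypothetical multi-bounce path and converting to time of flight. The coordinates below are invented for illustration and do not come from the paper's experimental setup.

```python
import numpy as np

# Hypothetical non-line-of-sight geometry (metres):
# laser -> relay wall -> hidden object -> relay wall -> detector.
c = 0.2998                                   # speed of light in m/ns
laser    = np.array([0.0, 0.0, 0.0])
wall_pt  = np.array([2.0, 0.0, 0.0])
hidden   = np.array([2.0, 1.5, 1.0])
detector = np.array([0.1, 0.0, 0.0])

waypoints = [laser, wall_pt, hidden, wall_pt, detector]
path_len = sum(np.linalg.norm(b - a) for a, b in zip(waypoints, waypoints[1:]))
tof_ns = path_len / c

# Localising the hidden object from arrival times demands picosecond timing:
# 10 ps of error corresponds to roughly 3 mm of path length.
print(f"path = {path_len:.3f} m, arrival after {tof_ns:.2f} ns")
```

A ray-tracing model like the one described generates many such multi-segment paths through random scattering events and histograms their arrival times into the simulated SPAD response.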
Gingival Mesenchymal Stem/Progenitor Cells: A Unique Tissue Engineering Gem
Fawzy El-Sayed, Karim M.; Dörfer, Christof E.
2016-01-01
The human gingiva, characterized by its outstanding scarless wound healing properties, is a unique tissue and a pivotal component of the periodontal apparatus, investing and surrounding the teeth in their sockets in the alveolar bone. In the last years gingival mesenchymal stem/progenitor cells (G-MSCs), with promising regenerative and immunomodulatory properties, have been isolated and characterized from the gingival lamina propria. These cells, in contrast to other mesenchymal stem/progenitor cell sources, are abundant, readily accessible, and easily obtainable via minimally invasive cell isolation techniques. The present review summarizes the current scientific evidence on G-MSCs' isolation, their characterization, the investigated subpopulations, the generated induced pluripotent stem cells- (iPSC-) like G-MSCs, their regenerative properties, and current approaches for G-MSCs' delivery. The review further demonstrates their immunomodulatory properties, the transplantation preconditioning attempts via multiple biomolecules to enhance their attributes, and the experimental therapeutic applications conducted to treat multiple diseases in experimental animal models in vivo. G-MSCs show remarkable tissue reparative/regenerative potential, noteworthy immunomodulatory properties, and primary experimental therapeutic applications of G-MSCs are very promising, pointing at future biologically based therapeutic techniques, being potentially superior to conventional clinical treatment modalities. PMID:27313628
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyer, Christopher; Rosenthal, Anja; Myhill, Robert
We have performed an experimental cross calibration of a suite of mineral equilibria within mantle rock bulk compositions that are commonly used in geobarometry to determine the equilibration depths of upper mantle assemblages. Multiple barometers were compared simultaneously in experimental runs, where the pressure was determined using in-situ measurements of the unit cell volumes of MgO, NaCl, Re and h-BN between 3.6 and 10.4 GPa, and 1250 and 1500 °C. The experiments were performed in a large volume press (LVP) in combination with synchrotron X-ray diffraction. Noble metal capsules drilled with multiple sample chambers were loaded with a range of bulk compositions representative of peridotite, eclogite and pyroxenite lithologies. By this approach, we simultaneously calibrated the geobarometers applicable to different mantle lithologies under identical and well determined pressure and temperature conditions. We identified discrepancies between the calculated and experimental pressures, for which we propose simple linear or constant correction factors to some of the previously published barometric equations. As a result, we establish internally-consistent cross-calibrations for a number of garnet-orthopyroxene, garnet-clinopyroxene, Ca-Tschermaks-in-clinopyroxene and majorite geobarometers.
Fractional optical cryptographic protocol for data containers in a noise-free multiuser environment
NASA Astrophysics Data System (ADS)
Jaramillo, Alexis; Barrera, John Fredy; Zea, Alejandro Vélez; Torroba, Roberto
2018-03-01
Optical encryption systems have great potential for flexible and high-performance data protection, making them an area of rapid development. However, most approaches present two main issues, namely, the presence of speckle noise, and the degree of security they offer. Here we introduce an experimental implementation of an optical encrypting protocol that tackles these issues by taking advantage of recent developments in the field. These developments include the introduction of information containers for noise free information retrieval, the use of multiplexing to allow for a multiple user environment and an architecture based on the Joint fractional Fourier transform that allows increased degrees of freedom and simplifies the experimental requirements. Thus, data handling via QR code containers involving multiple users processed in a fractional joint transform correlator produce coded information with increased security and ease of use. In this way, we can guarantee that only the user with the correct combination of encryption key and security parameters can achieve noise free information after deciphering. We analyze the performance of the system when the order of the fractional Fourier transform is changed during decryption. We show experimental results that confirm the validity of our proposal.
Computational micromechanics of fatigue of microstructures in the HCF–VHCF regimes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castelluccio, Gustavo M.; Musinski, William D.; McDowell, David L.
Advances in higher resolution experimental techniques have shown that metallic materials can develop fatigue cracks under cyclic loading levels significantly below the yield stress. Indeed, the traditional notion of a fatigue limit can be recast in terms of limits associated with nucleation and arrest of fatigue cracks at the microstructural scale. Though fatigue damage characteristically emerges from irreversible dislocation processes at sub-grain scales, the specific microstructure attributes, environment, and loading conditions can strongly affect the apparent failure mode and surface to subsurface transitions. This paper discusses multiple mechanisms that occur during fatigue loading in the high cycle fatigue (HCF) to very high cycle fatigue (VHCF) regimes. We compare these regimes, focusing on strategies to bridge experimental and modeling approaches exercised at multiple length scales and discussing particular challenges to modeling and simulation regarding microstructure-sensitive fatigue driving forces and thresholds. Finally, we discuss some of the challenges in predicting the transition of failure mechanisms at different stress and strain amplitudes.
Multidimensional Methods for the Formulation of Biopharmaceuticals and Vaccines
Maddux, Nathaniel R.; Joshi, Sangeeta B.; Volkin, David B.; Ralston, John P.; Middaugh, C. Russell
2013-01-01
Determining and preserving the higher order structural integrity and conformational stability of proteins, plasmid DNA and macromolecular complexes such as viruses, virus-like particles and adjuvanted antigens is often a significant barrier to the successful stabilization and formulation of biopharmaceutical drugs and vaccines. These properties typically must be investigated with multiple lower resolution experimental methods, since each technique monitors only a narrow aspect of the overall conformational state of a macromolecular system. This review describes the use of empirical phase diagrams (EPDs) to combine large amounts of data from multiple high-throughput instruments and construct a map of a target macromolecule's physical state as a function of temperature, solvent conditions, and other stress variables. We present a tutorial on the mathematical methodology, an overview of some of the experimental methods typically used, and examples of some of the previous major formulation applications. We also explore novel applications of EPDs including potential new mathematical approaches as well as possible new biopharmaceutical applications such as analytical comparability, chemical stability, and protein dynamics. PMID:21647886
Design of nucleic acid sequences for DNA computing based on a thermodynamic approach
Tanaka, Fumiaki; Kameda, Atsushi; Yamamoto, Masahito; Ohuchi, Azuma
2005-01-01
We have developed an algorithm for designing multiple sequences of nucleic acids that have a uniform melting temperature between each sequence and its complement and that do not hybridize non-specifically with each other, based on the minimum free energy (ΔGmin). Sequences that satisfy these constraints can be utilized in computations, various engineering applications such as microarrays, and nano-fabrications. Our algorithm is a random generate-and-test algorithm: it generates a candidate sequence randomly and tests whether the sequence satisfies the constraints. The novelty of our algorithm is that the filtering method uses a greedy search to calculate ΔGmin. This effectively excludes inappropriate sequences before ΔGmin is calculated, thereby reducing computation time drastically compared with an algorithm without the filtering. Experimental results in silico showed the superiority of the greedy search over the traditional approach based on the Hamming distance. In addition, experimental results in vitro demonstrated that the experimental free energy (ΔGexp) of 126 sequences correlated better with ΔGmin (|R| = 0.90) than with the Hamming distance (|R| = 0.80). These results validate the rationality of a thermodynamic approach. We implemented our algorithm in a graphic user interface-based program written in Java. PMID:15701762
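The generate-and-test loop described above can be sketched in a few lines of Python. This is a toy version, not the authors' implementation: GC content stands in for the melting-temperature constraint and a minimum pairwise Hamming distance replaces the ΔGmin filter, with all thresholds invented for illustration.

```python
import random

BASES = "ACGT"

def gc_fraction(seq):
    # Crude proxy for melting temperature: uniform Tm ~ uniform GC content.
    return sum(b in "GC" for b in seq) / len(seq)

def min_hamming(seq, pool):
    # Cheap dissimilarity filter (stand-in for the free-energy test).
    return min((sum(a != b for a, b in zip(seq, s)) for s in pool),
               default=len(seq))

def design(n_seqs, length, gc_lo=0.4, gc_hi=0.6, min_dist=5, seed=0):
    # Random generate-and-test: propose a candidate, keep it only if it
    # passes every constraint.
    rng = random.Random(seed)
    pool = []
    while len(pool) < n_seqs:
        cand = "".join(rng.choice(BASES) for _ in range(length))
        if not (gc_lo <= gc_fraction(cand) <= gc_hi):
            continue                      # fails the uniform-Tm constraint
        if min_hamming(cand, pool) < min_dist:
            continue                      # too similar to an accepted sequence
        pool.append(cand)
    return pool

seqs = design(5, 20)
```

The real algorithm's payoff comes from ordering the tests so that the cheap filter runs before the expensive ΔGmin computation; the same structure applies here.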
Clustering single cells: a review of approaches on high- and low-depth single-cell RNA-seq data.
Menon, Vilas
2017-12-11
Advances in single-cell RNA-sequencing technology have resulted in a wealth of studies aiming to identify transcriptomic cell types in various biological systems. There are multiple experimental approaches to isolate and profile single cells, which provide different levels of cellular and tissue coverage. In addition, multiple computational strategies have been proposed to identify putative cell types from single-cell data. From a data generation perspective, recent single-cell studies can be classified into two groups: those that distribute reads shallowly over large numbers of cells and those that distribute reads more deeply over a smaller cell population. Although there are advantages to both approaches in terms of cellular and tissue coverage, it is unclear whether different computational cell type identification methods are better suited to one or the other experimental paradigm. This study reviews three cell type clustering algorithms, each representing one of three broad approaches, and finds that PCA-based algorithms appear most suited to low read depth data sets, whereas gene clustering-based and biclustering algorithms perform better on high read depth data sets. In addition, highly related cell classes are better distinguished by higher-depth data, given the same total number of reads; however, simultaneous discovery of distinct and similar types is better served by lower-depth, higher cell number data. Overall, this study suggests that the depth of profiling should be determined by initial assumptions about the diversity of cells in the population, and that subsequently selecting the clustering algorithm(s) based on the depth of profiling will allow for better identification of putative transcriptomic cell types. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
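For concreteness, here is a minimal NumPy-only sketch of the PCA-based route the review discusses: library-size normalization, log transform, projection onto leading principal components, and k-means on the scores. The pipeline and every parameter are generic illustrations, not any specific published method.

```python
import numpy as np

def pca_cluster(counts, n_pcs=2, k=2, iters=20, seed=0):
    # Library-size normalize, log-transform, and center the cell x gene matrix.
    lib = counts.sum(axis=1, keepdims=True)
    x = np.log1p(counts / lib * 1e4)
    x = x - x.mean(axis=0)
    # PCA via SVD: score each cell on the top n_pcs principal components.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    z = x @ vt[:n_pcs].T
    # Plain k-means on the PC scores, seeded with farthest-point centers.
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(z)))]
    for _ in range(k - 1):
        d = np.min(((z[:, None, :] - z[idx]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d)))
    centers = z[idx]
    for _ in range(iters):
        labels = np.argmin(((z[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([z[labels == c].mean(0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels
```

On a synthetic count matrix with two well-separated expression programs, the two populations land in different clusters.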
NASA Astrophysics Data System (ADS)
Neves, Marco A. C.; Simões, Sérgio; Sá e Melo, M. Luisa
2010-12-01
CXCR4 is a G-protein coupled receptor for CXCL12 that plays an important role in human immunodeficiency virus infection, cancer growth and metastasization, immune cell trafficking and WHIM syndrome. In the absence of an X-ray crystal structure, theoretical modeling of the CXCR4 receptor remains an important tool for structure-function analysis and to guide the discovery of new antagonists with potential clinical use. In this study, the combination of experimental data and molecular modeling approaches allowed the development of optimized ligand-receptor models useful for elucidation of the molecular determinants of small molecule binding and functional antagonism. The ligand-guided homology modeling approach used in this study explicitly re-shaped the CXCR4 binding pocket in order to improve discrimination between known CXCR4 antagonists and random decoys. Refinement based on multiple test-sets with small compounds from single chemotypes provided the best early enrichment performance. These results provide an important tool for structure-based drug design and virtual ligand screening of new CXCR4 antagonists.
Detection of epistatic effects with logic regression and a classical linear regression model.
Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata
2014-02-01
To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes which cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, the Cockerham approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though a larger number of models has to be considered with the logic regression approach (requiring a more stringent multiple testing correction), the efficient representation of higher order logic interactions in logic regression models leads to a significant increase of power to detect such interactions compared to the Cockerham approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.
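The power argument can be made concrete with a toy simulation (not the authors' code): when a phenotype is driven by the Boolean AND of two binary loci, a single logic term explains far more variance than the additive main-effects model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
g1, g2 = rng.integers(0, 2, (2, n))           # binary genotypes at two loci
y = (g1 & g2) + rng.normal(0, 0.1, n)         # phenotype driven by a Boolean AND

def r2(X, y):
    # Coefficient of determination for an ordinary least-squares fit.
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

additive = r2(np.column_stack([g1, g2]), y)   # Cockerham-style main effects
logic = r2((g1 & g2).astype(float), y)        # single logic (AND) predictor
```

With both allele frequencies at 0.5, the additive model captures roughly two thirds of the signal while the logic term captures nearly all of it.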
Taravat, Alireza; Oppelt, Natascha
2014-01-01
Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining non-adaptive WMM and pulse-coupled neural networks. The presented approach overcomes the non-adaptive WMM filter setting parameters by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective, whereas the non-adaptive WMM and pulse-coupled neural network (PCNN) model generates poor accuracies. PMID:25474376
NASA Astrophysics Data System (ADS)
Pitts, James Daniel
Rotary ultrasonic machining (RUM), a hybrid process combining ultrasonic machining and diamond grinding, was created to increase material removal rates for the fabrication of hard and brittle workpieces. The objective of this research was to experimentally derive empirical equations for the prediction of multiple machined surface roughness parameters for helically pocketed, rotary ultrasonic machined Zerodur glass-ceramic workpieces by means of a systematic statistical experimental approach. A Taguchi parametric screening design of experiments was employed to systematically determine the RUM process parameters with the largest effect on mean surface roughness. Next, empirically determined equations for seven common surface quality metrics were developed via Box-Behnken surface response experimental trials. Validation trials were conducted, with predicted and experimental surface roughness values showing varying levels of agreement. The reductions in cutting force and tool wear associated with RUM, reported by previous researchers, were experimentally verified to extend also to helical pocketing of Zerodur glass-ceramic.
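A Box-Behnken analysis of this kind reduces to an ordinary least-squares fit of a second-order polynomial in the coded factors. The sketch below fits a two-factor quadratic roughness surface to simulated data; the coefficients, factor names, and noise level are invented, and random points stand in for an actual Box-Behnken design.

```python
import numpy as np

# Simulated roughness response over coded spindle-speed (s) and feed-rate (f)
# levels in [-1, 1]; the "true" surface is hypothetical.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (30, 2))
true_surface = lambda s, f: 1.0 + 0.3 * s - 0.5 * f + 0.2 * s * f + 0.4 * f ** 2
y = true_surface(X[:, 0], X[:, 1]) + rng.normal(0, 0.01, 30)

# Second-order model: intercept, linear, interaction, and quadratic terms.
A = np.column_stack([np.ones(30), X[:, 0], X[:, 1],
                     X[:, 0] * X[:, 1], X[:, 0] ** 2, X[:, 1] ** 2])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The fitted coefficients recover the underlying surface, which is the basis for the predictive roughness equations the study reports.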
Identification of the Dominant Flow Structure in the Viscous Wall Region of a Turbulent Flow.
1979-08-01
wall. Also multiple probes were used in the fluid downstream from the wall probes to measure the axial velocities at different radial positions. The...Notwithstanding the limitations of the different experimental techniques used to study the viscous wall region, a dimensionless spanwise spacing (made...calculations made necessary another approach and led to the simplified flow model of Sirkar (1969). This model was used by Fortuna (1971) to explain
Li, Linglong; Yang, Yaodong; Zhang, Dawei; ...
2018-03-30
Exploration of phase transitions and construction of associated phase diagrams are of fundamental importance for condensed matter physics and materials science alike, and remain the focus of extensive research for both theoretical and experimental studies. For the latter, comprehensive studies involving scattering, thermodynamics, and modeling are typically required. We present a new approach to data mining multiple realizations of collective dynamics, measured through piezoelectric relaxation studies, to identify the onset of a structural phase transition in nanometer-scale volumes, that is, the probed volume of an atomic force microscope tip. Machine learning is used to analyze the multidimensional data sets describing relaxation to voltage and thermal stimuli, producing the temperature-bias phase diagram for a relaxor crystal without the need to measure (or know) the order parameter. The suitability of the approach to determine the phase diagram is shown with simulations based on a two-dimensional Ising model. Finally, these results indicate that machine learning approaches can be used to determine phase transitions in ferroelectrics, providing a general, statistically significant, and robust approach toward determining the presence of critical regimes and phase boundaries.
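The Ising validation is easy to reproduce in miniature: a Metropolis sampler of the 2D Ising model whose mean magnetization separates the ordered phase (below Tc ≈ 2.27 in units of J/kB) from the disordered one. Lattice size and sweep count here are arbitrary small values chosen for speed, not those of the paper's simulations.

```python
import numpy as np

def ising_magnetization(T, L=16, sweeps=200, seed=0):
    # Metropolis sampling of the 2D Ising model (J = 1, periodic boundaries),
    # started from the fully ordered state.
    rng = np.random.default_rng(seed)
    s = np.ones((L, L), dtype=int)
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nb        # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] *= -1
    return abs(s.mean())
```

Sampling well below and well above Tc gives magnetizations near 1 and near 0, respectively, which is the kind of order-parameter contrast the machine-learning analysis detects without measuring it directly.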
NASA Astrophysics Data System (ADS)
Yang, Jingyu; Lin, Jiahui; Liu, Yuejun; Yang, Kang; Zhou, Lanwei; Chen, Guoping
2017-08-01
Intelligent control theory has been used in many research fields. In this work, a novel modeling method (DROMM) is used for active vibration control of a flexible rectangular plate, and the validity of the new model is confirmed by comparing it with a finite element model. Taking advantage of the dynamics of the flexible rectangular plate, a two-loop sliding mode (TSM) MIMO approach is introduced for designing a multiple-input multiple-output continuous vibration control system, which can overcome uncertainties, disturbances or unstable dynamics. An illustrative example is given in order to show the feasibility of the method. Numerical simulations and experiments confirm the effectiveness of the proposed TSM MIMO controller.
Jefferys, Stuart R; Giddings, Morgan C
2011-03-15
Post-translational modifications are vital to the function of proteins, but are hard to study, especially since several modified isoforms of a protein may be present simultaneously. Mass spectrometers are a great tool for investigating modified proteins, but the data they provide are often incomplete, ambiguous and difficult to interpret. Combining data from multiple experimental techniques, especially bottom-up and top-down mass spectrometry, provides complementary information. When integrated with background knowledge, this allows a human expert to interpret what modifications are present and where on a protein they are located. However, the process is arduous and for high-throughput applications needs to be automated. This article explores a data integration methodology based on Markov chain Monte Carlo and simulated annealing. Our software, the Protein Inference Engine (the PIE), applies these algorithms using a modular approach, allowing multiple types of data to be considered simultaneously and new data types to be added as needed. Even for complicated data representing multiple modifications and several isoforms, the PIE generates accurate modification predictions, including location. When applied to experimental data collected on the L7/L12 ribosomal protein, the PIE was able to make predictions consistent with manual interpretation for several different L7/L12 isoforms using a combination of bottom-up data with experimentally identified intact masses. Software, demo projects and source can be downloaded from http://pie.giddingslab.org/
Cankorur-Cetinkaya, Ayca; Dias, Joao M. L.; Kludas, Jana; Slater, Nigel K. H.; Rousu, Juho; Dikicioglu, Duygu
2017-01-01
Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257). PMID:28635591
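As a sketch of the first half of that pipeline, here is a minimal real-valued genetic algorithm (tournament selection, uniform crossover, Gaussian mutation) maximizing a made-up protein-yield surface over culture temperature and pH. This is a generic GA, not the CamOptimus implementation, and every parameter is illustrative.

```python
import random

def ga_optimize(fitness, bounds, pop_size=30, gens=40, seed=0):
    # Minimal real-valued GA: elitism, 3-way tournament selection,
    # uniform crossover, bounded Gaussian mutation.
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        next_pop = sorted(pop, key=fitness, reverse=True)[:2]   # keep elites
        while len(next_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < 0.2:          # mutate, clamped to the bounds
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Hypothetical yield surface peaked at 30 degrees C and pH 7.
yield_fn = lambda x: -((x[0] - 30.0) ** 2 + 4.0 * (x[1] - 7.0) ** 2)
best = ga_optimize(yield_fn, [(20.0, 40.0), (5.0, 9.0)])
```

In the full approach, a symbolic-regression model would then be fit around the discovered optimum to rank parameter sensitivities.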
An experimental investigation on thermal exposure during bone drilling.
Lee, Jueun; Ozdoganlar, O Burak; Rabin, Yoed
2012-12-01
This study presents an experimental investigation of the effects of spindle speed, feed rate, and depth of drilling on the temperature distribution during drilling of the cortical section of the bovine femur. In an effort to reduce measurement uncertainties, a new approach for temperature measurements during bone drilling is presented in this study. The new approach is based on a setup for precise positioning of multiple thermocouples, an automated data logging system, and a computer numerically controlled (CNC) machining system. A battery of experiments performed to assess the uncertainty and repeatability of the new approach displayed adequate results. Subsequently, a parametric study was conducted to determine the effects of spindle speed, feed rate, hole depth, and thermocouple location on the measured bone temperature. This study suggests that the exposure time during bone drilling far exceeds the commonly accepted threshold for thermal injury, which may prevail at significant distances from the drilled hole. Results of this study suggest that the correlation of the thermal exposure threshold for bone injury and viability should be further explored. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
Hosoda, Kazufumi; Tsuda, Soichiro; Kadowaki, Kohmei; Nakamura, Yutaka; Nakano, Tadashi; Ishii, Kojiro
2016-02-01
Understanding ecosystem dynamics is crucial as contemporary human societies face ecosystem degradation. One of the challenges that needs to be recognized is the complex hierarchical dynamics. Conventional dynamic models in ecology often represent only the population level and have yet to include the dynamics of the sub-organism level, which makes an ecosystem a complex adaptive system that shows characteristic behaviors such as resilience and regime shifts. The neglect of the sub-organism level in conventional dynamic models is likely because integrating multiple hierarchical levels makes the models unnecessarily complex unless supporting experimental data are present. Now that large amounts of molecular and ecological data are increasingly accessible in microbial experimental ecosystems, it is worthwhile to tackle the questions of their complex hierarchical dynamics. Here, we propose an approach that combines microbial experimental ecosystems and a hierarchical dynamic model named the population-reaction model. We present a simple microbial experimental ecosystem as an example and show how the system can be analyzed by a population-reaction model. We also show that population-reaction models can be applied to various ecological concepts, such as predator-prey interactions, climate change, evolution, and stability of diversity. Our approach will reveal a path to the general understanding of various ecosystems and organisms. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
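For reference, the population level alone is typically written as coupled ODEs; the sketch below Euler-integrates a classic predator-prey pair with arbitrary rate constants. A population-reaction model of the kind proposed here would couple equations like these to intracellular reaction kinetics.

```python
import numpy as np

def lotka_volterra(x0, y0, a=1.0, b=0.5, c=0.2, d=0.4, dt=0.001, steps=20000):
    # Prey x and predator y: dx/dt = a*x - b*x*y, dy/dt = c*x*y - d*y,
    # integrated with forward Euler (adequate for a short illustration).
    x, y = x0, y0
    traj = np.empty((steps, 2))
    for t in range(steps):
        dx = a * x - b * x * y
        dy = c * x * y - d * y
        x += dx * dt
        y += dy * dt
        traj[t] = x, y
    return traj

traj = lotka_volterra(4.0, 2.0)
```

With these constants the prey equilibrium sits at x* = d/c = 2, and a trajectory started away from equilibrium cycles around it.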
van der Sluis, Olaf; Vossen, Bart; Geers, Marc
2018-01-01
Metal-elastomer interfacial systems, often encountered in stretchable electronics, demonstrate remarkably high interface fracture toughness values. Evidently, a large gap exists between the rather small adhesion energy levels at the microscopic scale (‘intrinsic adhesion’) and the large measured macroscopic work-of-separation. This energy gap is closed here by unravelling the underlying dissipative mechanisms through a systematic numerical/experimental multi-scale approach. This self-contained contribution collects and reviews previously published results and addresses the remaining open questions by providing new and independent results obtained from an alternative experimental set-up. In particular, the experimental studies on Cu-PDMS (Poly(dimethylsiloxane)) samples conclusively reveal the essential role of fibrillation mechanisms at the micrometer scale during the metal-elastomer delamination process. The micro-scale numerical analyses on single and multiple fibrils show that the dynamic release of the stored elastic energy by multiple fibril fracture, including the interaction with the adjacent deforming bulk PDMS and its highly nonlinear behaviour, provides a mechanistic understanding of the high work-of-separation. An experimentally validated quantitative relation between the macroscopic work-of-separation and peel front height is established from the simulation results. Finally, it is shown that a micro-mechanically motivated shape of the traction-separation law in cohesive zone models is essential to describe the delamination process in fibrillating metal-elastomer systems in a physically meaningful way. PMID:29393908
A new approach to watermarking by means of multichannel wavelet functions
NASA Astrophysics Data System (ADS)
Agreste, Santa; Puccio, Luigia
2012-12-01
Digital piracy involving images, music, movies, books, and so on, is a legal problem that has not yet found a solution. It therefore becomes crucial to create and develop methods and numerical algorithms to solve the copyright problems. In this paper we focus attention on a new watermarking technique applied to digital color images. Our aim is to describe the realized watermarking algorithm based on multichannel wavelet functions with multiplicity r = 3, called MCWM 1.0. We report extensive experiments and some important numerical results in order to show the robustness of the proposed algorithm to geometrical attacks.
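The general wavelet-domain embedding idea can be illustrated with a single-level Haar transform (a deliberate simplification: MCWM 1.0 uses multichannel wavelets with multiplicity r = 3, which this sketch does not reproduce). A key-generated ±1 pattern is added to the diagonal-detail band and later detected by correlation; the embedding strength 0.5 is arbitrary.

```python
import numpy as np

def haar2(img):
    # One-level 2D Haar decomposition into LL, LH, HL, HH subbands.
    ra = (img[:, 0::2] + img[:, 1::2]) / 2
    rd = (img[:, 0::2] - img[:, 1::2]) / 2
    return ((ra[0::2] + ra[1::2]) / 2, (ra[0::2] - ra[1::2]) / 2,
            (rd[0::2] + rd[1::2]) / 2, (rd[0::2] - rd[1::2]) / 2)

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2.
    ra = np.empty((2 * ll.shape[0], ll.shape[1]))
    rd = np.empty_like(ra)
    ra[0::2], ra[1::2] = ll + lh, ll - lh
    rd[0::2], rd[1::2] = hl + hh, hl - hh
    img = np.empty((ra.shape[0], 2 * ra.shape[1]))
    img[:, 0::2], img[:, 1::2] = ra + rd, ra - rd
    return img

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (8, 8))
wm = rng.choice([-1.0, 1.0], (4, 4))        # key-generated watermark pattern
ll, lh, hl, hh = haar2(img)
marked = ihaar2(ll, lh, hl, hh + 0.5 * wm)  # embed in the HH band
# Non-blind detection: correlate the HH difference with the key pattern.
corr = np.sum((haar2(marked)[3] - hh) * wm) / (0.5 * wm.size)
```

Embedding in detail subbands rather than in pixel values is what gives wavelet-domain schemes their robustness to geometric and filtering attacks.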
Modal testing with Asher's method using a Fourier analyzer and curve fitting
NASA Technical Reports Server (NTRS)
Gold, R. R.; Hallauer, W. L., Jr.
1979-01-01
An unusual application of the method proposed by Asher (1958) for structural dynamic and modal testing is discussed. Asher's method has the capability, using the admittance matrix and multiple-shaker sinusoidal excitation, of separating structural modes having indefinitely close natural frequencies. The present application uses Asher's method in conjunction with a modern Fourier analyzer system but eliminates the necessity of exciting the test structure simultaneously with several shakers. Evaluation of this approach with numerically simulated data demonstrated its effectiveness; the parameters of two modes having almost identical natural frequencies were accurately identified. Laboratory evaluation of this approach was inconclusive because of poor experimental input data.
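The determinant criterion at the heart of Asher's method is easy to demonstrate numerically: for a simulated 2-DOF system with natural frequencies only 5% apart, det(Re H(ω)) changes sign at each natural frequency even though the magnitude response blends the two peaks. The mode shapes and damping below are invented for illustration.

```python
import numpy as np

modes = np.array([[1.0, 1.0], [1.0, -1.0]])   # mass-normalized mode shapes
wn = np.array([10.0, 10.5])                   # two close natural frequencies (rad/s)
zeta = 0.01

def receptance(w):
    # Modal summation of the 2x2 frequency response (receptance) matrix.
    H = np.zeros((2, 2), dtype=complex)
    for r in range(2):
        H += np.outer(modes[r], modes[r]) / (wn[r] ** 2 - w ** 2
                                             + 2j * zeta * w * wn[r])
    return H

# Asher's criterion: det(Re H(w)) = 0 at each natural frequency.
ws = np.linspace(9.5, 11.0, 2999)
d = np.array([np.linalg.det(receptance(w).real) for w in ws])
crossings = ws[:-1][np.sign(d[:-1]) != np.sign(d[1:])]
```

The zero crossings land at 10.0 and 10.5 rad/s, which is exactly the mode-separation property the paper exploits with measured admittance data in place of this simulated matrix.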
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1983-01-01
Multiple-altitude TM thermal infrared images were analyzed and the observed radiance values were computed. The data obtained represent an experimental relation between received radiance and altitude. A LOWTRAN approach was tested which incorporates a modification to the path radiance model. This modification assumes that the scattering out of the optical path is equal in magnitude and direction to the scattering into the path. The radiance observed at altitude by an aircraft sensor was used as input to the model. Expected radiance as a function of altitude was then computed down to the ground. The results were not very satisfactory because of somewhat large errors in temperature and because of the difference in the shape of the modeled and experimental curves.
A Study of Specific Fracture Energy at Percussion Drilling
NASA Astrophysics Data System (ADS)
Shadrina, A.; Kabanova, T.; Krets, V.; Saruev, L.
2014-08-01
The paper presents experimental studies of rock failure in percussion drilling. Quantitative and qualitative analyses were carried out to estimate critical values of rock failure depending on the hammer pre-impact velocity, the type of drill bit, the cylindrical hammer parameters (weight, length, diameter), and the turn angle of the drill bit. Data obtained in this work were compared with results obtained by other researchers. The particle-size distribution in granite-cutting sludge was also analyzed. A statistical approach (Spearman's rank-order correlation, multiple regression analysis with dummy variables, the Kruskal-Wallis nonparametric test) was used to analyze the drilling process. The experimental data will be useful for specialists engaged in the simulation and illustration of rock failure.
Sun, Tao; Liu, Hongbo; Yu, Hong; Chen, C L Philip
2016-06-28
The central time series crystallizes the common patterns of the set it represents. In this paper, we propose a global constrained degree-pruning dynamic programming (g(dp)²) approach to obtain the central time series through minimizing the dynamic time warping (DTW) distance between two time series. The DTW matching path theory with global constraints is proved theoretically for our degree-pruning strategy, which helps to reduce the time complexity and computational cost. Our approach can achieve the optimal solution between two time series. An approximate method for the central time series of multiple time series [called m_g(dp)²] is presented, based on DTW barycenter averaging and our g(dp)² approach using a hierarchical merging strategy. As illustrated by the experimental results, our approaches provide better within-group sums of squares and robustness than other relevant algorithms.
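For reference, the underlying distance is the classic dynamic program; the sketch below is a generic textbook DTW with an optional Sakoe-Chiba band, the kind of global constraint the pruning strategy exploits. It is not the g(dp)² algorithm itself.

```python
import numpy as np

def dtw_distance(a, b, window=None):
    # D[i, j] = cost of the best warping path aligning a[:i] with b[:j].
    n, m = len(a), len(b)
    if window is None:
        window = max(n, m)                 # no global constraint
    w = max(window, abs(n - m))            # band must cover the diagonal shift
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            c = abs(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Shrinking the band prunes cells of the dynamic-programming table, trading a possible overestimate of the distance for speed.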
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thrall, Brian D.; Minard, Kevin R.; Teeguarden, Justin G.
A Cooperative Research and Development Agreement (CRADA) was sponsored by Battelle Memorial Institute (Battelle, Columbus), to initiate a collaborative research program across multiple Department of Energy (DOE) National Laboratories aimed at developing a suite of new capabilities for predictive toxicology. Predicting the potential toxicity of emerging classes of engineered nanomaterials was chosen as one of two focusing problems for this program. PNNL’s focus toward this broader goal was to refine and apply experimental and computational tools needed to provide quantitative understanding of nanoparticle dosimetry for in vitro cell culture systems, which is necessary for comparative risk estimates for different nanomaterials or biological systems. Research conducted using lung epithelial and macrophage cell models successfully adapted magnetic particle detection and fluorescent microscopy technologies to quantify uptake of various forms of engineered nanoparticles, and provided experimental constraints and test datasets for benchmark comparison against results obtained using an in vitro computational dosimetry model, termed the ISDD model. The experimental and computational approaches developed were used to demonstrate how cell dosimetry is applied to aid in interpretation of genomic studies of nanoparticle-mediated biological responses in model cell culture systems. The combined experimental and theoretical approach provides a highly quantitative framework for evaluating relationships between biocompatibility of nanoparticles and their physical form in a controlled manner.
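The physics such in vitro dosimetry models balance reduces to two textbook transport expressions, Stokes-Einstein diffusivity and Stokes settling velocity. The sketch below, with illustrative media properties, shows why delivered dose is diffusion-dominated for small dense particles and sedimentation-dominated for large ones; it is a simplification, not the ISDD model itself.

```python
import math

def particle_transport(d_nm, rho_p, T=310.0, rho_m=1000.0, mu=0.00089, h=0.003):
    # Stokes settling velocity and Stokes-Einstein diffusivity for a sphere
    # of diameter d_nm (nm) and density rho_p (kg/m^3) in culture medium,
    # plus characteristic times to cross a media column of height h (m).
    k_B = 1.380649e-23
    r = d_nm * 1e-9 / 2
    v_s = 2 * r ** 2 * (rho_p - rho_m) * 9.81 / (9 * mu)   # m/s
    D = k_B * T / (6 * math.pi * mu * r)                   # m^2/s
    return v_s, D, h / v_s, h ** 2 / (2 * D)               # ..., t_sed, t_diff

# A small vs a large dense (gold-like) particle:
v1, D1, ts1, td1 = particle_transport(10, 19300.0)
v2, D2, ts2, td2 = particle_transport(1000, 19300.0)
```

Comparing the two characteristic times per particle type is the simplest version of the dose-rate question the full sedimentation-diffusion models solve numerically.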
NASA Astrophysics Data System (ADS)
Hubbart, J. A.; Kellner, R. E.; Zeiger, S. J.
2016-12-01
Advancements in watershed management are both a major challenge, and urgent need of this century. The experimental watershed study (EWS) approach provides critical baseline and long-term information that can improve decision-making, and reduce misallocation of mitigation investments. Historically, the EWS approach was used in wildland watersheds to quantitatively characterize basic landscape alterations (e.g. forest harvest, road building). However, in recent years, EWS is being repurposed in contemporary multiple-land-use watersheds comprising a mosaic of land use practices such as urbanizing centers, industry, agriculture, and rural development. The EWS method provides scalable and transferable results that address the uncertainties of development, while providing a scientific basis for total maximum daily load (TMDL) targets in increasing numbers of Clean Water Act 303(d) listed waters. Collaborative adaptive management (CAM) programs, designed to consider the needs of many stakeholders, can also benefit from EWS-generated information, which can be used for best decision making, and serve as a guidance tool throughout the CAM program duration. Of similar importance, long-term EWS monitoring programs create a model system to show stakeholders how investing in rigorous scientific research initiatives improves decision-making, thereby increasing management efficiencies through more focused investments. The evolution from classic wildland EWS designs to contemporary EWS designs in multiple-land-use watersheds will be presented while illustrating how such an approach can encourage innovation, cooperation, and trust among watershed stakeholders working to reach the common goal of improving and sustaining hydrologic regimes and water quality.
Three dimensional time reversal optical tomography
NASA Astrophysics Data System (ADS)
Wu, Binlin; Cai, W.; Alrubaiee, M.; Xu, M.; Gayen, S. K.
2011-03-01
The time reversal optical tomography (TROT) approach is used to detect and locate absorptive targets embedded in a highly scattering turbid medium, to assess its potential in breast cancer detection. The TROT experimental arrangement uses multi-source probing, multi-detector signal acquisition, and the multiple-signal-classification (MUSIC) algorithm for target location retrieval. Light transport from multiple sources through the intervening medium with embedded targets to the detectors is represented by a response matrix constructed using experimental data. A TR matrix is formed by multiplying the response matrix by its transpose. The eigenvectors with leading non-zero eigenvalues of the TR matrix correspond to embedded objects. The approach was used to: (a) obtain the location and spatial resolution of an absorptive target as a function of its axial position between the source and detector planes; and (b) study variation in spatial resolution of two targets at the same axial position but different lateral positions. The target(s) were glass sphere(s) of diameter ~9 mm filled with ink (absorber) embedded in a 60 mm-thick slab of Intralipid-20% suspension in water with an absorption coefficient μa ~ 0.003 mm⁻¹ and a transport mean free path lt ~ 1 mm at 790 nm, which emulate the average values of those parameters for human breast tissue. The spatial resolution and accuracy of target location depended on axial position and on target contrast relative to the background. Both targets could be resolved and located even when they were only 4 mm apart. The TROT approach is fast, accurate, and has the potential to be useful in breast cancer detection and localization.
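The pipeline sketched in the abstract (response matrix, TR matrix, signal-subspace eigenvectors, MUSIC pseudospectrum) can be illustrated numerically. Everything below — the 1-D source/detector geometry, the toy 1/r Green's function, and the grid values — is an invented assumption for illustration, not the paper's experimental setup:

```python
import numpy as np

# Toy geometry: 8 sources at y=0, 8 detectors at y=1, candidate targets at y=0.5.
src = np.stack([np.linspace(0, 1, 8), np.zeros(8)], axis=1)
det = np.stack([np.linspace(0, 1, 8), np.ones(8)], axis=1)
grid = np.stack([np.linspace(0, 1, 41), np.full(41, 0.5)], axis=1)

def green(points, r):
    # Toy scalar "Green's function" ~ 1/distance (stand-in for the diffuse kernel).
    return 1.0 / np.linalg.norm(points - r, axis=1)

true_r = grid[25]
A = np.outer(green(det, true_r), green(src, true_r))  # response matrix (det x src)
T = A @ A.T                                           # time-reversal matrix
w, V = np.linalg.eigh(T)
signal = V[:, -1:]                                    # leading eigenvector spans the signal subspace
P_noise = np.eye(len(det)) - signal @ signal.T

# MUSIC pseudospectrum: diverges where the detector Green's vector lies in the
# signal subspace, i.e. at the embedded target's location.
pseudo = np.array([1.0 / (np.linalg.norm(P_noise @ green(det, r)) ** 2 + 1e-12)
                   for r in grid])
print(int(np.argmax(pseudo)))  # peaks at grid index 25, the true target position
```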
Study and characterization of a MEMS micromirror device
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
2004-08-01
In this paper, advances in our study and characterization of a MEMS micromirror device are presented. The micromirror device, of 510 μm characteristic length, operates in a dynamic mode with a maximum displacement on the order of 10 μm along its principal optical axis and oscillation frequencies of up to 1.3 kHz. Developments are carried out by analytical, computational, and experimental methods. Analytical and computational nonlinear geometrical models are developed in order to determine the optimal loading-displacement operational characteristics of the micromirror. Due to the operational mode of the micromirror, the experimental characterization of its loading-displacement transfer function requires utilization of advanced optical metrology methods. Optoelectronic holography (OEH) methodologies based on multiple wavelengths that we are developing to perform such characterization are described. It is shown that the combined analytical, computational, and experimental approach is effective in our developments.
Considering RNAi experimental design in parasitic helminths.
Dalzell, Johnathan J; Warnock, Neil D; McVeigh, Paul; Marks, Nikki J; Mousley, Angela; Atkinson, Louise; Maule, Aaron G
2012-04-01
Almost a decade has passed since the first report of RNA interference (RNAi) in a parasitic helminth. Whilst much progress has been made with RNAi informing gene function studies in disparate nematode and flatworm parasites, substantial and seemingly prohibitive difficulties have been encountered in some species, hindering progress. An appraisal of current practices, trends and ideals of RNAi experimental design in parasitic helminths is both timely and necessary for a number of reasons: firstly, the increasing availability of parasitic helminth genome/transcriptome resources means there is a growing need for gene function tools such as RNAi; secondly, fundamental differences and unique challenges exist for parasite species which do not apply to model organisms; thirdly, the inherent variation in experimental design, and reported difficulties with reproducibility undermine confidence. Ideally, RNAi studies of gene function should adopt standardised experimental design to aid reproducibility, interpretation and comparative analyses. Although the huge variations in parasite biology and experimental endpoints make RNAi experimental design standardization difficult or impractical, we must strive to validate RNAi experimentation in helminth parasites. To aid this process we identify multiple approaches to RNAi experimental validation and highlight those which we deem to be critical for gene function studies in helminth parasites.
Sauer, J; Darioly, A; Mast, M Schmid; Schmid, P C; Bischof, N
2010-11-01
The article proposes a multi-level approach for evaluating communication skills training (CST) as an important element of crew resource management (CRM) training. Within this methodological framework, the present work examined the effectiveness of CST in matching or mismatching team compositions with regard to hierarchical status and competence. There is little experimental research that evaluated the effectiveness of CRM training at multiple levels (i.e. reaction, learning, behaviour) and in teams composed of members of different status and competence. An experiment with a two (CST: with vs. without) by two (competence/hierarchical status: congruent vs. incongruent) design was carried out. A total of 64 participants were trained for 2.5 h on a simulated process control environment, with the experimental group being given 45 min of training on receptiveness and influencing skills. Prior to the 1-h experimental session, participants were assigned to two-person teams. The results showed overall support for the use of such a multi-level approach of training evaluation. Stronger positive effects of CST were found for subjective measures than for objective performance measures. STATEMENT OF RELEVANCE: This work provides some guidance for the use of a multi-level evaluation of CRM training. It also emphasises the need to collect objective performance data for training evaluation in addition to subjective measures with a view to gain a more accurate picture of the benefits of such training approaches.
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A general problem with conventional multivariate statistical approaches to classification of multiple data types is that a common multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability, but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution-free, since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem of how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
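The consensus-theoretic weighting of unequally reliable sources mentioned above can be illustrated with a logarithmic opinion pool; the class posteriors and reliability weights below are made-up numbers, not results from the paper:

```python
import numpy as np

# Invented per-source class posteriors (three classes) and reliability weights.
p1 = np.array([0.6, 0.3, 0.1])     # reliable source
p2 = np.array([0.2, 0.5, 0.3])     # less reliable source
w = np.array([0.8, 0.2])           # reliability weights (sum to 1)

def log_opinion_pool(posteriors, weights):
    # Weighted geometric mean of the source posteriors, renormalized.
    logp = sum(wi * np.log(pi) for wi, pi in zip(weights, posteriors))
    p = np.exp(logp)
    return p / p.sum()

combined = log_opinion_pool([p1, p2], w)
print(int(np.argmax(combined)))    # the reliable source dominates: class 0
```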
Kontopantelis, Evangelos; Parisi, Rosa; Springate, David A; Reeves, David
2017-01-13
In modern health care systems, the computerization of all aspects of clinical care has led to the development of large data repositories. For example, in the UK, large primary care databases hold millions of electronic medical records, with detailed information on diagnoses, treatments, outcomes and consultations. Careful analyses of these observational datasets of routinely collected data can complement evidence from clinical trials or even answer research questions that cannot be addressed in an experimental setting. However, 'missingness' is a common problem for routinely collected data, especially for biological parameters over time. Absence of complete data for the whole of an individual's study period is a potential bias risk and standard complete-case approaches may lead to biased estimates. However, the structure of the data values makes standard cross-sectional multiple-imputation approaches unsuitable. In this paper we propose and evaluate mibmi, a new command for cleaning and imputing longitudinal body mass index data. The regression-based data cleaning aspects of the algorithm can be useful when researchers analyze messy longitudinal data. Although the multiple imputation algorithm is computationally expensive, it performed similarly to or even better than existing alternatives when interpolating observations. The mibmi algorithm can be a useful tool for analyzing longitudinal body mass index data, or other longitudinal data with very low individual-level variability.
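mibmi itself is a Stata command; as a rough illustration of the interpolation step only (not its regression-based cleaning or the full multiple-imputation algorithm), within-person linear interpolation of longitudinal BMI might look like this, with invented records:

```python
import numpy as np
import pandas as pd

# Invented longitudinal BMI records, one missing value per person.
df = pd.DataFrame({
    "id":   [1, 1, 1, 2, 2, 2],
    "year": [2010, 2011, 2012, 2010, 2011, 2012],
    "bmi":  [24.0, np.nan, 26.0, 30.0, 29.5, np.nan],
})

# Interpolate within each person's trajectory (rows already sorted by year).
df["bmi_filled"] = df.groupby("id")["bmi"].transform(
    lambda s: s.interpolate(limit_direction="both"))
print(df.loc[1, "bmi_filled"])    # person 1's missing 2011 value becomes 25.0
```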
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sohn, A.; Gaudiot, J.-L.
1991-12-31
Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern matching step of production systems. In this paper, the authors investigate possible improvements on the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm have been identified, based on which they introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple root node-based production system interpreter is presented to investigate its relative algorithmic behavior over the Rete-based Ops5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential Sun 4/490 machine by using both interpreters, and various experimental results are presented. Their investigation indicates that the multiple root node-based production system interpreter gives a maximum of up to 6-fold improvement over the Lisp implementation of the Rete-based Ops5 for the match step.
Ensemble Clustering using Semidefinite Programming with Applications
Singh, Vikas; Mukherjee, Lopamudra; Peng, Jiming; Xu, Jinhui
2011-01-01
In this paper, we study the ensemble clustering problem, where the input is in the form of multiple clustering solutions. The goal of ensemble clustering algorithms is to aggregate the solutions into one solution that maximizes the agreement in the input ensemble. We obtain several new results for this problem. Specifically, we show that the notion of agreement under such circumstances can be better captured using a 2D string encoding rather than a voting strategy, which is common among existing approaches. Our optimization proceeds by first constructing a non-linear objective function which is then transformed into a 0–1 Semidefinite program (SDP) using novel convexification techniques. This model can be subsequently relaxed to a polynomial time solvable SDP. In addition to the theoretical contributions, our experimental results on standard machine learning and synthetic datasets show that this approach leads to improvements not only in terms of the proposed agreement measure but also the existing agreement measures based on voting strategies. In addition, we identify several new application scenarios for this problem. These include combining multiple image segmentations and generating tissue maps from multiple-channel Diffusion Tensor brain images to identify the underlying structure of the brain. PMID:21927539
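For contrast with the paper's 0–1 SDP formulation, the common baseline family it improves upon can be sketched: build a co-association matrix from the input ensemble and link items that fall in the same cluster in a majority of the input clusterings. The toy labels below are illustrative only:

```python
import numpy as np

# Three toy clusterings of five items (labels are arbitrary within each clustering).
ensemble = [
    [0, 0, 1, 1, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
]

n = len(ensemble[0])
co = np.zeros((n, n))
for labels in ensemble:
    lab = np.asarray(labels)
    co += (lab[:, None] == lab[None, :]).astype(float)
co /= len(ensemble)                # co-association: fraction of clusterings agreeing

adj = co >= 0.5                    # majority-vote linkage
consensus, visited = {}, set()
cid = 0
for i in range(n):
    if i in visited:
        continue
    stack = [i]
    while stack:                   # connected components of the agreement graph
        j = stack.pop()
        if j in visited:
            continue
        visited.add(j)
        consensus[j] = cid
        stack.extend(k for k in range(n) if adj[j, k] and k not in visited)
    cid += 1
print([consensus[i] for i in range(n)])  # → [0, 0, 1, 1, 1]
```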
Bennetts, Victor Hernandez; Schaffernicht, Erik; Pomareda, Victor; Lilienthal, Achim J; Marco, Santiago; Trincavelli, Marco
2014-09-17
In this paper, we address the task of gas distribution modeling in scenarios where multiple heterogeneous compounds are present. Gas distribution modeling is particularly useful in emission monitoring applications, where spatial representations of the gaseous patches can be used to identify emission hot spots. In realistic environments, the presence of multiple chemicals is expected and, therefore, gas discrimination has to be incorporated in the modeling process. The approach presented in this work addresses the task of gas distribution modeling by combining different non-selective gas sensors. Gas discrimination is addressed with an open sampling system, composed of an array of metal oxide sensors and a probabilistic algorithm tailored to uncontrolled environments. For each of the identified compounds, the mapping algorithm generates a calibrated gas distribution model using the classification uncertainty and the concentration readings acquired with a photoionization detector. The meta-parameters of the proposed modeling algorithm are automatically learned from the data. The approach was validated with a gas-sensitive robot patrolling outdoor and indoor scenarios where two different chemicals were released simultaneously. The experimental results show that the generated multi-compound maps can be used to accurately predict the location of emitting gas sources.
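A bare-bones flavor of gas distribution mapping — without the paper's gas discrimination, calibration, or learned meta-parameters — is a kernel-weighted average of point concentration readings onto a grid. The positions, readings, and kernel width below are invented:

```python
import numpy as np

# Invented robot path data: three measurement positions and PID readings (ppm).
positions = np.array([[1.0, 1.0], [3.0, 1.0], [2.0, 3.0]])
readings = np.array([0.2, 0.9, 0.5])
sigma = 1.0                                   # kernel width (assumed)

# Evaluate a normalized Gaussian-kernel estimate on a 41x41 grid over [0,4]^2.
xs, ys = np.meshgrid(np.linspace(0, 4, 41), np.linspace(0, 4, 41))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
d2 = ((grid[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
w = np.exp(-d2 / (2 * sigma ** 2))
conc = (w * readings).sum(1) / (w.sum(1) + 1e-12)

# Smoothed estimates at the three sensor sites, pulled toward neighboring readings.
at_site = [int(np.argmin(((grid - p) ** 2).sum(1))) for p in positions]
print([round(float(conc[i]), 2) for i in at_site])  # → [0.3, 0.8, 0.51]
```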
Tangprasertchai, Narin S; Zhang, Xiaojun; Ding, Yuan; Tham, Kenneth; Rohs, Remo; Haworth, Ian S; Qin, Peter Z
2015-01-01
The technique of site-directed spin labeling (SDSL) provides unique information on biomolecules by monitoring the behavior of a stable radical tag (i.e., spin label) using electron paramagnetic resonance (EPR) spectroscopy. In this chapter, we describe an approach in which SDSL is integrated with computational modeling to map conformations of nucleic acids. This approach builds upon a SDSL tool kit previously developed and validated, which includes three components: (i) a nucleotide-independent nitroxide probe, designated as R5, which can be efficiently attached at defined sites within arbitrary nucleic acid sequences; (ii) inter-R5 distances in the nanometer range, measured via pulsed EPR; and (iii) an efficient program, called NASNOX, that computes inter-R5 distances on given nucleic acid structures. Following a general framework of data mining, our approach uses multiple sets of measured inter-R5 distances to retrieve "correct" all-atom models from a large ensemble of models. The pool of models can be generated independently without relying on the inter-R5 distances, thus allowing a large degree of flexibility in integrating the SDSL-measured distances with a modeling approach best suited for the specific system under investigation. As such, the integrative experimental/computational approach described here represents a hybrid method for determining all-atom models based on experimentally-derived distance measurements. © 2015 Elsevier Inc. All rights reserved.
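The data-mining step described above — retrieving models consistent with measured inter-R5 distances — reduces, in caricature, to distance filtering over a model ensemble. The site names, distances, and tolerance below are hypothetical, not NASNOX output:

```python
# Hypothetical measured inter-R5 distances (nm) and candidate-model predictions.
measured = {("site1", "site2"): 2.8, ("site1", "site3"): 4.1}
models = {
    "model_a": {("site1", "site2"): 2.7, ("site1", "site3"): 4.0},
    "model_b": {("site1", "site2"): 3.6, ("site1", "site3"): 4.2},
}
tol = 0.3   # assumed experimental tolerance

# Keep only models whose predicted distances all agree with the measurements.
selected = [name for name, pred in models.items()
            if all(abs(pred[pair] - measured[pair]) <= tol for pair in measured)]
print(selected)  # → ['model_a']
```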
Accelerated gradient based diffuse optical tomographic image reconstruction.
Biswas, Samir Kumar; Rajan, K; Vasu, R M
2011-01-01
We present fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The commonly used Newton-based MOBIIR algorithm requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time many fold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on a pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the results of reconstruction. We have shown through experimental and simulation studies that Broyden-based MOBIIR and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple and result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using Newton, Broyden, and adjoint Broyden approaches.
These algorithms work well when the initial guess is close to the true solution. However, when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
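The core trick above — replacing repeated Jacobian evaluations with Broyden's rank-1 secant updates — can be shown on a toy two-variable system; the system below is invented and far simpler than the DOT forward model:

```python
import numpy as np

def F(x):
    # Invented two-variable nonlinear system standing in for the DOT residual.
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])

x = np.array([1.0, 2.0])
B = np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])   # Jacobian evaluated once, at x0
for _ in range(50):
    Fx = F(x)
    if np.linalg.norm(Fx) < 1e-12:
        break
    dx = np.linalg.solve(B, -Fx)
    dF = F(x + dx) - Fx
    # Broyden rank-1 secant update: no further Jacobian evaluations needed.
    B += np.outer(dF - B @ dx, dx) / (dx @ dx)
    x = x + dx
print(np.round(x, 6))              # converges to the root (sqrt(2), sqrt(2))
```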
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Morita, Syoichi; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi
2015-03-01
This paper describes an automatic approach for anatomy partitioning on three-dimensional (3D) computed tomography (CT) images that divides the human torso into several volume-of-interest (VOI) images based on anatomical definition. The proposed approach combines several individual organ-location detections with a groupwise organ-location calibration and correction to achieve an automatic and robust multiple-organ localization task. The essence of the proposed method is to jointly detect the 3D minimum bounding box for each type of organ shown on CT images, based on intra-organ image textures and the inter-organ spatial relationships of the anatomy. Machine-learning-based template matching and generalized Hough transform-based point-distribution estimation are used in the detection and calibration processes. We apply this approach to the automatic partitioning of a torso region on CT images, which is divided into 35 VOIs presenting major organ regions and tissues required by routine diagnosis in clinical medicine. A database containing 4,300 patient cases of high-resolution 3D torso CT images is used for training and performance evaluations. We confirmed that the proposed method was successful in target organ localization on more than 95% of CT cases. Only two organs (gallbladder and pancreas) showed a lower success rate: 71 and 78%, respectively. In addition, we applied this approach to another database that included 287 patient cases of whole-body CT images scanned for positron emission tomography (PET) studies, used for additional performance evaluation. The experimental results showed no significant difference between the anatomy partitioning results from the two databases, except for the spleen. All experimental results showed that the proposed approach was efficient and useful in accomplishing localization tasks for major organs and tissues on CT images scanned using different protocols.
Buchner, Ginka S; Murphy, Ronan D; Buchete, Nicolae-Viorel; Kubelka, Jan
2011-08-01
The problem of spontaneous folding of amino acid chains into highly organized, biologically functional three-dimensional protein structures continues to challenge modern science. Understanding how proteins fold requires characterization of the underlying energy landscapes as well as the dynamics of the polypeptide chains in all stages of the folding process. In recent years, important advances toward these goals have been achieved owing to the rapidly growing interdisciplinary interest and significant progress in both experimental techniques and theoretical methods. Improvements in experimental time resolution led to determination of the timescales of the important elementary events in folding, such as formation of secondary structure and tertiary contacts. Sensitive single-molecule methods made it possible to probe the distributions of the unfolded and folded states and to follow the folding reaction of individual protein molecules. The discovery of proteins that fold in microseconds opened the possibility of atomic-level theoretical simulations of folding and their direct comparisons with experimental data, as well as of direct experimental observation of the barrier-less folding transition. Ultra-fast folding also brought new questions concerning the intrinsic limits of folding rates and experimental signatures of barrier-less "downhill" folding. These problems will require novel approaches for even more detailed experimental investigations of the folding dynamics as well as for the analysis of the folding kinetic data. For theoretical simulations of folding, a main challenge is how to extract the relevant information from overwhelmingly detailed atomistic trajectories.
New theoretical methods have been devised to allow a systematic approach towards a quantitative analysis of the kinetic network of folding-unfolding transitions between various configuration states of a protein, revealing the transition states and the associated folding pathways at multiple levels, from atomistic to coarse-grained representations. This article is part of a Special Issue entitled: Protein Dynamics: Experimental and Computational Approaches. Copyright © 2010 Elsevier B.V. All rights reserved.
Inoue, Shigeyoshi; Bag, Prasenjit; Weetman, Catherine
2018-05-23
Synthesis and isolation of stable main group compounds featuring multiple bonds has been of keen interest for the last several decades. Multiply bonded complexes were obtained using sterically demanding substituents that provide kinetic and thermodynamic stability. Many of these compounds have unusual structural and electronic properties that challenge the classical concept of covalent multiple bonding. In contrast, analogous aluminium compounds are scarce in spite of aluminium's high natural abundance. The bond in the parent dialumene (Al2H2) has been calculated to be extremely weak, making Al multiple bonds a challenging synthetic target. This review provides an overview of recent advances in the cutting-edge synthetic approaches used to obtain aluminium homo- and heterodiatomic multiply bonded complexes, and of the careful ligand design required to stabilise these reactive species. Additionally, the reactivity of these novel compounds towards various small molecules and reagents is discussed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
3D measurement using combined Gray code and dual-frequency phase-shifting approach
NASA Astrophysics Data System (ADS)
Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin
2018-04-01
The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
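The correction mechanism described above — using a jump-free low-frequency measurement to snap the high-frequency absolute value back by an integer number of periods — can be sketched with made-up numbers; this mirrors the idea only, not the paper's exact formulas:

```python
# Invented numbers: fringe period 16 px, true absolute position 137.3 px.
period = 16.0
truth = 137.3

# High-frequency result: fractional phase is right, but the decoded period
# index is off by one -- a "period jump error" of exactly one period.
frac = truth % period
high = (truth // period + 1) * period + frac

# Low-frequency result: jump-free but noisier (error assumed < period/2).
low = truth + 0.9

# Correction: shift the high-frequency value by the integer number of periods
# that best matches the low-frequency reference.
k = round((low - high) / period)
corrected = high + k * period
print(abs(corrected - truth) < 1e-9)  # True: the jump error is removed
```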
Vertical decomposition with Genetic Algorithm for Multiple Sequence Alignment
2011-01-01
Background Many Bioinformatics studies begin with a multiple sequence alignment as the foundation for their research. This is because multiple sequence alignment can be a useful technique for studying molecular evolution and analyzing sequence structure relationships. Results In this paper, we have proposed a Vertical Decomposition with Genetic Algorithm (VDGA) for Multiple Sequence Alignment (MSA). In VDGA, we divide the sequences vertically into two or more subsequences, and then solve them individually using a guide tree approach. Finally, we combine all the subsequences to generate a new multiple sequence alignment. This technique is applied on the solutions of the initial generation and of each child generation within VDGA. We have used two mechanisms to generate an initial population in this research: the first mechanism is to generate guide trees with randomly selected sequences and the second is shuffling the sequences inside such trees. Two different genetic operators have been implemented with VDGA. To test the performance of our algorithm, we have compared it with existing well-known methods, namely PRRP, CLUSTALX, DIALIGN, HMMT, SB_PIMA, ML_PIMA, MULTALIGN, and PILEUP8, and also other methods, based on Genetic Algorithms (GA), such as SAGA, MSA-GA and RBT-GA, by solving a number of benchmark datasets from BAliBase 2.0. Conclusions The experimental results showed that the VDGA with three vertical divisions was the most successful variant for most of the test cases in comparison to other divisions considered with VDGA. The experimental results also confirmed that VDGA outperformed the other methods considered in this research. PMID:21867510
An efficient approach to ARMA modeling of biological systems with multiple inputs and delays
NASA Technical Reports Server (NTRS)
Perrott, M. H.; Cohen, R. J.
1996-01-01
This paper presents a new approach to AutoRegressive Moving Average (ARMA or ARX) modeling which automatically seeks the best model order to represent investigated linear, time-invariant systems using their input/output data. The algorithm seeks the ARMA parameterization which accounts for variability in the output of the system due to input activity and contains the fewest number of parameters required to do so. The unique characteristics of the proposed system identification algorithm are its simplicity and efficiency in handling systems with delays and multiple inputs. We present results of applying the algorithm to simulated data and experimental biological data. In addition, a technique for assessing the error associated with the impulse responses calculated from estimated ARMA parameterizations is presented. The mapping from ARMA coefficients to impulse response estimates is nonlinear, which complicates any effort to construct confidence bounds for the obtained impulse responses. Here a method for obtaining a linearization of this mapping is derived, which leads to a simple procedure to approximate the confidence bounds.
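The paper's contribution is automatic order selection; the underlying ARX estimation itself is ordinary least squares on lagged data. A sketch for a known single-input system with delay follows — the model structure and numbers are invented, and no order search is performed:

```python
import numpy as np

# Invented single-input ARX system y[n] = a*y[n-1] + b*u[n-d] with delay d = 2.
rng = np.random.default_rng(0)
a_true, b_true, d = 0.5, 2.0, 2
u = rng.standard_normal(200)
y = np.zeros(200)
for n in range(d, 200):
    y[n] = a_true * y[n - 1] + b_true * u[n - d]

# Least-squares fit: regress y[n] on the lagged output y[n-1] and delayed input u[n-2].
X = np.column_stack([y[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print(np.round(theta, 6))          # recovers a = 0.5, b = 2.0 (noise-free data)
```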
MGUPGMA: A Fast UPGMA Algorithm With Multiple Graphics Processing Units Using NCCL
Hua, Guan-Jie; Hung, Che-Lun; Lin, Chun-Yuan; Wu, Fu-Che; Chan, Yu-Wei; Tang, Chuan Yi
2017-01-01
A phylogenetic tree is a visual diagram of the relationships between a set of biological species, which scientists use to analyze many characteristics of the species. The distance-matrix methods, such as Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and Neighbor Joining, construct a phylogenetic tree by calculating pairwise genetic distances between taxa. These methods suffer from poor computational performance. Although several new methods built on high-performance hardware and frameworks have been proposed, the issue still exists. In this work, a novel parallel UPGMA approach on multiple Graphics Processing Units is proposed to construct a phylogenetic tree from an extremely large set of sequences. The experimental results show that the proposed approach on a DGX-1 server with 8 NVIDIA P100 graphic cards achieves approximately 3-fold to 7-fold speedup over the implementation of UPGMA on a modern CPU and a single GPU, respectively. PMID:29051701
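The serial algorithm that the paper parallelizes across GPUs is itself straightforward; a minimal CPU sketch of UPGMA's merge loop (repeated closest-pair merges with size-weighted arithmetic-mean distance updates) on an invented 4-taxon matrix:

```python
# Minimal serial UPGMA merge loop on an invented 4-taxon distance matrix.
def upgma_merge_order(matrix):
    n = len(matrix)
    dist = {(i, j): matrix[i][j] for i in range(n) for j in range(i + 1, n)}
    size = {i: 1 for i in range(n)}
    merges, nxt = [], n
    while len(size) > 1:
        i, j = min(dist, key=dist.get)          # closest pair of clusters
        merges.append((i, j))
        size[nxt] = size[i] + size[j]
        for k in list(size):
            if k in (i, j, nxt):
                continue
            dik = dist.pop((min(i, k), max(i, k)))
            djk = dist.pop((min(j, k), max(j, k)))
            # size-weighted arithmetic mean: the "A" in UPGMA
            dist[(k, nxt)] = (size[i] * dik + size[j] * djk) / (size[i] + size[j])
        del dist[(i, j)], size[i], size[j]
        nxt += 1
    return merges

D = [[0, 2, 6, 6],
     [2, 0, 6, 6],
     [6, 6, 0, 4],
     [6, 6, 4, 0]]
print(upgma_merge_order(D))  # → [(0, 1), (2, 3), (4, 5)]
```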
Aerodynamic analysis for aircraft with nacelles, pylons, and winglets at transonic speeds
NASA Technical Reports Server (NTRS)
Boppe, Charles W.
1987-01-01
A computational method has been developed to provide an analysis for complex realistic aircraft configurations at transonic speeds. Wing-fuselage configurations with various combinations of pods, pylons, nacelles, and winglets can be analyzed along with simpler shapes such as airfoils, isolated wings, and isolated bodies. The flexibility required for the treatment of such diverse geometries is obtained by using a multiple nested grid approach in the finite-difference relaxation scheme. Aircraft components (and their grid systems) can be added or removed as required. As a result, the computational method can be used in the same manner as a wind tunnel to study high-speed aerodynamic interference effects. The multiple grid approach also provides high boundary point density/cost ratio. High resolution pressure distributions can be obtained. Computed results are correlated with wind tunnel and flight data using four different transport configurations. Experimental/computational component interference effects are included for cases where data are available. The computer code used for these comparisons is described in the appendices.
A Hadoop-based method to predict potential effective drug combination.
Sun, Yifan; Xiong, Yi; Xu, Qian; Wei, Dongqing
2014-01-01
Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we implemented data preprocessing and support vector machine and naïve Bayesian classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big-data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drugs as the number of combinations grows exponentially in the future. The source code and datasets are available upon request. PMID:25147789
Optimum decoding and detection of a multiplicative amplitude-encoded watermark
NASA Astrophysics Data System (ADS)
Barni, Mauro; Bartolini, Franco; De Rosa, Alessia; Piva, Alessandro
2002-04-01
The aim of this paper is to present a novel approach to the decoding and detection of multibit, multiplicative watermarks embedded in the frequency domain. The watermark payload is conveyed by amplitude-modulating a pseudo-random sequence, thus resembling conventional direct-sequence (DS) spread-spectrum techniques. As opposed to conventional communication systems, though, the watermark is embedded within the host DFT coefficients by using a multiplicative rule. The watermark decoding technique presented in the paper is optimum in that it minimizes the bit error probability. The problem of watermark presence assessment, which is often underestimated by state-of-the-art research on multibit watermarking, is addressed too, and the optimum detection rule is derived according to the Neyman-Pearson criterion. Experimental results are shown both to demonstrate the validity of the theoretical analysis and to highlight the good performance of the proposed system.
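The multiplicative embedding rule can be illustrated on a vector of DFT magnitudes. In the sketch below, the embedding strength, sequence length, and coefficient band are assumed values, and a simple non-blind correlation decoder stands in for the paper's optimum blind decoder:

```python
import numpy as np

rng = np.random.default_rng(1)
host = rng.standard_normal((64, 64))
mag = np.abs(np.fft.fft2(host)).reshape(-1)  # host DFT magnitudes

# One payload bit b amplitude-modulates a pseudo-random +/-1 sequence w
w = rng.choice([-1.0, 1.0], size=1000)
b, gamma = +1, 0.1                 # bit and embedding strength (assumed)
idx = np.arange(200, 1200)         # mid-frequency coefficients (illustrative)

# Multiplicative embedding rule: |X'_i| = |X_i| * (1 + gamma * b * w_i)
marked = mag.copy()
marked[idx] *= 1.0 + gamma * b * w

# Non-blind correlation decoder for the sketch only; the paper derives an
# optimum blind decoder from the statistics of the DFT coefficients
ratio = marked[idx] / mag[idx] - 1.0
b_hat = int(np.sign(np.sum(w * ratio)))
```

Because the embedded perturbation is proportional to the host magnitude, larger coefficients carry proportionally more watermark energy, which is the key difference from additive spread spectrum.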
A space-frequency multiplicative regularization for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilize the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of excitation sources, as well as that of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. In particular, it is pointed out that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.
E-Nose Vapor Identification Based on Dempster-Shafer Fusion of Multiple Classifiers
NASA Technical Reports Server (NTRS)
Li, Winston; Leung, Henry; Kwan, Chiman; Linnell, Bruce R.
2005-01-01
Electronic nose (e-nose) vapor identification is an efficient approach to monitoring air contaminants in space stations and shuttles in order to ensure the health and safety of astronauts. Data preprocessing (measurement denoising and feature extraction) and pattern classification are important components of an e-nose system. In this paper, a wavelet-based denoising method is applied to filter the noisy sensor measurements. Transient-state features are then extracted from the denoised sensor measurements and used to train multiple classifiers such as multi-layer perceptrons (MLP), support vector machines (SVM), k-nearest neighbor (KNN), and Parzen classifiers. The Dempster-Shafer (DS) technique is used at the end to fuse the results of the multiple classifiers into the final classification. Experimental analysis based on real vapor data shows that the wavelet denoising method can remove both random noise and outliers successfully, and that the classification rate can be improved by using classifier fusion.
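The fusion step, Dempster's rule of combination, can be sketched directly. The mass assignments for the two classifiers below are hypothetical; in the paper they would come from the trained MLP, SVM, KNN, and Parzen outputs:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty set
    norm = 1.0 - conflict                # renormalize away the conflict
    return {s: w / norm for s, w in combined.items()}

# Two vapor classes plus the "don't know" set (the whole frame Theta)
A, B = frozenset({"A"}), frozenset({"B"})
theta = frozenset({"A", "B"})
m_svm = {A: 0.6, B: 0.1, theta: 0.3}     # hypothetical classifier masses
m_knn = {A: 0.5, B: 0.2, theta: 0.3}
fused = dempster_combine(m_svm, m_knn)
```

Agreeing evidence reinforces class A, while the mass on the full frame lets each classifier express ignorance rather than force a decision.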
Toytman, I; Silbergleit, A; Simanovski, D; Palanker, D
2010-10-01
Transparent biological tissues can be precisely dissected with ultrafast lasers using optical breakdown in the tight focal zone. Typically, tissues are cut by sequential application of pulses, each of which produces a single cavitation bubble. We investigate the hydrodynamic interactions between simultaneous cavitation bubbles originating from multiple laser foci. Simultaneous expansion and collapse of cavitation bubbles can enhance the cutting efficiency by increasing the resulting deformations in tissue and the associated rupture zone. An analytical model of the flow induced by the bubbles is presented and experimentally verified. The threshold strain of material rupture is measured in a model tissue. Using the computational model and the experimental value of the threshold strain, one can compute the shape of the rupture zone in tissue resulting from the application of multiple bubbles. With a threshold strain of 0.7, two simultaneous bubbles produce a continuous cut when applied at a distance 1.35 times greater than that required in the sequential approach. Simultaneous focusing of the laser in multiple spots along the line of the intended cut can extend this ratio to 1.7. Counterpropagating jets forming during the collapse of two bubbles in materials with low viscosity can further extend the cutting zone, up to approximately a factor of 1.5.
Bayesian Dose-Response Modeling in Sparse Data
NASA Astrophysics Data System (ADS)
Kim, Steven B.
This book discusses Bayesian dose-response modeling in small samples applied to two different settings. The first setting is early phase clinical trials, and the second setting is toxicology studies in cancer risk assessment. In early phase clinical trials, experimental units are humans who are actual patients. Prior to a clinical trial, opinions from multiple subject area experts are generally more informative than the opinion of a single expert, but we may face a dilemma when they hold disagreeing prior opinions. In this regard, we consider compromising on the disagreement and compare two different approaches for making a decision. In addition to combining multiple opinions, we also address balancing two levels of ethics in early phase clinical trials. The first level is individual-level ethics, which reflects the perspective of trial participants. The second level is population-level ethics, which reflects the perspective of future patients. We extensively compare two existing statistical methods, each of which focuses on one perspective, and propose a new method which balances the two conflicting perspectives. In toxicology studies, experimental units are living animals. Here we focus on a potentially non-monotonic dose-response relationship known as hormesis. Briefly, hormesis is a phenomenon characterized by a beneficial effect at low doses and a harmful effect at high doses. In cancer risk assessment, the estimation of a parameter known as a benchmark dose can be highly sensitive to a class of assumptions, monotonicity or hormesis. In this regard, we propose a robust approach which considers both monotonicity and hormesis as possibilities. In addition, we discuss statistical hypothesis testing for hormesis and consider various experimental designs for detecting hormesis based on Bayesian decision theory.
Past experiments have not been optimally designed for testing for hormesis, and some Bayesian optimal designs may not be optimal under a wrong parametric assumption. In this regard, we consider a robust experimental design which does not require any parametric assumption.
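A minimal Bayesian dose-response fit can be sketched by grid approximation over a logistic model. The data, parameter grids, and flat prior below are assumed for illustration and do not reproduce the book's designs or its hormesis models:

```python
import numpy as np

# Binomial outcomes at four doses (hypothetical toxicology-style data)
doses = np.array([0.0, 1.0, 2.0, 3.0])
n_trials = np.array([10, 10, 10, 10])
n_events = np.array([1, 2, 6, 9])

# Monotone logistic model P(event | d) = 1 / (1 + exp(-(a + b*d)))
# with a flat prior over a grid of (a, b) values
a_grid = np.linspace(-6, 6, 121)
b_grid = np.linspace(0, 4, 81)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")

log_post = np.zeros_like(A)
for d, n, k in zip(doses, n_trials, n_events):
    p = 1.0 / (1.0 + np.exp(-(A + B * d)))
    log_post += k * np.log(p) + (n - k) * np.log1p(-p)   # binomial log-likelihood

post = np.exp(log_post - log_post.max())
post /= post.sum()                       # normalized posterior on the grid

# Posterior mean slope: a clearly positive value supports a monotone dose effect
b_mean = float((post * B).sum())
```

A hormesis-aware analysis would replace the monotone logistic with a model allowing a low-dose dip and compare the two, as the book proposes.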
Wang, Chun-Hua; Zhong, Yi; Zhang, Yan; Liu, Jin-Ping; Wang, Yue-Fei; Jia, Wei-Na; Wang, Guo-Cai; Li, Zheng; Zhu, Yan; Gao, Xiu-Mei
2016-02-01
Chinese medicines treat complex diseases through multiple components acting on multiple targets; however, the main effective components and their key targets and functions often remain unidentified. Herein, a network analysis method was developed to identify the main effective components and key targets of a Chinese medicine, the Lianhua-Qingwen Formula (LQF), which is commonly used for the prevention and treatment of viral influenza in China. It is composed of 11 herbs, gypsum and menthol, with 61 compounds identified in our previous work. In this paper, these 61 candidate compounds were used to find their related targets and construct the predicted-target (PT) network. An influenza-related protein-protein interaction (PPI) network was constructed and integrated with the PT network. The compound-effective target (CET) and compound-ineffective target (CIT) networks were then extracted, and a novel approach was developed to identify effective components by comparing the two. As a result, 15 main effective components were identified along with 61 corresponding targets. Seven of these main effective components were further validated experimentally to have antiviral efficacy in vitro. The main effective component-target (MECT) network was then constructed from the main effective components and their key targets. Gene Ontology (GO) analysis of the MECT network predicted key functions modulated by the LQF, such as NO production. Interestingly, five effective components were tested experimentally and exhibited inhibitory effects on NO production in LPS-induced RAW 264.7 cells. In summary, we have developed a novel approach to identify the main effective components of the Chinese medicine LQF and experimentally validated some of the predictions.
Shahriari, Mohammadali; Biglarbegian, Mohammad
2018-01-01
This paper presents a new conflict resolution methodology for multiple mobile robots that ensures their motion liveness, especially in cluttered and dynamic environments. Our method formulates an optimization problem that minimizes the overall travel times of the robots subject to resolving all conflicts in their motion. This optimization problem can be solved easily by coordinating only the robots' speeds. To overcome the computational cost of executing the algorithm in very cluttered environments, we develop an innovative method that clusters the environment into independent subproblems solvable with parallel programming techniques. We demonstrate the scalability of our approach through extensive simulations: our proposed method resolved the conflicts of 100 robots in less than 1.23 s in a cluttered environment with 4357 intersections in the robots' paths. We also developed an experimental testbed and demonstrated that our approach can be implemented in real time. We finally compared our approach with other existing methods in the literature, both quantitatively and qualitatively. This comparison shows that, while our approach is mathematically sound, it is more computationally efficient, scales to a very large number of robots, and guarantees the live and smooth motion of the robots.
EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.
Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin
2018-04-24
Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression-weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed the individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple-negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. Contact: bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
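A weighted cosine score in the spirit of EWCos can be sketched as follows. The per-gene weighting by the magnitude of the disease signature is an assumption made here for illustration and is not the published EWCos definition; the signatures are synthetic:

```python
import numpy as np

def weighted_cosine(disease, drug, weights):
    """Cosine similarity with per-gene weights that down-weight
    uninformative (small) expression changes."""
    wd, wg = weights * disease, weights * drug
    return float(wd @ wg / (np.linalg.norm(wd) * np.linalg.norm(wg)))

rng = np.random.default_rng(2)
disease = rng.standard_normal(100)                       # disease gene signature
drug_reversal = -disease + 0.3 * rng.standard_normal(100)  # drug reversing it
drug_random = rng.standard_normal(100)                   # unrelated drug

# Illustrative weighting: trust genes with large disease-expression changes
w = np.abs(disease) / np.abs(disease).max()

score_rev = weighted_cosine(disease, drug_reversal, w)
score_rand = weighted_cosine(disease, drug_random, w)
# A strongly negative score flags the drug as a signature-reversal candidate
```

In a repositioning pipeline, drugs would be ranked by this score and the most negative ones proposed as candidate therapies.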
Linking definitions, mechanisms, and modeling of drought-induced tree death.
Anderegg, William R L; Berry, Joseph A; Field, Christopher B
2012-12-01
Tree death from drought and heat stress is a critical and uncertain component in forest ecosystem responses to a changing climate. Recent research has illuminated how tree mortality is a complex cascade of changes involving interconnected plant systems over multiple timescales. Explicit consideration of the definitions, dynamics, and temporal and biological scales of tree mortality research can guide experimental and modeling approaches. In this review, we draw on the medical literature concerning human death to propose a water resource-based approach to tree mortality that considers the tree as a complex organism with a distinct growth strategy. This approach provides insight into mortality mechanisms at the tree and landscape scales and presents promising avenues into modeling tree death from drought and temperature stress. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chtourou, Rim; Haugou, Gregory; Leconte, Nicolas; Zouari, Bassem; Chaari, Fahmi; Markiewicz, Eric
2015-09-01
Resistance Spot Welding (RSW) of multiple sheets of multiple materials is increasingly used in the automotive industry. However, the mechanical strength of this new generation of spot-welded assemblies has received little attention, particularly in experiments dedicated to investigating the strength of spot welds joining several sheets of different grades, and in their macroscopic modeling for structural computations. Indeed, most published studies are limited to two-sheet assemblies. Therefore, in the first part of this work, an advanced experimental set-up with a reduced mass is proposed to characterize the quasi-static and dynamic mechanical behavior and rupture of spot welds joining several sheets of different grades. The proposed device is based on the Arcan test, so that the contribution of the plates to the global response is reduced and loading modes I/II are combined and well controlled. In the second part, a simplified spot-weld connector element (macroscopic model) is proposed to describe the nonlinear response and rupture of this new generation of spot-welded assemblies. The connector model involves several parameters, which are identified through a reverse-engineering approach using the mechanical responses from the experimental tests presented in the first part of this work.
Kaptein, Maurits; van Emden, Robin; Iannuzzi, Davide
2017-01-01
Due to the ubiquitous presence of treatment heterogeneity, measurement error, and contextual confounders, numerous social phenomena are hard to study. Precise control of treatment variables and possible confounders is often key to the success of studies in the social sciences, yet often proves to be beyond the experimenter's control. To amend this situation we propose a novel approach coined "lock-in feedback", which is based on a method routinely used in high-precision physics experiments to extract small signals out of a noisy environment. Here, we adapt the method to noisy social signals in multiple dimensions and evaluate it by studying an inherently noisy topic: the perception of (subjective) beauty. We show that the lock-in feedback approach allows one to select optimal treatment levels despite the presence of considerable noise. Furthermore, through the introduction of an external contextual shock, we demonstrate that we can find relationships between noisy variables that were hitherto unknown. We therefore argue that lock-in methods may provide a valuable addition to the social scientist's experimental toolbox, and we explicitly discuss a number of future applications. PMID:28306728
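The core loop of such a lock-in approach can be sketched as follows: oscillate the treatment around an operating point, demodulate the noisy response against the reference oscillation to estimate the local slope, and step the operating point uphill. The objective function, noise level, and gains below are all assumed for illustration:

```python
import numpy as np

def response(x, rng):
    """Noisy outcome with a single optimum at x = 3 (hypothetical objective)."""
    return -(x - 3.0) ** 2 + 2.0 * rng.standard_normal()

rng = np.random.default_rng(3)
x0, amp = 0.0, 0.5      # current treatment level and oscillation amplitude
T, gain = 50, 0.02      # samples per lock-in block and update gain

for _ in range(200):
    ref = np.sin(2 * np.pi * np.arange(T) / T)   # reference oscillation
    y = np.array([response(x0 + amp * r, rng) for r in ref])
    # Demodulation: correlating y with the reference isolates the component
    # proportional to the local slope dy/dx, despite the large noise
    slope = 2.0 * np.mean(y * ref) / amp
    x0 += gain * slope                           # climb toward the optimum

# x0 ends up close to the optimal treatment level x = 3
```

Noise that is uncorrelated with the reference averages out in the demodulation step, which is why the method can track an optimum through substantial measurement noise.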
Renal cortex segmentation using optimal surface search with novel graph construction.
Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie
2011-01-01
In this paper, we propose a novel approach to the renal cortex segmentation problem, which has rarely been studied. The problem is handled as a multiple-surface extraction problem, solved using the optimal surface search method. We propose a novel graph construction scheme in the optimal surface search to better accommodate multiple surfaces. Different surface sub-graphs are constructed according to their properties, and inter-surface relationships are also modeled in the graph. The proposed method was tested on 17 clinical CT datasets. The true positive volume fraction (TPVF) and false positive volume fraction (FPVF) are 74.10% and 0.08%, respectively. The experimental results demonstrate the effectiveness of the proposed method.
da Silva, Annielle Mendes Brito; Silva-Gonçalves, Laíz Costa; Oliveira, Fernando Augusto; Arcisio-Miranda, Manoel
2018-07-01
Glioblastoma multiforme is the most common and lethal malignant brain tumor. Because of its complexity and heterogeneity, this tumor has become resistant to conventional therapies and the available treatment produces multiple side effects. Here, using multiple experimental approaches, we demonstrate that three mastoparan peptides-Polybia-MP1, Mastoparan X, and HR1-from solitary wasp venom exhibit potent anticancer activity toward human glioblastoma multiforme cells. Importantly, the antiglioblastoma action of mastoparan peptides occurs by membranolytic activity, leading to necrosis. Our data also suggest a direct relation between mastoparan membranolytic potency and the presence of negatively charged phospholipids like phosphatidylserine. Collectively, these data may warrant additional studies for mastoparan peptides as new agents for the treatment of glioblastoma multiforme brain tumor.
Supervised restoration of degraded medical images using multiple-point geostatistics.
Pham, Tuan D
2012-06-01
Reducing noise in medical images has been an important issue of research and development for medical diagnosis, patient treatment, and validation of biomedical hypotheses. Noise inherently exists in medical and biological images due to acquisition and transmission in any imaging device. Unlike image enhancement, image restoration is the process of removing noise from a degraded image in order to recover as much of its original version as possible. This paper presents a statistically supervised approach to medical image restoration using the concept of multiple-point geostatistics. Experimental results have shown the effectiveness of the proposed technique, which has potential as a new methodology for medical and biological image processing. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
High-charge and multiple-star vortex coronagraphy from stacked vector vortex phase masks.
Aleksanyan, Artur; Brasselet, Etienne
2018-02-01
Optical vortex phase masks are now installed at many ground-based large telescopes for high-contrast astronomical imaging. To date, such instrumental advances have been restricted to the use of helical phase masks of the lowest even order, while future giant telescopes will require high-order masks. Here we propose a single-stage on-axis scheme to create high-order vortex coronagraphs based on second-order vortex phase masks. By extending our approach to an off-axis design, we also explore the implementation of multiple-star vortex coronagraphy. An experimental laboratory demonstration is reported and supported by numerical simulations. These results offer a practical roadmap to the development of future coronagraphic tools with enhanced performances.
Behavioral testing strategies in a localized animal model of multiple sclerosis.
Buddeberg, Bigna S; Kerschensteiner, Martin; Merkler, Doron; Stadelmann, Christine; Schwab, Martin E
2004-08-01
To assess neurological impairments quantitatively in an animal model of multiple sclerosis (MS), we have used a targeted model of experimental autoimmune encephalomyelitis (EAE), which leads to the formation of anatomically defined lesions in the spinal cord. Deficits in the hindlimb locomotion are therefore well defined and highly reproducible, in contrast to the situation in generalized EAE with disseminated lesions. Behavioral tests for hindlimb sensorimotor functions, originally established for traumatic spinal cord injury, revealed temporary or persistent deficits in open field locomotion, the grid walk, the narrow beam and the measurement of the foot exorotation angle. Such refined behavioral testing in EAE will be crucial for the analysis of new therapeutic approaches for MS that seek to improve or prevent neurological impairment.
Multiple ionization of neon by soft x-rays at ultrahigh intensity
NASA Astrophysics Data System (ADS)
Guichard, R.; Richter, M.; Rost, J.-M.; Saalmann, U.; Sorokin, A. A.; Tiedtke, K.
2013-08-01
At the free-electron laser FLASH, multiple ionization of neon atoms was quantitatively investigated at photon energies of 93.0 and 90.5 eV. For ion charge states up to 6+, we compare the respective absolute photoionization yields with results from a minimal model and an elaborate description including standard sequential and direct photoionization channels. Both approaches are based on rate equations and take into account a Gaussian spatial intensity distribution of the laser beam. From the comparison we conclude that photoionization up to a charge of 5+ can be described by the minimal model which we interpret as sequential photoionization assisted by electron shake-up processes. For higher charges, the experimental ionization yields systematically exceed the elaborate rate-based prediction.
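The rate-equation picture of sequential photoionization can be sketched with a forward-Euler integration over charge-state populations. The cross sections and photon flux below are arbitrary placeholders, not neon values, and the spatial intensity distribution used in the paper is omitted:

```python
import numpy as np

# Sequential photoionization: dN_q/dt = s_{q-1} F N_{q-1} - s_q F N_q
# s_q: ionization cross sections (assumed values), F: photon flux
s = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.0])  # final state cannot ionize further
F, dt, steps = 2.0, 1e-3, 2000

N = np.zeros(6)
N[0] = 1.0  # all atoms neutral initially
for _ in range(steps):
    rate = s * F * N          # loss rate of each charge state
    N = N - dt * rate         # depletion by ionization
    N[1:] += dt * rate[:-1]   # feeding of the next charge state

# Total population is conserved; higher charge states build up sequentially
```

A full treatment would add direct multi-photon channels and integrate over the Gaussian focal volume, which is exactly the "elaborate description" the abstract compares against the minimal model.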
Prediction of Ionizing Radiation Resistance in Bacteria Using a Multiple Instance Learning Model.
Aridhi, Sabeur; Sghaier, Haïtham; Zoghlami, Manel; Maddouri, Mondher; Nguifo, Engelbert Mephu
2016-01-01
Ionizing-radiation-resistant bacteria (IRRB) are important in biotechnology. In this context, in silico methods of phenotypic prediction and genotype-phenotype relationship discovery are limited. In this work, we analyzed basal DNA repair proteins of most known proteome sequences of IRRB and ionizing-radiation-sensitive bacteria (IRSB) in order to learn a classifier that correctly predicts this bacterial phenotype. We formulated the problem of predicting bacterial ionizing radiation resistance (IRR) as a multiple-instance learning (MIL) problem, and we proposed a novel approach for this purpose. We provide a MIL-based prediction system that classifies a bacterium to either IRRB or IRSB. The experimental results of the proposed system are satisfactory with 91.5% of successful predictions.
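The standard MIL assumption (a bag, here a bacterium's set of protein representations, is positive if at least one instance is) can be sketched with max-pooling over linear instance scores. The instance model, synthetic data, and threshold below are illustrative, not the paper's learned classifier:

```python
import numpy as np

def bag_score(bag, w, bias):
    """MIL rule: a bag is positive if its best-scoring instance is
    positive (max-pooling over linear instance scores)."""
    return float(np.max(bag @ w + bias))

rng = np.random.default_rng(4)
w_true = np.array([1.5, -1.0])   # assumed instance-level model

def make_bag(positive):
    bag = rng.standard_normal((5, 2))   # five instances of two features
    if positive:
        bag[0] = 2.0 * w_true           # plant one clearly positive "witness"
    return bag

bags = [make_bag(i % 2 == 0) for i in range(20)]
labels = [1 if i % 2 == 0 else 0 for i in range(20)]

# Classify each bag by thresholding its best instance score
preds = [1 if bag_score(bg, w_true, -4.0) > 0 else 0 for bg in bags]
acc = float(np.mean([p == y for p, y in zip(preds, labels)]))
```

The bag-level labels (IRRB vs. IRSB) never identify which instance is responsible; the max-pooling rule is what lets a classifier learn from such weakly labeled data.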
A theoretical study of electron multiplication coefficient in a cold-cathode Penning ion generator
NASA Astrophysics Data System (ADS)
Noori, H.; Ranjbar, A. H.; Rahmanipour, R.
2017-11-01
The discharge mechanism of a Penning ion generator (PIG) is strongly influenced by the electron ionization process. A theoretical approach has been proposed to formulate the electron multiplication coefficient, M, of a PIG as a function of the axial magnetic field and the applied voltage. A numerical simulation was used to adjust the free parameters of the expression for M. Using the coefficient M, values of the effective secondary electron emission coefficient, γeff, from 0.09 to 0.22 were obtained. In comparison to the experimental results, the average value of γeff differs from the secondary coefficients of clean and dirty metals by factors of 1.4 and 0.5, respectively.
NASA Astrophysics Data System (ADS)
Liu, Jie; Hu, Youmin; Wang, Yan; Wu, Bo; Fan, Jikai; Hu, Zhongxu
2018-05-01
The diagnosis of complicated fault-severity problems in rotating machinery systems is an important issue affecting the productivity and quality of manufacturing processes and industrial applications. However, it usually suffers from several deficiencies. (1) Considerable prior knowledge and expertise are required not only to extract and select specific features from raw sensor signals, but also to choose a suitable fusion of sensor information. (2) Traditional artificial neural networks with shallow architectures are usually adopted, and they have a limited ability to learn complex and variable operating conditions. In multi-sensor-based diagnosis applications in particular, massive high-dimensional and high-volume raw sensor signals need to be processed. In this paper, an integrated multi-sensor fusion-based deep feature learning (IMSFDFL) approach is developed to identify the fault severity in rotating machinery processes. First, traditional statistics and energy-spectrum features are extracted from multiple sensors with multiple channels and combined. Then, a fused feature vector is constructed from all of the acquisition channels. Further, deep feature learning with stacked auto-encoders is used to obtain the deep features. Finally, the traditional softmax model is applied to identify the fault severity. The effectiveness of the proposed IMSFDFL approach is verified primarily on a one-stage gearbox experimental platform that uses several accelerometers under different operating conditions. This approach can identify fault severity more effectively than the traditional approaches.
NASA Astrophysics Data System (ADS)
Taravat, A.; Del Frate, F.
2013-09-01
As a major aspect of marine pollution, oil released into the sea has serious biological and environmental impacts. Among remote sensing systems (which offer non-destructive investigation methods), synthetic aperture radar (SAR) can provide valuable synoptic information about the position and size of an oil spill, thanks to its wide area coverage and day/night, all-weather capabilities. In this paper we present a new automated method for oil-spill monitoring, based on the combination of a Weibull multiplicative model and machine learning techniques to differentiate between dark spots and the background. First, a filter based on the Weibull multiplicative model is applied to each sub-image. Second, the sub-image is segmented by two different neural network techniques (pulse-coupled neural networks, PCNN, and multilayer perceptron neural networks, MLP). As the last step, a very simple filtering process is used to eliminate false targets. The proposed approaches were tested on 20 ENVISAT and ERS2 images containing dark spots, with the same parameters used in all tests. For the overall dataset, average accuracies of 94.05% and 95.20% were obtained for the PCNN and MLP methods, respectively. The average computational time for dark-spot detection in a 256 × 256 image is about 4 s for PCNN segmentation using IDL software, which is currently the fastest in this field. Our experimental results demonstrate that the proposed approach is fast, robust and effective, and it can be applied to future spaceborne SAR images.
Detection of multiple damages employing best achievable eigenvectors under Bayesian inference
NASA Astrophysics Data System (ADS)
Prajapat, Kanta; Ray-Chaudhuri, Samit
2018-05-01
A novel approach is presented in this work to simultaneously localize multiple damaged elements in a structure and estimate the damage severity of each. For the detection of damaged elements, a best achievable eigenvector based formulation has been derived. To deal with noisy data, Bayesian inference is employed in the formulation, wherein the likelihood of the Bayesian algorithm is formed on the basis of errors between the best achievable eigenvectors and the measured modes. In this approach, the most probable damage locations are evaluated under Bayesian inference by generating combinations of various possible damaged elements. Once damage locations are identified, damage severities are estimated using Bayesian inference with Markov chain Monte Carlo simulation. The efficiency of the proposed approach has been demonstrated through a numerical study involving a 12-story shear building. It has been found from this study that damage scenarios involving as little as 10% loss of stiffness in multiple elements are accurately determined (localized and severities quantified) even when modal data contaminated with 2% noise are utilized. Further, this study introduces a term, parameter impact (evaluated based on the sensitivity of modal parameters to structural parameters), to decide the suitability of selecting a particular mode if some idea about the damaged elements is available. It has been demonstrated here that the accuracy and efficiency of the Bayesian quantification algorithm increase if damage localization is carried out a priori. An experimental study involving a laboratory-scale shear building and different stiffness modification scenarios shows that the proposed approach is efficient enough to localize the stories with stiffness modification.
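The severity-estimation step can be illustrated with a toy Metropolis sampler. The one-parameter forward model, the 2% noise level, and the Gaussian likelihood below are invented stand-ins for the paper's eigenvector-based formulation; this is a sketch of the MCMC idea, not the actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(theta, measured, predict, sigma=0.02):
    """Gaussian log-likelihood on the misfit between predicted and measured modal data."""
    resid = measured - predict(theta)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(measured, predict, n_steps=5000, step=0.05):
    """Random-walk Metropolis over a single damage severity theta in [0, 1]."""
    theta = 0.5
    ll = log_likelihood(theta, measured, predict)
    samples = []
    for _ in range(n_steps):
        prop = float(np.clip(theta + step * rng.standard_normal(), 0.0, 1.0))
        ll_prop = log_likelihood(prop, measured, predict)
        if np.log(rng.random()) < ll_prop - ll:  # accept/reject
            theta, ll = prop, ll_prop
        samples.append(theta)
    return np.array(samples[n_steps // 2:])  # discard burn-in

# Toy forward model: the natural frequencies fall with stiffness loss theta.
predict = lambda theta: np.sqrt(1.0 - theta) * np.ones(3)
measured = predict(0.10) + 0.02 * rng.standard_normal(3)  # 10% loss, 2% noise
samples = metropolis(measured, predict)
print(round(samples.mean(), 2))  # posterior mean near the true 0.10
```

In the paper the sampler runs only over the elements already localized by the best achievable eigenvector step, which is what makes the quantification tractable.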
Peptide-membrane Interactions by Spin-labeling EPR
Smirnova, Tatyana I.; Smirnov, Alex I.
2016-01-01
Site-directed spin labeling (SDSL) in combination with Electron Paramagnetic Resonance (EPR) spectroscopy is a well-established method that has recently grown in popularity as an experimental technique, with multiple applications in protein and peptide science. The growth is driven by the development of labeling strategies, as well as by considerable technical advances in the field, which are paralleled by an increased availability of EPR instrumentation. While the method requires the introduction of a paramagnetic probe at a well-defined position in a peptide sequence, it has been shown to be minimally destructive to the peptide structure and to the energetics of peptide-membrane interactions. In this chapter, we describe basic approaches for using SDSL EPR spectroscopy to study interactions between small peptides and biological membranes or membrane mimetic systems. We focus on experimental approaches to quantify peptide-membrane binding, determine the topology of bound peptides, and characterize peptide aggregation. Sample preparation protocols, including spin-labeling methods and the preparation of membrane mimetic systems, are also described. PMID:26477253
Thermal conductivity of hybrid short fiber composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, M.L.; Taya, M.; Hatta, H.
1993-01-01
A combined analytical/experimental study has been undertaken to investigate the effective thermal conductivity of hybrid composite materials. The analysis utilizes the equivalent inclusion approach for steady state heat conduction (Hatta and Taya, 1986) through which the interaction between the various reinforcing phases at finite concentrations is approximated by the Mori-Tanaka (1973) mean field approach. The multiple reinforcing phases of the composite are modeled as ellipsoidal in shape and thus can simulate a wide range of microstructural geometries ranging from thin platelet to continuous fiber reinforcement. The case when one phase of the composite is penny-shaped microcracks is studied in detail. Multiphase composites consisting of a Kerimid matrix and Al2O3 short fibers and Si3N4 whiskers were fabricated and, after a careful study of their microstructure, their thermal conductivities were measured. Analytical predictions are shown to be in good agreement with experimental results obtained for the Al2O3/Si3N4/Kerimid short fiber composites.
Robinson, Mark D; De Souza, David P; Keen, Woon Wai; Saunders, Eleanor C; McConville, Malcolm J; Speed, Terence P; Likić, Vladimir A
2007-10-29
Gas chromatography-mass spectrometry (GC-MS) is a robust platform for the profiling of certain classes of small molecules in biological samples. When multiple samples are profiled, including replicates of the same sample and/or different sample states, one needs to account for retention time drifts between experiments. This can be achieved either by the alignment of chromatographic profiles prior to peak detection, or by matching signal peaks after they have been extracted from chromatogram data matrices. Automated retention time correction is particularly important in non-targeted profiling studies. A new approach for matching signal peaks based on dynamic programming is presented. The proposed approach relies on both peak retention times and mass spectra. The alignment of more than two peak lists involves three steps: (1) all possible pairs of peak lists are aligned, and the similarity of each pair is estimated; (2) a guide tree is built based on the similarities between peak lists; (3) peak lists are progressively aligned, starting with the two most similar, following the guide tree until all peak lists are exhausted. When two or more experiments are performed on different sample states, each consisting of multiple replicates, peak lists within each set of replicate experiments are aligned first (within-state alignment), and the resulting alignments are subsequently aligned themselves (between-state alignment). When more than two sets of replicate experiments are present, the between-state alignment also employs the guide tree. We demonstrate the usefulness of this approach on GC-MS metabolic profiling experiments acquired on wild-type and mutant Leishmania mexicana parasites. We propose a progressive method to match signal peaks across multiple GC-MS experiments based on dynamic programming. A sensitive peak similarity function is proposed to balance peak retention time and peak mass spectra similarities.
This approach can produce the optimal alignment between an arbitrary number of peak lists, and explicitly models within-state and between-state peak alignment. The accuracy of the proposed method was close to that of manually curated peak matching, which required tens of man-hours for the analyzed data sets. The proposed approach may offer significant advantages for the processing of high-throughput metabolomics data, especially when large numbers of experimental replicates and multiple sample states are analyzed.
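The pairwise step of such progressive alignment can be sketched with a simple dynamic program. The cost function below uses retention-time differences only, whereas the paper's similarity function also weighs mass-spectral similarity; the gap penalty and peak lists are invented for illustration:

```python
import numpy as np

def align_peaks(rt_a, rt_b, gap=1.0):
    """Needleman-Wunsch-style alignment of two retention-time lists.

    Match cost is the absolute retention-time difference, a stand-in for the
    combined RT/spectrum similarity used by the actual method."""
    n, m = len(rt_a), len(rt_b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * gap
    D[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j - 1] + abs(rt_a[i - 1] - rt_b[j - 1]),
                          D[i - 1, j] + gap,        # peak in A unmatched
                          D[i, j - 1] + gap)        # peak in B unmatched
    # Traceback to recover the matched peak pairs.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if np.isclose(D[i, j], D[i - 1, j - 1] + abs(rt_a[i - 1] - rt_b[j - 1])):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif np.isclose(D[i, j], D[i - 1, j] + gap):
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

# Second run drifts by +0.2 min and misses one peak.
print(align_peaks([5.0, 7.1, 9.4, 12.0], [5.2, 7.3, 12.2]))  # → [(0, 0), (1, 1), (3, 2)]
```

Progressive alignment then repeats this pairwise step up the guide tree, treating each intermediate alignment as a single (merged) peak list.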
Mousa-Pasandi, Mohammad E; Zhuge, Qunbi; Xu, Xian; Osman, Mohamed M; El-Sahn, Ziad A; Chagnon, Mathieu; Plant, David V
2012-07-02
We experimentally investigate the performance of a low-complexity, non-iterative phase-noise-induced inter-carrier interference (ICI) compensation algorithm in reduced-guard-interval dual-polarization coherent-optical orthogonal-frequency-division-multiplexing (RGI-DP-CO-OFDM) transport systems. This interpolation-based ICI compensator estimates the time-domain phase noise samples by linear interpolation between the CPE estimates of consecutive OFDM symbols. We experimentally study the performance of this scheme for a 28 Gbaud QPSK RGI-DP-CO-OFDM system employing a low-cost distributed feedback (DFB) laser. Experimental results using a DFB laser with a linewidth of 2.6 MHz demonstrate 24% and 13% improvements in transmission reach with respect to the conventional equalizer (CE) in the presence of weak and strong dispersion-enhanced phase noise (DEPN), respectively. A brief analysis of the computational complexity of this scheme, in terms of the number of required complex multiplications, is provided. This practical approach does not suffer from error propagation while enjoying low computational complexity.
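The interpolation idea itself is simple enough to sketch. The symbol spacing and CPE values below are made up for illustration; the real compensator works on the actual OFDM sample grid and then cancels the resulting ICI terms:

```python
import numpy as np

def interpolate_phase(cpe, samples_per_symbol):
    """Linearly interpolate per-symbol common-phase-error (CPE) estimates to
    per-sample phase-noise values, the core of the non-iterative compensator."""
    symbol_centers = np.arange(len(cpe)) * samples_per_symbol
    sample_times = np.arange((len(cpe) - 1) * samples_per_symbol + 1)
    return np.interp(sample_times, symbol_centers, cpe)

cpe = np.array([0.00, 0.10, 0.06])  # CPE of three consecutive OFDM symbols (rad)
phase = interpolate_phase(cpe, 4)   # 4 samples per symbol (toy value)
print(phase)
```

Because it is a single closed-form interpolation rather than an iterative estimator, the scheme avoids error propagation between symbols, consistent with the claim above.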
NASA Astrophysics Data System (ADS)
Roberts, Andrew; Appleby-Thomas, Gareth; Hazell, Paul
2011-06-01
Following multiple loading events, the resultant shock state of a material will lie away from the principal Hugoniot. Prediction of such states requires knowledge of a material's equation of state. The material-specific variable Grüneisen gamma (Γ) defines the shape of ``off-Hugoniot'' states in energy-volume-pressure space. Experimentally, the shock-reverberation technique (based on the principle of impedance matching) has previously allowed estimation of the first-order Grüneisen gamma term (Γ1) for a silicone elastomer. Here, this approach was employed to calculate Γ1 for two dissimilar materials, polyether ether ketone (PEEK) and the armour-grade aluminium alloy 5083 (H32), thereby allowing discussion of the limitations of this technique in the context of plate-impact experiments employing Manganin stress gauges. Finally, the experimentally determined values for Γ1 were further refined by comparison between experimental records and numerical simulations carried out using the commercial code ANSYS Autodyn®.
Can you sequence ecology? Metagenomics of adaptive diversification.
Marx, Christopher J
2013-01-01
Few areas of science have benefited more from the expansion in sequencing capability than the study of microbial communities. Can sequence data, besides providing hypotheses of the functions the members possess, detect the evolutionary and ecological processes that are occurring? For example, can we determine if a species is adapting to one niche, or if it is diversifying into multiple specialists that inhabit distinct niches? Fortunately, adaptation of populations in the laboratory can serve as a model to test our ability to make such inferences about evolution and ecology from sequencing. Even adaptation to a single niche can give rise to complex temporal dynamics due to the transient presence of multiple competing lineages. If there are multiple niches, this complexity is augmented by segmentation of the population into multiple specialists that can each continue to evolve within their own niche. For a known example of parallel diversification that occurred in the laboratory, sequencing data gave surprisingly few obvious, unambiguous signs of the ecological complexity present. Whereas experimental systems are open to direct experimentation to test hypotheses of selection or ecological interaction, the difficulty in "seeing ecology" from sequencing for even such a simple system suggests translation to communities like the human microbiome will be quite challenging. This will require both improved empirical methods to enhance the depth and time resolution for the relevant polymorphisms and novel statistical approaches to rigorously examine time-series data for signs of various evolutionary and ecological phenomena within and between species.
Computational Models and Emergent Properties of Respiratory Neural Networks
Lindsey, Bruce G.; Rybak, Ilya A.; Smith, Jeffrey C.
2012-01-01
Computational models of the neural control system for breathing in mammals provide a theoretical and computational framework bringing together experimental data obtained from different animal preparations under various experimental conditions. Many of these models were developed in parallel and iteratively with experimental studies and provided predictions guiding new experiments. This data-driven modeling approach has advanced our understanding of respiratory network architecture and neural mechanisms underlying generation of the respiratory rhythm and pattern, including their functional reorganization under different physiological conditions. Models reviewed here vary in neurobiological details and computational complexity and span multiple spatiotemporal scales of respiratory control mechanisms. Recent models describe interacting populations of respiratory neurons spatially distributed within the Bötzinger and pre-Bötzinger complexes and rostral ventrolateral medulla that contain core circuits of the respiratory central pattern generator (CPG). Network interactions within these circuits along with intrinsic rhythmogenic properties of neurons form a hierarchy of multiple rhythm generation mechanisms. The functional expression of these mechanisms is controlled by input drives from other brainstem components, including the retrotrapezoid nucleus and pons, which regulate the dynamic behavior of the core circuitry. The emerging view is that the brainstem respiratory network has rhythmogenic capabilities at multiple levels of circuit organization. This allows flexible, state-dependent expression of different neural pattern-generation mechanisms under various physiological conditions, enabling a wide repertoire of respiratory behaviors. Some models consider control of the respiratory CPG by pulmonary feedback and network reconfiguration during defensive behaviors such as cough. Future directions in modeling of the respiratory CPG are considered. PMID:23687564
Gray, Whitney Austin; Kesten, Karen S; Hurst, Stephen; Day, Tama Duffy; Anderko, Laura
2012-01-01
The aim of this pilot study was to test design interventions such as lighting, color, and spatial color patterning on nurses' stress, alertness, and satisfaction, and to provide an example of how clinical simulation centers can be used to conduct research. The application of evidence-based design research in healthcare settings requires a transdisciplinary approach. Integrating approaches from multiple fields in real-life settings often proves time consuming and experimentally difficult. However, forums for collaboration such as clinical simulation centers may offer a solution. In these settings, identical operating and patient rooms are used to deliver simulated patient care scenarios using automated mannequins. Two identical rooms were modified in the clinical simulation center. Nurses spent 30 minutes in each room performing simulated cardiac resuscitation. Subjective measures of nurses' stress, alertness, and satisfaction were collected and compared between settings and across time using matched-pair t-test analysis. Nurses reported feeling less stressed after exposure to the experimental room than after exposure to the control room (2.22, p = .03). Post-session scores indicated a significant reduction in stress and an increase in alertness after exposure to the experimental room as compared to the control room, with significance levels below .10 (change in stress scores: 3.44, p = .069; change in alertness scores: 3.6, p = .071). This study reinforces the use of validated survey tools to measure stress, alertness, and satisfaction. Results support human-centered design approaches by evaluating the effect on nurses in an experimental setting.
Rue-Albrecht, Kévin; McGettigan, Paul A; Hernández, Belinda; Nalpas, Nicolas C; Magee, David A; Parnell, Andrew C; Gordon, Stephen V; MacHugh, David E
2016-03-11
Identification of gene expression profiles that differentiate experimental groups is critical for discovery and analysis of key molecular pathways and also for selection of robust diagnostic or prognostic biomarkers. While integration of differential expression statistics has been used to refine gene set enrichment analyses, such approaches are typically limited to single gene lists resulting from simple two-group comparisons or time-series analyses. In contrast, functional class scoring and machine learning approaches provide powerful alternative methods to leverage molecular measurements for pathway analyses, and to compare continuous and multi-level categorical factors. We introduce GOexpress, a software package for scoring and summarising the capacity of gene ontology features to simultaneously classify samples from multiple experimental groups. GOexpress integrates normalised gene expression data (e.g., from microarray and RNA-seq experiments) and phenotypic information of individual samples with gene ontology annotations to derive a ranking of genes and gene ontology terms using a supervised learning approach. The default random forest algorithm allows interactions between all experimental factors, and competitive scoring of expressed genes to evaluate their relative importance in classifying predefined groups of samples. GOexpress enables rapid identification and visualisation of ontology-related gene panels that robustly classify groups of samples and supports both categorical (e.g., infection status, treatment) and continuous (e.g., time-series, drug concentrations) experimental factors. The use of standard Bioconductor extension packages and publicly available gene ontology annotations facilitates straightforward integration of GOexpress within existing computational biology pipelines.
The aetiopathogenesis of fatigue: unpredictable, complex and persistent
Clark, James E.; Fai Ng, W.; Watson, Stuart; Newton, Julia L.
2016-01-01
Background Chronic fatigue syndrome is a common condition characterized by severe fatigue with post-exertional malaise, impaired cognitive ability, poor sleep quality, muscle pain, multi-joint pain, tender lymph nodes, sore throat or headache. Its defining symptom, fatigue is common to several diseases. Areas of agreement Research has established a broad picture of impairment across autonomic, endocrine and inflammatory systems though progress seems to have reached an impasse. Areas of controversy The absence of a clear consensus view of the pathophysiology of fatigue suggests the need to switch from a focus on abnormalities in one system to an experimental and clinical approach which integrates findings across multiple systems and their constituent parts and to consider multiple environmental factors. Growing points We discuss this with reference to three key factors, non-determinism, non-reductionism and self-organization and suggest that an approach based on these principles may afford a coherent explanatory framework for much of the observed phenomena in fatigue and offers promising avenues for future research. Areas timely for developing research By adopting this approach, the field can examine issues regarding aetiopathogenesis and treatment, with relevance for future research and clinical practice. PMID:26872857
Zhang, Yihui; Webb, Richard Chad; Luo, Hongying; Xue, Yeguang; Kurniawan, Jonas; Cho, Nam Heon; Krishnan, Siddharth; Li, Yuhang; Huang, Yonggang
2016-01-01
Long-term, continuous measurement of core body temperature is of high interest due to the widespread use of this parameter as a key biomedical signal for clinical judgment and patient management. Traditional approaches rely on devices or instruments in rigid and planar forms, not readily amenable to intimate or conformable integration with the soft, curvilinear, time-dynamic surfaces of the skin. Here, materials and mechanics designs are presented for differential temperature sensors that can attach softly and reversibly onto the skin surface and can also sustain high levels of deformation (e.g., bending, twisting, and stretching). A theoretical approach, together with a modeling algorithm, yields core body temperature from multiple differential measurements from temperature sensors separated by different effective distances from the skin. The sensitivity, accuracy, and response time are analyzed by finite element analysis (FEA) to provide guidelines on the relationships between sensor design and performance. Four sets of experiments on multiple devices with different dimensions and under different convection conditions illustrate the key features of the technology and the analysis approach. Finally, the results indicate that thermally insulating materials with cellular structures offer advantages in reducing the response time and increasing the accuracy, while improving the mechanics and breathability. PMID:25953120
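A minimal version of a differential (dual-heat-flux) core-temperature calculation is sketched below. The calibration ratio K and the temperatures are hypothetical, and the paper's actual algorithm uses FEA-calibrated effective distances rather than this textbook closed form:

```python
def core_temperature(ts1, tt1, ts2, tt2, K=1.0):
    """Dual-heat-flux core-temperature estimate from two sensor stacks.

    In steady state, each stack i obeys
        (T_core - Ts_i) / R_body = (Ts_i - Tt_i) / R_i,
    so T_core = Ts_i + (R_body / R_i) * (Ts_i - Tt_i).
    With the stack-resistance ratio K = R_1 / R_2 known from calibration,
    eliminating the unknown R_body between the two equations gives the
    closed form below."""
    a1, a2 = ts1 - tt1, ts2 - tt2          # skin-to-top temperature drops
    return ts1 + a1 * (ts2 - ts1) / (a1 - K * a2)

# Synthetic check: values generated from T_core = 37.0 with R_body/R_1 = 0.5
# and R_2 = 2 * R_1 (hence K = 0.5).
print(core_temperature(35.0, 31.0, 36.5, 34.5, K=0.5))  # → 37.0
```

The point of the second, differently insulated stack is exactly this elimination: the unknown body tissue resistance drops out, so no invasive calibration against a core probe is needed.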
NASA Astrophysics Data System (ADS)
Cornut, B.; Kedous-Lebouc, A.; Waeckerlé, Th.
1996-07-01
Research on SiFe is an active field that corresponds to the main soft magnetic materials interests of the Laboratoire d'Electrotechnique de Grenoble. Three mutually enriching areas are being explored: metallurgical research towards the production of cube-textured sheets, instrumentation research allowing precise measurements of magnetic properties under extreme conditions, and modeling of vectorial magnetization laws and loss prediction to be included in computer-aided design.
Immune allied genetic algorithm for Bayesian network structure learning
NASA Astrophysics Data System (ADS)
Song, Qin; Lin, Feng; Sun, Wei; Chang, KC
2012-06-01
Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach to enhance the efficiency of BN structure learning. To avoid the premature convergence of a traditional single-population genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) in which multiple populations and an allied strategy are introduced. Moreover, in the algorithm, we apply prior knowledge by injecting an immune operator into individuals, which effectively prevents degeneration. To illustrate the effectiveness of the proposed technique, we present some experimental results.
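A toy version of the multiple-population ("allied") idea, with periodic migration of the best individuals between sub-populations, can be sketched as below. The one-max fitness stands in for a real BN scoring function, and the immune operator and BN-specific encodings are omitted; every parameter value here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

def evolve(pop, fitness):
    """One generation: truncation selection, single-point crossover, bit-flip mutation."""
    fit = np.array([fitness(ind) for ind in pop])
    elite = pop[fit.argsort()[::-1][: len(pop) // 2]]       # keep the best half
    cut = int(rng.integers(1, pop.shape[1]))                 # single crossover point
    children = np.concatenate([elite[:, :cut], elite[::-1, cut:]], axis=1)
    children = children ^ (rng.random(children.shape) < 0.02)  # 2% bit-flip mutation
    return np.concatenate([elite, children])

def allied_ga(fitness, n_bits=12, n_pops=3, pop_size=20, n_gen=60, migrate_every=10):
    """Several sub-populations evolve independently; every few generations each
    sends its best individual to a neighbour, countering premature convergence."""
    pops = [rng.integers(0, 2, (pop_size, n_bits)) for _ in range(n_pops)]
    for gen in range(1, n_gen + 1):
        pops = [evolve(p, fitness) for p in pops]
        if gen % migrate_every == 0:
            bests = [p[max(range(len(p)), key=lambda i: fitness(p[i]))] for p in pops]
            for k, b in enumerate(bests):
                pops[(k + 1) % n_pops][-1] = b               # migrate to neighbour
    return max((ind for p in pops for ind in p), key=fitness)

onemax = lambda ind: int(ind.sum())  # toy fitness standing in for a BN score
best = allied_ga(onemax)
print(onemax(best))
```

In the actual IAGA, individuals would encode candidate DAG structures and the fitness would be a BN scoring metric; the migration and immune steps are what distinguish it from a plain single-population GA.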
Experimental evidence of quantum radiation reaction in aligned crystals.
Wistisen, Tobias N; Di Piazza, Antonino; Knudsen, Helge V; Uggerhøj, Ulrik I
2018-02-23
Quantum radiation reaction is the influence of multiple photon emissions from a charged particle on the particle's dynamics, characterized by a significant energy-momentum loss per emission. Here we report experimental radiation emission spectra from ultrarelativistic positrons in silicon in a regime where quantum radiation reaction effects dominate the positron's dynamics. Our analysis shows that while the widely used quantum approach is overall the best model, it does not completely describe all the data in this regime. Thus, these experimental findings may prompt seeking more generally valid methods to describe quantum radiation reaction. This experiment is a fundamental test of quantum electrodynamics in a regime where the dynamics of charged particles is strongly influenced not only by the external electromagnetic fields but also by the radiation field generated by the charges themselves and where each photon emission may significantly reduce the energy of the charge.
Anomalous Hall effect scaling in ferromagnetic thin films
NASA Astrophysics Data System (ADS)
Grigoryan, Vahram L.; Xiao, Jiang; Wang, Xuhui; Xia, Ke
2017-10-01
We propose a scaling law for anomalous Hall effect in ferromagnetic thin films. Our approach distinguishes multiple scattering sources, namely, bulk impurity, phonon for Hall resistivity, and most importantly the rough surface contribution to longitudinal resistivity. In stark contrast to earlier laws that rely on temperature- and thickness-dependent fitting coefficients, this scaling law fits the recent experimental data excellently with constant parameters that are independent of temperature and film thickness, strongly indicating that this law captures the underlying physical processes. Based on a few data points, this scaling law can even fit all experimental data in full temperature and thickness range. We apply this law to interpret the experimental data for Fe, Co, and Ni and conclude that (i) the phonon-induced skew scattering is unimportant as expected; (ii) contribution from the impurity-induced skew scattering is negative; (iii) the intrinsic (extrinsic) mechanism dominates in Fe (Co), and both the extrinsic and intrinsic contributions are important in Ni.
Examining the Relationships Between Education, Social Networks and Democratic Support With ABM
NASA Technical Reports Server (NTRS)
Drucker, Nick; Campbell, Kenyth
2011-01-01
This paper introduces an agent-based model that explores the relationships between education, social networks, and support for democratic ideals. This study examines two factors that affect democratic support: education and social networks. Current theory concerning these two variables suggests that positive relationships exist between education and democratic support and between social networks and the spread of ideas. The model contains multiple variables of democratic support, two of which are evaluated through experimentation. The model allows individual entities within the system to make "decisions" about their democratic support independent of one another. The agent-based approach also allows entities to utilize their social networks to spread ideas. Current theory supports the experimentation results. In addition, these results show the model is capable of reproducing real-world outcomes. This paper addresses the model creation process and the experimentation procedure, as well as future research avenues and potential shortcomings of the model.
A complex network approach for nanoparticle agglomeration analysis in nanoscale images
NASA Astrophysics Data System (ADS)
Machado, Bruno Brandoli; Scabini, Leonardo Felipe; Margarido Orue, Jonatan Patrick; de Arruda, Mauro Santos; Goncalves, Diogo Nunes; Goncalves, Wesley Nunes; Moreira, Raphaell; Rodrigues-Jr, Jose F.
2017-02-01
Complex networks have been widely used in science and technology because of their ability to represent several systems. One of these systems is found in Biochemistry, in which the synthesis of new nanoparticles is a hot topic. However, the interpretation of experimental results in the search for new nanoparticles poses several challenges. This is due to the characteristics of nanoparticle images and to their multiple intricate properties; one property of recurrent interest is the agglomeration of particles. Addressing this issue, this paper introduces an approach that uses complex networks to detect and describe nanoparticle agglomerates so as to foster easier and more insightful analyses. In this approach, each detected particle in an image corresponds to a vertex, and the distances between particles define a criterion for creating edges: an edge is created if the distance is smaller than a radius of interest. Once this network is set, we calculate several discrete measures able to reveal the most outstanding agglomerates in a nanoparticle image. Experimental results using scanning tunneling microscopy (STM) images of gold nanoparticles demonstrated the effectiveness of the proposed approach over several samples, as reflected by the separability between particles in three usual settings. The results also demonstrated efficacy for both convex and non-convex agglomerates.
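The network-construction rule (edge whenever the inter-particle distance falls below a radius of interest) and the extraction of agglomerates as connected components can be sketched as follows; the coordinates and radius are invented, and the paper's further network measures are not reproduced here:

```python
import numpy as np

def agglomerates(positions, radius):
    """Build the particle network (edge whenever two particles are closer than
    `radius`) and return its connected components, i.e. candidate agglomerates."""
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    dist = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    adj = (dist < radius) & ~np.eye(n, dtype=bool)  # adjacency, no self-loops
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:  # depth-first search over the particle network
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.append(v)
            stack.extend(int(u) for u in np.flatnonzero(adj[v]))
        components.append(sorted(comp))
    return components

# Two agglomerates and one isolated particle (toy 2-D coordinates).
pts = [(0, 0), (1, 0), (0.5, 0.8), (10, 10), (10.6, 10), (20, 20)]
print(agglomerates(pts, radius=1.5))  # → [[0, 1, 2], [3, 4], [5]]
```

Once components are in hand, per-component degree, size, and density statistics are the kind of discrete measures that can rank the "most outstanding" agglomerates.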
Experimental Design for Multi-drug Combination Studies Using Signaling Networks
Huang, Hengzhen; Fang, Hong-Bin; Tan, Ming T.
2017-01-01
Combinations of multiple drugs are an important approach to maximize the chance of therapeutic success by inhibiting multiple pathways/targets. Analytic methods for studying drug combinations have received increasing attention because major advances in biomedical research have made available a large number of potential agents for testing. The preclinical experiment on multi-drug combinations plays a key role in (especially cancer) drug development because of the complex nature of the disease and the need to reduce development time and costs. Despite recent progress in statistical methods for assessing drug interaction, there is an acute lack of methods for designing experiments on multi-drug combinations. The number of combinations grows exponentially with the number of drugs and dose levels, quickly precluding laboratory testing. Utilizing experimental dose-response data of single drugs and a few combinations, along with pathway/network information, to obtain an in silico estimate of the functional structure of the dose-response relationship, we propose in this paper an optimal design that allows exploration of the dose-effect surface with the smallest possible sample size. The simulation studies show that our proposed methods perform well. PMID:28960231
NASA Technical Reports Server (NTRS)
McGill, Matthew; Markus, Thorsten; Scott, V. Stanley; Neumann, Thomas
2012-01-01
The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) mission is currently under development by NASA. The primary mission of ICESat-2 will be to measure elevation changes of the Greenland and Antarctic ice sheets, document changes in sea ice thickness distribution, and derive important information about the current state of the global ice coverage. To make this important measurement, NASA is implementing a new type of satellite-based surface altimetry based on sensing of laser pulses transmitted to, and reflected from, the surface. Because the ICESat-2 measurement approach is different from that used for previous altimeter missions, a high-fidelity aircraft instrument, the Multiple Altimeter Beam Experimental Lidar (MABEL), was developed to demonstrate the measurement concept and provide verification of the ICESat-2 methodology. The MABEL instrument will serve as a prototype for the ICESat-2 mission and also provides a science tool for studies of land surface topography. This paper outlines the science objectives for the ICESat-2 mission, the current measurement concept for ICESat-2, and the instrument concept and preliminary data from MABEL.
NASA Astrophysics Data System (ADS)
Nelson, J. L.; Chaplin-Kramer, R.; Ziv, G.; Wolny, S.; Vogl, A. L.; Tallis, H.; Bremer, L.
2013-12-01
The risk of water scarcity is a rising threat in a rapidly changing world. Communities and investors are using the new institution of water funds to enact conservation practices in watersheds to bolster a clean, predictable water supply for multiple stakeholders. Water funds finance conservation activities to support water-related ecosystem services, and here we describe our work developing innovative approaches to the experimental design of monitoring programs that track the effectiveness of water funds throughout Latin America. We highlight two examples: the Fund for the Protection of Water (FONAG), in Quito, Ecuador, and Water for Life (Agua por la Vida), in Cali, Colombia. Our approach is meant to test whether (a) water funds' restoration and protection actions result in changes in water quality and/or quantity at the site scale and the subwatershed scale, and (b) the suite of investments for the whole water fund reaches established goals for improving water quality and/or quantity at the basin scale or point of use. Our goal is to create monitoring standards for ecosystem-service assessment and to clearly demonstrate translating those standards to field implementation in a statistically robust and cost-effective way. In the gap between data-intensive methods requiring historic, long-term water sampling and more subjective, ad hoc assessments, we have created a quantitative, land-cover-based approach to pairing conservation activity with appropriate controls in order to determine the impact of water-fund actions. To do so, we use a statistical approach in combination with open-source tools developed by the Natural Capital Project to optimize water funds' investments in nature and assess ecosystem-service provision (Resource Investment Optimization System, RIOS, and InVEST).
We report on the process of identifying micro-, subwatershed or watershed matches to serve as controls for conservation 'impact' sites, based on globally available land cover, precipitation, and soil data, without available water data. In two watersheds within the 'Water for Life' fund in Colombia, we used maps of nine biophysical inputs to RIOS to rank sites by their similarity to impact sites for sediment retention, and then identified the top impact/control microwatershed pairs based on averaged two-sample Kolmogorov-Smirnov statistics for each input. In FONAG, Ecuador, we used the approach to identify appropriate control sites for designated restoration sites. Our approach can be used at multiple scales, whether the conservation 'treatments' are assigned (a quasi-experimental approach) or both impact and control sites are identified in a fully experimental design. Our results highlight the need for innovative analytic methods to improve monitoring design in data-scarce regions.
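A minimal sketch of matching control to impact sites by averaged two-sample Kolmogorov-Smirnov statistics follows; the input layers ("precip", "slope"), site names, and distributions are invented stand-ins for the nine RIOS biophysical inputs:

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between ECDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def rank_controls(impact, candidates):
    """Rank candidate control sites by the KS statistic averaged over all
    biophysical input layers (lower = better match to the impact site)."""
    scores = {name: np.mean([ks_stat(impact[k], cand[k]) for k in impact])
              for name, cand in candidates.items()}
    return sorted(scores, key=scores.get)

rng = np.random.default_rng(3)
impact = {"precip": rng.normal(1200, 100, 50), "slope": rng.normal(15, 3, 50)}
candidates = {
    "A": {"precip": rng.normal(1210, 100, 50), "slope": rng.normal(15, 3, 50)},
    "B": {"precip": rng.normal(800, 100, 50), "slope": rng.normal(30, 3, 50)},
}
print(rank_controls(impact, candidates))  # closest match first
```

Averaging the KS statistic across layers is one simple way to combine multivariate similarity without water data, which matches the data-scarce setting described above.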
A sparse representation-based approach for copy-move image forgery detection in smooth regions
NASA Astrophysics Data System (ADS)
Abdessamad, Jalila; ElAdel, Asma; Zaied, Mourad
2017-03-01
Copy-move image forgery is the act of cloning a restricted region of an image and pasting it once or multiple times within that same image. The operation typically aims to conceal a certain feature, such as a person or an object, in the processed image, or to emphasize it through duplication. The consequences of this malicious operation can be unexpectedly harmful. Hence, the present paper proposes a new approach that automatically detects copy-move forgery (CMF). In particular, this work addresses a common open issue in the CMF research literature: detecting CMF within smooth areas. The proposed approach represents image blocks as sparse linear combinations of pre-learned bases (a mixture of texture- and color-wise small patches), which allows a robust description of smooth patches. The reported experimental results demonstrate the effectiveness of the proposed approach in identifying the forged regions in CM attacks.
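The block-based sparse-coding idea can be roughly illustrated as follows: encode each image block as a sparse combination of dictionary atoms, then flag block pairs whose codes nearly coincide as copy-move candidates. This is a sketch only; the dictionary here is random rather than the learned texture/color bases of the paper, and the "image" is synthetic:

```python
import numpy as np

def omp(D, y, k=5):
    """Greedy orthogonal matching pursuit: sparse code of y over dictionary D."""
    residual, idx = y.copy(), []
    code = np.zeros(D.shape[1])
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))  # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef                      # project out chosen atoms
    code[idx] = coef
    return code

def find_copy_move(blocks, D, tol=1e-6):
    """Return index pairs of blocks with (near-)identical sparse codes."""
    codes = [omp(D, b) for b in blocks]
    pairs = []
    for i in range(len(codes)):
        for j in range(i + 1, len(codes)):
            if np.linalg.norm(codes[i] - codes[j]) < tol:
                pairs.append((i, j))
    return pairs

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
blocks = [rng.normal(size=64) for _ in range(5)]
blocks.append(blocks[2].copy())         # simulate a cloned 8x8 block
print(find_copy_move(blocks, D))        # → [(2, 5)]
```

Identical blocks produce identical codes under the deterministic encoder, so the cloned pair is flagged; real systems add tolerance for compression noise and geometric post-processing.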
NASA Astrophysics Data System (ADS)
Baumgart, M.; Druml, N.; Consani, M.
2018-05-01
This paper presents a simulation approach for Time-of-Flight (ToF) cameras to estimate sensor performance and accuracy, as well as to help explain experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We take a raytracing-based approach, using the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray paths, translucent objects, and aberration effects (e.g., distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
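The optical-path-length-to-depth step can be illustrated with the standard four-phase continuous-wave ToF relation. This is a generic sketch, not the paper's Zemax/Python pipeline; the modulation frequency and phase-offset sign conventions are assumptions (conventions vary across sensor vendors):

```python
import numpy as np

C = 299_792_458.0      # speed of light, m/s
F_MOD = 20e6           # modulation frequency, Hz (an assumed value)

def simulate_phase_samples(opl_m, amplitude=1.0, offset=2.0):
    """Four phase-stepped correlation samples for a return whose total
    (out-and-back) optical path length is opl_m, geometrical-optics model."""
    phi = 2 * np.pi * F_MOD * opl_m / C            # phase delay of the echo
    steps = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
    return offset + amplitude * np.cos(phi - steps)

def depth_from_samples(a0, a1, a2, a3):
    """Standard 4-phase CW-ToF reconstruction (sign conventions vary)."""
    phi = np.arctan2(a1 - a3, a0 - a2) % (2 * np.pi)
    return C * phi / (4 * np.pi * F_MOD)           # /2 converts round trip to depth

true_depth = 3.5                                   # metres
samples = simulate_phase_samples(2 * true_depth)   # OPL is twice the depth
print(depth_from_samples(*samples))                # ~3.5
```

Because depth is derived from phase, the unambiguous range here is c/(2·F_MOD) ≈ 7.5 m; multi-path ray contributions of the kind the paper simulates would superimpose additional phasors on these four samples and bias the recovered depth.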
Classification-Based Spatial Error Concealment for Visual Communications
NASA Astrophysics Data System (ADS)
Chen, Meng; Zheng, Yefeng; Wu, Min
2006-12-01
In an error-prone transmission environment, error concealment is an effective technique to reconstruct damaged visual content. Due to large variations in image characteristics, different concealment approaches are necessary to accommodate the differing nature of the lost image content. In this paper, we address this issue and propose using classification to integrate state-of-the-art error concealment techniques. The proposed approach takes advantage of multiple concealment algorithms and adaptively selects the suitable algorithm for each damaged image area. With growing awareness that the design of sender and receiver systems should be jointly considered for efficient and reliable multimedia communications, we propose a set of classification-based block concealment schemes, including receiver-side classification, sender-side attachment, and sender-side embedding. Our experimental results provide extensive performance comparisons and demonstrate that the proposed classification-based error concealment approaches outperform conventional approaches.
Feature and Region Selection for Visual Learning.
Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando
2016-03-01
Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., a support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results on the PASCAL VOC 2007, MSR Action Dataset II, and YouTube datasets illustrate the benefits of our approach.
Tarpey, Michael D; Amorese, Adam J; Balestrieri, Nicholas P; Ryan, Terence E; Schmidt, Cameron A; McClung, Joseph M; Spangenburg, Espen E
2018-04-17
The ability to assess skeletal muscle function and delineate regulatory mechanisms is essential to uncovering therapeutic approaches that preserve functional independence in a disease state. Skeletal muscle provides distinct experimental challenges due to inherent differences across muscle groups, including fiber type and size, which may limit experimental approaches. The flexor digitorum brevis (FDB) possesses numerous properties that offer the investigator a high degree of experimental flexibility to address specific hypotheses. To date, surprisingly few studies have taken advantage of the FDB to investigate mechanisms regulating skeletal muscle function. The purpose of this study was to characterize and experimentally demonstrate the value of the FDB muscle for scientific investigations. First, we characterized the FDB phenotype and provided reference comparisons to skeletal muscles commonly used in the field. We developed approaches allowing for experimental assessment of force production, in vitro and in vivo microscopy, and mitochondrial respiration to demonstrate the versatility of the FDB. As proof of principle, we performed experiments to alter force production or mitochondrial respiration to validate the flexibility the FDB affords the investigator. The FDB is made up of small, predominantly type IIa and IIx fibers that collectively produce less peak isometric force than the extensor digitorum longus (EDL) or soleus muscles, but it demonstrates greater fatigue resistance than the EDL. Unlike the other muscles, inherent properties of the FDB muscle make it amenable to multiple in vitro- and in vivo-based microscopy methods. Due to its anatomical location, the FDB can be used in cardiotoxin-induced muscle injury protocols and is amenable to electroporation of cDNA with a high degree of efficiency, allowing for an effective means of genetic manipulation.
Using a novel approach, we also demonstrate methods for assessing mitochondrial respiration in the FDB, which are comparable to the commonly used gastrocnemius muscle. As proof of principle, short-term overexpression of Pgc1α in the FDB increased mitochondrial respiration rates. The results highlight the experimental flexibility afforded the investigator by using the FDB muscle to assess mechanisms that regulate skeletal muscle function.
Helwig, Nathaniel E; Shorter, K Alex; Ma, Ping; Hsiao-Wecksler, Elizabeth T
2016-10-03
Cyclic biomechanical data are commonplace in orthopedic, rehabilitation, and sports research, where the goal is to understand and compare biomechanical differences between experimental conditions and/or subject populations. A common approach to analyzing cyclic biomechanical data involves averaging the biomechanical signals across cycle replications, and then comparing mean differences at specific points of the cycle. This pointwise analysis approach ignores the functional nature of the data, which can hinder one's ability to find subtle differences between experimental conditions and/or subject populations. To overcome this limitation, we propose using mixed-effects smoothing spline analysis of variance (SSANOVA) to analyze differences in cyclic biomechanical data. The SSANOVA framework makes it possible to decompose the estimated function into the portion that is common across groups (i.e., the average cycle, AC) and the portion that differs across groups (i.e., the contrast cycle, CC). By partitioning the signal in such a manner, we can obtain estimates of the CC differences (CCDs), which are the functions directly describing group differences in the cyclic biomechanical data. Using both simulated and experimental data, we illustrate the benefits of using SSANOVA models to analyze differences in noisy biomechanical (gait) signals collected from multiple locations (joints) of subjects participating in different experimental conditions. Using Bayesian confidence intervals, the SSANOVA results can be used in clinical and research settings to reliably quantify biomechanical differences between experimental conditions and/or subject populations. Copyright © 2016 Elsevier Ltd. All rights reserved.
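The AC/CC decomposition itself is simple to illustrate with synthetic gait-like curves. This toy sketch uses pointwise means in place of the mixed-effects smoothing spline fit the authors actually use, so it conveys only the partitioning idea:

```python
import numpy as np

def ac_cc_decompose(group_curves):
    """Split per-group mean cycles into the average cycle (AC), shared
    across groups, and per-group contrast cycles (CC).

    group_curves: dict mapping group name -> 1-D array sampled over the
    normalized cycle.  Each group curve equals AC + its CC by construction.
    """
    stacked = np.stack(list(group_curves.values()))
    ac = stacked.mean(axis=0)                        # average cycle
    cc = {g: curve - ac for g, curve in group_curves.items()}
    return ac, cc

t = np.linspace(0, 1, 101)                           # normalized gait cycle
curves = {"baseline": np.sin(2 * np.pi * t),
          "orthosis": np.sin(2 * np.pi * t) + 0.2 * np.cos(2 * np.pi * t)}
ac, cc = ac_cc_decompose(curves)
ccd = cc["orthosis"] - cc["baseline"]                # contrast-cycle difference
print(float(np.max(np.abs(ccd))))                    # peak group difference, 0.2
```

The CCD recovers exactly the 0.2·cos term separating the two conditions, which is the quantity a pointwise t-test at a few hand-picked cycle percentages could easily miss.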
Reconstituting protein interaction networks using parameter-dependent domain-domain interactions
2013-01-01
Background: We can describe protein-protein interactions (PPIs) as sets of distinct domain-domain interactions (DDIs) that mediate the physical interactions between proteins. Experimental data confirm that DDIs are more consistent than their corresponding PPIs, lending support to the notion that analyses of DDIs may improve our understanding of PPIs and lead to further insights into cellular function, disease, and evolution. However, currently available experimental DDI data cover only a small fraction of all existing PPIs and, in the absence of structural data, determining which particular DDI mediates any given PPI is a challenge.
Results: We present two contributions to the field of domain interaction analysis. First, we introduce a novel computational strategy to merge domain annotation data from multiple databases. We show that when we merged yeast domain annotations from six annotation databases we increased the average number of domains per protein from 1.05 to 2.44, bringing it closer to the estimated average value of 3. Second, we introduce a novel computational method, parameter-dependent DDI selection (PADDS), which, given a set of PPIs, extracts a small set of domain pairs that can reconstruct the original set of protein interactions, while attempting to minimize false positives. Based on a set of PPIs from multiple organisms, our method extracted 27% more experimentally detected DDIs than existing computational approaches.
Conclusions: We have provided a method to merge domain annotation data from multiple sources, ensuring large and consistent domain annotation for any given organism. Moreover, we provided a method to extract a small set of DDIs from the underlying set of PPIs and we showed that, in contrast to existing approaches, our method was not biased towards DDIs with low or high occurrence counts. Finally, we used these two methods to highlight the influence of the underlying annotation density on the characteristics of extracted DDIs.
Although increased annotations greatly expanded the possible DDIs, the lack of knowledge of the true biological false positive interactions still prevents an unambiguous assignment of domain interactions responsible for all protein network interactions. Executable files and examples are given at: http://www.bhsai.org/downloads/padds/ PMID:23651452
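In the same spirit as PADDS, though far simpler than the actual parameter-dependent method, selecting a compact explanatory set of domain pairs can be framed as a greedy set cover over the PPIs. All protein and domain names below are invented for illustration:

```python
from itertools import product

def greedy_ddi_selection(ppis, domains):
    """Pick a small set of domain pairs that covers all PPIs.

    ppis:    set of (proteinA, proteinB) interaction tuples
    domains: dict mapping protein -> set of domain annotations
    Greedy set cover: repeatedly take the domain pair that explains the
    most still-uncovered interactions.
    """
    uncovered, selected = set(ppis), []
    while uncovered:
        best, best_cov = None, set()
        for a, b in uncovered:
            for dp in product(domains[a], domains[b]):
                dp = tuple(sorted(dp))               # unordered domain pair
                cov = {(x, y) for (x, y) in uncovered
                       if (dp[0] in domains[x] and dp[1] in domains[y])
                       or (dp[1] in domains[x] and dp[0] in domains[y])}
                if len(cov) > len(best_cov):
                    best, best_cov = dp, cov
        selected.append(best)
        uncovered -= best_cov
    return selected

domains = {"P1": {"SH3"}, "P2": {"PRM"}, "P3": {"SH3", "kin"}, "P4": {"PRM"}}
ppis = {("P1", "P2"), ("P3", "P4"), ("P1", "P4")}
print(greedy_ddi_selection(ppis, domains))  # → [('PRM', 'SH3')]
```

A single SH3-PRM pair explains all three toy interactions; note that a pure frequency-greedy criterion like this one exhibits exactly the occurrence-count bias the paper's method is designed to avoid.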
Adaptive coded aperture imaging in the infrared: towards a practical implementation
NASA Astrophysics Data System (ADS)
Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley
2008-08-01
An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.
Detection Identification and Quantification of Keto-Hydroperoxides in Low-Temperature Oxidation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Nils; Moshammer, Kai; Jasper, Ahren W.
2017-07-01
Keto-hydroperoxides are reactive, partially oxidized intermediates that play a central role in chain-branching reactions during the low-temperature oxidation of hydrocarbons. In this Perspective, we outline how these short-lived species can be detected, identified, and quantified using integrated experimental and theoretical approaches. The procedures are based on direct molecular-beam sampling from reactive environments, followed by mass spectrometry with single-photon ionization, identification of fragmentation patterns, and theoretical calculations of ionization thresholds, fragment appearance energies, and photoionization cross sections. Using the oxidation of neo-pentane and tetrahydrofuran as examples, the individual steps of the experimental approaches are described in depth together with a detailed description of the theoretical efforts. For neo-pentane, the experimental data are consistent with the calculated ionization and fragment appearance energies of the keto-hydroperoxide, thus adding confidence to the analysis routines and the employed levels of theory. For tetrahydrofuran, multiple keto-hydroperoxide isomers are possible due to the presence of nonequivalent O2 addition sites. Despite this additional complexity, the experimental data allow for the identification of two to four keto-hydroperoxides. Mole fraction profiles of the keto-hydroperoxides, which are quantified using calculated photoionization cross sections, are provided together with estimated uncertainties as a function of the temperature of the reactive mixture and can serve as validation targets for chemically detailed mechanisms.
Fusion of multichannel local and global structural cues for photo aesthetics evaluation.
Luming Zhang; Yue Gao; Zimmermann, Roger; Qi Tian; Xuelong Li
2014-03-01
Photo aesthetic quality evaluation is a fundamental yet underaddressed task in the computer vision and image processing fields. Conventional approaches are frustrated by the following two drawbacks. First, both the local and global spatial arrangements of image regions play an important role in photo aesthetics. However, existing rules, e.g., visual balance, heuristically define which spatial distribution among the salient regions of a photo is aesthetically pleasing. Second, it is difficult to adjust visual cues from multiple channels automatically in photo aesthetics assessment. To solve these problems, we propose a new photo aesthetics evaluation framework, focusing on learning the image descriptors that characterize local and global structural aesthetics from multiple visual channels. In particular, to describe the spatial structure of the image's local regions, we construct graphlets (small-sized connected graphs) by connecting spatially adjacent atomic regions. Since spatially adjacent graphlets distribute closely in their feature space, we project them onto a manifold and subsequently propose an embedding algorithm. The embedding algorithm encodes the photo's global spatial layout into graphlets. Simultaneously, the importance of graphlets from multiple visual channels is dynamically adjusted. Finally, these post-embedding graphlets are integrated for photo aesthetics evaluation using a probabilistic model. Experimental results show that: 1) the visualized graphlets explicitly capture the aesthetically arranged atomic regions; 2) the proposed approach generalizes and improves four prominent aesthetic rules; and 3) our approach significantly outperforms state-of-the-art algorithms in photo aesthetics prediction.
Everyday stress response targets in the science of behavior change.
Smyth, Joshua M; Sliwinski, Martin J; Zawadzki, Matthew J; Scott, Stacey B; Conroy, David E; Lanza, Stephanie T; Marcusson-Clavertz, David; Kim, Jinhyuk; Stawski, Robert S; Stoney, Catherine M; Buxton, Orfeu M; Sciamanna, Christopher N; Green, Paige M; Almeida, David M
2018-02-01
Stress is an established risk factor for negative health outcomes, and responses to everyday stress can interfere with health behaviors such as exercise and sleep. In accordance with the Science of Behavior Change (SOBC) program, we apply an experimental medicine approach to identifying stress response targets, developing stress response assays, intervening upon these targets, and testing intervention effectiveness. We evaluate an ecologically valid, within-person approach to measuring the deleterious effects of everyday stress on physical activity and sleep patterns, examining multiple stress response components (i.e., stress reactivity, stress recovery, and stress pile-up) as indexed by two key response indicators (negative affect and perseverative cognition). Our everyday stress response assay thus measures multiple malleable stress response targets that putatively shape daily health behaviors (physical activity and sleep). We hypothesize that larger reactivity, incomplete recovery, and more frequent stress responses (pile-up) will negatively impact health behavior enactment in daily life. We will identify stress-related reactivity, recovery, and response in the indicators using coordinated analyses across multiple naturalistic studies. These results are the basis for developing a new stress assay and replicating the initial findings in a new sample. This approach will advance our understanding of how specific aspects of everyday stress responses influence health behaviors, and can be used to develop and test an innovative ambulatory intervention for stress reduction in daily life to enhance health behaviors. Copyright © 2017 Elsevier Ltd. All rights reserved.
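One plausible way to operationalize the three response components (reactivity, recovery, pile-up) from one participant's momentary prompts is sketched below. This is a toy illustration only; the study's actual assay definitions, sampling scheme, and indicators are its own:

```python
import numpy as np

def stress_response_indices(stressor, affect):
    """Toy within-person stress-response indices from momentary data.

    stressor: per-prompt flags, was a stressor reported (0/1)
    affect:   matched negative-affect ratings
    reactivity: mean affect at stressor prompts minus non-stressor prompts
    recovery:   mean elevation of affect (vs. the non-stressor baseline) at
                the first stressor-free prompt after a stressor; values > 0
                suggest incomplete recovery
    pile_up:    number of stressor prompts
    """
    stressor = np.asarray(stressor, dtype=bool)
    affect = np.asarray(affect, dtype=float)
    baseline = affect[~stressor].mean()
    reactivity = affect[stressor].mean() - baseline
    post = [affect[i + 1] - baseline
            for i in range(len(affect) - 1) if stressor[i] and not stressor[i + 1]]
    recovery = float(np.mean(post)) if post else 0.0
    pile_up = int(stressor.sum())
    return reactivity, recovery, pile_up

stressor = [0, 1, 0, 1, 1, 0]
affect   = [1.0, 3.0, 1.5, 3.5, 3.0, 2.5]
print(stress_response_indices(stressor, affect))  # reactivity 1.5, recovery ~0.33, pile-up 3
```

Computed within person and then aggregated across days, indices like these are the kind of malleable targets the assay is meant to capture.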
Kovalchuk, Sergey V; Funkner, Anastasia A; Metsker, Oleg G; Yakovlev, Aleksey N
2018-06-01
An approach to building a hybrid simulation of patient flow is introduced with a combination of data-driven methods for automation of model identification. The approach is described with a conceptual framework and basic methods for combination of different techniques. The implementation of the proposed approach for simulation of the acute coronary syndrome (ACS) was developed and used in an experimental study. A combination of data, text, process mining techniques, and machine learning approaches for the analysis of electronic health records (EHRs) with discrete-event simulation (DES) and queueing theory for the simulation of patient flow was proposed. The performed analysis of EHRs for ACS patients enabled identification of several classes of clinical pathways (CPs) which were used to implement a more realistic simulation of the patient flow. The developed solution was implemented using Python libraries (SimPy, SciPy, and others). The proposed approach enables a more realistic and detailed simulation of the patient flow within a group of related departments. An experimental study shows an improved simulation of patient length of stay for ACS patient flow obtained from EHRs in Almazov National Medical Research Centre in Saint Petersburg, Russia. The proposed approach, methods, and solutions provide a conceptual, methodological, and programming framework for the implementation of a simulation of complex and diverse scenarios within a flow of patients for different purposes: decision making, training, management optimization, and others. Copyright © 2018 Elsevier Inc. All rights reserved.
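A stripped-down discrete-event sketch of the patient-flow idea is shown below. It uses only the standard library for portability (the study itself used SimPy), and the pathway classes, arrival rate, and lengths of stay are invented, not mined from EHRs:

```python
import random

def simulate_patient_flow(n_patients, arrival_rate, los_by_path, path_probs, seed=42):
    """Minimal discrete-event sketch of an ACS-style patient flow.

    Patients arrive as a Poisson stream, are assigned a clinical-pathway
    class, and queue for a single shared bed; we record each patient's
    total stay (waiting + treatment), in days.
    """
    rng = random.Random(seed)
    t, free_at, stays = 0.0, 0.0, []
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)           # next arrival time
        path = rng.choices(list(los_by_path), weights=path_probs)[0]
        start = max(t, free_at)                      # wait if the bed is occupied
        los = rng.expovariate(1.0 / los_by_path[path])
        free_at = start + los                        # bed frees up after treatment
        stays.append(free_at - t)
    return stays

stays = simulate_patient_flow(
    1000, arrival_rate=0.1,                          # ~1 arrival per 10 days
    los_by_path={"uncomplicated": 5.0, "complicated": 12.0},
    path_probs=[0.7, 0.3])
print(sum(stays) / len(stays))                       # mean length of stay, days
```

Replacing the hard-coded pathway classes and exponential service times with distributions mined from EHRs is precisely the data-driven identification step the paper automates.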
Studying Cellular Signal Transduction with OMIC Technologies.
Landry, Benjamin D; Clarke, David C; Lee, Michael J
2015-10-23
In the gulf between genotype and phenotype exist proteins and, in particular, protein signal transduction systems. These systems use a relatively limited parts list to respond to a much longer list of extracellular, environmental, and/or mechanical cues with rapidity and specificity. Most signaling networks function in a highly non-linear and often contextual manner. Furthermore, these processes occur dynamically across space and time. Because of these complexities, systems and "OMIC" approaches are essential for the study of signal transduction. One challenge in using OMIC-scale approaches to study signaling is that the "signal" can take different forms in different situations. Signals are encoded in diverse ways such as protein-protein interactions, enzyme activities, localizations, or post-translational modifications to proteins. Furthermore, in some cases, signals may be encoded only in the dynamics, duration, or rates of change of these features. Accordingly, systems-level analyses of signaling may need to integrate multiple experimental and/or computational approaches. As the field has progressed, the non-triviality of integrating experimental and computational analyses has become apparent. Successful use of OMIC methods to study signaling will require the "right" experiments and the "right" modeling approaches, and it is critical to consider both in the design phase of the project. In this review, we discuss common OMIC and modeling approaches for studying signaling, emphasizing the philosophical and practical considerations for effectively merging these two types of approaches to maximize the probability of obtaining reliable and novel insights into signaling biology. Copyright © 2015 Elsevier Ltd. All rights reserved.
2016-06-02
Retrieval of droplet-size density distribution from multiple-field-of-view cross-polarized lidar signals: theory and experimental validation. ... Theoretical and experimental studies of multiple scattering and multiple-field-of-view (MFOV) lidar detection have made possible the retrieval of cloud ... droplet cloud are typical of Rayleigh scattering, with a signature close to a dipole (phase function quasi-flat and a zero-depolarization ratio)
Optimizing methods and dodging pitfalls in microbiome research.
Kim, Dorothy; Hofstaedter, Casey E; Zhao, Chunyu; Mattei, Lisa; Tanes, Ceylan; Clarke, Erik; Lauder, Abigail; Sherrill-Mix, Scott; Chehoud, Christel; Kelsen, Judith; Conrad, Máire; Collman, Ronald G; Baldassano, Robert; Bushman, Frederic D; Bittinger, Kyle
2017-05-05
Research on the human microbiome has yielded numerous insights into health and disease, but also has resulted in a wealth of experimental artifacts. Here, we present suggestions for optimizing experimental design and avoiding known pitfalls, organized in the typical order in which studies are carried out. We first review best practices in experimental design and introduce common confounders such as age, diet, antibiotic use, pet ownership, longitudinal instability, and microbial sharing during cohousing in animal studies. Typically, samples will need to be stored, so we provide data on best practices for several sample types. We then discuss design and analysis of positive and negative controls, which should always be run with experimental samples. We introduce a convenient set of non-biological DNA sequences that can be useful as positive controls for high-volume analysis. Careful analysis of negative and positive controls is particularly important in studies of samples with low microbial biomass, where contamination can comprise most or all of a sample. Lastly, we summarize approaches to enhancing experimental robustness by careful control of multiple comparisons and to comparing discovery and validation cohorts. We hope the experimental tactics summarized here will help researchers in this exciting field advance their studies efficiently while avoiding errors.
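For the multiple-comparisons step, one widely used choice (noted here as an illustration, not necessarily the procedure the authors recommend) is Benjamini-Hochberg false discovery rate control:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR control.

    Returns a list of True/False flags, parallel to pvals, marking which
    hypotheses are rejected at false discovery rate alpha.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])   # indices, ascending p
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank                                   # largest rank passing the step-up test
    rejected = [False] * m
    for i in order[:k]:                                # reject the k smallest p-values
        rejected[i] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals))  # → [True, True, False, False, False, False]
```

With six taxa tested, only the two smallest p-values survive at FDR 0.05, whereas a naive 0.05 cutoff would also have declared 0.039 and 0.041 significant.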
Multiple re-encounter approach to radical pair reactions and the role of nonlinear master equations.
Clausen, Jens; Guerreschi, Gian Giacomo; Tiersch, Markus; Briegel, Hans J
2014-08-07
We formulate a multiple-encounter model of the radical pair mechanism that is based on a random coupling of the radical pair to a minimal model environment. These occasional pulse-like couplings correspond to the radical encounters and give rise to both dephasing and recombination. While this is in agreement with the original model of Haberkorn and its extensions that assume additional dephasing, we show how a nonlinear master equation may be constructed to describe the conditional evolution of the radical pairs prior to the detection of their recombination. We propose a nonlinear master equation for the evolution of an ensemble of independently evolving radical pairs whose nonlinearity depends on the record of the fluorescence signal. We also reformulate Haberkorn's original argument on the physicality of reaction operators using the terminology of quantum optics/open quantum systems. Our model allows one to describe multiple encounters within the exponential model and connects this with the master equation approach. We include hitherto neglected effects of the encounters, such as a separate dephasing in the triplet subspace, and predict potential new effects, such as Grover reflections of radical spins, that may be observed if the strength and time of the encounters can be experimentally controlled.
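For reference, the conventional (linear) Haberkorn master equation that the multiple-encounter model connects to treats singlet and triplet recombination through anticommutators with the corresponding projection operators (standard form from the radical-pair literature, written with ħ = 1):

```latex
\frac{d\rho}{dt} = -\,i\,[H,\rho]
  \;-\; \frac{k_{\mathrm{S}}}{2}\,\{P_{\mathrm{S}},\,\rho\}
  \;-\; \frac{k_{\mathrm{T}}}{2}\,\{P_{\mathrm{T}},\,\rho\},
```

where P_S and P_T project onto the singlet and triplet subspaces and k_S, k_T are the respective recombination rates. The nonlinear master equation proposed in the paper generalizes this picture by conditioning the evolution on the (non-)observation of recombination.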
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleming, P. A.; Van Wingerden, J. W.; Wright, A. D.
2012-01-01
In this paper we present results from an ongoing controller comparison study at the National Renewable Energy Laboratory's (NREL's) National Wind Technology Center (NWTC). The intention of the study is to demonstrate the advantage of using modern multivariable methods for designing control systems for wind turbines versus conventional approaches. We will demonstrate the advantages through field-test results from experimental turbines located at the NWTC. At least two controllers are being developed side-by-side to meet an incrementally increasing number of turbine load-reduction objectives. The first, a multiple single-input, single-output (m-SISO) approach, uses separately developed decoupled and classically tuned controllers, which is, to the best of our knowledge, common practice in the wind industry. The remaining controllers are developed using state-space multiple-input and multiple-output (MIMO) techniques to explicitly account for coupling between loops and to optimize given known frequency structures of the turbine and disturbance. In this first publication from the study, we present the structure of the ongoing controller comparison experiment, the design process for the two controllers compared in this phase, and initial comparison results obtained in field-testing.
NASA Astrophysics Data System (ADS)
Prakash, Priyanka; Sayyed-Ahmad, Abdallah; Cho, Kwang-Jin; Dolino, Drew M.; Chen, Wei; Li, Hongyang; Grant, Barry J.; Hancock, John F.; Gorfe, Alemayehu A.
2017-01-01
Recent studies found that membrane-bound K-Ras dimers are important for biological function. However, the structure and thermodynamic stability of these complexes remained unknown because they are hard to probe by conventional approaches. Combining data from a wide range of computational and experimental approaches, here we describe the structure, dynamics, energetics and mechanism of assembly of multiple K-Ras dimers. Utilizing a range of techniques for the detection of reactive surfaces, protein-protein docking and molecular simulations, we found that two largely polar and partially overlapping surfaces underlie the formation of multiple K-Ras dimers. For validation we used mutagenesis, electron microscopy and biochemical assays under non-denaturing conditions. We show that partial disruption of a predicted interface through charge reversal mutation of apposed residues reduces oligomerization while introduction of cysteines at these positions enhanced dimerization likely through the formation of an intermolecular disulfide bond. Free energy calculations indicated that K-Ras dimerization involves direct but weak protein-protein interactions in solution, consistent with the notion that dimerization is facilitated by membrane binding. Taken together, our atomically detailed analyses provide unique mechanistic insights into K-Ras dimer formation and membrane organization as well as the conformational fluctuations and equilibrium thermodynamics underlying these processes.
NASA Astrophysics Data System (ADS)
Niaz, Mansoor
The main objectives of this study are: (1) to elaborate a framework based on a rational reconstruction of developments that led to the formulation of the laws of definite and multiple proportions; (2) to ascertain students' views of the two laws; (3) to formulate criteria based on the framework for evaluating chemistry textbooks' treatment of the two laws; and (4) to provide a rationale for chemistry teachers to respond to the question: Can we teach chemistry without the laws of definite and multiple proportions? Results obtained show that most of the textbooks present the laws of definite and multiple proportions within an inductivist perspective, characterized by the following sequence: experimental findings showed that chemical elements combined in fixed/multiple proportions, followed by the formulation of the laws of definite and multiple proportions, and finally Dalton's atomic theory was postulated to explain the laws. Students were found to be reluctant to question the laws that they learnt as the building blocks of chemistry. It is concluded that by emphasizing the laws of definite and multiple proportions, textbooks inevitably endorse the dichotomy between theories and laws, which is questioned by philosophers of science (Lakatos 1970; Giere 1995a, b). An alternative approach is presented which shows that we can teach chemistry without the laws of definite and multiple proportions.
Finding the target sites of RNA-binding proteins
Li, Xiao; Kazan, Hilal; Lipshitz, Howard D; Morris, Quaid D
2014-01-01
RNA–protein interactions differ from DNA–protein interactions because of the central role of RNA secondary structure. Some RNA-binding domains (RBDs) recognize their target sites mainly by their shape and geometry and others are sequence-specific but are sensitive to secondary structure context. A number of small- and large-scale experimental approaches have been developed to measure RNAs associated in vitro and in vivo with RNA-binding proteins (RBPs). Generalizing outside of the experimental conditions tested by these assays requires computational motif finding. Often RBP motif finding is done by adapting DNA motif finding methods; but modeling secondary structure context leads to better recovery of RBP-binding preferences. Genome-wide assessment of mRNA secondary structure has recently become possible, but these data must be combined with computational predictions of secondary structure before they add value in predicting in vivo binding. There are two main approaches to incorporating structural information into motif models: supplementing primary sequence motif models with preferred secondary structure contexts (e.g., MEMERIS and RNAcontext) and directly modeling secondary structure recognized by the RBP using stochastic context-free grammars (e.g., CMfinder and RNApromo). The former better reconstruct known binding preferences for sequence-specific RBPs but are not suitable for modeling RBPs that recognize shape and geometry of RNAs. Future work in RBP motif finding should incorporate interactions between multiple RBDs and multiple RBPs in binding to RNA. WIREs RNA 2014, 5:111–130. doi: 10.1002/wrna.1201 PMID:24217996
Hartlieb, Kathryn Brogan; Naar, Sylvie; Ledgerwood, David M; Templin, Thomas N; Ellis, Deborah A; Donohue, Bradley; Cunningham, Phillippe B
2015-12-07
Contingency management (CM) interventions, which use operant conditioning principles to encourage completion of target behavioral goals, may be useful for improving adherence to behavioral skills training (BST). Research to date has yet to explore CM for weight loss in minority adolescents. The objective was to examine the effects of CM on adolescent weight loss when added to BST. The study utilized an innovative experimental design that builds upon multiple baseline approaches, as recommended by the National Institutes of Health. Participants were six obese African-American youth and their primary caregivers living in Detroit, Michigan, USA. Adolescents received between 4 and 12 weeks of BST during a baseline period and subsequently received CM targeting weight loss. The outcome measure was youth weight. Linear mixed-effects modeling was used in the analysis. CM did not directly affect adolescent weight loss above that of BST (p=0.053). However, when caregivers were involved in CM session treatment, CM had a positive effect on adolescent weight loss. The estimated weight loss due to CM when caregivers also attended was 0.66 kg/week (p<0.001; 95% CI: -1.96, -0.97) relative to the baseline trajectory. This study demonstrates the application of a novel experimental approach to intervention development and the importance of parent involvement when delivering contingency management for minority youth weight loss. Lessons learned from program implementation are also discussed to inform practice.
Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation
NASA Technical Reports Server (NTRS)
Drewry, Darren T; Reynolds, Paul F, Jr; Emanuel, William R
2006-01-01
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.
Stress in multiple sclerosis: review of new developments and future directions.
Lovera, Jesus; Reza, Tara
2013-11-01
In the experimental autoimmune encephalitis model of multiple sclerosis, the effects of stress on disease severity depend on multiple factors, including the animal's genetics and the type of stressor. The studies in humans relating stress to the risk of developing multiple sclerosis have found discordant results. The studies looking at the association of stress with relapses show a fairly consistent association, where higher stress is associated with a higher risk of relapse. Higher stress levels also appear to increase the risk of development of gadolinium-enhancing lesions. A recent randomized trial shows that reducing stress using stress management therapy (SMT), a cognitive-behavioral therapy approach, results in a statistically significant reduction in new magnetic resonance imaging lesions. The magnitude of this effect is large and comparable to the effects of existent disease-modifying therapies, but no data exist yet proving that SMT reduces relapses or clinical progression; the effect of SMT appears to be short-lived. Additional work is needed to improve the duration of this effect and make this therapy more widely accessible.
Varghese, Sreeja; Cotter, Michelle; Chevot, Franciane; Fergus, Claire; Cunningham, Colm; Mills, Kingston H; Connon, Stephen J; Southern, John M; Kelly, Vincent P
2017-02-28
Queuine is a modified pyrrolopyrimidine nucleobase derived exclusively from bacteria. It post-transcriptionally replaces guanine 34 in transfer RNA isoacceptors for Asp, Asn, His and Tyr, in almost all eukaryotic organisms, through the activity of the ancient tRNA guanine transglycosylase (TGT) enzyme. tRNA hypomodification with queuine is a characteristic of rapidly-proliferating, non-differentiated cells. Autoimmune diseases, including multiple sclerosis, are characterised by the rapid expansion of T cells directed to self-antigens. Here, we demonstrate the potential medicinal relevance of targeting the modification of tRNA in the treatment of a chronic multiple sclerosis model—murine experimental autoimmune encephalomyelitis. Administration of a de novo designed eukaryotic TGT substrate (NPPDAG) led to an unprecedented complete reversal of clinical symptoms and a dramatic reduction of markers associated with immune hyperactivation and neuronal damage after five daily doses. TGT is essential for the therapeutic effect, since animals deficient in TGT activity were refractory to therapy. The data suggest that exploitation of the eukaryotic TGT enzyme is a promising approach for the treatment of multiple sclerosis.
Driver fatigue detection through multiple entropy fusion analysis in an EEG-based system.
Min, Jianliang; Wang, Ping; Hu, Jianfeng
2017-01-01
Driver fatigue is an important contributor to road accidents, and fatigue detection has major implications for transportation safety. The aim of this research is to analyze the multiple entropy fusion method and evaluate several channel regions to effectively detect a driver's fatigue state based on electroencephalogram (EEG) records. First, we fused multiple entropies, i.e., spectral entropy, approximate entropy, sample entropy and fuzzy entropy, as features compared with autoregressive (AR) modeling by four classifiers. Second, we captured four significant channel regions according to weight-based electrodes via a simplified channel selection method. Finally, the evaluation model for detecting driver fatigue was established with four classifiers based on the EEG data from four channel regions. Twelve healthy subjects performed continuous simulated driving for 1-2 hours with EEG monitoring on a static simulator. The leave-one-out cross-validation approach obtained an accuracy of 98.3%, a sensitivity of 98.3% and a specificity of 98.2%. The experimental results verified the effectiveness of the proposed method, indicating that the multiple entropy fusion features are significant factors for inferring the fatigue state of a driver.
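Of the entropies fused above, sample entropy is representative. A minimal sketch for a single 1-D EEG channel, using the common defaults of embedding dimension m=2 and tolerance r = 0.2·SD (assumptions here, not necessarily the paper's exact settings):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal: the negative log of the conditional
    probability that sequences matching for m points (Chebyshev distance
    <= r) also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    n = len(x)

    def match_count(dim):
        # All overlapping templates of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance to every later template (no self-matches).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b = match_count(m)      # template pairs matching at length m
    a = match_count(m + 1)  # ... still matching at length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")
```

A regular signal (e.g., a sine wave) yields a lower sample entropy than white noise, which is the kind of regularity change such features exploit when the EEG of a fatigued driver becomes more ordered.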
Long, Yun; Zhou, Linjie; Wang, Jian
2016-01-01
Photonic generation of microwave signals is attractive for its many prominent advantages, such as large bandwidth, low loss, and immunity to electromagnetic interference. Based on a single integrated silicon Mach–Zehnder modulator (MZM), we propose and experimentally demonstrate a simple and compact photonic scheme for generating frequency-multiplied microwave signals. Using the fabricated integrated MZM, we also demonstrate the feasibility of microwave amplitude-shift keying (ASK) modulation based on an integrated photonic approach. In proof-of-concept experiments, a 2-GHz frequency-doubled microwave signal is generated using a 1-GHz driving signal. 750-MHz/1-GHz frequency-tripled/quadrupled microwave signals are obtained with a driving signal of 250 MHz. In addition, a 50-Mb/s binary amplitude-coded 1-GHz microwave signal is also successfully generated. PMID:26832305
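The frequency-doubling mechanism can be illustrated numerically: when an intensity modulator is biased at its transmission null, the output intensity is an even function of the sinusoidal drive, so only even harmonics (2f, 4f, ...) appear. The half-wave voltage and drive amplitude below are illustrative assumptions, not the fabricated device's parameters:

```python
import numpy as np

fs = 64e9                       # simulation sampling rate (Hz)
f_rf = 1e9                      # 1 GHz electrical drive
t = np.arange(0, 2e-7, 1 / fs)  # 200 ns window
v_pi = 3.0                      # assumed half-wave voltage (V)
v_m = 1.5                       # assumed drive amplitude (V)

drive = v_m * np.sin(2 * np.pi * f_rf * t)
# Intensity transfer of an MZM biased at the transmission null:
intensity = 0.5 * (1 - np.cos(np.pi * drive / v_pi))

spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak / 1e9)  # → 2.0, i.e. a frequency-doubled 2 GHz tone
```

Expanding 1 - cos(a·sin ωt) in Bessel functions shows only the 2f, 4f, ... terms survive, which is why the 1 GHz fundamental is absent from the spectrum.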
Generalized Effective Medium Theory for Particulate Nanocomposite Materials
Siddiqui, Muhammad Usama; Arif, Abul Fazal M.
2016-01-01
The thermal conductivity of particulate nanocomposites is strongly dependent on the size, shape, orientation and dispersion uniformity of the inclusions. To correctly estimate the effective thermal conductivity of the nanocomposite, all these factors should be included in the prediction model. In this paper, the formulation of a generalized effective medium theory for the determination of the effective thermal conductivity of particulate nanocomposites with multiple inclusions is presented. The formulated methodology takes into account all the factors mentioned above and can be used to model nanocomposites with multiple inclusions that are randomly oriented or aligned in a particular direction. The effect of inclusion dispersion non-uniformity is modeled using a two-scale approach. The applications of the formulated effective medium theory are demonstrated using previously published experimental and numerical results for several particulate nanocomposites. PMID:28773817
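As background, the classical Maxwell-Garnett effective-medium estimate for randomly dispersed spherical inclusions is the kind of baseline that generalized formulations such as this one extend; a sketch (the paper's model additionally accounts for shape, orientation and dispersion non-uniformity):

```python
def maxwell_garnett_conductivity(k_matrix, k_particle, phi):
    """Effective thermal conductivity of a composite with spherical
    inclusions of conductivity k_particle at volume fraction phi
    embedded in a matrix of conductivity k_matrix
    (classical Maxwell-Garnett form)."""
    num = k_particle + 2.0 * k_matrix + 2.0 * phi * (k_particle - k_matrix)
    den = k_particle + 2.0 * k_matrix - phi * (k_particle - k_matrix)
    return k_matrix * num / den
```

The expression recovers the matrix conductivity at phi = 0 and the particle conductivity at phi = 1, and interpolates monotonically in between.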
Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering
NASA Astrophysics Data System (ADS)
Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki
2018-03-01
We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.
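Multiple importance sampling combines estimates from several sampling strategies using weights such as the balance heuristic. A minimal 1-D sketch outside any rendering context (the integrand and the two densities are illustrative assumptions, not the SSAO sampling distributions):

```python
import numpy as np

def balance_heuristic(p_this, p_other):
    # Balance-heuristic weight for a sample drawn from the strategy
    # whose density at the sample point is p_this.
    return p_this / (p_this + p_other)

def mis_estimate(f, rng, n=20000):
    """Estimate the integral of f over [0, 1] with one sample per strategy:
    a uniform strategy (density 1) and a linear-ramp strategy with density
    p(x) = 2x, sampled by inverse CDF as x = sqrt(u)."""
    total = 0.0
    for _ in range(n):
        x1 = rng.random()                             # uniform strategy
        total += balance_heuristic(1.0, 2.0 * x1) * f(x1)
        x2 = np.sqrt(rng.random())                    # ramp strategy
        total += balance_heuristic(2.0 * x2, 1.0) * f(x2) / (2.0 * x2)
    return total / n

rng = np.random.default_rng(1)
est = mis_estimate(lambda x: 3.0 * x * x, rng)  # exact integral is 1
```

Because the two weights sum to one at every point, the combined estimator stays unbiased while down-weighting whichever strategy samples a region poorly, which is the variance-reduction effect exploited above to cut the sample count.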
Three-dimensional vectorial multifocal arrays created by pseudo-period encoding
NASA Astrophysics Data System (ADS)
Zeng, Tingting; Chang, Chenliang; Chen, Zhaozhong; Wang, Hui-Tian; Ding, Jianping
2018-06-01
Multifocal arrays have been attracting considerable attention recently owing to their potential applications in parallel optical tweezers, parallel single-molecule orientation determination, parallel recording and multifocal multiphoton microscopy. However, the generation of vectorial multifocal arrays with a tailorable structure and polarization state remains a great challenge, and reports on multifocal arrays have hitherto been restricted either to scalar focal spots without polarization versatility or to regular arrays with fixed spacing. In this work, we propose a specific pseudo-period encoding technique to create three-dimensional (3D) vectorial multifocal arrays with the ability to manipulate the position, polarization state and intensity of each focal spot. We experimentally validated the flexibility of our approach in the generation of 3D vectorial multiple spots with polarization multiplicity and position tunability.
Huard, Jérémy; Mueller, Stephanie; Gilles, Ernst D; Klingmüller, Ursula; Klamt, Steffen
2012-01-01
During liver regeneration, quiescent hepatocytes re-enter the cell cycle to proliferate and compensate for lost tissue. Multiple signals including hepatocyte growth factor, epidermal growth factor, tumor necrosis factor α, interleukin-6, insulin and transforming growth factor β orchestrate these responses and are integrated during the G1 phase of the cell cycle. To investigate how these inputs influence DNA synthesis as a measure for proliferation, we established a large-scale integrated logical model connecting multiple signaling pathways and the cell cycle. We constructed our model based upon established literature knowledge, and successively improved and validated its structure using hepatocyte-specific literature as well as experimental DNA synthesis data. Model analyses showed that activation of the mitogen-activated protein kinase and phosphatidylinositol 3-kinase pathways was sufficient and necessary for triggering DNA synthesis. In addition, we identified key species in these pathways that mediate DNA replication. Our model predicted oncogenic mutations that were compared with the COSMIC database, and proposed intervention targets to block hepatocyte growth factor-induced DNA synthesis, which we validated experimentally. Our integrative approach demonstrates that, despite the complexity and size of the underlying interlaced network, logical modeling enables an integrative understanding of signaling-controlled proliferation at the cellular level, and thus can provide intervention strategies for distinct perturbation scenarios at various regulatory levels. PMID:22443451
Accurate Ambient Noise Assessment Using Smartphones
Zamora, Willian; Calafate, Carlos T.; Cano, Juan-Carlos; Manzoni, Pietro
2017-01-01
Nowadays, smartphones have become ubiquitous and one of the main communication resources for human beings. Their widespread adoption was due to the huge technological progress and to the development of multiple useful applications. Their characteristics have also experienced a substantial improvement as they now integrate multiple sensors able to convert the smartphone into a flexible and multi-purpose sensing unit. The combined use of multiple smartphones endowed with several types of sensors makes it possible to monitor a certain area with fine spatial and temporal granularity, a procedure typically known as crowdsensing. In this paper, we propose using smartphones as environmental noise-sensing units. For this purpose, we focus our study on the sound capture and processing procedure, analyzing the impact of different noise calculation algorithms, as well as determining their accuracy when compared to a professional noise measurement unit. We analyze different candidate algorithms using different types of smartphones, and we study the most adequate time period and sampling strategy to optimize the data-gathering process. In addition, we perform an experimental study comparing our approach with the results obtained using a professional device. Experimental results show that, if the smartphone application is well tuned, it is possible to measure noise levels with an accuracy comparable to that of professional devices for the entire dynamic range typically supported by microphones embedded in smartphones, i.e., 35–95 dB. PMID:28430126
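The core of such noise calculation algorithms is an equivalent-level computation over blocks of microphone samples. A sketch (the calibration offset is a hypothetical per-device constant; in practice it must be fitted against a professional meter, as the study does):

```python
import numpy as np

def equivalent_level_db(samples, calibration_offset=0.0):
    """Equivalent sound level of a block of normalized audio samples
    (full scale = 1.0), in dB. `calibration_offset` maps dB relative to
    full scale (dBFS) to dB SPL and is device-specific."""
    rms = np.sqrt(np.mean(np.asarray(samples, dtype=float) ** 2))
    return 20.0 * np.log10(rms) + calibration_offset

# A full-scale 1 kHz tone sampled at 48 kHz has RMS 1/sqrt(2),
# i.e. about -3.01 dB relative to full scale.
t = np.arange(48000) / 48000.0
tone = np.sin(2 * np.pi * 1000.0 * t)
print(round(equivalent_level_db(tone), 2))  # → -3.01
```

Averaging such block levels over an appropriate time window corresponds to the sampling-strategy question the paper studies.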
Echolocating bats use future-target information for optimal foraging.
Fujioka, Emyo; Aihara, Ikkyu; Sumiya, Miwa; Aihara, Kazuyuki; Hiryu, Shizuko
2016-04-26
When seeing or listening to an object, we aim our attention toward it. While capturing prey, many animal species focus their visual or acoustic attention toward the prey. However, for multiple prey items, the direction and timing of attention for effective foraging remain unknown. In this study, we adopted both experimental and mathematical methodology with microphone-array measurements and mathematical modeling analysis to quantify the attention of echolocating bats that were repeatedly capturing airborne insects in the field. Here we show that bats select rational flight paths to consecutively capture multiple prey items. Microphone-array measurements showed that bats direct their sonar attention not only to the immediate prey but also to the next prey. In addition, we found that a bat's attention in terms of its flight also aims toward the next prey even when approaching the immediate prey. Numerical simulations revealed a possibility that bats shift their flight attention to control suitable flight paths for consecutive capture. When a bat only aims its flight attention toward its immediate prey, it rarely succeeds in capturing the next prey. These findings indicate that bats gain increased benefit by distributing their attention among multiple targets and planning the future flight path based on additional information of the next prey. These experimental and mathematical studies allowed us to observe the process of decision making by bats during their natural flight dynamics.
Wang, Zhishi; Craven, Mark; Newton, Michael A.; Ahlquist, Paul
2013-01-01
Systematic, genome-wide RNA interference (RNAi) analysis is a powerful approach to identify gene functions that support or modulate selected biological processes. An emerging challenge shared with some other genome-wide approaches is that independent RNAi studies often show limited agreement in their lists of implicated genes. To better understand this, we analyzed four genome-wide RNAi studies that identified host genes involved in influenza virus replication. These studies collectively identified and validated the roles of 614 cell genes, but pair-wise overlap among the four gene lists was only 3% to 15% (average 6.7%). However, a number of functional categories were overrepresented in multiple studies. The pair-wise overlap of these enriched-category lists was high, ∼19%, implying more agreement among studies than apparent at the gene level. Probing this further, we found that the gene lists implicated by independent studies were highly connected in interacting networks by independent functional measures such as protein-protein interactions, at rates significantly higher than predicted by chance. We also developed a general, model-based approach to gauge the effects of false-positive and false-negative factors and to estimate, from a limited number of studies, the total number of genes involved in a process. For influenza virus replication, this novel statistical approach estimates the total number of cell genes involved to be ∼2,800. This and multiple other aspects of our experimental and computational results imply that, when following good quality control practices, the low overlap between studies is primarily due to false negatives rather than false-positive gene identifications. These results and methods have implications for and applications to multiple forms of genome-wide analysis. PMID:24068911
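The kind of pair-wise overlap statistic quoted above is straightforward to compute. This sketch uses |intersection| divided by the size of the smaller list as one common convention (the paper's exact definition may differ), with hypothetical hit lists:

```python
from itertools import combinations

def mean_pairwise_overlap(hit_lists):
    """Average pair-wise overlap between hit lists, defined here as
    the intersection size divided by the size of the smaller list."""
    sets = [set(h) for h in hit_lists]
    pair_vals = [len(a & b) / min(len(a), len(b))
                 for a, b in combinations(sets, 2)]
    return sum(pair_vals) / len(pair_vals)

# Toy hit lists from four hypothetical screens:
screens = [
    ['g1', 'g2', 'g3', 'g4'],
    ['g2', 'g3', 'g5', 'g6'],
    ['g3', 'g4', 'g7', 'g8'],
    ['g2', 'g9', 'g10', 'g11'],
]
print(round(mean_pairwise_overlap(screens), 3))  # → 0.292
```

Repeating the same computation on enriched functional categories rather than genes is how the paper shows agreement is higher at the category level.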
Design of clinical trials involving multiple hypothesis tests with a common control.
Schou, I Manjula; Marschner, Ian C
2017-07-01
Randomized clinical trials comparing several treatments to a common control are often reported in the medical literature. For example, multiple experimental treatments may be compared with placebo, or in combination therapy trials, a combination therapy may be compared with each of its constituent monotherapies. Such trials are typically designed using a balanced approach in which equal numbers of individuals are randomized to each arm; however, this can result in an inefficient use of resources. We provide a unified framework and new theoretical results for the optimal design of such single-control multiple-comparator studies. We consider variance optimal designs based on D-, A-, and E-optimality criteria, using a general model that allows for heteroscedasticity and a range of effect measures that include both continuous and binary outcomes. We demonstrate the sensitivity of these designs to the type of optimality criterion by showing that the optimal allocation ratios are systematically ordered according to the optimality criterion. Given this sensitivity to the optimality criterion, we argue that power optimality is a more suitable approach when designing clinical trials where testing is the objective. Weighted variance optimal designs are also discussed, which, like power optimal designs, allow the treatment difference to play a major role in determining allocation ratios. We illustrate our methods using two real clinical trial examples taken from the medical literature. Some recommendations on the use of optimal designs in single-control multiple-comparator trials are also provided. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
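A classical special case of such variance-optimal designs is the "square-root rule": with k experimental arms sharing one control and equal outcome variances, minimizing the summed variance of the k treatment-vs-control contrasts gives the control arm sqrt(k) times the per-treatment allocation. A sketch under those homoscedastic assumptions (the paper's framework generalizes well beyond this):

```python
import math

def sqrt_rule_allocation(k, total_n):
    """Allocation minimizing sum over k contrasts of (1/n0 + 1/n),
    i.e. the total contrast variance in units of sigma^2, subject to
    n0 + k * n = total_n, under equal variances across arms."""
    n_treat = total_n / (k + math.sqrt(k))
    n_control = math.sqrt(k) * n_treat
    return n_control, n_treat

n0, n1 = sqrt_rule_allocation(4, 600)  # 4 experimental arms, 1 control
print(n0, n1)  # → 200.0 100.0

# Summed contrast variance (units of sigma^2) vs. a balanced design:
optimal = 4 * (1 / n0 + 1 / n1)
balanced = 4 * (1 / 120 + 1 / 120)
```

Here the unbalanced design strictly beats randomizing 120 individuals to every arm, illustrating the abstract's point that balanced allocation can waste resources.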
NASA Technical Reports Server (NTRS)
Flesia, C.; Schwendimann, P.
1992-01-01
The contribution of multiple scattering to the lidar signal depends on the optical depth tau. Therefore, lidar analysis based on the assumption that multiple scattering can be neglected is limited to cases characterized by low values of the optical depth (tau less than or equal to 0.1), and hence excludes scattering from most clouds. Moreover, all inversion methods relating the lidar signal to number densities and particle sizes must be modified, since multiple scattering affects the direct analysis. The essential requirements of a realistic model for lidar measurements that includes multiple scattering and can be applied to practical situations are as follows. (1) What is required is not merely a correction term or a rough approximation describing the results of a particular experiment, but a general theory of multiple scattering tying together the relevant physical parameters we seek to measure. (2) An analytical generalization of the lidar equation that can be applied in the case of a realistic aerosol is required. A purely analytical formulation is important in order to avoid the convergence and stability problems which, in the case of a numerical approach, arise from the large number of events that must be taken into account in the presence of large optical depth and/or strong experimental noise.
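The single-scattering lidar equation whose inversion breaks down at higher optical depth can be sketched on a discrete range grid (uniform gate spacing and an arbitrary calibration constant are assumed; the profiles below are illustrative):

```python
import numpy as np

def single_scatter_lidar(beta, alpha, r, C=1.0):
    """Single-scattering lidar return P(r) = C * beta(r) / r^2 * exp(-2*tau(r)),
    with the optical depth tau(r) accumulated by a simple cumulative sum.
    Once tau exceeds roughly 0.1 this form omits a growing
    multiple-scattering contribution, which is the abstract's point."""
    dr = r[1] - r[0]              # uniform range grid assumed
    tau = np.cumsum(alpha) * dr   # optical depth to each range gate
    return C * beta / r**2 * np.exp(-2.0 * tau)

r = np.linspace(100.0, 2000.0, 20)   # range gates (m), illustrative
alpha = np.full_like(r, 1e-4)        # extinction coefficient (1/m), illustrative
beta = np.full_like(r, 1e-6)         # backscatter coefficient, illustrative
P = single_scatter_lidar(beta, alpha, r)
```

Inverting this expression for alpha and beta is what standard retrievals do; multiple scattering adds an extra, geometry-dependent term that this equation cannot represent.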
NASA Astrophysics Data System (ADS)
Jiang, Guo-Qian; Xie, Ping; Wang, Xiao; Chen, Meng; He, Qun
2017-11-01
The performance of traditional vibration-based fault diagnosis methods greatly depends on handcrafted features extracted using signal processing algorithms, which require significant amounts of domain knowledge and human labor and do not generalize well to new diagnosis domains. Recently, unsupervised representation learning has provided a promising alternative to feature extraction in traditional fault diagnosis due to its superior ability to learn from unlabeled data. Given that vibration signals usually contain multiple temporal structures, this paper proposes a multiscale representation learning (MSRL) framework to learn useful features directly from raw vibration signals, with the aim of capturing rich and complementary fault pattern information at different scales. In our proposed approach, a coarse-grained procedure is first employed to obtain multiple scale signals from an original vibration signal. Then, sparse filtering, a newly developed unsupervised learning algorithm, is applied to automatically learn useful features from each scale signal, and the learned features at each scale are concatenated one by one to obtain multiscale representations. Finally, the multiscale representations are fed into a supervised classifier to obtain diagnosis results. Our proposed approach is evaluated using two different case studies: motor bearing and wind turbine gearbox fault diagnosis. Experimental results show that the proposed MSRL approach can take full advantage of the availability of unlabeled data to learn discriminative features, achieving better performance with higher accuracy and stability than traditional approaches.
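The coarse-grained procedure that produces the multiple scale signals can be sketched as non-overlapping window averaging, a common choice in multiscale analysis (the paper's exact variant and its sparse-filtering stage are not reproduced here):

```python
import numpy as np

def coarse_grain(signal, scale):
    """Coarse-graining step: average non-overlapping windows of
    length `scale`; any trailing samples that do not fill a full
    window are discarded."""
    n = len(signal) // scale
    return np.asarray(signal[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

def multiscale_signals(signal, max_scale=4):
    # Scale 1 reproduces the raw signal; larger scales smooth it,
    # exposing slower temporal structures.
    return [coarse_grain(signal, s) for s in range(1, max_scale + 1)]
```

Features learned separately from each of these scale signals would then be concatenated into one multiscale representation, as the abstract describes.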
A formal concept analysis approach to consensus clustering of multi-experiment expression data
2014-01-01
Background: Presently, with the increasing number and complexity of available gene expression datasets, the combination of data from multiple microarray studies addressing a similar biological question is gaining importance. The analysis and integration of multiple datasets are expected to yield more reliable and robust results since they are based on a larger number of samples and the effects of the individual study-specific biases are diminished. This is supported by recent studies suggesting that important biological signals are often preserved or enhanced by multiple experiments. An approach to combining data from different experiments is the aggregation of their clusterings into a consensus or representative clustering solution which increases the confidence in the common features of all the datasets and reveals the important differences among them. Results: We propose a novel generic consensus clustering technique that applies a Formal Concept Analysis (FCA) approach for the consolidation and analysis of clustering solutions derived from several microarray datasets. These datasets are initially divided into groups of related experiments with respect to a predefined criterion. Subsequently, a consensus clustering algorithm is applied to each group, resulting in a clustering solution per group. These solutions are pooled together and further analysed by employing FCA, which allows extracting valuable insights from the data and generating a gene partition over all the experiments. In order to validate the FCA-enhanced approach, two consensus clustering algorithms are adapted to incorporate the FCA analysis. Their performance is evaluated on gene expression data from a multi-experiment study examining the global cell-cycle control of fission yeast.
The FCA results derived from both methods demonstrate that, although both algorithms optimize different clustering characteristics, FCA is able to overcome and diminish these differences and preserve some relevant biological signals. Conclusions: The proposed FCA-enhanced consensus clustering technique is a general approach to the combination of clustering algorithms with FCA for deriving clustering solutions from multiple gene expression matrices. The experimental results presented herein demonstrate that it is a robust data integration technique able to produce a good-quality clustering solution that is representative of the whole set of expression matrices. PMID:24885407
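At the core of FCA, a formal concept is a pair (extent, intent) closed under the two derivation operators. A brute-force toy with objects as genes and attributes as cluster memberships, purely to illustrate the construct (not the paper's algorithm or data):

```python
from itertools import combinations

def formal_concepts(context):
    """All formal concepts (extent, intent) of a binary context given
    as a dict mapping each object to its set of attributes."""
    objects = list(context)
    attributes = set().union(*context.values())

    def intent(objs):
        # Attributes shared by all objects in `objs`.
        return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

    def extent(attrs):
        # Objects possessing every attribute in `attrs`.
        return {o for o in objects if attrs <= context[o]}

    concepts = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            i = intent(set(objs))
            concepts.add((frozenset(extent(i)), frozenset(i)))
    return concepts

context = {'g1': {'c1', 'c2'}, 'g2': {'c2'}, 'g3': {'c2', 'c3'}}
concepts = formal_concepts(context)
print(len(concepts))  # → 4
```

Ordering these concepts by extent inclusion yields the concept lattice that the consensus technique mines for groups of genes co-clustered across experiments.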
Oita, Azusa; Tsuboi, Yuuri; Date, Yasuhiro; Oshima, Takahiro; Sakata, Kenji; Yokoyama, Akiko; Moriya, Shigeharu; Kikuchi, Jun
2018-04-24
There is an increasing need for assessing aquatic ecosystems that are globally endangered. Since aquatic ecosystems are complex, integrated consideration of multiple factors utilizing omics technologies can help us better understand aquatic ecosystems. An integrated strategy linking three analytical (machine learning, factor mapping, and forecast-error-variance decomposition) approaches for extracting the features of surface water from datasets comprising ions, metabolites, and microorganisms is proposed herein. The three developed approaches can be employed for diverse datasets of sample sizes and experimentally analyzed factors. The three approaches are applied to explore the features of bay water surrounding Odaiba, Tokyo, Japan, as a case study. Firstly, the machine learning approach separated 681 surface water samples within Japan into three clusters, categorizing Odaiba water into seawater with relatively low inorganic ions, including Mg, Ba, and B. Secondly, the factor mapping approach illustrated Odaiba water samples from the summer as rich in multiple amino acids and some other metabolites and poor in inorganic ions relative to other seasons based on their seasonal dynamics. Finally, forecast-error-variance decomposition using vector autoregressive models indicated that a type of microalgae (Raphidophyceae) grows in close correlation with alanine, succinic acid, and valine on filters and with isobutyric acid and 4-hydroxybenzoic acid in filtrate, Ba, and average wind speed. Our integrated strategy can be used to examine many biological, chemical, and environmental physical factors to analyze surface water. Copyright © 2018. Published by Elsevier B.V.
Yuan, Qingjun; Gao, Junning; Wu, Dongliang; Zhang, Shihua; Mamitsuka, Hiroshi; Zhu, Shanfeng
2016-01-01
Motivation: Identifying drug–target interactions is an important task in drug discovery. To reduce the heavy time and financial cost of experimental approaches, many computational approaches have been proposed. Although these approaches have used many different principles, their performance is far from satisfactory, especially in predicting drug–target interactions of new candidate drugs or targets. Methods: Approaches based on machine learning for this problem can be divided into two types: feature-based and similarity-based methods. Learning to rank is the most powerful technique in the feature-based methods. Similarity-based methods are well accepted, due to their idea of connecting the chemical and genomic spaces, represented by drug and target similarities, respectively. We propose a new method, DrugE-Rank, to improve the prediction performance by nicely combining the advantages of the two different types of methods. That is, DrugE-Rank uses learning to rank, for which multiple well-known similarity-based methods can be used as components of ensemble learning. Results: The performance of DrugE-Rank is thoroughly examined by three main experiments using data from DrugBank: (i) cross-validation on FDA (US Food and Drug Administration) approved drugs before March 2014; (ii) independent test on FDA approved drugs after March 2014; and (iii) independent test on FDA experimental drugs. Experimental results show that DrugE-Rank outperforms competing methods significantly, especially achieving more than 30% improvement in Area under Prediction Recall curve for FDA approved new drugs and FDA experimental drugs. Availability: http://datamining-iip.fudan.edu.cn/service/DrugE-Rank Contact: zhusf@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307615
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun
Here, an entropic multiple-relaxation-time lattice Boltzmann approach is coupled to a multirange Shan-Chen pseudopotential model to study the two-phase flow. Compared with previous multiple-relaxation-time multiphase models, this model is stable and accurate for the simulation of a two-phase flow in a much wider range of viscosity and surface tension at a high liquid-vapor density ratio. A stationary droplet surrounded by equilibrium vapor is first simulated to validate this model using the coexistence curve and Laplace's law. Then, two series of droplet impact behavior, on a liquid film and a flat surface, are simulated in comparison with theoretical or experimental results. Droplet impact on a liquid film is simulated for different Reynolds numbers at high Weber numbers. With the increase of the Sommerfeld parameter, onset of splashing is observed and multiple secondary droplets occur. The droplet spreading ratio agrees well with the square-root-of-time law and is found to be independent of Reynolds number. Moreover, shapes of simulated droplets impacting hydrophilic and superhydrophobic flat surfaces show good agreement with experimental observations through the entire dynamic process. The maximum spreading ratio of a droplet impacting the superhydrophobic flat surface is studied for a large range of Weber numbers. Results show that the rescaled maximum spreading ratios are in good agreement with a universal scaling law. This series of simulations demonstrates that the proposed model accurately captures the complex fluid-fluid and fluid-solid interfacial physical processes for a wide range of Reynolds and Weber numbers at high density ratios.
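Two of the validation checks mentioned above, Laplace's law for a stationary droplet and the square-root-of-time spreading law, amount to simple post-processing of simulation output. The sketch below is illustrative only, with synthetic numbers, and is not the entropic lattice Boltzmann code itself.

```python
import math

def fit_sqrt_law(times, radii):
    """Least-squares fit of r(t) = C * sqrt(t).

    Minimising sum (r_i - C sqrt(t_i))^2 over C gives
    C = sum(r_i * sqrt(t_i)) / sum(t_i).
    """
    num = sum(r * math.sqrt(t) for t, r in zip(times, radii))
    return num / sum(times)

def laplace_pressure_3d(sigma, radius):
    """Young-Laplace pressure jump across a spherical droplet: dp = 2*sigma/R."""
    return 2.0 * sigma / radius

# Synthetic spreading data following the square-root-of-time law with C = 1.5.
ts = [0.25, 1.0, 4.0, 9.0]
rs = [1.5 * math.sqrt(t) for t in ts]
C = fit_sqrt_law(ts, rs)
print(round(C, 6))                              # → 1.5
print(round(laplace_pressure_3d(0.07, 0.001), 6))  # → 140.0
```

In a real validation, `rs` would be measured spreading radii from the simulation, and the fitted pressure jump across the droplet interface would be compared against `laplace_pressure_3d` for a range of droplet radii.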
NASA Astrophysics Data System (ADS)
Qin, Feifei; Mazloomi Moqaddam, Ali; Kang, Qinjun; Derome, Dominique; Carmeliet, Jan
2018-03-01
McNamara, J P; Hanigan, M D; White, R R
2016-12-01
The National Animal Nutrition Program "National Research Support Project 9" supports efforts in livestock nutrition, including the National Research Council's committees on the nutrient requirements of animals. Our objective was to review the status of experimentation and data reporting in animal nutrition literature and to provide suggestions for the advancement of animal nutrition research and the ongoing improvement of field-applied nutrient requirement models. Improved data reporting consistency and completeness represent a substantial opportunity to improve nutrition-related mathematical models. We reviewed a body of nutrition research; recorded common phrases used to describe diets, animals, housing, and environmental conditions; and proposed equivalent numerical data that could be reported. With the increasing availability of online supplementary material sections in journals, we developed a comprehensive checklist of data that should be included in publications. To continue to improve our research effectiveness, studies utilizing multiple research methodologies to address complex systems and measure multiple variables will be necessary. From the current body of animal nutrition literature, we identified a series of opportunities to integrate research focuses (nutrition, reproduction and genetics) to advance the development of nutrient requirement models. From our survey of current experimentation and data reporting in animal nutrition, we identified 4 key opportunities to advance animal nutrition knowledge: (1) coordinated experiments should be designed to employ multiple research methodologies; (2) systems-oriented research approaches should be encouraged and supported; (3) publication guidelines should be updated to encourage and support sharing of more complete data sets; and (4) new experiments should be more rapidly integrated into our knowledge bases, research programs and practical applications. Copyright © 2016 American Dairy Science Association. 
Published by Elsevier Inc. All rights reserved.
A study of pilot modeling in multi-controller tasks
NASA Technical Reports Server (NTRS)
Whitbeck, R. F.; Knight, J. R.
1972-01-01
A modeling approach that utilizes a matrix of transfer functions to describe the human pilot in multiple-input, multiple-output control situations is studied. The approach was to extend a well-established scalar Wiener-Hopf minimization technique to the matrix case and then study, via a series of experiments, the data requirements when only finite record lengths are available. One of these experiments was a two-controller roll-tracking experiment designed to force the pilot to use rudder in order to coordinate and reduce the effects of aileron yaw. One model was computed for the case where the signals used to generate the spectral matrix are error and bank angle, while another model was computed for the case where error and yaw angle are the inputs. Several anomalies were observed in the experimental data; these are defined by the descriptive terms roll up, break up, and roll down. Due to these algorithm-induced anomalies, the frequency band over which reliable estimates of power spectra can be achieved is considerably smaller than predicted by the sampling theorem.
Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).
Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad
2018-04-01
A tumor can be found in any area of the brain and can be of any size, shape, and contrast, and multiple tumors of different types may exist in a human brain at the same time. Accurate tumor segmentation is considered a primary step in the treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous regions of a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with convolutional feature maps fed in at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over other approaches in this area of research. © 2018 Wiley Periodicals, Inc.
Super-resolved Parallel MRI by Spatiotemporal Encoding
Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio
2016-01-01
Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal encoding (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important aspect that SPEN still needs to address to provide a competitive acquisition alternative is exploiting parallel imaging algorithms without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses that simultaneously encode multiple partial fields-of-view, together with a new algorithm merging super-resolved SPEN image reconstruction with SENSE multiple-receiver methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromising the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms was explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case upon single-scan imaging near tissue/air interfaces. PMID:24120293
Optimization of Multiple Related Negotiation through Multi-Negotiation Network
NASA Astrophysics Data System (ADS)
Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi
In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not execute MRN optimally in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use an MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on an MNN. Secondly, by employing an MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, by comparing expected utilities among all possible policies for conducting MRN, an optimal policy is generated that optimizes the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful-end scenario, and avoid unnecessary losses in an unsuccessful-end scenario.
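The policy-selection step, comparing expected utilities over all possible joint policies, can be sketched as follows. This is a hedged reconstruction of the general idea, not the MNN/MNID machinery of the paper: the product-of-success-rates and sum-of-utilities model, the action names, and the numbers are all assumptions.

```python
from itertools import product

def best_policy(negotiations):
    """Exhaustively score joint policies over multiple related negotiations.

    negotiations: list of dicts mapping action -> (success_prob, utility).
    The joint success rate is taken as the product of per-negotiation
    success probabilities and the joint utility as the sum of utilities,
    so the expected utility of a joint policy is prod(p_i) * sum(u_i).
    """
    best, best_eu = None, float("-inf")
    for policy in product(*(sorted(n) for n in negotiations)):
        p, u = 1.0, 0.0
        for neg, act in zip(negotiations, policy):
            prob, util = neg[act]
            p *= prob
            u += util
        eu = p * u
        if eu > best_eu:
            best, best_eu = policy, eu
    return best, best_eu

# Two related negotiations, each with a safe and an aggressive offer.
negs = [
    {"safe": (0.9, 2.0), "aggressive": (0.5, 5.0)},
    {"safe": (0.8, 1.0), "aggressive": (0.4, 4.0)},
]
policy, eu = best_policy(negs)
print(policy, round(eu, 3))  # → ('aggressive', 'safe') 2.4
```

Note that the concurrent view matters: greedily picking the highest expected utility per negotiation in isolation can differ from the joint optimum, which is the failure mode of sequential MRN that the paper targets.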
Yiu, Kai-Hang; Tse, Hung-Fat
2014-06-01
The disease burden of diabetes mellitus (DM) and its associated cardiovascular complications represent a growing and major global health problem. Recent studies suggest that circulating exogenous endothelial progenitor cells (EPCs) play an important role in endothelial repair and neovascularization at sites of injury or ischemia. Both experimental and clinical studies have demonstrated that hyperglycemia related to DM can induce alterations to EPCs. The reduction and dysfunction of EPCs related to DM correlate with the occurrence and severity of microvascular and macrovascular complications, suggesting a close mechanistic link between EPC dysfunction and impaired vascular function/repair in DM. These alterations to EPCs, likely mediated by multiple pathophysiological mechanisms, including inflammation, oxidative stress, and alterations in Akt and the nitric oxide pathway, affect EPCs at multiple stages: differentiation and mobilization in the bone marrow, trafficking and survival in the circulation, and homing and neovascularization. Several different therapeutic approaches have consequently been proposed to reverse the reduction and dysfunction of EPCs in DM and may represent a novel therapeutic approach to prevent and treat DM-related cardiovascular complications. © 2014 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Dong, Huaipeng; Zhang, Qi; Shi, Jun
2017-12-01
Magnetic resonance (MR) images suffer from intensity inhomogeneity. Segmentation-based approaches can simultaneously achieve both intensity inhomogeneity compensation (IIC) and tissue segmentation for MR images with little noise, but they often fail for images polluted by severe noise. Here, we propose a noise-robust algorithm named noise-suppressed multiplicative intrinsic component optimization (NSMICO) for simultaneous IIC and tissue segmentation. Considering the spatial characteristics of an image, an adaptive nonlocal means filtering term is incorporated into the objective function of NSMICO to decrease image deterioration due to noise. Then, a fuzzy local factor term utilizing the spatial and gray-level relationships among local pixels is embedded into the objective function to strike a balance between noise suppression and detail preservation. Experimental results on synthetic, natural, and MR images with various levels of intensity inhomogeneity and noise, as well as on in vivo clinical MR images, demonstrate the effectiveness of NSMICO and its superiority over three competing approaches. NSMICO could be potentially valuable for MR image IIC and tissue segmentation.
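The nonlocal means idea that NSMICO builds on can be sketched in its plain (non-adaptive) form. The adaptive weighting and the fuzzy local factor of the actual algorithm are omitted; the patch radius, search radius, and smoothing parameter `h` below are arbitrary illustrative choices.

```python
import math

def nonlocal_means(img, patch=1, search=2, h=0.3):
    """Plain nonlocal-means denoising of a 2D list-of-lists image.

    Each pixel becomes a weighted average of pixels in a search window,
    weighted by the similarity of their surrounding patches:
        w = exp(-||P_i - P_j||^2 / h^2)
    Borders are handled by clamping indices to the image.
    """
    H, W = len(img), len(img[0])

    def patch_at(y, x):
        return [img[min(max(y + dy, 0), H - 1)][min(max(x + dx, 0), W - 1)]
                for dy in range(-patch, patch + 1)
                for dx in range(-patch, patch + 1)]

    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            p0 = patch_at(y, x)
            num = den = 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy = min(max(y + dy, 0), H - 1)
                    xx = min(max(x + dx, 0), W - 1)
                    p1 = patch_at(yy, xx)
                    d2 = sum((a - b) ** 2 for a, b in zip(p0, p1))
                    w = math.exp(-d2 / (h * h))
                    num += w * img[yy][xx]
                    den += w
            out[y][x] = num / den
    return out

# A flat image with one noisy spike: the outlier is pulled toward its peers,
# while flat regions (whose patches all match) stay essentially unchanged.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
sm = nonlocal_means(img)
print(sm[2][2] < 1.0, abs(sm[0][0]) < 0.01)
```

The patch-similarity weighting is what distinguishes nonlocal means from a simple Gaussian blur: pixels are averaged with *similar-looking* neighborhoods, not merely *nearby* pixels, which preserves edges and fine detail.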
Wan, Shixiang; Duan, Yucong; Zou, Quan
2017-09-01
Predicting the subcellular localization of proteins is an important and challenging problem. Traditional experimental approaches are often expensive and time-consuming. Consequently, a growing number of research efforts employ a series of machine learning approaches to predict the subcellular location of proteins. There are two main challenges among the state-of-the-art prediction methods. First, most of the existing techniques are designed to deal with multi-class rather than multi-label classification, which ignores connections between multiple labels. In reality, multiple locations of particular proteins imply that there are vital and unique biological significances that deserve special focus and cannot be ignored. Second, techniques for handling imbalanced data in multi-label classification problems are necessary, but never employed. For solving these two issues, we have developed an ensemble multi-label classifier called HPSLPred, which can be applied for multi-label classification with an imbalanced protein source. For convenience, a user-friendly webserver has been established at http://server.malab.cn/HPSLPred. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Effects of blade-to-blade dissimilarities on rotor-body lead-lag dynamics
NASA Technical Reports Server (NTRS)
Mcnulty, M. J.
1986-01-01
Small blade-to-blade property differences are investigated to determine their effects on the behavior of a simple rotor-body system. An analytical approach is used which emphasizes the significance of these effects from the experimental point of view. It is found that the primary effect of blade-to-blade dissimilarities is the appearance of additional peaks in the frequency spectrum, separated from the conventional response modes by multiples of the rotor speed. These additional responses are potential experimental problems because, when they occur near a mode of interest, they act as contaminant frequencies that can make damping measurements difficult. The effects of increased rotor-body coupling and a rotor shaft degree of freedom act to improve the situation by altering the frequency separation of the modes.
NASA Astrophysics Data System (ADS)
Mejid Elsiti, Nagwa; Noordin, M. Y.; Idris, Ani; Saed Majeed, Faraj
2017-10-01
This paper presents an optimization of the process parameters of micro-electrical discharge machining (micro-EDM) with (γ-Fe2O3) nano-powder-mixed dielectric, using the multi-response Grey Relational Analysis (GRA) method instead of single-response optimization. The parameters were optimized based on a 2-level factorial design combined with Grey Relational Analysis. The machining parameters peak current, gap voltage, and pulse-on time were chosen for experimentation. The performance characteristics chosen for this study are material removal rate (MRR), tool wear rate (TWR), taper, and overcut. Experiments were conducted using electrolytic copper as the tool and CoCrMo as the workpiece. Experimental results were improved through this approach.
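The Grey Relational Analysis step can be sketched as follows: normalize each response (larger-is-better for MRR; smaller-is-better for TWR, taper, and overcut), convert deviations from the ideal into grey relational coefficients, and average them into a single grade per experimental run. The run values below are hypothetical, not the paper's measurements.

```python
def grey_relational_grades(runs, larger_better, zeta=0.5):
    """Grey Relational Analysis for multi-response optimisation.

    runs: list of response vectors, one per experimental run.
    larger_better: per-response flag (True for MRR; False for TWR, etc.).
    Returns one grade per run; the highest grade marks the best compromise.
    """
    m = len(runs[0])
    norm = []
    for j in range(m):
        col = [r[j] for r in runs]
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        if larger_better[j]:
            norm.append([(v - lo) / span for v in col])
        else:
            norm.append([(hi - v) / span for v in col])
    grades = []
    for i in range(len(runs)):
        # Deviation from the ideal (normalised value 1) per response.
        # After min-max normalisation the deviations span [0, 1], so the
        # grey relational coefficient reduces to zeta / (dev + zeta),
        # with the usual distinguishing coefficient zeta = 0.5.
        devs = [1.0 - norm[j][i] for j in range(m)]
        coeffs = [zeta / (d + zeta) for d in devs]
        grades.append(sum(coeffs) / m)
    return grades

# Three hypothetical micro-EDM runs: responses are (MRR, TWR, taper, overcut).
runs = [(2.0, 0.5, 1.2, 0.10), (3.5, 0.9, 1.0, 0.08), (1.0, 0.3, 1.5, 0.12)]
g = grey_relational_grades(runs, larger_better=[True, False, False, False])
print([round(x, 3) for x in g])               # → [0.528, 0.833, 0.5]
print(max(range(len(g)), key=g.__getitem__))  # index of best run → 1
```

Turning four competing responses into one grade is exactly what lets a factorial design be analysed as if it had a single output, which is the point of using GRA over single-response optimization.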
Experimental models of demyelination and remyelination.
Torre-Fuentes, L; Moreno-Jiménez, L; Pytel, V; Matías-Guiu, J A; Gómez-Pinedo, U; Matías-Guiu, J
2017-08-29
Experimental animal models constitute a useful tool to deepen our knowledge of central nervous system disorders. In the case of multiple sclerosis, however, there is no specific model able to provide an overview of the disease; multiple models covering its different pathophysiological features are therefore necessary. We reviewed the different in vitro and in vivo experimental models used in multiple sclerosis research. Concerning in vitro models, we analysed cell cultures and slice models. As for in vivo models, we examined models of autoimmunity and inflammation, such as experimental allergic encephalomyelitis in different animals, and virus-induced demyelinating diseases. Furthermore, we analysed models of demyelination and remyelination, including chemical lesions caused by cuprizone, lysolecithin, and ethidium bromide; zebrafish; and transgenic models. Experimental models provide a deeper understanding of the different pathogenic mechanisms involved in multiple sclerosis. Choosing one model or another depends on the specific aims of the study. Copyright © 2017 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
Body-Earth Mover's Distance: A Matching-Based Approach for Sleep Posture Recognition.
Xu, Xiaowei; Lin, Feng; Wang, Aosen; Hu, Yu; Huang, Ming-Chun; Xu, Wenyao
2016-10-01
Sleep posture is a key component in sleep quality assessment and pressure ulcer prevention. Currently, body pressure analysis has been a popular method for sleep posture recognition. In this paper, a matching-based approach, Body-Earth Mover's Distance (BEMD), for sleep posture recognition is proposed. BEMD treats pressure images as weighted 2D shapes, and combines EMD and Euclidean distance for similarity measure. Compared with existing work, sleep posture recognition is achieved with posture similarity rather than multiple features for specific postures. A pilot study is performed with 14 persons for six different postures. The experimental results show that the proposed BEMD can achieve 91.21% accuracy, which outperforms the previous method with an improvement of 8.01%.
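The distance underlying BEMD can be illustrated with the one-dimensional earth mover's distance, which has a closed form as the accumulated absolute difference of cumulative sums. Full 2-D EMD on pressure images requires an optimal-transport solver; comparing row and column mass profiles, as below, is a deliberate simplification for illustration only, not the paper's BEMD metric.

```python
def emd_1d(p, q):
    """1-D earth mover's distance between two histograms of equal mass:
    normalise both, then sum the absolute differences of cumulative sums."""
    sp, sq = sum(p), sum(q)
    p = [v / sp for v in p]
    q = [v / sq for v in q]
    cum = d = 0.0
    for a, b in zip(p, q):
        cum += a - b   # running difference of the two CDFs
        d += abs(cum)  # "earth" that must still be moved past this bin
    return d

def profile_distance(img_a, img_b):
    """Compare two pressure images via EMD on their row and column
    mass profiles (a marginal simplification of full 2-D EMD)."""
    rows = lambda im: [sum(r) for r in im]
    cols = lambda im: [sum(c) for c in zip(*im)]
    return emd_1d(rows(img_a), rows(img_b)) + emd_1d(cols(img_a), cols(img_b))

# Two toy 3x3 "pressure images": the mass shifts one column to the right.
a = [[1, 0, 0]] * 3
b = [[0, 1, 0]] * 3
print(profile_distance(a, a))  # identical images → 0.0
print(profile_distance(a, b))  # one-column shift → 1.0
```

The attraction of an EMD-style distance for posture matching is visible even in this toy case: a small spatial shift of the same pressure pattern yields a small distance, whereas bin-wise distances (e.g. Euclidean on raw pixels) see a shifted pattern as entirely different.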
Lawson, Daniel J; Holtrop, Grietje; Flint, Harry
2011-07-01
Process models specified by non-linear dynamic differential equations contain many parameters, which often must be inferred from a limited amount of data. We discuss a hierarchical Bayesian approach that combines data from multiple related experiments in a meaningful way, permitting more powerful inference than treating each experiment as independent. The approach is illustrated with a simulation study and example data from experiments replicating aspects of the human gut microbial ecosystem. A predictive model is obtained that carries prediction uncertainty caused by uncertainty in the parameters, and we extend the model to capture situations of interest that cannot easily be studied experimentally. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
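The benefit of combining related experiments can be illustrated with the simplest partial-pooling estimator, a normal-normal model with known variances. The full hierarchical Bayesian treatment in the paper would require MCMC over the differential-equation parameters; this closed-form sketch only shows the shrinkage behavior, and all numbers are hypothetical.

```python
def partial_pool(means, ns, sigma2, tau2):
    """Normal-normal partial pooling across related experiments.

    Each experiment i reports the mean of n_i observations with known
    within-experiment variance sigma2; tau2 is the assumed
    between-experiment variance. The posterior mean for experiment i is a
    precision-weighted blend of its own mean and the grand mean, so
    poorly-measured experiments are shrunk hardest toward the ensemble.
    """
    grand = sum(m * n for m, n in zip(means, ns)) / sum(ns)
    pooled = []
    for m, n in zip(means, ns):
        prec_data = n / sigma2    # precision of this experiment's own mean
        prec_prior = 1.0 / tau2   # precision of the shared prior
        post = (prec_data * m + prec_prior * grand) / (prec_data + prec_prior)
        pooled.append(post)
    return pooled

# Three related experiments; the small-n outlier is shrunk the most.
means = [1.0, 1.2, 3.0]
ns = [50, 40, 2]
est = partial_pool(means, ns, sigma2=1.0, tau2=0.25)
print([round(e, 3) for e in est])  # → [1.01, 1.194, 1.754]
```

This is the "more powerful inference" in miniature: the third experiment, with only two observations, borrows strength from its well-measured siblings instead of being trusted (or discarded) on its own.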
Wieser, Stefan; Axmann, Markus; Schütz, Gerhard J.
2008-01-01
We propose here an approach for the analysis of single-molecule trajectories which is based on a comprehensive comparison of an experimental data set with multiple Monte Carlo simulations of the diffusion process. It allows quantitative data analysis, particularly whenever analytical treatment of a model is infeasible. Simulations are performed on a discrete parameter space and compared with the experimental results by a nonparametric statistical test. The method provides a matrix of p-values that assess the probability for having observed the experimental data at each setting of the model parameters. We show the testing approach for three typical situations observed in the cellular plasma membrane: i) free Brownian motion of the tracer; ii) hop diffusion of the tracer in a periodic meshwork of squares; and iii) transient binding of the tracer to slowly diffusing structures. By plotting the p-value as a function of the model parameters, one can easily identify the most consistent parameter settings but also recover mutual dependencies and ambiguities which are difficult to determine by standard fitting routines. Finally, we used the test to reanalyze previous data obtained on the diffusion of the glycosylphosphatidylinositol-protein CD59 in the plasma membrane of the human T24 cell line. PMID:18805933
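The grid-of-p-values idea can be sketched for the free Brownian motion case: simulate squared displacements for each candidate diffusion coefficient on a discrete grid and compare each simulated set against the "experimental" set with a nonparametric two-sample test. The Kolmogorov-Smirnov test with the standard asymptotic p-value approximation below stands in for whichever nonparametric test the authors used; all data are synthetic.

```python
import math, random

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic (max distance between ECDFs)."""
    x, y = sorted(x), sorted(y)
    i = j = 0
    d = 0.0
    while i < len(x) and j < len(y):
        if x[i] <= y[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(x) - j / len(y)))
    return d

def ks_pvalue(d, n1, n2):
    """Asymptotic two-sample KS p-value (Numerical Recipes approximation)."""
    ne = n1 * n2 / (n1 + n2)
    lam = (math.sqrt(ne) + 0.12 + 0.11 / math.sqrt(ne)) * d
    s = 2 * sum((-1) ** (k - 1) * math.exp(-2 * k * k * lam * lam)
                for k in range(1, 101))
    return max(0.0, min(1.0, s))

def simulate_steps(diff_coef, dt, n, rng):
    """Squared displacements of free 2-D Brownian motion:
    each coordinate step is N(0, 2 * D * dt)."""
    s = math.sqrt(2 * diff_coef * dt)
    return [rng.gauss(0, s) ** 2 + rng.gauss(0, s) ** 2 for _ in range(n)]

rng = random.Random(1)
experiment = simulate_steps(0.5, 0.01, 400, rng)  # stand-in for measured data

# One p-value per candidate diffusion coefficient on a discrete grid.
grid = [0.1, 0.25, 0.5, 1.0, 2.0]
pvals = {D: ks_pvalue(ks_statistic(experiment,
                                   simulate_steps(D, 0.01, 400, rng)),
                      400, 400)
         for D in grid}
print(max(grid, key=pvals.get))
```

Scanning the p-value over the grid, rather than fitting a single best value, is what exposes the ridges and ambiguities in parameter space that the abstract highlights; with two or more model parameters the same loop produces the paper's p-value matrix.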
A Combustion Model for the TWA 800 Center-Wing Fuel Tank Explosion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baer, M.R.; Gross, R.J.
1998-10-02
In support of the National Transportation Safety Board investigation of the TWA Flight 800 accident, a combined experimental/computational effort was conducted that focused on quarter-scale testing and simulation of the fuel-air explosion in the Boeing 747 center wing fuel tank. This report summarizes the modeling approach used at Sandia National Laboratories. In this approach, approximations are introduced that capture the essential physics associated with turbulent flame propagation in multiple-compartment fuel tanks. This model efficiently defines the pressure loading conditions during a jet-fuel-air explosion in a fuel tank confinement. Modeling calculations compare favorably with a variety of experimental quarter-scale tests conducted in rigid confinement. The modeling describes well the overpressure history in several geometry configurations. Upon demonstrating a reasonable comparison to experimental observations, a parametric study of eight possible ignition sources is then discussed. Model calculations demonstrate that different loading conditions arise as the location of the ignition event is varied. By comparing the inferred damage and calculated impulses to those seen in the recovered tank, it may be possible to reduce the number of likely sources. A possible extension of this work to better define tank damage includes coupling the combustion model as a pressure-loading routine for structural failure analysis.
ERIC Educational Resources Information Center
Leap, Evelyn M.
2013-01-01
This quasi-experimental study was conducted with two fifth grade classrooms to investigate the effect of scent on students' acquisition and retention of multiplication facts and math anxiety. Forty participants received daily instruction for nine weeks, using a strategy-rich multiplication program called Factivation. Students in the Double Smencil…
Development of a low-cost multiple diode PIV laser for high-speed flow visualization
NASA Astrophysics Data System (ADS)
Bhakta, Raj; Hargather, Michael
2017-11-01
Particle imaging velocimetry (PIV) is an optical visualization technique that typically incorporates a single high-powered laser to illuminate seeded particles in a fluid flow. Standard PIV lasers are extremely costly and have low repetition rates that severely limit their capability in high-speed, time-resolved imaging. The development of a multiple-diode laser system consisting of continuous lasers allows for flexible high-speed imaging with a wider range of test parameters. The developed laser system was fabricated with off-the-shelf parts for approximately $500. A series of experimental tests was conducted to compare the laser apparatus to a standard Nd:YAG double-pulsed PIV laser. Steady and unsteady flows were processed to compare the two systems and validate the accuracy of the multiple-laser design. PIV results indicate good correlation between the two laser systems and verify the construction of a precise laser instrument. The key technical obstacle to this approach was laser calibration and positioning, which will be discussed. HDTRA1-14-1-0070.
The Use of Learning Study in Designing Examples for Teaching Physics
NASA Astrophysics Data System (ADS)
Guo, Jian-Peng; Yang, Ling-Yan; Ding, Yi
2017-07-01
Researchers have consistently demonstrated that studying multiple examples is more effective than studying one example because comparing multiple examples can promote schema construction and facilitate discernment of critical aspects. Teachers, however, are usually absent from those self-led text-based studies. In this experimental study, a learning study approach based on variation theory was adopted to examine the effectiveness of teachers' different ways of designing multiple examples in helping students learn a physics principle. Three hundred and fifty-one tenth-grade students learned to distinguish action-reaction from equilibrium (a) by comparing examples that varied critical aspects first separately and then simultaneously, or (b) by comparing examples that separately varied critical aspects only. Results showed that students with average academic attainment benefited more from comparing examples in the first condition. Students with higher academic attainment learned equally within both conditions. This finding supports the advantage of simultaneous variation. The characteristics of students and instructional support should be taken into account when considering the effectiveness of patterns of variation.
Kappenman, Emily S; Luck, Steven J
2012-01-01
Event-related potentials (ERPs) are a powerful tool in understanding and evaluating cognitive, affective, motor, and sensory processing in both healthy and pathological samples. A typical ERP recording session takes considerable time but is designed to isolate only 1-2 components. Although this is appropriate for most basic science purposes, it is an inefficient approach for measuring the broad set of neurocognitive functions that may be disrupted in a neurological or psychiatric disease. The present study provides a framework for more efficiently evaluating multiple neural processes in a single experimental paradigm through the manipulation of functionally orthogonal dimensions. We describe the general MONSTER (Manipulation of Orthogonal Neural Systems Together in Electrophysiological Recordings) approach and explain how it can be adapted to investigate a variety of neurocognitive domains, ERP components, and neural processes of interest. We also demonstrate how this approach can be used to assess group differences by providing data from an implementation of the MONSTER approach in younger (18-30 y of age) and older (65-85 y of age) adult samples. This specific implementation of the MONSTER framework assesses 4 separate neural processes in the visual domain: (1) early sensory processing, using the C1 wave; (2) shifts of covert attention, with the N2pc component; (3) categorization, with the P3 component; and (4) self-monitoring, with the error-related negativity. Although the MONSTER approach is primarily described in the context of ERP experiments, it could also be adapted easily for use with functional magnetic resonance imaging.
A convolutional neural network-based screening tool for X-ray serial crystallography
Ke, Tsung-Wei; Brewster, Aaron S.; Yu, Stella X.; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K.
2018-01-01
A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. Automatic image processing algorithms described can enable the classification of large data sets, acquired under realistic conditions consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization. PMID:29714177
Modeling Quasi-Static and Fatigue-Driven Delamination Migration
NASA Technical Reports Server (NTRS)
De Carvalho, N. V.; Ratcliffe, J. G.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Tay, T. E.
2014-01-01
An approach was proposed and assessed for the high-fidelity modeling of progressive damage and failure in composite materials. It combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. Delamination, matrix cracking, and migration were captured using failure and migration criteria based on fracture mechanics. Quasi-static and fatigue loading were modeled within the same overall framework. The proposed methodology was illustrated by simulating the delamination migration test, showing good agreement with the available experimental data.
Low-cost Large Aperture Telescopes for Optical Communications
NASA Technical Reports Server (NTRS)
Hemmati, Hamid
2006-01-01
Low-cost, large-aperture optical receivers are required to form an affordable optical ground receiver network for laser communications. Among the ground receiver station's multiple subsystems, here we discuss only the ongoing research activities aimed at reducing the cost of the large-size optics of the receiver. Experimental results of two different approaches for fabricating low-cost mirrors with wavefront quality on the order of 100-200X the diffraction limit are described. Laboratory-level efforts are underway to improve the surface figure to better than 20X the diffraction limit.
A convolutional neural network-based screening tool for X-ray serial crystallography.
Ke, Tsung Wei; Brewster, Aaron S; Yu, Stella X; Ushizima, Daniela; Yang, Chao; Sauter, Nicholas K
2018-05-01
A new tool is introduced for screening macromolecular X-ray crystallography diffraction images produced at an X-ray free-electron laser light source. Based on a data-driven deep learning approach, the proposed tool executes a convolutional neural network to detect Bragg spots. The automatic image processing algorithms described enable the classification of large data sets acquired under realistic conditions, consisting of noisy data with experimental artifacts. Outcomes are compared for different data regimes, including samples from multiple instruments and differing amounts of training data for neural network optimization.
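The screening tool itself is a deep CNN; as a toy illustration of the underlying idea, a single hand-fixed centre-surround kernel already responds strongly to isolated bright pixels of the kind a Bragg spot produces. Everything below (the kernel, the threshold, the counting rule) is an illustrative assumption, not the paper's network.

```python
def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (cross-correlation) over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A centre-surround kernel: large response for a pixel much brighter than
# its neighbourhood (a crude stand-in for one learned convolutional filter).
CENTER_SURROUND = [[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]]

def count_spot_candidates(img, threshold=4.0):
    response = conv2d_valid(img, CENTER_SURROUND)
    return sum(1 for row in response for v in row if v > threshold)
```

A real screening network stacks many such learned filters with nonlinearities and classifies whole images as hit/miss, but the convolution primitive is the same.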
Experimental setups for FEL-based four-wave mixing experiments at FERMI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bencivenga, Filippo; Zangrando, Marco; Svetina, Cristian
2016-01-01
The recent advent of free-electron laser (FEL) sources is driving the scientific community to extend table-top laser research to shorter wavelengths, adding elemental selectivity and chemical state specificity. Both a compact setup (mini-TIMER) and a separate instrument (EIS-TIMER) dedicated to four-wave-mixing (FWM) experiments have been designed and constructed, to be operated as a branch of the Elastic and Inelastic Scattering beamline: EIS. The FWM experiments planned at EIS-TIMER are based on the transient grating approach, where two crossed FEL pulses create a controlled modulation of the sample excitations while a third, time-delayed pulse is used to monitor the dynamics of the excited state. This manuscript describes these experimental facilities, shows preliminary results from the commissioning of the EIS-TIMER beamline, and discusses original experimental strategies being developed to study the dynamics of matter at the fs–nm time–length scales. In the near future such experimental tools will allow more sophisticated FEL-based FWM applications, including the use of multiple and multi-color FEL pulses.
Bechtold, Joan E.; Swider, Pascal; Goreham-Voss, Curtis; Soballe, Kjeld
2016-01-01
This research review aims to focus attention on the effect of specific surgical and host factors on implant fixation, and on the importance of accounting for them in experimental and numerical models. These factors affect (a) eventual clinical applicability and (b) reproducibility of findings across research groups. Proper function and longevity of orthopedic joint replacement implants rely on secure fixation to the surrounding bone. Technology and surgical technique have improved over the last 50 years, and robust ingrowth and decades of implant survival are now routinely achieved for healthy patients and first-time (primary) implantation. Second-time (revision) implantation presents with bone loss and interfacial bone gaps in areas vital for secure mechanical fixation. Patients with medical comorbidities such as infection, smoking, congestive heart failure, kidney disease, and diabetes have a diminished healing response, poorer implant fixation, and greater revision risk. It is these more difficult clinical scenarios that require research to evaluate more advanced treatment approaches. Such treatments can include osteogenic or antimicrobial implant coatings, allo- or autogenous cellular or tissue-based approaches, local and systemic drug delivery, and surgical approaches. Regarding implant-related approaches, most experimental and numerical models do not generally impose conditions that represent mechanical instability at the implant interface, or recalcitrant healing. Many treatments will work well in forgiving settings, but fail in complex human settings with disease, bone loss, or previous surgery. Ethical considerations mandate that we justify and limit the number of animals tested, which restricts experimental permutations of treatments. Numerical models provide flexibility to evaluate multiple parameters and combinations, but generally need to employ simplifying assumptions.
The objectives of this paper are (a) to highlight the importance of mechanical, material, and surgical features in influencing implant–bone healing, using a selection of results from two decades of coordinated experimental and numerical work, and (b) to discuss limitations of such models and the implications for research reproducibility. Focusing model conditions on the clinical scenario to be studied, and limiting conclusions to the conditions of a particular model, can increase clinical relevance and research reproducibility. PMID:26720312
NASA Astrophysics Data System (ADS)
Diehl, Martin; Groeber, Michael; Haase, Christian; Molodov, Dmitri A.; Roters, Franz; Raabe, Dierk
2017-05-01
Predicting, understanding, and controlling the mechanical behavior is the most important task when designing structural materials. Modern alloy systems—in which multiple deformation mechanisms, phases, and defects are introduced to overcome the inverse strength-ductility relationship—give rise to multiple possibilities for modifying the deformation behavior, rendering traditional, exclusively experimentally-based alloy development workflows inappropriate. For fast and efficient alloy design, it is therefore desirable to predict the mechanical performance of candidate alloys by simulation studies to replace time- and resource-consuming mechanical tests. Simulation tools suitable for this task need to correctly predict the mechanical behavior as a function of alloy composition, microstructure, texture, phase fractions, and processing history. Here, an integrated computational materials engineering approach based on the open source software packages DREAM.3D and DAMASK (Düsseldorf Advanced Materials Simulation Kit) that enables such virtual material development is presented. More specifically, our approach consists of the following three steps: (1) acquire statistical quantities that describe a microstructure, (2) build a representative volume element based on these quantities employing DREAM.3D, and (3) evaluate the representative volume element using a predictive crystal plasticity material model provided by DAMASK. Exemplarily, these steps are conducted here for a high-manganese steel.
Kubo, Kenta; Okanoya, Kazuo; Kawai, Nobuyuki
2012-01-01
Although studies have emphasized the multiple components of anger, little is known about the physiological and psychological mechanisms of the approach motivational component and the negative emotional component of anger. In the present study, participants wrote brief opinions about social problems (e.g., tuition hikes) and received a handwritten, insulting comment about their composition from the experimenter. Half of the participants (apology group) received a simple apologetic sentence at the end of the insulting comment; the other half (no apology group) did not. The physiological responses of the participants were recorded before and after they read the comments. Increases in heart rate and asymmetric frontal brain activity were suppressed only in the apology group. Both groups showed an increase in skin conductance response. Our psychological scales showed that the apology suppressed self-reported state anger from an approach-motivational standpoint but not from a negative emotional standpoint. The results suggest that anger is not a unitary process but has multiple components. The apology did produce a different physiological profile but did not dampen the subjective experience of anger. Thus, providing an apology may not always be effective for alleviating the experience of anger at an insult.
Pharmacogenetics of the β2-Adrenergic Receptor Gene
Ortega, Victor E.; Hawkins, Gregory A.; Peters, Stephen P.; Bleecker, Eugene R.
2009-01-01
Asthma is a complex genetic disease with multiple genetic and environmental determinants contributing to the observed variability in response to common anti-asthma therapies. Asthma pharmacogenetic research has focused on multiple candidate genes, including the β2-adrenergic receptor gene (ADRβ2) and its effect on individual responses to beta agonist therapy. At present, knowledge about the effects of ADRβ2 variation on therapeutic responses is evolving and should not alter current Asthma Guideline approaches, which consist of the use of short-acting beta agonists (SABA) for as-needed, symptom-based therapy and the use of a regular long-acting beta agonist (LABA) in combination with inhaled corticosteroid therapy for optimal control of asthma symptoms in those asthmatics who are not controlled on inhaled corticosteroids alone. This approach is based upon studies showing a consistent pharmacogenetic response to regular use of SABA and less consistent findings in studies evaluating LABA. While emerging pharmacogenetic studies are provocative and should lead to functional approaches, conflicting data on responses to LABA therapy may be caused by factors that include small sample sizes of study populations and differences in experimental design, which limit the conclusions that may be drawn from these clinical trials at the present time. PMID:17996583
Zhang, Yihui; Webb, Richard Chad; Luo, Hongying; Xue, Yeguang; Kurniawan, Jonas; Cho, Nam Heon; Krishnan, Siddharth; Li, Yuhang; Huang, Yonggang; Rogers, John A
2016-01-07
Long-term, continuous measurement of core body temperature is of high interest, due to the widespread use of this parameter as a key biomedical signal for clinical judgment and patient management. Traditional approaches rely on devices or instruments in rigid and planar forms, not readily amenable to intimate or conformable integration with soft, curvilinear, time-dynamic, surfaces of the skin. Here, materials and mechanics designs for differential temperature sensors are presented which can attach softly and reversibly onto the skin surface, and also sustain high levels of deformation (e.g., bending, twisting, and stretching). A theoretical approach, together with a modeling algorithm, yields core body temperature from multiple differential measurements from temperature sensors separated by different effective distances from the skin. The sensitivity, accuracy, and response time are analyzed by finite element analyses (FEA) to provide guidelines for relationships between sensor design and performance. Four sets of experiments on multiple devices with different dimensions and under different convection conditions illustrate the key features of the technology and the analysis approach. Finally, results indicate that thermally insulating materials with cellular structures offer advantages in reducing the response time and increasing the accuracy, while improving the mechanics and breathability.
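The paper's model combines multiple differential measurements with FEA-derived calibration; the essential idea can be conveyed by the classic single dual-heat-flux relation, in which the same steady-state heat flux crosses both tissue and a sensor insulation layer. This is a simplified sketch, not the authors' algorithm, and the calibration constant and temperatures below are hypothetical.

```python
def core_temp_estimate(t_skin, t_insulated, k_cal):
    """Dual-heat-flux-style estimate of core body temperature.

    t_skin:      temperature measured at the skin surface [deg C]
    t_insulated: temperature measured above an insulating layer of known
                 thermal resistance [deg C]
    k_cal:       calibration constant = ratio of tissue thermal resistance
                 to insulator thermal resistance (hypothetical value below)
    """
    # In steady state the same heat flux crosses tissue and insulator, so the
    # core-to-skin temperature drop is proportional to the skin-to-sensor drop.
    return t_skin + k_cal * (t_skin - t_insulated)

t_core = core_temp_estimate(t_skin=35.8, t_insulated=34.9, k_cal=1.2)
```

Using several sensor pairs at different effective distances, as the paper does, lets the calibration constant be estimated in situ instead of assumed.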
An active structural acoustic control approach for the reduction of the structure-borne road noise
NASA Astrophysics Data System (ADS)
Douville, Hugo; Berry, Alain; Masson, Patrice
2002-11-01
The reduction of the structure-borne road noise generated inside the cabin of an automobile is investigated using an Active Structural Acoustic Control (ASAC) approach. First, a laboratory test bench consisting of a wheel/suspension/lower suspension A-arm assembly was developed in order to identify the vibroacoustic transfer paths (up to 250 Hz) for realistic road noise excitation of the wheel. Frequency Response Function (FRF) measurements between the excitation/control actuators and each suspension/chassis linkage are used to characterize the different transfer paths that transmit energy through the chassis of the car. Second, a FE/BE (Finite/Boundary Element) model was developed to simulate the acoustic field of an automobile cab interior. This model is used to predict the acoustic field inside the cabin in response to the measured forces applied on the suspension/chassis linkages. Finally, an experimental implementation of ASAC is presented. The control approach relies on the use of inertial actuators to modify the vibration behavior of the suspension and the automotive chassis such that its noise radiation efficiency is decreased. The implemented algorithm consists of a MIMO (Multiple-Input-Multiple-Output) feedforward configuration with a filtered-X LMS algorithm and an advanced reference signal (with FIR filters), implemented in the Simulink/dSPACE environment for control prototyping.
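The filtered-X LMS scheme can be sketched in its single-channel form: the reference is filtered through an estimate of the secondary path before driving the weight update. This is not the authors' MIMO implementation; the secondary-path model `s_hat`, the tonal signals, and all parameters below are illustrative assumptions (the true secondary path is taken equal to its estimate).

```python
import math

def fxlms(x, d, s_hat, n_taps=4, mu=0.2):
    """Single-channel filtered-x LMS sketch.

    x:     reference signal
    d:     disturbance measured at the error sensor
    s_hat: estimated secondary-path impulse response (assumed exact here)
    Returns the residual error history e[n] = d[n] + (s * y)[n].
    """
    w = [0.0] * n_taps
    # reference filtered through the secondary-path estimate ("filtered x")
    xf = [sum(s_hat[k] * x[n - k] for k in range(len(s_hat)) if n >= k)
          for n in range(len(x))]
    y = [0.0] * len(x)
    errors = []
    for n in range(len(x)):
        y[n] = sum(w[k] * x[n - k] for k in range(n_taps) if n >= k)
        # anti-noise after the secondary path (modelled here with s_hat itself)
        ys = sum(s_hat[k] * y[n - k] for k in range(len(s_hat)) if n >= k)
        e = d[n] + ys
        errors.append(e)
        # gradient-descent weight update using the filtered reference
        for k in range(n_taps):
            if n >= k:
                w[k] -= mu * e * xf[n - k]
    return errors

# Tonal disturbance: the adaptive filter learns to cancel it at the sensor.
x = [math.sin(0.3 * n) for n in range(600)]
d = [0.8 * math.sin(0.3 * n - 0.6) for n in range(600)]
errs = fxlms(x, d, s_hat=[0.0, 0.5])
```

The MIMO version used in the paper generalizes the same update to matrices of secondary-path FRFs between each actuator and each error sensor.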
Optimal design of stimulus experiments for robust discrimination of biochemical reaction networks.
Flassig, R J; Sundmacher, K
2012-12-01
Biochemical reaction networks in the form of coupled ordinary differential equations (ODEs) provide a powerful modeling tool for understanding the dynamics of biochemical processes. During the early phase of modeling, scientists have to deal with a large pool of competing nonlinear models. At this point, discrimination experiments can be designed and conducted to obtain optimal data for selecting the most plausible model. Since biological ODE models have widely distributed parameters due to, e.g., biological variability or experimental variations, model responses become distributed. Therefore, a robust optimal experimental design (OED) for model discrimination can be used to discriminate models based on their response probability distribution functions (PDFs). In this work, we present an optimal control-based methodology for designing optimal stimulus experiments aimed at robust model discrimination. For estimating the time-varying model response PDF, which results from the nonlinear propagation of the parameter PDF under the ODE dynamics, we suggest using the sigma-point approach. Using the model overlap (expected likelihood) as a robust discrimination criterion to measure dissimilarities between expected model response PDFs, we benchmark the proposed nonlinear design approach against linearization with respect to prediction accuracy and design quality for two nonlinear biological reaction networks. As shown, the sigma-point approach outperforms the linearization approach in the case of widely distributed parameter sets and/or existing multiple steady states. Since the sigma-point approach scales linearly with the number of model parameters, it can be applied to large systems for robust experimental planning. An implementation of the method in MATLAB/AMPL is available at http://www.uni-magdeburg.de/ivt/svt/person/rf/roed.html. Supplementary data are available at Bioinformatics online.
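The sigma-point approach propagates a parameter distribution through a nonlinear model using a small set of deterministic sample points. A scalar unscented-transform sketch conveys the mechanism; the scaling parameter `kappa` and the test functions are illustrative, not the paper's models.

```python
import math

def sigma_points_1d(mean, var, kappa=2.0):
    """Unscented-transform sigma points and weights for a scalar parameter."""
    spread = math.sqrt((1.0 + kappa) * var)   # n = 1 dimension
    points = [mean, mean + spread, mean - spread]
    w0 = kappa / (1.0 + kappa)
    wi = 1.0 / (2.0 * (1.0 + kappa))
    return points, [w0, wi, wi]

def unscented_moments(f, mean, var, kappa=2.0):
    """Approximate mean and variance of f(X) for X ~ (mean, var)."""
    pts, wts = sigma_points_1d(mean, var, kappa)
    ys = [f(p) for p in pts]
    m = sum(w * y for w, y in zip(wts, ys))
    v = sum(w * (y - m) ** 2 for w, y in zip(wts, ys))
    return m, v

# Exact for linear maps, and a cheap approximation for nonlinear responses:
m_lin, v_lin = unscented_moments(lambda p: 2.0 * p + 1.0, mean=1.0, var=4.0)
m_sq, _ = unscented_moments(lambda p: p * p, mean=0.0, var=1.0)
```

In the paper's setting, `f` would be the ODE solution at a measurement time, so the model-response PDF is summarized without linearizing the dynamics.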
Hou, Guangjin; Gupta, Rupal; Polenova, Tatyana; Vega, Alexander J
2014-02-01
Proton chemical shifts are a rich probe of structure and hydrogen bonding environments in organic and biological molecules. Until recently, measurements of 1H chemical shift tensors have been restricted to either solid systems with sparse proton sites or were based on the indirect determination of anisotropic tensor components from cross-relaxation and liquid-crystal experiments. We have introduced an MAS approach that permits site-resolved determination of CSA tensors of protons forming chemical bonds with labeled spin-1/2 nuclei in fully protonated solids with multiple sites, including organic molecules and proteins. This approach, originally introduced for the measurement of chemical shift tensors of amide protons, is based on three RN-symmetry-based experiments, from which the principal components of the 1H CS tensor can be reliably extracted by a simultaneous triple fit of the data. In this article, we expand our approach to a much more challenging system involving aliphatic and aromatic protons. We start with a review of the prior work on experimental NMR and computational quantum-chemical approaches for the measurement of 1H chemical shift tensors and for relating these to electronic structures. We then present our experimental results on U-13C,15N-labeled histidine, demonstrating that 1H chemical shift tensors can be reliably determined for the 1H-15N and 1H-13C spin pairs in the cationic and neutral forms of histidine. Finally, we demonstrate that the experimental 1H(C) and 1H(N) chemical shift tensors are in agreement with Density Functional Theory calculations, thereby establishing the usefulness of our method for characterization of structure and hydrogen bonding environments in organic and biological solids.
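Once the three principal components of a chemical shift tensor are extracted from the fit, they are conventionally reported as an isotropic shift, reduced anisotropy, and asymmetry. These are the standard Haeberlen-convention relations, not anything specific to this paper, and the numerical components below are made up.

```python
def haeberlen_parameters(c1, c2, c3):
    """Isotropic shift, reduced anisotropy and asymmetry from the three
    principal components of a chemical shift tensor (Haeberlen convention)."""
    iso = (c1 + c2 + c3) / 3.0
    # Order components by deviation from the isotropic value:
    # |d_zz - iso| >= |d_xx - iso| >= |d_yy - iso|
    zz, xx, yy = sorted((c1, c2, c3), key=lambda c: abs(c - iso), reverse=True)
    delta = zz - iso            # reduced anisotropy
    eta = (yy - xx) / delta     # asymmetry parameter, 0 <= eta <= 1
    return iso, delta, eta

# Hypothetical principal components in ppm:
iso, delta, eta = haeberlen_parameters(10.0, 20.0, 60.0)
```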
Single particle fluorescence: a simple experimental approach to evaluate coincidence effects.
Wu, Xihong; Omenetto, Nicoló; Smith, Benjamin W; Winefordner, James D
2007-07-01
Real-time characterization of the chemical and physical properties of individual aerosol particles is an important issue in environmental studies. A well-established way of accomplishing this task relies on the use of laser-induced fluorescence or laser ionization mass spectrometry. We describe here a simple approach aimed at experimentally verifying that single particles are indeed addressed. The approach has been tested with a system consisting of a series of aerodynamic lenses to form a beam of dye-doped particles aerosolized from a solution of known concentration with a medical nebulizer. Two independent spectral detection channels simultaneously measure the fluorescence signals generated in two different spectral regions by the passage of a mixture of two dye-doped particles through a focused laser beam in a vacuum chamber. Coincidence effects, arising from the simultaneous observation of both fluorescence emissions, can then be directly observed. Both dual-color fluorescence and pulse height distribution have been analyzed. As expected, the probability of single- or multiple-particle interaction strongly depends on the particle flux in the chamber, which is related to the concentration of particles in the nebulized solution. In our case, to achieve a two-particle coincidence probability smaller than 10%, a particle concentration lower than 1.2×10^5 particles/mL is required. Moreover, it was found that the experimental observations are in agreement with a simple mathematical model based on Poisson statistics. Although the results obtained refer to particle concentrations in solution, our approach is equally applicable to experiments involving direct air sampling, provided that the number density of particles in air can be measured a priori, e.g., with a particle counter.
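A Poisson model of the kind the paper invokes is easy to state: if the number of particles in the probe volume per laser shot is Poisson-distributed with mean `lam`, the fraction of detected (non-empty) events that actually involve two or more particles follows directly. The specific `lam` values below are illustrative, not the paper's calibration.

```python
import math

def coincidence_fraction(lam):
    """Fraction of shots that hit at least one particle and in fact hit two
    or more, for Poisson-distributed occupancy with mean lam."""
    p0 = math.exp(-lam)      # probability of an empty probe volume
    p1 = lam * p0            # probability of exactly one particle
    return (1.0 - p0 - p1) / (1.0 - p0)

# Keeping the mean occupancy low keeps multi-particle events rare, which is
# why the coincidence rate drops with the nebulized particle concentration.
f_low = coincidence_fraction(0.2)    # just under a 10% coincidence level
```

For small `lam` the fraction is approximately `lam / 2`, so halving the concentration roughly halves the coincidence rate.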
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovalenko, V. N.; Vechernin, V. V.
2016-01-22
The ultrarelativistic collisions of heavy and light ions in the center-of-mass energy range from a few up to a hundred GeV per nucleon have been considered in the string fusion approach. A Monte Carlo model of proton-proton, proton-nucleus, and nucleus-nucleus collisions has been developed, which takes into account both string fusion and the finite rapidity length of strings, implementing hadronic scattering through the interaction of color dipoles. It describes proton-nucleus and nucleus-nucleus collisions well at the partonic level without using the Glauber model of nuclear collisions. All parameters are fixed using experimental data on inelastic cross sections and multiplicity. In the framework of the model, we performed a beam energy and system size scan and studied the behaviour of n-n, pt-n and pt-pt long-range correlation coefficients. The detailed modeling of event-by-event charged particle production allowed us to provide predictions in conditions close to the experimental ones, allowing a direct comparison to the data.
Plasma Liner Research for MTF at NASA Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
Thio, Y. C. F.; Eskridge, R.; Lee, M.; Martin, A.; Smith, J.; Cassibry, J. T.; Wu, S. T.; Kirkpatrick, R. C.; Knapp, C. E.; Turchi, P. J.;
2002-01-01
The current research effort at NASA Marshall Space Flight Center (MSFC) in MTF is directed towards exploring the critical physics issues of potential embodiments of MTF for propulsion, especially standoff drivers involving plasma liners for MTF. There are several possible approaches for forming plasma liners. One approach consists of using a spherical array of plasma jets to form a spherical plasma shell imploding towards the center of a magnetized plasma, a compact toroid. The current experimental plan and status for exploring the physics of forming a 2-D plasma liner (shell) by merging plasma jets are described. A first-generation coaxial plasma gun (Mark-1) to launch the required plasma jets has been built and tested. Plasma jets have been launched reproducibly with low jitter, and velocities in excess of 50 km/s for the leading edge of the plasma jet. Some further refinements of the plasma gun are being explored. Successful completion of these single-gun tests will be followed by an experimental exploration of the problems of launching multiple jets simultaneously to form a cylindrical plasma liner.
Investigation of p-type depletion doping for InGaN/GaN-based light-emitting diodes
NASA Astrophysics Data System (ADS)
Zhang, Yiping; Zhang, Zi-Hui; Tan, Swee Tiam; Hernandez-Martinez, Pedro Ludwig; Zhu, Binbin; Lu, Shunpeng; Kang, Xue Jun; Sun, Xiao Wei; Demir, Hilmi Volkan
2017-01-01
Due to the limitation of hole injection, p-type doping is essential to improve the performance of InGaN/GaN multiple quantum well light-emitting diodes (LEDs). In this work, we propose and demonstrate a depletion-region Mg-doping method. Here we systematically analyze the effectiveness of different Mg-doping profiles ranging from the electron blocking layer to the active region. Numerical computations show that the Mg-doping decreases the valence band barrier for holes and thus enhances hole transport. The proposed depletion-region Mg-doping approach also increases the barrier height for electrons, which leads to reduced electron overflow, while increasing the hole concentration in the p-GaN layer. The experimentally measured external quantum efficiency indicates that the Mg-doping position is vitally important. Doping in or adjacent to the quantum well degrades LED performance due to Mg diffusion, which increases the corresponding nonradiative recombination, as supported by the measured carrier lifetimes. The experimental results are well reproduced numerically by modifying the nonradiative recombination lifetimes, which further validates the effectiveness of our approach.
Too many targets, not enough patients: rethinking neuroblastoma clinical trials.
Fletcher, Jamie I; Ziegler, David S; Trahair, Toby N; Marshall, Glenn M; Haber, Michelle; Norris, Murray D
2018-06-01
Neuroblastoma is a rare solid tumour of infancy and early childhood with a disproportionate contribution to paediatric cancer mortality and morbidity. Combination chemotherapy, radiation therapy and immunotherapy remains the standard approach to treat high-risk disease, with few recurrent, actionable genetic aberrations identified at diagnosis. However, recent studies indicate that actionable aberrations are far more common in relapsed neuroblastoma, possibly as a result of clonal expansion. In addition, although the major validated disease driver, MYCN, is not currently directly targetable, multiple promising approaches to target MYCN indirectly are in development. We propose that clinical trial design needs to be rethought in order to meet the challenge of providing rigorous, evidence-based assessment of these new approaches within a fairly small patient population and that experimental therapies need to be assessed at diagnosis in very-high-risk patients rather than in relapsed and refractory patients.
Prediction of regulatory gene pairs using dynamic time warping and gene ontology.
Yang, Andy C; Hsu, Hui-Huang; Lu, Ming-Da; Tseng, Vincent S; Shih, Timothy K
2014-01-01
Selecting informative genes is the most important task for data analysis on microarray gene expression data. In this work, we aim at identifying regulatory gene pairs from microarray gene expression data. However, microarray data often contain multiple missing expression values. Missing value imputation is thus needed before further processing for regulatory gene pairs becomes possible. We develop a novel approach to first impute missing values in microarray time series data by combining k-Nearest Neighbour (KNN), Dynamic Time Warping (DTW) and Gene Ontology (GO). After missing values are imputed, we then perform gene regulation prediction based on our proposed DTW-GO distance measurement of gene pairs. Experimental results show that our approach is more accurate when compared with existing missing value imputation methods on real microarray data sets. Furthermore, our approach can also discover more regulatory gene pairs that are known in the literature than other methods.
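The DTW component of the approach is the classical dynamic-programming recurrence; the sketch below uses absolute difference as the local cost, which is one common choice and an assumption here (the paper's DTW-GO measure additionally blends in Gene Ontology similarity, which is omitted).

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two time-series profiles."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Time-shifted but similarly shaped profiles get a small DTW distance, which is why the measure suits regulator/target pairs whose expression responses lag in time; the same distance can also rank the k nearest neighbours used for missing-value imputation.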
Overview of Heavy Ion Fusion Accelerator Research in the U. S.
NASA Astrophysics Data System (ADS)
Friedman, Alex
2002-12-01
This article provides an overview of current U.S. research on accelerators for Heavy Ion Fusion, that is, inertial fusion driven by intense beams of heavy ions with the goal of energy production. The concept, beam requirements, approach, and major issues are introduced. An overview of a number of new experiments is presented. These include: the High Current Experiment now underway at Lawrence Berkeley National Laboratory; studies of advanced injectors (in particular an approach based on the merging of multiple beamlets), being investigated experimentally at Lawrence Livermore National Laboratory; the Neutralized (chamber) Transport Experiment being assembled at Lawrence Berkeley National Laboratory; and smaller experiments at the University of Maryland and at Princeton Plasma Physics Laboratory. The comprehensive program of beam simulations and theory is outlined. Finally, prospects and plans for further development of this promising approach to fusion energy are discussed.
NASA Astrophysics Data System (ADS)
Bochinski, J. R.; Curtis, C.; Roman, M. P.; Clarke, L. I.; Wang, Q.; Thoppey, N. M.; Gorga, R. E.
2014-03-01
Utilizing unconfined polymer fluids (e.g., from solution or melt), edge electrospinning provides a straightforward approach for scaled-up production of high quality nanofibers through the formation of many parallel jets. From simple geometries (using solution contained within a sharp-edged bowl or on a flat plate), jets form and spontaneously re-arrange on the fluid surface near the edge. With appropriate control of the electric-field-induced feed rate, per-jet fabrication rates comparable to traditional single-needle electrospinning can be realized, resulting in nanofibers with similar diameters, diameter distribution, and collected mat porosity. The presence of multiple jets proportionally enhances the production rate of the system, with minimal experimental complexity and without the possibility of clogging. Extending this needle-less approach to commercial polyethylene polymers, micron-scale fibers can be melt electrospun using a similar apparatus. Support from National Science Foundation (CMMI-0800237).
Dimension reduction techniques for the integrative analysis of multi-omics data
Zeleznik, Oana A.; Thallinger, Gerhard G.; Kuster, Bernhard; Gholami, Amin M.
2016-01-01
State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput ‘omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations) and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease. PMID:26969681
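The methods reviewed extract directions that best explain (co)variance across data sets. As a minimal single-data-set stand-in, the leading principal axis can be found by power iteration on the sample covariance; this is a didactic sketch of PCA, not one of the multi-table methods (e.g. co-inertia or multiple co-inertia analysis) the review covers.

```python
def first_principal_axis(data, iters=100):
    """Leading principal axis of a small data matrix (rows = samples) via
    power iteration on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]   # converges to the dominant eigenvector
    return v

# Data lying along the direction (1, 2): the recovered axis is proportional to it.
axis = first_principal_axis([[t, 2.0 * t] for t in range(10)])
```

Multi-omics extensions replace the single covariance matrix with cross-covariance blocks between tables, but the eigen-decomposition machinery is the same.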
A learning framework for age rank estimation based on face images with scattering transform.
Chang, Kuang-Yu; Chen, Chu-Song
2015-03-01
This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregating performance. In addition, we give a theoretical analysis on designing the cost of each individual binary classifier so that the misranking cost can be bounded by the total misclassification costs. An efficient descriptor, the scattering transform, which scatters the Gabor coefficients and pools them with Gaussian smoothing in multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bioinspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms the state-of-the-art age estimation approaches.
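The rank-by-aggregated-binary-decisions idea can be sketched as follows. This is a toy illustration with a 1-D feature and threshold stumps; the paper's cost-sensitive hyperplane classifiers and scattering-transform features are not reproduced.

```python
import numpy as np

def fit_stump(x, y):
    """Best threshold stump for boolean labels y on a 1-D feature x."""
    best_t, best_acc = x.min() - 1.0, 0.0
    for t in np.unique(x):
        acc = np.mean((x > t) == y)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def fit_ordinal_ranker(x, ranks, n_ranks):
    """One binary subproblem per threshold k: 'is rank > k?'.
    The predicted rank aggregates the K-1 binary decisions."""
    return [fit_stump(x, ranks > k) for k in range(n_ranks - 1)]

def predict_rank(thresholds, x):
    return sum(int(x > t) for t in thresholds)

ages = np.array([0, 0, 1, 1, 2, 2, 3, 3])                   # ordinal labels
feat = np.array([0.1, 0.2, 1.1, 1.2, 2.1, 2.2, 3.1, 3.2])   # monotone feature
model = fit_ordinal_ranker(feat, ages, n_ranks=4)
pred = [predict_rank(model, v) for v in feat]
```

The paper's contribution is the cost assignment across these binary subproblems so that the total misclassification cost bounds the misranking cost.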
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
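The quantity at the heart of the probabilistic constraints above is a failure probability P[g(X) < 0]. A crude Monte Carlo estimate of it can be sketched as below; the paper's generalized subset simulation targets the same quantity far more efficiently for rare events, and the limit-state function here is an invented example.

```python
import numpy as np

def failure_probability(g, mean, std, n=100_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) < 0] for independent
    normal design variables (a stand-in for subset simulation)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(mean, std, size=(n, len(mean)))
    return np.mean(g(X) < 0)

# hypothetical limit state g(x) = x1 + x2 - 6: failure when the sum drops below 6
g = lambda X: X[:, 0] + X[:, 1] - 6.0
pf = failure_probability(g, mean=[4.0, 4.0], std=[1.0, 1.0])
# analytic check: x1 + x2 ~ N(8, sqrt(2)), so P[< 6] = Phi(-2/sqrt(2)) ~ 0.0786
```

In an RBDO loop, estimates like `pf` at the supporting design points feed the posterior approximation that turns P_f <= P_target into an ordinary deterministic constraint.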
Multifarious Roles of Intrinsic Disorder in Proteins Illustrate Its Broad Impact on Plant Biology
Sun, Xiaolin; Rikkerink, Erik H.A.; Jones, William T.; Uversky, Vladimir N.
2013-01-01
Intrinsically disordered proteins (IDPs) are highly abundant in eukaryotic proteomes. Plant IDPs play critical roles in plant biology and often act as integrators of signals from multiple plant regulatory and environmental inputs. Binding promiscuity and plasticity allow IDPs to interact with multiple partners in protein interaction networks and provide important functional advantages in molecular recognition through transient protein–protein interactions. Short interaction-prone segments within IDPs, termed molecular recognition features, represent potential binding sites that can undergo disorder-to-order transition upon binding to their partners. In this review, we summarize the evidence for the importance of IDPs in plant biology and evaluate the functions associated with intrinsic disorder in five different types of plant protein families experimentally confirmed as IDPs. Functional studies of these proteins illustrate the broad impact of disorder on many areas of plant biology, including abiotic stress, transcriptional regulation, light perception, and development. Based on the roles of disorder in the protein–protein interactions, we propose various modes of action for plant IDPs that may provide insight for future experimental approaches aimed at understanding the molecular basis of protein function within important plant pathways. PMID:23362206
NASA Astrophysics Data System (ADS)
Singh, M. K.; Soma, A. K.; Pathak, Ramji; Singh, V.
2014-03-01
This article focuses on multiplicity distributions of shower particles and target fragments for the interaction of 84Kr (Z = 36) with NIKFI BR-2 nuclear emulsion at a kinetic energy of 1 GeV per nucleon. Experimental multiplicity distributions of shower particles, grey particles, black particles and heavily ionizing particles are well described by the multi-component Erlang distribution of the multi-source thermal model. We have observed a linear correlation among the multiplicities of the above-mentioned particles and fragments. Further experimental studies have shown a saturation phenomenon in shower particle multiplicity with the increase of target fragment multiplicity.
Isomer ratios for products of photonuclear reactions on 121Sb
NASA Astrophysics Data System (ADS)
Bezshyyko, Oleg; Dovbnya, Anatoliy; Golinka-Bezshyyko, Larisa; Kadenko, Igor; Vodin, Oleksandr; Olejnik, Stanislav; Tuller, Gleb; Kushnir, Volodymyr; Mitrochenko, Viktor
2017-09-01
Over the past several years, various preequilibrium model approaches for nuclear reactions have been developed. Diverse and detailed experimental data in the medium excitation energy region of the nucleus are needed for a reasonable selection among these theoretical models. The lack of experimental data in this energy region essentially limits the possibilities for analysis and comparison of different preequilibrium theoretical models. For photonuclear reactions, this energy region extends over bremsstrahlung energies of roughly 30-100 MeV. Experimental measurements and estimations of isomer ratios for products of photonuclear reactions with multiple particle escape on antimony have been performed using bremsstrahlung with end-point energies of 38, 43 and 53 MeV. The induced-activity measurement method was applied. For the acquisition of gamma spectra we used an HPGe spectrometer with 20% efficiency and an energy resolution of 1.9 keV for the 1332 keV gamma line of 60Co. The LU-40 electron linear accelerator served as the bremsstrahlung source. The energy resolution of the electron beam was about 1% and the mean current was in the range 3.8-5.3 μA.
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure, proving resilient to multiple sources of variability in experimental data, including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
Yeste, Ada; Nadeau, Meghan; Burns, Evan J.; Weiner, Howard L.; Quintana, Francisco J.
2012-01-01
The immune response is normally controlled by regulatory T cells (Tregs). However, Treg deficits are found in autoimmune diseases, and therefore the induction of functional Tregs is considered a potential therapeutic approach for autoimmune disorders. The activation of the ligand-activated transcription factor aryl hydrocarbon receptor by 2-(1′H-indole-3′-carbonyl)-thiazole-4-carboxylic acid methyl ester (ITE) or other ligands induces dendritic cells (DCs) that promote FoxP3+ Treg differentiation. Here we report the use of nanoparticles (NPs) to coadminister ITE and a T-cell epitope from myelin oligodendrocyte glycoprotein (MOG)35–55 to promote the generation of Tregs by DCs. NP-treated DCs displayed a tolerogenic phenotype and promoted the differentiation of Tregs in vitro. Moreover, NPs carrying ITE and MOG35–55 expanded the FoxP3+ Treg compartment and suppressed the development of experimental autoimmune encephalomyelitis, an experimental model of multiple sclerosis. Thus, NPs are potential new tools to induce functional Tregs in autoimmune disorders. PMID:22745170
Infrared absorptivities of transition metals at room and liquid-helium temperatures.
NASA Technical Reports Server (NTRS)
Jones, M. C.; Palmer, D. C.; Tien, C. L.
1972-01-01
Evaluation of experimental data concerning the normal spectral absorptivities of the transition metals, nickel, iron, platinum, and chromium, at both room and liquid-helium temperatures in the wavelength range from 2.5 to 50 microns. The absorptivities were derived from reflectivity measurements made relative to a room-temperature vapor-deposited gold reference mirror. The absorptivity of the gold reference mirror was measured calorimetrically, by use of infrared laser sources. Investigation of various methods of sample-surface preparation resulted in the choice of a vacuum-annealing process as the final stage. The experimental results are discussed on the basis of the anomalous-skin-effect theory modified for multiple conduction bands. As predicted, the results approach a single-band model toward the longer wavelengths. Agreement between theory and experiment is considerably improved by taking into account the modification of the relaxation time due to the photon-electron-phonon interaction proposed by Holstein (1954) and Gurzhi (1958); but, particularly at helium temperatures, the calculated curve is consistently below the experimental results.
Machine learning algorithms for the creation of clinical healthcare enterprise systems
NASA Astrophysics Data System (ADS)
Mandal, Indrajit
2017-10-01
Clinical recommender systems are increasingly becoming popular for improving modern healthcare systems. Enterprise systems are persuasively used for creating effective nurse care plans to provide nurse training, clinical recommendations and clinical quality control. A novel design of a reliable clinical recommender system based on a multiple classifier system (MCS) is implemented. A hybrid machine learning (ML) ensemble based on the random subspace method and random forests is presented. The performance accuracy and robustness of the proposed enterprise architecture are quantitatively estimated to be above 99% and 97%, respectively (above the 95% confidence interval). The study then extends to experimental analysis of the clinical recommender system with respect to a noisy data environment. The ranking of items in the nurse care plan is demonstrated using machine learning algorithms (MLAs) to overcome the drawback of the traditional association rule method. The promising experimental results are compared against state-of-the-art approaches to highlight the advancement in recommendation technology. The proposed recommender system is experimentally validated using five benchmark clinical data sets to reinforce the research findings.
Micro Blowing Simulations Using a Coupled Finite-Volume Lattice-Boltzmann LES Approach
NASA Technical Reports Server (NTRS)
Menon, S.; Feiz, H.
1990-01-01
Three-dimensional large-eddy simulations (LES) of single and multiple jet-in-cross-flow (JICF) configurations are conducted using the 19-bit Lattice Boltzmann Equation (LBE) method coupled with a conventional finite-volume (FV) scheme. In this coupled LBE-FV approach, the LBE-LES is employed to simulate the flow inside the jet nozzles while the FV-LES is used to simulate the crossflow. The key application of this technique is the study of the micro-blowing technique (MBT) for drag control, similar to the recent experiments at NASA/GRC. It is necessary to resolve the flow inside the micro-blowing and suction holes with high resolution without being restricted by the FV time-step restriction. The coupled LBE-FV-LES approach achieves this objective in a computationally efficient manner. A single jet in crossflow case is used for validation purposes, and the results are compared with experimental data and a full LBE-LES simulation. Good agreement with data is obtained. Subsequently, MBT over a flat plate with a porosity of 25% is simulated using 9 jets in a compressible crossflow at a Mach number of 0.4. It is shown that MBT suppresses the near-wall vortices and reduces the skin friction by up to 50 percent, in good agreement with experimental data.
Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.
Trianni, Vito; López-Ibáñez, Manuel
2015-01-01
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.
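The selection step that lets multi-objective optimisation preserve "multiple trade-offs of the objectives" (point i above) is non-dominated filtering, sketched below. The objective vectors are invented examples, not results from the paper's case studies.

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (all objectives maximized here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated filtering: the selection step of most multi-objective
    evolutionary algorithms, which preserves multiple behavioural
    trade-offs instead of collapsing them into one scalar fitness."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical objective vectors, e.g. (task performance, behavioural diversity)
pop = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.4, 0.4), (0.2, 0.2)]
front = pareto_front(pop)
```

A single-objective weighted sum would keep only one of these solutions; the Pareto front keeps the whole trade-off surface, which is the advantage the survey argues for.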
Hitchcock, Elaine R.; Ferron, John
2017-01-01
Purpose: Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. Method: This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Conclusions: Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders. PMID:28595354
Kawai, Ryoko; Araki, Mitsugu; Yoshimura, Masashi; Kamiya, Narutoshi; Ono, Masahiro; Saji, Hideo; Okuno, Yasushi
2018-05-16
Development of new diagnostic imaging probes for Alzheimer's disease, such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) probes, has been strongly desired. In this study, we investigated the most accessible amyloid β (Aβ) binding site of [123I]IMPY, a Thioflavin-T-derived SPECT probe, using experimental and computational methods. First, we performed a competitive inhibition assay with Orange-G, which recognizes the KLVFFA region in Aβ fibrils, suggesting that IMPY and Orange-G bind to different sites in Aβ fibrils. Next, we precisely predicted the IMPY binding site on a multiple-protofilament Aβ fibril model using computational approaches, consisting of molecular dynamics and docking simulations. We generated possible IMPY-binding structures using docking simulations to identify candidates for probe-binding sites. The binding free energy of IMPY with the Aβ fibril was calculated by a free energy simulation method, MP-CAFEE. These computational results suggest that IMPY preferentially binds to an interfacial pocket located between two protofilaments and is stabilized mainly through hydrophobic interactions. Finally, our computational approach was validated by comparing it with the experimental results. The present study demonstrates the possibility of computational approaches to screen new PET/SPECT probes for Aβ imaging.
Byun, Tara McAllister; Hitchcock, Elaine R; Ferron, John
2017-06-10
Single-case experimental designs are widely used to study interventions for communication disorders. Traditionally, single-case experiments follow a response-guided approach, where design decisions during the study are based on participants' observed patterns of behavior. However, this approach has been criticized for its high rate of Type I error. In masked visual analysis (MVA), response-guided decisions are made by a researcher who is blinded to participants' identities and treatment assignments. MVA also makes it possible to conduct a hypothesis test assessing the significance of treatment effects. This tutorial describes the principles of MVA, including both how experiments can be set up and how results can be used for hypothesis testing. We then report a case study showing how MVA was deployed in a multiple-baseline across-subjects study investigating treatment for residual errors affecting rhotics. Strengths and weaknesses of MVA are discussed. Given their important role in the evidence base that informs clinical decision making, it is critical for single-case experimental studies to be conducted in a way that allows researchers to draw valid inferences. As a method that can increase the rigor of single-case studies while preserving the benefits of a response-guided approach, MVA warrants expanded attention from researchers in communication disorders.
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
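The opening premise, that the choice of metric changes what a learner sees, can be illustrated with a small Mahalanobis-distance sketch. The data are synthetic and the hand-picked metric matrix stands in for a learned one; this is not the paper's kernel-based algorithm.

```python
import numpy as np

def pairwise_sq_mahalanobis(X, M):
    """Squared Mahalanobis distances (x_i - x_j)^T M (x_i - x_j)
    for a positive semi-definite metric matrix M."""
    diff = X[:, None, :] - X[None, :, :]
    return np.einsum('ijk,kl,ijl->ij', diff, M, diff)

# two informative dims plus one pure-noise dim: the Euclidean metric (M = I)
# is swamped by the noise axis, while a metric that down-weights it is not
rng = np.random.default_rng(0)
A = np.c_[rng.normal(0, 0.3, (20, 2)), rng.normal(0, 3, (20, 1))]
B = np.c_[rng.normal(2, 0.3, (20, 2)), rng.normal(0, 3, (20, 1))]
X = np.vstack([A, B])
y = np.array([0] * 20 + [1] * 20)

def knn_accuracy(M):
    """Leave-one-out 1-NN accuracy under metric M."""
    D = pairwise_sq_mahalanobis(X, M)
    np.fill_diagonal(D, np.inf)
    return (y[D.argmin(axis=1)] == y).mean()

acc_euclid = knn_accuracy(np.eye(3))
acc_metric = knn_accuracy(np.diag([1.0, 1.0, 0.01]))  # suppress the noise axis
```

The paper's contribution is to learn such an M (in a kernel-induced space) from semi-supervised data by optimization on the positive definite matrix group, rather than fixing it by hand.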
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasti, D.E.; Ramirez, J.J.; Coleman, P.D.
1985-01-01
The Megamp Accelerator and Beam Experiment (MABE) was the technology development testbed for the multiple beam, linear induction accelerator approach for Hermes III, a new 20 MeV, 0.8 MA, 40 ns accelerator being developed at Sandia for gamma-ray simulation. Experimental studies of a high-current, single-beam accelerator (8 MeV, 80 kA) and a nine-beam injector (1.4 MeV, 25 kA/beam) have been completed, and experiments on a nine-beam linear induction accelerator are in progress. A two-beam linear induction accelerator has been designed and will be built as a gamma-ray simulator to be used in parallel with Hermes III. The MABE pulsed power system and accelerator for the multiple beam experiments are described. Results from these experiments and the two-beam design are discussed. 11 refs., 6 figs.
Design and synthetic considerations of matrix metalloproteinase inhibitors.
Skotnicki, J S; Zask, A; Nelson, F C; Albright, J D; Levin, J I
1999-06-30
Experimental evidence confirms that the matrix metalloproteinases (MMPs) play a fundamental role in a wide variety of pathologic conditions that involve connective tissue destruction including osteoarthritis and rheumatoid arthritis, tumor metastasis and angiogenesis, corneal ulceration, multiple sclerosis, periodontal disease, and atherosclerosis. Modulation of MMP regulation is possible at several biochemical sites, but direct inhibition of enzyme action provides a particularly attractive target for therapeutic intervention. Hypotheses concerning inhibition of specific MMP(s) with respect to disease target and/or side-effect profile have emerged. Examples are presented of recent advances in medicinal chemistry approaches to the design of matrix metalloproteinase inhibitors (MMPIs), approaches that address structural requirements and that influence potency, selectivity, and bioavailability. Two important approaches to the design, synthesis, and biological evaluation of MMPIs are highlighted: (1) the invention of alternatives to hydroxamic acid zinc chelators and (2) the construction of nonpeptide scaffolds. One current example in each of these two approaches from our own work is described.
Liao, Hstau Y.; Hashem, Yaser; Frank, Joachim
2015-01-01
Summary Single-particle cryogenic electron microscopy (cryo-EM) is a powerful tool for the study of macromolecular structures at high resolution. Classification allows multiple structural states to be extracted and reconstructed from the same sample. One classification approach is via the covariance matrix, which captures the correlation between every pair of voxels. Earlier approaches employ computing-intensive resampling and estimate only the eigenvectors of the matrix, which are then used in a separate fast classification step. We propose an iterative scheme to explicitly estimate the covariance matrix in its entirety. In our approach, the flexibility in choosing the solution domain allows us to examine a part of the molecule in greater detail. 3D covariance maps obtained in this way from experimental data (cryo-EM images of the eukaryotic pre-initiation complex) prove to be in excellent agreement with conclusions derived by using traditional approaches, revealing in addition the interdependencies of ligand bindings and structural changes. PMID:25982529
Liao, Hstau Y; Hashem, Yaser; Frank, Joachim
2015-06-02
Single-particle cryogenic electron microscopy (cryo-EM) is a powerful tool for the study of macromolecular structures at high resolution. Classification allows multiple structural states to be extracted and reconstructed from the same sample. One classification approach is via the covariance matrix, which captures the correlation between every pair of voxels. Earlier approaches employ computing-intensive resampling and estimate only the eigenvectors of the matrix, which are then used in a separate fast classification step. We propose an iterative scheme to explicitly estimate the covariance matrix in its entirety. In our approach, the flexibility in choosing the solution domain allows us to examine a part of the molecule in greater detail. Three-dimensional covariance maps obtained in this way from experimental data (cryo-EM images of the eukaryotic pre-initiation complex) prove to be in excellent agreement with conclusions derived by using traditional approaches, revealing in addition the interdependencies of ligand bindings and structural changes.
3D Power Line Extraction from Multiple Aerial Images.
Oh, Jaehong; Lee, Changno
2017-09-29
Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters.
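The object-to-image idea described above, scoring candidate 3D grid points by whether their projections land on line pixels in every view, can be sketched with a toy pinhole setup. The camera parameters, sagging-wire curve, and exact pixel masks are invented for illustration; a real pipeline would use detected line pixels with a matching tolerance.

```python
import numpy as np

def project(points, K, R, t):
    """Pinhole projection of Nx3 world points into pixel coordinates."""
    cam = R @ points.T + t[:, None]        # world -> camera frame
    uv = (K @ cam)[:2] / (K @ cam)[2]
    return uv.T

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.eye(3)                               # two views, 5 units apart
t1 = np.array([0., 0., 30.])
t2 = np.array([-5., 0., 30.])

# a sagging wire: height drops quadratically along its span
x = np.linspace(0, 10, 50)
line = np.c_[x, np.zeros(50), 8 - 0.05 * x**2]

# per-view "line pixel" masks (here simply the true line's projections)
masks = [set(map(tuple, np.round(project(line, K, R, t)).astype(int)))
         for t in (t1, t2)]

# object-to-image test: an on-wire candidate vs one displaced 1 unit in height
candidates = np.array([line[10], line[10] + [0.0, 0.0, 1.0]])
accepted = [all(tuple(np.round(project(c[None], K, R, t))[0].astype(int)) in m
                for t, m in zip((t1, t2), masks))
            for c in candidates]
```

Sweeping such a cubic grid of candidates through all views yields the 3D power-line points directly in object space, avoiding explicit image-to-image correspondence matching.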
3D Power Line Extraction from Multiple Aerial Images
Lee, Changno
2017-01-01
Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while hindering the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with the elevation accuracy of a few centimeters. PMID:28961204
Decoding brain cancer dynamics: a quantitative histogram-based approach using temporal MRI
NASA Astrophysics Data System (ADS)
Zhou, Mu; Hall, Lawrence O.; Goldgof, Dmitry B.; Russo, Robin; Gillies, Robert J.; Gatenby, Robert A.
2015-03-01
Brain tumor heterogeneity remains a challenge for probing brain cancer evolutionary dynamics. In light of evolution, it is a priority to inspect the cancer system from a time-domain perspective since it explicitly tracks the dynamics of cancer variations. In this paper, we study the problem of exploring brain tumor heterogeneity from temporal clinical magnetic resonance imaging (MRI) data. Our goal is to discover evidence-based knowledge from such temporal imaging data, where multiple clinical MRI scans from Glioblastoma multiforme (GBM) patients are generated during therapy. In particular, we propose a quantitative histogram-based approach that builds a prediction model to measure the difference in histograms obtained from pre- and post-treatment. The study could significantly assist radiologists by providing a metric to identify distinctive patterns within each tumor, which is crucial for the goal of providing patient-specific treatments. We examine the proposed approach for a practical application - clinical survival group prediction. Experimental results show that our approach achieved 90.91% accuracy.
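A histogram-difference feature of the kind described above can be sketched as follows; the synthetic "scans" and the L1 distance stand in for the paper's actual MRI data and feature design.

```python
import numpy as np

def histogram_shift(pre, post, bins=32, lo=0.0, hi=1.0):
    """Quantify change between two scans as the L1 distance between their
    normalized intensity histograms (a simple stand-in for the paper's
    pre- vs post-treatment histogram-difference features)."""
    h_pre, _ = np.histogram(pre, bins=bins, range=(lo, hi), density=True)
    h_post, _ = np.histogram(post, bins=bins, range=(lo, hi), density=True)
    return np.abs(h_pre - h_post).sum() * (hi - lo) / bins

# synthetic intensity samples: a treatment responder shifts the intensity
# distribution, a stable tumor does not
rng = np.random.default_rng(1)
pre_scan = rng.normal(0.4, 0.05, size=10_000).clip(0, 1)
responder = rng.normal(0.6, 0.05, size=10_000).clip(0, 1)
stable = rng.normal(0.4, 0.05, size=10_000).clip(0, 1)
shift_resp = histogram_shift(pre_scan, responder)
shift_stable = histogram_shift(pre_scan, stable)
```

Features like `shift_resp` computed across a patient's temporal scan sequence would then feed the survival-group classifier evaluated in the paper.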
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2017-04-01
This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
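The NMF multiplicative updates underlying the approaches above can be sketched in their standard linear form. This is the classic Lee-Seung update minimizing the Frobenius error on synthetic data; the paper's linear-quadratic model adds quadratic mixing terms not reproduced here.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2,
    the building block of NMF-based spectral unmixing (linear mixing
    only; quadratic terms of the paper's model are omitted)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update one factor...
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # ...then the other
    return W, H

# synthetic nonnegative low-rank data, e.g. spectra x abundances
rng = np.random.default_rng(42)
V = rng.random((40, 3)) @ rng.random((3, 25))
W, H = nmf_multiplicative(V, rank=3)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form keeps both factors nonnegative throughout, which is what makes the recovered spectra and abundances physically interpretable; the multisharpening methods then recombine the high-spectral and high-spatial factors.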
Systemic Injection of Neural Stem/Progenitor Cells in Mice with Chronic EAE
Donegà, Matteo; Giusto, Elena; Cossetti, Chiara; Schaeffer, Julia; Pluchino, Stefano
2014-01-01
Neural stem/precursor cells (NPCs) are a promising stem cell source for transplantation approaches aiming at brain repair or restoration in regenerative neurology. This directive has arisen from the extensive evidence that brain repair is achieved after focal or systemic NPC transplantation in several preclinical models of neurological diseases. These experimental data have identified the cell delivery route as one of the main hurdles of restorative stem cell therapies for brain diseases that requires urgent assessment. Intraparenchymal stem cell grafting represents a logical approach to those pathologies characterized by isolated and accessible brain lesions such as spinal cord injuries and Parkinson's disease. Unfortunately, this principle is poorly applicable to conditions characterized by a multifocal, inflammatory and disseminated (both in time and space) nature, including multiple sclerosis (MS). As such, brain targeting by systemic NPC delivery has become a low invasive and therapeutically efficacious protocol to deliver cells to the brain and spinal cord of rodents and nonhuman primates affected by experimental chronic inflammatory damage of the central nervous system (CNS). This alternative method of cell delivery relies on the NPC pathotropism, specifically their innate capacity to (i) sense the environment via functional cell adhesion molecules and inflammatory cytokine and chemokine receptors; (ii) cross the leaking anatomical barriers after intravenous (i.v.) or intracerebroventricular (i.c.v.) injection; (iii) accumulate at the level of multiple perivascular site(s) of inflammatory brain and spinal cord damage; and (iv) exert remarkable tissue trophic and immune regulatory effects onto different host target cells in vivo. Here we describe the methods that we have developed for the i.v. and i.c.v. 
delivery of syngeneic NPCs in mice with experimental autoimmune encephalomyelitis (EAE), as model of chronic CNS inflammatory demyelination, and envisage the systemic stem cell delivery as a valuable technique for the selective targeting of the inflamed brain in regenerative neurology. PMID:24798882
Multiscale Simulation and Modeling of Multilayer Heteroepitactic Growth of C60 on Pentacene.
Acevedo, Yaset M; Cantrell, Rebecca A; Berard, Philip G; Koch, Donald L; Clancy, Paulette
2016-03-29
We apply multiscale methods to describe the strained growth of multiple layers of C60 on a thin film of pentacene. We study this growth in the presence of a monolayer pentacene step to compare our simulations to recent experimental studies by Breuer and Witte of submonolayer growth in the presence of monolayer steps. The molecular-level details of this organic semiconductor interface have ramifications on the macroscale structural and electronic behavior of this system and allow us to describe several unexplained experimental observations for this system. The growth of a C60 thin film on a pentacene surface is complicated by the differing crystal habits of the two component species, leading to heteroepitactical growth. In order to probe this growth, we use three computational methods that offer different approaches to coarse-graining the system and differing degrees of computational efficiency. We present a new, efficient reaction-diffusion continuum model for 2D systems whose results compare well with mesoscale kinetic Monte Carlo (KMC) results for submonolayer growth. KMC extends our ability to simulate multiple layers but requires a library of predefined rates for event transitions. Coarse-grained molecular dynamics (CGMD) circumvents KMC's need for predefined lattices, allowing defects and grain boundaries to provide a more realistic thin film morphology. For multilayer growth, in this particularly suitable candidate for coarse-graining, CGMD is a preferable approach to KMC. Combining the results from these three methods, we show that the lattice strain induced by heteroepitactical growth promotes 3D growth and the creation of defects in the first monolayer. The CGMD results are consistent with experimental results on the same system by Conrad et al. and by Breuer and Witte in which C60 aggregates change from a 2D structure at low temperature to 3D clusters along the pentacene step edges at higher temperatures.
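The kinetic Monte Carlo ingredient above can be sketched in miniature. The lattice size and the rates below (a deposition rate F per empty site and a hop rate D per adatom) are invented for illustration; a real simulation of this system would use a library of transition rates fitted to C60/pentacene energetics:

```python
import random

def kmc_submonolayer(L=20, F=1.0, D=5.0, n_events=200, seed=0):
    """Minimal lattice KMC: deposition (rate F per empty site) competes with
    single-adatom hops (rate D per adatom) on an L x L periodic grid."""
    rng = random.Random(seed)
    occupied = set()
    t = 0.0
    for _ in range(n_events):
        r_dep = F * (L * L - len(occupied))    # total deposition rate
        r_hop = D * len(occupied)              # total hop rate
        total = r_dep + r_hop
        t += rng.expovariate(total)            # Gillespie waiting time
        if rng.random() * total < r_dep:       # choose event class by rate
            while True:                        # drop on a random empty site
                s = (rng.randrange(L), rng.randrange(L))
                if s not in occupied:
                    occupied.add(s)
                    break
        else:                                  # hop a random adatom
            x, y = rng.choice(sorted(occupied))
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            d = ((x + dx) % L, (y + dy) % L)
            if d not in occupied:              # reject moves onto occupied sites
                occupied.discard((x, y))
                occupied.add(d)
    return occupied, t

coverage_map, t_final = kmc_submonolayer()
```

With D much larger than F, deposited particles diffuse and cluster before the next arrival, which is the regime where island nucleation statistics become meaningful.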
Langwig, Kate E; Wargo, Andrew R; Jones, Darbi R; Viss, Jessie R; Rutan, Barbara J; Egan, Nicholas A; Sá-Guimarães, Pedro; Kim, Min Sun; Kurath, Gael; Gomes, M Gabriela M; Lipsitch, Marc
2017-11-21
Heterogeneity in host susceptibility is a key determinant of infectious disease dynamics but is rarely accounted for in assessment of disease control measures. Understanding how susceptibility is distributed in populations, and how control measures change this distribution, is integral to predicting the course of epidemics with and without interventions. Using multiple experimental and modeling approaches, we show that rainbow trout have relatively homogeneous susceptibility to infection with infectious hematopoietic necrosis virus and that vaccination increases heterogeneity in susceptibility in a nearly all-or-nothing fashion. In a simple transmission model with an R0 of 2, the highly heterogeneous vaccine protection would cause a 35 percentage-point reduction in outbreak size over an intervention inducing homogeneous protection at the same mean level. More broadly, these findings provide validation of methodology that can help to reduce biases in predictions of vaccine impact in natural settings and provide insight into how vaccination shapes population susceptibility. IMPORTANCE Differences among individuals influence transmission and spread of infectious diseases as well as the effectiveness of control measures. Control measures, such as vaccines, may provide leaky protection, protecting all hosts to an identical degree, or all-or-nothing protection, protecting some hosts completely while leaving others completely unprotected. This distinction can have a dramatic influence on disease dynamics, yet this distribution of protection is frequently unaccounted for in epidemiological models and estimates of vaccine efficacy. Here, we apply new methodology to experimentally examine host heterogeneity in susceptibility and mode of vaccine action as distinct components influencing disease outcome. Through multiple experiments and new modeling approaches, we show that the distribution of vaccine effects can be robustly estimated.
These results offer new experimental and inferential methodology that can improve predictions of vaccine effectiveness and have broad applicability to human, wildlife, and ecosystem health. Copyright © 2017 Langwig et al.
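The leaky versus all-or-nothing contrast can be reproduced with the textbook final-size equations for a two-group SIR model. This is a generic calculation, not the authors' fitted model; the coverage and efficacy values below are illustrative:

```python
import math

def final_size_leaky(R0, v, eff):
    """Overall attack rate when a fraction v is vaccinated with a leaky
    vaccine that scales everyone's susceptibility by (1 - eff)."""
    zu = zv = 0.5                       # attack rates: unvaccinated, vaccinated
    for _ in range(100000):
        lam = (1 - v) * zu + v * zv     # infected fraction driving transmission
        zu_n = 1 - math.exp(-R0 * lam)
        zv_n = 1 - math.exp(-(1 - eff) * R0 * lam)
        if abs(zu_n - zu) + abs(zv_n - zv) < 1e-12:
            break
        zu, zv = zu_n, zv_n
    return (1 - v) * zu + v * zv

def final_size_aon(R0, v, eff):
    """Overall attack rate for an all-or-nothing vaccine: a fraction v*eff
    is fully immune, everyone else remains fully susceptible."""
    immune = v * eff
    z = 0.5
    for _ in range(100000):
        z_n = (1 - immune) * (1 - math.exp(-R0 * z))
        if abs(z_n - z) < 1e-12:
            break
        z = z_n
    return z
```

At R0 = 2 and the same mean efficacy, the all-or-nothing vaccine yields a smaller outbreak than the leaky one, which is the direction of the effect reported above.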
Ghorbani, Neda; Rassafiani, Mehdi; Izadi-Najafabadi, Sara; Yazdani, Farzaneh; Akbarfahimi, Nazila; Havaei, Naser; Gharebaghy, Soraya
2017-12-01
Cerebral palsy (CP) is the most common cause of physical disabilities during childhood. Therapeutic interventions mainly focus on impairment reduction to address motor-based difficulties. In contrast, Cognitive Orientation to daily Occupational Performance (CO-OP) is a cognitive approach, providing intervention at the level of activity and participation. This study aims to determine whether the CO-OP approach improves motor skills and achievement in motor-based occupational performance goals in children with CP. In this mixed design research (i.e., a multiple baseline single case experimental design and a one-group pretest-posttest design), five children with CP participated in 12 CO-OP intervention sessions. Repeated measures of motor skills for the multiple baseline single case experimental design were taken using the Bruininks-Oseretsky Test of Motor Proficiency (BOTMP); pre- and post-measures of parent/child perception of performance and satisfaction were identified using the Canadian Occupational Performance Measure (COPM); level of achievement was identified using Goal Attainment Scaling (GAS). According to the BOTMP results, all children were able to engage in the CO-OP intervention to improve motor performance. Significant differences after treatment were found in both performance and performance satisfaction ratings using the COPM as rated by parents and children. The GAS results showed progress in achievement levels for all children; all goals were achieved or exceeded. CO-OP intervention can be helpful in improving motor skills and achieving self-identified, motor-based goals in children with CP. Copyright © 2017. Published by Elsevier Ltd.
SSVEP-based Experimental Procedure for Brain-Robot Interaction with Humanoid Robots.
Zhao, Jing; Li, Wei; Mao, Xiaoqian; Li, Mengfan
2015-11-24
Brain-Robot Interaction (BRI), which provides an innovative communication pathway between a human and a robotic device via brain signals, holds promise for assisting the disabled in their daily lives. The overall goal of our method is to establish an SSVEP-based experimental procedure by integrating multiple software programs, such as OpenViBE, Choregraph, and Central software as well as user-developed programs written in C++ and MATLAB, to enable the study of brain-robot interaction with humanoid robots. This is achieved by first placing EEG electrodes on a human subject to measure the brain responses through an EEG data acquisition system. A user interface is used to elicit SSVEP responses and to display video feedback in the closed-loop control experiments. The second step is to record the EEG signals of first-time subjects, to analyze their SSVEP features offline, and to train the classifier for each subject. Next, the Online Signal Processor and the Robot Controller are configured for the online control of a humanoid robot. As the final step, the subject completes three specific closed-loop control experiments within different environments to evaluate the brain-robot interaction performance. The advantage of this approach is its reliability and flexibility because it is developed by integrating multiple software programs. The results show that using this approach, the subject is capable of interacting with the humanoid robot via brain signals. This allows the mind-controlled humanoid robot to perform typical tasks that are popular in robotic research and are helpful in assisting the disabled.
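The offline SSVEP classification step can be illustrated with a toy frequency-domain detector. The sampling rate, the candidate flicker frequencies, and the synthetic "EEG" below are invented; practical pipelines typically also use harmonics or canonical correlation analysis rather than a single FFT bin:

```python
import numpy as np

def ssvep_classify(epoch, fs, candidates):
    """Pick the candidate flicker frequency whose (windowed) FFT bin
    carries the most power in a single-channel EEG epoch."""
    n = len(epoch)
    spec = np.abs(np.fft.rfft(epoch * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    bins = [np.argmin(np.abs(freqs - f)) for f in candidates]
    return candidates[int(np.argmax([spec[b] for b in bins]))]

# synthetic 4 s epoch: a 7.5 Hz SSVEP response buried in broadband noise
fs = 250
t = np.arange(int(fs * 4.0)) / fs
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 7.5 * t) + 0.8 * rng.standard_normal(t.size)
detected = ssvep_classify(eeg, fs, [6.0, 7.5, 8.57, 10.0])  # → 7.5
```

The detected frequency is then mapped to a robot command (e.g., walk forward or turn) by the robot controller.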
DOE Office of Scientific and Technical Information (OSTI.GOV)
Settens, Charles M.
2015-01-01
Simultaneous migration of planar transistors to FinFET architectures, the introduction of a plurality of materials to ensure suitable electrical characteristics, and the establishment of reliable multiple patterning lithography schemes to pattern sub-10 nm feature sizes impose formidable challenges on current in-line dimensional metrologies. Because the shape of a FinFET channel cross-section immediately influences the electrical characteristics, the evaluation of 3D device structures requires measurement of parameters beyond traditional critical dimension (CD), including their sidewall angles, top corner rounding and footing, roughness, recesses and undercuts at single nanometer dimensions; thus, metrologies require sub-nm and approaching atomic level measurement uncertainty. Synchrotron critical dimension small angle X-ray scattering (CD-SAXS) has unique capabilities to non-destructively monitor the cross-section shape of surface structures with single nanometer uncertainty and can perform overlay metrology to sub-nm uncertainty. In this dissertation, we perform a systematic experimental investigation using CD-SAXS metrology on a hierarchy of semiconductor 3D device architectures including high-aspect-ratio contact holes, H2 annealed Si fins, and a series of grating type samples at multiple points along a FinFET fabrication process, increasing in structural intricacy and ending with a fully fabricated FinFET. Comparative studies between CD-SAXS metrology and other relevant semiconductor dimensional metrologies, particularly CD-SEM, CD-AFM and TEM, are used to determine physical limits of the CD-SAXS approach for advanced semiconductor samples. CD-SAXS experimental tradeoffs, advice for model-dependent analysis and thoughts on the compatibility with a semiconductor manufacturing environment are discussed.
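The model-dependent analysis that CD-SAXS relies on can be hinted at with a toy 1D form factor: for a rectangular line of width w, the scattered intensity has minima spaced by 2π/w, which is how a CD is read off scattering data. This is a gross simplification of the trapezoid and stacked-shape models used in practice:

```python
import numpy as np

def grating_intensity(q, w):
    """|F(q)|^2 for a rectangular line of width w (1D cross-section model):
    F(q) = 2 sin(q w / 2) / q for q > 0; intensity minima at q = 2*pi*k/w."""
    F = 2.0 * np.sin(q * w / 2.0) / q
    return F * F

# example: a hypothetical 30 nm line, q in 1/nm; first minimum at q = 2*pi/30
w = 30.0
q = np.linspace(0.01, 1.0, 500)
I = grating_intensity(q, w)
```

Fitting a parameterized cross-section model to the measured minima positions and depths is what turns the scattering pattern into sidewall angle, corner rounding, and CD estimates.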
Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong
2011-01-21
Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough sets theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked by a dataset of 904 alanine-mutated residues and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, percentage of the change of accessible surface area, size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots through analyzing the distribution of amino acids. Copyright © 2010 Elsevier Ltd. All rights reserved.
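As a stand-in for the RS-MCLP classifier (which requires a linear programming solver), the sketch below fits a plain logistic regression on the four features the study found critical. The "hot spot" and "non-hot spot" feature distributions are entirely synthetic, invented only to show the shape of such a pipeline:

```python
import numpy as np

# synthetic features: dASA, fraction of dASA, residue size, atomic contacts
rng = np.random.default_rng(0)
n = 200
X_hot = rng.normal([30.0, 0.40, 140.0, 20.0], [8.0, 0.1, 25.0, 5.0], size=(n, 4))
X_non = rng.normal([12.0, 0.15, 110.0, 9.0], [8.0, 0.1, 25.0, 5.0], size=(n, 4))
X = np.vstack([X_hot, X_non])
y = np.r_[np.ones(n), np.zeros(n)]

# standardize, then fit logistic regression by batch gradient descent
mu, sd = X.mean(0), X.std(0)
Xs = (X - mu) / sd
w, b = np.zeros(4), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))   # predicted hot-spot probability
    w -= 0.1 * Xs.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()
acc = (((Xs @ w + b) > 0) == y).mean()
```

On real alanine-scanning data the separation is far noisier than in this toy setup, which is why the paper's feature selection and multi-criteria optimization matter.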
Data-driven analysis of functional brain interactions during free listening to music and speech.
Fang, Jun; Hu, Xintao; Han, Junwei; Jiang, Xi; Zhu, Dajiang; Guo, Lei; Liu, Tianming
2015-06-01
Natural stimulus functional magnetic resonance imaging (N-fMRI), such as fMRI acquired when participants were watching video streams or listening to audio streams, has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic natural stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses using N-fMRI by measuring the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions when multiple subjects were listening to music and speech in multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems including attention, memory, auditory/language, emotion, and action networks are among the most relevant brain systems involved in classic music, pop music and speech differentiation. Our study provides an alternative approach to investigating the human brain's mechanism in comprehension of complex natural music and speech.
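A common way to turn functional interactions into classifier features is to take pairwise correlations between ROI time series; the sketch below assumes nothing about the authors' specific network definitions and is only that generic step:

```python
import numpy as np

def connectivity_features(ts):
    """Vectorize a functional-interaction pattern: Pearson correlations
    between all pairs of ROI time series (upper triangle only)."""
    c = np.corrcoef(ts)                 # ts: (n_rois, n_timepoints)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

# e.g. 5 ROIs -> a 10-dimensional feature vector per subject/stimulus epoch
rng = np.random.default_rng(0)
feats = connectivity_features(rng.standard_normal((5, 100)))
```

Feature vectors computed this way per stimulus category can then feed any standard classifier to find the consistent, discriminative interactions.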
Collaborative sparse priors for multi-view ATR
NASA Astrophysics Data System (ADS)
Li, Xuelu; Monga, Vishal
2018-04-01
Recent work has seen a surge of sparse representation based classification (SRC) methods applied to automatic target recognition problems. While traditional SRC approaches used the l0 or l1 norm to quantify sparsity, spike and slab priors have established themselves as the gold standard for providing general tunable sparse structures on vectors. In this work, we employ collaborative spike and slab priors that can be applied to matrices to encourage sparsity for the problem of multi-view ATR. That is, target images captured from multiple views are expanded in terms of a training dictionary multiplied with a coefficient matrix. Ideally, for a test image set comprising multiple views of a target, coefficients corresponding to its identifying class are expected to be active, while others should be zero, i.e., the coefficient matrix is naturally sparse. We develop a new approach to solve the optimization problem that estimates the sparse coefficient matrix jointly with the sparsity inducing parameters in the collaborative prior. ATR problems are investigated on the mid-wave infrared (MWIR) database made available by the US Army Night Vision and Electronic Sensors Directorate, which has a rich collection of views. Experimental results show that the proposed joint prior and coefficient estimation method (JPCEM) can (1) enable improved accuracy when multiple views vs. a single one are invoked, and (2) outperform state of the art alternatives, particularly when training imagery is limited.
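A row-sparse coefficient matrix of the kind described above can be illustrated with a group-lasso proximal gradient solver. This is a simple convex stand-in, not the spike-and-slab inference the paper develops; dictionary, views, and the active atoms are synthetic:

```python
import numpy as np

def row_sparse_code(D, Y, lam=0.05, iters=1000):
    """Jointly code multiple views Y (d x v) over a dictionary D (d x k)
    with a row-sparsity (group-lasso) penalty on the coefficient matrix,
    via proximal gradient descent (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of grad
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(iters):
        Z = X - D.T @ (D @ X - Y) / L            # gradient step
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        X = Z * np.maximum(0.0, 1.0 - (lam / L) / np.maximum(norms, 1e-12))
    return X                                     # rows shrunk toward zero

# synthetic multi-view test: 3 views generated by atoms 2 and 7 only
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))
D /= np.linalg.norm(D, axis=0)
X_true = np.zeros((10, 3))
X_true[2] = [1.0, 1.5, 0.5]
X_true[7] = [-1.0, 0.5, 1.0]
X_hat = row_sparse_code(D, D @ X_true)
```

The row-wise shrinkage is what couples the views: an atom is kept only if it helps explain several views at once, mirroring the collaborative structure of the prior.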
Active Vibration Control for Helicopter Interior Noise Reduction Using Power Minimization
NASA Technical Reports Server (NTRS)
Mendoza, J.; Chevva, K.; Sun, F.; Blanc, A.; Kim, S. B.
2014-01-01
This report describes work performed by United Technologies Research Center (UTRC) for NASA Langley Research Center (LaRC) under Contract NNL11AA06C. The objective of this program is to develop technology to reduce helicopter interior noise resulting from multiple gear meshing frequencies. A novel active vibration control approach called Minimum Actuation Power (MAP) is developed. MAP is an optimal control strategy that minimizes the total input power into a structure by monitoring and varying the input power of controlling sources. MAP control was implemented without explicit knowledge of the phasing and magnitude of the excitation sources by driving the real part of the input power from the controlling sources to zero. It is shown that this occurs when the total mechanical input power from the excitation and controlling sources is a minimum. MAP theory is developed for multiple excitation sources with arbitrary relative phasing for single or multiple discrete frequencies and controlled by a single or multiple controlling sources. Simulations and experimental results demonstrate the feasibility of MAP for structural vibration reduction of a realistic rotorcraft interior structure. MAP control resulted in significant average global vibration reduction of a single frequency and multiple frequency excitations with one controlling actuator. Simulations also demonstrate the potential effectiveness of the observed vibration reductions on interior radiated noise.
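The power quantities MAP works with can be illustrated on a hypothetical two-source frequency-domain model. The mobility matrix and forces below are invented (chosen so the structure is passive); the sketch merely exhibits, by brute force, a control force that lowers the total input power relative to no control:

```python
import numpy as np

# Hypothetical 2x2 complex mobility matrix (v = M f) at one gear-mesh
# frequency; values invented, with the Hermitian part positive definite.
M = np.array([[2.0 + 1.0j, 0.6 - 0.4j],
              [0.4 + 0.2j, 1.5 - 0.8j]])
f1 = 1.0 + 0.5j                                  # fixed excitation force

def total_power(f2):
    """Time-averaged total mechanical input power for control force f2."""
    f = np.array([f1, f2])
    return 0.5 * np.real(np.vdot(f, M @ f))      # 0.5 Re{f^H v}

# brute-force the complex control force that minimizes total input power
grid = np.linspace(-2.0, 2.0, 161)
f2_opt = min((a + 1j * b for a in grid for b in grid), key=total_power)
```

In MAP itself the optimum is reached without knowing the primary force, by driving the real part of the controlling source's input power to zero; the search here only demonstrates that a minimizing control force exists and helps.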
Bardy, Fabrice; Dillon, Harvey; Van Dun, Bram
2014-04-01
Rapid presentation of stimuli in an evoked response paradigm can lead to overlap of multiple responses and consequently difficulties interpreting waveform morphology. This paper presents a deconvolution method allowing overlapping multiple responses to be disentangled. The deconvolution technique uses a least-squared error approach. A methodology is proposed to optimize the stimulus sequence associated with the deconvolution technique under low-jitter conditions. It controls the condition number of the matrices involved in recovering the responses. Simulations were performed using the proposed deconvolution technique. Multiple overlapping responses can be recovered perfectly in noiseless conditions. In the presence of noise, the amount of error introduced by the technique can be controlled a priori by the condition number of the matrix associated with the used stimulus sequence. The simulation results indicate the need for a minimum amount of jitter, as well as a sufficient number of overlap combinations to obtain optimum results. An aperiodic model is recommended to improve reconstruction. We propose a deconvolution technique allowing multiple overlapping responses to be extracted and a method of choosing the stimulus sequence optimal for response recovery. This technique may allow audiologists, psychologists, and electrophysiologists to optimize their experimental designs involving rapidly presented stimuli, and to recover evoked overlapping responses. Copyright © 2013 International Federation of Clinical Neurophysiology. All rights reserved.
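The least-squares deconvolution, and the role the stimulus sequence's condition number plays in it, can be sketched directly. The response shape, sequence length, and noise level below are invented:

```python
import numpy as np

def onset_matrix(onsets, n, m):
    """Convolution (design) matrix mapping an m-sample evoked response onto
    an n-sample recording, given stimulus onset indices (wrap-around)."""
    S = np.zeros((n, m))
    for t in onsets:
        for j in range(m):
            S[(t + j) % n, j] += 1.0
    return S

rng = np.random.default_rng(0)
n, m = 400, 60
onsets = rng.choice(n, size=25, replace=False)   # jittered stimulus train
S = onset_matrix(onsets, n, m)

# overlapping responses: mean onset spacing (16) << response length (60)
r_true = np.exp(-np.arange(m) / 12.0) * np.sin(np.arange(m) / 4.0)
y = S @ r_true + 0.02 * rng.standard_normal(n)

# least-squares deconvolution; cond(S) bounds the noise amplification
r_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
rel_err = np.linalg.norm(r_hat - r_true) / np.linalg.norm(r_true)
```

Choosing the onset jitter so that the design matrix stays well conditioned is exactly the sequence-optimization step the paper proposes; a poorly jittered train makes cond(S) blow up and the recovered response unreliable.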
High-quality slab-based intermixing method for fusion rendering of multiple medical objects.
Kim, Dong-Joon; Kim, Bohyoung; Lee, Jeongjin; Shin, Juneseuk; Kim, Kyoung Won; Shin, Yeong-Gil
2016-01-01
The visualization of multiple 3D objects has been increasingly required for recent applications in medical fields. Due to the heterogeneity in data representation or data configuration, it is difficult to efficiently render multiple medical objects in high quality. In this paper, we present a novel intermixing scheme for fusion rendering of multiple medical objects while preserving real-time performance. First, we present an in-slab visibility interpolation method for the representation of subdivided slabs. Second, we introduce virtual zSlab, which extends an infinitely thin boundary (such as polygonal objects) into a slab with a finite thickness. Finally, based on virtual zSlab and in-slab visibility interpolation, we propose a slab-based visibility intermixing method with the newly proposed rendering pipeline. Experimental results demonstrate that the proposed method delivers more effective multiple-object renderings in terms of rendering quality, compared to conventional approaches. The proposed intermixing scheme also provides high-quality intermixing results for the visualization of intersecting and overlapping surfaces by resolving aliasing and z-fighting problems. Moreover, two case studies are presented that apply the proposed method to real clinical applications. These case studies demonstrate that the proposed method offers notable advantages in rendering independence and reusability. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
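Slab intermixing ultimately reduces to front-to-back compositing of per-ray samples. The sketch below shows only that standard step, not the paper's in-slab visibility interpolation or virtual zSlab construction:

```python
def composite_front_to_back(samples):
    """Standard front-to-back alpha compositing of per-ray slab samples,
    each sample a (color, opacity) pair; early exit once nearly opaque."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c    # accumulate attenuated contribution
        alpha += (1.0 - alpha) * a
        if alpha > 0.999:                 # early ray termination
            break
    return color, alpha

# a bright semi-transparent slab in front of a dark one
result = composite_front_to_back([(1.0, 0.5), (0.0, 0.5)])  # → (0.5, 0.75)
```

The intermixing question the paper addresses is how samples from heterogeneous objects (volumes and polygonal surfaces) are merged into this single ordered list per ray without aliasing or z-fighting.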
A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network
NASA Astrophysics Data System (ADS)
Li, Yiming; Bhanu, Bir
Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and to hand this object off from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of the strengths and weaknesses of these techniques.
Experimental Methods for Protein Interaction Identification and Characterization
NASA Astrophysics Data System (ADS)
Uetz, Peter; Titz, Björn; Cagney, Gerard
There are dozens of methods for the detection of protein-protein interactions but they fall into a few broad categories. Fragment complementation assays such as the yeast two-hybrid (Y2H) system are based on split proteins that are functionally reconstituted by fusions of interacting proteins. Biophysical methods include structure determination and mass spectrometric (MS) identification of proteins in complexes. Biochemical methods include methods such as far western blotting and peptide arrays. Only the Y2H and protein complex purification combined with MS have been used on a larger scale. Due to the lack of data it is still difficult to compare these methods with respect to their efficiency and error rates. Current data does not favor any particular method and thus multiple experimental approaches are necessary to maximally cover the interactome of any target cell or organism.
Essential amino acids interacting with flavonoids: A theoretical approach
NASA Astrophysics Data System (ADS)
Codorniu-Hernández, Edelsys; Mesa-Ibirico, Ariel; Hernández-Santiesteban, Richel; Montero-Cabrera, Luis A.; Martínez-Luzardo, Francisco; Santana-Romero, Jorge L.; Borrmann, Tobias; Stohrer, Wolf-D.
The interaction of two flavonoid species (resorcinolic and phloroglucinolic) with the 20 essential amino acids was studied by the multiple minima hypersurface (MMH) procedure, through the AM1 and PM3 semiempirical methods. Remarkable thermodynamic data related to the properties of the molecular association of these compounds were obtained, which will be of great utility for future investigations concerning the interaction of flavonoids with proteins. These results are compared with experimental and classical force field results reported in the available literature, and new evidence and criteria are presented. The hydrophilic amino acids demonstrated high affinity in the interaction with flavonoid molecules; the complexes with lysine are especially stable. An affinity order for the interaction of both flavonoid species with the essential amino acids is suggested. Our theoretical results are compared with experimental evidence on flavonoid interactions with proteins of biomedical interest.
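The thermodynamic summary in an MMH-style treatment is essentially a Boltzmann-weighted aggregation over the local minima found on the interaction hypersurface. A minimal sketch (energies in kcal/mol; the values are illustrative, not the paper's AM1/PM3 results):

```python
import math

def mmh_association_energy(minima_kcal, T=298.15):
    """Boltzmann-weighted aggregate energy over a set of local minima:
    -RT ln( sum_i exp(-E_i / RT) )."""
    R = 1.987204e-3                      # gas constant, kcal/(mol K)
    beta = 1.0 / (R * T)
    z = sum(math.exp(-beta * e) for e in minima_kcal)
    return -math.log(z) / beta

# one minimum: the aggregate equals that minimum; two degenerate minima
# gain an entropic -RT ln(2) relative to a single one
e1 = mmh_association_energy([-5.0])
e2 = mmh_association_energy([-5.0, -5.0])
```

Ranking such aggregate energies across amino acids is what produces an affinity order like the one the abstract reports.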
Structural damage diagnostics via wave propagation-based filtering techniques
NASA Astrophysics Data System (ADS)
Ayers, James T., III
Structural health monitoring (SHM) of aerospace components is a rapidly emerging field due in part to commercial and military transport vehicles remaining in operation beyond their designed life cycles. Damage detection strategies are sought that provide real-time information of the structure's integrity. One approach that has shown promise to accurately identify and quantify structural defects is based on guided ultrasonic wave (GUW) inspections, where low amplitude attenuation properties allow for long range and large specimen evaluation. One drawback to GUWs is that they exhibit a complex multi-modal response, such that each frequency corresponds to at least two excited modes, and thus intelligent signal processing is required for even the simplest of structures. In addition, GUWs are dispersive, whereby the wave velocity is a function of frequency, and the shape of the wave packet changes over the spatial domain, requiring sophisticated detection algorithms. Moreover, existing damage quantification measures are typically formulated as a comparison of the damaged to undamaged response, which has proven to be highly sensitive to changes in environment, and therefore often unreliable. As a response to these challenges inherent to GUW inspections, this research develops techniques to locate and estimate the severity of the damage. Specifically, a phase gradient based localization algorithm is introduced to identify the defect position independent of excitation frequency and damage size. Mode separation through the filtering technique is central in isolating and extracting single mode components, such as reflected, converted, and transmitted modes that may arise from the incident wave impacting a damage. Spatially-integrated single and multiple component mode coefficients are also formulated with the intent to better characterize wave reflections and conversions and to increase the signal to noise ratios. 
The techniques are applied to damaged isotropic finite element plate models and experimental data obtained from Scanning Laser Doppler Vibrometry tests. Numerical and experimental parametric studies are conducted, and the current strengths and weaknesses of the proposed approaches are discussed. In particular, limitations to the damage profiling characterization are shown for low ultrasonic frequency regimes, whereas the multiple component mode conversion coefficients provide excellent noise mitigation. Multiple component estimation relies on an experimental technique developed for the estimation of Lamb wave polarization using a 1D Laser Vibrometer. Lastly, suggestions are made to apply the techniques to more structurally complex geometries.
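The phase-gradient idea can be demonstrated on a noiseless single-mode field, where the spatial gradient of the unwrapped analytic-signal phase recovers the wavenumber. Real guided-wave data are multi-modal and dispersive, so the mode-separation filtering described above must come first; the field below is synthetic:

```python
import numpy as np

def analytic_signal(u):
    """FFT-based analytic signal: zero out negative spatial frequencies."""
    n = len(u)
    U = np.fft.fft(u)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(U * h)

# single-mode harmonic field: unwrapped phase is linear in x, and its
# spatial gradient recovers the wavenumber k that localization tracks
x = np.linspace(0.0, 1.0, 2000, endpoint=False)
k = 40.0 * np.pi
u = np.cos(k * x + 0.3)
phase = np.unwrap(np.angle(analytic_signal(u)))
k_est = float(np.mean(np.gradient(phase, x)))
```

A damage site shows up as a local disturbance in this otherwise smooth phase gradient, independent of excitation frequency, which is the basis of the localization algorithm.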
Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A
2014-01-01
Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these dimensional spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple output regressor based on support vector machines. We also include a stage of feature selection that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals that are most informative for arousal and valence discrimination are the EEG, Electrooculogram/Electromyogram (EOG/EMG) and the Galvanic Skin Response (GSR).
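RFE itself is a small loop: fit the model, rank features by their weight norm across both outputs, drop the weakest, refit. The sketch below uses multi-output ridge regression as a stand-in for the support vector regressor, on synthetic data:

```python
import numpy as np

def rfe_multioutput(X, Y, n_keep, lam=1e-2):
    """Recursive feature elimination around a multi-output linear model
    (ridge regression here, not the paper's SVR): refit, then drop the
    feature with the smallest weight norm across the output dimensions."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        Xa = X[:, active]
        W = np.linalg.solve(Xa.T @ Xa + lam * np.eye(len(active)), Xa.T @ Y)
        active.pop(int(np.argmin(np.linalg.norm(W, axis=1))))
    return active

# synthetic check: only features 0 and 3 drive the two outputs
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6))
Y = np.column_stack([2.0 * X[:, 0] + X[:, 3],
                     X[:, 0] - 2.0 * X[:, 3]])
Y += 0.1 * rng.standard_normal(Y.shape)
kept = rfe_multioutput(X, Y, 2)
```

Ranking by the norm across outputs (rather than per output) is what lets the arousal and valence dimensions share a single feature subset, mirroring the joint modelling the paper advocates.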
Hyper sausage neuron: Recognition of transgenic sugar-beet based on terahertz spectroscopy
NASA Astrophysics Data System (ADS)
Liu, Jianjun; Li, Zhi; Hu, Fangrong; Chen, Tao; Du, Yong; Xin, Haitao
2015-01-01
This paper presents a novel approach for identification of terahertz (THz) spectra of genetically modified organisms (GMOs) based on the Hyper Sausage Neuron (HSN), and THz transmittance spectra of some typical transgenic sugar-beet samples are investigated to demonstrate its feasibility. Principal component analysis (PCA) is applied to extract features of the spectrum data, and instead of the original spectrum data, the feature signals are fed into the HSN pattern recognizer, a new multiple weights neural network (MWNN). The experimental result shows that the HSN model not only can correctly classify different types of transgenic sugar-beets, but also can reject non-similar samples of the same type. The proposed approach provides a new effective method for detection and identification of GMOs by using THz spectroscopy.
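One simple reading of a hyper sausage unit is a prototype line segment plus a radius: a sample is accepted when its distance to the segment is below the radius, and rejected otherwise, which also gives the rejection behavior described above. The PCA projection and the segment-distance test can be sketched as follows (interpretation of the HSN, not the authors' exact formulation):

```python
import numpy as np

def pca_features(X, k):
    """Project rows of X onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def sausage_distance(p, a, b):
    """Distance from point p to the segment [a, b]; a 'hyper sausage' unit
    accepts p when this distance is below its radius."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))
```

Classification then assigns a feature vector to the class whose sausage it falls inside, and rejects it when every class's distance exceeds its radius.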
Identification of Transgenic Organisms Based on Terahertz Spectroscopy and Hyper Sausage Neuron
NASA Astrophysics Data System (ADS)
Liu, J.; Li, Zh.; Hu, F.; Chen, T.; Du, Y.; Xin, H.
2015-03-01
This paper presents a novel approach for identification of terahertz (THz) spectra of genetically modified organisms (GMOs) based on hyper sausage neuron (HSN), and THz transmittance spectra of some typical transgenic sugar-beet samples are investigated to demonstrate its feasibility. Principal component analysis (PCA) is applied to extract features of the spectrum data, and instead of the original spectrum data, the feature signals are fed into the HSN pattern recognition, a new multiple weights neural network (MWNN). The experimental result shows that the HSN model not only can correctly classify different types of transgenic sugar-beets, but also can reject nonsimilar samples of the same type. The proposed approach provides a new effective method for detection and identification of genetically modified organisms by using THz spectroscopy.
Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew
2012-01-01
The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.
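IT-TI's aggregation step is straightforward: one thermodynamic integration per independent trajectory, then the mean and spread across trajectories. A sketch with made-up <dU/dλ> profiles:

```python
def trapezoid(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def ti_estimate(lambdas, dudl):
    """Single-run TI: integrate the ensemble average <dU/dlambda>."""
    return trapezoid(dudl, lambdas)

def it_ti(lambdas, runs):
    """IT-TI: one TI estimate per independent trajectory, then the mean
    and sample standard deviation across trajectories."""
    est = [ti_estimate(lambdas, r) for r in runs]
    m = sum(est) / len(est)
    var = sum((e - m) ** 2 for e in est) / (len(est) - 1)
    return m, var ** 0.5

# two hypothetical independent runs on a 3-point lambda schedule
mean_dg, spread = it_ti([0.0, 0.5, 1.0], [[0.0, 1.0, 2.0], [0.0, 3.0, 2.0]])
```

The spread across independent trajectories is what flags the finite-sampling under- or over-estimation that a single TI run would hide.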
NASA Astrophysics Data System (ADS)
Jones, A. A.; Holt, R. M.
2017-12-01
Image capturing in flow experiments has been used for fluid mechanics research since the early 1970s. Interactions of fluid flow between the vadose zone and the permanent water table are of great interest because this zone is responsible for all recharge waters, pollutant transport, and irrigation efficiency for agriculture. Griffith et al. (2011) developed an approach in which reproducible, "geologically realistic" sand configurations are deposited in sand-filled experimental chambers for light-transmitted flow visualization experiments. This method creates reproducible, reverse-graded, layered (stratified) thin-slab sand chambers for point-source experiments visualizing multiphase flow through porous media. Reverse-graded stratification of sand chambers mimics many naturally occurring sedimentary deposits. Sand-filled chambers use light as a nonintrusive tool for measuring water saturation in two dimensions (2-D). Homogeneous and heterogeneous sand configurations can be produced to visualize the complex physics of the unsaturated zone. The experimental procedure developed by Griffith et al. (2011) was designed using now outdated and obsolete equipment. We have modernized this approach with a new Parker Daedal linear actuator and programmed projects/code for multiple configurations. We have also updated the Roper CCD software and image-processing software with the latest industry standards. Modernization of the transmitted-light source, robotic equipment, redesigned experimental chambers, and newly developed analytical procedures have greatly reduced the time and cost per experiment. We have verified the ability of the new equipment to generate reproducible heterogeneous sand-filled chambers and demonstrated the functionality of the new equipment and procedures by reproducing several gravity-driven fingering experiments conducted by Griffith (2008).
Determination of ion mobility in EHD flow zone of plasma generator
NASA Astrophysics Data System (ADS)
Sumariyah, Kusminarto, Hermanto, Arief; Nuswantoro, Pekik
2015-12-01
The ion mobility in the EHD flow zone generated by a pin-concentric multiple-ring electrode has been determined, with a pin-single-ring electrode used as a comparator. The pin needle was made from stainless steel with a tip diameter of 0.18 mm. The concentric multiple-ring electrodes consist of three or two concentric rings made of metal and connected to each other. The rings of the three-concentric-ring electrode have diameters of 24 mm, 16 mm, and 8 mm, and the rings of the two-concentric-ring electrode have diameters of 24 mm and 16 mm. The single-ring electrode has a diameter of 24 mm. All rings have the same width and thickness, 2 mm and 3 mm, respectively. EHD flow was generated using a DC high voltage of 10 kV. The pin functions as the active corona-discharge electrode, while the ring electrodes act as ion collectors and passive electrodes. The experimental results show that the ion current is proportional to V², in agreement with Chouelo's calculation for the hyperbolic-field approach. Ion mobility was obtained from quadratic polynomial fitting of the experimental current-voltage data together with the Chouelo formulation. The results show that the ion mobility in the EHD flow zone is larger for the pin-concentric multiple-ring electrode than for the pin-single-ring electrode, with the pin-three-concentric-ring electrode having the largest ion mobility.
Integrated model of multiple kernel learning and differential evolution for EUR/USD trading.
Deng, Shangkun; Sakurai, Akito
2014-01-01
Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI) combined with the MKL prediction. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources, and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called moving average convergence divergence (MACD). Next, a combined trading signal is optimized by DE using the inputs from the prediction model and the technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.
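As an illustration of the RSI input fed to the DE-evolved trading rule, here is a minimal simple-average RSI; the price series is made up, and this is a sketch rather than the authors' implementation (which also uses Wilder-style smoothing variants in practice).

```python
def rsi(closes, period=14):
    """Relative Strength Index (simple-average variant) over the last
    `period` price changes; ranges from 0 (all losses) to 100 (all gains)."""
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    window = deltas[-period:]
    gains = sum(d for d in window if d > 0)
    losses = sum(-d for d in window if d < 0)
    if losses == 0:
        return 100.0  # no losing ticks in the window
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

# toy EUR/USD closes; a steadily rising series saturates the indicator
print(rsi([1.10 + 0.001 * i for i in range(20)]))  # 100.0
```

Values above ~70 are conventionally read as overbought and below ~30 as oversold, which is the kind of threshold DE can tune per timeframe.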
A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications
NASA Astrophysics Data System (ADS)
Robin, J.; Tanter, M.; Pernot, M.
2017-09-01
Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device depends only on the temporal length of its impulse response. Such multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device’s gain is usually a cumbersome, mostly empirical process requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in an empty cavity from multiple scattering in a multiple scattering medium. It was validated numerically and experimentally using a 2D-TRC and numerically using a 3D-TRC. Finally, the model was used to rapidly determine the optimal parameters of the 3D-TRC, which were then confirmed by numerical simulations.
Driver fatigue detection through multiple entropy fusion analysis in an EEG-based system
Min, Jianliang; Wang, Ping
2017-01-01
Driver fatigue is an important contributor to road accidents, and fatigue detection has major implications for transportation safety. The aim of this research is to analyze the multiple entropy fusion method and evaluate several channel regions to effectively detect a driver's fatigue state based on electroencephalogram (EEG) records. First, we fused multiple entropies, i.e., spectral entropy, approximate entropy, sample entropy and fuzzy entropy, as features compared with autoregressive (AR) modeling by four classifiers. Second, we captured four significant channel regions according to weight-based electrodes via a simplified channel selection method. Finally, the evaluation model for detecting driver fatigue was established with four classifiers based on the EEG data from four channel regions. Twelve healthy subjects performed continuous simulated driving for 1–2 hours with EEG monitoring on a static simulator. The leave-one-out cross-validation approach obtained an accuracy of 98.3%, a sensitivity of 98.3% and a specificity of 98.2%. The experimental results verified the effectiveness of the proposed method, indicating that the multiple entropy fusion features are significant factors for inferring the fatigue state of a driver. PMID:29220351
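One of the four fused features, spectral entropy, can be sketched with the standard library alone. The naive O(n²) DFT, the synthetic signals, and the normalization choice below are illustrative assumptions, not the paper's pipeline.

```python
import math

def spectral_entropy(signal):
    """Normalized Shannon entropy of the power spectrum: near 0 for a pure
    tone (power in one bin), near 1 when power is spread evenly."""
    n = len(signal)
    power = []
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        power.append(re * re + im * im)
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(power))  # normalize to [0, 1]

tone = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]  # single frequency
impulse = [1.0] + [0.0] * 63  # flat spectrum -> maximal spectral entropy
print(round(spectral_entropy(tone), 2), round(spectral_entropy(impulse), 2))  # 0.0 1.0
```

In the fatigue-detection setting, each EEG channel window would yield one such value, concatenated with approximate, sample, and fuzzy entropy into the fused feature vector.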
Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji
2017-03-23
In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by the 3D EM approach. In the proposed approach, first, many atomic models with different conformations are built by computer simulation. Then, simulated EM images are generated from each atomic model. Finally, they are compared with the experimental EM image. Two kinds of models are used as simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originates from a different conformation or orientation. We apply this approach to EM images of integrin to obtain the distribution of conformations, from which the pathway of the conformational change of the protein is deduced.
Anatomisation with slicing: a new privacy preservation approach for multiple sensitive attributes.
Susan, V Shyamala; Christopher, T
2016-01-01
An enormous quantity of personal health information has become available in recent decades, and tampering with any part of this information poses a great risk to the health care field. Existing anonymization methods, such as generalization and bucketization, are apt only for low-dimensional data with a single sensitive attribute. In this paper, an anonymization technique is proposed that combines the benefits of anatomization and an enhanced slicing approach, adhering to the principles of k-anonymity and l-diversity, for the purpose of dealing with high-dimensional data along with multiple sensitive attributes. The anatomization approach dissociates the correlation observed between the quasi-identifier attributes and sensitive attributes (SA) and yields two separate tables with non-overlapping attributes. In the enhanced slicing algorithm, vertical partitioning groups the correlated SA in the sensitive table together and thereby minimizes the dimensionality by employing the advanced clustering algorithm. In order to get the optimal size of buckets, tuple partitioning is conducted by MFA. The experimental outcomes indicate that the proposed method can preserve the privacy of data with numerous SA. The anatomization approach minimizes the loss of information, and the slicing algorithm helps in the preservation of correlation and utility, which in turn results in reduced data dimensionality and information loss. The advanced clustering algorithms prove their efficiency by minimizing time and complexity. Furthermore, this work adheres to the principles of k-anonymity and l-diversity, and thus avoids privacy threats such as membership, identity, and attribute disclosure.
Orsini, Luisa; Spanier, Katina I; DE Meester, Luc
2012-05-01
Natural populations are confronted with multiple selection pressures resulting in a mosaic of environmental stressors at the landscape level. Identifying the genetic underpinning of adaptation to these complex selection environments and assigning causes of natural selection within multidimensional selection regimes in the wild is challenging. The water flea Daphnia is a renowned ecological model system with its well-documented ecology, the possibility to analyse subfossil dormant egg banks and the short generation time allowing an experimental evolution approach. Capitalizing on the strengths of this model system, we here link candidate genome regions to three selection pressures, known to induce micro-evolutionary responses in Daphnia magna: fish predation, parasitism and land use. Using a genome scan approach in space, time and experimental evolution trials, we provide solid evidence of selection at the genome level under well-characterized environmental gradients in the wild and identify candidate genes linked to the three environmental stressors. Our study reveals differential selection at the genome level in Daphnia populations and provides evidence for repeatable patterns of local adaptation in a geographic mosaic of environmental stressors fuelled by standing genetic variation. Our results imply high evolutionary potential of local populations, which is relevant to understand the dynamics of trait changes in natural populations and their impact on community and ecosystem responses through eco-evolutionary feedbacks. © 2012 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon
2015-05-01
There is an increasing need for the development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems, driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field-of-view imaging and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments when robustness towards, e.g., field inhomogeneity is in focus. We have chosen three popular OC algorithms: two gradient-based, concurrent methods using first- and second-order derivatives, respectively, and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach when computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schüler, Emil; Trovati, Stefania; King, Gregory
Purpose: A key factor limiting the effectiveness of radiation therapy is normal tissue toxicity, and recent preclinical data have shown that ultra-high dose rate irradiation (>50 Gy/s, “FLASH”) potentially mitigates this effect. However, research in this field has been strongly limited by the availability of FLASH irradiators suitable for small animal experiments. We present a simple methodologic approach for FLASH electron small animal irradiation with a clinically available linear accelerator (LINAC). Methods and Materials: We investigated the FLASH irradiation potential of a Varian Clinac 21EX in both clinical mode and after tuning of the LINAC. We performed detailed FLUKA Monte Carlo and experimental dosimetric characterization at multiple experimental locations within the LINAC head. Results: Average dose rates of up to 74 Gy/s were achieved in clinical mode, and the dose rate after tuning exceeded 900 Gy/s. We obtained 220 Gy/s at 1-cm depth for a >4-cm field size with 90% homogeneity throughout a 2-cm-thick volume. Conclusions: We present an approach for using a clinical LINAC for FLASH irradiation. We obtained dose rates exceeding 200 Gy/s after simple tuning of the LINAC, with excellent dosimetric properties for small animal experiments. This will allow for increased availability of FLASH irradiation to the general research community.
Park, Chorong; Song, Misoon; Cho, Belong; Lim, Jaeyoung; Song, Wook; Chang, Heekyung; Park, Yeon-Hwan
2015-04-01
The purpose of this study was to develop a multi-disciplinary self-management intervention based on empowerment theory and to evaluate the effectiveness of the intervention for older adults with chronic illness. A randomized controlled trial design was used with 43 Korean older adults with chronic illness (experimental group=22, control group=21). The intervention consisted of two phases: (1) 8 weeks of multi-disciplinary, team-guided, group-based health education, exercise sessions, and individual empowerment counseling; (2) 16 weeks of self-help group activities, including weekly exercise and group discussion, to maintain the acquired self-management and problem-solving skills. Baseline, 8-week, and 24-week assessments measured health empowerment, exercise self-efficacy, physical activity, and physical function. Health empowerment, physical activity, and physical function in the experimental group increased significantly over time compared with the control group. Exercise self-efficacy increased significantly in the experimental group over time, but there was no significant difference between the two groups. The self-management program based on empowerment theory improved health empowerment, physical activity, and physical function in older adults. The study findings suggest that a health empowerment strategy may be an effective approach for older adults with multiple chronic illnesses in terms of achieving a sense of control over their chronic illness and actively engaging in self-management.
A review of covariate selection for non-experimental comparative effectiveness research.
Sauer, Brian C; Brookhart, M Alan; Roy, Jason; VanderWeele, Tyler
2013-11-01
This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, whereas adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. Copyright © 2013 John Wiley & Sons, Ltd.
Der, Bryan S.; Kluwe, Christien; Miklos, Aleksandr E.; Jacak, Ron; Lyskov, Sergey; Gray, Jeffrey J.; Georgiou, George; Ellington, Andrew D.; Kuhlman, Brian
2013-01-01
Reengineering protein surfaces to exhibit high net charge, referred to as “supercharging”, can improve reversibility of unfolding by preventing aggregation of partially unfolded states. Incorporation of charged side chains should be optimized while considering structural and energetic consequences, as numerous mutations and accumulation of like-charges can also destabilize the native state. A previously demonstrated approach deterministically mutates flexible polar residues (amino acids DERKNQ) with the fewest average neighboring atoms per side chain atom (AvNAPSA). Our approach uses Rosetta-based energy calculations to choose the surface mutations. Both protocols are available for use through the ROSIE web server. The automated Rosetta and AvNAPSA approaches for supercharging choose dissimilar mutations, raising an interesting division in surface charging strategy. Rosetta-supercharged variants of GFP (RscG) ranging from −11 to −61 and +7 to +58 were experimentally tested, and for comparison, we re-tested the previously developed AvNAPSA-supercharged variants of GFP (AscG) with +36 and −30 net charge. Mid-charge variants demonstrated ∼3-fold improvement in refolding with retention of stability. However, as we pushed to higher net charges, expression and soluble yield decreased, indicating that net charge or mutational load may be limiting factors. Interestingly, the two different approaches resulted in GFP variants with similar refolding properties. Our results show that there are multiple sets of residues that can be mutated to successfully supercharge a protein, and combining alternative supercharge protocols with experimental testing can be an effective approach for charge-based improvement to refolding. PMID:23741319
Adiabatic Quantum Computation: Coherent Control Back Action.
Goswami, Debabrata
2006-11-22
Though attractive from a scalability standpoint, optical approaches to quantum computing are highly prone to decoherence and rapid population loss due to nonradiative processes such as vibrational redistribution. We show that such effects can be reduced by adiabatic coherent control, in which quantum interference between multiple excitation pathways is used to cancel coupling to the unwanted, nonradiative channels. We focus on experimentally demonstrated adiabatic controlled population transfer experiments wherein the details of the coherence aspects are yet to be explored theoretically but are important for quantum computation. Such quantum computing schemes also form a back-action connection to coherent control developments.
Fok, Mable P; Prucnal, Paul R
2009-05-01
All-optical encryption for optical code-division multiple-access systems with interleaved waveband-switching modulation is experimentally demonstrated. The scheme explores dual-pump four-wave mixing in a 35 cm highly nonlinear bismuth oxide fiber to achieve XOR operation of the plaintext and the encryption key. Bit 0 and bit 1 of the encrypted data are represented by two different wavebands. Unlike on-off keying encryption methods, the encrypted data in this approach has the same intensity for both bit 0 and bit 1. Thus no plaintext or ciphertext signatures are observed.
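The logical operation performed optically by the four-wave mixing stage is the classic XOR stream cipher. Stripped of the optical implementation, it reduces to the sketch below (illustrative only; the bit patterns are made up):

```python
def xor_cipher(bits, key):
    """Bitwise XOR of a bit sequence with an equal-length key; applying
    the same key a second time recovers the plaintext."""
    return [b ^ k for b, k in zip(bits, key)]

plaintext = [0, 1, 1, 0, 1, 0, 0, 1]
key       = [1, 1, 0, 1, 0, 0, 1, 1]
ciphertext = xor_cipher(plaintext, key)
print(ciphertext)                                # [1, 0, 1, 1, 1, 0, 1, 0]
print(xor_cipher(ciphertext, key) == plaintext)  # True
```

The waveband-switching modulation then maps the resulting 0s and 1s to two wavebands of equal intensity, which is what removes the on-off-keying signature mentioned in the abstract.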
Demonstration of an SOA-assisted open metro-access infrastructure for heterogeneous services.
Schmuck, H; Bonk, R; Poehlmann, W; Haslach, C; Kuebart, W; Karnick, D; Meyer, J; Fritzsche, D; Weis, E; Becker, J; Freude, W; Pfeiffer, T
2014-01-13
An open converged metro-access network approach allows for sharing optical layer resources like fibers and optical spectrum among different services and operators. We demonstrated experimentally the feasibility of such a concept by the simultaneous operation of multiple services showing different modulation formats and multiplexing techniques. Flexible access nodes are implemented including semiconductor optical amplifiers to create a transparent and reconfigurable optical ring network. The impact of cascaded optical amplifiers on the signal quality is studied along the ring. In addition, the influence of high power rival signals in the same waveband and in the same fiber is analyzed.
Zilkha-Falb, Rina; Yosef-Hemo, Reut; Cohen, Lydia; Ben-Nun, Avraham
2011-01-01
Antigen-induced peripheral tolerance is potentially one of the most efficient and specific therapeutic approaches for autoimmune diseases. Although highly effective in animal models, antigen-based strategies have not yet been translated into practicable human therapy, and several clinical trials using a single antigen or peptidic-epitope in multiple sclerosis (MS) yielded disappointing results. In these clinical trials, however, the apparent complexity and dynamics of the pathogenic autoimmunity associated with MS, which result from the multiplicity of potential target antigens and “epitope spread”, have not been sufficiently considered. Thus, targeting pathogenic T-cells reactive against a single antigen/epitope is unlikely to be sufficient; to be effective, immunospecific therapy for MS should logically neutralize concomitantly T-cells reactive against as many major target antigens/epitopes as possible. We investigated such a “multi-epitope-targeting” approach in murine experimental autoimmune encephalomyelitis (EAE) associated with a single (“classical”) or multiple (“complex”) anti-myelin autoreactivities, using a cocktail of different encephalitogenic peptides vis-à-vis an artificial multi-epitope protein (designated Y-MSPc) encompassing rationally selected MS-relevant epitopes of five major myelin antigens, as “multi-epitope-targeting” agents. Y-MSPc was superior to peptide(s) in concomitantly downregulating pathogenic T-cells reactive against multiple myelin antigens/epitopes, via inducing more effective, longer lasting peripheral regulatory mechanisms (cytokine shift, anergy, and Foxp3+ CTLA4+ regulatory T-cells). Y-MSPc was also consistently more effective than the disease-inducing single peptide or peptide cocktail, not only in suppressing the development of “classical” or “complex EAE” or ameliorating ongoing disease, but most importantly, in reversing chronic EAE.
Overall, our data emphasize that a “multi-epitope-targeting” strategy is required for effective immune-specific therapy of organ-specific autoimmune diseases associated with complex and dynamic pathogenic autoimmunity, such as MS; our data further demonstrate that the “multi-epitope-targeting” approach to therapy is optimized through specifically designed multi-epitope-proteins, rather than myelin peptide cocktails, as “multi-epitope-targeting” agents. Such artificial multi-epitope proteins can be tailored to other organ-specific autoimmune diseases. PMID:22140475
Bayard, David S.; Neely, Michael
2016-01-01
An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a nonparametric model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher Information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the nonparametric model. Specifically, the problem of identifying an individual from a nonparametric prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient’s behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (Multiple-Model Optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications. PMID:27909942
Bayard, David S; Neely, Michael
2017-04-01
An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a NP model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the NP model. Specifically, the problem of identifying an individual from a NP prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient's behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (multiple-model optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications.
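The classification view of the design problem can be illustrated with a toy two-support-point model: choose the sampling time whose Gaussian-overlap bound on the misclassification (Bayes) risk is smallest. The one-compartment model, noise level, and candidate times below are assumptions for illustration, not the MMopt algorithm itself.

```python
import math

# hypothetical NP prior: two support points (elimination rate k, weight)
supports = [(0.1, 0.5), (0.3, 0.5)]
sigma = 0.05  # assumed measurement-noise standard deviation

def predicted(k, t, dose=1.0):
    """One-compartment IV-bolus concentration prediction at time t."""
    return dose * math.exp(-k * t)

def risk_bound(t):
    """Gaussian-overlap bound (up to the constant prior factor) on the risk
    of confusing the two support points from a single sample at time t:
    larger model separation relative to noise means a lower bound."""
    d = abs(predicted(supports[0][0], t) - predicted(supports[1][0], t))
    return math.exp(-d * d / (8 * sigma * sigma))

candidate_times = [0.5, 2.0, 5.0, 10.0, 20.0]
best = min(candidate_times, key=risk_bound)
print(best)  # 5.0
```

Very early samples see both models near the dose and very late samples see both near zero; the bound is minimized where the predicted concentrations diverge most, which is the intuition behind treating design as classification.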
Giacoppo, Sabrina; Iori, Renato; Bramanti, Placido
2017-01-01
Background: Neuropathic pain represents a major public health burden with a strong impact on quality of life in multiple sclerosis patients. Although some advances have been made in recent years, conventional therapies remain poorly effective. Thus, the discovery of innovative approaches to improve outcomes for multiple sclerosis patients is a goal of primary importance. With this aim, we investigated the efficacy of 4-(α-L-rhamnopyranosyloxy)benzyl isothiocyanate (moringin), purified from Moringa oleifera seeds and ready to use as a topical treatment, in experimental autoimmune encephalomyelitis, a murine model of multiple sclerosis. Female C57BL/6 mice immunized with myelin oligodendrocyte glycoprotein (MOG35–55) were topically treated with 2% moringin cream twice daily from the onset of symptoms until sacrifice, which occurred about 21 days after experimental autoimmune encephalomyelitis induction. Results: Our observations showed the efficacy of 2% moringin cream treatment in reducing clinical and histological disease scores, as well as in alleviating neuropathic pain, with consequent recovery of the hind limbs and of the response to mechanical stimuli. In particular, Western blot analysis and immunohistochemical evaluations revealed that 2% moringin cream was able to counteract the inflammatory cascade by reducing the production of pro-inflammatory cytokines (interleukin-17 and interferon-γ) and, in parallel, by increasing the expression of the anti-inflammatory cytokine interleukin-10. Interestingly, 2% moringin cream treatment was found to modulate the expression of voltage-gated ion channels (results focused on P2X7, Nav 1.7, Nav 1.8, Kv4.2, and α2δ-1) as well as metabotropic glutamate receptors (mGluR5 and xCT) involved in neuropathic pain initiation and maintenance. Conclusions: Finally, our evidence suggests 2% moringin cream as a new pharmacological trend in the management of multiple sclerosis-induced neuropathic pain. PMID:28741431
A framework for biomedical figure segmentation towards image-based document retrieval
2013-01-01
The figures included in many of the biomedical publications play an important role in understanding the biological experiments and facts described within. Recent studies have shown that it is possible to integrate the information that is extracted from figures in classical document classification and retrieval tasks in order to improve their accuracy. One important observation about the figures included in biomedical publications is that they are often composed of multiple subfigures or panels, each describing different methodologies or results. The use of these multimodal figures is a common practice in bioscience, as experimental results are graphically validated via multiple methodologies or procedures. Thus, for a better use of multimodal figures in document classification or retrieval tasks, as well as for providing the evidence source for derived assertions, it is important to automatically segment multimodal figures into subfigures and panels. This is a challenging task, however, as different panels can contain similar objects (e.g., bar charts and line charts) with multiple layouts. Also, certain types of biomedical figures are text-heavy (e.g., DNA sequence and protein sequence images) and they differ from traditional images. As a result, classical image segmentation techniques based on low-level image features, such as edges or color, are not directly applicable to robustly partition multimodal figures into single-modal panels. In this paper, we describe a robust solution for automatically identifying and segmenting unimodal panels from a multimodal figure. Our framework starts by robustly harvesting figure-caption pairs from biomedical articles. We base our approach on the observation that the document layout can be used to identify encoded figures and figure boundaries within PDF files. Taking into consideration the document layout allows us to correctly extract figures from the PDF document and associate their corresponding caption.
We combine pixel-level representations of the extracted images with information gathered from their corresponding captions to estimate the number of panels in the figure. Thus, our approach simultaneously identifies the number of panels and the layout of figures. In order to evaluate the approach described here, we applied our system on documents containing protein-protein interactions (PPIs) and compared the results against a gold standard that was annotated by biologists. Experimental results showed that our automatic figure segmentation approach surpasses pure caption-based and image-based approaches, achieving a 96.64% accuracy. To allow for efficient retrieval of information, as well as to provide the basis for integration into document classification and retrieval systems among others, we further developed a web-based interface that lets users easily retrieve panels containing the terms specified in their queries. PMID:24565394
Yu, Jingkai; Finley, Russell L
2009-01-01
High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
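The multi-training-set idea above can be illustrated with a small sketch. This is not the authors' actual scoring model; it assumes a toy representation in which each interaction carries a set of evidence types, scores each interaction against an evidence-frequency profile learned from one positive set, and averages the per-set scores to damp the bias of any single training set (all function names are hypothetical):

```python
def evidence_profile(positive_set):
    """Relative frequency of each evidence type among a set of likely-true interactions."""
    counts = {}
    total = 0
    for evidences in positive_set:
        for ev in evidences:
            counts[ev] = counts.get(ev, 0) + 1
            total += 1
    return {ev: c / total for ev, c in counts.items()}

def score_interaction(evidences, profile):
    """Naive per-set score: mean profile weight of the interaction's evidence types."""
    if not evidences:
        return 0.0
    return sum(profile.get(ev, 0.0) for ev in evidences) / len(evidences)

def consensus_score(evidences, positive_sets):
    """Average the per-set scores so no single (possibly biased) training set dominates."""
    profiles = [evidence_profile(ps) for ps in positive_sets]
    return sum(score_interaction(evidences, p) for p in profiles) / len(profiles)
```

Because the final confidence is an average across independently derived profiles, an interaction supported by evidence that only one idiosyncratic training set favors is down-weighted relative to evidence common to all sets.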
Statistical Engineering in Air Traffic Management Research
NASA Technical Reports Server (NTRS)
Wilson, Sara R.
2015-01-01
NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixations in free-viewing scenes with an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have substantially improved human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is designed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. Our model is trained with deep supervision, where supervision is fed directly into multiple intermediate layers, instead of the previous practice of providing supervision only at the output layer and propagating it back to earlier layers. The model thus incorporates multi-level saliency predictions within a single network, which significantly reduces the redundancy of earlier approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data shows that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
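A minimal numerical sketch of the reduced-rank idea (not the LOAPEX processing chain): estimate an EOF basis from an ensemble of mode time series via the SVD, then measure how much of a new arrival's energy falls inside the low-rank subspace, the kind of statistic a matched subspace detector thresholds. numpy is assumed and all names are illustrative:

```python
import numpy as np

def eof_basis(X, rank):
    """Leading EOFs (principal directions) of an ensemble of mode time series.
    Rows of X are receptions, columns are time samples."""
    Xc = X - X.mean(axis=0)                 # remove the ensemble mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:rank]                        # shape (rank, n_samples)

def subspace_energy(y, basis):
    """Fraction of a new arrival's energy captured by the reduced-rank EOF
    subspace; a matched-subspace-style detector thresholds a statistic like this."""
    proj = basis.T @ (basis @ y)            # orthogonal projection onto the subspace
    return float(proj @ proj) / float(y @ y)
```

When the ensemble really is low rank, a handful of EOFs capture nearly all the energy of a fresh realization, while broadband noise spreads its energy across all directions and scores low.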
Mouse epileptic seizure detection with multiple EEG features and simple thresholding technique
NASA Astrophysics Data System (ADS)
Tieng, Quang M.; Anbazhagan, Ashwin; Chen, Min; Reutens, David C.
2017-12-01
Objective. Epilepsy is a common neurological disorder characterized by recurrent, unprovoked seizures. The search for new treatments for seizures and epilepsy relies upon studies in animal models of epilepsy. To capture data on seizures, many applications require prolonged electroencephalography (EEG) with recordings that generate voluminous data. The desire for efficient evaluation of these recordings motivates the development of automated seizure detection algorithms. Approach. A new seizure detection method is proposed, based on multiple features and a simple thresholding technique. The features are derived from chaos theory, information theory and the power spectrum of EEG recordings and optimally exploit both linear and nonlinear characteristics of EEG data. Main result. The proposed method was tested with real EEG data from an experimental mouse model of epilepsy and distinguished seizures from other patterns with high sensitivity and specificity. Significance. The proposed approach introduces two new features: negative logarithm of adaptive correlation integral and power spectral coherence ratio. The combination of these new features with two previously described features, entropy and phase coherence, improved seizure detection accuracy significantly. Negative logarithm of adaptive correlation integral can also be used to compute the duration of automatically detected seizures.
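The multi-feature thresholding scheme can be sketched in a simplified form. Shannon entropy stands in for the entropy feature named above, while line length is a generic stand-in for the study's other features (the adaptive correlation integral and coherence measures are more involved); a window is flagged only when every feature exceeds its threshold. All thresholds and names here are illustrative:

```python
import math

def shannon_entropy(window, bins=8):
    """Shannon entropy (bits) of the amplitude histogram of one EEG window."""
    lo, hi = min(window), max(window)
    width = (hi - lo) / bins or 1.0          # guard against a flat window
    counts = [0] * bins
    for x in window:
        i = min(int((x - lo) / width), bins - 1)
        counts[i] += 1
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def line_length(window):
    """Sum of absolute sample-to-sample differences; large during high-amplitude activity."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def detect_seizure_windows(signal, win=256, thresholds=(2.0, 50.0)):
    """Flag a window as seizure-like only when every feature exceeds its threshold."""
    flags = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        feats = (shannon_entropy(w), line_length(w))
        flags.append(all(f > t for f, t in zip(feats, thresholds)))
    return flags
```

Requiring all features to fire at once is what keeps the simple thresholding specific: a low-amplitude drift may show high entropy but fails the line-length test, and vice versa.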
Adapting Word Embeddings from Multiple Domains to Symptom Recognition from Psychiatric Notes
Zhang, Yaoyun; Li, Hee-Jin; Wang, Jingqi; Cohen, Trevor; Roberts, Kirk; Xu, Hua
2018-01-01
Mental health is increasingly recognized as an important topic in healthcare. Information concerning psychiatric symptoms is critical for the timely diagnosis of mental disorders, as well as for the personalization of interventions. However, the diversity and sparsity of psychiatric symptoms make it challenging for conventional natural language processing techniques to automatically extract such information from clinical text. To address this problem, this study takes the initiative to use and adapt word embeddings from four source domains – intensive care, biomedical literature, Wikipedia and Psychiatric Forum – to recognize symptoms in the target domain of psychiatry. We investigated four different approaches: 1) only using word embeddings of the source domain, 2) directly combining data of the source and target to generate word embeddings, 3) assigning different weights to word embeddings, and 4) retraining the word embedding model of the source domain using a corpus of the target domain. To the best of our knowledge, this is the first work to adapt multiple word embeddings of external domains to improve psychiatric symptom recognition in clinical text. Experimental results showed that the last two approaches outperformed the baseline methods, indicating the effectiveness of our new strategies to leverage embeddings from other domains. PMID:29888086
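Approach 3 (assigning weights to the source embeddings) can be sketched as a per-word weighted average over several embedding tables. This is an illustrative reconstruction, not the authors' implementation; it represents each table as a plain dict of word vectors and simply skips sources that lack a given word:

```python
def combine_embeddings(sources, weights):
    """Weighted combination of word vectors from several source-domain embedding
    tables (dicts of word -> vector). Words missing from a source contribute
    nothing, and the weights of the sources that do contain the word are renormalized."""
    vocab = set().union(*sources)
    combined = {}
    for word in vocab:
        acc, total = None, 0.0
        for table, w in zip(sources, weights):
            if word in table:
                vec = table[word]
                if acc is None:
                    acc = [0.0] * len(vec)
                acc = [a + w * v for a, v in zip(acc, vec)]
                total += w
        combined[word] = [a / total for a in acc]
    return combined
```

Renormalizing by the weights actually present keeps vectors for rare, domain-specific words (seen in only one source) on the same scale as vectors for common words.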
Emotion recognition based on multiple order features using fractional Fourier transform
NASA Astrophysics Data System (ADS)
Ren, Bo; Liu, Deyin; Qi, Lin
2017-07-01
In order to address the shortcomings of recent algorithms based on the Two-Dimensional Fractional Fourier Transform (2D-FrFT), this paper proposes a multiple-order-feature based method for emotion recognition. Most existing methods utilize the features of a single order, or a couple of orders, of the 2D-FrFT. However, different orders of the 2D-FrFT contribute differently to feature extraction for emotion recognition, and combining these features can enhance the performance of an emotion recognition system. The proposed approach extracts features at multiple orders of the 2D-FrFT along both the x-axis and the y-axis, and uses their statistical magnitudes as the final feature vectors for recognition. A Support Vector Machine (SVM) is utilized for classification, and the RML Emotion database and the Cohn-Kanade (CK) database are used for the experiments. The experimental results demonstrate the effectiveness of the proposed method.
Multiple-instance ensemble learning for hyperspectral images
NASA Astrophysics Data System (ADS)
Ergul, Ugur; Bilgin, Gokhan
2017-10-01
An ensemble framework for multiple-instance (MI) learning (MIL) is introduced for use in hyperspectral images (HSIs), inspired by the bagging (bootstrap aggregation) method in ensemble learning. Ensemble-based bagging is performed with a small percentage of the training samples, and MI bags are formed by a local windowing process with variable window sizes on the selected instances. In addition to bootstrap aggregation, random subspace selection is used to further diversify the base classifiers. The proposed method is implemented using four MIL classification algorithms. The classifier model learning phase is carried out with MI bags, and the estimation phase is performed over single test instances. In the experimental part of the study, two different HSIs that have ground-truth information are used, and comparative results are demonstrated with state-of-the-art classification methods. In general, the MI ensemble approach produces more compact results in terms of both diversity and error compared to equivalent non-MIL algorithms.
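The bag-formation step (bootstrap sampling plus random feature subspaces) might look like the following toy sketch; the local windowing details and the four MIL classifiers from the study are omitted, and all names are hypothetical:

```python
import random

def make_mi_bags(instances, n_bags, bag_size, subspace_dim, seed=0):
    """Bootstrap-sample instances into multiple-instance bags, each restricted to
    its own random feature subspace (returned as sorted feature indices), so that
    base classifiers trained on different bags are diversified."""
    rng = random.Random(seed)
    n_features = len(instances[0])
    bags = []
    for _ in range(n_bags):
        members = [rng.choice(instances) for _ in range(bag_size)]        # bootstrap draw
        subspace = sorted(rng.sample(range(n_features), subspace_dim))   # random subspace
        bags.append(([[inst[j] for j in subspace] for inst in members], subspace))
    return bags
```

Each bag then trains one base MIL classifier; at prediction time a single test instance is projected into each bag's subspace and the base decisions are aggregated.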
NASA Astrophysics Data System (ADS)
Gao, Shibo; Cheng, Yongmei; Song, Chunhua
2013-09-01
Vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to accurately determine the relative orientation and position of the drogue and the probe for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is challenging due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, drogue detection is treated as a moving object detection problem, and a drogue detection algorithm based on low-rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
Nonlinear two-dimensional terahertz photon echo and rotational spectroscopy in the gas phase.
Lu, Jian; Zhang, Yaqing; Hwang, Harold Y; Ofori-Okai, Benjamin K; Fleischer, Sharly; Nelson, Keith A
2016-10-18
Ultrafast 2D spectroscopy uses correlated multiple light-matter interactions for retrieving dynamic features that may otherwise be hidden under the linear spectrum; its extension to the terahertz regime of the electromagnetic spectrum, where a rich variety of material degrees of freedom reside, remains an experimental challenge. We report a demonstration of ultrafast 2D terahertz spectroscopy of gas-phase molecular rotors at room temperature. Using time-delayed terahertz pulse pairs, we observe photon echoes and other nonlinear signals resulting from molecular dipole orientation induced by multiple terahertz field-dipole interactions. The nonlinear time domain orientation signals are mapped into the frequency domain in 2D rotational spectra that reveal J-state-resolved nonlinear rotational dynamics. The approach enables direct observation of correlated rotational transitions and may reveal rotational coupling and relaxation pathways in the ground electronic and vibrational state.
NASA Astrophysics Data System (ADS)
Volz, Pierre; Brodwolf, Robert; Zoschke, Christian; Haag, Rainer; Schäfer-Korting, Monika; Alexiev, Ulrike
2018-05-01
We report here on a custom-built time-correlated single photon-counting (TCSPC)-based fluorescence lifetime imaging microscopy (FLIM) setup with a continuously tunable white-light supercontinuum laser combined with acousto-optical tunable filters (AOTF) as an excitation source for simultaneous excitation of multiple spectrally separated fluorophores. We characterized the wavelength dependence of the white-light supercontinuum laser pulse properties and demonstrated the performance of the FLIM setup, presenting the experimental setup in depth together with a biomedical application. We summarize the physical-technical parameters as well as our approach to mapping the skin uptake of nanocarriers using FLIM with a spatial resolution that complements spectroscopic analysis. As an example, we focus on the penetration of indocarbocyanine-labeled dendritic core-multishell nanocarriers (CMS-ICC) into reconstructed human epidermis. Unique fluorescence lifetime signatures of the indocarbocyanine-labeled nanocarriers indicate nanocarrier-tissue interactions within reconstructed human epidermis, bringing FLIM close to spectroscopic analysis.
Highly efficient generation of broadband cascaded four-wave mixing products.
Cerqueira S, Arismar; Boggio, J M Chavez; Rieznik, A A; Hernandez-Figueroa, H E; Fragnito, H L; Knight, J C
2008-02-18
We propose a novel way to efficiently generate broadband cascaded Four-Wave Mixing (FWM) products. It consists of launching two strong pump waves near the zero-dispersion wavelength of a very short (on the order of a few meters) optical fiber. Simulations based on the Split-Step Fourier Method (SSFM) and experimental data demonstrate the efficiency of our new approach. Multiple FWM products have been investigated using conventional fibers and ultra-flattened dispersion photonic crystal fibers (UFD-PCFs). Measured results show bandwidths of 300 nm with up to 118 FWM products. We have also demonstrated a flat bandwidth of 110 nm covering the C and L bands, with a variation of only 1.2 dB between the powers of the FWM products, using highly nonlinear fibers (HNLFs). The use of UFD-PCFs has proven advantageous for improving the multiple-FWM efficiency and reducing the separation between the pump wavelengths.
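As background, the SSFM mentioned above alternates a linear dispersion step in the frequency domain with a Kerr nonlinearity step in the time domain. A minimal scalar sketch follows (sign conventions and step splitting vary between treatments, and this is not the authors' simulation code); it shows how two co-propagating pumps generate new mixing frequencies, the seed of the cascaded FWM products:

```python
import numpy as np

def split_step_fourier(A0, dist, n_steps, beta2, gamma, dt):
    """Propagate a complex field envelope A0 through a fiber by alternating a
    dispersion step (multiplication in the frequency domain) with a Kerr
    nonlinearity step (self-phase modulation in the time domain)."""
    n = len(A0)
    dz = dist / n_steps
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)         # angular frequency grid
    disp = np.exp(0.5j * beta2 * w**2 * dz)          # linear dispersion operator
    A = A0.astype(complex)
    for _ in range(n_steps):
        A = np.fft.ifft(np.fft.fft(A) * disp)        # dispersion
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # Kerr nonlinearity
    return A
```

Both sub-steps are pure phase multiplications, so the total pulse energy is conserved even as power flows into new FWM sidebands spaced by the pump separation.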
Mayyas, Fadia; Alzoubi, Karem H.; Van Wagoner, David R.
2014-01-01
Atrial fibrillation (AF), the most common cardiac arrhythmia, is an electrocardiographic description of a condition with multiple and complex underlying mechanisms. Oxidative stress is an important driver of the structural remodeling that creates a substrate for AF. Oxidant radicals may promote atrial oxidative damage, electrical and structural remodeling, and atrial inflammation. AF and other cardiovascular morbidities activate angiotensin (Ang-II)-dependent and -independent cascades. A key component of the renin-angiotensin-aldosterone system (RAAS) is the mineralocorticoid aldosterone. Recent studies provide evidence of myocardial aldosterone synthesis. Aldosterone promotes cardiac oxidative stress, inflammation, and structural/electrical remodeling via multiple mechanisms. In heart failure (HF) patients, aldosterone production is enhanced. In patients and in experimental HF and AF models, aldosterone receptor antagonists have favorable effects on cardiac remodeling and oxidative stress. Therapeutic approaches that seek to reduce AF burden by modulating the aldosterone system are likely beneficial but underutilized. PMID:23993726
Tracking by Identification Using Computer Vision and Radio
Mandeljc, Rok; Kovačič, Stanislav; Kristan, Matej; Perš, Janez
2013-01-01
We present a novel system for detection, localization and tracking of multiple people, which fuses a multi-view computer vision approach with a radio-based localization system. The proposed fusion combines the best of both worlds: excellent localization from computer vision and strong identity information from the radio system. It is therefore able to perform tracking by identification, which makes it impervious to propagated identity switches. We present a comprehensive methodology for the evaluation of systems that perform person localization in a world coordinate system, and use it to evaluate the proposed system as well as its components. Experimental results on a challenging indoor dataset, which involves multiple people walking around a realistically cluttered room, confirm that the proposed fusion significantly outperforms its individual components. Compared to the radio-based system, it achieves better localization results, while at the same time it successfully prevents the propagation of identity switches that occur in pure computer-vision-based tracking. PMID:23262485
Ji, Jim; Wright, Steven
2005-01-01
Parallel imaging using multiple phased-array coils and receiver channels has become an effective approach to high-speed magnetic resonance imaging (MRI). To obtain high spatiotemporal resolution, the k-space is subsampled and later interpolated using multiple channel data. Higher subsampling factors result in faster image acquisition. However, the subsampling factors are upper-bounded by the number of parallel channels. Phase constraints have been previously proposed to overcome this limitation with some success. In this paper, we demonstrate that in certain applications it is possible to obtain acceleration factors of potentially up to twice the number of channels by using a real-image constraint. Data acquisition and processing methods to manipulate and estimate the image phase information are presented for improving image reconstruction. In-vivo brain MRI experimental results show that accelerations of up to 6 are feasible with 4-channel data.
James, Joseph; Murukeshan, Vadakke Matham; Woh, Lye Sun
2014-07-01
The structural and molecular heterogeneities of biological tissues demand interrogation of the samples with multiple energy sources, together with visualization capabilities at varying spatial resolution and depth scales, to obtain complementary diagnostic information. A novel multi-modal imaging approach that uses optical and acoustic energies to perform photoacoustic, ultrasound and fluorescence imaging at multiple resolution scales, from the tissue surface and at depth, is proposed in this paper. The system comprises two distinct forms of hardware-level integration so as to form an integrated imaging system under a single instrumentation set-up. The experimental studies show that the system is capable of mapping high-resolution fluorescence signatures from the surface, and optical absorption and acoustic heterogeneities along the depth (>2 cm) of the tissue, at multi-scale resolution (<1 µm to <0.5 mm).
High Dynamic Range Imaging Using Multiple Exposures
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei
2017-06-01
It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of the histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method, which outperforms other methods in terms of imaging quality.
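The weighted-summation step can be illustrated under the simplifying assumption of a linear camera response (the paper instead recovers the CRF from a high-order polynomial). A triangle weight trusts mid-range pixel values most, so under- and over-exposed pixels in any one exposure contribute little; all names here are illustrative:

```python
def weight(z):
    """Triangle weighting for 8-bit pixels: trust mid-range values,
    distrust under- and over-exposed ones."""
    return min(z, 255 - z)

def fuse_hdr(exposures, times):
    """Per-pixel weighted average of the radiance estimates z / t across
    exposures, assuming (for this sketch only) a linear camera response."""
    n_pixels = len(exposures[0])
    radiance = []
    for p in range(n_pixels):
        num = den = 0.0
        for img, t in zip(exposures, times):
            w = weight(img[p])
            num += w * img[p] / t       # each exposure's radiance estimate
            den += w
        radiance.append(num / den if den else 0.0)
    return radiance
```

For a pixel recorded well within range in every exposure, each term z/t estimates the same scene radiance, so the weighted average recovers it while suppressing the noisy clipped estimates.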
Snapshot Hyperspectral Volumetric Microscopy
NASA Astrophysics Data System (ADS)
Wu, Jiamin; Xiong, Bo; Lin, Xing; He, Jijun; Suo, Jinli; Dai, Qionghai
2016-04-01
The comprehensive analysis of biological specimens creates a demand for capturing the spatial, temporal and spectral dimensions of visual information together. However, such high-dimensional video acquisition faces major challenges in achieving large data throughput and effective multiplexing. Here, we report snapshot hyperspectral volumetric microscopy, which computationally reconstructs hyperspectral profiles for high-resolution volumes of ~1000 μm × 1000 μm × 500 μm at video rate via a novel four-dimensional (4D) deconvolution algorithm. We validated the proposed approach with numerical simulations for quantitative evaluation and with various real experimental results on the prototype system. Different applications, such as biological component analysis in bright field and spectral unmixing of multiple fluorophores, are demonstrated. The experiments on moving fluorescent beads and GFP-labelled Drosophila larvae indicate the great potential of our method for observing multiple fluorescent markers in dynamic specimens.
Experimental evaluation of candidate graphical microburst alert displays
NASA Technical Reports Server (NTRS)
Wanke, Craig R.; Hansman, R. John
1992-01-01
A piloted flight simulator experiment was conducted to evaluate issues related to the display of microburst alerts on electronic cockpit instrumentation. Issues addressed include display clarity, usefulness of multilevel microburst intensity information, and whether information from multiple sensors should be presented separately or 'fused' into combined alerts. Nine active airline pilots of 'glass cockpit' aircraft participated in the study. Microburst alerts presented on a moving map display were found to be visually clear and useful to pilots. Also, multilevel intensity information coded by colors or patterns was found to be important for decision making purposes. Pilot opinion was mixed on whether to 'fuse' data from multiple sensors, and some resulting design tradeoffs were identified. The positional information included in the graphical alert presentation was found useful by the pilots for planning lateral missed approach maneuvers, but may result in deviations which could interfere with normal airport operations. A number of flight crew training issues were also identified.
Multiple Correlation versus Multiple Regression.
ERIC Educational Resources Information Center
Huberty, Carl J.
2003-01-01
Describes differences between multiple correlation analysis (MCA) and multiple regression analysis (MRA), showing how these approaches involve different research questions and study designs, different inferential approaches, different analysis strategies, and different reported information. (SLD)
Testing a Dutch web-based tailored lifestyle programme among adults: a study protocol.
Schulz, Daniela N; Kremers, Stef Pj; van Osch, Liesbeth Adm; Schneider, Francine; van Adrichem, Mathieu Jg; de Vries, Hein
2011-02-16
Smoking, high alcohol consumption, unhealthy eating habits and physical inactivity often lead to (chronic) diseases, such as cardiovascular diseases and cancer. Tailored online interventions have been proven to be effective in changing health behaviours. The aim of this study is to test and compare the effectiveness of two different tailoring strategies for changing lifestyle compared to a control group using a multiple health behaviour web-based approach. In our Internet-based tailored programme, the five lifestyle behaviours of smoking, alcohol intake, fruit consumption, vegetable consumption, and physical activity are addressed. This randomized controlled trial, conducted among Dutch adults, includes two experimental groups (i.e., a sequential behaviour tailoring condition and a simultaneous behaviour tailoring condition) and a control group. People in the sequential behaviour tailoring condition obtain feedback on whether their lifestyle behaviours meet the Dutch recommendations. Using a step-by-step approach, they are stimulated to continue with a computer tailored module to change only one unhealthy behaviour first. In the course of the study, they can proceed to change a second behaviour. People in the simultaneous behaviour tailoring condition receive computer tailored feedback about all their unhealthy behaviours during their first visit as a stimulation to change all unhealthy behaviours. The experimental groups can re-visit the website and can then receive ipsative feedback (i.e., current scores are compared to previous scores in order to give feedback about potential changes). The (difference in) effectiveness of the different versions of the programme will be tested and compared to a control group, in which respondents only receive a short health risk appraisal. Programme evaluations will assess satisfaction with and appreciation and personal relevance of the intervention among the respondents. 
Finally, potential subgroup differences pertaining to gender, age and socioeconomic status regarding the behaviour effects and programme evaluation will be assessed. Research regarding multiple behaviour change is in its infancy. We study how to offer multiple behaviour change interventions optimally. Using these results could strengthen the effectiveness of web-based computer-tailoring lifestyle programmes. This study will yield new results about the need for differential lifestyle approaches using Internet-based expert systems and potential differences in subgroups concerning the effectiveness and appreciation. Dutch Trial Register NTR2168.
Hierarchical approaches for systems modeling in cardiac development.
Gould, Russell A; Aboulmouna, Lina M; Varner, Jeffrey D; Butcher, Jonathan T
2013-01-01
Ordered cardiac morphogenesis and function are essential for all vertebrate life. The heart begins as a simple contractile tube, but quickly grows and morphs into a multichambered pumping organ complete with valves, while maintaining regulation of blood flow and nutrient distribution. Though not identical, cardiac morphogenesis shares many molecular and morphological processes across vertebrate species. Quantitative data across multiple time and length scales have been gathered through decades of reductionist single-variable analyses. These range from detailed molecular signaling pathways at the cellular level to cardiac function at the tissue/organ level. However, none of these components act in true isolation from the others, and each, in turn, exhibits short- and long-range effects in both time and space. In the absence of a gene, entire signaling cascades and genetic profiles may be shifted, resulting in complex feedback mechanisms. Also taking into account local microenvironmental changes throughout development, it is apparent that a systems-level approach is an essential resource to accelerate information generation concerning the functional relationships across multiple length scales (molecular data vs physiological function) and structural development. In this review, we discuss relevant in vivo and in vitro experimental approaches, compare different computational frameworks for systems modeling, and present the latest information about systems modeling of cardiac development. Finally, we conclude with some important future directions for cardiac systems modeling. Copyright © 2013 Wiley Periodicals, Inc.
Davidsen, Peter K; Turan, Nil; Egginton, Stuart; Falciani, Francesco
2016-02-01
The overall aim of physiological research is to understand how living systems function in an integrative manner. Consequently, the discipline of physiology has since its infancy attempted to link multiple levels of biological organization. Increasingly this has involved mathematical and computational approaches, typically to model a small number of components spanning several levels of biological organization. With the advent of "omics" technologies, which can characterize the molecular state of a cell or tissue (intended as the level of expression and/or activity of its molecular components), the number of molecular components we can quantify has increased exponentially. Paradoxically, this unprecedented amount of experimental data has made it more difficult to derive conceptual models underlying the essential mechanisms regulating mammalian physiology. We present an overview of state-of-the-art methods currently used to identify biological networks underlying genome-wide responses. These are based on a data-driven approach that relies on advanced computational methods designed to "learn" biology from observational data. In this review, we illustrate an application of these computational methodologies using a case study integrating an in vivo model representing the transcriptional state of hypoxic skeletal muscle with a clinical study representing muscle wasting in chronic obstructive pulmonary disease patients. The broader application of these approaches to modeling multiple levels of biological data in the context of modern physiology is discussed. Copyright © 2016 the American Physiological Society.
Comparative multi-goal tradeoffs in systems engineering of microbial metabolism
2012-01-01
Background Metabolic engineering design methodology has evolved from using pathway-centric, random and empirical-based methods to using systems-wide, rational and integrated computational and experimental approaches. Persistent during these advances has been the desire to develop design strategies that address multiple simultaneous engineering goals, such as maximizing productivity, while minimizing raw material costs. Results Here, we use constraint-based modeling to systematically design multiple combinations of medium compositions and gene-deletion strains for three microorganisms (Escherichia coli, Saccharomyces cerevisiae, and Shewanella oneidensis) and six industrially important byproducts (acetate, D-lactate, hydrogen, ethanol, formate, and succinate). We evaluated over 435 million simulated conditions and 36 engineering metabolic traits, including product rates, costs, yields and purity. Conclusions The resulting metabolic phenotypes can be classified into dominant clusters (meta-phenotypes) for each organism. These meta-phenotypes illustrate global phenotypic variation and sensitivities, trade-offs associated with multiple engineering goals, and fundamental differences in organism-specific capabilities. Given the increasing number of sequenced genomes and corresponding stoichiometric models, we envisage that the proposed strategy could be extended to address a growing range of biological questions and engineering applications. PMID:23009214
How hot? Systematic convergence of the replica exchange method using multiple reservoirs.
Ruscio, Jory Z; Fawzi, Nicolas L; Head-Gordon, Teresa
2010-02-01
We have devised a systematic approach to converge a replica exchange molecular dynamics simulation by dividing the full temperature range into a series of higher temperature reservoirs and a finite number of lower temperature subreplicas. A defined highest temperature reservoir of equilibrium conformations is used to help converge a lower but still hot temperature subreplica, which in turn serves as the high-temperature reservoir for the next set of lower temperature subreplicas. The process is continued until an optimal temperature reservoir is reached to converge the simulation at the target temperature. This gradual convergence of subreplicas allows for better and faster convergence at the temperature of interest and all intermediate temperatures for thermodynamic analysis, as well as optimizing the use of multiple processors. We illustrate the overall effectiveness of our multiple reservoir replica exchange strategy by comparing sampling and computational efficiency with respect to replica exchange, as well as comparing methods when converging the structural ensemble of the disordered Abeta(21-30) peptide simulated with explicit water by comparing calculated Rotating Overhauser Effect Spectroscopy intensities to experimentally measured values. Copyright 2009 Wiley Periodicals, Inc.
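The swap move at the heart of any replica exchange scheme, including the reservoir variant above, accepts an exchange between two temperatures with the standard Metropolis criterion. A minimal sketch (units with k_B = 1 are an arbitrary choice, not from the paper):

```python
import math
import random

def swap_accept(E_i, E_j, T_i, T_j, k_B=1.0):
    """Metropolis acceptance for exchanging the configurations of two
    replicas with energies E_i, E_j at temperatures T_i, T_j:
    accept with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])."""
    delta = (1.0 / (k_B * T_i) - 1.0 / (k_B * T_j)) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)
```

In the reservoir variant, one partner of the swap is drawn from the pre-equilibrated high-temperature reservoir rather than from a concurrently running replica.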
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
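The trace ratio problem named in the title, maximizing Tr(WᵀAW)/Tr(WᵀBW) over orthonormal W, is commonly solved by alternating between updating the ratio and re-solving an eigenproblem. A generic sketch of that inner iteration (not the full MKL-TR algorithm, which additionally learns kernel weights):

```python
import numpy as np

def trace_ratio(A, B, d, iters=50):
    """Maximize Tr(W^T A W) / Tr(W^T B W) over orthonormal n x d W by the
    standard scheme: lam <- current ratio, W <- top-d eigenvectors of A - lam*B."""
    n = A.shape[0]
    W = np.eye(n)[:, :d]                      # arbitrary orthonormal start
    lam = 0.0
    for _ in range(iters):
        lam = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        vals, vecs = np.linalg.eigh(A - lam * B)
        W = vecs[:, np.argsort(vals)[-d:]]    # top-d eigenvectors
    return W, lam
```

For example, with A = diag(3, 1, 0.1) and B = I, the iteration converges to the leading eigendirection with ratio 3.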
Building an organic computing device with multiple interconnected brains
Pais-Vieira, Miguel; Chiuffa, Gabriela; Lebedev, Mikhail; Yadav, Amol; Nicolelis, Miguel A. L.
2015-01-01
Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains. Brainets worked by concurrently recording the extracellular electrical activity generated by populations of cortical neurons distributed across multiple rats chronically implanted with multi-electrode arrays. Cortical neuronal activity was recorded and analyzed in real time, and then delivered to the somatosensory cortices of other animals that participated in the Brainet using intracortical microstimulation (ICMS). Using this approach, different Brainet architectures solved a number of useful computational problems, such as discrete classification, image processing, storage and retrieval of tactile information, and even weather forecasting. Brainets consistently performed at the same or higher levels than single rats in these tasks. Based on these findings, we propose that Brainets could be used to investigate animal social behaviors as well as a test bed for exploring the properties and potential applications of organic computers. PMID:26158615
Entropy Measurement for Biometric Verification Systems.
Lim, Meng-Hui; Yuen, Pong C
2016-05-01
Biometric verification systems are designed to accept multiple similar biometric measurements per user due to inherent intra-user variations in the biometric data. This is important to preserve a reasonable acceptance rate for genuine queries and the overall feasibility of the recognition system. However, such acceptance of multiple similar measurements decreases the impostor's difficulty of obtaining a system-acceptable measurement, thus resulting in a degraded security level. This deteriorated security needs to be measurable to provide truthful security assurance to the users. Entropy is a standard measure of security. However, the entropy formula is applicable only when there is a single acceptable possibility. In this paper, we develop an entropy-measuring model for biometric systems that accept multiple similar measurements per user. Based on the idea of guessing entropy, the proposed model quantifies biometric system security in terms of adversarial guessing effort for two practical attacks. Excellent agreement between analytic and experimental simulation-based measurement results on a synthetic and a benchmark face dataset justifies the correctness of our model and thus the feasibility of the proposed entropy-measuring approach.
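The guessing-entropy idea the model builds on has a simple closed form for a known candidate distribution: the expected number of guesses when an adversary tries candidates in decreasing order of probability. A sketch (the toy distribution is illustrative, not from the paper):

```python
import numpy as np

def guessing_entropy(p):
    """Expected number of sequential guesses to hit the secret, given
    success probabilities p over candidates (Massey's guesswork):
    sort descending, then sum i * p_i over guess ranks i = 1..n."""
    p = np.sort(np.asarray(p, dtype=float))[::-1]   # best guesses first
    return float(np.sum(np.arange(1, len(p) + 1) * p))

print(guessing_entropy([0.5, 0.25, 0.25]))  # 1.75
```

The paper's contribution is extending this style of accounting to the case where a whole neighborhood of similar measurements is accepted per user, which shrinks the effective guessing effort.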
Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research
Crump, Matthew J. C.; McDonnell, John V.; Gureckis, Todd M.
2013-01-01
Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money. The service has attracted attention from experimental psychologists interested in gathering human subject data more efficiently. However, relative to traditional laboratory studies, many aspects of the testing environment are not under the experimenter's control. In this paper, we attempt to empirically evaluate the fidelity of the AMT system for use in cognitive behavioral experiments. These types of experiment differ from simple surveys in that they require multiple trials, sustained attention from participants, comprehension of complex instructions, and millisecond accuracy for response recording and stimulus presentation. We replicate a diverse body of tasks from experimental psychology, including the Stroop, Switching, Flanker, Simon, Posner Cuing, attentional blink, subliminal priming, and category learning tasks, using participants recruited via AMT. While most of the replications were qualitatively successful and validated the approach of collecting data anonymously online using a web browser, others revealed disparities between laboratory results and online results. A number of important lessons were encountered in the process of conducting these replications that should be of value to other researchers. PMID:23516406
Fatigue response of perforated titanium for application in laminar flow control
NASA Technical Reports Server (NTRS)
Johnson, W. Steven; Miller, Jennifer L.; Newman, James, Jr.
1996-01-01
The room temperature tensile and fatigue response of non-perforated and perforated titanium for laminar flow control application was investigated both experimentally and analytically. Results showed that multiple perforations did not affect the tensile response, but did reduce the fatigue life. A two-dimensional finite element stress analysis was used to determine that the stress fields from adjacent perforations did not influence one another. The stress fields around the holes did not overlap one another, allowing the material to be modeled as a plate with a center hole. Fatigue life was predicted using an equivalent initial flaw size approach to relate the experimental results to microstructural features of the titanium. Predictions using flaw sizes ranging from 1 to 15 microns correlated within a factor of 2 with the experimental results by using a flow stress of 260 MPa. By using two different flow stresses in the crack closure model and correcting for plasticity, the experimental results were bounded by the predictions for high applied stresses. Further analysis of the complex geometry of the perforations and the local material chemistry is needed to further understand the fatigue behavior of the perforated titanium.
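For context, the "plate with a center hole" idealization invoked above has a classical closed-form result: a small circular hole in a wide plate under remote uniaxial tension concentrates stress by a factor of about 3 at the hole edge. A toy illustration (the numbers are not from the report):

```python
def peak_stress(nominal_mpa, kt=3.0):
    """Elastic peak stress at the edge of a circular hole in an
    effectively infinite plate under remote tension (theoretical Kt = 3)."""
    return kt * nominal_mpa

print(peak_stress(100.0))  # 300.0
```

In practice, finite plate width and hole spacing raise or lower Kt, which is why the report's conclusion that adjacent perforation stress fields do not interact matters for the model's validity.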
Schmitz, Oswald J; Miller, Jennifer R B; Trainor, Anne M; Abrahms, Briana
2017-09-01
Community ecology was traditionally an integrative science devoted to studying interactions between species and their abiotic environments in order to predict species' geographic distributions and abundances. Yet for philosophical and methodological reasons, it has become divided into two enterprises: one devoted to local experimentation on species interactions to predict community dynamics; the other devoted to statistical analyses of abiotic and biotic information to describe geographic distribution. Our goal here is to instigate thinking about ways to reconnect the two enterprises and thereby return to a tradition of integrative science. We focus specifically on the community ecology of predators and prey, which is ripe for integration. This is because there is active, simultaneous interest in experimentally resolving the nature and strength of predator-prey interactions as well as explaining patterns across landscapes and seascapes. We begin by describing a conceptual theory rooted in classical analyses of non-spatial food web modules used to predict species interactions. We show how such modules can be extended to consideration of spatial context using the concept of habitat domain. Habitat domain describes the spatial extent of habitat space that predators and prey use while foraging, which differs from home range, the spatial extent used by an animal to meet all of its daily needs. This conceptual theory can be used to predict how different spatial relations of predators and prey could lead to different emergent multiple predator-prey interactions, such as whether predator consumptive or non-consumptive effects should dominate, and whether intraguild predation, predator interference or predator complementarity are expected. We then review the literature on studies of large predator-prey interactions that make conclusions about the nature of multiple predator-prey interactions.
This analysis reveals that while many studies provide sufficient information about predator or prey spatial locations, and thus meet necessary conditions of the habitat domain conceptual theory for drawing conclusions about the nature of the predator-prey interactions, several studies do not. We therefore elaborate how modern technology and statistical approaches for animal movement analysis could be used to test the conceptual theory, using experimental or quasi-experimental analyses at landscape scales. © 2017 by the Ecological Society of America.
Impaired neurosteroid synthesis in multiple sclerosis
Noorbakhsh, Farshid; Ellestad, Kristofor K.; Maingat, Ferdinand; Warren, Kenneth G.; Han, May H.; Steinman, Lawrence; Baker, Glen B.
2011-01-01
High-throughput technologies have led to advances in the recognition of disease pathways and their underlying mechanisms. To investigate the impact of micro-RNAs on the disease process in multiple sclerosis, a prototypic inflammatory neurological disorder, we examined cerebral white matter from patients with or without the disease by micro-RNA profiling, together with confirmatory reverse transcription–polymerase chain reaction analysis, immunoblotting and gas chromatography-mass spectrometry. These observations were verified using the in vivo multiple sclerosis model, experimental autoimmune encephalomyelitis. Brains of patients with or without multiple sclerosis demonstrated differential expression of multiple micro-RNAs, but expression of three neurosteroid synthesis enzyme-specific micro-RNAs (miR-338, miR-155 and miR-491) showed a bias towards induction in patients with multiple sclerosis (P < 0.05). Analysis of the neurosteroidogenic pathways targeted by micro-RNAs revealed suppression of enzyme transcript and protein levels in the white matter of patients with multiple sclerosis (P < 0.05). This was confirmed by firefly/Renilla luciferase micro-RNA target knockdown experiments (P < 0.05) and detection of specific micro-RNAs by in situ hybridization in the brains of patients with or without multiple sclerosis. Levels of important neurosteroids, including allopregnanolone, were suppressed in the white matter of patients with multiple sclerosis (P < 0.05). Induction of the murine micro-RNAs, miR-338 and miR-155, accompanied by diminished expression of neurosteroidogenic enzymes and allopregnanolone, was also observed in the brains of mice with experimental autoimmune encephalomyelitis (P < 0.05). Allopregnanolone treatment of the experimental autoimmune encephalomyelitis mouse model limited the associated neuropathology, including neuroinflammation, myelin and axonal injury and reduced neurobehavioral deficits (P < 0.05). 
These multi-platform studies point to impaired neurosteroidogenesis in both multiple sclerosis and experimental autoimmune encephalomyelitis. The findings also indicate that allopregnanolone and perhaps other neurosteroid-like compounds might represent potential biomarkers or therapies for multiple sclerosis. PMID:21908875
Yoon, Hyejin; Leitner, Thomas
2014-12-17
Analyses of entire viral genomes or mtDNA require comprehensive design of many primers across their genomes. In addition, simultaneous optimization of several DNA primer design criteria may improve overall experimental efficiency and downstream bioinformatic processing. To achieve these goals, we developed PrimerDesign-M. It includes several options for multiple-primer design, allowing researchers to efficiently design walking primers that cover long DNA targets, such as entire HIV-1 genomes, while simultaneously accounting for genetic diversity in multiple alignments and experimental design constraints given by the user. PrimerDesign-M can also design primers that include DNA barcodes and minimize primer dimerization. PrimerDesign-M finds optimal primers for highly variable DNA targets and facilitates design flexibility by suggesting alternative designs to adapt to experimental conditions.
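PrimerDesign-M's actual scoring functions are not reproduced here, but two of the constraints it balances, melting temperature and primer-dimer avoidance, can be sketched with textbook heuristics (the Wallace rule is only valid for short primers, and the dimer screen below is deliberately crude):

```python
def wallace_tm(primer):
    """Wallace-rule melting temperature: 2*(A+T) + 4*(G+C) degrees C,
    a rough estimate valid for primers under ~14 bases."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

COMP = str.maketrans("ACGT", "TGCA")

def dimer_3prime(a, b, window=5):
    """Crude primer-dimer screen: does the reverse complement of primer
    a's 3' tail match primer b's 3' tail over the last `window` bases?"""
    return a[-window:].translate(COMP)[::-1] == b[-window:]

print(wallace_tm("ACGTACGTACGT"))  # 36
```

Real designers use nearest-neighbor thermodynamics and full alignment-based dimer scoring; this sketch only shows the shape of the constraints being optimized jointly.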
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-11-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving flow in an oxidation ditch. Results show that it drives the oxidation ditch better than the original design, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach: the standard k-ε model, RNG k-ε model, realizable k-ε model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame (MRF) and sliding mesh (SM) approaches. Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use.
NASA Astrophysics Data System (ADS)
Jin, Yongmei
In recent years, theoretical modeling and computational simulation of microstructure evolution and materials properties have been attracting much attention. While significant advances have been made, two major challenges remain. One is the integration of multiple physical phenomena for simulation of complex materials behavior; the other is the bridging of multiple length and time scales in materials modeling and simulation. The research presented in this Thesis is focused mainly on tackling the first major challenge. In this Thesis, a unified Phase Field Microelasticity (PFM) approach is developed. This approach is an advanced version of the phase field method that takes into account the exact elasticity of arbitrarily anisotropic, elastically and structurally inhomogeneous systems. The proposed theory and models are applicable to infinite solids, elastic half-spaces, and finite bodies with arbitrarily shaped free surfaces, which may undergo various concomitant physical processes. The Phase Field Microelasticity approach is employed to formulate the theories and models of martensitic transformation, dislocation dynamics, and crack evolution in single crystal and polycrystalline solids. It is also used to study strain relaxation in heteroepitaxial thin films through misfit dislocation and surface roughening. Magnetic domain evolution in nanocrystalline thin films is also investigated. Numerous simulation studies are performed. Comparisons with analytical predictions and experimental observations are presented. The agreement verifies the theory and models as realistic simulation tools for computational materials science and engineering. The same Phase Field Microelasticity formalism of individual models of different physical phenomena makes it easy to integrate multiple physical processes into one unified simulation model, where multiple phenomena are treated as various relaxation modes that together act as one common cooperative phenomenon.
The model does not impose a priori constraints on possible microstructure evolution paths. This gives the model predictive power: the material system itself "chooses" the optimal path for multiple processes. The advances made in this Thesis present a significant step forward in overcoming the first challenge, mesoscale multi-physics modeling and simulation of materials. At the end of this Thesis, the way to tackle the second challenge, bridging multiple length and time scales in materials modeling and simulation, is discussed based on the connection between mesoscale Phase Field Microelasticity modeling and microscopic atomistic calculation as well as macroscopic continuum theory.
NASA Astrophysics Data System (ADS)
Nichols, Brandon S.; Rajaram, Narasimhan; Tunnell, James W.
2012-05-01
Diffuse optical spectroscopy (DOS) provides a powerful tool for fast and noninvasive disease diagnosis. The ability to leverage DOS to accurately quantify tissue optical parameters hinges on the model used to estimate light-tissue interaction. We describe the accuracy of a lookup table (LUT)-based inverse model for measuring optical properties under different conditions relevant to biological tissue. The LUT is a matrix of reflectance values acquired experimentally from calibration standards of varying scattering and absorption properties. Because it is based on experimental values, the LUT inherently accounts for system response and probe geometry. We tested our approach in tissue phantoms containing multiple absorbers, different sizes of scatterers, and varying oxygen saturation of hemoglobin. The LUT-based model was able to extract scattering and absorption properties under most conditions with errors of less than 5 percent. We demonstrate the validity of the lookup table over a range of source-detector separations from 0.25 to 1.48 mm. Finally, we describe the rapid fabrication of a lookup table using only six calibration standards. This optimized LUT was able to extract scattering and absorption properties with average RMS errors of 2.5 and 4 percent, respectively.
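The inversion step of a lookup-table model reduces to a search over the stored grid for the optical-property pair whose reflectance best matches the measurement. A sketch with a made-up analytic surface standing in for the experimentally acquired LUT (the grid ranges and the surface itself are illustrative, not the paper's data):

```python
import numpy as np

# Grid of optical properties; the toy surface below stands in for the
# experimentally measured reflectance LUT.
mus = np.linspace(0.5, 3.0, 60)    # reduced scattering mus' (1/mm)
mua = np.linspace(0.01, 1.0, 60)   # absorption mua (1/mm)
MU_S, MU_A = np.meshgrid(mus, mua, indexing="ij")
LUT = MU_S / (MU_S + 5.0 * MU_A)   # hypothetical reflectance surface

def invert(measured_reflectance):
    """Least-squares grid search: return the (mus', mua) pair whose
    stored reflectance is closest to the measurement."""
    i, j = np.unravel_index(
        np.argmin((LUT - measured_reflectance) ** 2), LUT.shape)
    return mus[i], mua[j]
```

A real LUT is fit against a full reflectance spectrum rather than a single value, which removes the degeneracy a single measurement leaves between scattering and absorption.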
NASA Astrophysics Data System (ADS)
Kopsaftopoulos, Fotios; Nardari, Raphael; Li, Yu-Hung; Wang, Pengchuan; Chang, Fu-Kuo
2016-04-01
In this work, the system design, integration, and wind tunnel experimental evaluation are presented for a bioinspired self-sensing intelligent composite unmanned aerial vehicle (UAV) wing. A total of 148 micro-sensors, including piezoelectric, strain, and temperature sensors, in the form of stretchable sensor networks are embedded in the layup of a composite wing in order to enable its self-sensing capabilities. Novel stochastic system identification techniques based on time series models and statistical parameter estimation are employed in order to accurately interpret the sensing data and extract real-time information on the coupled air flow-structural dynamics. Special emphasis is given to the wind tunnel experimental assessment under various flight conditions defined by multiple airspeeds and angles of attack. A novel modeling approach based on the recently introduced Vector-dependent Functionally Pooled (VFP) model structure is employed for the stochastic identification of the "global" coupled airflow-structural dynamics of the wing and their correlation with dynamic flutter and stall. The obtained results demonstrate the successful system-level integration and effectiveness of the stochastic identification approach, thus opening new perspectives for the state sensing and awareness capabilities of the next generation of "fly-by-feel" UAVs.
The Fluid Dynamics of Nascent Biofilms
NASA Astrophysics Data System (ADS)
Farthing, Nicola; Snow, Ben; Wilson, Laurence; Bees, Martin
2017-11-01
Many anti-biofilm approaches target mature biofilms with biochemical or physico-chemical interventions. We investigate the mechanics of interventions at an early stage that aim to inhibit biofilm maturation, focusing on hydrodynamics as cells transition from planktonic to surface-attached. Surface-attached cells generate flow fields that are relatively long-range compared with cells that are freely swimming. We look at the effect of these flows on biofilm formation. In particular, we use digital inline holographic microscopy to determine the three-dimensional flow due to a surface-attached cell and the effect this flow has on both tracers and other cells in the fluid. We compare experimental data with two models of cells on boundaries. The first approach utilizes slender body theory and captures many of the features of the experimental field. The second model develops a simple description in terms of singularity solutions of Stokes flow, which produces qualitatively similar dynamics to both the experiments and the more complex model but with significant computational savings. The range of validity of multiple cell arrangements is investigated. These two descriptions can be used to investigate the efficacy of actives developed by Unilever on nascent biofilms.
A new biodegradation prediction model specific to petroleum hydrocarbons.
Howard, Philip; Meylan, William; Aronson, Dallas; Stiteler, William; Tunkel, Jay; Comber, Michael; Parkerton, Thomas F
2005-08-01
A new predictive model for determining quantitative primary biodegradation half-lives of individual petroleum hydrocarbons has been developed. This model uses a fragment-based approach similar to that of several other biodegradation models, such as those within the Biodegradation Probability Program (BIOWIN) estimation program. In the present study, a half-life in days is estimated using multiple linear regression against counts of 31 distinct molecular fragments. The model was developed using a data set consisting of 175 compounds with environmentally relevant experimental data that was divided into training and validation sets. The original fragments from the Ministry of International Trade and Industry BIOWIN model were used initially as structural descriptors and additional fragments were then added to better describe the ring systems found in petroleum hydrocarbons and to adjust for nonlinearity within the experimental data. The training and validation sets had r2 values of 0.91 and 0.81, respectively.
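The regression at the core of the model, half-life against counts of molecular fragments, is ordinary multiple linear regression. A sketch with invented fragment counts and half-lives (the 31 real fragments and their fitted coefficients are in the paper, not here):

```python
import numpy as np

# Rows: compounds; columns: counts of hypothetical structural fragments
# (e.g. -CH3, aromatic carbon, fused-ring carbon). Illustrative data only.
X = np.array([
    [2, 6, 0],
    [3, 0, 0],
    [1, 6, 4],
    [4, 0, 0],
    [0, 12, 4],
], dtype=float)
y = np.array([5.0, 2.0, 12.0, 2.5, 20.0])  # "half-life" in days (made up)

# Fit half-life ~ intercept + fragment counts by ordinary least squares.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 3))
```

The paper's extra fragments for petroleum ring systems play the same role as extra columns of X: they let a linear model absorb what would otherwise appear as nonlinearity in the residuals.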
A Wave Chaotic Study of Quantum Graphs with Microwave Networks
NASA Astrophysics Data System (ADS)
Fu, Ziyuan
Quantum graphs provide a setting to test the hypothesis that all ray-chaotic systems show universal wave chaotic properties. I study the quantum graphs with a wave chaotic approach. Here, an experimental setup consisting of a microwave coaxial cable network is used to simulate quantum graphs. Some basic features and the distributions of impedance statistics are analyzed from experimental data on an ensemble of tetrahedral networks. The random coupling model (RCM) is applied in an attempt to uncover the universal statistical properties of the system. Deviations from RCM predictions have been observed in that the statistics of diagonal and off-diagonal impedance elements are different. Waves trapped due to multiple reflections on bonds between nodes in the graph most likely cause the deviations from universal behavior in the finite-size realization of a quantum graph. In addition, I have done some investigations on the Random Coupling Model, which are useful for further research.
Taking Ockham's razor to enzyme dynamics and catalysis.
Glowacki, David R; Harvey, Jeremy N; Mulholland, Adrian J
2012-01-29
The role of protein dynamics in enzyme catalysis is a matter of intense current debate. Enzyme-catalysed reactions that involve significant quantum tunnelling can give rise to experimental kinetic isotope effects with complex temperature dependences, and it has been suggested that standard statistical rate theories, such as transition-state theory, are inadequate for their explanation. Here we introduce aspects of transition-state theory relevant to the study of enzyme reactivity, taking cues from chemical kinetics and dynamics studies of small molecules in the gas phase and in solution--where breakdowns of statistical theories have received significant attention and their origins are relatively better understood. We discuss recent theoretical approaches to understanding enzyme activity and then show how experimental observations for a number of enzymes may be reproduced using a transition-state-theory framework with physically reasonable parameters. Essential to this simple model is the inclusion of multiple conformations with different reactivity.
An overview of clinical and experimental treatment modalities for port wine stains
Chen, Jennifer K.; Ghasri, Pedram; Aguilar, Guillermo; van Drooge, Anne Margreet; Wolkerstorfer, Albert; Kelly, Kristen M.; Heger, Michal
2014-01-01
Port wine stains (PWS) are the most common vascular malformation of the skin, occurring in 0.3% to 0.5% of the population. Noninvasive laser irradiation with flashlamp-pumped pulsed dye lasers (selective photothermolysis) currently comprises the gold standard treatment of PWS; however, the majority of PWS fail to clear completely after selective photothermolysis. In this review, the clinically used PWS treatment modalities (pulsed dye lasers, alexandrite lasers, neodymium:yttrium-aluminum-garnet lasers, and intense pulsed light) and techniques (combination approaches, multiple passes, and epidermal cooling) are discussed. Retrospective analysis of clinical studies published between 1990 and 2011 was performed to determine therapeutic efficacies for each clinically used modality/technique. In addition, factors that have resulted in the high degree of therapeutic recalcitrance are identified, and emerging experimental treatment strategies are addressed, including the use of photodynamic therapy, immunomodulators, angiogenesis inhibitors, hypobaric pressure, and site-specific pharmaco-laser therapy. PMID:22305042
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-01-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945
NASA Astrophysics Data System (ADS)
Wikswo, John; Kolli, Aditya; Shankaran, Harish; Wagoner, Matthew; Mettetal, Jerome; Reiserer, Ronald; Gerken, Gregory; Britt, Clayton; Schaffer, David
Genetic, proteomic, and metabolic networks describing biological signaling can have 10^2 to 10^3 nodes. Transcriptomics and mass spectrometry can quantify 10^4 different dynamical experimental variables recorded from in vitro experiments with a time resolution approaching 1 s. It is difficult to infer metabolic and signaling models from such massive data sets, and it is unlikely that causality can be determined simply from observed temporal correlations. There is a need to design and apply specific system perturbations, which will be difficult to perform manually with 10 to 10^2 externally controlled variables. Machine learning and optimal experimental design can select an experiment that best discriminates between multiple conflicting models, but a remaining problem is to control in real time multiple variables in the form of concentrations of growth factors, toxins, nutrients and other signaling molecules. With time-division multiplexing, a microfluidic MicroFormulator (μF) can create in real time complex mixtures of reagents in volumes suitable for biological experiments. Initial 96-channel μF implementations control the exposure profile of cells in a 96-well plate to different temporal profiles of drugs; future experiments will include challenge compounds. Funded in part by AstraZeneca, NIH/NCATS HHSN271201600009C and UH3TR000491, and VIIBRE.
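The time-division multiplexing idea can be reduced to arithmetic: with equal inlet flow rates, each inlet's valve-open fraction of a mixing cycle sets its volume fraction in the output. A sketch (a deliberate simplification assuming equal flow rates, not the published device protocol):

```python
def duty_cycles(target_fracs):
    """Time-division multiplexing: normalize the requested relative amounts
    so each inlet's valve-open fraction of the cycle equals its volume
    fraction in the mixed output (equal inlet flow rates assumed)."""
    total = sum(target_fracs.values())
    return {k: v / total for k, v in target_fracs.items()}

print(duty_cycles({"drug": 1.0, "medium": 3.0}))  # {'drug': 0.25, 'medium': 0.75}
```

Sequencing such cycles over time is what lets a single device impose arbitrary temporal concentration profiles on each well.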
Dang, Shipeng; Xu, Huanbai; Xu, Congfeng; Cai, Wei; Li, Qian; Cheng, Yiji; Jin, Min; Wang, Ru-Xing; Peng, Yongde; Zhang, Yi; Wu, Changping; He, Xiaozhou; Wan, Bing; Zhang, Yanyun
2014-01-01
Mesenchymal stem cell (MSC)-based therapy is a promising approach to treat various inflammatory disorders including multiple sclerosis. However, the fate of MSCs in the inflammatory microenvironment is largely unknown. Experimental autoimmune encephalomyelitis (EAE) is a well-studied animal model of multiple sclerosis. We demonstrated that autophagy occurred in MSCs during their application for EAE treatment. Inflammatory cytokines, e.g., interferon gamma and tumor necrosis factor, induced autophagy in MSCs synergistically by inducing expression of BECN1/Beclin 1. Inhibition of autophagy by knockdown of Becn1 significantly improved the therapeutic effects of MSCs on EAE, which was mainly attributable to enhanced suppression upon activation and expansion of CD4+ T cells. Mechanistically, inhibition of autophagy increased reactive oxygen species generation and mitogen-activated protein kinase 1/3 activation in MSCs, which were essential for PTGS2 (prostaglandin-endoperoxide synthase 2 [prostaglandin G/H synthase and cyclooxygenase]) and downstream prostaglandin E2 expression to exert immunoregulatory function. Furthermore, pharmacological treatment of MSCs to inhibit autophagy increased their immunosuppressive effects on T cell-mediated EAE. Our findings indicate that inflammatory microenvironment-induced autophagy downregulates the immunosuppressive function of MSCs. Therefore, modulation of autophagy in MSCs would provide a novel strategy to improve MSC-based immunotherapy. PMID:24905997