Sample records for model-based testability assessment

  1. Model-Based Testability Assessment and Directed Troubleshooting of Shuttle Wiring Systems

    NASA Technical Reports Server (NTRS)

    Deb, Somnath; Domagala, Chuck; Shrestha, Roshan; Malepati, Venkatesh; Cavanaugh, Kevin; Patterson-Hine, Ann; Sanderfer, Dwight; Cockrell, Jim; Norvig, Peter (Technical Monitor)

    2000-01-01

    We have recently completed a pilot study on the Space Shuttle wiring system commissioned by the Wiring Integrity Research (WIRe) team at NASA Ames Research Center. As the Space Shuttle ages, it is experiencing wiring degradation problems including arcing, chafing, insulation breakdown, and broken conductors. A systematic and comprehensive test process is required to thoroughly test and quality assure (QA) the wiring systems. The NASA WIRe team recognized the value of a formal model-based analysis for risk assessment and fault coverage analysis. However, wiring systems are complex and involve over 50,000 wire segments. Therefore, NASA commissioned this pilot study with Qualtech Systems, Inc. (QSI) to explore means of automatically extracting high-fidelity multi-signal models from a wiring information database for use with QSI's Testability Engineering and Maintenance System (TEAMS) tool.
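
    A loosely related toy illustration of the connectivity reasoning behind deriving test coverage from a wiring database: given segment end-point records, a breadth-first traversal finds which wire segments a continuity test at one connector could exercise. The schema and names (segment IDs, connectors J1/J2/J3/P5) are invented for illustration and are not drawn from the QSI/TEAMS pilot study.

```python
from collections import defaultdict, deque

def reachable_segments(segments, start):
    """Wire segments reachable from a test connector via breadth-first search.
    segments: list of (segment_id, end_a, end_b) connectivity records, the kind
    of information a wiring database might hold (illustrative schema only)."""
    adj = defaultdict(list)
    for seg, a, b in segments:
        adj[a].append((seg, b))
        adj[b].append((seg, a))
    seen_nodes, covered = {start}, set()
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for seg, nxt in adj[node]:
            covered.add(seg)              # this segment is exercised by the test
            if nxt not in seen_nodes:
                seen_nodes.add(nxt)
                queue.append(nxt)
    return covered

# Toy harness: two segments daisy-chained from connector J1 to J3, plus one spur
wires = [("W1", "J1", "J2"), ("W2", "J2", "J3"), ("W3", "J2", "P5")]
print(reachable_segments(wires, "J1"))    # all three segments reachable from J1
```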

  2. Testability analysis on a hydraulic system in a certain equipment based on simulation model

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou

    2018-03-01

    To address the complicated structure of hydraulic systems and the shortage of fault statistics information, a multi-valued testability analysis method based on a simulation model is proposed. Using an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point compared with those under normal conditions. A multi-valued fault-test dependency matrix is thus established. The fault detection rate (FDR) and fault isolation rate (FIR) are then calculated from the dependency matrix. Finally, the testability and fault diagnosis capability of the system are analyzed and evaluated; they reach only 54% (FDR) and 23% (FIR). To improve the testability performance of the system, the number and position of the test points are optimized. Results show the proposed test placement scheme can address the difficulty, inefficiency, and high cost of system maintenance.
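
    A minimal sketch of the FDR/FIR computation the record above describes, assuming a binary fault-test dependency matrix and equally weighted faults; the toy matrix and the exact isolation convention (unique test signatures among detected faults) are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fdr_fir(D):
    """Fault detection rate and fault isolation rate from a binary fault-test
    dependency matrix D (rows = faults, columns = tests).
    D[i, j] = 1 if test j responds to fault i. Equal fault weights assumed."""
    D = np.asarray(D, dtype=int)
    detected = D.any(axis=1)                 # a fault is detectable if some test sees it
    fdr = detected.mean()
    # A detected fault is isolable if no other fault shares its exact test signature.
    signatures = [tuple(row) for row in D]
    unique = {s for s in signatures if signatures.count(s) == 1}
    isolated = [detected[i] and tuple(D[i]) in unique for i in range(len(D))]
    fir = np.mean(isolated) / fdr if fdr > 0 else 0.0   # isolation rate among detected faults
    return fdr, fir

# Toy example: 4 faults, 3 test points
D = [[1, 0, 0],
     [1, 1, 0],
     [1, 1, 0],   # same signature as the fault above -> not isolable
     [0, 0, 0]]   # never detected
print(fdr_fir(D))  # (0.75, ~0.33): 3 of 4 detected, 1 of 3 detected faults isolable
```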

  3. Software Users Manual (SUM): Extended Testability Analysis (ETA) Tool

    NASA Technical Reports Server (NTRS)

    Maul, William A.; Fulton, Christopher E.

    2011-01-01

    This software user manual describes the implementation and use of the Extended Testability Analysis (ETA) Tool. The ETA Tool is a software program that augments the analysis and reporting capabilities of a commercial-off-the-shelf (COTS) testability analysis software package called the Testability Engineering And Maintenance System (TEAMS) Designer. An initial diagnostic assessment is performed by the TEAMS Designer software using a qualitative, directed-graph model of the system being analyzed. The ETA Tool utilizes system design information captured within the diagnostic model and testability analysis output from the TEAMS Designer software to create a series of six reports for various system engineering needs. The ETA Tool allows the user to perform additional studies on the testability analysis results by determining the detection sensitivity to the loss of certain sensors or tests. The ETA Tool was developed to support design and development of the NASA Ares I Crew Launch Vehicle. The diagnostic analysis provided by the ETA Tool was proven to be valuable system engineering output that provided consistency in the verification of system engineering requirements. This software user manual provides a description of each output report generated by the ETA Tool. The manual also describes the example diagnostic model and supporting documentation - also provided with the ETA Tool software release package - that were used to generate the reports presented in the manual.

  4. BETA: Behavioral testability analyzer and its application to high-level test generation and synthesis for testability. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chen, Chung-Hsing

    1992-01-01

    In this thesis, a behavioral-level testability analysis approach is presented. This approach is based on analyzing the circuit behavioral description (similar to a C program) to estimate its testability by identifying controllable and observable circuit nodes. This information can be used by a test generator to gain better access to internal circuit nodes and to reduce its search space. The results of the testability analyzer can also be used to select test points or partial scan flip-flops in the early design phase. Based on selection criteria, a novel Synthesis for Testability approach called Test Statement Insertion (TSI) is proposed, which modifies the circuit behavioral description directly. Test Statement Insertion can also be used to modify the circuit structural description to improve its testability. As a result, the Synthesis for Testability methodology can be combined with an existing behavioral synthesis tool to produce more testable circuits.

  5. Multidisciplinary analysis and design of printed wiring boards

    NASA Astrophysics Data System (ADS)

    Fulton, Robert E.; Hughes, Joseph L.; Scott, Waymond R., Jr.; Umeagukwu, Charles; Yeh, Chao-Pin

    1991-04-01

    Modern printed wiring board design depends on electronic prototyping using computer-based simulation and design tools. Existing electrical computer-aided design (ECAD) tools emphasize circuit connectivity with only rudimentary analysis capabilities. This paper describes a prototype integrated PWB design environment denoted Thermal Structural Electromagnetic Testability (TSET) being developed at Georgia Tech in collaboration with companies in the electronics industry. TSET provides design guidance based on enhanced electrical and mechanical CAD capabilities, including electromagnetic modeling, testability analysis, thermal management, and solid mechanics analysis. TSET development is based on a strong analytical and theoretical science base and incorporates an integrated information framework and a common database design based on a systematic structured methodology.

  6. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.
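
    The Sensor Sensitivity Analysis Report described above quantifies the diagnostic impact of losing sensor information. Below is a minimal sketch of that idea under simplifying assumptions (a binary failure-mode-by-test dependency matrix and a known mapping from sensors to tests); it is not the ETA Tool's actual algorithm, and all names and numbers are illustrative.

```python
import numpy as np

def detection_coverage(D):
    """Fraction of failure modes detected by at least one test (rows = failure modes)."""
    return np.asarray(D).any(axis=1).mean()

def sensor_sensitivity(D, sensor_tests):
    """For each sensor, recompute detection coverage with that sensor's tests removed.
    sensor_tests maps sensor name -> list of column indices in D it provides."""
    D = np.asarray(D)
    baseline = detection_coverage(D)
    impact = {}
    for sensor, cols in sensor_tests.items():
        keep = [j for j in range(D.shape[1]) if j not in cols]
        impact[sensor] = baseline - detection_coverage(D[:, keep])
    return baseline, impact

# Toy example: 3 failure modes, 4 tests supplied by two sensors
D = [[1, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 1]]
print(sensor_sensitivity(D, {"S1": [0, 1], "S2": [2, 3]}))
```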

  7. Advanced Diagnostic and Prognostic Testbed (ADAPT) Testability Analysis Report

    NASA Technical Reports Server (NTRS)

    Ossenfort, John

    2008-01-01

    As system designs become more complex, determining the best locations to add sensors and test points for the purpose of testing and monitoring these designs becomes more difficult. Not only must the designer take into consideration all real and potential faults of the system, he or she must also find efficient ways of detecting and isolating those faults. Because sensors and cabling take up valuable space and weight on a system, and given constraints on bandwidth and power, it is even more difficult to add sensors into these complex designs after the design has been completed. As a result, a number of software tools have been developed to assist the system designer in proper placement of these sensors during the system design phase of a project. One of the key functions provided by many of these software programs is a testability analysis of the system, essentially an evaluation of how observable the system behavior is using available tests. During the design phase, testability metrics can help guide the designer in improving the inherent testability of the design. This may include adding, removing, or modifying tests; breaking up feedback loops; or changing the system to reduce fault propagation. Given a set of test requirements, the analysis can also help to verify that the system will meet those requirements. Of course, a testability analysis requires that a software model of the physical system is available. For the analysis to be most effective in guiding system design, this model should ideally be constructed in parallel with these efforts. The purpose of this paper is to present the final testability results of the Advanced Diagnostic and Prognostic Testbed (ADAPT) after the system model was completed. The tool chosen to build the model and to perform the testability analysis is the Testability Engineering and Maintenance System Designer (TEAMS-Designer). The TEAMS toolset is intended to be a solution spanning all phases of the system, from design and development through health management and maintenance. TEAMS-Designer is the model-building and testability analysis software in that suite.

  8. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    2017-11-01

    Understanding the dynamic processes of a real game system requires an appropriate dynamics model, and rigorously testing a dynamics model is nontrivial. In our methodological research, we develop an approach to testing the validity of game dynamics models that considers the dynamic patterns of angular momentum and speed as measurement variables. Using Rock-Paper-Scissors (RPS) games as an example, we illustrate the geometric patterns in the experimental data. We then derive the related theoretical patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, we show that the validity of these models can be evaluated quantitatively. Our approach establishes a link between dynamics models and experimental systems, which is, to the best of our knowledge, the most effective and rigorous strategy for ascertaining the testability of evolutionary game dynamics models.
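
    A small sketch of one of the measurement variables named above, the angular momentum of a strategy trajectory, assuming the three Rock-Paper-Scissors frequencies are projected onto the simplex plane and the momentum is taken about the trajectory's centroid; the projection and the toy loop are illustrative choices, not the paper's exact definitions.

```python
import numpy as np

def mean_angular_momentum(traj, center=None):
    """Average signed angular momentum of a 2-D trajectory about a reference point.
    traj: array of shape (T, 2), e.g. simplex states projected onto the plane."""
    traj = np.asarray(traj, dtype=float)
    if center is None:
        center = traj.mean(axis=0)
    r = traj - center
    dr = np.diff(traj, axis=0)
    lz = r[:-1, 0] * dr[:, 1] - r[:-1, 1] * dr[:, 0]   # z-component of r x dr
    return lz.mean()                                   # > 0: counter-clockwise cycling

def project_simplex(p):
    """Project 3-strategy frequencies (R, P, S) onto the 2-D simplex plane."""
    p = np.asarray(p, dtype=float)
    x = p[:, 1] + 0.5 * p[:, 2]
    y = (np.sqrt(3) / 2) * p[:, 2]
    return np.column_stack([x, y])

# Toy example: a small closed loop of strategy frequencies around the centroid
t = np.linspace(0, 2 * np.pi, 200)
loop = np.column_stack([1/3 + 0.1 * np.cos(t), 1/3 + 0.1 * np.sin(t)])
loop = np.column_stack([loop, 1 - loop.sum(axis=1)])
print(mean_angular_momentum(project_simplex(loop)))   # nonzero -> persistent cycling
```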

  9. Generating Testable Questions in the Science Classroom: The BDC Model

    ERIC Educational Resources Information Center

    Tseng, ChingMei; Chen, Shu-Bi; Chang, Wen-Hua

    2015-01-01

    Guiding students to generate testable scientific questions is essential in the inquiry classroom, but it is not easy. The purpose of the BDC ("Big Idea, Divergent Thinking, and Convergent Thinking") instructional model is to scaffold students' inquiry learning. We illustrate the use of this model with an example lesson, designed…

  10. Should tumbling E go out of date in amblyopia screening? Evidence from a population-based sample normative in children aged 3-4 years.

    PubMed

    Guimaraes, Sandra; Fernandes, Tiago; Costa, Patrício; Silva, Eduardo

    2018-06-01

    To determine a normative of the tumbling E optotype and its feasibility for visual acuity (VA) assessment in children aged 3-4 years. A cross-sectional study of 1756 children who were invited to participate in a comprehensive non-invasive eye exam. Uncorrected monocular VA with the crowded tumbling E, together with a comprehensive ophthalmological examination, was assessed. Testability rates of the whole population and VA of the healthy children for different age subgroups, gender, school type, and the order of testing in which the ophthalmological examination was performed were evaluated. The overall testability rate was 95% (92% and 98% for children aged 3 and 4 years, respectively). The mean VA of the first-day assessment (first-VA) and best-VA over 2 days' assessments was 0.14 logMAR (95% CI 0.14 to 0.15) (decimal=0.72, 95% CI 0.71 to 0.73) and 0.13 logMAR (95% CI 0.13 to 0.14) (decimal=0.74, 95% CI 0.73 to 0.74). Analysis with age showed differences between groups in first-VA (F(3,1146)=10.0; p<0.001; η2=0.026) and best-VA (F(3,1155)=8.8; p<0.001; η2=0.022). Our normative was very highly correlated with the previously reported HOTV Amblyopia Treatment Study (HOTV-ATS) (first-VA, r=0.97; best-VA, r=0.99), with a consistent 0.8 to 0.7 line overestimation for HOTV-ATS as described in the literature. Overall false-positive referral was 1.3%, and was especially low regarding anisometropias of ≥2 logMAR lines (0.17%). An interocular VA difference of ≥1 logMAR line was not associated with age (p=0.195). This is the first normative for European Caucasian children with the single crowded tumbling E in healthy eyes and the largest study comparing testability at ages 3 and 4 years. Testability rates are higher than those found in the literature with other optotypes, especially in children aged 3 years, where we found 5%-11% better testability rates. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  11. Models of cooperative dynamics from biomolecules to magnets

    NASA Astrophysics Data System (ADS)

    Mobley, David Lowell

    This work details application of computer models to several biological systems (prion diseases and Alzheimer's disease) and a magnetic system. These share some common themes, which are discussed. Here, simple lattice-based models are applied to aggregation of misfolded protein in prion diseases like Mad Cow disease. These can explain key features of the diseases. The modeling is based on aggregation being essential in establishing the time-course of infectivity. Growth of initial aggregates is assumed to dominate the experimentally observed lag phase. Subsequent fission, regrowth, and fission set apart the exponential doubling phase in disease progression. We explore several possible modes of growth for 2-D aggregates and suggest the model providing the best explanation for the experimental data. We develop testable predictions from this model. Like prion disease, Alzheimer's disease (AD) is an amyloid disease characterized by large aggregates in the brain. However, evidence increasingly points away from these as the toxic agent and towards oligomers of the Abeta peptide. We explore one possible toxicity mechanism---insertion of Abeta into cell membranes and formation of harmful ion channels. We find that mutations in this peptide which cause familial Alzheimer's disease (FAD) also affect the insertion of this peptide into membranes in a fairly consistent way, suggesting that this toxicity mechanism may be relevant biologically. We find a particular inserted configuration which may be especially harmful and develop testable predictions to verify whether or not this is the case. Nucleation is an essential feature of our models for prion disease, in that it protects normal, healthy individuals from getting prion disease. Nucleation is important in many other areas, and we modify our lattice-based nucleation model to apply to a hysteretic magnetic system where nucleation has been suggested to be important. From a simple model, we find qualitative agreement with experiment, and make testable experimental predictions concerning time-dependence and temperature-dependence of the major hysteresis loop and reversal curves which have been experimentally verified. We argue why this model may be suitable for systems like these and explain implications for Ising-like models. We suggest implications for future modeling work. Finally, we present suggestions for future work in all three areas.

  12. Software reliability through fault-avoidance and fault-tolerance

    NASA Technical Reports Server (NTRS)

    Vouk, Mladen A.; Mcallister, David F.

    1992-01-01

    Accomplishments in the following research areas are summarized: structure based testing, reliability growth, and design testability with risk evaluation; reliability growth models and software risk management; and evaluation of consensus voting, consensus recovery block, and acceptance voting. Four papers generated during the reporting period are included as appendices.

  13. State-of-the-Art Assessment of Testing and Testability of Custom LSI/VLSI Circuits. Volume VI. Redundancy, Testing Circuits, and Codes.

    DTIC Science & Technology

    1982-10-01

    e.g., providing voters in TMR systems and detection-switching requirements in standby-sparing systems. The application of mathematical theory of ... and time redundancy required for error detection and correction are interrelated. Mathematical modeling, when applied to fault-tolerant systems, can ...

  14. Software Tools to Support the Assessment of System Health

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin J.

    2013-01-01

    This presentation provides an overview of three software tools that were developed by the NASA Glenn Research Center to support the assessment of system health: the Propulsion Diagnostic Method Evaluation Strategy (ProDIMES), the Systematic Sensor Selection Strategy (S4), and the Extended Testability Analysis (ETA) tool. Originally developed to support specific NASA projects in aeronautics and space, these software tools are currently available to U.S. citizens through the NASA Glenn Software Catalog. The ProDIMES software tool was developed to support a uniform comparison of propulsion gas path diagnostic methods. Methods published in the open literature are typically applied to dissimilar platforms with different levels of complexity. They often address different diagnostic problems and use inconsistent metrics for evaluating performance. As a result, it is difficult to perform a one-to-one comparison of the various diagnostic methods. ProDIMES solves this problem by serving as a theme problem to aid in propulsion gas path diagnostic technology development and evaluation. The overall goal is to provide a tool that will serve as an industry standard, and will truly facilitate the development and evaluation of significant Engine Health Management (EHM) capabilities. ProDIMES has been developed under a collaborative project of The Technical Cooperation Program (TTCP) based on feedback provided by individuals within the aircraft engine health management community. The S4 software tool provides a framework that supports the optimal selection of sensors for health management assessments. S4 is structured to accommodate user-defined applications, diagnostic systems, search techniques, and system requirements/constraints. It identifies one or more sensor suites that maximize diagnostic performance while meeting other user-defined system requirements and constraints. S4 provides a systematic approach for evaluating combinations of sensors to determine the set or sets of sensors that optimally meet the performance goals and the constraints. It identifies optimal sensor suite solutions by utilizing a merit (i.e., cost) function with one of several available optimization approaches. As part of its analysis, S4 can expose fault conditions that are difficult to diagnose due to an incomplete diagnostic philosophy and/or a lack of sensors. S4 was originally developed and applied to liquid rocket engines. It was subsequently used to study the optimized selection of sensors for a simulation-based aircraft engine diagnostic system. The ETA Tool is a software-based analysis tool that augments the testability analysis and reporting capabilities of a commercial-off-the-shelf (COTS) package. An initial diagnostic assessment is performed by the COTS software using a user-developed, qualitative, directed-graph model of the system being analyzed. The ETA Tool accesses system design information captured within the model and the associated testability analysis output to create a series of six reports for various system engineering needs. These reports are highlighted in the presentation. The ETA Tool was developed by NASA to support the verification of fault management requirements early in the launch vehicle design process. Due to their early development during the design process, the TEAMS-based diagnostic model and the ETA Tool were able to positively influence the system design by highlighting gaps in failure detection, fault isolation, and failure recovery.
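
    The S4 description above mentions a merit (cost) function combined with an optimization approach for sensor selection. The following is a hedged sketch of one generic possibility, a greedy search that adds the sensor with the best detection-coverage gain per unit cost; the matrix, costs, and stopping rule are invented for illustration and this is not the actual S4 implementation.

```python
import numpy as np

def greedy_sensor_selection(D, sensor_tests, cost, target_coverage=1.0):
    """Greedily pick sensors until fault-detection coverage reaches the target.
    D: binary fault x test matrix; sensor_tests: sensor -> test column indices;
    cost: sensor -> relative cost. Merit = coverage gain per unit cost."""
    D = np.asarray(D)
    chosen, cols = [], []

    def coverage(c):
        return D[:, c].any(axis=1).mean() if c else 0.0

    while coverage(cols) < target_coverage:
        best, best_merit = None, 0.0
        for s, tests in sensor_tests.items():
            if s in chosen:
                continue
            gain = coverage(cols + list(tests)) - coverage(cols)
            merit = gain / cost[s]
            if merit > best_merit:
                best, best_merit = s, merit
        if best is None:            # no remaining sensor improves coverage
            break
        chosen.append(best)
        cols.extend(sensor_tests[best])
    return chosen, coverage(cols)

# Toy example: 3 faults, 3 tests, each provided by one of three candidate sensors
D = [[1, 0, 0], [0, 1, 0], [0, 1, 1]]
print(greedy_sensor_selection(D, {"A": [0], "B": [1], "C": [2]},
                              {"A": 1.0, "B": 2.0, "C": 1.0}))
```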

  15. Easily Testable PLA-Based Finite State Machines

    DTIC Science & Technology

    1989-03-01

    ... PLATYPUS [20]. Then, justification paths are obtained from the STG using PLATYPUS [20] ... faults of type 1, 4, and 5 can be guaranteed to be testable via simple logic ... a vector pair generated by the first corrupted next-state line is found, if such a vector pair exists, using PLATYPUS [20].

  16. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
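
    A minimal worked example of the flow-down idea described above, assuming independent error sources combined by root-sum-square (RSS) and a proportional allocation driven by illustrative weights; the subsystem names and the 10-unit top-level requirement are hypothetical.

```python
import numpy as np

def rss_rollup(allocations):
    """Combine independent sub-allocation errors by root-sum-square."""
    return float(np.sqrt(np.sum(np.square(list(allocations.values())))))

def allocate(top_level_requirement, weights):
    """Split a top-level allowable error among components so the RSS of the
    pieces equals the requirement, in proportion to the given weights."""
    w = np.asarray(list(weights.values()), dtype=float)
    shares = top_level_requirement * w / np.sqrt(np.sum(w ** 2))
    return dict(zip(weights, shares))

# Toy example: a 10-unit error budget split across three hypothetical subsystems
budget = allocate(10.0, {"structure": 2.0, "thermal": 1.0, "control": 2.0})
print(budget)               # component allocations
print(rss_rollup(budget))   # ~10.0, consistent with the top-level requirement
```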

  17. Bayesian naturalness, simplicity, and testability applied to the B ‑ L MSSM GUT

    NASA Astrophysics Data System (ADS)

    Fundira, Panashe; Purves, Austin

    2018-04-01

    Recent years have seen increased use of Bayesian model comparison to quantify notions such as naturalness, simplicity, and testability, especially in the area of supersymmetric model building. After demonstrating that Bayesian model comparison can resolve a paradox that has been raised in the literature concerning the naturalness of the proton mass, we apply Bayesian model comparison to GUTs, an area to which it has not been applied before. We find that the GUTs are substantially favored over the nonunifying puzzle model. Of the GUTs we consider, the B ‑ L MSSM GUT is the most favored, but the MSSM GUT is almost equally favored.
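
    A toy sketch of Bayesian model comparison of the kind invoked above: the evidence (marginal likelihood) of each model is estimated by averaging the likelihood over prior draws, so a model with the same best fit but a much larger prior volume is penalized. The Gaussian likelihood, uniform priors, and parameter ranges are invented toys, not the paper's GUT models.

```python
import numpy as np

rng = np.random.default_rng(0)

def evidence(loglike, prior_sampler, n=100_000):
    """Monte Carlo estimate of the marginal likelihood (Bayesian evidence):
    Z = integral of likelihood(theta) over the prior, approximated by
    averaging likelihood values at prior draws."""
    theta = prior_sampler(n)
    return np.mean(np.exp(loglike(theta)))

# Toy data and two toy models: a compact prior vs. a "fine-tuned" wide prior
data = 0.5
loglike = lambda theta: -0.5 * ((data - theta) / 0.1) ** 2   # Gaussian likelihood, sigma = 0.1

compact_prior = lambda n: rng.uniform(0.0, 1.0, n)     # parameter confined near the data
tuned_prior   = lambda n: rng.uniform(0.0, 100.0, n)   # same best fit, far larger prior volume

bayes_factor = evidence(loglike, compact_prior) / evidence(loglike, tuned_prior)
print(bayes_factor)   # ~100: the compact model is favored (Occam penalty on the wide prior)
```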

  18. The diffusion of evidence-based decision making among local health department practitioners in the United States.

    PubMed

    Harris, Jenine K; Erwin, Paul C; Smith, Carson; Brownson, Ross C

    2015-01-01

    Evidence-based decision making (EBDM) is the process, in local health departments (LHDs) and other settings, of translating the best available scientific evidence into practice. Local health departments are more likely to be successful if they use evidence-based strategies. However, EBDM and use of evidence-based strategies by LHDs are not widespread. Drawing on diffusion of innovations theory, we sought to understand how LHD directors and program managers perceive the relative advantage, compatibility, simplicity, and testability of EBDM. Directors and managers of programs in chronic disease, environmental health, and infectious disease from LHDs nationwide completed a survey including demographic information and questions about diffusion attributes (advantage, compatibility, simplicity, and testability) related to EBDM. Bivariate inferential tests were used to compare responses between directors and managers and to examine associations between participant characteristics and diffusion attributes. Relative advantage and compatibility scores were high for directors and managers, whereas simplicity and testability scores were lower. Although health department directors and managers of programs in chronic disease generally had higher scores than other groups, there were few significant or large differences between directors and managers across the diffusion attributes. Larger jurisdiction population size was associated with higher relative advantage and compatibility scores for both directors and managers. Overall, directors and managers were in strong agreement on the relative advantage of an LHD using EBDM, with directors in stronger agreement than managers. Perceived relative advantage has been demonstrated to be the most important factor in the rate of innovation adoption, suggesting an opportunity for directors to speed EBDM adoption. However, lower average scores across all groups for simplicity and testability may be hindering EBDM adoption. Recommended strategies for increasing perceived EBDM simplicity and testability are provided.

  19. A Framework for Evidence-Based Licensure of Adaptive Autonomous Systems

    DTIC Science & Technology

    2016-03-01

    insights gleaned to DoD. The autonomy community has identified significant challenges associated with test, evaluation, verification, and validation of ... licensure as a test, evaluation, verification, and validation (TEVV) framework that can address these challenges. IDA found that traditional ... language requirements to testable (preferably machine-testable) specifications • Design of architectures that treat development and verification of

  20. A Quantitative Geochemical Target for Modeling the Formation of the Earth and Moon

    NASA Technical Reports Server (NTRS)

    Boyce, Jeremy W.; Barnes, Jessica J.; McCubbin, Francis M.

    2017-01-01

    The past decade has been one of geochemical, isotopic, and computational advances that are bringing the laboratory measurements and computational modeling neighborhoods of the Earth-Moon community to ever closer proximity. We are now, however, in the position to become even better neighbors: modelers can generate testable hypotheses for geochemists; and geochemists can provide quantitative targets for modelers. Here we present a robust example of the latter based on Cl isotope measurements of mare basalts.

  1. Design for testability and diagnosis at the system-level

    NASA Technical Reports Server (NTRS)

    Simpson, William R.; Sheppard, John W.

    1993-01-01

    The growing complexity of full-scale systems has surpassed the capabilities of most simulation software to provide detailed models or gate-level failure analyses. The process of system-level diagnosis approaches the fault-isolation problem in a manner that differs significantly from the traditional and exhaustive failure mode search. System-level diagnosis is based on a functional representation of the system. For example, one can exercise one portion of a radar algorithm (the Fast Fourier Transform (FFT) function) by injecting several standard input patterns and comparing the results to standardized output results. An anomalous output would point to one of several items (including the FFT circuit) without specifying the gate or failure mode. For system-level repair, identifying an anomalous chip is sufficient. We describe here an information theoretic and dependency modeling approach that discards much of the detailed physical knowledge about the system and analyzes its information flow and functional interrelationships. The approach relies on group and flow associations and, as such, is hierarchical. Its hierarchical nature allows the approach to be applicable to any level of complexity and to any repair level. This approach has been incorporated in a product called STAMP (System Testability and Maintenance Program) which was developed and refined through more than 10 years of field-level applications to complex system diagnosis. The results have been outstanding, even spectacular in some cases. In this paper we describe system-level testability, system-level diagnoses, and the STAMP analysis approach, as well as a few STAMP applications.
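
    A small sketch of one ingredient common to dependency-model diagnosis of the kind described above: choosing the next test whose pass/fail outcome yields the largest expected reduction in ambiguity-group entropy, assuming a binary dependency matrix and equally likely faults. This illustrates the general information-theoretic idea only; it is not the STAMP algorithm.

```python
import math

def group_entropy(k):
    """Entropy (bits) of an ambiguity group of k equally likely faults."""
    return math.log2(k) if k > 1 else 0.0

def best_next_test(D, candidates, suspects):
    """Pick the test whose pass/fail outcome most reduces the expected ambiguity.
    D[i][j] = 1 if test j detects fault i; suspects is the current ambiguity group."""
    best, best_gain = None, -1.0
    h0 = group_entropy(len(suspects))
    for j in candidates:
        fail_group = [i for i in suspects if D[i][j] == 1]
        pass_group = [i for i in suspects if D[i][j] == 0]
        p_fail = len(fail_group) / len(suspects)
        expected_h = (p_fail * group_entropy(len(fail_group))
                      + (1 - p_fail) * group_entropy(len(pass_group)))
        gain = h0 - expected_h            # expected information gained from test j
        if gain > best_gain:
            best, best_gain = j, gain
    return best, best_gain

# Toy dependency matrix: 4 candidate faults, 3 available tests
D = [[1, 1, 1],
     [1, 1, 0],
     [0, 1, 0],
     [0, 1, 1]]
print(best_next_test(D, candidates=[0, 1, 2], suspects=[0, 1, 2, 3]))
# Test 1 fires for every suspect, so it carries no information; tests 0 and 2 split the group.
```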

  2. Do Groundwater Management Plans Work? A statistical evaluation of the effectiveness of groundwater management plans towards achieving water supply and environmental objectives under a changing climate.

    NASA Astrophysics Data System (ADS)

    White, E.; Peterson, T. J.; Costelloe, J. F.; Western, A. W.; Carrara, E.

    2017-12-01

    Regulation of groundwater through the use of management plans is becoming increasingly prevalent as global groundwater levels decline. But plans are seldom systematically and quantitatively assessed for effectiveness. Instead, the state of an aquifer is commonly considered a proxy for plan effectiveness despite a lack of causality. Groundwater managers face myriad challenges such as finite resources, conflicting uses, and the uncertainty inherent in any groundwater investigation. Groundwater models have been used to provide insights into what may happen to the aquifer under various levels of stress. Generally, these models simulate the impact of predefined stresses for a certain time-span. However, this is not how management occurs in reality. Managers only see a fraction of the aquifer and use this limited knowledge to make aquifer-wide decisions. Also, management changes over time in response to aquifer state, and groundwater management plans commonly contain trigger levels in monitoring wells that prompt management intervention. In this way there is a feedback between the aquifer state and management that is rarely captured by groundwater management models. To capture this management/aquifer feedback, groundwater management was structured as a systems control problem, and using this framework, a testability assessment rubric was developed. The rubric was applied to 15 Australian groundwater management plans and 47% of plans were found to be testable. To numerically quantify the effectiveness of groundwater management, the impact of extraction restrictions was probabilistically assessed by simulating "the act of management" of a simple unconfined groundwater system using MODFLOW and Flopy. Water managers were privy only to head levels in a varying number of grid cells assigned as monitoring wells, and used that limited information to make allocation decisions at each time step. Extraction rates for each simulated management period were determined based upon the observed heads from the previous management period and adjusted depending upon triggers outlined in the management plan. The effectiveness of water restrictions as a management technique for the purpose of maintaining supply reliability under various decision-making frequencies, aquifer response times, and climate scenarios was explored.
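
    A toy version of the management/aquifer feedback loop described above, with a linear reservoir standing in for the MODFLOW/Flopy model and a single trigger-level rule standing in for a management plan; all parameter values are invented for illustration.

```python
def manage_aquifer(n_periods=20, trigger=48.0, restriction=0.5,
                   base_allocation=10.0, recharge=8.0, head0=50.0, k=0.2):
    """Toy feedback loop between aquifer state and management decisions: each
    period the manager sees only the observed head, compares it to a trigger
    level from the plan, and restricts the next period's allocation.
    A linear reservoir stands in for the groundwater model used in the study."""
    head, allocation = head0, base_allocation
    history = []
    for _ in range(n_periods):
        head += k * (recharge - allocation)        # simplistic aquifer response
        observed = head                            # manager sees limited information
        allocation = (base_allocation * restriction if observed < trigger
                      else base_allocation)        # trigger-based restriction rule
        history.append((round(head, 2), round(allocation, 2)))
    return history

# Head drifts down under full allocation, crosses the trigger, and management responds
for head, allocation in manage_aquifer():
    print(head, allocation)
```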

  3. Are there two processes in reasoning? The dimensionality of inductive and deductive inferences.

    PubMed

    Stephens, Rachel G; Dunn, John C; Hayes, Brett K

    2018-03-01

    Single-process accounts of reasoning propose that the same cognitive mechanisms underlie inductive and deductive inferences. In contrast, dual-process accounts propose that these inferences depend upon 2 qualitatively different mechanisms. To distinguish between these accounts, we derived a set of single-process and dual-process models based on an overarching signal detection framework. We then used signed difference analysis to test each model against data from an argument evaluation task, in which induction and deduction judgments are elicited for sets of valid and invalid arguments. Three data sets were analyzed: data from Singmann and Klauer (2011), a database of argument evaluation studies, and the results of an experiment designed to test model predictions. Of the large set of testable models, we found that almost all could be rejected, including all 2-dimensional models. The only testable model able to account for all 3 data sets was a model with 1 dimension of argument strength and independent decision criteria for induction and deduction judgments. We conclude that despite the popularity of dual-process accounts, current results from the argument evaluation task are best explained by a single-process account that incorporates separate decision thresholds for inductive and deductive inferences. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
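
    A sketch of the favored single-process account described above, assuming a one-dimensional signal-detection model in which valid and invalid arguments differ only in mean strength and induction versus deduction judgments apply different decision criteria; the d' and criterion values are illustrative, not fitted values from the studies.

```python
import numpy as np

rng = np.random.default_rng(2)

def endorsement_rates(n=10_000, d_prime=1.5, c_induction=0.4, c_deduction=1.2):
    """One-dimensional signal-detection sketch: argument strength is normally
    distributed, valid arguments have mean d', invalid arguments have mean 0,
    and each judgment type endorses arguments whose strength exceeds its criterion."""
    strength_valid = rng.normal(d_prime, 1.0, n)
    strength_invalid = rng.normal(0.0, 1.0, n)
    rates = {}
    for label, c in (("induction", c_induction), ("deduction", c_deduction)):
        rates[label] = {"valid": (strength_valid > c).mean(),
                        "invalid": (strength_invalid > c).mean()}
    return rates

print(endorsement_rates())  # deduction's stricter criterion lowers both endorsement rates
```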

  4. Assessment of Scientific Reasoning: the Effects of Task Context, Data, and Design on Student Reasoning in Control of Variables.

    PubMed

    Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao; Bao, Lei

    2016-03-01

    Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students' abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected with students from both USA and China. Students received randomly one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) students have a tendency to equate non-influential variables to non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction.

  5. Assessment of Scientific Reasoning: the Effects of Task Context, Data, and Design on Student Reasoning in Control of Variables

    PubMed Central

    Zhou, Shaona; Han, Jing; Koenig, Kathleen; Raplinger, Amy; Pi, Yuan; Li, Dan; Xiao, Hua; Fu, Zhao

    2015-01-01

    Scientific reasoning is an important component under the cognitive strand of the 21st century skills and is highly emphasized in the new science education standards. This study focuses on the assessment of student reasoning in control of variables (COV), which is a core sub-skill of scientific reasoning. The main research question is to investigate the extent to which the existence of experimental data in questions impacts student reasoning and performance. This study also explores the effects of task contexts on student reasoning as well as students’ abilities to distinguish between testability and causal influences of variables in COV experiments. Data were collected with students from both USA and China. Students received randomly one of two test versions, one with experimental data and one without. The results show that students from both populations (1) perform better when experimental data are not provided, (2) perform better in physics contexts than in real-life contexts, and (3) students have a tendency to equate non-influential variables to non-testable variables. In addition, based on the analysis of both quantitative and qualitative data, a possible progression of developmental levels of student reasoning in control of variables is proposed, which can be used to inform future development of assessment and instruction. PMID:26949425

  6. Development of a dynamic computational model of social cognitive theory.

    PubMed

    Riley, William T; Martin, Cesar A; Rivera, Daniel E; Hekler, Eric B; Adams, Marc A; Buman, Matthew P; Pavel, Misha; King, Abby C

    2016-12-01

    Social cognitive theory (SCT) is among the most influential theories of behavior change and has been used as the conceptual basis of health behavior interventions for smoking cessation, weight management, and other health behaviors. SCT and other behavior theories were developed primarily to explain differences between individuals, but explanatory theories of within-person behavioral variability are increasingly needed as new technologies allow for intensive longitudinal measures and interventions adapted from these inputs. These within-person explanatory theoretical applications can be modeled as dynamical systems. SCT constructs, such as reciprocal determinism, are inherently dynamical in nature, but SCT has not been modeled as a dynamical system. This paper describes the development of a dynamical system model of SCT using fluid analogies and control systems principles drawn from engineering. Simulations of this model were performed to assess if the model performed as predicted based on theory and empirical studies of SCT. This initial model generates precise and testable quantitative predictions for future intensive longitudinal research. Dynamic modeling approaches provide a rigorous method for advancing health behavior theory development and refinement and for guiding the development of more potent and efficient interventions.
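
    A minimal sketch of the fluid-analogy, control-systems idea mentioned above: a first-order "tank" whose level responds to an input with gain K and time constant tau. The construct it stands for, the parameter values, and the square-pulse input are all illustrative assumptions, not the published SCT model.

```python
import numpy as np

def simulate_fluid_analogy(inflow, gain=0.8, time_constant=5.0, dt=1.0):
    """Toy first-order 'fluid tank' from control engineering:
    tau * dy/dt = -y + K * u(t). Here y could stand for a construct such as
    self-efficacy responding to an external cue u (purely illustrative)."""
    y = np.zeros(len(inflow))
    for t in range(1, len(inflow)):
        dydt = (-y[t - 1] + gain * inflow[t - 1]) / time_constant
        y[t] = y[t - 1] + dt * dydt          # forward-Euler integration step
    return y

u = np.zeros(60)
u[10:40] = 1.0                      # a sustained intervention "dose"
y = simulate_fluid_analogy(u)
print(round(y.max(), 3))            # rises toward K = 0.8, then decays after the dose ends
```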

  7. Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale

    NASA Astrophysics Data System (ADS)

    Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.

    2005-12-01

    Testing a geodynamical model is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed, and no magic recipe exists. This contribution first presents different methods that have been used to test thermo-mechanical modeling results at upper crustal, lithospheric, and upper mantle scale using three geodynamical examples: the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scale (e.g. Gulf of Corinth), the resolution of the numerical models is usually sufficient to catch the timing and kinematics of the faults precisely enough to be tested by tectono-stratigraphic arguments. In actively deforming areas, microseismicity can be compared to the effective rheology, and P and T axes of the focal mechanisms can be compared with the local orientation of the major component of the stress tensor. At lithospheric scale, the resolution of the models no longer permits constraining them by direct observations (i.e. structural data from field or seismic reflection). Instead, synthetic P-T-t paths may be computed and compared to natural ones in terms of rate of exhumation for ancient orogens. Topography may also help, but on continents it mainly depends on erosion laws that are complicated to constrain. Deeper in the mantle, the only available constraints are long-wavelength topographic data and tomographic "data". The major problem to overcome now, at lithospheric and upper mantle scale, is that the so-called "data" actually result from inverse models of the real data, and that those inverse models are based on synthetic models. Post-processing P and S wave velocities is not sufficient to make testable predictions at upper mantle scale. Instead, direct wave propagation models must be computed. This allows checking whether the differences between two models constitute a testable prediction or not. In the longer term, we may be able to use those synthetic models to reduce the residual in the inversion of elastic wave arrival time.

  8. Soy-Based Therapeutic Baby Formulas: Testable Hypotheses Regarding the Pros and Cons.

    PubMed

    Westmark, Cara J

    2016-01-01

    Soy-based infant formulas have been consumed in the United States since 1909, and currently constitute a significant portion of the infant formula market. There are efforts underway to generate genetically modified soybeans that produce therapeutic agents of interest with the intent to deliver those agents in a soy-based infant formula platform. The threefold purpose of this review article is to first discuss the pros and cons of soy-based infant formulas, then present testable hypotheses to discern the suitability of a soy platform for drug delivery in babies, and finally start a discussion to inform public policy on this important area of infant nutrition.

  9. Nursing theory and concept development: a theoretical model of clinical nurses' intentions to stay in their current positions.

    PubMed

    Cowden, Tracy L; Cummings, Greta G

    2012-07-01

    We describe a theoretical model of staff nurses' intentions to stay in their current positions. The global nursing shortage and high nursing turnover rate demand evidence-based retention strategies. Inconsistent study outcomes indicate a need for testable theoretical models of intent to stay that build on previously published models, are reflective of current empirical research and identify causal relationships between model concepts. Two systematic reviews of electronic databases of English language published articles between 1985-2011. This complex, testable model expands on previous models and includes nurses' affective and cognitive responses to work and their effects on nurses' intent to stay. The concepts of desire to stay, job satisfaction, joy at work, and moral distress are included in the model to capture the emotional response of nurses to their work environments. The influence of leadership is integrated within the model. A causal understanding of clinical nurses' intent to stay and the effects of leadership on the development of that intention will facilitate the development of effective retention strategies internationally. Testing theoretical models is necessary to confirm previous research outcomes and to identify plausible sequences of the development of behavioral intentions. Increased understanding of the causal influences on nurses' intent to stay should lead to strategies that may result in higher retention rates and numbers of nurses willing to work in the health sector. © 2012 Blackwell Publishing Ltd.

  10. Embedded performance validity testing in neuropsychological assessment: Potential clinical tools.

    PubMed

    Rickards, Tyler A; Cranston, Christopher C; Touradji, Pegah; Bechtold, Kathleen T

    2018-01-01

    The article aims to suggest clinically useful tools in neuropsychological assessment for efficient use of embedded measures of performance validity. To accomplish this, we integrated available validity-related and statistical research from the literature, consensus statements, and survey-based data from practicing neuropsychologists. We provide recommendations for use of 1) Cutoffs for embedded performance validity tests including Reliable Digit Span, California Verbal Learning Test (Second Edition) Forced Choice Recognition, Rey-Osterrieth Complex Figure Test Combination Score, Wisconsin Card Sorting Test Failure to Maintain Set, and the Finger Tapping Test; 2) Selecting the number of performance validity measures to administer in an assessment; and 3) Hypothetical clinical decision-making models for use of performance validity testing in a neuropsychological assessment, collectively considering behavior, patient reporting, and data indicating invalid or noncredible performance. Performance validity testing helps inform the clinician about an individual's general approach to tasks: response to failure, task engagement and persistence, and compliance with task demands. These data-driven clinical suggestions provide a resource to clinicians, are intended to instigate conversation within the field toward more uniform, testable decisions, and can help guide future research in this area.

  11. Modelling protein functional domains in signal transduction using Maude

    NASA Technical Reports Server (NTRS)

    Sriram, M. G.

    2003-01-01

    Modelling of protein-protein interactions in signal transduction is receiving increased attention in computational biology. This paper describes recent research in the application of Maude, a symbolic language founded on rewriting logic, to the modelling of functional domains within signalling proteins. Protein functional domains (PFDs) are a critical focus of modern signal transduction research. In general, Maude models can simulate biological signalling networks and produce specific testable hypotheses at various levels of abstraction. Developing symbolic models of signalling proteins containing functional domains is important because of the potential to generate analyses of complex signalling networks based on structure-function relationships.

  12. Assessing the Health of Individuals and Populations in Surveys of the Elderly: Some Concepts and Approaches.

    ERIC Educational Resources Information Center

    Wallace, Robert B.

    1994-01-01

    Health survey research assesses health of individuals in population. Measures include prevalence/incidence of diseases, signs/symptoms, functional states, and health services utilization. Although assessing individual biologic robustness can be problematic, testable approaches do exist. Characteristics of health of populations/communities, not…

  13. Two fundamental questions about protein evolution.

    PubMed

    Penny, David; Zhong, Bojian

    2015-12-01

    Two basic questions are considered that approach protein evolution from different directions: the problems arising from using Markov models for the deeper divergences, and then the origin of proteins themselves. The real problem for the first question (going backwards in time) is that the Markov models of sequence evolution must lose information exponentially at deeper divergences, and several testable methods are suggested that should help resolve these deeper divergences. For the second question (coming forwards in time) a problem is that most models for the origin of protein synthesis do not give a role for the very earliest stages of the process. From our knowledge of the importance of replication accuracy in limiting the length of a coding molecule, a testable hypothesis is proposed. The length of the code, the code itself, and tRNAs would all have prior roles in increasing the accuracy of RNA replication; thus proteins would have been formed only after the tRNAs and the length of the triplet code are already formed. Both questions lead to testable predictions. Copyright © 2014 Elsevier B.V. and Société Française de Biochimie et Biologie Moléculaire (SFBBM). All rights reserved.

  14. Soy-Based Therapeutic Baby Formulas: Testable Hypotheses Regarding the Pros and Cons

    PubMed Central

    Westmark, Cara J.

    2017-01-01

    Soy-based infant formulas have been consumed in the United States since 1909, and currently constitute a significant portion of the infant formula market. There are efforts underway to generate genetically modified soybeans that produce therapeutic agents of interest with the intent to deliver those agents in a soy-based infant formula platform. The threefold purpose of this review article is to first discuss the pros and cons of soy-based infant formulas, then present testable hypotheses to discern the suitability of a soy platform for drug delivery in babies, and finally start a discussion to inform public policy on this important area of infant nutrition. PMID:28149839

  15. Proposed Model of the Neurobiological Mechanisms Underlying Psychosocial Alcohol Interventions: The Example of Motivational Interviewing*

    PubMed Central

    Feldstein Ewing, Sarah W.; Filbey, Francesca M.; Hendershot, Christian S.; McEachern, Amber D.; Hutchison, Kent E.

    2011-01-01

    Objective: Despite the prevalence and profound consequences of alcohol use disorders, psychosocial alcohol interventions have widely varying outcomes. The range of behavior following psychosocial alcohol treatment indicates the need to gain a better understanding of active ingredients and how they may operate. Although this is an area of great interest, at this time there is a limited understanding of how in-session behaviors may catalyze changes in the brain and subsequent alcohol use behavior. Thus, in this review, we aim to identify the neurobiological routes through which psychosocial alcohol interventions may lead to post-session behavior change as well as offer an approach to conceptualize and evaluate these translational relationships. Method: PubMed and PsycINFO searches identified studies that successfully integrated functional magnetic resonance imaging and psychosocial interventions. Results: Based on this research, we identified potential neurobiological substrates through which behavioral alcohol interventions may initiate and sustain behavior change. In addition, we proposed a testable model linking within-session active ingredients to outside-of-session behavior change. Conclusions: Through this review, we present a testable translational model. Additionally, we illustrate how the proposed model can help facilitate empirical evaluations of psychotherapeutic factors and their underlying neural mechanisms, both in the context of motivational interviewing and in the treatment of alcohol use disorders. PMID:22051204

  16. Sources, Sinks, and Model Accuracy

    EPA Science Inventory

    Spatial demographic models are a necessary tool for understanding how to manage landscapes sustainably for animal populations. These models, therefore, must offer precise and testable predictions about animal population dynamics and how animal demographic parameters respond to ...

  17. Simulating Cancer Growth with Multiscale Agent-Based Modeling

    PubMed Central

    Wang, Zhihui; Butner, Joseph D.; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S.

    2014-01-01

    There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. PMID:24793698
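
    A toy lattice agent-based growth model in the spirit of the methods reviewed above: each occupied site may die or divide into an empty neighbor with fixed probabilities. The grid size, probabilities, and update rule are invented for illustration and correspond to no specific published model.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(grid, p_divide=0.3, p_die=0.02):
    """One update of a toy lattice agent-based tumor model: each occupied site
    may die, or divide into a randomly chosen empty von Neumann neighbor."""
    new = grid.copy()
    n, m = grid.shape
    for i, j in zip(*np.nonzero(grid)):
        if rng.random() < p_die:
            new[i, j] = 0
            continue
        if rng.random() < p_divide:
            nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < n and 0 <= j + dj < m and new[i + di, j + dj] == 0]
            if nbrs:
                new[nbrs[rng.integers(len(nbrs))]] = 1   # place a daughter cell
    return new

grid = np.zeros((50, 50), dtype=int)
grid[25, 25] = 1                      # a single seed cell
for _ in range(100):
    grid = step(grid)
print(grid.sum(), "cells after 100 steps")   # a growing cluster from one agent
```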

  18. A bioinformatics expert system linking functional data to anatomical outcomes in limb regeneration

    PubMed Central

    Lobo, Daniel; Feldman, Erica B.; Shah, Michelle; Malone, Taylor J.

    2014-01-01

    Abstract Amphibians and molting arthropods have the remarkable capacity to regenerate amputated limbs, as described by an extensive literature of experimental cuts, amputations, grafts, and molecular techniques. Despite a rich history of experimental effort, no comprehensive mechanistic model exists that can account for the pattern regulation observed in these experiments. While bioinformatics algorithms have revolutionized the study of signaling pathways, no such tools have heretofore been available to assist scientists in formulating testable models of large‐scale morphogenesis that match published data in the limb regeneration field. Major barriers to preventing an algorithmic approach are the lack of formal descriptions for experimental regenerative information and a repository to centralize storage and mining of functional data on limb regeneration. Establishing a new bioinformatics of shape would significantly accelerate the discovery of key insights into the mechanisms that implement complex regeneration. Here, we describe a novel mathematical ontology for limb regeneration to unambiguously encode phenotype, manipulation, and experiment data. Based on this formalism, we present the first centralized formal database of published limb regeneration experiments together with a user‐friendly expert system tool to facilitate its access and mining. These resources are freely available for the community and will assist both human biologists and artificial intelligence systems to discover testable, mechanistic models of limb regeneration. PMID:25729585

  19. Two New Empirically Derived Reasons To Use the Assessment of Basic Learning Abilities.

    ERIC Educational Resources Information Center

    Richards, David F.; Williams, W. Larry; Follette, William C.

    2002-01-01

    Scores on the Assessment of Basic Learning Abilities (ABLA), Vineland Adaptive Behavior Scales, and the Wechsler Intelligences Scale-Revised (WAIS-R) were obtained for 30 adults with mental retardation. Correlations between the Vineland domains and ABLA were all significant. No participants performing below ABLA Level 6 were testable on the…

  20. Anxiety psychopathology in African American adults: literature review and development of an empirically informed sociocultural model.

    PubMed

    Hunter, Lora Rose; Schmidt, Norman B

    2010-03-01

    In this review, the extant literature concerning anxiety psychopathology in African American adults is summarized to develop a testable, explanatory framework with implications for future research. The model was designed to account for purported lower rates of anxiety disorders in African Americans compared to European Americans, along with other ethnoracial differences reported in the literature. Three specific beliefs or attitudes related to the sociocultural experience of African Americans are identified: awareness of racism, stigma of mental illness, and salience of physical illnesses. In our model, we propose that these psychological processes influence interpretations and behaviors relevant to the expression of nonpathological anxiety as well as features of diagnosable anxiety conditions. Moreover, differences in these processes may explain the differential assessed rates of anxiety disorders in African Americans. The model is discussed in the context of existing models of anxiety etiology. Specific follow-up research is also suggested, along with implications for clinical assessment, diagnosis, and treatment.

  1. A systems analysis of the erythropoietic responses to weightlessness. Volume 1: Mathematical model simulations of the erythropoietic responses to weightlessness

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1985-01-01

    Theoretical responses to weightlessness are summarized. The studies include development and validation of a model of erythropoiesis regulation, analysis of the behavior of erythropoiesis under a variety of conditions, simulations of bed rest and space flight, and an evaluation of ground-based animal studies which were conducted as analogs of zero-g. A review of all relevant space flight findings and a set of testable hypotheses which attempt to explain how red cell mass decreases in space flight are presented. An additional document describes details of the mathematical model used in these studies.

  2. What are health-related users tweeting? A qualitative content analysis of health-related users and their messages on twitter.

    PubMed

    Lee, Joy L; DeCamp, Matthew; Dredze, Mark; Chisolm, Margaret S; Berger, Zackary D

    2014-10-15

    Twitter is home to many health professionals who send messages about a variety of health-related topics. Amid concerns about physicians posting inappropriate content online, more in-depth knowledge about these messages is needed to understand health professionals' behavior on Twitter. Our goal was to characterize the content of Twitter messages, specifically focusing on health professionals and their tweets relating to health. We performed an in-depth content analysis of 700 tweets. Qualitative content analysis was conducted on tweets by health users on Twitter. The primary objective was to describe the general type of content (ie, health-related versus non-health related) on Twitter authored by health professionals and further to describe health-related tweets on the basis of the type of statement made. Specific attention was given to whether a tweet was personal (as opposed to professional) or made a claim that users would expect to be supported by some level of medical evidence (ie, a "testable" claim). A secondary objective was to compare content types among different users, including patients, physicians, nurses, health care organizations, and others. Health-related users are posting a wide range of content on Twitter. Among health-related tweets, 53.2% (184/346) contained a testable claim. Of health-related tweets by providers, 17.6% (61/346) were personal in nature; 61% (59/96) made testable statements. While organizations and businesses use Twitter to promote their services and products, patient advocates are using this tool to share their personal experiences with health. Twitter users in health-related fields tweet about both testable claims and personal experiences. Future work should assess the relationship between testable tweets and the actual level of evidence supporting them, including how Twitter users-especially patients-interpret the content of tweets posted by health providers.

  3. Simulating cancer growth with multiscale agent-based modeling.

    PubMed

    Wang, Zhihui; Butner, Joseph D; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S

    2015-02-01

    There have been many techniques developed in recent years to model a variety of cancer behaviors in silico. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE PAGES

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    2016-11-08

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
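
    As an illustration only (not the authors' published model), the following minimal Python sketch integrates a single-compartment balance dC/dt = k_in * C_out - k_out * C whose import/export rates switch when a second phase begins, mimicking the two-phase response and acclimation described above; all parameter names and values are hypothetical.

        # Minimal illustrative sketch: internal drug level C follows
        # dC/dt = k_in * C_out - k_out * C, with rates that switch at t_switch
        # to mimic a second phase followed by acclimation. Hypothetical values.
        def simulate_accumulation(t_end=60.0, dt=0.01, c_out=1.0,
                                  k_in1=0.5, k_out1=0.1,   # phase-1 import/export rates
                                  k_in2=0.2, k_out2=0.3,   # phase-2 (acclimation) rates
                                  t_switch=20.0):
            times, conc = [0.0], [0.0]
            while times[-1] < t_end:
                t = times[-1]
                k_in, k_out = (k_in1, k_out1) if t < t_switch else (k_in2, k_out2)
                conc.append(conc[-1] + (k_in * c_out - k_out * conc[-1]) * dt)
                times.append(t + dt)
            return times, conc

        if __name__ == "__main__":
            t, c = simulate_accumulation()
            print("internal drug just before phase 2: %.3f" % c[int(20.0 / 0.01)])
            print("internal drug after acclimation:   %.3f" % c[-1])

    A sketch of this kind makes the abstract's estimable quantities concrete: the rate constants, the switch time, and the decline of internal drug during acclimation correspond to the parameters a fit to measured time courses would recover.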

  5. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  6. Earthquake Forecasting System in Italy

    NASA Astrophysics Data System (ADS)

    Falcone, G.; Marzocchi, W.; Murru, M.; Taroni, M.; Faenza, L.

    2017-12-01

    In Italy, after the 2009 L'Aquila earthquake, a procedure was developed for gathering and disseminating authoritative information about the time dependence of seismic hazard to help communities prepare for a potentially destructive earthquake. The most striking time dependency of the earthquake occurrence process is the time clustering, which is particularly pronounced in time windows of days and weeks. The Operational Earthquake Forecasting (OEF) system developed at the Seismic Hazard Center (Centro di Pericolosità Sismica, CPS) of the Istituto Nazionale di Geofisica e Vulcanologia (INGV) is the authoritative source of seismic hazard information for Italian Civil Protection. The philosophy of the system rests on a few basic concepts: transparency, reproducibility, and testability. In particular, the transparent, reproducible, and testable earthquake forecasting system developed at CPS is based on ensemble modeling and on a rigorous testing phase. This phase is carried out according to the guidance proposed by the Collaboratory for the Study of Earthquake Predictability (CSEP, an international infrastructure aimed at quantitatively evaluating earthquake prediction and forecast models through purely prospective and reproducible experiments). In the OEF system, the two most popular short-term models were used: the Epidemic-Type Aftershock Sequences (ETAS) model and the Short-Term Earthquake Probabilities (STEP) model. Here, we report the results of the OEF system's 24-hour earthquake forecasts during the main phases of the 2016-2017 sequence that occurred in the Central Apennines (Italy).
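
    For orientation, the sketch below evaluates the standard textbook form of the ETAS conditional intensity mentioned above, lambda(t) = mu + sum over past events of K * exp(alpha * (M_i - M0)) / (t - t_i + c)^p. It is a generic illustration only; the parameter values are hypothetical and are not those calibrated in the INGV OEF system.

        # Textbook ETAS conditional intensity, illustration only (hypothetical values).
        import math

        def etas_intensity(t, catalog, mu=0.1, K=0.05, alpha=1.5, c=0.01, p=1.1, M0=3.0):
            """Expected events per unit time at time t, given a catalog of (t_i, M_i) pairs."""
            rate = mu
            for t_i, m_i in catalog:
                if t_i < t:
                    rate += K * math.exp(alpha * (m_i - M0)) / (t - t_i + c) ** p
            return rate

        if __name__ == "__main__":
            # Hypothetical catalog: (origin time in days, magnitude)
            catalog = [(0.0, 6.0), (0.5, 4.2), (2.0, 5.1)]
            print("rate one day after the last event: %.3f" % etas_intensity(3.0, catalog))

    In forecasting systems of this kind, time-varying rates such as this are typically converted into probabilities of occurrence over the forecast window, which is what a 24-hour forecast reports.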

  7. Linking short-term responses to ecologically-relevant outcomes

    EPA Pesticide Factsheets

    An opportunity to participate in collaborative, integrative laboratory, field, and modelling efforts to characterize molecular-to-organismal level responses and make quantitative, testable predictions of population-level outcomes.

  8. Astrobiological Phase Transition: Towards Resolution of Fermi's Paradox

    NASA Astrophysics Data System (ADS)

    Ćirković, Milan M.; Vukotić, Branislav

    2008-12-01

    Can astrophysics explain Fermi’s paradox or the “Great Silence” problem? If available, such explanation would be advantageous over most of those suggested in literature which rely on unverifiable cultural and/or sociological assumptions. We suggest, instead, a general astrobiological paradigm which might offer a physical and empirically testable paradox resolution. Based on the idea of James Annis, we develop a model of an astrobiological phase transition of the Milky Way, based on the concept of the global regulation mechanism(s). The dominant regulation mechanisms, arguably, are γ-ray bursts, whose properties and cosmological evolution are becoming well-understood. Secular evolution of regulation mechanisms leads to the brief epoch of phase transition: from an essentially dead place, with pockets of low-complexity life restricted to planetary surfaces, it will, on a short (Fermi-Hart) timescale, become filled with high-complexity life. An observation selection effect explains why we are not, in spite of the very small prior probability, to be surprised at being located in that brief phase of disequilibrium. In addition, we show that, although the phase-transition model may explain the “Great Silence”, it is not supportive of the “contact pessimist” position. To the contrary, the phase-transition model offers a rational motivation for continuation and extension of our present-day Search for ExtraTerrestrial Intelligence (SETI) endeavours. Some of the unequivocal and testable predictions of our model include the decrease of extinction risk in the history of terrestrial life, the absence of any traces of Galactic societies significantly older than human society, complete lack of any extragalactic intelligent signals or phenomena, and the presence of ubiquitous low-complexity life in the Milky Way.

  9. Astrobiological phase transition: towards resolution of Fermi's paradox.

    PubMed

    Cirković, Milan M; Vukotić, Branislav

    2008-12-01

    Can astrophysics explain Fermi's paradox or the "Great Silence" problem? If available, such explanation would be advantageous over most of those suggested in literature which rely on unverifiable cultural and/or sociological assumptions. We suggest, instead, a general astrobiological paradigm which might offer a physical and empirically testable paradox resolution. Based on the idea of James Annis, we develop a model of an astrobiological phase transition of the Milky Way, based on the concept of the global regulation mechanism(s). The dominant regulation mechanisms, arguably, are gamma-ray bursts, whose properties and cosmological evolution are becoming well-understood. Secular evolution of regulation mechanisms leads to the brief epoch of phase transition: from an essentially dead place, with pockets of low-complexity life restricted to planetary surfaces, it will, on a short (Fermi-Hart) timescale, become filled with high-complexity life. An observation selection effect explains why we are not, in spite of the very small prior probability, to be surprised at being located in that brief phase of disequilibrium. In addition, we show that, although the phase-transition model may explain the "Great Silence", it is not supportive of the "contact pessimist" position. To the contrary, the phase-transition model offers a rational motivation for continuation and extension of our present-day Search for ExtraTerrestrial Intelligence (SETI) endeavours. Some of the unequivocal and testable predictions of our model include the decrease of extinction risk in the history of terrestrial life, the absence of any traces of Galactic societies significantly older than human society, complete lack of any extragalactic intelligent signals or phenomena, and the presence of ubiquitous low-complexity life in the Milky Way.

  10. Architectural Analysis of Dynamically Reconfigurable Systems

    NASA Technical Reports Server (NTRS)

    Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly

    2010-01-01

    Topics include: the problem (increased flexibility of architectural styles decreases analyzability, behavior emerges and varies depending on the configuration, does the resulting system run according to the intended design, and architectural decisions can impede or facilitate testing); top-down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from the new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; CFS examples of abstract interfaces and testability; and a CFS example of opening some internal details.

  11. Scientific realism and wishful thinking in soil hydrology

    NASA Astrophysics Data System (ADS)

    Flühler, H.

    2009-04-01

    In our field we often learn - or could have learned - more from failures than from successes, provided we had postulated testable hypotheses to be accepted or rejected. In soil hydrology, hypotheses are testable if independent information quantifying the pertinent system features is at hand. This view on how to operate is an idealized concept of how we could or should have worked. In reality, the path to success is more tortuous and we usually progress differently, obeying other professional musts. Although we missed some shortcuts over the past few decades, we definitely made significant progress in understanding vadose zone processes, but we could have advanced our system understanding faster by more rigorously questioning the fundamental assumptions. I will try to illustrate the tortuous path of learning and identify some causes of the slowed-down learning curve. In the pioneering phase of vadose zone research many models have been mapped in our minds and implemented on our computers. Many of them are now well established, powerful and represent the state of the art even when they do not work. Some of them are based on erroneous or misleading concepts. Even when based on adequate concepts, they might have been applied in the wrong context, or inadequate models may have led to apparent success. I address this process of collective learning with the intention that we spend more time and effort finding the right questions instead of improving tools that are of questionable suitability for solving the main problems.

  12. Oyster reefs can outpace sea-level rise

    NASA Astrophysics Data System (ADS)

    Rodriguez, Antonio B.; Fodrie, F. Joel; Ridge, Justin T.; Lindquist, Niels L.; Theuerkauf, Ethan J.; Coleman, Sara E.; Grabowski, Jonathan H.; Brodeur, Michelle C.; Gittman, Rachel K.; Keller, Danielle A.; Kenworthy, Matthew D.

    2014-06-01

    In the high-salinity seaward portions of estuaries, oysters seek refuge from predation, competition and disease in intertidal areas, but this sanctuary will be lost if vertical reef accretion cannot keep pace with sea-level rise (SLR). Oyster-reef abundance has already declined ~85% globally over the past 100 years, mainly from overharvesting, making any additional losses due to SLR cause for concern. Before any assessment of reef response to accelerated SLR can be made, direct measures of reef growth are necessary. Here, we present direct measurements of intertidal oyster-reef growth from cores and terrestrial lidar-derived digital elevation models. On the basis of our measurements collected within a mid-Atlantic estuary over a 15-year period, we developed a globally testable empirical model of intertidal oyster-reef accretion. We show that previous estimates of vertical reef growth, based on radiocarbon dates and bathymetric maps, may be greater than one order of magnitude too slow. The intertidal reefs we studied should be able to keep up with any future accelerated rate of SLR and may even benefit from the additional subaqueous space allowing extended vertical accretion.

  13. The use of models to predict potential contamination aboard orbital vehicles

    NASA Technical Reports Server (NTRS)

    Boraas, Martin E.; Seale, Dianne B.

    1989-01-01

    A model of fungal growth on air-exposed, nonnutritive solid surfaces, developed for utilization aboard orbital vehicles, is presented. A unique feature of this testable model is that the development of a fungal mycelium can facilitate its own growth by condensation of water vapor from its environment directly onto fungal hyphae. The fungal growth rate is limited by the rate of supply of volatile nutrients, and fungal biomass is limited by either the supply of nonvolatile nutrients or by metabolic loss processes. The model discussed is structurally simple, but its dynamics can be quite complex. Biofilm accumulation can vary from a simple linear increase to sustained exponential growth, depending on the values of the environmental variables and model parameters. The results of the model are consistent with data from aquatic biofilm studies, insofar as the two types of systems are comparable. It is shown that the model presented is experimentally testable and provides a platform for the interpretation of observational data that may be directly relevant to the question of growth of organisms aboard the proposed Space Station.

  14. Probability judgments under ambiguity and conflict

    PubMed Central

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at “best” probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed, that differs substantially from extant paradigms in the literature. This framework yields new models of “best” estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity. PMID:26042081

  15. Probability judgments under ambiguity and conflict.

    PubMed

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at "best" probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed, that differs substantially from extant paradigms in the literature. This framework yields new models of "best" estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity.

  16. Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines

    DTIC Science & Technology

    1989-09-01

    Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines. Srinivas Devadas and Kurt Keutzer. Abstract: In this... Projects Agency under contract number N00014-87-K-0825. Author information: Devadas, Department of Electrical Engineering and Computer Science, Room 36... MA 02139; (617) 253-0292.

  17. Quantitative model of price diffusion and market friction based on trading as a mechanistic random process.

    PubMed

    Daniels, Marcus G; Farmer, J Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-14

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.
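
    As a rough, hypothetical illustration of the modelling assumption described above (not the authors' code), the sketch below lets limit orders, market orders and cancellations occur as independent Poisson-like events on a discrete price grid and tracks the resulting mid-price; all rates, grid sizes and horizons are invented for the example.

        # Minimal "zero-intelligence" sketch of random order flow on a price grid.
        # Illustration only; rates and sizes are hypothetical.
        import random

        def simulate_book(steps=10000, n_prices=200, lam_limit=1.0,
                          lam_market=0.2, lam_cancel=0.05, seed=1):
            random.seed(seed)
            bids, asks = {}, {}              # price level -> number of resting orders
            mid = n_prices // 2              # start the mid-price at the grid centre
            mids, total = [], lam_limit + lam_market + lam_cancel
            for _ in range(steps):
                u = random.random() * total
                side = random.choice(("bid", "ask"))
                book = bids if side == "bid" else asks
                if u < lam_limit:                    # new limit order placed near the mid
                    offset = random.randint(1, 10)
                    price = mid - offset if side == "bid" else mid + offset
                    book[price] = book.get(price, 0) + 1
                elif u < lam_limit + lam_market:     # market order hits the best quote
                    opp = asks if side == "bid" else bids
                    if opp:
                        best = min(opp) if side == "bid" else max(opp)
                        opp[best] -= 1
                        if opp[best] == 0:
                            del opp[best]
                elif book:                           # cancel one randomly chosen resting order
                    price = random.choice(list(book))
                    book[price] -= 1
                    if book[price] == 0:
                        del book[price]
                if bids and asks:
                    mid = (max(bids) + min(asks)) / 2.0
                mids.append(mid)
            return mids

        if __name__ == "__main__":
            path = simulate_book()
            print("final mid-price:", path[-1])

    Even this stripped-down random order flow produces a wandering mid-price and a fluctuating spread, which is the kind of emergent behaviour the paper analyses quantitatively in terms of order-flow rates.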

  18. Quantitative Model of Price Diffusion and Market Friction Based on Trading as a Mechanistic Random Process

    NASA Astrophysics Data System (ADS)

    Daniels, Marcus G.; Farmer, J. Doyne; Gillemot, László; Iori, Giulia; Smith, Eric

    2003-03-01

    We model trading and price formation in a market under the assumption that order arrival and cancellations are Poisson random processes. This model makes testable predictions for the most basic properties of markets, such as the diffusion rate of prices (which is the standard measure of financial risk) and the spread and price impact functions (which are the main determinants of transaction cost). Guided by dimensional analysis, simulation, and mean-field theory, we find scaling relations in terms of order flow rates. We show that even under completely random order flow the need to store supply and demand to facilitate trading induces anomalous diffusion and temporal structure in prices.

  19. Ewing's Sarcoma: Development of RNA Interference-Based Therapy for Advanced Disease

    PubMed Central

    Simmons, Olivia; Maples, Phillip B.; Senzer, Neil; Nemunaitis, John

    2012-01-01

    Ewing's sarcoma tumors are associated with chromosomal translocation between the EWS gene and the ETS transcription factor gene. These unique target sequences provide opportunity for RNA interference(i)-based therapy. A summary of RNAi mechanism and therapeutically designed products including siRNA, shRNA and bi-shRNA are described. Comparison is made between each of these approaches. Systemic RNAi-based therapy, however, requires protected delivery to the Ewing's sarcoma tumor site for activity. Delivery systems which have been most effective in preclinical and clinical testing are reviewed, followed by preclinical assessment of various silencing strategies with demonstration of effectiveness to EWS/FLI-1 target sequences. It is concluded that RNAi-based therapeutics may have testable and achievable activity in management of Ewing's sarcoma. PMID:22523703

  20. Integrated PK-PD and agent-based modeling in oncology.

    PubMed

    Wang, Zhihui; Butner, Joseph D; Cristini, Vittorio; Deisboeck, Thomas S

    2015-04-01

    Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed.

  1. Integrated PK-PD and Agent-Based Modeling in Oncology

    PubMed Central

    Wang, Zhihui; Butner, Joseph D.; Cristini, Vittorio

    2016-01-01

    Mathematical modeling has become a valuable tool that strives to complement conventional biomedical research modalities in order to predict experimental outcome, generate new medical hypotheses, and optimize clinical therapies. Two specific approaches, pharmacokinetic-pharmacodynamic (PK-PD) modeling, and agent-based modeling (ABM), have been widely applied in cancer research. While they have made important contributions on their own (e.g., PK-PD in examining chemotherapy drug efficacy and resistance, and ABM in describing and predicting tumor growth and metastasis), only a few groups have started to combine both approaches together in an effort to gain more insights into the details of drug dynamics and the resulting impact on tumor growth. In this review, we focus our discussion on some of the most recent modeling studies building on a combined PK-PD and ABM approach that have generated experimentally testable hypotheses. Some future directions are also discussed. PMID:25588379

  2. Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model

    NASA Astrophysics Data System (ADS)

    Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.

    2016-02-01

    Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.

  3. The four hundred years of planetary science since Galileo and Kepler.

    PubMed

    Burns, Joseph A

    2010-07-29

    For 350 years after Galileo's discoveries, ground-based telescopes and theoretical modelling furnished everything we knew about the Sun's planetary retinue. Over the past five decades, however, spacecraft visits to many targets transformed these early notions, revealing the diversity of Solar System bodies and displaying active planetary processes at work. Violent events have punctuated the histories of many planets and satellites, changing them substantially since their birth. Contemporary knowledge has finally allowed testable models of the Solar System's origin to be developed and potential abodes for extraterrestrial life to be explored. Future planetary research should involve focused studies of selected targets, including exoplanets.

  4. Left-handed and right-handed U(1) gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-01-01

    We propose a model with left-handed and right-handed continuous Abelian gauge symmetries, U(1)_L × U(1)_R. Three right-handed neutrinos are then naturally required to achieve U(1)_R anomaly cancellation, while several mirror fermions are also needed for U(1)_L anomaly cancellation. We then formulate the model and discuss the testability of the new gauge interactions at colliders such as the Large Hadron Collider (LHC) and the International Linear Collider (ILC). In particular, the chiral structure of the interactions can be investigated through an analysis of the forward-backward asymmetry using polarized beams at the ILC.

  5. A Unified Approach to the Synthesis of Fully Testable Sequential Machines

    DTIC Science & Technology

    1989-10-01

    A Unified Approach to the Synthesis of Fully Testable Sequential Machines. Srinivas Devadas and Kurt Keutzer. Abstract: In this paper we attempt to... research was supported in part by the Defense Advanced Research Projects Agency under contract N00014-87-K-0825. Author information: Devadas, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.

  6. Factors That Affect Software Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.

    1991-01-01

    Software faults that infrequently affect software's output are dangerous. When a software fault causes frequent software failures, testing is likely to reveal the fault before the software is released; when the fault remains undetected during testing, it can cause disaster after the software is installed. A technique for predicting whether a particular piece of software is likely to reveal faults within itself during testing is found in [Voas91b]. A piece of software that is likely to reveal faults within itself during testing is said to have high testability. A piece of software that is not likely to reveal faults within itself during testing is said to have low testability. It is preferable to design software with higher testability from the outset, i.e., create software with as high a degree of testability as possible to avoid the problems of having undetected faults that are associated with low testability. Information loss is a phenomenon that occurs during program execution that increases the likelihood that a fault will remain undetected. In this paper, I identify two broad classes of information loss, define them, and suggest ways of predicting the potential for information loss to occur. We do this in order to decrease the likelihood that faults will remain undetected during testing.

  7. AgMIP Coordinated Global and Regional Assessments for 1.5°C and 2.0°C

    NASA Astrophysics Data System (ADS)

    Rosenzweig, C.

    2017-12-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has developed novel methods for Coordinated Global and Regional Assessments (CGRA) of agriculture and food security in a changing world. The present study performs a proof-of-concept of the CGRA to demonstrate advantages and challenges of the framework. This effort responds to the request by the UNFCCC for the implications of limiting global temperature increases to 1.5°C and 2.0°C above pre-industrial conditions. The protocols for the 1.5°C/2.0°C assessment establish explicit and testable linkages across disciplines and scales, connecting outputs and inputs from the Shared Socio-economic Pathways (SSPs), Representative Agricultural Pathways (RAPs), HAPPI and CMIP5 ensemble scenarios, global gridded crop models, global agricultural economic models, site-based crop models, and within-country regional economic models. CGRA results show that, at the global scale, simulated yield changes were a mix of positive and negative, with declines in some breadbasket regions leading to overall declines in productivity at both 1.5°C and 2.0°C. These projected global yield changes resulted in increases in prices of major commodities in a global economic model. Simulations for 1.5°C and 2.0°C using site-based crop models had mixed results depending on region and crop, but with more negative effects on productivity at 2.0°C than at 1.5°C for the most part. In conjunction with price changes from the global economic models, these productivity declines resulted generally in small positive effects on regional farm livelihoods, showing that farming systems should continue to be viable under high mitigation scenarios. CGRA protocols focus on how mitigation actions and effects differ across scales, with the main mechanisms studied in the integrated assessment models being policies and technologies that reduce direct non-CO2 emissions from agriculture, reduce CO2 emissions from land use change and forest sink enhancement, and utilize biomass for energy production. At regional scales, increasing soil organic carbon (SOC) is of active interest.

  8. Toward a Graded Psycholexical Space Mapping Model: Sublexical and Lexical Representations in Chinese Character Reading Development.

    PubMed

    Tong, Xiuli; McBride, Catherine

    2017-07-01

    Following a review of contemporary models of word-level processing for reading and their limitations, we propose a new hypothetical model of Chinese character reading, namely, the graded lexical space mapping model that characterizes how sublexical radicals and lexical information are involved in Chinese character reading development. The underlying assumption of this model is that Chinese character recognition is a process of competitive mappings of phonology, semantics, and orthography in both lexical and sublexical systems, operating as functions of statistical properties of print input based on the individual's specific level of reading. This model leads to several testable predictions concerning how the quasiregularity and continuity of Chinese-specific radicals are organized in memory for both child and adult readers at different developmental stages of reading.

  9. Inductive reasoning.

    PubMed

    Hayes, Brett K; Heit, Evan; Swendsen, Haruka

    2010-03-01

    Inductive reasoning entails using existing knowledge or observations to make predictions about novel cases. We review recent findings in research on category-based induction as well as theoretical models of these results, including similarity-based models, connectionist networks, an account based on relevance theory, Bayesian models, and other mathematical models. A number of touchstone empirical phenomena that involve taxonomic similarity are described. We also examine phenomena involving more complex background knowledge about premises and conclusions of inductive arguments and the properties referenced. Earlier models are shown to give a good account of similarity-based phenomena but not knowledge-based phenomena. Recent models that aim to account for both similarity-based and knowledge-based phenomena are reviewed and evaluated. Among the most important new directions in induction research are a focus on induction with uncertain premise categories, the modeling of the relationship between inductive and deductive reasoning, and examination of the neural substrates of induction. A common theme in both the well-established and emerging lines of induction research is the need to develop well-articulated and empirically testable formal models of induction. Copyright © 2010 John Wiley & Sons, Ltd. For further resources related to this article, please visit the WIREs website.

  10. What Is Scientific Thinking?

    ERIC Educational Resources Information Center

    Tweney, Ryan D.

    Drawing parallels with critical thinking and creative thinking, this document describes some ways that scientific thinking is utilized. Cognitive approaches to scientific thinking are discussed, and it is argued that all science involves an attempt to construct a testable mental model of some aspect of reality. The role of mental models is…

  11. Saccadic vector optokinetic perimetry in children with neurodisability or isolated visual pathway lesions: observational cohort study.

    PubMed

    Tailor, Vijay; Glaze, Selina; Unwin, Hilary; Bowman, Richard; Thompson, Graham; Dahlmann-Noor, Annegret

    2016-10-01

    Children and adults with neurological impairments are often not able to access conventional perimetry; however, information about the visual field is valuable. A new technology, saccadic vector optokinetic perimetry (SVOP), may have improved accessibility, but its accuracy has not been evaluated. We aimed to explore accessibility, testability and accuracy of SVOP in children with neurodisability or isolated visual pathway deficits. Cohort study; recruitment October 2013-May 2014, at children's eye clinics at a tertiary referral centre and a regional Child Development Centre; full orthoptic assessment, SVOP (central 30° of the visual field) and confrontation visual fields (CVF). Group 1: age 1-16 years, neurodisability (n=16), group 2: age 10-16 years, confirmed or suspected visual field defect (n=21); group 2 also completed Goldmann visual field testing (GVFT). Group 1: testability with a full 40-point test protocol is 12.5%; with reduced test protocols, testability is 100%, but plots may be clinically meaningless. Children (44%) and parents/carers (62.5%) find the test easy. SVOP and CVF agree in 50%. Group 2: testability is 62% for the 40-point protocol, and 90.5% for reduced protocols. Corneal changes in childhood glaucoma interfere with SVOP testing. All children and parents/carers find SVOP easy. Overall agreement with GVFT is 64.7%. While SVOP is highly accessible to children, many cannot complete a full 40-point test. Agreement with current standard tests is moderate to poor. Abnormal saccades cause an apparent non-specific visual field defect. In children with glaucoma or nystagmus SVOP calibration often fails. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  12. Researching the Study Abroad Experience

    ERIC Educational Resources Information Center

    McLeod, Mark; Wainwright, Philip

    2009-01-01

    The authors propose a paradigm for rigorous scientific assessment of study abroad programs, with the focus being on how study abroad experiences affect psychological constructs as opposed to looking solely at study-abroad-related outcomes. Social learning theory is used as a possible theoretical basis for making testable hypotheses and guiding…

  13. Perceptual Decision-Making as Probabilistic Inference by Neural Sampling.

    PubMed

    Haefner, Ralf M; Berkes, Pietro; Fiser, József

    2016-05-04

    We address two main challenges facing systems neuroscience today: understanding the nature and function of cortical feedback between sensory areas and of correlated variability. Starting from the old idea of perception as probabilistic inference, we show how to use knowledge of the psychophysical task to make testable predictions for the influence of feedback signals on early sensory representations. Applying our framework to a two-alternative forced choice task paradigm, we can explain multiple empirical findings that have been hard to account for by the traditional feedforward model of sensory processing, including the task dependence of neural response correlations and the diverging time courses of choice probabilities and psychophysical kernels. Our model makes new predictions and characterizes a component of correlated variability that represents task-related information rather than performance-degrading noise. It demonstrates a normative way to integrate sensory and cognitive components into physiologically testable models of perceptual decision-making. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Can mixed assessment methods make biology classes more equitable?

    PubMed

    Cotner, Sehoya; Ballen, Cissy J

    2017-01-01

    Many factors have been proposed to explain the attrition of women in science, technology, engineering and math fields, among them the lower performance of women in introductory courses resulting from deficits in incoming preparation. We focus on the impact of mixed methods of assessment, which minimizes the impact of high-stakes exams and rewards other methods of assessment such as group participation, low-stakes quizzes and assignments, and in-class activities. We hypothesized that these mixed methods would benefit individuals who otherwise underperform on high-stakes tests. Here, we analyze gender-based performance trends in nine large (N > 1000 students) introductory biology courses in fall 2016. Females underperformed on exams compared to their male counterparts, a difference that does not exist with other methods of assessment that compose course grade. Further, we analyzed three case studies of courses that transitioned their grading schemes to either de-emphasize or emphasize exams as a proportion of total course grade. We demonstrate that the shift away from an exam emphasis consequently benefits female students, thereby closing gaps in overall performance. Further, the exam performance gap itself is reduced when the exams contribute less to overall course grade. We discuss testable predictions that follow from our hypothesis, and advocate for the use of mixed methods of assessments (possibly as part of an overall shift to active learning techniques). We conclude by challenging the student deficit model, and suggest a course deficit model as explanatory of these performance gaps, whereby the microclimate of the classroom can either raise or lower barriers to success for underrepresented groups in STEM.

  15. Can mixed assessment methods make biology classes more equitable?

    PubMed Central

    Ballen, Cissy J.

    2017-01-01

    Many factors have been proposed to explain the attrition of women in science, technology, engineering and math fields, among them the lower performance of women in introductory courses resulting from deficits in incoming preparation. We focus on the impact of mixed methods of assessment, which minimizes the impact of high-stakes exams and rewards other methods of assessment such as group participation, low-stakes quizzes and assignments, and in-class activities. We hypothesized that these mixed methods would benefit individuals who otherwise underperform on high-stakes tests. Here, we analyze gender-based performance trends in nine large (N > 1000 students) introductory biology courses in fall 2016. Females underperformed on exams compared to their male counterparts, a difference that does not exist with other methods of assessment that compose course grade. Further, we analyzed three case studies of courses that transitioned their grading schemes to either de-emphasize or emphasize exams as a proportion of total course grade. We demonstrate that the shift away from an exam emphasis consequently benefits female students, thereby closing gaps in overall performance. Further, the exam performance gap itself is reduced when the exams contribute less to overall course grade. We discuss testable predictions that follow from our hypothesis, and advocate for the use of mixed methods of assessments (possibly as part of an overall shift to active learning techniques). We conclude by challenging the student deficit model, and suggest a course deficit model as explanatory of these performance gaps, whereby the microclimate of the classroom can either raise or lower barriers to success for underrepresented groups in STEM. PMID:29281676

  16. Encoding dependence in Bayesian causal networks

    USDA-ARS?s Scientific Manuscript database

    Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...
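
    To make the idea of propagating conditional probabilities between linked states concrete, here is a toy two-variable example; the variables ("wet season" influencing "flood") and all numbers are invented for illustration and are unrelated to the USDA work above.

        # Toy two-node example of conditional-probability propagation (invented numbers).
        p_wet = 0.3                                   # P(wet season)
        p_flood_given = {"wet": 0.4, "dry": 0.05}     # P(flood | season)

        # Marginalise over the parent to get P(flood), then invert with Bayes' rule.
        p_flood = p_flood_given["wet"] * p_wet + p_flood_given["dry"] * (1.0 - p_wet)
        p_wet_given_flood = p_flood_given["wet"] * p_wet / p_flood

        print("P(flood) = %.3f" % p_flood)
        print("P(wet season | flood) = %.3f" % p_wet_given_flood)

    A full Bayesian network chains many such conditional tables together, which is what allows the causal interaction model described in the snippet to be queried and tested against data.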

  17. Theory analysis of the Dental Hygiene Human Needs Conceptual Model.

    PubMed

    MacDonald, L; Bowen, D M

    2017-11-01

    Theories provide a structural knowing about concept relationships, practice intricacies, and intuitions and thus shape the distinct body of the profession. Capturing ways of knowing and being is essential to any profession's practice, education and research. This process defines the phenomenon of the profession - its existence or experience. Theory evaluation is a systematic criterion-based assessment of a specific theory. This study presents a theory analysis of the Dental Hygiene Human Needs Conceptual Model (DH HNCM). Using the Walker and Avant Theory Analysis, a seven-step process, the DH HNCM was analysed and evaluated for its meaningfulness and contribution to dental hygiene. The steps include the following: (i) investigate the origins; (ii) examine relationships of the theory's concepts; (iii) assess the logic of the theory's structure; (iv) consider the usefulness to practice; (v) judge the generalizability; (vi) evaluate the parsimony; and (vii) appraise the testability of the theory. Human needs theory in nursing and Maslow's Hierarchy of Needs Theory prompted this theory's development. The DH HNCM depicts four concepts based on the paradigm concepts of the profession - client, health/oral health, environment and dental hygiene actions - and includes eleven validated human needs that evolved over time to eight. It is logical, simple, allows scientific predictions and testing, and provides a unique lens for the dental hygiene practitioner. With this model, dental hygienists have entered practice knowing they enable clients to meet their human needs. For the DH HNCM, theory analysis affirmed that the model is reasonable and insightful and adds to the dental hygiene profession's epistemology and ontology. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Emotional intervention strategies for dementia-related behavior: a theory synthesis.

    PubMed

    Yao, Lan; Algase, Donna

    2008-04-01

    Behavioral disturbances of elders with dementia are prevalent. Yet the science guiding development and testing of effective intervention strategies is limited by rudimentary and often-conflicting theories. Using a theory-synthesis approach conducted within the perspective of the need-driven dementia-compromised behavior model, this article presents the locomoting responses to environment in elders with dementia (LRE-EWD) model. This new model, based on empirical and theoretical evidence, integrates the role of emotion with that of cognition in explicating a person-environment dynamic supporting wandering and other dementia-related disturbances. Included is evidence of the theory's testability and elaboration of its implications. The LRE-EWD model resolves conflicting views and evidence from current research on environmental interventions for behavior disturbances and opens new avenues to advance this field of study and practice.

  19. Sneutrino dark matter in gauged inverse seesaw models for neutrinos.

    PubMed

    An, Haipeng; Dev, P S Bhupal; Cai, Yi; Mohapatra, R N

    2012-02-24

    Extending the minimal supersymmetric standard model to explain small neutrino masses via the inverse seesaw mechanism can lead to a new light supersymmetric scalar partner which can play the role of inelastic dark matter (IDM). It is a linear combination of the superpartners of the neutral fermions in the theory (the light left-handed neutrino and two heavy standard model singlet neutrinos) which can be very light, with mass in the ~5-20 GeV range, as suggested by some current direct detection experiments. The IDM in this class of models has keV-scale mass splitting, which is intimately connected to the small Majorana masses of neutrinos. We predict the differential scattering rate and annual modulation of the IDM signal which can be testable at future germanium- and xenon-based detectors.

  20. Surface fire effects on conifer and hardwood crowns--applications of an integral plume model

    Treesearch

    Matthew Dickinson; Anthony Bova; Kathleen Kavanagh; Antoine Randolph; Lawrence Band

    2009-01-01

    An integral plume model was applied to the problems of tree death from canopy injury in dormant-season hardwoods and branch embolism in Douglas fir (Pseudotsuga menziesii) crowns. Our purpose was to generate testable hypotheses. We used the integral plume models to relate crown injury to bole injury and to explore the effects of variation in fire...

  1. Narrative meaning making and integration: Toward a better understanding of the way falling ill influences quality of life.

    PubMed

    Hartog, Iris; Scherer-Rath, Michael; Kruizinga, Renske; Netjes, Justine; Henriques, José; Nieuwkerk, Pythia; Sprangers, Mirjam; van Laarhoven, Hanneke

    2017-09-01

    Falling seriously ill is often experienced as a life event that causes conflict with people's personal goals and expectations in life and evokes existential questions. This article presents a new humanities approach to the way people make meaning of such events and how this influences their quality of life. Incorporating theories on contingency, narrative identity, and quality of life, we developed a theoretical model entailing the concepts life event, worldview, ultimate life goals, experience of contingency, narrative meaning making, narrative integration, and quality of life. We formulate testable hypotheses and describe the self-report questionnaire that was developed based on the model.

  2. Modelling the spread of innovation in wild birds.

    PubMed

    Shultz, Thomas R; Montrey, Marcel; Aplin, Lucy M

    2017-06-01

    We apply three plausible algorithms in agent-based computer simulations to recent experiments on social learning in wild birds. Although some of the phenomena are simulated by all three learning algorithms, several manifestations of social conformity bias are simulated by only the approximate majority (AM) algorithm, which has roots in chemistry, molecular biology and theoretical computer science. The simulations generate testable predictions and provide several explanatory insights into the diffusion of innovation through a population. The AM algorithm's success raises the possibility of its usefulness in studying group dynamics more generally, in several different scientific domains. Our differential-equation model matches simulation results and provides mathematical insights into the dynamics of these algorithms. © 2017 The Author(s).
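
    For orientation, the approximate majority (AM) idea referred to above can be sketched as a three-state population protocol in which a disagreeing pair leaves one agent undecided and undecided agents adopt a decided partner's opinion. The code below is a generic illustration with invented population sizes; it is not the authors' simulation of social learning in birds.

        # Generic approximate-majority (AM) population-protocol sketch (hypothetical sizes).
        import random

        def approximate_majority(n_a=60, n_b=40, max_interactions=100000, seed=0):
            random.seed(seed)
            pop = ["A"] * n_a + ["B"] * n_b
            for _ in range(max_interactions):
                i, j = random.sample(range(len(pop)), 2)
                x, y = pop[i], pop[j]
                if {x, y} == {"A", "B"}:
                    pop[j] = "_"             # disagreeing pair: one agent goes blank
                elif x in ("A", "B") and y == "_":
                    pop[j] = x               # undecided agent is recruited
                elif y in ("A", "B") and x == "_":
                    pop[i] = y
                if "_" not in pop and ("A" not in pop or "B" not in pop):
                    break                    # whole population agrees
            return "A" if pop.count("A") >= pop.count("B") else "B"

        if __name__ == "__main__":
            print("consensus reached on:", approximate_majority())

    Because a small initial majority is amplified rapidly in such dynamics, protocols of this kind can reproduce conformity-like adoption curves in a population, which is the behaviour the review attributes to the AM algorithm.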

  3. Semantic modeling for theory clarification: The realist vs liberal international relations perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bray, O.H.

    This paper describes a natural language based, semantic information modeling methodology and explores its use and value in clarifying and comparing political science theories and frameworks. As an example, the paper uses this methodology to clarify and compare some of the basic concepts and relationships in the realist (e.g. Waltz) and the liberal (e.g. Rosenau) paradigms for international relations. The methodology can provide three types of benefits: (1) it can clarify and make explicit exactly what is meant by a concept; (2) it can often identify unanticipated implications and consequence of concepts and relationships; and (3) it can help inmore » identifying and operationalizing testable hypotheses.« less

  4. Optimal assessment of multiple cues.

    PubMed Central

    Fawcett, Tim W; Johnstone, Rufus A

    2003-01-01

    In a wide range of contexts from mate choice to foraging, animals are required to discriminate between alternative options on the basis of multiple cues. How should they best assess such complex multicomponent stimuli? Here, we construct a model to investigate this problem, focusing on a simple case where a 'chooser' faces a discrimination task involving two cues. These cues vary in their accuracy and in how costly they are to assess. As an example, we consider a mate-choice situation where females choose between males of differing quality. Our model predicts the following: (i) females should become less choosy as the cost of finding new males increases; (ii) females should prioritize cues differently depending on how choosy they are; (iii) females may sometimes prioritize less accurate cues; and (iv) which cues are most important depends on the abundance of desirable mates. These predictions are testable in mate-choice experiments where the costs of choice can be manipulated. Our findings are applicable to other discrimination tasks besides mate choice, for example a predator's choice between palatable and unpalatable prey, or an altruist's choice between kin and non-kin. PMID:12908986

  5. Multiple transitions and HIV risk among orphaned Kenyan schoolgirls.

    PubMed

    Mojola, Sanyu A

    2011-03-01

    Why are orphaned girls at particular risk of acquiring HIV infection? Using a transition-to-adulthood framework, this study employs qualitative data from Nyanza Province, Kenya, to explore pathways to HIV risk among orphaned and nonorphaned high-school girls. It shows how simultaneous processes such as leaving their parental home, negotiating financial access, and relationship transitions interact to produce disproportionate risk for orphaned girls. The role of financial provision and parental love in modifying girls' trajectories to risk are also explored. A testable theoretical model is proposed based on the qualitative findings, and policy implications are suggested.

  6. MULTIPLE TRANSITIONS AND HIV RISK AMONG AFRICAN SCHOOL GIRLS

    PubMed Central

    Mojola, Sanyu A

    2012-01-01

    Why are orphaned girls at particular risk of contracting HIV? Using a transition to adulthood framework, this paper uses qualitative data from Nyanza province, Kenya to explore pathways to HIV risk among orphaned and non-orphaned high school girls. I show how co-occurring processes such as residential transition out of the parental home, negotiating financial access and relationship transitions interact to produce disproportionate risk for orphan girls. I also explore the role of financial provision and parental love in modifying girls’ trajectories to risk. I propose a testable theoretical model based on the qualitative findings and suggest policy implications. PMID:21500699

  7. Instructional Design: Science, Technology, Both, Neither

    ERIC Educational Resources Information Center

    Gropper, George L.

    2017-01-01

    What would it take for instructional design to qualify as a bona fide applied discipline? First and foremost, a fundamental requirement is a testable and tested theoretical base. Untested rationales until verified remain in limbo. Secondly, the discipline's applied prescriptions must be demonstrably traceable to the theoretical base once it is…

  8. The diffusion decision model: theory and data for two-choice decision tasks.

    PubMed

    Ratcliff, Roger; McKoon, Gail

    2008-04-01

    The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral data-accuracy, mean response times, and response time distributions-into components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.
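
    As a minimal illustration of the model class (not Ratcliff and McKoon's fitting code), the sketch below simulates single two-choice trials in which evidence starts at z, accumulates at drift rate v with Gaussian noise, and a response is made when it crosses 0 or the boundary a; all parameter values are hypothetical.

        # Minimal drift-diffusion trial simulator (hypothetical parameter values).
        import random

        def ddm_trial(v=0.3, a=1.0, z=0.5, dt=0.001, noise_sd=1.0, max_t=5.0, seed=None):
            rng = random.Random(seed)
            x, t = z, 0.0
            while t < max_t:
                x += v * dt + noise_sd * (dt ** 0.5) * rng.gauss(0.0, 1.0)
                t += dt
                if x >= a:
                    return "upper", t
                if x <= 0.0:
                    return "lower", t
            return "timeout", t

        if __name__ == "__main__":
            trials = [ddm_trial(seed=k) for k in range(1000)]
            upper_rts = [rt for choice, rt in trials if choice == "upper"]
            print("P(upper) = %.2f, mean RT = %.3f s"
                  % (len(upper_rts) / 1000.0, sum(upper_rts) / len(upper_rts)))

    Varying v, a and z in such a simulation mirrors the manipulations described in the abstract: stimulus difficulty maps onto drift rate, speed-accuracy instructions onto boundary separation, and stimulus proportions onto biases in drift rate and starting point.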

  9. Non-animal photosafety assessment approaches for cosmetics based on the photochemical and photobiochemical properties.

    PubMed

    Onoue, Satomi; Suzuki, Gen; Kato, Masashi; Hirota, Morihiko; Nishida, Hayato; Kitagaki, Masato; Kouzuki, Hirokazu; Yamada, Shizuo

    2013-12-01

    The main purpose of the present study was to establish a non-animal photosafety assessment approach for cosmetics using in vitro photochemical and photobiochemical screening systems. Fifty-one cosmetics, pharmaceutics and other chemicals were selected as model chemicals on the basis of animal and/or clinical photosafety information. The model chemicals were assessed in terms of photochemical properties by UV/VIS spectral analysis, reactive oxygen species (ROS) assay and 3T3 neutral red uptake phototoxicity testing (3T3 NRU PT). Most phototoxins exhibited potent UV/VIS absorption with molar extinction coefficients of over 1000M(-1)cm(-1), although false-negative prediction occurred for 2 cosmetic phototoxins owing to weak UV/VIS absorption. Among all the cosmetic ingredients, ca. 42% of tested chemicals were non-testable in the ROS assay because of low water solubility; thereby, micellar ROS (mROS) assay using a solubilizing surfactant was employed for follow-up screening. Upon combination use of ROS and mROS assays, the individual specificity was 88.2%, and the positive and negative predictivities were estimated to be 94.4% and 100%, respectively. In the 3T3 NRU PT, 3 cosmetics and 4 drugs were incorrectly predicted not to be phototoxic, although some of them were typical photoallergens. Thus, these in vitro screening systems individually provide false predictions; however, a systematic tiered approach using these assays could provide reliable photosafety assessment without any false-negatives. The combined use of in vitro assays might enable simple and fast non-animal photosafety evaluation of cosmetic ingredients. Copyright © 2013 Elsevier Ltd. All rights reserved.
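
    A hedged sketch of the tiered screening logic described above follows. The tier ordering and the molar extinction coefficient cut-off mirror the narrative of the abstract; the function name, arguments and return labels are illustrative assumptions rather than the authors' published algorithm.

      def tiered_photosafety_call(mec, ros_positive=None, mros_positive=None,
                                  nru_pt_positive=None):
          # Tier 1: weak UV/VIS absorbers (MEC < 1000 M^-1 cm^-1) are deprioritized.
          if mec < 1000.0:
              return "no photosafety concern (weak UV/VIS absorption)"
          # Tier 2: ROS assay, falling back to the micellar ROS (mROS) assay
          # when the chemical is not testable (e.g. low water solubility).
          ros = ros_positive if ros_positive is not None else mros_positive
          if ros is False:
              return "no photosafety concern (ROS/mROS negative)"
          # Tier 3: cell-based phototoxicity testing (3T3 NRU PT) or expert review.
          if nru_pt_positive is False:
              return "photoallergy cannot be excluded (ROS positive, 3T3 NRU PT negative)"
          return "potential phototoxin - further evaluation required"

      print(tiered_photosafety_call(mec=500))
      print(tiered_photosafety_call(mec=25000, mros_positive=True,
                                    nru_pt_positive=True))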

  10. LSI/VLSI design for testability analysis and general approach

    NASA Technical Reports Server (NTRS)

    Lam, A. Y.

    1982-01-01

    The incorporation of testability characteristics into large scale digital design is not only necessary for, but also pertinent to, effective device testing and enhancement of device reliability. There are at least three major DFT techniques, namely, the self checking, the LSSD, and the partitioning techniques, each of which can be incorporated into a logic design to achieve a specific set of testability and reliability requirements. Detailed analysis of the design theory, implementation, fault coverage, hardware requirements, application limitations, etc., of each of these techniques is also presented.

  11. Anxiety Psychopathology in African American Adults: Literature Review and Development of an Empirically Informed Sociocultural Model

    ERIC Educational Resources Information Center

    Hunter, Lora Rose; Schmidt, Norman B.

    2010-01-01

    In this review, the extant literature concerning anxiety psychopathology in African American adults is summarized to develop a testable, explanatory framework with implications for future research. The model was designed to account for purported lower rates of anxiety disorders in African Americans compared to European Americans, along with other…

  12. A Theoretical Model of Health Information Technology Usage Behaviour with Implications for Patient Safety

    ERIC Educational Resources Information Center

    Holden, Richard J.; Karsh, Ben-Tzion

    2009-01-01

    Primary objective: much research and practice related to the design and implementation of information technology in health care has been atheoretical. It is argued that using extant theory to develop testable models of health information technology (HIT) benefits both research and practice. Methods and procedures: several theories of motivation,…

  13. Comparative Model Evaluation Studies of Biogenic Trace Gas Fluxes in Tropical Forests

    NASA Technical Reports Server (NTRS)

    Potter, C. S.; Peterson, David L. (Technical Monitor)

    1997-01-01

    Simulation modeling can play a number of important roles in large-scale ecosystem studies, including synthesis of patterns and changes in carbon and nutrient cycling dynamics, scaling up to regional estimates, and formulation of testable hypotheses for process studies. Recent comparative studies have shown that ecosystem models of soil trace gas exchange with the atmosphere are evolving into several distinct simulation approaches. Different levels of detail exist among process models in the treatment of physical controls on ecosystem nutrient fluxes and organic substrate transformations leading to gas emissions. These differences arise in part from distinct objectives of scaling and extrapolation. Parameter requirements for initialization, scaling, boundary conditions, and time-series drivers therefore vary among ecosystem simulation models, such that the design of field experiments for integration with modeling should consider a consolidated series of measurements that will satisfy most of the various model requirements. For example, variables that provide information on soil moisture holding capacity, moisture retention characteristics, potential evapotranspiration and drainage rates, and rooting depth appear to be of the first order in model evaluation trials for tropical moist forest ecosystems. The amount and nutrient content of labile organic matter in the soil, based on accurate plant production estimates, are also key parameters that determine emission model response. Based on comparative model results, it is possible to construct a preliminary evaluation matrix along categories of key diagnostic parameters and temporal domains. Nevertheless, as large-scale studies are planned, it is notable that few existing models are designed to simulate transient states of ecosystem change, a feature which will be essential for assessment of anthropogenic disturbance on regional gas budgets, and effects of long-term climate variability on biosphere-atmosphere exchange.

  14. QUANTITATIVE TESTS OF ELMS AS INTERMEDIATE N PEELING-BALLOONING MODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LAO,LL; SNYDER,PB; LEONARD,AW

    2003-03-01

    Several testable features of the working model of edge localized modes (ELMs) as intermediate toroidal mode number peeling-ballooning modes are evaluated quantitatively using DIII-D and JT-60U experimental data and the ELITE MHD stability code. These include the hypothesis that ELM sizes are related to the radial widths of the unstable MHD modes, the unstable modes have a strong ballooning character localized in the outboard bad curvature region, and ELM size generally becomes smaller at high edge collisionality. ELMs are triggered when the growth rates of the unstable MHD modes become significantly large. These testable features are consistent with many ELM observations in DIII-D and JT-60U discharges.

  15. Mathematical models of the fate of lymphoma B cells after antigen receptor ligation with specific antibodies.

    PubMed

    Alarcón, Tomás; Marches, Radu; Page, Karen M

    2006-05-07

    We formulate models of the mechanism(s) by which B cell lymphoma cells stimulated with an antibody specific to the B cell receptor (IgM) become quiescent or apoptotic. In particular, we aim to reproduce experimental results by Marches et al. according to which the fate of the targeted cells (Daudi) depends on the levels of expression of p21(Waf1) (p21) cell-cycle inhibitor. A simple model is formulated in which the basic ingredients are p21 and caspase activity, and their mutual inhibition. We show that this model does not reproduce the experimental results and that further refinement is needed. A second model successfully reproduces the experimental observations, for a given set of parameter values, indicating a critical role for Myc in the fate decision process. We use bifurcation analysis and objective sensitivity analysis to assess the robustness of our results. Importantly, this analysis yields experimentally testable predictions on the role of Myc, which could have therapeutic implications.

  16. Module generation for self-testing integrated systems

    NASA Astrophysics Data System (ADS)

    Vanriessen, Ronald Pieter

    Hardware used for self test in VLSI (Very Large Scale Integrated) systems is reviewed, and an architecture to control the test hardware in an integrated system is presented. Because of the increase in test times, the use of self-test techniques has become practically and economically viable for VLSI systems. Besides the reduction in test times and costs, self test also provides testing at operational speeds. Therefore, a suitable combination of scan-path and macro-specific (self) tests is required to reduce test times and costs. An expert system that can be used in a silicon compilation environment is presented. The approach requires a minimum of testability knowledge from the system designer. A user-friendly interface is described for specifying and modifying testability requirements by a testability expert. A reason-directed backtracking mechanism is used to resolve selection failures. Both the hierarchical testable architecture and the design-for-testability expert system are used in a self-test compiler. The definition of a self-test compiler is given: a self-test compiler is a software tool that selects an appropriate test method for every macro in a design, and the hardware to control a macro test is included in the design automatically. As an example, the integration of the self-test compiler in the silicon compilation system PIRAMID is described. The design of a demonstrator circuit by the self-test compiler is described. This circuit consists of two self-testable macros. Control of the self-test hardware is carried out via the test access port of the boundary scan standard.

  17. a Heavy Higgs Boson from Flavor and Electroweak Symmetry Unification

    NASA Astrophysics Data System (ADS)

    Fabbrichesi, Marco

    2005-08-01

    We present a unified picture of flavor and electroweak symmetry breaking based on a nonlinear sigma model spontaneously broken at the TeV scale. Flavor and Higgs bosons arise as pseudo-Goldstone modes. Explicit collective symmetry breaking yields stable vacuum expectation values and masses protected at one loop by the little-Higgs mechanism. The coupling to the fermions generates well-defined mass textures (according to a U(1) global flavor symmetry) that correctly reproduce the mass hierarchies and mixings of quarks and leptons. The model is more constrained than usual little-Higgs models because of bounds on weak and flavor physics. The main experimental signature testable at the LHC is a rather large mass mh0 = 317 ± 80 GeV for the (lightest) Higgs boson.

  18. Students' Awareness and Perceptions of Learning Engineering: Content and Construct Validation of an Instrument

    ERIC Educational Resources Information Center

    Duncan-Wiles, Daphne S.

    2012-01-01

    With the recent addition of engineering to most K-12 testable state standards, efficient and comprehensive instruments are needed to assess changes in student knowledge and perceptions of engineering. In this study, I developed the Students' Awareness and Perceptions of Learning Engineering (STAPLE) instrument to quantitatively measure fourth…

  19. Wichita's Hispanics: Tensions, Concerns, and the Migrant Stream.

    ERIC Educational Resources Information Center

    Johnson, Kenneth F.; And Others

    In an attempt to formulate a set of testable propositions about the dynamics of Hispanic life that will be valuable pedagogically and as a basis for public policy formation, this study assesses the impact of Hispanic Americans on Wichita, Kansas. Chapter 1 identifies the Hispanic origins of Kansas' 63,339 Hispanics who represent 2.7% of the…

  20. A systems framework for identifying candidate microbial assemblages for disease management

    USDA-ARS?s Scientific Manuscript database

    Network models of soil and plant microbiomes present new opportunities for enhancing disease management, but also challenges for interpretation. We present a framework for interpreting microbiome networks, illustrating how the observed structure of networks can be used to generate testable hypothese...

  1. Stereoacuity of preschool children with and without vision disorders.

    PubMed

    Ciner, Elise B; Ying, Gui-Shuang; Kulp, Marjean Taylor; Maguire, Maureen G; Quinn, Graham E; Orel-Bixler, Deborah; Cyert, Lynn A; Moore, Bruce; Huang, Jiayan

    2014-03-01

    To evaluate associations between stereoacuity and presence, type, and severity of vision disorders in Head Start preschool children and determine testability and levels of stereoacuity by age in children without vision disorders. Stereoacuity of children aged 3 to 5 years (n = 2898) participating in the Vision in Preschoolers (VIP) Study was evaluated using the Stereo Smile II test during a comprehensive vision examination. This test uses a two-alternative forced-choice paradigm with four stereoacuity levels (480 to 60 seconds of arc). Children were classified by the presence (n = 871) or absence (n = 2027) of VIP Study-targeted vision disorders (amblyopia, strabismus, significant refractive error, or unexplained reduced visual acuity), including type and severity. Median stereoacuity between groups and among severity levels of vision disorders was compared using Wilcoxon rank sum and Kruskal-Wallis tests. Testability and stereoacuity levels were determined for children without VIP Study-targeted disorders overall and by age. Children with VIP Study-targeted vision disorders had significantly worse median stereoacuity than that of children without vision disorders (120 vs. 60 seconds of arc, p < 0.001). Children with the most severe vision disorders had worse stereoacuity than that of children with milder disorders (median 480 vs. 120 seconds of arc, p < 0.001). Among children without vision disorders, testability was 99.6% overall, increasing with age to 100% for 5-year-olds (p = 0.002). Most of the children without vision disorders (88%) had stereoacuity at the two best disparities (60 or 120 seconds of arc); the percentage increasing with age (82% for 3-, 89% for 4-, and 92% for 5-year-olds; p < 0.001). The presence of any VIP Study-targeted vision disorder was associated with significantly worse stereoacuity in preschool children. Severe vision disorders were more likely associated with poorer stereopsis than milder or no vision disorders. Testability was excellent at all ages. These results support the validity of the Stereo Smile II for assessing random-dot stereoacuity in preschool children.
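
    A short illustrative sketch of the nonparametric comparisons named above (a Wilcoxon rank-sum test between two groups and a Kruskal-Wallis test across severity levels) is given below. The stereoacuity values are synthetic and serve only to show the test calls; they are not VIP Study data.

      import numpy as np
      from scipy.stats import mannwhitneyu, kruskal

      rng = np.random.default_rng(0)
      levels = np.array([60, 120, 240, 480])        # seconds of arc (worse = larger)
      no_disorder = rng.choice(levels, size=200, p=[0.7, 0.2, 0.07, 0.03])
      with_disorder = rng.choice(levels, size=100, p=[0.3, 0.3, 0.2, 0.2])

      # Two-group comparison (Wilcoxon rank-sum / Mann-Whitney U).
      u, p_two = mannwhitneyu(with_disorder, no_disorder, alternative='greater')
      print(f"Mann-Whitney U = {u:.0f}, p = {p_two:.4g}")

      # Comparison across several severity levels (Kruskal-Wallis).
      mild = rng.choice(levels, size=60, p=[0.4, 0.3, 0.2, 0.1])
      severe = rng.choice(levels, size=40, p=[0.1, 0.2, 0.3, 0.4])
      h, p_multi = kruskal(no_disorder, mild, severe)
      print(f"Kruskal-Wallis H = {h:.1f}, p = {p_multi:.4g}")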

  2. What Are Health-Related Users Tweeting? A Qualitative Content Analysis of Health-Related Users and Their Messages on Twitter

    PubMed Central

    DeCamp, Matthew; Dredze, Mark; Chisolm, Margaret S; Berger, Zackary D

    2014-01-01

    Background Twitter is home to many health professionals who send messages about a variety of health-related topics. Amid concerns about physicians posting inappropriate content online, more in-depth knowledge about these messages is needed to understand health professionals’ behavior on Twitter. Objective Our goal was to characterize the content of Twitter messages, specifically focusing on health professionals and their tweets relating to health. Methods We performed an in-depth content analysis of 700 tweets. Qualitative content analysis was conducted on tweets by health users on Twitter. The primary objective was to describe the general type of content (ie, health-related versus non-health related) on Twitter authored by health professionals and further to describe health-related tweets on the basis of the type of statement made. Specific attention was given to whether a tweet was personal (as opposed to professional) or made a claim that users would expect to be supported by some level of medical evidence (ie, a “testable” claim). A secondary objective was to compare content types among different users, including patients, physicians, nurses, health care organizations, and others. Results Health-related users are posting a wide range of content on Twitter. Among health-related tweets, 53.2% (184/346) contained a testable claim. Of health-related tweets by providers, 17.6% (61/346) were personal in nature; 61% (59/96) made testable statements. While organizations and businesses use Twitter to promote their services and products, patient advocates are using this tool to share their personal experiences with health. Conclusions Twitter users in health-related fields tweet about both testable claims and personal experiences. Future work should assess the relationship between testable tweets and the actual level of evidence supporting them, including how Twitter users—especially patients—interpret the content of tweets posted by health providers. PMID:25591063

  3. Refinement of Representation Theorems for Context-Free Languages

    NASA Astrophysics Data System (ADS)

    Fujioka, Kaoru

    In this paper, we obtain some refinements of representation theorems for context-free languages by using Dyck languages, insertion systems, strictly locally testable languages, and morphisms. For instance, we improve the Chomsky-Schützenberger representation theorem and show that each context-free language L can be represented in the form L = h (D ∩ R), where D is a Dyck language, R is a strictly 3-testable language, and h is a morphism. A similar representation for context-free languages can be obtained, using insertion systems of weight (3, 0) and strictly 4-testable languages.
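
    The ingredients of the representation L = h (D ∩ R) can be illustrated with small membership checks: a Dyck-language check, a strictly k-testable check (allowed prefixes and suffixes of length k-1 and allowed factors of length k, with short words handled loosely), and a letter-to-string morphism. The example alphabet, sets and morphism below are illustrative assumptions, not the construction from the paper.

      def is_dyck(word, pairs={'(': ')', '[': ']'}):
          # Membership in the Dyck language over the given bracket pairs.
          stack = []
          for ch in word:
              if ch in pairs:
                  stack.append(pairs[ch])
              elif not stack or stack.pop() != ch:
                  return False
          return not stack

      def is_strictly_k_testable(word, k, prefixes, suffixes, factors):
          # Allowed prefixes/suffixes of length k-1 and factors of length k.
          if len(word) < k:
              return word in prefixes or word in suffixes   # simplification
          return (word[:k-1] in prefixes and word[-(k-1):] in suffixes and
                  all(word[i:i+k] in factors for i in range(len(word) - k + 1)))

      def apply_morphism(word, h):
          # Apply a letter-to-string morphism h to a word.
          return ''.join(h[ch] for ch in word)

      print(is_dyck("([])()"))                                   # True
      print(is_strictly_k_testable("abab", 2, prefixes={"a"},
                                   suffixes={"b"}, factors={"ab", "ba"}))  # True
      print(apply_morphism("ab", {"a": "if(", "b": ")end"}))     # if()end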

  4. An empirical comparison of a dynamic software testability metric to static cyclomatic complexity

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.

    1993-01-01

    This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.
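
    For reference, the static metric mentioned above can be computed directly from a control-flow graph as McCabe's cyclomatic complexity, V(G) = E - N + 2P. The example graph below is a made-up illustration, not the B-737 autoland code analyzed in the paper.

      def cyclomatic_complexity(edges, nodes, connected_components=1):
          # V(G) = E - N + 2P for a control-flow graph.
          return len(edges) - len(nodes) + 2 * connected_components

      # A function with one if/else and one loop: 6 nodes, 7 edges -> V(G) = 3.
      nodes = {"entry", "cond", "then", "else", "loop", "exit"}
      edges = {("entry", "cond"), ("cond", "then"), ("cond", "else"),
               ("then", "loop"), ("else", "loop"),
               ("loop", "loop"), ("loop", "exit")}
      print(cyclomatic_complexity(edges, nodes))  # 3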

  5. An agent-based model of dialect evolution in killer whales.

    PubMed

    Filatova, Olga A; Miller, Patrick J O

    2015-05-21

    The killer whale is one of the few animal species with vocal dialects that arise from socially learned group-specific call repertoires. We describe a new agent-based model of killer whale populations and test a set of vocal-learning rules to assess which mechanisms may lead to the formation of dialect groupings observed in the wild. We tested a null model with genetic transmission and no learning, and ten models with learning rules that differ by template source (mother or matriline), variation type (random errors or innovations) and type of call change (no divergence from kin vs. divergence from kin). The null model without vocal learning did not produce the pattern of group-specific call repertoires we observe in nature. Learning from either mother alone or the entire matriline with calls changing by random errors produced a graded distribution of the call phenotype, without the discrete call types observed in nature. Introducing occasional innovation or random error proportional to matriline variance yielded more or less discrete and stable call types. A tendency to diverge from the calls of related matrilines provided fast divergence of loose call clusters. A pattern resembling the dialect diversity observed in the wild arose only when rules were applied in combinations and similar outputs could arise from different learning rules and their combinations. Our results emphasize the lack of information on quantitative features of wild killer whale dialects and reveal a set of testable questions that can draw insights into the cultural evolution of killer whale dialects. Copyright © 2015 Elsevier Ltd. All rights reserved.
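
    A toy sketch of the kind of vocal-learning rules discussed above is given below: each new agent copies a randomly chosen maternal template with Gaussian copying error and, rarely, a larger innovation jump. The one-dimensional call phenotype and all parameter values are illustrative simplifications, not the published agent-based model.

      import random

      def evolve_calls(n_agents=100, generations=200,
                       error_sd=0.01, innovation_p=0.005, innovation_sd=0.5):
          calls = [0.0] * n_agents
          for _ in range(generations):
              new_calls = []
              for _ in range(n_agents):
                  call = random.choice(calls) + random.gauss(0.0, error_sd)
                  if random.random() < innovation_p:     # occasional innovation
                      call += random.gauss(0.0, innovation_sd)
                  new_calls.append(call)
              calls = new_calls
          return calls

      calls = sorted(evolve_calls())
      # Gaps larger than 0.2 between neighbouring call values are read here as
      # boundaries between rough "call clusters" (discrete types vs. a continuum).
      clusters = 1 + sum(1 for a, b in zip(calls, calls[1:]) if b - a > 0.2)
      print(clusters, "rough call clusters")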

  6. Zee-Babu type model with U (1 )Lμ-Lτ gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-05-01

    We extend the Zee-Babu model, introducing local U (1 )Lμ-Lτ symmetry with several singly charged bosons. We find a predictive neutrino mass texture in a simple hypothesis in which mixings among singly charged bosons are negligible. Also, lepton-flavor violations are less constrained compared with the original model. Then, we explore the testability of the model, focusing on doubly charged boson physics at the LHC and the International Linear Collider.

  7. Taxes in a Labor Supply Model with Joint Wage-Hours Determination.

    ERIC Educational Resources Information Center

    Rosen, Harvey S.

    1976-01-01

    Payroll and progressive income taxes play an enormous role in the American fiscal system. The purpose of this study is to present some econometric evidence on the effects of taxes on married women, a group of growing importance in the American labor force. A testable model of labor supply is developed which permits statistical estimation of a…

  8. A tweaking principle for executive control: neuronal circuit mechanism for rule-based task switching and conflict resolution.

    PubMed

    Ardid, Salva; Wang, Xiao-Jing

    2013-12-11

    A hallmark of executive control is the brain's agility to shift between different tasks depending on the behavioral rule currently in play. In this work, we propose a "tweaking hypothesis" for task switching: a weak rule signal provides a small bias that is dramatically amplified by reverberating attractor dynamics in neural circuits for stimulus categorization and action selection, leading to an all-or-none reconfiguration of sensory-motor mapping. Based on this principle, we developed a biologically realistic model with multiple modules for task switching. We found that the model quantitatively accounts for complex task switching behavior: switch cost, congruency effect, and task-response interaction; as well as monkey's single-neuron activity associated with task switching. The model yields several testable predictions, in particular, that category-selective neurons play a key role in resolving sensory-motor conflict. This work represents a neural circuit model for task switching and sheds insights in the brain mechanism of a fundamental cognitive capability.

  9. Biodiversity and agriculture in dynamic landscapes: Integrating ground and remotely-sensed baseline surveys.

    PubMed

    Gillison, Andrew N; Asner, Gregory P; Fernandes, Erick C M; Mafalacusser, Jacinto; Banze, Aurélio; Izidine, Samira; da Fonseca, Ambrósio R; Pacate, Hermenegildo

    2016-07-15

    Sustainable biodiversity and land management require a cost-effective means of forecasting landscape response to environmental change. Conventional species-based, regional biodiversity assessments are rarely adequate for policy planning and decision making. We show how new ground and remotely-sensed survey methods can be coordinated to help elucidate and predict relationships between biodiversity, land use and soil properties along complex biophysical gradients that typify many similar landscapes worldwide. In the lower Zambezi valley, Mozambique we used environmental, gradient-directed transects (gradsects) to sample vascular plant species, plant functional types, vegetation structure, soil properties and land-use characteristics. Soil fertility indices were derived using novel multidimensional scaling of soil properties. To facilitate spatial analysis, we applied a probabilistic remote sensing approach, analyzing Landsat 7 satellite imagery to map photosynthetically active and inactive vegetation and bare soil along each gradsect. Despite the relatively low sample number, we found highly significant correlations between single and combined sets of specific plant, soil and remotely sensed variables that permitted testable spatial projections of biodiversity and soil fertility across the regional land-use mosaic. This integrative and rapid approach provides a low-cost, high-return and readily transferable methodology that permits the ready identification of testable biodiversity indicators for adaptive management of biodiversity and potential agricultural productivity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Empirical approaches to the study of language evolution.

    PubMed

    Fitch, W Tecumseh

    2017-02-01

    The study of language evolution, and human cognitive evolution more generally, has often been ridiculed as unscientific, but in fact it differs little from many other disciplines that investigate past events, such as geology or cosmology. Well-crafted models of language evolution make numerous testable hypotheses, and if the principles of strong inference (simultaneous testing of multiple plausible hypotheses) are adopted, there is an increasing amount of relevant data allowing empirical evaluation of such models. The articles in this special issue provide a concise overview of current models of language evolution, emphasizing the testable predictions that they make, along with overviews of the many sources of data available to test them (emphasizing comparative, neural, and genetic data). The key challenge facing the study of language evolution is not a lack of data, but rather a weak commitment to hypothesis-testing approaches and strong inference, exacerbated by the broad and highly interdisciplinary nature of the relevant data. This introduction offers an overview of the field, and a summary of what needed to evolve to provide our species with language-ready brains. It then briefly discusses different contemporary models of language evolution, followed by an overview of different sources of data to test these models. I conclude with my own multistage model of how different components of language could have evolved.

  11. Simple Model for Identifying Critical Regions in Atrial Fibrillation

    NASA Astrophysics Data System (ADS)

    Christensen, Kim; Manani, Kishan A.; Peters, Nicholas S.

    2015-01-01

    Atrial fibrillation (AF) is the most common abnormal heart rhythm and the single biggest cause of stroke. Ablation, destroying regions of the atria, is applied largely empirically and can be curative but with a disappointing clinical success rate. We design a simple model of activation wave front propagation on an anisotropic structure mimicking the branching network of heart muscle cells. This integration of phenomenological dynamics and pertinent structure shows how AF emerges spontaneously when the transverse cell-to-cell coupling decreases, as occurs with age, beyond a threshold value. We identify critical regions responsible for the initiation and maintenance of AF, the ablation of which terminates AF. The simplicity of the model allows us to calculate analytically the risk of arrhythmia and express the threshold value of transversal cell-to-cell coupling as a function of the model parameters. This threshold value decreases with increasing refractory period by reducing the number of critical regions which can initiate and sustain microreentrant circuits. These biologically testable predictions might inform ablation therapies and arrhythmic risk assessment.
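
    A highly simplified sketch of this kind of lattice model follows: cells are always coupled to their longitudinal (same-row) neighbours, coupled to transverse neighbours only with probability nu, and become refractory after excitation. The grid size, coupling probability, refractory period and update rule are illustrative assumptions; the sketch reproduces only the coupled-lattice wave propagation, not the re-entry mechanism analyzed in the paper.

      import random

      L, NU, REFRACTORY, STEPS = 40, 0.4, 5, 60
      EXCITED = REFRACTORY + 1                  # value of a freshly excited cell
      rng = random.Random(1)
      # A transverse (between-row) connection exists at each site with probability NU.
      transverse = [[rng.random() < NU for _ in range(L)] for _ in range(L)]
      state = [[0] * L for _ in range(L)]       # 0 = resting, >0 = excited/refractory
      for y in range(L):
          state[y][0] = EXCITED                 # plane wave started at the left edge

      for _ in range(STEPS):
          nxt = [[max(0, v - 1) for v in row] for row in state]
          for y in range(L):
              for x in range(L):
                  if state[y][x] != EXCITED:
                      continue
                  for dx in (-1, 1):            # longitudinal coupling: always present
                      if 0 <= x + dx < L and state[y][x + dx] == 0:
                          nxt[y][x + dx] = EXCITED
                  for dy in (-1, 1):            # transverse coupling: only where wired
                      if 0 <= y + dy < L and transverse[y][x] and state[y + dy][x] == 0:
                          nxt[y + dy][x] = EXCITED
          state = nxt

      active = sum(v > 0 for row in state for v in row)
      print(active, "cells active after", STEPS, "steps")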

  12. Singlet-triplet fermionic dark matter and LHC phenomenology

    NASA Astrophysics Data System (ADS)

    Choubey, Sandhya; Khan, Sarif; Mitra, Manimala; Mondal, Subhadeep

    2018-04-01

    It is well known that for the pure standard model triplet fermionic WIMP-type dark matter (DM), the relic density is satisfied around 2 TeV. For such a heavy mass particle, the production cross-section at the 13 TeV run of the LHC will be very small. Extending the model further with a singlet fermion and a triplet scalar, the DM relic density can be satisfied for even much lower masses. The lower mass DM can be copiously produced at the LHC and hence the model can be tested at colliders. For the present model we have studied the multi-jet (≥ 2 j) + missing energy signal and show that this can be detected in the near future of the LHC 13 TeV run. We also predict that the present model is testable by earth-based DM direct detection experiments like Xenon-1T and in future by Darwin.

  13. Expanding the role of reactive transport models in critical zone processes

    USGS Publications Warehouse

    Li, Li; Maher, Kate; Navarre-Sitchler, Alexis; Druhan, Jennifer; Meile, Christof; Lawrence, Corey; Moore, Joel; Perdrial, Julia; Sullivan, Pamela; Thompson, Aaron; Jin, Lixin; Bolton, Edward W.; Brantley, Susan L.; Dietrich, William E.; Mayer, K. Ulrich; Steefel, Carl; Valocchi, Albert J.; Zachara, John M.; Kocar, Benjamin D.; McIntosh, Jennifer; Tutolo, Benjamin M.; Kumar, Mukesh; Sonnenthal, Eric; Bao, Chen; Beisman, Joe

    2017-01-01

    Models test our understanding of processes and can reach beyond the spatial and temporal scales of measurements. Multi-component Reactive Transport Models (RTMs), initially developed more than three decades ago, have been used extensively to explore the interactions of geothermal, hydrologic, geochemical, and geobiological processes in subsurface systems. Driven by extensive data sets now available from intensive measurement efforts, there is a pressing need to couple RTMs with other community models to explore non-linear interactions among the atmosphere, hydrosphere, biosphere, and geosphere. Here we briefly review the history of RTM development, summarize the current state of RTM approaches, and identify new research directions, opportunities, and infrastructure needs to broaden the use of RTMs. In particular, we envision the expanded use of RTMs in advancing process understanding in the Critical Zone, the veneer of the Earth that extends from the top of vegetation to the bottom of groundwater. We argue that, although parsimonious models are essential at larger scales, process-based models offer tools to explore the highly nonlinear coupling that characterizes natural systems. We present seven testable hypotheses that emphasize the unique capabilities of process-based RTMs for (1) elucidating chemical weathering and its physical and biogeochemical drivers; (2) understanding the interactions among roots, micro-organisms, carbon, water, and minerals in the rhizosphere; (3) assessing the effects of heterogeneity across spatial and temporal scales; and (4) integrating the vast quantity of novel data, including “omics” data (genomics, transcriptomics, proteomics, metabolomics), elemental concentration and speciation data, and isotope data into our understanding of complex earth surface systems. With strong support from data-driven sciences, we are now in an exciting era where integration of RTM framework into other community models will facilitate process understanding across disciplines and across scales.

  14. Equilibration: Developing the Hard Core of the Piagetian Research Program.

    ERIC Educational Resources Information Center

    Rowell, J.A.

    1983-01-01

    Argues that the status of the concept of equilibration is clarified by considering Piagetian theory as a research program in the sense elaborated in 1974 by Lakatos. A pilot study was made to examine the precision and testability of equilibration in Piaget's 1977 model. (Author/RH)

  15. Toward a Testable Developmental Model of Pedophilia: The Development of Erotic Age Preference.

    ERIC Educational Resources Information Center

    Freund, Kurt; Kuban, Michael

    1993-01-01

    Analysis of retrospective self-reports about childhood curiosity to see persons in the nude, with heterosexual and homosexual pedophiles, gynephiles, and androphiles, suggests that establishment of erotic sex preference preceded that of age preference, and a greater proportion of pedophiles than gynephiles or androphiles remembered childhood…

  16. Mapping the landscape of metabolic goals of a cell

    DOE PAGES

    Zhao, Qi; Stettner, Arion I.; Reznik, Ed; ...

    2016-05-23

    Here, genome-scale flux balance models of metabolism provide testable predictions of all metabolic rates in an organism, by assuming that the cell is optimizing a metabolic goal known as the objective function. We introduce an efficient inverse flux balance analysis (invFBA) approach, based on linear programming duality, to characterize the space of possible objective functions compatible with measured fluxes. After testing our algorithm on simulated E. coli data and time-dependent S. oneidensis fluxes inferred from gene expression, we apply our inverse approach to flux measurements in long-term evolved E. coli strains, revealing objective functions that provide insight into metabolic adaptation trajectories.
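
    The forward problem that invFBA inverts is an ordinary flux balance analysis linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. The tiny three-reaction network below is an illustrative assumption used only to show that formulation; it is not a genome-scale model and does not reproduce the paper's duality-based inverse method.

      import numpy as np
      from scipy.optimize import linprog

      # Reactions: v0 (uptake -> A), v1 (A -> B), v2 (B -> biomass, the objective).
      # Metabolite rows: A, B. Steady state requires S @ v = 0.
      S = np.array([[1.0, -1.0,  0.0],
                    [0.0,  1.0, -1.0]])
      bounds = [(0, 10), (0, 10), (0, 10)]      # flux bounds for v0, v1, v2
      c = np.array([0.0, 0.0, -1.0])            # linprog minimizes, so negate v2

      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
      print("optimal fluxes:", res.x)           # expected: [10, 10, 10]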

  17. New physics at the TeV scale

    NASA Astrophysics Data System (ADS)

    Chakdar, Shreyashi

    The Standard Model of particle physics is assumed to be a low-energy effective theory with new physics theoretically motivated to be around the TeV scale. The thesis presents theories with new physics beyond the Standard Model at the TeV scale testable in colliders. Work done in chapters 2, 3 and 5 of this thesis presents some models incorporating different approaches of enlarging the Standard Model gauge group to a grand unified symmetry, with each model presenting its unique signatures in the colliders. The study on leptoquark gauge bosons in reference to the TopSU(5) model in chapter 2 showed that their discovery mass range extends up to 1.5 TeV at the 14 TeV LHC with luminosity of 100 fb^-1. On the other hand, in chapter 3 we studied the collider phenomenology of TeV scale mirror fermions in the Left-Right Mirror model, finding that the reach for the mirror quarks goes up to 750 GeV at the 14 TeV LHC with 300 fb^-1 luminosity. In chapter 4 we have enlarged the bosonic symmetry to fermi-bose symmetry, e.g. supersymmetry, and have shown that SUSY with non-universalities in gaugino or scalar masses within a high scale SUGRA set up can still be accessible at the LHC with 14 TeV. In chapter 5, we performed a study in respect to the e+e- collider and find that precise measurements of the Higgs boson mass splittings up to ~100 MeV may be possible with high luminosity at the International Linear Collider (ILC). In chapter 6 we have shown that the experimental data on neutrino masses and mixings are consistent with the proposed 4/5 parameter Dirac neutrino models, yielding a solution for the neutrino masses with inverted mass hierarchy and a large CP violating phase delta, and thus can be tested experimentally. Chapter 7 of the thesis incorporates a warm dark matter candidate in the context of a two Higgs doublet model. The model has several testable consequences at colliders, with the charged scalar and pseudoscalar being in the few hundred GeV mass range. This thesis presents an endeavor to study beyond standard model physics at the TeV scale with testable signals in the colliders.

  18. A collider observable QCD axion

    DOE PAGES

    Dimopoulos, Savas; Hook, Anson; Huang, Junwu; ...

    2016-11-09

    Here, we present a model where the QCD axion is at the TeV scale and visible at a collider via its decays. Conformal dynamics and strong CP considerations account for the axion coupling strongly enough to the standard model to be produced as well as the coincidence between the weak scale and the axion mass. The model predicts additional pseudoscalar color octets whose properties are completely determined by the axion properties rendering the theory testable.

  19. Soviet Economic Policy Towards Eastern Europe

    DTIC Science & Technology

    1988-11-01

    high. Without specifying the determinants of Soviet demand for "allegiance" in more detail, the model is not testable; we cannot predict how subsidy...trade inside (Czechoslovakia, Bulgaria). These countries are behaving as predicted by the model. If this hypothesis is true, the pattern of subsidies...also compares the sum of per capita subsidies by country between 1970 and 1982 with the sum of subsidies predicted by the model. Because of the poor

  20. Chiral primordial blue tensor spectra from the axion-gauge couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obata, Ippei, E-mail: obata@tap.scphys.kyoto-u.ac.jp

    We suggest a new feature of primordial gravitational waves sourced by the axion-gauge couplings, whose forms are motivated by the dimensional reduction of the form field in string theory. In our inflationary model, as an inflaton we adopt two types of axion, dubbed the model-independent axion and the model-dependent axion, which couple with two gauge groups with different sign combinations. Due to these forms, both polarization modes of the gauge fields are amplified and enhance both helicities of tensor modes during inflation. We point out the possibility that primordial blue-tilted tensor power spectra with small chirality are provided by the combination of these axion-gauge couplings; intriguingly, both the amplitudes and the chirality are potentially testable by future space-based gravitational wave interferometers such as the DECIGO and BBO projects.

  1. Coordinating AgMIP data and models across global and regional scales for 1.5°C and 2.0°C assessments

    NASA Astrophysics Data System (ADS)

    Rosenzweig, Cynthia; Ruane, Alex C.; Antle, John; Elliott, Joshua; Ashfaq, Muhammad; Chatta, Ashfaq Ahmad; Ewert, Frank; Folberth, Christian; Hathie, Ibrahima; Havlik, Petr; Hoogenboom, Gerrit; Lotze-Campen, Hermann; MacCarthy, Dilys S.; Mason-D'Croz, Daniel; Contreras, Erik Mencos; Müller, Christoph; Perez-Dominguez, Ignacio; Phillips, Meridel; Porter, Cheryl; Raymundo, Rubi M.; Sands, Ronald D.; Schleussner, Carl-Friedrich; Valdivia, Roberto O.; Valin, Hugo; Wiebe, Keith

    2018-05-01

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has developed novel methods for Coordinated Global and Regional Assessments (CGRA) of agriculture and food security in a changing world. The present study aims to perform a proof of concept of the CGRA to demonstrate advantages and challenges of the proposed framework. This effort responds to the request by the UN Framework Convention on Climate Change (UNFCCC) for the implications of limiting global temperature increases to 1.5°C and 2.0°C above pre-industrial conditions. The protocols for the 1.5°C/2.0°C assessment establish explicit and testable linkages across disciplines and scales, connecting outputs and inputs from the Shared Socio-economic Pathways (SSPs), Representative Agricultural Pathways (RAPs), Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI) and Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble scenarios, global gridded crop models, global agricultural economics models, site-based crop models and within-country regional economics models. The CGRA consistently links disciplines, models and scales in order to track the complex chain of climate impacts and identify key vulnerabilities, feedbacks and uncertainties in managing future risk. CGRA proof-of-concept results show that, at the global scale, there are mixed areas of positive and negative simulated wheat and maize yield changes, with declines in some breadbasket regions, at both 1.5°C and 2.0°C. Declines are especially evident in simulations that do not take into account direct CO2 effects on crops. These projected global yield changes mostly resulted in increases in prices and areas of wheat and maize in two global economics models. Regional simulations for 1.5°C and 2.0°C using site-based crop models had mixed results depending on the region and the crop. In conjunction with price changes from the global economics models, productivity declines in the Punjab, Pakistan, resulted in an increase in vulnerable households and the poverty rate. This article is part of the theme issue `The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.

  2. Coordinating AgMIP data and models across global and regional scales for 1.5°C and 2.0°C assessments.

    PubMed

    Rosenzweig, Cynthia; Ruane, Alex C; Antle, John; Elliott, Joshua; Ashfaq, Muhammad; Chatta, Ashfaq Ahmad; Ewert, Frank; Folberth, Christian; Hathie, Ibrahima; Havlik, Petr; Hoogenboom, Gerrit; Lotze-Campen, Hermann; MacCarthy, Dilys S; Mason-D'Croz, Daniel; Contreras, Erik Mencos; Müller, Christoph; Perez-Dominguez, Ignacio; Phillips, Meridel; Porter, Cheryl; Raymundo, Rubi M; Sands, Ronald D; Schleussner, Carl-Friedrich; Valdivia, Roberto O; Valin, Hugo; Wiebe, Keith

    2018-05-13

    The Agricultural Model Intercomparison and Improvement Project (AgMIP) has developed novel methods for Coordinated Global and Regional Assessments (CGRA) of agriculture and food security in a changing world. The present study aims to perform a proof of concept of the CGRA to demonstrate advantages and challenges of the proposed framework. This effort responds to the request by the UN Framework Convention on Climate Change (UNFCCC) for the implications of limiting global temperature increases to 1.5°C and 2.0°C above pre-industrial conditions. The protocols for the 1.5°C/2.0°C assessment establish explicit and testable linkages across disciplines and scales, connecting outputs and inputs from the Shared Socio-economic Pathways (SSPs), Representative Agricultural Pathways (RAPs), Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI) and Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble scenarios, global gridded crop models, global agricultural economics models, site-based crop models and within-country regional economics models. The CGRA consistently links disciplines, models and scales in order to track the complex chain of climate impacts and identify key vulnerabilities, feedbacks and uncertainties in managing future risk. CGRA proof-of-concept results show that, at the global scale, there are mixed areas of positive and negative simulated wheat and maize yield changes, with declines in some breadbasket regions, at both 1.5°C and 2.0°C. Declines are especially evident in simulations that do not take into account direct CO2 effects on crops. These projected global yield changes mostly resulted in increases in prices and areas of wheat and maize in two global economics models. Regional simulations for 1.5°C and 2.0°C using site-based crop models had mixed results depending on the region and the crop. In conjunction with price changes from the global economics models, productivity declines in the Punjab, Pakistan, resulted in an increase in vulnerable households and the poverty rate. This article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'. © 2018 The Authors.

  3. Cognitive architectures and language acquisition: a case study in pronoun comprehension.

    PubMed

    VAN Rij, Jacolien; VAN Rijn, Hedderik; Hendriks, Petra

    2010-06-01

    In this paper we discuss a computational cognitive model of children's poor performance on pronoun interpretation (the so-called Delay of Principle B Effect, or DPBE). This cognitive model is based on a theoretical account that attributes the DPBE to children's inability as hearers to also take into account the speaker's perspective. The cognitive model predicts that child hearers are unable to do so because their speed of linguistic processing is too limited to perform this second step in interpretation. We tested this hypothesis empirically in a psycholinguistic study, in which we slowed down the speech rate to give children more time for interpretation, and in a computational simulation study. The results of the two studies confirm the predictions of our model. Moreover, these studies show that embedding a theory of linguistic competence in a cognitive architecture allows for the generation of detailed and testable predictions with respect to linguistic performance.

  4. Phenomenological vs. biophysical models of thermal stress in aquatic eggs

    NASA Astrophysics Data System (ADS)

    Martin, B.

    2016-12-01

    Predicting species responses to climate change is a central challenge in ecology, with most efforts relying on lab derived phenomenological relationships between temperature and fitness metrics. We tested one of these models using the embryonic stage of a Chinook salmon population. We parameterized the model with laboratory data, applied it to predict survival in the field, and found that it significantly underestimated field-derived estimates of thermal mortality. We used a biophysical model based on mass-transfer theory to show that the discrepancy was due to the differences in water flow velocities between the lab and the field. This mechanistic approach provides testable predictions for how the thermal tolerance of embryos depends on egg size and flow velocity of the surrounding water. We found support for these predictions across more than 180 fish species, suggesting that flow and temperature mediated oxygen limitation is a general mechanism underlying the thermal tolerance of embryos.
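
    A hedged back-of-envelope sketch of the supply/demand logic implied above follows: oxygen demand rises with temperature (Q10 scaling) and with egg volume, while boundary-layer oxygen supply scales with egg surface area and increases with the flow velocity of the surrounding water. All coefficients and functional forms are made-up illustrations of the mass-transfer reasoning, not the authors' parameterization.

      import math

      def o2_demand(temp_c, radius_mm, q10=2.0, k_demand=1.0):
          # Metabolic demand scales with egg volume and rises with temperature.
          volume = (4.0 / 3.0) * math.pi * radius_mm ** 3
          return k_demand * volume * q10 ** ((temp_c - 10.0) / 10.0)

      def o2_supply(flow_cm_s, radius_mm, k_supply=2.0):
          # Boundary-layer supply scales with surface area and with flow velocity.
          surface = 4.0 * math.pi * radius_mm ** 2
          return k_supply * surface * math.sqrt(flow_cm_s)

      def critical_temperature(flow_cm_s, radius_mm, t_max=40.0):
          # Highest temperature (0.1 C grid, capped at t_max) at which supply
          # still meets demand.
          supply = o2_supply(flow_cm_s, radius_mm)
          t_crit = 0.0
          for deci in range(int(t_max * 10) + 1):
              t = deci / 10.0
              if o2_demand(t, radius_mm) <= supply:
                  t_crit = t
          return t_crit

      for flow in (1.0, 10.0, 50.0):
          print(f"flow {flow:>4} cm/s -> critical T ~ {critical_temperature(flow, 3.0)} C")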

  5. Avoidant/Restrictive Food Intake Disorder: a Three-Dimensional Model of Neurobiology with Implications for Etiology and Treatment.

    PubMed

    Thomas, Jennifer J; Lawson, Elizabeth A; Micali, Nadia; Misra, Madhusmita; Deckersbach, Thilo; Eddy, Kamryn T

    2017-08-01

    DSM-5 defined avoidant/restrictive food intake disorder (ARFID) as a failure to meet nutritional needs leading to low weight, nutritional deficiency, dependence on supplemental feedings, and/or psychosocial impairment. We summarize what is known about ARFID and introduce a three-dimensional model to inform research. Because ARFID prevalence, risk factors, and maintaining mechanisms are not known, prevailing treatment approaches are based on clinical experience rather than data. Furthermore, most ARFID research has focused on children, rather than adolescents or adults. We hypothesize a three-dimensional model wherein neurobiological abnormalities in sensory perception, homeostatic appetite, and negative valence systems underlie the three primary ARFID presentations of sensory sensitivity, lack of interest in eating, and fear of aversive consequences, respectively. Now that ARFID has been defined, studies investigating risk factors, prevalence, and pathophysiology are needed. Our model suggests testable hypotheses about etiology and highlights cognitive-behavioral therapy as one possible treatment.

  6. The Colloquium

    NASA Astrophysics Data System (ADS)

    Amoroso, Richard L.

    A brief introductory survey of Unified Field Mechanics (UFM) is given from the perspective of a Holographic Anthropic Multiverse cosmology in 12 `continuous-state' dimensions. The paradigm with many new parameters is cast in a scale-invariant conformal covariant Dirac polarized vacuum utilizing extended HD forms of the de Broglie-Bohm and Cramer interpretations of quantum theory. The model utilizes a unique form of M-Theory based in part on the original hadronic form of string theory that had a variable string tension, TS and included a tachyon. The model is experimentally testable, thus putatively able to demonstrate the existence of large-scale additional dimensionality (LSXD), test for QED violating tight-bound state spectral lines in hydrogen `below' the lowest Bohr orbit, and surmount the quantum uncertainty principle utilizing a hyperincursive Sagnac Effect resonance hierarchy.

  7. A Model-based Health Monitoring and Diagnostic System for the UH-60 Helicopter. Appendix D

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Hindson, William; Sanderfer, Dwight; Deb, Somnath; Domagala, Chuck

    2001-01-01

    Model-based reasoning techniques hold much promise in providing comprehensive monitoring and diagnostics capabilities for complex systems. We are exploring the use of one of these techniques, which utilizes multi-signal modeling and the TEAMS-RT real-time diagnostic engine, on the UH-60 Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) flight research aircraft. We focus on the engine and transmission systems, and acquire sensor data across the 1553 bus as well as by direct analog-to-digital conversion from sensors to the QHuMS (Qualtech health and usage monitoring system) computer. The QHuMS computer uses commercially available components and is rack-mounted in the RASCAL facility. A multi-signal model of the transmission and engine subsystems enables studies of system testability and analysis of the degree of fault isolation available with various instrumentation suites. The model and examples of these analyses will be described and the data architectures enumerated. Flight tests of this system will validate the data architecture and provide real-time flight profiles to be further analyzed in the laboratory.

  8. The Labor Market and the Second Economy in the Soviet Union

    DTIC Science & Technology

    1991-01-01

    model. WHO WORKS "ON THE LEFT"? 15 (The non-second economy income (V) is in turn composed of official first economy income, pilferage from the first...demands. In other words, the model assumes that the family "pools" all unearned income regardless of source. This is one of the few testable assumptions...of the neoclassical model. 16 In the labor supply model in this paper, we have assumed that all first economy income, for both husband and wife, is

  9. Changing Perspectives on Basic Research in Adult Learning and Memory

    ERIC Educational Resources Information Center

    Hultsch, David F.

    1977-01-01

    It is argued that whether the course of cognitive development is characterized by growth, stability, or decline is less a matter of data than of the metamodel on which the theories and data are based. Such metamodels are representations of reality that are not empirically testable. (Author/RH)

  10. The Process of Mentoring Pregnant Adolescents: An Exploratory Study.

    ERIC Educational Resources Information Center

    Blinn-Pike, Lynn; Kuschel, Diane; McDaniel, Annette; Mingus, Suzanne; Mutti, Megan Poole

    1998-01-01

    The process that occurs in relationships between volunteer adult mentors and pregnant adolescent "mentees" is described empirically; testable hypotheses based on findings concerning the mentor role are proposed. Case records from 20 mentors are analyzed; findings regarding mentors' roles are discussed. Criteria for conceptualizing quasi-parenting…

  11. Performance Models of Testability.

    DTIC Science & Technology

    1984-08-01

    4.1.17 Cost of Isolating Component/Part (CPI). Cost of isolating components or parts at the depot is CPI = n1 (HDC)(TPI)(NPI), where TPI = average... [The remainder of this excerpt is fault-isolation flow-chart residue covering failure detection, the cost of isolating a component, the cost of component removal and replacement, and the expected cost of component removal and replacement.]

  12. What Does It Mean to Know?

    ERIC Educational Resources Information Center

    Kirch, Susan A.; Stetsenko, Anna

    2012-01-01

    What do people mean when they say they "know" something in science? It usually means they did an investigation and expended considerable intellectual effort to build a useful explanatory model. It means they are confident about an explanation, believe others should trust what they say, and believe that their claim is testable. It means they can…

  13. A Cognitive Approach to Brailling Errors

    ERIC Educational Resources Information Center

    Wells-Jensen, Sheri; Schwartz, Aaron; Gosche, Bradley

    2007-01-01

    This article analyzes a corpus of 1,600 brailling errors made by one expert braillist. It presents a testable model of braille writing and shows that the subject braillist stores standard braille contractions as part of the orthographic representation of words, rather than imposing contractions on a serially ordered string of letters. (Contains 1…

  14. Thinking about Evolution: Combinatorial Play as a Strategy for Exercising Scientific Creativity

    ERIC Educational Resources Information Center

    Wingate, Richard J. T.

    2011-01-01

    An enduring focus in education on how scientists formulate experiments and "do science" in the laboratory has excluded a vital element of scientific practice: the creative and imaginative thinking that generates models and testable hypotheses. In this case study, final-year biomedical sciences university students were invited to create and justify…

  15. LSI (Large Scale Integrated) Design for Testability. Final Report of Design, Demonstration, and Testability Analysis.

    DTIC Science & Technology

    1983-11-01

    compound operations, with status. (h) Pre-programmed CRC and double-precision multiply/divide algorithms. (i) Double length accumulator with full...

  16. Online testable concept maps: benefits for learning about the pathogenesis of disease.

    PubMed

    Ho, Veronica; Kumar, Rakesh K; Velan, Gary

    2014-07-01

    Concept maps have been used to promote meaningful learning and critical thinking. Although these are crucially important in all disciplines, evidence for the benefits of concept mapping for learning in medicine is limited. We performed a randomised crossover study to assess the benefits of online testable concept maps for learning in pathology by volunteer junior medical students. Participants (n = 65) were randomly allocated to either of two groups with equivalent mean prior academic performance, in which they were given access to either online maps or existing online resources for a 2-week block on renal disease. Groups then crossed over for a 2-week block on hepatic disease. Outcomes were assessed using timed online quizzes, which included questions unrelated to topics in the pathogenesis maps as an internal control. Questionnaires were administered to evaluate students' acceptance of the maps. In both blocks, the group with access to pathogenesis maps achieved significantly higher average scores than the control group on quiz questions related to topics covered by the maps (Block 1: p < 0.001, Cohen's d = 0.9; Block 2: p = 0.008, Cohen's d = 0.7). However, mean scores on unrelated questions did not differ significantly between the groups. In a third block on pancreatic disease, both groups received pathogenesis maps and collectively performed significantly better on quiz topics related to the maps than on unrelated topics (p < 0.01, Cohen's d = 0.5). Regression analysis revealed that access to pathogenesis maps was the dominant contributor to variance in performance on map-related quiz questions. Responses to questionnaire items on pathogenesis maps were overwhelmingly positive in both groups. These results indicate that online testable pathogenesis maps are well accepted and can improve learning of concepts in pathology by medical students. © 2014 John Wiley & Sons Ltd.

  17. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    PubMed Central

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems. PMID:28079187
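
    A simplified, non-spiking sketch of the temporal-coincidence principle that such event-based stereo systems exploit is shown below: events from the left and right sensors that occur on the same row, with the same polarity, within a short time window are treated as candidate matches, and the column difference of the best match gives the disparity. This is a plain illustration of the matching constraints only; it does not reproduce the paper's spiking neural network, and the event data and parameter values are made up.

      from collections import namedtuple

      Event = namedtuple("Event", "t x y polarity")   # timestamp (s), pixel, polarity

      def match_events(left, right, dt_max=0.001, max_disparity=30):
          matches = []
          for el in left:
              candidates = [er for er in right
                            if er.y == el.y                      # epipolar (same row)
                            and abs(er.t - el.t) <= dt_max       # temporal coincidence
                            and er.polarity == el.polarity
                            and 0 <= el.x - er.x <= max_disparity]
              if candidates:
                  best = min(candidates, key=lambda er: abs(er.t - el.t))
                  matches.append((el, best, el.x - best.x))      # (left, right, disparity)
          return matches

      left = [Event(0.0100, 42, 10, 1), Event(0.0105, 80, 11, -1)]
      right = [Event(0.0101, 37, 10, 1), Event(0.0200, 70, 11, -1)]
      for el, er, d in match_events(left, right):
          print(f"row {el.y}: disparity {d} px")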

  18. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems.

    PubMed

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-12

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  19. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    NASA Astrophysics Data System (ADS)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  20. Educating health care trainees and professionals about suicide prevention in depressed adolescents.

    PubMed

    Rice, Timothy R; Sher, Leo

    2013-01-01

    Adolescent depression is a highly prevalent disorder with significant morbidity and suicide mortality. It is simultaneously highly responsive to treatment. Adolescents wish to discuss depression with their providers, and providers routinely receive opportunities to do so. These characteristics of prevalence, morbidity, mortality, responsiveness, and accessibility make adolescent depression an excellent target of care. However, most health care trainees and professionals report low confidence in caring for adolescent depression. As a caregiver community, we fare poorly in routine matters of assessment and management of adolescent depression. All health care professionals are trained within a medical model. In this light, the conceptualization of adolescent depression and suicidality within the medical model may increase provider confidence and performance. Epidemiology and neurobiology are presented with emphasis in this review. Legal concerns also affect health care professionals. For example, providers may deviate from evidence-based medicine owing to anxieties that the identification and treatment of depression may induce suicide and consequent legal culpability. A review of the historical context and relevant outcome trials concerning the increased risk of suicidality in depressed adolescents treated with selective serotonin reuptake inhibitors may increase provider comfort. Furthermore, increased didactic and experiential training improves provider performance. In this work, proven models were discussed, along with the testable hypothesis that education incorporating the views of this article can produce the best care for depressed adolescents.

  1. Eye Examination Testability in Children with Autism and in Typical Peers

    PubMed Central

    Coulter, Rachel Anastasia; Bade, Annette; Tea, Yin; Fecho, Gregory; Amster, Deborah; Jenewein, Erin; Rodena, Jacqueline; Lyons, Kara Kelley; Mitchell, G. Lynn; Quint, Nicole; Dunbar, Sandra; Ricamato, Michele; Trocchio, Jennie; Kabat, Bonnie; Garcia, Chantel; Radik, Irina

    2015-01-01

    ABSTRACT Purpose To compare testability of vision and eye tests in an examination protocol of 9- to 17-year-old patients with autism spectrum disorder (ASD) to typically developing (TD) peers. Methods In a prospective pilot study, 61 children and adolescents (34 with ASD and 27 who were TD) aged 9 to 17 years completed an eye examination protocol including tests of visual acuity, refraction, convergence (eye teaming), stereoacuity (depth perception), ocular motility, and ocular health. Patients who required new refractive correction were retested after wearing their updated spectacle prescription for 1 month. The specialized protocol incorporated visual, sensory, and communication supports. A psychologist determined group status/eligibility using DSM-IV-TR (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision) criteria by review of previous evaluations and parent responses on the Social Communication Questionnaire. Before the examination, parents provided information regarding patients’ sex, race, ethnicity, and, for ASD patients, verbal communication level (nonverbal, uses short words, verbal). Parents indicated whether the patient wore a refractive correction, whether the patient had ever had an eye examination, and the age at the last examination. Chi-square tests compared testability results for TD and ASD groups. Results Typically developing and ASD groups did not differ by age (p = 0.54), sex (p = 0.53), or ethnicity (p = 0.22). Testability was high on most tests (TD, 100%; ASD, 88 to 100%), except for intraocular pressure (IOP), which was reduced for both the ASD (71%) and the TD (89%) patients. Among ASD patients, IOP testability varied greatly with verbal communication level (p < 0.001). Although IOP measurements were completed on all verbal patients, only 37.5% of nonverbal and 44.4% of ASD patients who used short words were successful. Conclusions Patients with ASD can complete most vision and eye tests within an examination protocol. Testability of IOPs is reduced, particularly for nonverbal patients and patients who use short words to communicate. PMID:25415280

  2. Pediatric Amblyopia Risk Investigation Study (PARIS).

    PubMed

    Savage, Howard I; Lee, Hester H; Zaetta, Deneen; Olszowy, Ronald; Hamburger, Ellie; Weissman, Mark; Frick, Kevin

    2005-12-01

    To assess the learning curve, testability, and reliability of vision screening modalities administered by pediatric health extenders. Prospective masked clinical trial. Two hundred subjects aged 3 to 6 underwent timed screening for amblyopia by physician extenders, including LEA visual acuity (LEA), stereopsis (RDE), and noncycloplegic autorefraction (NCAR). Patients returned for a comprehensive diagnostic eye examination performed by an ophthalmologist or optometrist. Average screening time was 5.4 +/- 1.6 minutes (LEA), 1.9 +/- 0.9 minutes (RDE), and 1.7 +/- 1.0 minutes (NCAR). Test time for NCAR and RDE fell by 40% during the study period. Overall testability was 92% (LEA), 96% (RDE), and 94% (NCAR). Testability among 3-year-olds was 73% (LEA), 96% (RDE), and 89% (NCAR). Reliability of LEA was moderate (r = .59). Reliability of NCAR was high for astigmatism (Cyl) (r = .89), moderate for spherical equivalent (SE) (r = .66), and low for anisometropia (ANISO) (r = .38). Correlation of cycloplegic autorefraction (CAR) with gold standard cycloplegic retinoscopic refraction (CRR) was very high for SE (.85), CYL (.77), and moderate for ANISO (.48). With NCAR, physician extenders can quickly and reliably detect astigmatism and spherical refractive error in one-third the time it takes to obtain visual acuity. LEA has a lower initial cost, but is time consuming, moderately reliable, and more difficult for 3-year-olds. Shorter examination time and higher reliability may make NCAR a more efficient screening tool for refractive amblyopia in younger children. Future study is needed to determine the sensitivity and specificity of NCAR and other screening methods in detecting amblyopia and amblyopia risk factors.

  3. Advanced Launch System Multi-Path Redundant Avionics Architecture Analysis and Characterization

    NASA Technical Reports Server (NTRS)

    Baker, Robert L.

    1993-01-01

    The objective of the Multi-Path Redundant Avionics Suite (MPRAS) program is the development of a set of avionic architectural modules which will be applicable to the family of launch vehicles required to support the Advanced Launch System (ALS). To enable ALS cost/performance requirements to be met, the MPRAS must support autonomy, maintenance, and testability capabilities which exceed those present in conventional launch vehicles. The multi-path redundant or fault tolerance characteristics of the MPRAS are necessary to offset a reduction in avionics reliability due to the increased complexity needed to support these new cost reduction and performance capabilities and to meet avionics reliability requirements which will provide cost-effective reductions in overall ALS recurring costs. A complex, real-time distributed computing system is needed to meet the ALS avionics system requirements. General Dynamics, Boeing Aerospace, and C.S. Draper Laboratory have proposed system architectures as candidates for the ALS MPRAS. The purpose of this document is to report the results of independent performance and reliability characterization and assessment analyses of each proposed candidate architecture and qualitative assessments of testability, maintainability, and fault tolerance mechanisms. These independent analyses were conducted as part of the MPRAS Part 2 program and were carried out under NASA Langley Research Contract NAS1-17964, Task Assignment 28.

  4. Improving Health Care for Assisted Living Residents

    ERIC Educational Resources Information Center

    Kane, Robert L.; Mach, John R., Jr.

    2007-01-01

    Purpose: The purpose of this article is to explore how medical care is delivered to older people in assisted living (AL) settings and to suggest ways for improving it. Design and Methods: We present a review of the limited research available on health care for older AL residents and on building testable models of better ways to organize primary…

  5. Network Simulation Models

    DTIC Science & Technology

    2008-12-01

    1979; Wasserman and Faust, 1994). SNA thus relies heavily on graph theory to make predictions about network structure and thus social behavior...becomes a tool for increasing the specificity of theory, thinking through the theoretical implications, and generating testable predictions. In...to summarize Construct and its roots in constructural sociological theory. We discover that the (LPM) provides a mathematical bridge between

  6. Creation of a Mouse with Stress-Induced Dystonia: Control of an ATPase Chaperone

    DTIC Science & Technology

    2013-04-01

    was successful, and a mouse with the desired dystonic symptoms was obtained. It has two mutations, one a dominantly inherited gene with 100...the hallmark of dystonia. 15. SUBJECT TERMS Dystonia, genetically modified mice, stress, gene mutations, animal model of disease. 16...there are a variety of hypotheses that should be testable if there were a realistic animal model. Mice with mutations in genes known to cause dystonia

  7. Multidisciplinary approaches to understanding collective cell migration in developmental biology.

    PubMed

    Schumacher, Linus J; Kulesa, Paul M; McLennan, Rebecca; Baker, Ruth E; Maini, Philip K

    2016-06-01

    Mathematical models are becoming increasingly integrated with experimental efforts in the study of biological systems. Collective cell migration in developmental biology is a particularly fruitful application area for the development of theoretical models to predict the behaviour of complex multicellular systems with many interacting parts. In this context, mathematical models provide a tool to assess the consistency of experimental observations with testable mechanistic hypotheses. In this review, we showcase examples from recent years of multidisciplinary investigations of neural crest cell migration. The neural crest model system has been used to study how collective migration of cell populations is shaped by cell-cell interactions, cell-environmental interactions and heterogeneity between cells. The wide range of emergent behaviours exhibited by neural crest cells in different embryonal locations and in different organisms helps us chart out the spectrum of collective cell migration. At the same time, this diversity in migratory characteristics highlights the need to reconcile or unify the array of currently hypothesized mechanisms through the next generation of experimental data and generalized theoretical descriptions. © 2016 The Authors.

  8. Clinical and neurocognitive aspects of hallucinations in Alzheimer's disease.

    PubMed

    El Haj, Mohamad; Roche, Jean; Jardri, Renaud; Kapogiannis, Dimitrios; Gallouj, Karim; Antoine, Pascal

    2017-12-01

    Due to their prevalence, hallucinations are considered one of the most frequent psychotic symptoms in Alzheimer's disease (AD). These psychotic manifestations reduce patients' well-being, increase the burden of caregivers, contribute to early institutionalization, and are related to the course of cognitive decline in AD. Considering their consequences, we provide a comprehensive account of the current state of knowledge about the prevalence and characteristics of hallucinations in AD. We propose a comprehensive and testable theoretical model about hallucinations in AD: the ALZHA (ALZheimer and HAllucinations) model. In this model, neurological, genetic, cognitive, affective, and iatrogenic factors associated with hallucinations in AD are highlighted. According to the ALZHA model, hallucinations in AD first involve trait markers (i.e., cognitive deficits, neurological deficits, genetic predisposition and/or sensory deficits) to which state markers that may trigger these experiences are added (e.g., psychological distress and/or iatrogenic factors). Finally, we provide recommendations for assessment and management of these psychotic manifestations in AD, with the aim of benefiting patients, caregivers, and health professionals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Sampling and assessment accuracy in mate choice: a random-walk model of information processing in mating decision.

    PubMed

    Castellano, Sergio; Cermelli, Paolo

    2011-04-07

    Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength. Copyright © 2011 Elsevier Ltd. All rights reserved.
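
    A minimal sketch of the sequential-sampling (random-walk) mechanism described above: noisy evidence about a prospective mate is accumulated until it crosses an accept or reject boundary (the stopping rule). The drift, noise level and threshold below are illustrative assumptions, not parameters from the paper.

```python
import random

def assess_mate(true_quality, threshold=5.0, noise_sd=1.0, max_steps=1000):
    """Accumulate noisy samples of mate quality until the evidence crosses
    +threshold (accept) or -threshold (reject). Returns (decision, samples)."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # each sample is the true quality corrupted by observation noise
        evidence += random.gauss(true_quality, noise_sd)
        if evidence >= threshold:
            return "accept", step
        if evidence <= -threshold:
            return "reject", step
    return "undecided", max_steps

# A higher threshold trades decision speed for individual-assessment accuracy.
print(assess_mate(true_quality=0.3, threshold=5.0))
print(assess_mate(true_quality=0.3, threshold=1.0))
```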

  10. Visual attention and flexible normalization pools

    PubMed Central

    Schwartz, Odelia; Coen-Cagli, Ruben

    2013-01-01

    Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting form of model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
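
    The computation sketched above can be illustrated with a toy divisive-normalization calculation in which attention multiplicatively scales unit activations and the surround enters the normalization pool only when centre and surround are treated as statistically dependent. The gains, semi-saturation constant and pooling rule are illustrative assumptions in the spirit of the model, not the authors' implementation.

```python
def normalized_response(center_drive, surround_drive,
                        attn_center=1.0, attn_surround=1.0,
                        same_pool=True, sigma=0.1):
    """Divisive normalization of a centre unit; the surround contributes to
    the normalization pool only if it is deemed statistically dependent."""
    excitatory = attn_center * center_drive
    pool = excitatory + (attn_surround * surround_drive if same_pool else 0.0)
    return excitatory / (sigma + pool)

# Attending to the surround suppresses the centre only when the surround
# is part of the normalization pool.
print(normalized_response(1.0, 1.0, attn_surround=1.0, same_pool=True))
print(normalized_response(1.0, 1.0, attn_surround=3.0, same_pool=True))
print(normalized_response(1.0, 1.0, attn_surround=3.0, same_pool=False))
```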

  11. Integrating Environmental Genomics and Biogeochemical Models: a Gene-centric Approach

    NASA Astrophysics Data System (ADS)

    Reed, D. C.; Algar, C. K.; Huber, J. A.; Dick, G.

    2013-12-01

    Rapid advances in molecular microbial ecology have yielded an unprecedented amount of data about the evolutionary relationships and functional traits of microbial communities that regulate global geochemical cycles. Biogeochemical models, however, are trailing in the wake of the environmental genomics revolution and such models rarely incorporate explicit representations of bacteria and archaea, nor are they compatible with nucleic acid or protein sequence data. Here, we present a functional gene-based framework for describing microbial communities in biogeochemical models that uses genomics data and provides predictions that are readily testable using cutting-edge molecular tools. To demonstrate the approach in practice, nitrogen cycling in the Arabian Sea oxygen minimum zone (OMZ) was modelled to examine key questions about cryptic sulphur cycling and dinitrogen production pathways in OMZs. By directly linking geochemical dynamics to the genetic composition of microbial communities, the method provides mechanistic insights into patterns and biogeochemical consequences of marine microbes. Such an approach is critical for informing our understanding of the key role microbes play in modulating Earth's biogeochemistry.

  12. Authors’ response: mirror neurons: tests and testability.

    PubMed

    Catmur, Caroline; Press, Clare; Cook, Richard; Bird, Geoffrey; Heyes, Cecilia

    2014-04-01

    Commentators have tended to focus on the conceptual framework of our article, the contrast between genetic and associative accounts of mirror neurons, and to challenge it with additional possibilities rather than empirical data. This makes the empirically focused comments especially valuable. The mirror neuron debate is replete with ideas; what it needs now are system-level theories and careful experiments – tests and testability.

  13. Distinct physiological effects of β1- and β2-adrenoceptors in mouse ventricular myocytes: insights from a compartmentalized mathematical model.

    PubMed

    Rozier, Kelvin; Bondarenko, Vladimir E

    2017-05-01

    The β1- and β2-adrenergic signaling systems play different roles in the functioning of cardiac cells. Experimental data show that the activation of the β1-adrenergic signaling system produces significant inotropic, lusitropic, and chronotropic effects in the heart, whereas the effects of the β2-adrenergic signaling system are less apparent. In this paper, a comprehensive compartmentalized experimentally based mathematical model of the combined β1- and β2-adrenergic signaling systems in mouse ventricular myocytes is developed to simulate the experimental findings and make testable predictions of the behavior of the cardiac cells under different physiological conditions. Simulations describe the dynamics of major signaling molecules in different subcellular compartments; kinetics and magnitudes of phosphorylation of ion channels, transporters, and Ca2+ handling proteins; modifications of action potential shape and duration; and [Ca2+]i and [Na+]i dynamics upon stimulation of β1- and β2-adrenergic receptors (β1- and β2-ARs). The model reveals physiological conditions when β2-ARs do not produce significant physiological effects and when their effects can be measured experimentally. Simulations demonstrated that stimulation of β2-ARs with isoproterenol caused a marked increase in the magnitude of the L-type Ca2+ current, [Ca2+]i transient, and phosphorylation of phospholamban only upon additional application of pertussis toxin or inhibition of phosphodiesterases of type 3 and 4. The model also made testable predictions of the changes in magnitudes of [Ca2+]i and [Na+]i fluxes, the rate of decay of [Na+]i concentration upon both combined and separate stimulation of β1- and β2-ARs, and the contribution of phosphorylation of PKA targets to the changes in the action potential and [Ca2+]i transient. Copyright © 2017 the American Physiological Society.

  14. Differentiation without distancing. explaining bi-polarization of opinions without negative influence.

    PubMed

    Mäs, Michael; Flache, Andreas

    2013-01-01

    Explanations of opinion bi-polarization hinge on the assumption of negative influence, individuals' striving to amplify differences to disliked others. However, empirical evidence for negative influence is inconclusive, which motivated us to search for an alternative explanation. Here, we demonstrate that bi-polarization can be explained without negative influence, drawing on theories that emphasize the communication of arguments as central mechanism of influence. Due to homophily, actors interact mainly with others whose arguments will intensify existing tendencies for or against the issue at stake. We develop an agent-based model of this theory and compare its implications to those of existing social-influence models, deriving testable hypotheses about the conditions of bi-polarization. Hypotheses were tested with a group-discussion experiment (N = 96). Results demonstrate that argument exchange can entail bi-polarization even when there is no negative influence.
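
    A hedged, minimal sketch of an argument-communication agent-based model of the kind described above: agents hold a small set of pro or con arguments, interact preferentially with similar others (homophily), and adopt an argument from their interaction partner. The agent count, argument pool, update rule and interaction rule are illustrative assumptions, not the authors' specification, and whether this toy version bi-polarizes depends on the parameter choices.

```python
import random

N_AGENTS, N_ARGS, HELD, STEPS = 20, 10, 4, 2000
signs = [1] * (N_ARGS // 2) + [-1] * (N_ARGS // 2)      # pro (+1) / con (-1) arguments
agents = [random.sample(range(N_ARGS), HELD) for _ in range(N_AGENTS)]

def opinion(args):
    # an agent's opinion is the mean valence of the arguments it currently holds
    return sum(signs[a] for a in args) / len(args)

for _ in range(STEPS):
    i, j = random.sample(range(N_AGENTS), 2)
    # homophily: interaction is more likely the closer the two opinions are
    similarity = 1.0 - abs(opinion(agents[i]) - opinion(agents[j])) / 2.0
    if random.random() < similarity:
        new_arg = random.choice(agents[j])
        if new_arg not in agents[i]:
            agents[i] = agents[i][1:] + [new_arg]   # adopt it, forget the oldest

print(sorted(round(opinion(a), 2) for a in agents))
```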

  15. The biology and polymer physics underlying large‐scale chromosome organization

    PubMed Central

    2017-01-01

    Chromosome large‐scale organization is a beautiful example of the interplay between physics and biology. DNA molecules are polymers and thus belong to the class of molecules for which physicists have developed models and formulated testable hypotheses to understand their arrangement and dynamic properties in solution, based on the principles of polymer physics. Biologists documented and discovered the biochemical basis for the structure, function and dynamic spatial organization of chromosomes in cells. The underlying principles of chromosome organization have recently been revealed in unprecedented detail using high‐resolution chromosome capture technology that can simultaneously detect chromosome contact sites throughout the genome. These independent lines of investigation have now converged on a model in which DNA loops, generated by the loop extrusion mechanism, are the basic organizational and functional units of the chromosome. PMID:29105235

  16. There's No Such Thing as Value-Free Science.

    ERIC Educational Resources Information Center

    Makosky, Vivian Parker

    This paper is based on the view that, although scientists rely on research values such as predictive accuracy and testability, scientific research is still subject to the unscientific values, attitudes, and emotions of the scientists. It is noted that undergraduate students are likely not to think critically about the science they encounter. A…

  17. Purposeful Instruction: Mixing up the "I," "We," and "You"

    ERIC Educational Resources Information Center

    Grant, Maria; Lapp, Diane; Fisher, Douglas; Johnson, Kelly; Frey, Nancy

    2012-01-01

    This article discusses the flexible nature of the gradual release of responsibility (GRR) as a frame for inquiry-based science instruction. Given the mandate for the use of text-supported learning (Common Core Standards), the GRR can be used to allow students to learn as scientists as they collaboratively develop testable questions and experiments…

  18. Estimating skin sensitization potency from a single dose LLNA.

    PubMed

    Roberts, David W

    2015-04-01

    Skin sensitization is an important aspect of safety assessment. The mouse local lymph node assay (LLNA) developed in the 1990s is an in vivo test used for skin sensitization hazard identification and characterization. More recently, a reduced version of the LLNA (rLLNA) has been developed as a means of identifying, but not quantifying, sensitization hazard. The work presented here is aimed at enabling rLLNA data to be used to give quantitative potency information that can be used, inter alia, in modeling and read-across approaches to non-animal based potency estimation. A probit function has been derived enabling estimation of EC3 from a single dose. This has led to development of a modified version of the rLLNA, whereby as a general principle the SI value at 10%, or at a lower concentration if 10% is not testable, is used to calculate the EC3. This version of the rLLNA has been evaluated against a selection of chemicals for which full LLNA data are available, and has been shown to give EC3 values in good agreement with those derived from the full LLNA. Copyright © 2015 Elsevier Inc. All rights reserved.
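
    For context, the conventional full-LLNA EC3 is obtained by linear interpolation between the two tested concentrations whose stimulation indices bracket SI = 3; the single-dose probit approach described above replaces that interpolation when only one informative concentration is available. The sketch below shows only the conventional interpolation (the paper's derived probit function is not reproduced here), with illustrative numbers.

```python
def ec3_by_interpolation(dose_low, si_low, dose_high, si_high, target_si=3.0):
    """Conventional full-LLNA EC3: linear interpolation between the two
    tested concentrations whose stimulation indices bracket SI = 3."""
    if not (si_low < target_si <= si_high):
        raise ValueError("stimulation indices must bracket the target SI")
    frac = (target_si - si_low) / (si_high - si_low)
    return dose_low + frac * (dose_high - dose_low)

# Illustrative example: SI = 1.8 at 2.5% and SI = 4.2 at 10% give EC3 = 6.25%.
print(ec3_by_interpolation(2.5, 1.8, 10.0, 4.2))
```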

  19. Larval transport modeling of deep-sea invertebrates can aid the search for undiscovered populations.

    PubMed

    Yearsley, Jon M; Sigwart, Julia D

    2011-01-01

    Many deep-sea benthic animals occur in patchy distributions separated by thousands of kilometres, yet because deep-sea habitats are remote, little is known about their larval dispersal. Our novel method simulates dispersal by combining data from the Argo array of autonomous oceanographic probes, deep-sea ecological surveys, and comparative invertebrate physiology. The predicted particle tracks allow quantitative, testable predictions about the dispersal of benthic invertebrate larvae in the south-west Pacific. In a test case presented here, using non-feeding, non-swimming (lecithotrophic trochophore) larvae of polyplacophoran molluscs (chitons), we show that the likely dispersal pathways in a single generation are significantly shorter than the distances between the three known population centres in our study region. The large-scale density of chiton populations throughout our study region is potentially much greater than present survey data suggest, with intermediate 'stepping stone' populations yet to be discovered. We present a new method that is broadly applicable to studies of the dispersal of deep-sea organisms. This test case demonstrates the power and potential applications of our new method, in generating quantitative, testable hypotheses at multiple levels to solve the mismatch between observed and expected distributions: probabilistic predictions of locations of intermediate populations, potential alternative dispersal mechanisms, and expected population genetic structure. The global Argo data have never previously been used to address benthic biology, and our method can be applied to any non-swimming larvae of the deep-sea, giving information upon dispersal corridors and population densities in habitats that remain intrinsically difficult to assess.
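
    The dispersal simulations described above rest on particle tracking: larval positions are advected through a current field (in the study, derived from Argo float displacements) for the duration of the larval period. The following sketch shows only the basic forward-Euler integration; the velocity function, time step and larval duration are illustrative placeholders, not the authors' data or code.

```python
def advect(start, velocity_at, days, dt_days=1.0):
    """Forward-Euler particle tracking through a horizontal velocity field
    (velocities in km/day) for a fixed larval duration."""
    x, y = start
    t = 0.0
    while t < days:
        u, v = velocity_at(x, y, t)
        x += u * dt_days
        y += v * dt_days
        t += dt_days
    return x, y

def toy_current(x, y, t):
    # placeholder flow: ~5 km/day eastward with a weak northward component
    return 5.0 + 0.01 * y, 0.5

print(advect((0.0, 0.0), toy_current, days=30))
```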

  20. Larval Transport Modeling of Deep-Sea Invertebrates Can Aid the Search for Undiscovered Populations

    PubMed Central

    Yearsley, Jon M.; Sigwart, Julia D.

    2011-01-01

    Background Many deep-sea benthic animals occur in patchy distributions separated by thousands of kilometres, yet because deep-sea habitats are remote, little is known about their larval dispersal. Our novel method simulates dispersal by combining data from the Argo array of autonomous oceanographic probes, deep-sea ecological surveys, and comparative invertebrate physiology. The predicted particle tracks allow quantitative, testable predictions about the dispersal of benthic invertebrate larvae in the south-west Pacific. Principal Findings In a test case presented here, using non-feeding, non-swimming (lecithotrophic trochophore) larvae of polyplacophoran molluscs (chitons), we show that the likely dispersal pathways in a single generation are significantly shorter than the distances between the three known population centres in our study region. The large-scale density of chiton populations throughout our study region is potentially much greater than present survey data suggest, with intermediate ‘stepping stone’ populations yet to be discovered. Conclusions/Significance We present a new method that is broadly applicable to studies of the dispersal of deep-sea organisms. This test case demonstrates the power and potential applications of our new method, in generating quantitative, testable hypotheses at multiple levels to solve the mismatch between observed and expected distributions: probabilistic predictions of locations of intermediate populations, potential alternative dispersal mechanisms, and expected population genetic structure. The global Argo data have never previously been used to address benthic biology, and our method can be applied to any non-swimming larvae of the deep-sea, giving information upon dispersal corridors and population densities in habitats that remain intrinsically difficult to assess. PMID:21857992

  1. Disentangling the adult attention-deficit hyperactivity disorder endophenotype: parametric measurement of attention.

    PubMed

    Finke, Kathrin; Schwarzkopf, Wolfgang; Müller, Ulrich; Frodl, Thomas; Müller, Hermann J; Schneider, Werner X; Engel, Rolf R; Riedel, Michael; Möller, Hans-Jürgen; Hennig-Fast, Kristina

    2011-11-01

    Attention deficit hyperactivity disorder (ADHD) persists frequently into adulthood. The decomposition of endophenotypes by means of experimental neuro-cognitive assessment has the potential to improve diagnostic assessment, evaluation of treatment response, and disentanglement of genetic and environmental influences. We assessed four parameters of attentional capacity and selectivity derived from simple psychophysical tasks (verbal report of briefly presented letter displays) and based on a "theory of visual attention." These parameters are mathematically independent, quantitative measures, and previous studies have shown that they are highly sensitive for subtle attention deficits. Potential reductions of attentional capacity, that is, of perceptual processing speed and working memory storage capacity, were assessed with a whole report paradigm. Furthermore, possible pathologies of attentional selectivity, that is, selection of task-relevant information and bias in the spatial distribution of attention, were measured with a partial report paradigm. A group of 30 unmedicated adult ADHD patients and a group of 30 demographically matched healthy controls were tested. ADHD patients showed significant reductions of working memory storage capacity of a moderate to large effect size. Perceptual processing speed, task-based, and spatial selection were unaffected. The results imply a working memory deficit as an important source of behavioral impairments. The theory of visual attention parameter working memory storage capacity might constitute a quantifiable and testable endophenotype of ADHD.

  2. A General, Synthetic Model for Predicting Biodiversity Gradients from Environmental Geometry.

    PubMed

    Gross, Kevin; Snyder-Beattie, Andrew

    2016-10-01

    Latitudinal and elevational biodiversity gradients fascinate ecologists, and have inspired dozens of explanations. The geometry of the abiotic environment is sometimes thought to contribute to these gradients, yet evaluations of geometric explanations are limited by a fragmented understanding of the diversity patterns they predict. This article presents a mathematical model that synthesizes multiple pathways by which environmental geometry can drive diversity gradients. The model characterizes species ranges by their environmental niches and limits on range sizes and places those ranges onto the simplified geometries of a sphere or cone. The model predicts nuanced and realistic species-richness gradients, including latitudinal diversity gradients with tropical plateaus and mid-latitude inflection points and elevational diversity gradients with low-elevation diversity maxima. The model also illustrates the importance of a mid-environment effect that augments species richness at locations with intermediate environments. Model predictions match multiple empirical biodiversity gradients, depend on ecological traits in a testable fashion, and formally synthesize elements of several geometric models. Together, these results suggest that previous assessments of geometric hypotheses should be reconsidered and that environmental geometry may play a deeper role in driving biodiversity gradients than is currently appreciated.

  3. Crystal study and econometric model

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An econometric model was developed that can be used to predict demand and supply figures for crystals over a time horizon roughly concurrent with that of NASA's Space Shuttle Program - that is, 1975 through 1990. The model includes an equation to predict the impact on investment in the crystal-growing industry. Two models are presented: the first is a theoretical model that follows the standard economic concepts of supply and demand analysis; the second is a modified version that, though not as theoretically rigorous, is testable using existing data sources.

  4. Understanding protein evolution: from protein physics to Darwinian selection.

    PubMed

    Zeldovich, Konstantin B; Shakhnovich, Eugene I

    2008-01-01

    Efforts in whole-genome sequencing and structural proteomics start to provide a global view of the protein universe, the set of existing protein structures and sequences. However, approaches based on the selection of individual sequences have not been entirely successful at the quantitative description of the distribution of structures and sequences in the protein universe because evolutionary pressure acts on the entire organism, rather than on a particular molecule. In parallel to this line of study, studies in population genetics and phenomenological molecular evolution established a mathematical framework to describe the changes in genome sequences in populations of organisms over time. Here, we review both microscopic (physics-based) and macroscopic (organism-level) models of protein-sequence evolution and demonstrate that bridging the two scales provides the most complete description of the protein universe starting from clearly defined, testable, and physiologically relevant assumptions.

  5. Model and Analytic Processes for Export License Assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Sandra E.; Whitney, Paul D.; Weimar, Mark R.

    2011-09-29

    This paper represents the Department of Energy Office of Nonproliferation Research and Development (NA-22) Simulations, Algorithms and Modeling (SAM) Program's first effort to identify and frame analytical methods and tools to aid export control professionals in effectively predicting proliferation intent, a complex, multi-step and multi-agency process. The report focuses on analytical modeling methodologies that alone, or combined, may improve the proliferation export control license approval process. It is a follow-up to an earlier paper describing information sources and environments related to international nuclear technology transfer. This report describes the decision criteria used to evaluate modeling techniques and tools to determine which approaches will be investigated during the final 2 years of the project. The report also details the motivation for why new modeling techniques and tools are needed. The analytical modeling methodologies will enable analysts to evaluate the information environment for relevance to detecting proliferation intent, with specific focus on assessing risks associated with transferring dual-use technologies. Dual-use technologies can be used in both weapons and commercial enterprises. A decision framework was developed to evaluate which of the different analytical modeling methodologies would be most appropriate conditional on the uniqueness of the approach, data availability, laboratory capabilities, relevance to NA-22 and Office of Arms Control and Nonproliferation (NA-24) research needs, and the impact if successful. Modeling methodologies were divided into whether they could help micro-level assessments (e.g., help improve individual license assessments) or macro-level assessments. Macro-level assessment focuses on suppliers, technology, consumers, economies, and proliferation context. Macro-level assessment technologies scored higher in the area of uniqueness because less work has been done at the macro level. An approach to developing testable hypotheses for the macro-level assessment methodologies is provided. The outcome of this work suggests that we should develop a Bayes Net for micro-level analysis and continue to focus on Bayes Net, System Dynamics and Economic Input/Output models for assessing macro-level problems. Simultaneously, we need to develop metrics for assessing intent in export control, including the risks and consequences associated with all aspects of export control.

  6. Artificial Intelligence Applications to Testability.

    DTIC Science & Technology

    1984-10-01

    general software assistant; examining testability utilization of it should wait a few years until the software assistant is a well-defined product...ago. It provides a single host which satisfies the needs of developers, product developers, and end users. As shown in table 5.10-2, it also provides...follows a trend towards more user-oriented design approaches to interactive computer systems. The implicit goal in this trend is the

  7. The role of Dynamic Energy Budget theory in predictive modeling of stressor impacts on ecological systems. Comment on: "Physics of metabolic organization" by Marko Jusup et al.

    NASA Astrophysics Data System (ADS)

    Galic, Nika; Forbes, Valery E.

    2017-03-01

    Human activities have been modifying ecosystems for centuries, from pressures on wild populations we harvest to modifying habitats through urbanization and agricultural activities. Changes in global climate patterns are adding another layer of, often unpredictable, perturbations to ecosystems on which we rely for life support [1,2]. To ensure the sustainability of ecosystem services, especially at this point in time when the human population is estimated to grow by another 2 billion by 2050 [3], we need to predict possible consequences of our actions and suggest relevant solutions [4,5]. We face several challenges when estimating adverse impacts of our actions on ecosystems. We describe these in the context of ecological risk assessment of chemicals. Firstly, when attempting to assess risk from exposure to chemicals, we base our decisions on a very limited number of species that are easily cultured and kept in the lab. We assume that preventing risk to these species will also protect all of the untested species present in natural ecosystems [6]. Secondly, although we know that chemicals interact with other stressors in the field, the number of stressors that we can test is limited due to logistical and ethical reasons. Similarly, empirical approaches are limited in both spatial and temporal scale due to logistical, financial and ethical reasons [7,8]. To bypass these challenges, we can develop ecological models that integrate relevant life history and other information and make testable predictions across relevant spatial and temporal scales [8-10].

  8. Subjective Prior Distributions for Modeling Longitudinal Continuous Outcomes with Non-Ignorable Dropout

    PubMed Central

    Paddock, Susan M.; Ebener, Patricia

    2010-01-01

    Substance abuse treatment research is complicated by the pervasive problem of non-ignorable missing data – i.e., the occurrence of the missing data is related to the unobserved outcomes. Missing data frequently arise due to early client departure from treatment. Pattern-mixture models (PMMs) are often employed in such situations to jointly model the outcome and the missing data mechanism. PMMs require non-testable assumptions to identify model parameters. Several approaches to parameter identification have therefore been explored for longitudinal modeling of continuous outcomes, and informative priors have been developed in other contexts. In this paper, we describe an expert interview conducted with five substance abuse treatment clinical experts who have familiarity with the Therapeutic Community modality of substance abuse treatment and with treatment process scores collected using the Dimensions of Change Instrument. The goal of the interviews was to obtain expert opinion about the rate of change in continuous client-level treatment process scores for clients who leave before completing two assessments and whose rate of change (slope) in treatment process scores is unidentified by the data. We find that the experts’ opinions differed dramatically from widely-utilized assumptions used to identify parameters in the PMM. Further, subjective prior assessment allows one to properly address the uncertainty inherent in the subjective decisions required to identify parameters in the PMM and to measure their effect on conclusions drawn from the analysis. PMID:19012279

  9. The threshold hypothesis: solving the equation of nurture vs nature in type 1 diabetes.

    PubMed

    Wasserfall, C; Nead, K; Mathews, C; Atkinson, M A

    2011-09-01

    For more than 40 years, the contributions of nurture (i.e. the environment) and nature (i.e. genetics) have been touted for their aetiological importance in type 1 diabetes. Disappointingly, knowledge gains in these areas, while individually successful, have to a large extent occurred in isolation from each other. One reason underlying this divide is the lack of a testable model that simultaneously considers the contributions of genetic and environmental determinants in the formation of this and potentially other disorders that are subject to these variables. To address this void, we have designed a model based on the hypothesis that the aetiological influences of genetics and environment, when evaluated as intersecting and reciprocal trend lines based on odds ratios, result in a method of concurrently evaluating both facets and defining the attributable risk of clinical onset of type 1 diabetes. The model, which we have elected to term the 'threshold hypothesis', also provides a novel means of conceptualising the complex interactions of nurture with nature in type 1 diabetes across various geographical populations.

  10. Testing the low scale seesaw and leptogenesis

    NASA Astrophysics Data System (ADS)

    Drewes, Marco; Garbrecht, Björn; Gueter, Dario; Klarić, Juraj

    2017-08-01

    Heavy neutrinos with masses below the electroweak scale can simultaneously generate the light neutrino masses via the seesaw mechanism and the baryon asymmetry of the universe via leptogenesis. The requirement to explain these phenomena imposes constraints on the mass spectrum of the heavy neutrinos, their flavour mixing pattern and their CP properties. We first combine bounds from different experiments in the past to map the viable parameter regions in which the minimal low scale seesaw model can explain the observed neutrino oscillations, while being consistent with the negative results of past searches for physics beyond the Standard Model. We then study which additional predictions for the properties of the heavy neutrinos can be made based on the requirement to explain the observed baryon asymmetry of the universe. Finally, we comment on the perspectives to find traces of heavy neutrinos in future experimental searches at the LHC, NA62, BELLE II, T2K, SHiP or a future high energy collider, such as ILC, CEPC or FCC-ee. If any heavy neutral leptons are discovered in the future, our results can be used to assess whether these particles are indeed the common origin of the light neutrino masses and the baryon asymmetry of the universe. If the magnitude of their couplings to all Standard Model flavours can be measured individually, and if the Dirac phase in the lepton mixing matrix is determined in neutrino oscillation experiments, then all model parameters can in principle be determined from this data. This makes the low scale seesaw a fully testable model of neutrino masses and baryogenesis.

  11. Modeling spatial patterns of limits to production of deposit-feeders and ectothermic predators in the northern Bering Sea

    NASA Astrophysics Data System (ADS)

    Lovvorn, James R.; Jacob, Ute; North, Christopher A.; Kolts, Jason M.; Grebmeier, Jacqueline M.; Cooper, Lee W.; Cui, Xuehua

    2015-03-01

    Network models can help generate testable predictions and more accurate projections of food web responses to environmental change. Such models depend on predator-prey interactions throughout the network. When a predator currently consumes all of its prey's production, the prey's biomass may change substantially with loss of the predator or invasion by others. Conversely, if production of deposit-feeding prey is limited by organic matter inputs, system response may be predictable from models of primary production. For sea floor communities of shallow Arctic seas, increased temperature could lead to invasion or loss of predators, while reduced sea ice or change in wind-driven currents could alter organic matter inputs. Based on field data and models for three different sectors of the northern Bering Sea, we found a number of cases where all of a prey's production was consumed but the taxa involved varied among sectors. These differences appeared not to result from numerical responses of predators to abundance of preferred prey. Rather, they appeared driven by stochastic variations in relative biomass among taxa, due largely to abiotic conditions that affect colonization and early post-larval survival. Oscillatory tendencies of top-down versus bottom-up interactions may augment these variations. Required inputs of settling microalgae exceeded existing estimates of annual primary production by 50%; thus, assessing limits to bottom-up control depends on better corrections of satellite estimates to account for production throughout the water column. Our results suggest that in this Arctic system, stochastic abiotic conditions outweigh deterministic species interactions in food web responses to a varying environment.

  12. A new computational account of cognitive control over reinforcement-based decision-making: Modeling of a probabilistic learning task.

    PubMed

    Zendehrouh, Sareh

    2015-11-01

    Recent work in the decision-making field offers an account of dual-system theory for the decision-making process. This theory holds that this process is conducted by two main controllers: a goal-directed system and a habitual system. In the reinforcement learning (RL) domain, the habitual behaviors are connected with model-free methods, in which appropriate actions are learned through trial-and-error experiences. However, goal-directed behaviors are associated with model-based methods of RL, in which actions are selected using a model of the environment. Studies on cognitive control also suggest that during processes like decision-making, some cortical and subcortical structures work in concert to monitor the consequences of decisions and to adjust control according to current task demands. Here a computational model is presented based on dual-system theory and the cognitive control perspective of decision-making. The proposed model is used to simulate human performance on a variant of a probabilistic learning task. The basic proposal is that the brain implements a dual controller, while an accompanying monitoring system detects some kinds of conflict, including a hypothetical cost-conflict one. The simulation results address existing theories about two event-related potentials, namely error-related negativity (ERN) and feedback-related negativity (FRN), and explore the best account of them. Based on the results, some testable predictions are also presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
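
    As a point of reference for the habitual ("model-free") controller discussed above, the sketch below implements plain Q-learning with a softmax choice rule on a two-option probabilistic learning task. The reward probabilities, learning rate and inverse temperature are illustrative assumptions; no claim is made that this reproduces the authors' dual-controller or conflict-monitoring components.

```python
import math
import random

REWARD_PROB = [0.8, 0.2]   # assumed reward probabilities of the two options
ALPHA, BETA = 0.1, 3.0     # learning rate and softmax inverse temperature
q = [0.0, 0.0]

def softmax_choice(values, beta):
    weights = [math.exp(beta * v) for v in values]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(values) - 1

for trial in range(500):
    action = softmax_choice(q, BETA)
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    # model-free update: nudge the action value toward the obtained reward
    q[action] += ALPHA * (reward - q[action])

print([round(v, 2) for v in q])   # values approach the reward probabilities
```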

  13. A parallel implementation of an off-lattice individual-based model of multicellular populations

    NASA Astrophysics Data System (ADS)

    Harvey, Daniel G.; Fletcher, Alexander G.; Osborne, James M.; Pitt-Francis, Joe

    2015-07-01

    As computational models of multicellular populations include ever more detailed descriptions of biophysical and biochemical processes, the computational cost of simulating such models limits their ability to generate novel scientific hypotheses and testable predictions. While developments in microchip technology continue to increase the power of individual processors, parallel computing offers an immediate increase in available processing power. To make full use of parallel computing technology, it is necessary to develop specialised algorithms. To this end, we present a parallel algorithm for a class of off-lattice individual-based models of multicellular populations. The algorithm divides the spatial domain between computing processes and comprises communication routines that ensure the model is correctly simulated on multiple processors. The parallel algorithm is shown to accurately reproduce the results of a deterministic simulation performed using a pre-existing serial implementation. We test the scaling of computation time, memory use and load balancing as more processes are used to simulate a cell population of fixed size. We find approximate linear scaling of both speed-up and memory consumption on up to 32 processor cores. Dynamic load balancing is shown to provide speed-up for non-regular spatial distributions of cells in the case of a growing population.
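
    The parallelization strategy described above divides space among processes and communicates cells near the strip boundaries. The toy sketch below illustrates only that bookkeeping for a 1D domain: it assigns cells to strips and flags the "halo" cells each strip would need to send to its neighbours. Strip count, interaction radius and the 1D simplification are illustrative assumptions, not the published algorithm.

```python
import random

def decompose(positions, n_strips, interaction_radius, domain_length=1.0):
    """Assign each cell to a strip and list the halo cells that each strip
    would have to communicate to its left and right neighbours."""
    width = domain_length / n_strips
    owned = [[] for _ in range(n_strips)]
    send_left = [[] for _ in range(n_strips)]
    send_right = [[] for _ in range(n_strips)]
    for idx, x in enumerate(positions):
        s = min(int(x / width), n_strips - 1)
        owned[s].append(idx)
        if s > 0 and x - s * width < interaction_radius:
            send_left[s].append(idx)
        if s < n_strips - 1 and (s + 1) * width - x < interaction_radius:
            send_right[s].append(idx)
    return owned, send_left, send_right

cells = [random.random() for _ in range(100)]
owned, left, right = decompose(cells, n_strips=4, interaction_radius=0.05)
print([len(o) for o in owned], [len(l) for l in left], [len(r) for r in right])
```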

  14. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    PubMed

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  15. Discrimination between induced, triggered, and natural earthquakes close to hydrocarbon reservoirs: A probabilistic approach based on the modeling of depletion-induced stress changes and seismological source parameters

    NASA Astrophysics Data System (ADS)

    Dahm, Torsten; Cesca, Simone; Hainzl, Sebastian; Braun, Thomas; Krüger, Frank

    2015-04-01

    Earthquakes occurring close to hydrocarbon fields under production are often under critical view of being induced or triggered. However, clear and testable rules to discriminate the different events have rarely been developed and tested. The unresolved scientific problem may lead to lengthy public disputes with unpredictable impact on the local acceptance of the exploitation and field operations. We propose a quantitative approach to discriminate induced, triggered, and natural earthquakes, which is based on testable input parameters. Maxima of occurrence probabilities are compared for the cases under question, and a single probability of being triggered or induced is reported. The uncertainties of earthquake location and other input parameters are considered in terms of the integration over probability density functions. The probability that events have been human triggered/induced is derived from the modeling of Coulomb stress changes and a rate and state-dependent seismicity model. In our case a 3-D boundary element method has been adapted for the nuclei of strain approach to estimate the stress changes outside the reservoir, which are related to pore pressure changes in the field formation. The predicted rate of natural earthquakes is either derived from the background seismicity or, in case of rare events, from an estimate of the tectonic stress rate. Instrumentally derived seismological information on the event location, source mechanism, and the size of the rupture plane is of advantage for the method. If the rupture plane has been estimated, the discrimination between induced or only triggered events is theoretically possible if probability functions are convolved with a rupture fault filter. We apply the approach to three recent main shock events: (1) the Mw 4.3 Ekofisk 2001, North Sea, earthquake close to the Ekofisk oil field; (2) the Mw 4.4 Rotenburg 2004, Northern Germany, earthquake in the vicinity of the Söhlingen gas field; and (3) the Mw 6.1 Emilia 2012, Northern Italy, earthquake in the vicinity of a hydrocarbon reservoir. The three test cases cover the complete range of possible causes: clearly "human induced," "not even human triggered," and a third case in between both extremes.
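
    Stripped to its core, the discrimination logic described above compares the modelled human-induced seismicity rate with the expected natural background rate at the event's location and time. The snippet below sketches only that final probability ratio with entirely illustrative rate values; the paper's Coulomb-stress modelling, rate-and-state seismicity model and rupture-plane weighting are not reproduced here.

```python
def prob_human_triggered(rate_induced, rate_natural):
    """Probability that an observed event was human triggered/induced, given a
    modelled induced rate and an expected natural rate (events per unit time
    in the same volume and magnitude range)."""
    return rate_induced / (rate_induced + rate_natural)

# Illustrative numbers only: a strongly perturbed reservoir vs. a quiet one.
print(prob_human_triggered(rate_induced=0.05, rate_natural=0.005))   # ~0.91
print(prob_human_triggered(rate_induced=0.001, rate_natural=0.01))   # ~0.09
```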

  16. Social response to technological disaster: the accident at Three Mile Island

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, B.B.

    1984-01-01

    Until recently the sociological study of man-environment relations under extreme circumstances has been restricted to natural hazards (e.g., floods, hurricanes, tornadoes). Technological disasters are becoming more commonplace (e.g., Times Beach, MO, Love Canal, TMI-2) and are growing as potential sources of impact upon human populations. However, theory regarding the social impact of such disasters has not been developed. While research on natural disasters is in part applicable to technological disasters, theories adapted from environmental sociology and psychology are also utilized to develop a theory of social response to extreme environmental events produced by technology. Hypotheses are developed in the form of an empirically testable model based on the literature reviewed.

  17. The Assurance Challenges of Advanced Packaging Technologies for Electronics

    NASA Technical Reports Server (NTRS)

    Sampson, Michael J.

    2010-01-01

    Advances in microelectronic parts performance are driving towards finer feature sizes, three-dimensional geometries and an ever-increasing number of transistor equivalents, which are resulting in increased die sizes and interconnection (I/O) counts. The resultant packaging necessary to provide assemble-ability, environmental protection, testability and interconnection to the circuit board for the active die creates major challenges, particularly for space applications. Traditionally, NASA has used hermetically packaged microcircuits whenever available, but the new demands make hermetic packaging less and less practical and, at the same time, more and more expensive. Some part types of great interest to NASA designers are currently only available in non-hermetic packaging. It is a far more complex quality and reliability assurance challenge to gain confidence in the long-term survivability and effectiveness of non-hermetic packages than for hermetic ones. Although they may provide more rugged environmental protection than the familiar Plastic Encapsulated Microcircuits (PEMs), the non-hermetic Ceramic Column Grid Array (CCGA) packages that are the focus of this presentation present a unique combination of challenges to assessing their suitability for spaceflight use. The presentation will discuss the bases for these challenges, some examples of the techniques proposed to mitigate them, and a proposed approach to a US MIL specification class for non-hermetic microcircuits suitable for space application, Class Y, to be incorporated into MIL-PRF-38535. It has recently emerged that some major packaging suppliers are offering hermetic area array packages that may offer alternatives to the non-hermetic CCGA styles, but these have their own inspectability and testability issues, which will also be briefly discussed in the presentation.

  18. How Do Structure and Charge Affect Metal-Complex Binding to DNA? An Upper-Division Integrated Laboratory Project Using Cyclic Voltammetry

    ERIC Educational Resources Information Center

    Kulczynska, Agnieszka; Johnson, Reed; Frost, Tony; Margerum, Lawrence D.

    2011-01-01

    An advanced undergraduate laboratory project is described that integrates inorganic, analytical, physical, and biochemical techniques to reveal differences in binding between cationic metal complexes and anionic DNA (herring testes). Students were guided to formulate testable hypotheses based on the title question and a list of different metal…

  19. Validating a conceptual model for an inter-professional approach to shared decision making: a mixed methods study

    PubMed Central

    Légaré, France; Stacey, Dawn; Gagnon, Susie; Dunn, Sandy; Pluye, Pierre; Frosch, Dominick; Kryworuchko, Jennifer; Elwyn, Glyn; Gagnon, Marie-Pierre; Graham, Ian D

    2011-01-01

    Rationale, aims and objectives: Following increased interest in having inter-professional (IP) health care teams engage patients in decision making, we developed a conceptual model for an IP approach to shared decision making (SDM) in primary care. We assessed the validity of the model with stakeholders in Canada. Methods: In 15 individual interviews and 7 group interviews with 79 stakeholders, we asked them to: (1) propose changes to the IP-SDM model; (2) identify barriers and facilitators to the model's implementation in clinical practice; and (3) assess the model using a theory appraisal questionnaire. We performed a thematic analysis of the transcripts and a descriptive analysis of the questionnaires. Results: Stakeholders suggested placing the patient at its centre; extending the concept of family to include significant others; clarifying outcomes; highlighting the concept of time; merging the micro, meso and macro levels in one figure; and recognizing the influence of the environment and emotions. The most common barriers identified were time constraints, insufficient resources and an imbalance of power among health professionals. The most common facilitators were education and training in inter-professionalism and SDM, motivation to achieve an IP approach to SDM, and mutual knowledge and understanding of disciplinary roles. Most stakeholders considered that the concepts and relationships between the concepts were clear and rated the model as logical, testable, having clear schematic representation, and being relevant to inter-professional collaboration, SDM and primary care. Conclusions: Stakeholders validated the new IP-SDM model for primary care settings and proposed few modifications. Future research should assess if the model helps implement SDM in IP clinical practice. PMID:20695950

  20. Validating a conceptual model for an inter-professional approach to shared decision making: a mixed methods study.

    PubMed

    Légaré, France; Stacey, Dawn; Gagnon, Susie; Dunn, Sandy; Pluye, Pierre; Frosch, Dominick; Kryworuchko, Jennifer; Elwyn, Glyn; Gagnon, Marie-Pierre; Graham, Ian D

    2011-08-01

    Following increased interest in having inter-professional (IP) health care teams engage patients in decision making, we developed a conceptual model for an IP approach to shared decision making (SDM) in primary care. We assessed the validity of the model with stakeholders in Canada. In 15 individual interviews and 7 group interviews with 79 stakeholders, we asked them to: (1) propose changes to the IP-SDM model; (2) identify barriers and facilitators to the model's implementation in clinical practice; and (3) assess the model using a theory appraisal questionnaire. We performed a thematic analysis of the transcripts and a descriptive analysis of the questionnaires. Stakeholders suggested placing the patient at its centre; extending the concept of family to include significant others; clarifying outcomes; highlighting the concept of time; merging the micro, meso and macro levels in one figure; and recognizing the influence of the environment and emotions. The most common barriers identified were time constraints, insufficient resources and an imbalance of power among health professionals. The most common facilitators were education and training in inter-professionalism and SDM, motivation to achieve an IP approach to SDM, and mutual knowledge and understanding of disciplinary roles. Most stakeholders considered that the concepts and relationships between the concepts were clear and rated the model as logical, testable, having clear schematic representation, and being relevant to inter-professional collaboration, SDM and primary care. Stakeholders validated the new IP-SDM model for primary care settings and proposed few modifications. Future research should assess if the model helps implement SDM in IP clinical practice. © 2010 Blackwell Publishing Ltd.

  1. There is more than one way to model an elephant. Experiment-driven modeling of the actin cytoskeleton.

    PubMed

    Ditlev, Jonathon A; Mayer, Bruce J; Loew, Leslie M

    2013-02-05

    Mathematical modeling has established its value for investigating the interplay of biochemical and mechanical mechanisms underlying actin-based motility. Because of the complex nature of actin dynamics and its regulation, many of these models are phenomenological or conceptual, providing a general understanding of the physics at play. But the wealth of carefully measured kinetic data on the interactions of many of the players in actin biochemistry cries out for the creation of more detailed and accurate models that could permit investigators to dissect interdependent roles of individual molecular components. Moreover, no human mind can assimilate all of the mechanisms underlying complex protein networks; so an additional benefit of a detailed kinetic model is that the numerous binding proteins, signaling mechanisms, and biochemical reactions can be computationally organized in a fully explicit, accessible, visualizable, and reusable structure. In this review, we will focus on how comprehensive and adaptable modeling allows investigators to explain experimental observations and develop testable hypotheses on the intracellular dynamics of the actin cytoskeleton. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. There is More Than One Way to Model an Elephant. Experiment-Driven Modeling of the Actin Cytoskeleton

    PubMed Central

    Ditlev, Jonathon A.; Mayer, Bruce J.; Loew, Leslie M.

    2013-01-01

    Mathematical modeling has established its value for investigating the interplay of biochemical and mechanical mechanisms underlying actin-based motility. Because of the complex nature of actin dynamics and its regulation, many of these models are phenomenological or conceptual, providing a general understanding of the physics at play. But the wealth of carefully measured kinetic data on the interactions of many of the players in actin biochemistry cries out for the creation of more detailed and accurate models that could permit investigators to dissect interdependent roles of individual molecular components. Moreover, no human mind can assimilate all of the mechanisms underlying complex protein networks; so an additional benefit of a detailed kinetic model is that the numerous binding proteins, signaling mechanisms, and biochemical reactions can be computationally organized in a fully explicit, accessible, visualizable, and reusable structure. In this review, we will focus on how comprehensive and adaptable modeling allows investigators to explain experimental observations and develop testable hypotheses on the intracellular dynamics of the actin cytoskeleton. PMID:23442903

  3. Dayside auroral arcs and convection

    NASA Technical Reports Server (NTRS)

    Reiff, P. H.; Burch, J. L.; Heelis, R. A.

    1978-01-01

    Recent Defense Meteorological Satellite Program and International Satellite for Ionospheric Studies dayside auroral observations show two striking features: a lack of visible auroral arcs near noon and occasional fan shaped arcs radiating away from noon on both the morning and afternoon sides of the auroral oval. A simple model which includes these two features is developed by reference to the dayside convection pattern of Heelis et al. (1976). The model may be testable in the near future with simultaneous convection, current and auroral light data.

  4. Thrills, chills, frissons, and skin orgasms: toward an integrative model of transcendent psychophysiological experiences in music

    PubMed Central

    Harrison, Luke; Loui, Psyche

    2014-01-01

    Music has a unique power to elicit moments of intense emotional and psychophysiological response. These moments – termed “chills,” “thrills”, “frissons,” etc. – are subjects of introspection and philosophical debate, as well as scientific study in music perception and cognition. The present article integrates the existing multidisciplinary literature in an attempt to define a comprehensive, testable, and ecologically valid model of transcendent psychophysiological moments in music. PMID:25101043

  5. White-nose syndrome initiates a cascade of physiologic disturbances in the hibernating bat host

    USGS Publications Warehouse

    Verant, Michelle L.; Meteyer, Carol U.; Speakman, John R.; Cryan, Paul M.; Lorch, Jeffrey M.; Blehert, David S.

    2014-01-01

    Integrating these novel findings on the physiological changes that occur in early-stage WNS with those previously documented in late-stage infections, we propose a multi-stage disease progression model that mechanistically describes the pathologic and physiologic effects underlying mortality of WNS in hibernating bats. This model identifies testable hypotheses for better understanding this disease, knowledge that will be critical for defining effective disease mitigation strategies aimed at reducing morbidity and mortality that results from WNS.

  6. Beyond Critical Exponents in Neuronal Avalanches

    NASA Astrophysics Data System (ADS)

    Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin

    2011-03-01

    Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.

  7. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
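    As a rough illustration of the idea (a toy Python sketch, not the authors' tooling; the program and fault location are hypothetical), one can inject a perturbation at a single location and estimate, over random inputs, how often it propagates to the output. Locations where this fraction is near zero are the "insensitive" ones that random black box testing is unlikely to expose.

    ```python
    import random

    def program(x, fault_at_line3=False):
        """Toy program under analysis; the flag simulates a fault injected at one location."""
        y = x * 2
        if fault_at_line3:
            y = x * 2 + 1                   # injected perturbation ("mutant") at this location
        return 0 if y > 100 else y % 7      # downstream logic may mask the perturbation

    def sensitivity(n_trials=10_000, seed=0):
        """Fraction of random inputs for which the injected fault changes observable output."""
        rng = random.Random(seed)
        changed = 0
        for _ in range(n_trials):
            x = rng.randint(0, 200)
            if program(x) != program(x, fault_at_line3=True):
                changed += 1
        return changed / n_trials

    print(f"estimated sensitivity of the faulty location: {sensitivity():.3f}")
    ```

    In this toy case, inputs above 50 mask the fault entirely, so the estimated sensitivity is well below 1, which is exactly the situation in which random black box testing gives a misleading sense of confidence.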

  8. Developing Cognitive Models for Social Simulation from Survey Data

    NASA Astrophysics Data System (ADS)

    Alt, Jonathan K.; Lieberman, Stephen

    The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to populate social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.

  9. A prospective earthquake forecast experiment for Japan

    NASA Astrophysics Data System (ADS)

    Yokoi, Sayoko; Nanjo, Kazuyoshi; Tsuruoka, Hiroshi; Hirata, Naoshi

    2013-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013) is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. On 1 November 2009, we started the 1st earthquake forecast testing experiment for the Japan area. We use the unified JMA catalogue compiled by the Japan Meteorological Agency as the authorized catalogue. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called All Japan, Mainland, and Kanto. A total of 91 models were submitted to CSEP-Japan and are evaluated with the CSEP official suite of tests of forecast performance. In this presentation, we show the results of the 3-month testing class for 5 rounds. The HIST-ETAS7pa, MARFS, and RI10K models showed the best total log-likelihood scores in the All Japan, Mainland, and Kanto regions, respectively. We also found that time dependence of model parameters was not an effective factor for passing the CSEP consistency tests in the 3-month testing class in any region. In particular, the spatial distribution in the All Japan region was too difficult to forecast well enough to pass the consistency test, owing to multiple events occurring in a single bin. The number of target events per round in the Mainland region tended to be smaller than the models' expectations in all rounds, which led to rejections in the consistency test because of overestimation. In the Kanto region, each model passed more than 80% of the consistency tests, reflecting well-balanced forecasting of event numbers and spatial distribution. Through the multiple rounds of the experiment, we are beginning to understand the stability of the models, the robustness of model selection, and earthquake predictability in each region beyond stochastic fluctuations of seismicity. We plan to use the results in the design of a three-dimensional earthquake forecasting model for the Kanto region, supported by the Special Project for Reducing Vulnerability for Urban Mega Earthquake Disasters of the Ministry of Education, Culture, Sports, Science and Technology of Japan.
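    For readers unfamiliar with how such forecasts are ranked, the sketch below assumes the standard Poisson joint log-likelihood used in CSEP-style scoring; it is not the official test suite, and the rates and counts are invented placeholders. It compares two gridded rate forecasts against observed event counts over the same bins:

    ```python
    import numpy as np
    from scipy.stats import poisson

    def joint_log_likelihood(forecast_rates, observed_counts):
        """Sum of log Poisson likelihoods over space-magnitude bins,
        the quantity used to rank forecast models in CSEP-style experiments."""
        forecast_rates = np.asarray(forecast_rates, dtype=float)
        observed_counts = np.asarray(observed_counts, dtype=int)
        return poisson.logpmf(observed_counts, forecast_rates).sum()

    # Two hypothetical 3-month forecasts over the same 5 bins (illustrative values only)
    obs = [0, 2, 1, 0, 3]
    model_a = [0.1, 1.5, 0.8, 0.2, 2.5]
    model_b = [0.5, 0.5, 0.5, 0.5, 0.5]
    for name, rates in [("model_a", model_a), ("model_b", model_b)]:
        print(name, round(joint_log_likelihood(rates, obs), 2))
    ```

    The model with the higher joint log-likelihood is preferred; consistency tests of the kind mentioned in the abstract additionally check whether the observed total count and spatial distribution are plausible under each forecast.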

  10. The biology and polymer physics underlying large-scale chromosome organization.

    PubMed

    Sazer, Shelley; Schiessel, Helmut

    2018-02-01

    Chromosome large-scale organization is a beautiful example of the interplay between physics and biology. DNA molecules are polymers and thus belong to the class of molecules for which physicists have developed models and formulated testable hypotheses to understand their arrangement and dynamic properties in solution, based on the principles of polymer physics. Biologists documented and discovered the biochemical basis for the structure, function and dynamic spatial organization of chromosomes in cells. The underlying principles of chromosome organization have recently been revealed in unprecedented detail using high-resolution chromosome capture technology that can simultaneously detect chromosome contact sites throughout the genome. These independent lines of investigation have now converged on a model in which DNA loops, generated by the loop extrusion mechanism, are the basic organizational and functional units of the chromosome. © 2017 The Authors. Traffic published by John Wiley & Sons Ltd.

  11. Differentiation without Distancing. Explaining Bi-Polarization of Opinions without Negative Influence

    PubMed Central

    Mäs, Michael; Flache, Andreas

    2013-01-01

    Explanations of opinion bi-polarization hinge on the assumption of negative influence, individuals’ striving to amplify differences to disliked others. However, empirical evidence for negative influence is inconclusive, which motivated us to search for an alternative explanation. Here, we demonstrate that bi-polarization can be explained without negative influence, drawing on theories that emphasize the communication of arguments as central mechanism of influence. Due to homophily, actors interact mainly with others whose arguments will intensify existing tendencies for or against the issue at stake. We develop an agent-based model of this theory and compare its implications to those of existing social-influence models, deriving testable hypotheses about the conditions of bi-polarization. Hypotheses were tested with a group-discussion experiment (N = 96). Results demonstrate that argument exchange can entail bi-polarization even when there is no negative influence. PMID:24312164

  12. Functional neurological symptom disorder (conversion disorder): A role for microglial-based plasticity mechanisms?

    PubMed

    Stephenson, Chris P; Baguley, Ian J

    2018-02-01

    Functional Neurological Symptom Disorder (FND) is a relatively common neurological condition, accounting for approximately 3-6% of neurologist referrals. FND is considered a transient disorder of neuronal function, sometimes linked to physical trauma and psychological stress. Despite this, chronic disability is common, for example, around 40% of adults with motor FND have permanent disability. Building on current theoretical models, this paper proposes that microglial dysfunction could perpetuate functional changes within acute motor FND, thus providing a pathophysiological mechanism underlying the chronic stage of the motor FND phenotypes seen clinically. Core to our argument is microglia's dual role in modulating neuroimmunity and their control of synaptic plasticity, which places them at a pathophysiological nexus wherein coincident physical trauma and psychological stress could cause long-term change in neuronal networks without producing macroscopic structural abnormality. This model proposes a range of hypotheses that are testable with current technologies. Copyright © 2017. Published by Elsevier Ltd.

  13. On the Importance of Comparative Research for the Understanding of Human Behavior and Development: A Reply to Gottlieb & Lickliter (2004)

    ERIC Educational Resources Information Center

    Maestripieri, Dario

    2005-01-01

    Comparative behavioral research is important for a number of reasons and can contribute to the understanding of human behavior and development in many different ways. Research with animal models of human behavior and development can be a source not only of general principles and testable hypotheses but also of empirical information that may be…

  14. TESTING MODELS FOR THE SHALLOW DECAY PHASE OF GAMMA-RAY BURST AFTERGLOWS WITH POLARIZATION OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lan, Mi-Xiang; Dai, Zi-Gao; Wu, Xue-Feng, E-mail: dzg@nju.edu.cn

    2016-08-01

    The X-ray afterglows of almost one-half of gamma-ray bursts have been discovered by the Swift satellite to have a shallow decay phase of which the origin remains mysterious. Two main models have been proposed to explain this phase: relativistic wind bubbles (RWBs) and structured ejecta, which could originate from millisecond magnetars and rapidly rotating black holes, respectively. Based on these models, we investigate polarization evolution in the shallow decay phase of X-ray and optical afterglows. We find that in the RWB model, a significant bump of the polarization degree evolution curve appears during the shallow decay phase of both optical and X-ray afterglows, while the polarization position angle abruptly changes its direction by 90°. In the structured ejecta model, however, the polarization degree does not evolve significantly during the shallow decay phase of afterglows whether the magnetic field configuration in the ejecta is random or globally large-scale. Therefore, we conclude that these two models for the shallow decay phase and relevant central engines would be testable with future polarization observations.

  15. Testability Design Rating System: Testability Handbook. Volume 1

    DTIC Science & Technology

    1992-02-01

    [Only extraction fragments of this report are available: table-of-contents entries (4.7.5 Summary of False BIT Alarms (FBA); 4.7.6 Smart BIT Technique), acronym-list excerpts (... Circuit Board; PGA, Pin Grid Array; PLA, Programmable Logic Array; PLD, Programmable Logic Device; PN, Pseudo-Random Number; PREDICT, Probabilistic Estimation of ...), and a definition from Section 4.7.6: "Smart" BIT (reference: RADC-TR-85-198) is a term given to BIT circuitry in a system LRU which includes dedicated processor/memory.]

  16. Smart substrates: Making multi-chip modules smarter

    NASA Astrophysics Data System (ADS)

    Wunsch, T. F.; Treece, R. K.

    1995-05-01

    A novel multi-chip module (MCM) design and manufacturing methodology, which utilizes active CMOS circuits in what is normally a passive substrate, realizes the 'smart substrate' for use in highly testable, high-reliability MCMs. The active devices are used to test the bare substrate, diagnose assembly errors or integrated circuit (IC) failures that require rework, and improve the testability of the final MCM assembly. A static random access memory (SRAM) MCM has been designed and fabricated in Sandia's Microelectronics Development Laboratory in order to demonstrate the technical feasibility of this concept and to examine design and manufacturing issues which will ultimately determine the economic viability of this approach. The smart substrate memory MCM represents a first in MCM packaging. At the time the first modules were fabricated, no other company or MCM vendor had incorporated active devices in the substrate to improve manufacturability and testability, and thereby improve MCM reliability and reduce cost.

  17. Goal-directed decision making as probabilistic inference: A computational framework and potential neural correlates

    PubMed Central

    Solway, A.; Botvinick, M.

    2013-01-01

    Recent work has given rise to the view that reward-based decision making is governed by two key controllers: a habit system, which stores stimulus-response associations shaped by past reward, and a goal-oriented system that selects actions based on their anticipated outcomes. The current literature provides a rich body of computational theory addressing habit formation, centering on temporal-difference learning mechanisms. Less progress has been made toward formalizing the processes involved in goal-directed decision making. We draw on recent work in cognitive neuroscience, animal conditioning, cognitive and developmental psychology and machine learning, to outline a new theory of goal-directed decision making. Our basic proposal is that the brain, within an identifiable network of cortical and subcortical structures, implements a probabilistic generative model of reward, and that goal-directed decision making is effected through Bayesian inversion of this model. We present a set of simulations implementing the account, which address benchmark behavioral and neuroscientific findings, and which give rise to a set of testable predictions. We also discuss the relationship between the proposed framework and other models of decision making, including recent models of perceptual choice, to which our theory bears a direct connection. PMID:22229491

  18. Applications of species distribution modeling to paleobiology

    NASA Astrophysics Data System (ADS)

    Svenning, Jens-Christian; Fløjgaard, Camilla; Marske, Katharine A.; Nógues-Bravo, David; Normand, Signe

    2011-10-01

    Species distribution modeling (SDM: statistical and/or mechanistic approaches to the assessment of range determinants and prediction of species occurrence) offers new possibilities for estimating and studying past organism distributions. SDM complements fossil and genetic evidence by providing (i) quantitative and potentially high-resolution predictions of past organism distributions, (ii) statistically formulated, testable ecological hypotheses regarding past distributions and communities, and (iii) statistical assessment of range determinants. In this article, we provide an overview of applications of SDM to paleobiology, outlining the methodology, reviewing SDM-based studies in paleobiology or at the interface of paleo- and neobiology, discussing assumptions and uncertainties as well as how to handle them, and providing a synthesis and outlook. Key methodological issues for SDM applications to paleobiology include predictor variables (types and properties; special emphasis is given to paleoclimate), model validation (particularly important given the emphasis on cross-temporal predictions in paleobiological applications), and the integration of SDM and genetics approaches. Over the last few years the number of studies using SDM to address paleobiology-related questions has increased considerably. While some of these studies only use SDM (23%), most combine it with genetically inferred patterns (49%), paleoecological records (22%), or both (6%). A large number of SDM-based studies have addressed the role of Pleistocene glacial refugia in biogeography and evolution, especially in Europe, but also in many other regions. SDM-based approaches are also beginning to contribute to a suite of other research questions, such as historical constraints on current distributions and diversity patterns, the end-Pleistocene megafaunal extinctions, past community assembly, human paleobiogeography, Holocene paleoecology, and even deep-time biogeography (notably, providing insights into biogeographic dynamics >400 million years ago). We discuss important assumptions and uncertainties that affect the SDM approach to paleobiology - the equilibrium postulate, niche stability, changing atmospheric CO₂ concentrations - as well as ways to address these (ensemble, functional SDM, and non-SDM ecoinformatics approaches). We conclude that the SDM approach offers important opportunities for advances in paleobiology by providing a quantitative ecological perspective, and thereby also offers the potential for an enhanced contribution of paleobiology to ecology and conservation biology, e.g., for estimating climate change impacts and for informing ecological restoration.
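    A minimal Python sketch of the hindcasting workflow the review describes: fit a correlative SDM on present-day occurrences and climate, then project it onto a paleoclimate layer. The data, predictors, and climate offsets below are synthetic placeholders, not values from any of the reviewed studies.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic present-day training data: two climate predictors and occurrence (1/0)
    temp = rng.normal(10, 4, 500)                       # mean annual temperature, degC
    prec = rng.normal(800, 200, 500)                    # annual precipitation, mm
    occur = ((temp > 8) & (prec > 700)).astype(int)     # invented niche, illustration only
    occur = np.where(rng.random(500) < 0.1, 1 - occur, occur)  # add observation noise
    X = np.column_stack([temp, prec])

    sdm = LogisticRegression(max_iter=1000).fit(X, occur)      # simple correlative SDM

    # Project onto an LGM-like paleoclimate layer (placeholder: 6 degC cooler, 20% drier)
    X_lgm = np.column_stack([temp - 6.0, prec * 0.8])
    suitability_lgm = sdm.predict_proba(X_lgm)[:, 1]
    print("mean predicted suitability at LGM grid cells:", suitability_lgm.mean().round(3))
    ```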

  19. Modifying dementia risk and trajectories of cognitive decline in aging: the Cache County Memory Study.

    PubMed

    Welsh-Bohmer, Kathleen A; Breitner, John C S; Hayden, Kathleen M; Lyketsos, Constantine; Zandi, Peter P; Tschanz, Joann T; Norton, Maria C; Munger, Ron

    2006-07-01

    The Cache County Study of Memory, Health, and Aging, more commonly referred to as the "Cache County Memory Study (CCMS)", is a longitudinal investigation of aging and Alzheimer's disease (AD) based in an exceptionally long-lived population residing in northern Utah. The study, begun in 1994, has followed an initial cohort of 5,092 older individuals (many over age 84) and has examined the development of cognitive impairment and dementia in relation to genetic and environmental antecedents. This article summarizes the major contributions of the CCMS towards the understanding of mild cognitive disorders and AD across the lifespan, underscoring the role of common health exposures in modifying dementia risk and trajectories of cognitive change. The study, now in its fourth wave of ascertainment, illustrates the role of population-based approaches in informing testable models of cognitive aging and Alzheimer's disease.

  20. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2011-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models (as of August 2013) have been submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, the performance of the models has been strongly affected. In addition, because of problems with the authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. We will also discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  1. Japanese earthquake predictability experiment with multiple runs before and after the 2011 Tohoku-oki earthquake

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Tsuruoka, H.; Yokoi, S.

    2013-12-01

    The current Japanese national earthquake prediction program emphasizes the importance of modeling as well as monitoring for a sound scientific development of earthquake prediction research. One major focus of the current program is to move toward creating testable earthquake forecast models. For this purpose, in 2009 we joined the Collaboratory for the Study of Earthquake Predictability (CSEP) and installed, through an international collaboration, the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan. We started the Japanese earthquake predictability experiment on November 1, 2009. The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year and 3 years) and 3 testing regions called 'All Japan,' 'Mainland,' and 'Kanto.' A total of 160 models (as of August 2013) have been submitted and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. We will present results of prospective forecasting and testing for periods before and after the 2011 Tohoku-oki earthquake. Because seismic activity has changed dramatically since the 2011 event, the performance of the models has been strongly affected. In addition, because of problems with the authorized catalogue related to the completeness magnitude, most models did not pass the CSEP consistency tests. We will also discuss the retrospective earthquake forecast experiments for aftershocks of the 2011 Tohoku-oki earthquake. Our aim is to describe what has turned out to be the first occasion for setting up a research environment for rigorous earthquake forecasting in Japan.

  2. Broadening conceptions of learning in medical education: the message from teamworking.

    PubMed

    Bleakley, Alan

    2006-02-01

    There is a mismatch between the broad range of learning theories offered in the wider education literature and a relatively narrow range of theories privileged in the medical education literature. The latter are usually described under the heading of 'adult learning theory'. This paper critically addresses the limitations of the current dominant learning theories informing medical education. An argument is made that such theories, which address how an individual learns, fail to explain how learning occurs in dynamic, complex and unstable systems such as fluid clinical teams. Models of learning that take into account distributed knowing, learning through time as well as space, and the complexity of a learning environment including relationships between persons and artefacts, are more powerful in explaining and predicting how learning occurs in clinical teams. Learning theories may be privileged for ideological reasons, such as medicine's concern with autonomy. Where an increasing amount of medical education occurs in workplace contexts, sociocultural learning theories offer a best-fit exploration and explanation of such learning. We need to continue to develop testable models of learning that inform safe work practice. One type of learning theory will not inform all practice contexts and we need to think about a range of fit-for-purpose theories that are testable in practice. Exciting current developments include dynamicist models of learning drawing on complexity theory.

  3. Examining the nature of retrocausal effects in biology and psychology

    NASA Astrophysics Data System (ADS)

    Mossbridge, Julia

    2017-05-01

    Multiple laboratories have reported physiological and psychological changes associated with future events that are designed to be unpredictable by normal sensory means. Such phenomena seem to be examples of retrocausality at the macroscopic level. Here I will discuss the characteristics of seemingly retrocausal effects in biology and psychology, specifically examining a biological and a psychological form of precognition, predictive anticipatory activity (PAA) and implicit precognition. The aim of this examination is to offer an analysis of the constraints posed by the characteristics of macroscopic retrocausal effects. Such constraints are critical to assessing any physical theory that purports to explain these effects. Following a brief introduction to recent research on PAA and implicit precognition, I will describe what I believe we have learned so far about the nature of these effects, and conclude with a testable, yet embryonic, model of macroscopic retrocausal phenomena.

  4. E-learning platform for automated testing of electronic circuits using signature analysis method

    NASA Astrophysics Data System (ADS)

    Gherghina, Cǎtǎlina; Bacivarov, Angelica; Bacivarov, Ioan C.; Petricǎ, Gabriel

    2016-12-01

    The dependability of electronic circuits can be ensured only through testing of circuit modules, which is done by generating test vectors and applying them to the circuit. Testability should be viewed as a concerted effort to ensure maximum efficiency throughout the product life cycle, from the conception and design stages, through production, to repairs during operation. This paper presents the platform developed by the authors for testability training in electronics in general, and in the use of the signature analysis method in particular. The platform highlights the two approaches in the field, namely analog and digital circuit signatures. As part of this e-learning platform, a database of signatures of different electronic components has been developed to showcase different fault-detection techniques and, building on these, self-repair techniques for systems containing such components. An approach for realizing self-testing circuits based on the MATLAB environment and the signature analysis method is proposed. The paper also analyses the benefits of the signature analysis method and simulates signature analyzer performance based on the use of pseudo-random sequences.
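    A minimal Python sketch of the principle behind signature analysis (an LFSR-based response compactor); the tap positions and bit-width here are arbitrary illustrative choices, not the platform's actual configuration:

    ```python
    def lfsr_signature(response_bits, taps=(16, 14, 13, 11), width=16):
        """Compact a circuit's output bit stream into a 16-bit signature using a
        serial-input LFSR; any primitive feedback polynomial works in practice."""
        reg = 0
        for bit in response_bits:
            feedback = bit
            for t in taps:
                feedback ^= (reg >> (t - 1)) & 1
            reg = ((reg << 1) | feedback) & ((1 << width) - 1)
        return reg

    golden = lfsr_signature([1, 0, 1, 1, 0, 0, 1, 0] * 32)   # fault-free response
    faulty = lfsr_signature([1, 0, 1, 1, 0, 1, 1, 0] * 32)   # one flipped bit per period
    print(hex(golden), hex(faulty), "fault detected:", golden != faulty)
    ```

    A faulty response stream almost always compacts to a signature different from the fault-free ("golden") one; for a well-chosen 16-bit register the aliasing probability is on the order of 2^-16, which is what makes the technique attractive for compact built-in self-test.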

  5. Efficient design of CMOS TSC checkers

    NASA Technical Reports Server (NTRS)

    Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling

    1990-01-01

    This paper considers the design of an efficient, robustly testable CMOS Totally Self-Checking (TSC) checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four-level design using CMOS primitive gates (NAND, NOR, inverters) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made of the proposed implementation and Kundu's implementation, and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.

  6. Using Backward Design in Education Research: A Research Methods Essay †

    PubMed Central

    Jensen, Jamie L.; Bailey, Elizabeth G.; Kummer, Tyler A.; Weber, K. Scott

    2017-01-01

    Education research within the STEM disciplines applies a scholarly approach to teaching and learning, with the intent of better understanding how people learn and of improving pedagogy at the undergraduate level. Most of the professionals practicing in this field have ‘crossed over’ from other disciplinary fields and thus have faced challenges in becoming experts in a new discipline. In this article, we offer a novel framework for approaching education research design called Backward Design in Education Research. It is patterned on backward curricular design and provides a three-step, systematic approach to designing education projects: 1) Define a research question that leads to a testable causal hypothesis based on a theoretical rationale; 2) Choose or design the assessment instruments to test the research hypothesis; and 3) Develop an experimental protocol that will be effective in testing the research hypothesis. This approach provides a systematic method to develop and carry out evidence-based research design. PMID:29854045

  7. Russians in treatment: the evidence base supporting cultural adaptations.

    PubMed

    Jurcik, Tomas; Chentsova-Dutton, Yulia E; Solopieieva-Jurcikova, Ielyzaveta; Ryder, Andrew G

    2013-07-01

    Despite large waves of westward migration, little is known about how to adapt services to assist Russian-speaking immigrants. In an attempt to bridge the scientist-practitioner gap, the current review synthesizes diverse literatures regarding what is known about immigrants from the Former Soviet Union. Relevant empirical studies and reviews from cross-cultural and cultural psychology, sociology, psychiatric epidemiology, mental health, management, linguistics, history, and anthropology literature were synthesized into three broad topics: culture of origin issues, common psychosocial challenges, and clinical recommendations. Russian speakers probably differ in their form of collectivism, gender relations, emotion norms, social support, and parenting styles from what many clinicians are familiar with and exhibit an apparent paradoxical mix of modern and traditional values. While some immigrant groups from the Former Soviet Union are adjusting well, others have shown elevated levels of depression, somatization, and alcoholism, which can inform cultural adaptations. Testable assessment and therapy adaptations for Russians were outlined based on integrating clinical and cultural psychology perspectives. © 2013 Wiley Periodicals, Inc.

  8. Phase 1 Space Fission Propulsion Energy Source Design

    NASA Technical Reports Server (NTRS)

    Houts, Mike; VanDyke, Melissa; Godfroy, Tom; Pedersen, Kevin; Martin, James; Dickens, Ricky; Salvail, Pat; Hrbud, Ivana; Carter, Robert; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Fission technology can enable rapid, affordable access to any point in the solar system. If fission propulsion systems are to be developed to their full potential, however, near-term customers must be identified and initial fission systems successfully developed, launched, and operated. Studies conducted in fiscal year 2001 (IISTP, 2001) show that fission electric propulsion (FEP) systems with a specific mass at or below 50 kg/kWjet could enhance or enable numerous robotic outer solar system missions of interest. At the required specific mass, it is possible to develop safe, affordable systems that meet mission requirements. To help select the system design to pursue, eight evaluation criteria were identified: system integration, safety, reliability, testability, specific mass, cost, schedule, and programmatic risk. A top-level comparison of four potential concepts was performed: a Testable, Passive, Redundant Reactor (TPRR), a Testable Multi-Cell In-Core Thermionic Reactor (TMCT), a Direct Gas Cooled Reactor (DGCR), and a Pumped Liquid Metal Reactor (PLMR). Development of any of the four systems appears feasible. However, for power levels up to at least 500 kWt (enabling electric power levels of 125-175 kWe, given 25-35% power conversion efficiency) the TPRR has advantages related to several criteria and is competitive with respect to all. Hardware-based research and development has further increased confidence in the TPRR approach. Successful development and utilization of a "Phase 1" fission electric propulsion system will enable advanced Phase 2 and Phase 3 systems capable of providing rapid, affordable access to any point in the solar system.

  9. Conformal standard model, leptogenesis, and dark matter

    NASA Astrophysics Data System (ADS)

    Lewandowski, Adrian; Meissner, Krzysztof A.; Nicolai, Hermann

    2018-02-01

    The conformal standard model is a minimal extension of the Standard Model (SM) of particle physics based on the assumed absence of large intermediate scales between the TeV scale and the Planck scale, which incorporates only right-chiral neutrinos and a new complex scalar in addition to the usual SM degrees of freedom, but no other features such as supersymmetric partners. In this paper, we present a comprehensive quantitative analysis of this model, and show that all outstanding issues of particle physics proper can in principle be solved "in one go" within this framework. This includes in particular the stabilization of the electroweak scale, "minimal" leptogenesis and the explanation of dark matter, with a small mass and very weakly interacting Majoron as the dark matter candidate (for which we propose to use the name "minoron"). The main testable prediction of the model is a new and almost sterile scalar boson that would manifest itself as a narrow resonance in the TeV region. We give a representative range of parameter values consistent with our assumptions and with observation.

  10. Are some BL Lac objects artefacts of gravitational lensing?

    NASA Technical Reports Server (NTRS)

    Ostriker, J. P.; Vietri, M.

    1985-01-01

    It is proposed here that a significant fraction of BL Lac objects are optically violently variable quasars whose continuum emission has been greatly amplified, relative to the line emission, by pointlike gravitational lenses in intervening galaxies. Several anomalous physical and statistical properties of BL Lacs can be understood on the basis of this model, which is immediately testable on the basis of absorption line studies and by direct imaging.

  11. Phenoscape: Identifying Candidate Genes for Evolutionary Phenotypes

    PubMed Central

    Edmunds, Richard C.; Su, Baofeng; Balhoff, James P.; Eames, B. Frank; Dahdul, Wasila M.; Lapp, Hilmar; Lundberg, John G.; Vision, Todd J.; Dunham, Rex A.; Mabee, Paula M.; Westerfield, Monte

    2016-01-01

    Phenotypes resulting from mutations in genetic model organisms can help reveal candidate genes for evolutionarily important phenotypic changes in related taxa. Although testing candidate gene hypotheses experimentally in nonmodel organisms is typically difficult, ontology-driven information systems can help generate testable hypotheses about developmental processes in experimentally tractable organisms. Here, we tested candidate gene hypotheses suggested by expert use of the Phenoscape Knowledgebase, specifically looking for genes that are candidates responsible for evolutionarily interesting phenotypes in the ostariophysan fishes that bear resemblance to mutant phenotypes in zebrafish. For this, we searched ZFIN for genetic perturbations that result in either loss of basihyal element or loss of scales phenotypes, because these are the ancestral phenotypes observed in catfishes (Siluriformes). We tested the identified candidate genes by examining their endogenous expression patterns in the channel catfish, Ictalurus punctatus. The experimental results were consistent with the hypotheses that these features evolved through disruption in developmental pathways at, or upstream of, brpf1 and eda/edar for the ancestral losses of basihyal element and scales, respectively. These results demonstrate that ontological annotations of the phenotypic effects of genetic alterations in model organisms, when aggregated within a knowledgebase, can be used effectively to generate testable, and useful, hypotheses about evolutionary changes in morphology. PMID:26500251

  12. Thalamocortical mechanisms for integrating musical tone and rhythm

    PubMed Central

    Musacchia, Gabriella; Large, Edward

    2014-01-01

    Studies over several decades have identified many of the neuronal substrates of music perception by pursuing pitch and rhythm perception separately. Here, we address the question of how these mechanisms interact, starting with the observation that the peripheral pathways of the so-called “Core” and “Matrix” thalamocortical system provide the anatomical bases for tone and rhythm channels. We then examine the hypothesis that these specialized inputs integrate tonal content within rhythm context in auditory cortex using classical types of “driving” and “modulatory” mechanisms. This hypothesis provides a framework for deriving testable predictions about the early stages of music processing. Furthermore, because thalamocortical circuits are shared by speech and music processing, such a model provides concrete implications for how music experience contributes to the development of robust speech encoding mechanisms. PMID:24103509

  13. Color Vision Deficiency in Preschool Children

    PubMed Central

    Xie, John Z.; Tarczy-Hornoch, Kristina; Lin, Jesse; Cotter, Susan A.; Torres, Mina; Varma, Rohit

    2016-01-01

    Purpose: To determine the sex- and ethnicity-specific prevalence of color vision deficiency (CVD) in black, Asian, Hispanic, and non-Hispanic white preschool children. Design: Population-based, cross-sectional study. Participants: The Multi-Ethnic Pediatric Eye Disease Study is a population-based evaluation of the prevalence of vision disorders in children in Southern California. A total of 5960 subjects 30 to 72 months of age were recruited for the study, of whom 4177 were able to complete color vision testing (1265 black, 812 Asian, 1280 Hispanic, and 820 non-Hispanic white). Methods: Color vision testing was performed using Color Vision Testing Made Easy color plates (Home Vision Care, Gulf Breeze, FL), and diagnostic confirmatory testing was performed using the Waggoner HRR Diagnostic Test color plates (Home Vision Care). Main Outcome Measures: Testability of color vision in preschool children between 30 and 72 months of age and prevalence of CVD stratified by age, sex, and ethnicity. Results: Testability was 17% in children younger than 37 months of age, increasing to 57% in children 37 to 48 months of age, 89% in children 49 to 60 months of age, and 98% in children 61 to 72 months of age. The prevalence of CVD among boys was 1.4% for black, 3.1% for Asian, 2.6% for Hispanic, and 5.6% for non-Hispanic white children; the prevalence in girls was 0.0% to 0.5% for all ethnicities. The ethnic difference in CVD was statistically significant between black and non-Hispanic white children (P = 0.0003) and between Hispanic and non-Hispanic white children (P = 0.02). In boys, most CVD cases were either deutan (51%) or protan (34%); 32% were classified as mild, 15% as moderate, and 41% as severe. Conclusions: Testability for CVD in preschool children is high by 4 years of age. The prevalence of CVD in preschool boys varies by ethnicity, with the highest prevalence in non-Hispanic white and lowest in black children. PMID:24702753

  14. Testable solution of the cosmological constant and coincidence problems

    NASA Astrophysics Data System (ADS)

    Shaw, Douglas J.; Barrow, John D.

    2011-02-01

    We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified, to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyr)^-2 [≈ 10^-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056(ζ_b/0.5), where ζ_b ~ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^(-1/2) and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.

  15. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    In order to better understand the dynamic processes of a real game system, we need an appropriate dynamics model, yet evaluating the validity of a candidate model is not a trivial task. Here, we demonstrate an approach to evaluating the validity of various dynamics models that takes the macroscopic dynamical patterns of angular momentum and speed as the measurement variables. Using data from real-time Rock-Paper-Scissors (RPS) game experiments, we obtain the experimental dynamic patterns and then derive the corresponding theoretical dynamic patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, the validity of the models can be evaluated. One result of our case study is that, among all the nonparametric models tested, the well-known Replicator dynamics model performs almost worst, while the Projection dynamics model performs best. Besides providing new empirical macroscopic patterns of social dynamics, we demonstrate that the approach can be an effective and rigorous tool for testing game dynamics models. This work was supported by the Fundamental Research Funds for the Central Universities (SSEYI2014Z) and the National Natural Science Foundation of China (Grant No. 61503062).
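    As a toy illustration of the kind of macroscopic measurement involved (a Python sketch under textbook assumptions, not the authors' analysis; the payoff matrix, time step, and initial state are placeholders), the following integrates Replicator dynamics for RPS and accumulates the signed area swept around the centroid, a simple angular-momentum-like statistic of cyclic motion:

    ```python
    import numpy as np

    # RPS payoff matrix (win = 1, lose = -1, tie = 0); the textbook choice,
    # not necessarily the payoffs used in the cited experiments.
    A = np.array([[0, -1, 1],
                  [1, 0, -1],
                  [-1, 1, 0]], dtype=float)

    def replicator_step(x, dt=0.01):
        """One Euler step of replicator dynamics: dx_i/dt = x_i * ((A x)_i - x.A.x)."""
        fitness = A @ x
        return x + dt * x * (fitness - x @ fitness)

    x = np.array([0.5, 0.3, 0.2])        # initial mixed-strategy state
    centroid = np.ones(3) / 3
    L = 0.0                              # accumulated angular momentum about the centroid
    for _ in range(20_000):
        x_new = replicator_step(x)
        r, dr = x[:2] - centroid[:2], x_new[:2] - x[:2]   # project onto two coordinates
        L += r[0] * dr[1] - r[1] * dr[0]                  # cross product = signed area swept
        x = x_new
    print(f"net angular momentum of the trajectory: {L:.4f}")
    ```

    A persistently nonzero accumulated value indicates cyclic motion around the mixed equilibrium; comparing such statistics computed from experimental trajectories against those produced by different candidate dynamics is the essence of the goodness-of-fit test described above.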

  16. How Important is Medical Ethics and History of Medicine Teaching in the Medical Curriculum? An Empirical Approach towards Students' Views

    PubMed Central

    Schulz, Stefan; Woestmann, Barbara; Huenges, Bert; Schweikardt, Christoph; Schäfer, Thorsten

    2012-01-01

    Objectives: It was investigated how students judge the teaching of medical ethics and the history of medicine at the start and during their studies, and the influence which subject-specific teaching of the history, theory and ethics of medicine (GTE) - or the lack thereof - has on the judgement of these subjects. Methods: From a total of 533 students who were in their first and 5th semester of the Bochum Model curriculum (GTE teaching from the first semester onwards) or followed the traditional curriculum (GTE teaching in the 5th/6th semester), questionnaires were requested in the winter semester 2005/06 and in the summer semester 2006. They were asked both before and after the 1st and 5th (model curriculum) or 6th semester (traditional curriculum). We asked students to judge the importance of teaching medical ethics and the history of medicine, the significance of these subjects for physicians and about teachability and testability (Likert scale from -2 (do not agree at all) to +2 (agree completely)). Results: 331 questionnaire pairs were included in the study. There were no significant differences between the students of the two curricula at the start of the 1st semester. The views on medical ethics and the history of medicine, in contrast, were significantly different at the start of undergraduate studies: The importance of medical ethics for the individual and the physician was considered very high but their teachability and testability were rated considerably worse. For the history of medicine, the results were exactly opposite. GTE teaching led to a more positive assessment of items previously ranked less favourably in both curricula. A lack of teaching led to a drop in the assessment of both subjects which had previously been rated well. Conclusion: Consistent with the literature, our results support the hypothesis that the teaching of GTE has a positive impact on the views towards the history and ethics of medicine, with a lack of teaching having a negative impact. Therefore the teaching of GTE should already begin in the 1st semester. The teaching of GTE must take into account that even right at the start of their studies, students judge medical ethics and the history of medicine differently. PMID:22403593

  17. How important is medical ethics and history of medicine teaching in the medical curriculum? An empirical approach towards students' views.

    PubMed

    Schulz, Stefan; Woestmann, Barbara; Huenges, Bert; Schweikardt, Christoph; Schäfer, Thorsten

    2012-01-01

    It was investigated how students judge the teaching of medical ethics and the history of medicine at the start and during their studies, and the influence which subject-specific teaching of the history, theory and ethics of medicine (GTE)--or the lack thereof--has on the judgement of these subjects. From a total of 533 students who were in their first and 5th semester of the Bochum Model curriculum (GTE teaching from the first semester onwards) or followed the traditional curriculum (GTE teaching in the 5th/6th semester), questionnaires were requested in the winter semester 2005/06 and in the summer semester 2006. They were asked both before and after the 1st and 5th (model curriculum) or 6th semester (traditional curriculum). We asked students to judge the importance of teaching medical ethics and the history of medicine, the significance of these subjects for physicians and about teachability and testability (Likert scale from -2 (do not agree at all) to +2 (agree completely)). 331 questionnaire pairs were included in the study. There were no significant differences between the students of the two curricula at the start of the 1st semester. The views on medical ethics and the history of medicine, in contrast, were significantly different at the start of undergraduate studies: The importance of medical ethics for the individual and the physician was considered very high but their teachability and testability were rated considerably worse. For the history of medicine, the results were exactly opposite. GTE teaching led to a more positive assessment of items previously ranked less favourably in both curricula. A lack of teaching led to a drop in the assessment of both subjects which had previously been rated well. Consistent with the literature, our results support the hypothesis that the teaching of GTE has a positive impact on the views towards the history and ethics of medicine, with a lack of teaching having a negative impact. Therefore the teaching of GTE should already begin in the 1st semester. The teaching of GTE must take into account that even right at the start of their studies, students judge medical ethics and the history of medicine differently.

  18. An analytical model of non-photorespiratory CO₂ release in the light and dark in leaves of C₃ species based on stoichiometric flux balance.

    PubMed

    Buckley, Thomas N; Adams, Mark A

    2011-01-01

    Leaf respiration continues in the light but at a reduced rate. This inhibition is highly variable, and the mechanisms are poorly known, partly due to the lack of a formal model that can generate testable hypotheses. We derived an analytical model for non-photorespiratory CO₂ release by solving steady-state supply/demand equations for ATP, NADH and NADPH, coupled to a widely used photosynthesis model. We used this model to evaluate causes for suppression of respiration by light. The model agrees with many observations, including highly variable suppression at saturating light, greater suppression in mature leaves, reduced assimilatory quotient (ratio of net CO₂ and O₂ exchange) concurrent with nitrate reduction and a Kok effect (discrete change in quantum yield at low light). The model predicts engagement of non-phosphorylating pathways at moderate to high light, or concurrent with processes that yield ATP and NADH, such as fatty acid or terpenoid synthesis. Suppression of respiration is governed largely by photosynthetic adenylate balance, although photorespiratory NADH may contribute at sub-saturating light. Key questions include the precise diel variation of anabolism and the ATP : 2e⁻ ratio for photophosphorylation. Our model can focus experimental research and is a step towards a fully process-based model of CO₂ exchange. © 2010 Blackwell Publishing Ltd.

  19. Forest management under uncertainty for multiple bird population objectives

    USGS Publications Warehouse

    Moore, C.T.; Plummer, W.T.; Conroy, M.J.; Ralph, C. John; Rich, Terrell D.

    2005-01-01

    We advocate adaptive programs of decision making and monitoring for the management of forest birds when responses by populations to management, and particularly management trade-offs among populations, are uncertain. Models are necessary components of adaptive management. Under this approach, uncertainty about the behavior of a managed system is explicitly captured in a set of alternative models. The models generate testable predictions about the response of populations to management, and monitoring data provide the basis for assessing these predictions and informing future management decisions. To illustrate these principles, we examine forest management at the Piedmont National Wildlife Refuge, where management attention is focused on the recovery of the Red-cockaded Woodpecker (Picoides borealis) population. However, managers are also sensitive to the habitat needs of many non-target organisms, including Wood Thrushes (Hylocichla mustelina) and other forest interior Neotropical migratory birds. By simulating several management policies on a set of alternative forest and bird models, we found a decision policy that maximized a composite response by woodpeckers and Wood Thrushes despite our complete uncertainty regarding system behavior. Furthermore, we used monitoring data to update our measure of belief in each alternative model following one cycle of forest management. This reduction of uncertainty translates into a reallocation of model influence on the choice of optimal decision action at the next decision opportunity.
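
    The belief-updating step described above is, at its core, a Bayesian reweighting of the alternative models against monitoring data. A minimal sketch, with hypothetical predictions and observations standing in for the refuge's actual models and surveys:

    ```python
    # A minimal sketch (not the authors' code) of the Bayesian weight update used in
    # adaptive management: each alternative model predicts the monitoring outcome,
    # and the likelihood of the observed data reallocates belief weights.
    import numpy as np
    from scipy.stats import norm

    prior = np.array([0.25, 0.25, 0.25, 0.25])             # equal belief in 4 alternative models
    predicted_counts = np.array([40.0, 55.0, 70.0, 85.0])  # hypothetical model predictions
    observed_count, obs_sd = 62.0, 10.0                    # hypothetical monitoring result

    likelihood = norm.pdf(observed_count, loc=predicted_counts, scale=obs_sd)
    posterior = prior * likelihood / np.sum(prior * likelihood)
    print(posterior)   # belief shifts toward the models that predicted ~62 birds
    ```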

  20. A Survey of Reliability, Maintainability, Supportability, and Testability Software Tools

    DTIC Science & Technology

    1991-04-01

    designs in terms of their contributions toward forced mission termination and vehicle or function loss. Includes the ability to treat failure modes of... ABSTRACT: Inputs: MTBFs, MTTRs, support equipment costs, equipment weights and costs, available targets, military occupational specialty skill level and... US Army CECOM NAME: SPARECOST ABSTRACT: Calculates expected number of failures and performs spares holding optimization based on cost, weight, or...

  1. Minimal realization of right-handed gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-01-01

    We propose a minimally extended gauge symmetry model with U(1)_R, where only the right-handed fermions have nonzero charges in the fermion sector. To achieve both anomaly cancellations and minimality, three right-handed neutrinos are naturally required, and the standard model Higgs has to have nonzero charge under this symmetry. Then we find that its breaking scale (Λ) is restricted by precise measurement of the neutral gauge boson in the standard model; therefore, O(10) TeV ≲ Λ. We also discuss the testability of the new gauge boson and the discrimination of the U(1)_R model from the U(1)_B-L one at colliders such as the LHC and ILC.

  2. A genetic programming approach for Burkholderia Pseudomallei diagnostic pattern discovery

    PubMed Central

    Yang, Zheng Rong; Lertmemongkolchai, Ganjana; Tan, Gladys; Felgner, Philip L.; Titball, Richard

    2009-01-01

    Motivation: Finding diagnostic patterns for fighting diseases like Burkholderia pseudomallei using biomarkers involves two key issues. First, exhausting all subsets of testable biomarkers (antigens in this context) to find the best one is computationally infeasible. Therefore, a proper optimization approach like evolutionary computation should be investigated. Second, a properly selected function of the antigens to serve as the diagnostic pattern, which is commonly unknown, is key to the diagnostic accuracy and the diagnostic effectiveness in clinical use. Results: A conversion function is proposed to convert serum tests of antigens on patients to binary values, based on which Boolean functions are developed as the diagnostic patterns. A genetic programming approach is designed for optimizing the diagnostic patterns in terms of their accuracy and effectiveness. During optimization, the aim is to maximize the coverage (the rate of positive response to antigens) in the infected patients and minimize the coverage in the non-infected patients, while keeping the number of testable antigens used in the Boolean functions as small as possible. The final coverage in the infected patients is 96.55% using 17 of 215 (7.4%) antigens, with zero coverage in the non-infected patients. Among these 17 antigens, BPSL2697 is the most frequently selected one for the diagnosis of Burkholderia pseudomallei. The approach has been evaluated using both cross-validation and jackknife simulation methods, with prediction accuracies of 93% and 92%, respectively. A novel approach is also proposed in this study to evaluate a model with binary data using ROC analysis. Contact: z.r.yang@ex.ac.uk PMID:19561021
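
    The optimization described above scores candidate antigen subsets by their coverage in infected versus non-infected sera. A minimal sketch of that fitness idea, with simulated data and a crude random search standing in for the paper's genetic-programming operators:

    ```python
    # Sketch of the fitness idea behind the genetic-programming search: antigens are
    # binarized, a candidate Boolean OR-rule is scored by its coverage in infected
    # vs. non-infected sera, with a penalty for using many antigens. Data, thresholds
    # and the search loop here are illustrative stand-ins.
    import numpy as np

    rng = np.random.default_rng(1)
    n_antigens = 50
    infected = rng.random((80, n_antigens)) > 0.7       # binarized serum reactivity
    control = rng.random((120, n_antigens)) > 0.95

    def fitness(mask):
        rule_pos = lambda data: np.mean(data[:, mask].any(axis=1))
        return rule_pos(infected) - rule_pos(control) - 0.01 * mask.sum()

    # crude random search standing in for the evolutionary operators
    best = rng.random(n_antigens) < 0.1
    for _ in range(2000):
        cand = best.copy()
        cand[rng.integers(n_antigens)] ^= True          # flip one antigen in/out ("mutation")
        if fitness(cand) > fitness(best):
            best = cand
    print(best.sum(), fitness(best))
    ```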

  3. Family nonuniversal Z' models with protected flavor-changing interactions

    NASA Astrophysics Data System (ADS)

    Celis, Alejandro; Fuentes-Martín, Javier; Jung, Martin; Serôdio, Hugo

    2015-07-01

    We define a new class of Z' models with neutral flavor-changing interactions at tree level in the down-quark sector. They are related in an exact way to elements of the quark mixing matrix due to an underlying flavored U(1)' gauge symmetry, rendering these models particularly predictive. The same symmetry implies lepton-flavor nonuniversal couplings, fully determined by the gauge structure of the model. Our models allow us to address presently observed deviations from the standard model; specific correlations among the new physics contributions to the Wilson coefficients C9,10(')ℓ can be tested in b → s ℓ+ℓ- transitions. We furthermore predict lepton-universality violations in Z' decays, testable at the LHC.

  4. Toward a neighborhood resource-based theory of social capital for health: can Bourdieu and sociology help?

    PubMed

    Carpiano, Richard M

    2006-01-01

    Within the past several years, a considerable body of research on social capital has emerged in public health. Although offering the potential for new insights into how community factors impact health and well being, this research has received criticism for being undertheorized and methodologically flawed. In an effort to address some of these limitations, this paper applies Pierre Bourdieu's (1986) [Bourdieu, P. (1986). Handbook of theory and research for the sociology of education (pp. 241-258). New York: Greenwood] social capital theory to create a conceptual model of neighborhood socioeconomic processes, social capital (resources inhered within social networks), and health. After briefly reviewing the social capital conceptualizations of Bourdieu and Putnam, I attempt to integrate these authors' theories to better understand how social capital might operate within neighborhoods or local areas. Next, I describe a conceptual model that incorporates this theoretical integration of social capital into a framework of neighborhood social processes as health determinants. Discussion focuses on the utility of this Bourdieu-based neighborhood social capital theory and model for examining several under-addressed issues of social capital in the neighborhood effects literature and generating specific, empirically testable hypotheses for future research.

  5. Inferences about nested subsets structure when not all species are detected

    USGS Publications Warehouse

    Cam, E.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    Comparisons of species composition among ecological communities of different size have often provided evidence that the species in communities with lower species richness form nested subsets of the species in larger communities. In the vast majority of studies, the question of nested subsets has been addressed using information on presence-absence, where a '0' is interpreted as the absence of a given species from a given location. Most of the methodological discussion in earlier studies investigating nestedness concerns the approach to generation of model-based matrices. However, it is most likely that in many situations investigators cannot detect all the species present in the location sampled. The possibility that zeros in incidence matrices reflect nondetection rather than absence of species has not been considered in studies addressing nested subsets, even though the position of zeros in these matrices forms the basis of earlier inference methods. These sampling artifacts are likely to lead to erroneous conclusions about both variation over space in species richness and the degree of similarity of the various locations. Here we propose an approach to investigation of nestedness, based on statistical inference methods explicitly incorporating species detection probability, that take into account the probabilistic nature of the sampling process. We use presence-absence data collected under Pollock's robust capture-recapture design, and resort to an estimator of species richness originally developed for closed populations to assess the proportion of species shared by different locations. We develop testable predictions corresponding to the null hypothesis of a nonnested pattern, and an alternative hypothesis of perfect nestedness. We also present an index for assessing the degree of nestedness of a system of ecological communities. We illustrate our approach using avian data from the North American Breeding Bird Survey collected in the Florida Keys.
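
    The core sampling issue is easy to illustrate: with imperfect detection, an observed incidence matrix understates richness and can distort apparent nestedness. A minimal simulation sketch (the correction shown is a simple illustration, not the authors' closed-population estimator):

    ```python
    # Illustration of why detection probability matters for nestedness inference:
    # zeros in an observed incidence matrix can be non-detections rather than true
    # absences. All values below are simulated.
    import numpy as np

    rng = np.random.default_rng(2)
    n_sites, n_species, n_visits, p_detect = 6, 30, 4, 0.4
    richness = np.linspace(30, 10, n_sites).astype(int)

    true = np.zeros((n_sites, n_species), dtype=bool)
    for i, r in enumerate(richness):
        true[i, :r] = True                    # perfectly nested true communities

    # imperfect detection over repeated visits (robust-design style sampling)
    detected = np.zeros_like(true)
    for _ in range(n_visits):
        detected |= true & (rng.random((n_sites, n_species)) < p_detect)

    p_star = 1 - (1 - p_detect) ** n_visits   # chance of detecting a present species at least once
    print("true richness:     ", true.sum(axis=1))
    print("observed richness: ", detected.sum(axis=1))
    print("corrected richness:", np.round(detected.sum(axis=1) / p_star, 1))
    ```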

  6. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 high alpha research vehicle (HARV) automated test systems are discussed. It is noted that operational experiences in developing and using these automated testing techniques have highlighted the need for incorporating target system features to improve testability. Improved target system testability can be accomplished with the addition of nonreal-time and real-time features. Online access to target system implementation details, unobtrusive real-time access to internal user-selectable variables, and proper software instrumentation are all desirable features of the target system. Also, test system and target system design issues must be addressed during the early stages of the target system development. Processing speeds of up to 20 million instructions/s and the development of high-bandwidth reflective memory systems have improved the ability to integrate the target system and test system for the application of automated testing techniques. It is concluded that new methods of designing testability into the target systems are required.

  7. Mean-field thalamocortical modeling of longitudinal EEG acquired during intensive meditation training.

    PubMed

    Saggar, Manish; Zanesco, Anthony P; King, Brandon G; Bridwell, David A; MacLean, Katherine A; Aichele, Stephen R; Jacobs, Tonya L; Wallace, B Alan; Saron, Clifford D; Miikkulainen, Risto

    2015-07-01

    Meditation training has been shown to enhance attention and improve emotion regulation. However, the brain processes associated with such training are poorly understood and a computational modeling framework is lacking. Modeling approaches that can realistically simulate neurophysiological data while conforming to basic anatomical and physiological constraints can provide a unique opportunity to generate concrete and testable hypotheses about the mechanisms supporting complex cognitive tasks such as meditation. Here we applied the mean-field computational modeling approach using the scalp-recorded electroencephalogram (EEG) collected at three assessment points from meditating participants during two separate 3-month-long shamatha meditation retreats. We modeled cortical, corticothalamic, and intrathalamic interactions to generate a simulation of EEG signals recorded across the scalp. We also present two novel extensions to the mean-field approach that allow for: (a) non-parametric analysis of changes in model parameter values across all channels and assessments; and (b) examination of variation in modeled thalamic reticular nucleus (TRN) connectivity over the retreat period. After successfully fitting whole-brain EEG data across three assessment points within each retreat, two model parameters were found to replicably change across both meditation retreats. First, after training, we observed an increased temporal delay between modeled cortical and thalamic cells. This increase provides a putative neural mechanism for a previously observed reduction in individual alpha frequency in these same participants. Second, we found decreased inhibitory connection strength between the TRN and secondary relay nuclei (SRN) of the modeled thalamus after training. This reduction in inhibitory strength was found to be associated with increased dynamical stability of the model. Altogether, this paper presents the first computational approach, taking core aspects of physiology and anatomy into account, to formally model brain processes associated with intensive meditation training. The observed changes in model parameters inform theoretical accounts of attention training through meditation, and may motivate future study on the use of meditation in a variety of clinical populations. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Moral injury: a mechanism for war-related psychological trauma in military family members.

    PubMed

    Nash, William P; Litz, Brett T

    2013-12-01

    Recent research has provided compelling evidence of mental health problems in military spouses and children, including post-traumatic stress disorder (PTSD), related to the war-zone deployments, combat exposures, and post-deployment mental health symptoms experienced by military service members in the family. One obstacle to further research and federal programs targeting the psychological health of military family members has been the lack of a clear, compelling, and testable model to explain how war-zone events can result in psychological trauma in military spouses and children. In this article, we propose a possible mechanism for deployment-related psychological trauma in military spouses and children based on the concept of moral injury, a model that has been developed to better understand how service members and veterans may develop PTSD and other serious mental and behavioral problems in the wake of war-zone events that inflict damage to moral belief systems rather than by threatening personal life and safety. After describing means of adapting the moral injury model to family systems, we discuss the clinical implications of moral injury, and describe a model for its psychological treatment.

  9. A plausible radiobiological model of cardiovascular disease at low or fractionated doses

    NASA Astrophysics Data System (ADS)

    Little, Mark; Vandoolaeghe, Wendy; Gola, Anna; Tzoulaki, Ioanna

    Atherosclerosis is the main cause of coronary heart disease and stroke, the two major causes of death in developed society. There is emerging evidence of excess risk of cardiovascular disease at low radiation doses in various occupationally-exposed groups receiving small daily radiation doses. Assuming that they are causal, the mechanisms for effects of chronic fractionated radiation exposures on cardiovascular disease are unclear. We outline a spatial reaction-diffusion model for atherosclerosis, and perform stability analysis, based wherever possible on human data. We show that a predicted consequence of multiple small radiation doses is to cause mean chemoattractant (MCP-1) concentration to increase linearly with cumulative dose. The main driver for the increase in MCP-1 is monocyte death, and consequent reduction in MCP-1 degradation. The radiation-induced risks predicted by the model are quantitatively consistent with those observed in a number of occupationally-exposed groups. The changes in equilibrium MCP-1 concentrations with low density lipoprotein cholesterol concentration are also consistent with experimental and epidemiologic data. This proposed mechanism would be experimentally testable. If true, it also has substantive implications for radiological protection, which at present does not take cardiovascular disease into account. The Japanese A-bomb survivor data implies that cardiovascular disease and cancer mortality contribute similarly to radiogenic risk. The major uncertainty in assessing the low-dose risk of cardiovascular disease is the shape of the dose response relationship, which is unclear in the Japanese data. The analysis of the present paper suggests that linear extrapolation would be appropriate for this endpoint.
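
    The proposed mechanism can be caricatured with a single steady-state balance for MCP-1, in which production is constant and degradation is proportional to the surviving monocyte density. The sketch below uses hypothetical rate constants and ignores the spatial structure of the full reaction-diffusion model:

    ```python
    # Deliberately simplified caricature (not the paper's spatial reaction-diffusion
    # system) of the mechanism described above: monocyte death lowers the MCP-1
    # degradation capacity, so steady-state MCP-1 rises roughly linearly with
    # cumulative dose when the per-dose monocyte loss is small.
    import numpy as np

    S, k, M0, loss_per_gray = 1.0, 1.0, 1.0, 0.01   # hypothetical rate constants
    doses = np.linspace(0.0, 2.0, 5)                # cumulative dose in Gy

    monocytes = M0 * (1 - loss_per_gray * doses)    # monocyte density after dose D
    mcp1_steady = S / (k * monocytes)               # C* from dC/dt = S - k*M*C = 0
    for d, c in zip(doses, mcp1_steady):
        print(f"dose {d:.1f} Gy -> steady-state MCP-1 {c:.4f} (relative units)")
    ```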

  10. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools.

    PubMed

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-02-01

    Children with learning disabilities might have difficulty communicating effectively and giving reliable responses as required in various visual function testing procedures. The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using Lang Stereo test II and Butterfly stereo tests and colour perception using Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. The testability was determined by the percentage of children who were able to give reliable responses as required by the respective tests. 'Unable to test' was defined as inappropriate response or uncooperative despite best efforts of the screener. The testability of the modified ETDRS, Butterfly stereo test and Ishihara test for respective visual function tests were found lower among children in special education classes (P < 0.001) but not in Cambridge Crowding Cards, Lang Stereo test II and CVTME. Non-verbal or "matching" approaches were found to be superior in testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities.

  11. A Genuine TEAM Player

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Qualtech Systems, Inc. developed a complete software system with capabilities of multisignal modeling, diagnostic analysis, run-time diagnostic operations, and intelligent interactive reasoners. Commercially available as the TEAMS (Testability Engineering and Maintenance System) tool set, the software can be used to reveal unanticipated system failures. The TEAMS software package is broken down into four companion tools: TEAMS-RT, TEAMATE, TEAMS-KB, and TEAMS-RDS. TEAMS-RT identifies good, bad, and suspect components in the system in real-time. It reports system health results from onboard tests, and detects and isolates failures within the system, allowing for rapid fault isolation. TEAMATE takes over from where TEAMS-RT left off by intelligently guiding the maintenance technician through the troubleshooting procedure, repair actions, and operational checkout. TEAMS-KB serves as a model management and collection tool. TEAMS-RDS (TEAMS-Remote Diagnostic Server) has the ability to continuously assess a system and isolate any failure in that system or its components, in real time. RDS incorporates TEAMS-RT, TEAMATE, and TEAMS-KB in a large-scale server architecture capable of providing advanced diagnostic and maintenance functions over a network, such as the Internet, with a web browser user interface.

  12. System Modeling and Diagnostics for Liquefying-Fuel Hybrid Rockets

    NASA Technical Reports Server (NTRS)

    Poll, Scott; Iverson, David; Ou, Jeremy; Sanderfer, Dwight; Patterson-Hine, Ann

    2003-01-01

    A Hybrid Combustion Facility (HCF) was recently built at NASA Ames Research Center to study the combustion properties of a new fuel formulation that burns approximately three times faster than conventional hybrid fuels. Researchers at Ames working in the area of Integrated Vehicle Health Management recognized a good opportunity to apply IVHM techniques to a candidate technology for next generation launch systems. Five tools were selected to examine various IVHM techniques for the HCF. Three of the tools, TEAMS (Testability Engineering and Maintenance System), L2 (Livingstone2), and RODON, are model-based reasoning (or diagnostic) systems. Two other tools in this study, ICS (Interval Constraint Simulator) and IMS (Inductive Monitoring System) do not attempt to isolate the cause of the failure but may be used for fault detection. Models of varying scope and completeness were created, both qualitative and quantitative. In each of the models, the structure and behavior of the physical system are captured. In the qualitative models, the temporal aspects of the system behavior and the abstraction of sensor data are handled outside of the model and require the development of additional code. In the quantitative model, less extensive processing code is also necessary. Examples of fault diagnoses are given.

  13. Cardiac rehabilitation delivery model for low-resource settings

    PubMed Central

    Grace, Sherry L; Turk-Adawi, Karam I; Contractor, Aashish; Atrey, Alison; Campbell, Norm; Derman, Wayne; Melo Ghisi, Gabriela L; Oldridge, Neil; Sarkar, Bidyut K; Yeo, Tee Joo; Lopez-Jimenez, Francisco; Mendis, Shanthi; Oh, Paul; Hu, Dayi; Sarrafzadegan, Nizal

    2016-01-01

    Objective Cardiovascular disease is a global epidemic, which is largely preventable. Cardiac rehabilitation (CR) is demonstrated to be cost-effective and efficacious in high-income countries. CR could represent an important approach to mitigate the epidemic of cardiovascular disease in lower-resource settings. The purpose of this consensus statement was to review low-cost approaches to delivering the core components of CR, to propose a testable model of CR which could feasibly be delivered in middle-income countries. Methods A literature review regarding delivery of each core CR component, namely: (1) lifestyle risk factor management (ie, physical activity, diet, tobacco and mental health), (2) medical risk factor management (eg, lipid control, blood pressure control), (3) education for self-management and (4) return to work, in low-resource settings was undertaken. Recommendations were developed based on identified articles, using a modified GRADE approach where evidence in a low-resource setting was available, or consensus where evidence was not. Results Available data on cost of CR delivery in low-resource settings suggests it is not feasible to deliver CR in low-resource settings as is delivered in high-resource ones. Strategies which can be implemented to deliver all of the core CR components in low-resource settings were summarised in practice recommendations, and approaches to patient assessment proffered. It is suggested that CR be adapted by delivery by non-physician healthcare workers, in non-clinical settings. Conclusions Advocacy to achieve political commitment for broad delivery of adapted CR services in low-resource settings is needed. PMID:27181874

  14. Mathematical modeling of the female reproductive system: from oocyte to delivery.

    PubMed

    Clark, Alys R; Kruger, Jennifer A

    2017-01-01

    From ovulation to delivery, and through the menstrual cycle, the female reproductive system undergoes many dynamic changes to provide an optimal environment for the embryo to implant, and to develop successfully. It is difficult ethically and practically to observe the system over the timescales involved in growth and development (often hours to days). Even in carefully monitored conditions clinicians and biologists can only see snapshots of the development process. Mathematical models are emerging as a key means to supplement our knowledge of the reproductive process, and to tease apart complexity in the reproductive system. These models have been used successfully to test existing hypotheses regarding the mechanisms of female infertility and pathological fetal development, and also to provide new experimentally testable hypotheses regarding the process of development. This new knowledge has allowed for improvements in assisted reproductive technologies and is moving toward translation to clinical practice via multiscale assessments of the dynamics of ovulation, development in pregnancy, and the timing and mechanics of delivery. WIREs Syst Biol Med 2017, 9:e1353. doi: 10.1002/wsbm.1353 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  15. Predicting the High Redshift Galaxy Population for JWST

    NASA Astrophysics Data System (ADS)

    Flynn, Zoey; Benson, Andrew

    2017-01-01

    The James Webb Space Telescope will be launched in Oct 2018 with the goal of observing galaxies in the redshift range of z = 10 - 15. As redshift increases, the age of the Universe decreases, allowing us to study objects formed only a few hundred million years after the Big Bang. This will provide a valuable opportunity to test and improve current galaxy formation theory by comparing predictions for mass, luminosity, and number density to the observed data. We have made testable predictions with the semi-analytical galaxy formation model Galacticus. The code uses Markov Chain Monte Carlo methods to determine viable sets of model parameters that match current astronomical data. The resulting constrained model was then set to match the specifications of the JWST Ultra Deep Field Imaging Survey. Predictions utilizing up to 100 viable parameter sets were calculated, allowing us to assess the uncertainty in current theoretical expectations. We predict that the planned UDF will be able to observe a significant number of objects past redshift z > 9 but nothing at redshift z > 11. In order to detect these faint objects at redshifts z = 11-15 we need to increase exposure time by at least a factor of 1.66.

  16. Pair production processes and flavor in gauge-invariant perturbation theory

    NASA Astrophysics Data System (ADS)

    Egger, Larissa; Maas, Axel; Sondenheimer, René

    2017-12-01

    Gauge-invariant perturbation theory is an extension of ordinary perturbation theory which describes strictly gauge-invariant states in theories with a Brout-Englert-Higgs effect. Such gauge-invariant states are composite operators which have necessarily only global quantum numbers. As a consequence, flavor is exchanged for custodial quantum numbers in the Standard Model, recreating the fermion spectrum in the process. Here, we study the implications of such a description, possibly also for the generation structure of the Standard Model. In particular, this implies that scattering processes are essentially bound-state-bound-state interactions, and require a suitable description. We analyze the implications for the pair-production process e+e-→f¯f at a linear collider to leading order. We show how ordinary perturbation theory is recovered as the leading contribution. Using a PDF-type language, we also assess the impact of sub-leading contributions. To lowest order, we find that the result is mainly influenced by how large the contribution of the Higgs at large x is. This gives an interesting, possibly experimentally testable, scenario for the formal field theory underlying the electroweak sector of the Standard Model.

  17. Capitalizing on Citizen Science Data for Validating Models and Generating Hypotheses Describing Meteorological Drivers of Mosquito-Borne Disease Risk

    NASA Astrophysics Data System (ADS)

    Boger, R. A.; Low, R.; Paull, S.; Anyamba, A.; Soebiyanto, R. P.

    2017-12-01

    Temperature and precipitation are important drivers of mosquito population dynamics, and a growing set of models have been proposed to characterize these relationships. Validation of these models, and development of broader theories across mosquito species and regions could nonetheless be improved by comparing observations from a global dataset of mosquito larvae with satellite-based measurements of meteorological variables. Citizen science data can be particularly useful for two such aspects of research into the meteorological drivers of mosquito populations: i) Broad-scale validation of mosquito distribution models and ii) Generation of quantitative hypotheses regarding changes to mosquito abundance and phenology across scales. The recently released GLOBE Observer Mosquito Habitat Mapper (GO-MHM) app engages citizen scientists in identifying vector taxa, mapping breeding sites and decommissioning non-natural habitats, and provides a potentially useful new tool for validating mosquito ubiquity projections based on the analysis of remotely sensed environmental data. Our early work with GO-MHM data focuses on two objectives: validating citizen science reports of Aedes aegypti distribution through comparison with accepted scientific data sources, and exploring the relationship between extreme temperature and precipitation events and subsequent observations of mosquito larvae. Ultimately the goal is to develop testable hypotheses regarding the shape and character of this relationship between mosquito species and regions.

  18. Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model

    PubMed Central

    Dong, Junzi; Colburn, H. Steven

    2016-01-01

    In multisource, “cocktail party” sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem. PMID:26866056

  19. Cortical Transformation of Spatial Processing for Solving the Cocktail Party Problem: A Computational Model.

    PubMed

    Dong, Junzi; Colburn, H Steven; Sen, Kamal

    2016-01-01

    In multisource, "cocktail party" sound environments, human and animal auditory systems can use spatial cues to effectively separate and follow one source of sound over competing sources. While mechanisms to extract spatial cues such as interaural time differences (ITDs) are well understood in precortical areas, how such information is reused and transformed in higher cortical regions to represent segregated sound sources is not clear. We present a computational model describing a hypothesized neural network that spans spatial cue detection areas and the cortex. This network is based on recent physiological findings that cortical neurons selectively encode target stimuli in the presence of competing maskers based on source locations (Maddox et al., 2012). We demonstrate that key features of cortical responses can be generated by the model network, which exploits spatial interactions between inputs via lateral inhibition, enabling the spatial separation of target and interfering sources while allowing monitoring of a broader acoustic space when there is no competition. We present the model network along with testable experimental paradigms as a starting point for understanding the transformation and organization of spatial information from midbrain to cortex. This network is then extended to suggest engineering solutions that may be useful for hearing-assistive devices in solving the cocktail party problem.

  20. What can we learn from a two-brain approach to verbal interaction?

    PubMed

    Schoot, Lotte; Hagoort, Peter; Segaert, Katrien

    2016-09-01

    Verbal interaction is one of the most frequent social interactions humans encounter on a daily basis. In the current paper, we zoom in on what the multi-brain approach has contributed, and can contribute in the future, to our understanding of the neural mechanisms supporting verbal interaction. Indeed, since verbal interaction can only exist between individuals, it seems intuitive to focus analyses on inter-individual neural markers, i.e. between-brain neural coupling. To date, however, there is a severe lack of theoretically-driven, testable hypotheses about what between-brain neural coupling actually reflects. In this paper, we develop a testable hypothesis in which between-pair variation in between-brain neural coupling is of key importance. Based on theoretical frameworks and empirical data, we argue that the level of between-brain neural coupling reflects speaker-listener alignment at different levels of linguistic and extra-linguistic representation. We discuss the possibility that between-brain neural coupling could inform us about the highest level of inter-speaker alignment: mutual understanding. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Testability, Test Automation and Test Driven Development for the Trick Simulation Toolkit

    NASA Technical Reports Server (NTRS)

    Penn, John

    2014-01-01

    This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes the approach, and the significant benefits seen, such as fast, thorough and clear test feedback every time code is checked into the code repository. It also describes an approach that encourages development of code that is testable and adaptable.
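
    As a generic illustration of the test-first cycle described here (Trick itself is a C/C++ code base with its own test harness, so the function and tests below are hypothetical stand-ins), a unit test is written alongside, or before, the code it exercises and runs on every check-in:

    ```python
    # Generic sketch of the test-driven-development practice, not Trick's actual tests.
    import unittest

    def integrate_state(position, velocity, dt):
        """Tiny stand-in for a simulation update step."""
        return position + velocity * dt

    class IntegrateStateTest(unittest.TestCase):
        def test_advances_position_by_velocity_times_dt(self):
            self.assertAlmostEqual(integrate_state(1.0, 2.0, 0.5), 2.0)

        def test_zero_dt_leaves_position_unchanged(self):
            self.assertAlmostEqual(integrate_state(3.0, 9.9, 0.0), 3.0)

    if __name__ == "__main__":
        unittest.main()   # a CI system would run this on every check-in
    ```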

  2. A simple theoretical framework for understanding heterogeneous differentiation of CD4+ T cells

    PubMed Central

    2012-01-01

    Background CD4+ T cells have several subsets of functional phenotypes, which play critical yet diverse roles in the immune system. Pathogen-driven differentiation of these subsets of cells is often heterogeneous in terms of the induced phenotypic diversity. In vitro recapitulation of heterogeneous differentiation under homogeneous experimental conditions indicates some highly regulated mechanisms by which multiple phenotypes of CD4+ T cells can be generated from a single population of naïve CD4+ T cells. Therefore, conceptual understanding of induced heterogeneous differentiation will shed light on the mechanisms controlling the response of populations of CD4+ T cells under physiological conditions. Results We present a simple theoretical framework to show how heterogeneous differentiation in a two-master-regulator paradigm can be governed by a signaling network motif common to all subsets of CD4+ T cells. With this motif, a population of naïve CD4+ T cells can integrate the signals from their environment to generate a functionally diverse population with robust commitment of individual cells. Notably, two positive feedback loops in this network motif govern three bistable switches, which in turn, give rise to three types of heterogeneous differentiated states, depending upon particular combinations of input signals. We provide three prototype models illustrating how to use this framework to explain experimental observations and make specific testable predictions. Conclusions The process in which several types of T helper cells are generated simultaneously to mount complex immune responses upon pathogenic challenges can be highly regulated, and a simple signaling network motif can be responsible for generating all possible types of heterogeneous populations with respect to a pair of master regulators controlling CD4+ T cell differentiation. The framework provides a mathematical basis for understanding the decision-making mechanisms of CD4+ T cells, and it can be helpful for interpreting experimental results. Mathematical models based on the framework make specific testable predictions that may improve our understanding of this differentiation system. PMID:22697466
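
    The two-master-regulator motif can be sketched as a pair of coupled ODEs in which each regulator activates itself and inhibits the other under a shared induction signal; different initial biases then settle into different stable states. Parameter values below are illustrative, not taken from the paper:

    ```python
    # Minimal sketch of the two-master-regulator motif: each regulator activates
    # itself and inhibits the other, driven by a shared input signal. Parameters
    # are illustrative; the framework in the paper is more general.
    import numpy as np
    from scipy.integrate import odeint

    def motif(y, t, signal):
        x1, x2 = y
        hill = lambda a: a**4 / (1 + a**4)
        dx1 = signal + 2.0 * hill(x1) - 1.0 * x1 - 1.5 * x1 * hill(x2)
        dx2 = signal + 2.0 * hill(x2) - 1.0 * x2 - 1.5 * x2 * hill(x1)
        return [dx1, dx2]

    t = np.linspace(0, 50, 500)
    for y0 in ([1.5, 0.1], [0.1, 1.5], [1.2, 1.2]):      # different initial biases
        x1, x2 = odeint(motif, y0, t, args=(0.2,))[-1]
        print(f"start {y0} -> steady state ({x1:.2f}, {x2:.2f})")
    ```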

  3. Field-aligned currents and ion convection at high altitudes

    NASA Technical Reports Server (NTRS)

    Burch, J. L.; Reiff, P. H.

    1985-01-01

    Hot plasma observations from Dynamics Explorer 1 have been used to investigate solar-wind ion injection, Birkeland currents, and plasma convection at altitudes above 2 earth-radii in the morning sector. The results of the study, along with the antiparallel merging hypothesis, have been used to construct a By-dependent global convection model. A significant element of the model is the coexistence of three types of convection cells (merging cells, viscous cells, and lobe cells). As the IMF direction varies, the model accounts for the changing roles of viscous and merging processes and makes testable predictions about several magnetospheric phenomena, including the newly-observed theta aurora in the polar cap.

  4. Testable solution of the cosmological constant and coincidence problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Douglas J.; Barrow, John D.

    2011-02-15

    We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified, to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyr)^-2 [≈ 10^-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ≈ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^-1/2 and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.

  5. The attention schema theory: a mechanistic account of subjective awareness

    PubMed Central

    Graziano, Michael S. A.; Webb, Taylor W.

    2015-01-01

    We recently proposed the attention schema theory, a novel way to explain the brain basis of subjective awareness in a mechanistic and scientifically testable manner. The theory begins with attention, the process by which signals compete for the brain’s limited computing resources. This internal signal competition is partly under a bottom–up influence and partly under top–down control. We propose that the top–down control of attention is improved when the brain has access to a simplified model of attention itself. The brain therefore constructs a schematic model of the process of attention, the ‘attention schema,’ in much the same way that it constructs a schematic model of the body, the ‘body schema.’ The content of this internal model leads a brain to conclude that it has a subjective experience. One advantage of this theory is that it explains how awareness and attention can sometimes become dissociated; the brain’s internal models are never perfect, and sometimes a model becomes dissociated from the object being modeled. A second advantage of this theory is that it explains how we can be aware of both internal and external events. The brain can apply attention to many types of information including external sensory information and internal information about emotions and cognitive states. If awareness is a model of attention, then this model should pertain to the same domains of information to which attention pertains. A third advantage of this theory is that it provides testable predictions. If awareness is the internal model of attention, used to help control attention, then without awareness, attention should still be possible but should suffer deficits in control. In this article, we review the existing literature on the relationship between attention and awareness, and suggest that at least some of the predictions of the theory are borne out by the evidence. PMID:25954242

  6. Scalar-tensor extension of the ΛCDM model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Algoner, W.C.; Velten, H.E.S.; Zimdahl, W., E-mail: w.algoner@cosmo-ufes.org, E-mail: velten@pq.cnpq.br, E-mail: winfried.zimdahl@pq.cnpq.br

    2016-11-01

    We construct a cosmological scalar-tensor-theory model in which the Brans-Dicke type scalar Φ enters the effective (Jordan-frame) Hubble rate as a simple modification of the Hubble rate of the ΛCDM model. This allows us to quantify differences between the background dynamics of scalar-tensor theories and general relativity (GR) in a transparent and observationally testable manner in terms of one single parameter. Problems of the mapping of the scalar-field degrees of freedom on an effective fluid description in a GR context are discussed. Data from supernovae, the differential age of old galaxies and baryon acoustic oscillations are shown to strongly limit potential deviations from the standard model.

  7. Optimal flight initiation distance.

    PubMed

    Cooper, William E; Frederick, William G

    2007-01-07

    Decisions regarding flight initiation distance have received scant theoretical attention. A graphical model by Ydenberg and Dill (1986. The economics of fleeing from predators. Adv. Stud. Behav. 16, 229-249) that has guided research for the past 20 years specifies when escape begins. In the model, a prey detects a predator, monitors its approach until costs of escape and of remaining are equal, and then flees. The distance between predator and prey when escape is initiated (approach distance = flight initiation distance) occurs where decreasing cost of remaining and increasing cost of fleeing intersect. We argue that prey fleeing as predicted cannot maximize fitness because the best prey can do is break even during an encounter. We develop two optimality models, one applying when all expected future contribution to fitness (residual reproductive value) is lost if the prey dies, the other when any fitness gained (increase in expected RRV) during the encounter is retained after death. Both models predict optimal flight initiation distance from initial expected fitness, benefits obtainable during encounters, costs of escaping, and probability of being killed. Predictions match extensively verified predictions of Ydenberg and Dill's (1986) model. Our main conclusion is that optimality models are preferable to break-even models because they permit fitness maximization, offer many new testable predictions, and allow assessment of prey decisions in many naturally occurring situations through modification of benefit, escape cost, and risk functions.
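
    The optimality argument can be made concrete by scanning candidate flight initiation distances and keeping the one that maximizes expected fitness. The functional forms for risk, benefit and cost below are assumptions chosen only to illustrate the trade-off, not the authors' parameterization:

    ```python
    # Illustrative numerical version of the optimality argument: pick the flight
    # initiation distance d that maximizes expected fitness, trading off the benefit
    # of staying (e.g. extra foraging) against the rising risk of being killed as the
    # predator closes in. Functional forms and constants are assumptions.
    import numpy as np

    F0, b_max, escape_cost = 10.0, 2.0, 0.5     # initial fitness, max benefit, cost of fleeing
    d = np.linspace(0.0, 30.0, 301)             # candidate flight initiation distances (m)

    risk = np.exp(-d / 5.0)                     # probability of being killed, high at small d
    benefit = b_max * np.exp(-d / 8.0)          # benefit gained by delaying escape
    expected_fitness = (1 - risk) * (F0 + benefit) - escape_cost

    print("optimal flight initiation distance:", d[np.argmax(expected_fitness)], "m")
    ```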

  8. Toward a formalized account of attitudes: The Causal Attitude Network (CAN) model.

    PubMed

    Dalege, Jonas; Borsboom, Denny; van Harreveld, Frenk; van den Berg, Helma; Conner, Mark; van der Maas, Han L J

    2016-01-01

    This article introduces the Causal Attitude Network (CAN) model, which conceptualizes attitudes as networks consisting of evaluative reactions and interactions between these reactions. Relevant evaluative reactions include beliefs, feelings, and behaviors toward the attitude object. Interactions between these reactions arise through direct causal influences (e.g., the belief that snakes are dangerous causes fear of snakes) and mechanisms that support evaluative consistency between related contents of evaluative reactions (e.g., people tend to align their belief that snakes are useful with their belief that snakes help maintain ecological balance). In the CAN model, the structure of attitude networks conforms to a small-world structure: evaluative reactions that are similar to each other form tight clusters, which are connected by a sparser set of "shortcuts" between them. We argue that the CAN model provides a realistic formalized measurement model of attitudes and therefore fills a crucial gap in the attitude literature. Furthermore, the CAN model provides testable predictions for the structure of attitudes and how they develop, remain stable, and change over time. Attitude strength is conceptualized in terms of the connectivity of attitude networks and we show that this provides a parsimonious account of the differences between strong and weak attitudes. We discuss the CAN model in relation to possible extensions, implications for the assessment of attitudes, and possibilities for further study. (c) 2015 APA, all rights reserved.
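
    The small-world picture can be sketched with a standard Watts-Strogatz graph, using overall connectivity as a stand-in for attitude strength; the graph sizes and the strength proxy below are illustrative assumptions rather than the CAN model's formal measurement machinery:

    ```python
    # Sketch of the network picture the CAN model uses: evaluative reactions as nodes
    # in a small-world graph, with overall connectivity standing in for attitude
    # strength. The graphs and the strength proxy are illustrative.
    import networkx as nx

    weak = nx.connected_watts_strogatz_graph(n=20, k=4, p=0.1, seed=1)    # sparsely connected attitude
    strong = nx.connected_watts_strogatz_graph(n=20, k=8, p=0.1, seed=1)  # densely connected attitude

    for label, g in [("weak", weak), ("strong", strong)]:
        print(label,
              "mean degree:", 2 * g.number_of_edges() / g.number_of_nodes(),
              "clustering:", round(nx.average_clustering(g), 2),
              "path length:", round(nx.average_shortest_path_length(g), 2))
    ```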

  9. Comparison on testability of visual acuity, stereo acuity and colour vision tests between children with learning disabilities and children without learning disabilities in government primary schools

    PubMed Central

    Abu Bakar, Nurul Farhana; Chen, Ai-Hong

    2014-01-01

    Context: Children with learning disabilities might have difficulty communicating effectively and giving reliable responses as required in various visual function testing procedures. Aims: The purpose of this study was to compare the testability of visual acuity using the modified Early Treatment Diabetic Retinopathy Study (ETDRS) and Cambridge Crowding Cards, stereo acuity using Lang Stereo test II and Butterfly stereo tests and colour perception using Colour Vision Test Made Easy (CVTME) and Ishihara's Test for Colour Deficiency (Ishihara Test) between children in mainstream classes and children with learning disabilities in special education classes in government primary schools. Materials and Methods: A total of 100 primary school children (50 children from mainstream classes and 50 children from special education classes) matched in age were recruited in this cross-sectional comparative study. The testability was determined by the percentage of children who were able to give reliable responses as required by the respective tests. ‘Unable to test’ was defined as inappropriate response or uncooperative despite best efforts of the screener. Results: The testability of the modified ETDRS, Butterfly stereo test and Ishihara test for respective visual function tests were found lower among children in special education classes (P < 0.001) but not in Cambridge Crowding Cards, Lang Stereo test II and CVTME. Conclusion: Non-verbal or “matching” approaches were found to be superior in testing visual functions in children with learning disabilities. Modifications of vision testing procedures are essential for children with learning disabilities. PMID:24008790

  10. Moses Lake Fishery Restoration Project : FY 1999 Annual Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None given

    2000-12-01

    The Moses Lake Project consists of 3 phases. Phase 1 is the assessment of all currently available physical and biological information, the collection of baseline biological data, the formulation of testable hypotheses, and the development of a detailed study plan to test the hypotheses. Phase 2 is dedicated to the implementation of the study plan including data collection, hypotheses testing, and the formulation of a management plan. Phase 3 of the project is the implementation of the management plan, monitoring and evaluation of the implemented recommendations. The project intends to restore the failed recreational fishery for panfish species (black crappie, bluegill and yellow perch) in Moses Lake as off-site mitigation for lost recreational fishing opportunities for anadromous species in the upper Columbia River. This report summarizes the results of Phase 1 investigations and presents the study plan directed at initiating Phase 2 of the project. Phase 1 of the project culminates with the formulation of testable hypotheses directed at investigating possible limiting factors to the production of panfish in Moses Lake. The limiting factors to be investigated will include water quality, habitat quantity and quality, food limitations, competition, recruitment, predation, over-harvest, environmental requirements, and the physical and chemical limitations of the system in relation to the fishes.

  11. Modelling nutrition across organizational levels: from individuals to superorganisms.

    PubMed

    Lihoreau, Mathieu; Buhl, Jerome; Charleston, Michael A; Sword, Gregory A; Raubenheimer, David; Simpson, Stephen J

    2014-10-01

    The Geometric Framework for nutrition has been increasingly used to describe how individual animals regulate their intake of multiple nutrients to maintain target physiological states maximizing growth and reproduction. However, only a few studies have considered the potential influences of the social context in which these nutritional decisions are made. Social insects, for instance, have evolved extreme levels of nutritional interdependence in which food collection, processing, storage and disposal are performed by different individuals with different nutritional needs. These social interactions considerably complicate nutrition and raise the question of how nutrient regulation is achieved at multiple organizational levels, by individuals and groups. Here, we explore the connections between individual- and collective-level nutrition by developing a modelling framework integrating concepts of nutritional geometry into individual-based models. Using this approach, we investigate how simple nutritional interactions between individuals can mediate a range of emergent collective-level phenomena in social arthropods (insects and spiders) and provide examples of novel and empirically testable predictions. We discuss how our approach could be expanded to a wider range of species and social systems. Copyright © 2014 Elsevier Ltd. All rights reserved.
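
    The coupling of nutritional geometry to an individual-based model can be illustrated with agents that, at each feeding bout, eat whichever food moves their cumulative intake closest to a personal protein:carbohydrate target. Foods, targets and the choice rule below are toy assumptions, not the authors' model:

    ```python
    # Toy sketch of coupling the Geometric Framework to an individual-based model:
    # each agent has a protein:carbohydrate intake target and, at every step, eats a
    # bite of whichever food moves its cumulative intake closest to that target.
    import numpy as np

    foods = np.array([[0.8, 0.2],      # protein-biased food (P, C per bite)
                      [0.2, 0.8]])     # carbohydrate-biased food
    targets = np.array([[30.0, 20.0],  # three agents with different intake targets
                        [25.0, 25.0],
                        [20.0, 30.0]])

    intake = np.zeros_like(targets)
    for _ in range(50):                             # 50 feeding bouts
        for i, target in enumerate(targets):
            options = intake[i] + foods             # intake after one bite of each food
            best = np.argmin(np.linalg.norm(options - target, axis=1))
            intake[i] = options[best]

    print(np.round(intake, 1))   # each agent ends up near its own nutrient target
    ```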

  12. Minimally Disruptive Medicine: A Pragmatically Comprehensive Model for Delivering Care to Patients with Multiple Chronic Conditions

    PubMed Central

    Leppin, Aaron L.; Montori, Victor M.; Gionfriddo, Michael R.

    2015-01-01

    An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM) is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients’ lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice. PMID:27417747

  13. Seasonal resource conditions favor a summertime increase in North Pacific diatom-diazotroph associations.

    PubMed

    Follett, Christopher L; Dutkiewicz, Stephanie; Karl, David M; Inomura, Keisuke; Follows, Michael J

    2018-06-01

    In the North Pacific Subtropical Gyre (NPSG), an annual pulse of sinking organic carbon is observed at 4000 m between July and August, driven by large diatoms found in association with nitrogen fixing, heterocystous, cyanobacteria: Diatom-Diazotroph Associations (DDAs). Here we ask what drives the bloom of DDAs and present a simplified trait-based model of subtropical phototroph populations driven by observed, monthly averaged, environmental characteristics. The ratio of resource supply rates favors nitrogen fixation year round. The relative fitness of DDA traits is most competitive in early summer when the mixed layer is shallow, solar irradiance is high, and phosphorus and iron are relatively abundant. Later in the season, as light intensity drops and phosphorus is depleted, the traits of small unicellular diazotrophs become more competitive. The competitive transition happens in August, at the time when the DDA export event occurs. This seasonal dynamic is maintained when embedded in a more complex, global-scale, ecological model, and provides predictions for the extent of the North Pacific DDA bloom. The model provides a parsimonious and testable hypothesis for the stimulation of DDA blooms.

  14. Scalable and High-Throughput Execution of Clinical Quality Measures from Electronic Health Records using MapReduce and the JBoss® Drools Engine

    PubMed Central

    Peterson, Kevin J.; Pathak, Jyotishman

    2014-01-01

    Automated execution of electronic Clinical Quality Measures (eCQMs) from electronic health records (EHRs) on large patient populations remains a significant challenge, and the testability, interoperability, and scalability of measure execution are critical. The High Throughput Phenotyping (HTP; http://phenotypeportal.org) project aligns with these goals by using the standards-based HL7 Health Quality Measures Format (HQMF) and Quality Data Model (QDM) for measure specification, as well as Common Terminology Services 2 (CTS2) for semantic interpretation. The HQMF/QDM representation is automatically transformed into a JBoss® Drools workflow, enabling horizontal scalability via clustering and MapReduce algorithms. Using Project Cypress, automated verification metrics can then be produced. Our results show linear scalability for nine executed 2014 Center for Medicare and Medicaid Services (CMS) eCQMs for eligible professionals and hospitals for >1,000,000 patients, and verified execution correctness of 96.4% based on Project Cypress test data of 58 eCQMs. PMID:25954459
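
    The execution pattern described, mapping a measure's numerator/denominator logic over individual patient records and then reducing the counts, can be sketched in plain Python. The patient records, field names, and the single toy measure below are invented for illustration; the real pipeline compiles HQMF/QDM measure definitions into Drools rules and runs them over EHR data on a cluster.

        from collections import defaultdict

        # Hypothetical patient records; real eCQM execution works over EHR data
        # expressed against the Quality Data Model, not simple dicts like these.
        patients = [
            {"id": 1, "age": 67, "diabetic": True,  "hba1c": 8.2},
            {"id": 2, "age": 54, "diabetic": True,  "hba1c": 6.9},
            {"id": 3, "age": 71, "diabetic": False, "hba1c": 5.4},
        ]

        def map_phase(patient):
            """Emit (measure, (numerator, denominator)) pairs for one patient."""
            in_denominator = patient["diabetic"]
            in_numerator = in_denominator and patient["hba1c"] > 8.0   # poor control
            return [("toy_diabetes_poor_control", (int(in_numerator), int(in_denominator)))]

        def reduce_phase(pairs):
            """Sum numerator/denominator counts per measure."""
            totals = defaultdict(lambda: [0, 0])
            for measure, (num, den) in pairs:
                totals[measure][0] += num
                totals[measure][1] += den
            return dict(totals)

        if __name__ == "__main__":
            mapped = [kv for p in patients for kv in map_phase(p)]
            results = reduce_phase(mapped)
            for measure, (num, den) in results.items():
                print(f"{measure}: {num}/{den} = {num / den:.1%}")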

  15. Xylella genomics and bacterial pathogenicity to plants.

    PubMed

    Dow, J M; Daniels, M J

    2000-12-01

    Xylella fastidiosa, a pathogen of citrus, is the first plant pathogenic bacterium for which the complete genome sequence has been published. Inspection of the sequence reveals high relatedness to many genes of other pathogens, notably Xanthomonas campestris. Based on this, we suggest that Xylella possesses certain easily testable properties that contribute to pathogenicity. We also present some general considerations for deriving information on pathogenicity from bacterial genomics. Copyright 2000 John Wiley & Sons, Ltd.

  16. An evolutionary scenario for the origin of flowers.

    PubMed

    Frohlich, Michael W

    2003-07-01

    The Mostly Male theory is the first to use evidence from gene phylogenies, genetics, modern plant morphology and fossils to explain the evolutionary origin of flowers. It proposes that flower organization derives more from the male structures of ancestral gymnosperms than from female structures. The theory arose from a hypothesis-based study. Such studies are the most likely to generate testable evolutionary scenarios, which should be the ultimate goal of evo-devo.

  17. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    PubMed

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  18. Making robust policy decisions using global biodiversity indicators.

    PubMed

    Nicholson, Emily; Collen, Ben; Barausse, Alberto; Blanchard, Julia L; Costelloe, Brendan T; Sullivan, Kathryn M E; Underwood, Fiona M; Burn, Robert W; Fritz, Steffen; Jones, Julia P G; McRae, Louise; Possingham, Hugh P; Milner-Gulland, E J

    2012-01-01

    In order to influence global policy effectively, conservation scientists need to be able to provide robust predictions of the impact of alternative policies on biodiversity and measure progress towards goals using reliable indicators. We present a framework for using biodiversity indicators predictively to inform policy choices at a global level. The approach is illustrated with two case studies in which we project forwards the impacts of feasible policies on trends in biodiversity and in relevant indicators. The policies are based on targets agreed at the Convention on Biological Diversity (CBD) meeting in Nagoya in October 2010. The first case study compares protected area policies for African mammals, assessed using the Red List Index; the second example uses the Living Planet Index to assess the impact of a complete halt, versus a reduction, in bottom trawling. In the protected areas example, we find that the indicator can aid in decision-making because it is able to differentiate between the impacts of the different policies. In the bottom trawling example, the indicator exhibits some counter-intuitive behaviour, due to over-representation of some taxonomic and functional groups in the indicator, and contrasting impacts of the policies on different groups caused by trophic interactions. Our results support the need for further research on how to use predictive models and indicators to credibly track trends and inform policy. To be useful and relevant, scientists must make testable predictions about the impact of global policy on biodiversity to ensure that targets such as those set at Nagoya catalyse effective and measurable change.

  19. Making Robust Policy Decisions Using Global Biodiversity Indicators

    PubMed Central

    Nicholson, Emily; Collen, Ben; Barausse, Alberto; Blanchard, Julia L.; Costelloe, Brendan T.; Sullivan, Kathryn M. E.; Underwood, Fiona M.; Burn, Robert W.; Fritz, Steffen; Jones, Julia P. G.; McRae, Louise; Possingham, Hugh P.; Milner-Gulland, E. J.

    2012-01-01

    In order to influence global policy effectively, conservation scientists need to be able to provide robust predictions of the impact of alternative policies on biodiversity and measure progress towards goals using reliable indicators. We present a framework for using biodiversity indicators predictively to inform policy choices at a global level. The approach is illustrated with two case studies in which we project forwards the impacts of feasible policies on trends in biodiversity and in relevant indicators. The policies are based on targets agreed at the Convention on Biological Diversity (CBD) meeting in Nagoya in October 2010. The first case study compares protected area policies for African mammals, assessed using the Red List Index; the second example uses the Living Planet Index to assess the impact of a complete halt, versus a reduction, in bottom trawling. In the protected areas example, we find that the indicator can aid in decision-making because it is able to differentiate between the impacts of the different policies. In the bottom trawling example, the indicator exhibits some counter-intuitive behaviour, due to over-representation of some taxonomic and functional groups in the indicator, and contrasting impacts of the policies on different groups caused by trophic interactions. Our results support the need for further research on how to use predictive models and indicators to credibly track trends and inform policy. To be useful and relevant, scientists must make testable predictions about the impact of global policy on biodiversity to ensure that targets such as those set at Nagoya catalyse effective and measurable change. PMID:22815938

  20. Attack of the Killer Fungus: A Hypothesis-Driven Lab Module †

    PubMed Central

    Sato, Brian K.

    2013-01-01

    Discovery-driven experiments in undergraduate laboratory courses have been shown to increase student learning and critical thinking abilities. To this end, a lab module involving worm capture by a nematophagous fungus was developed. The goals of this module are to enhance scientific understanding of the regulation of worm capture by soil-dwelling fungi and for students to attain a set of established learning goals, including the ability to develop a testable hypothesis and search for primary literature for data analysis, among others. Students in a ten-week majors lab course completed the lab module and generated novel data as well as data that agrees with the published literature. In addition, learning gains were achieved as seen through a pre-module and post-module test, student self-assessment, class exam, and lab report. Overall, this lab module enables students to become active participants in the scientific method while contributing to the understanding of an ecologically relevant model organism. PMID:24358387

  1. Multistate matrix population model to assess the contributions and impacts on population abundance of domestic cats in urban areas including owned cats, unowned cats, and cats in shelters

    PubMed Central

    Coe, Jason B.

    2018-01-01

    Concerns over cat homelessness, over-taxed animal shelters, public health risks, and environmental impacts have raised attention on urban-cat populations. To truly understand cat population dynamics, the collective population of owned cats, unowned cats, and cats in the shelter system must be considered simultaneously, because each subpopulation contributes differently to the overall population of cats in a community (e.g., differences in neuter rates, differences in impacts on wildlife) and cats move among categories through human interventions (e.g., adoption, abandonment). To assess this complex socio-ecological system, we developed a multistate matrix model of cats in urban areas that includes owned cats, unowned cats (free-roaming and feral), and cats that move through the shelter system. Our model requires three inputs (location, number of human dwellings, and urban area) to provide testable predictions of cat abundance for any city in North America. Model-predicted population sizes of unowned cats in seven Canadian cities were not significantly different from published estimates (p = 0.23). Model-predicted proportions of sterile feral cats did not match observed sterile cat proportions for six USA cities (p = 0.001). Using a case study from Guelph, Ontario, Canada, we compared model-predicted to empirical estimates of cat abundance in each subpopulation and used perturbation analysis to calculate the relative sensitivity of vital rates to cat abundance, demonstrating how management or mismanagement in one portion of the population could have repercussions across all portions of the network. Our study provides a general framework to consider cat population abundance in urban areas and, with refinement that includes city-specific parameter estimates and modeling, could provide a better understanding of population dynamics of cats in our communities. PMID:29489854
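
    The projection step of a multistate matrix model can be sketched briefly in Python. The three stages (owned, unowned, shelter) follow the abstract, but the transition and vital rates below are invented for illustration; the actual model derives its parameters from location, number of human dwellings, and urban area.

        import numpy as np

        # States: owned, unowned (free-roaming/feral), shelter.
        # Entry A[i, j] combines illustrative annual survival, reproduction, and
        # human-mediated movement (adoption, abandonment, intake) from state j to i.
        A = np.array([
            [0.90, 0.02, 0.30],   # -> owned   (retention, rare taming, adoption)
            [0.05, 1.10, 0.05],   # -> unowned (abandonment, survival + births, escape)
            [0.03, 0.10, 0.40],   # -> shelter (surrender, intake, carry-over)
        ])

        def project(n0, years=10):
            """Project the stage-structured population forward in time."""
            n = np.asarray(n0, dtype=float)
            trajectory = [n]
            for _ in range(years):
                n = A @ n
                trajectory.append(n)
            return np.array(trajectory)

        if __name__ == "__main__":
            traj = project([50_000, 10_000, 2_000], years=10)
            print("year 10 abundance (owned, unowned, shelter):", traj[-1].round(0))
            # Asymptotic growth rate = dominant eigenvalue of the projection matrix.
            lam = max(abs(np.linalg.eigvals(A)))
            print("asymptotic growth rate:", round(float(lam), 3))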

  2. Multistate matrix population model to assess the contributions and impacts on population abundance of domestic cats in urban areas including owned cats, unowned cats, and cats in shelters.

    PubMed

    Flockhart, D T Tyler; Coe, Jason B

    2018-01-01

    Concerns over cat homelessness, over-taxed animal shelters, public health risks, and environmental impacts have raised attention on urban-cat populations. To truly understand cat population dynamics, the collective population of owned cats, unowned cats, and cats in the shelter system must be considered simultaneously, because each subpopulation contributes differently to the overall population of cats in a community (e.g., differences in neuter rates, differences in impacts on wildlife) and cats move among categories through human interventions (e.g., adoption, abandonment). To assess this complex socio-ecological system, we developed a multistate matrix model of cats in urban areas that includes owned cats, unowned cats (free-roaming and feral), and cats that move through the shelter system. Our model requires three inputs (location, number of human dwellings, and urban area) to provide testable predictions of cat abundance for any city in North America. Model-predicted population sizes of unowned cats in seven Canadian cities were not significantly different from published estimates (p = 0.23). Model-predicted proportions of sterile feral cats did not match observed sterile cat proportions for six USA cities (p = 0.001). Using a case study from Guelph, Ontario, Canada, we compared model-predicted to empirical estimates of cat abundance in each subpopulation and used perturbation analysis to calculate the relative sensitivity of vital rates to cat abundance, demonstrating how management or mismanagement in one portion of the population could have repercussions across all portions of the network. Our study provides a general framework to consider cat population abundance in urban areas and, with refinement that includes city-specific parameter estimates and modeling, could provide a better understanding of population dynamics of cats in our communities.

  3. Modeling Physiological Processes That Relate Toxicant Exposure and Bacterial Population Dynamics

    PubMed Central

    Klanjscek, Tin; Nisbet, Roger M.; Priester, John H.; Holden, Patricia A.

    2012-01-01

    Quantifying effects of toxicant exposure on metabolic processes is crucial to predicting microbial growth patterns in different environments. Mechanistic models, such as those based on Dynamic Energy Budget (DEB) theory, can link physiological processes to microbial growth. Here we expand the DEB framework to include explicit consideration of the role of reactive oxygen species (ROS). Extensions considered are: (i) additional terms in the equation for the “hazard rate” that quantifies mortality risk; (ii) a variable representing environmental degradation; (iii) a mechanistic description of toxic effects linked to increase in ROS production and aging acceleration, and to non-competitive inhibition of transport channels; (iv) a new representation of the “lag time” based on energy required for acclimation. We estimate model parameters using calibrated Pseudomonas aeruginosa optical density growth data for seven levels of cadmium exposure. The model reproduces growth patterns for all treatments with a single common parameter set, and bacterial growth for treatments of up to 150 mg(Cd)/L can be predicted reasonably well using parameters estimated from cadmium treatments of 20 mg(Cd)/L and lower. Our approach is an important step towards connecting levels of biological organization in ecotoxicology. The presented model reveals possible connections between processes that are not obvious from purely empirical considerations, enables validation and hypothesis testing by creating testable predictions, and identifies research required to further develop the theory. PMID:22328915

  4. Cortical Surround Interactions and Perceptual Salience via Natural Scene Statistics

    PubMed Central

    Coen-Cagli, Ruben; Dayan, Peter; Schwartz, Odelia

    2012-01-01

    Spatial context in images induces perceptual phenomena associated with salience and modulates the responses of neurons in primary visual cortex (V1). However, the computational and ecological principles underlying contextual effects are incompletely understood. We introduce a model of natural images that includes grouping and segmentation of neighboring features based on their joint statistics, and we interpret the firing rates of V1 neurons as performing optimal recognition in this model. We show that this leads to a substantial generalization of divisive normalization, a computation that is ubiquitous in many neural areas and systems. A main novelty in our model is that the influence of the context on a target stimulus is determined by their degree of statistical dependence. We optimized the parameters of the model on natural image patches, and then simulated neural and perceptual responses on stimuli used in classical experiments. The model reproduces some rich and complex response patterns observed in V1, such as the contrast dependence, orientation tuning and spatial asymmetry of surround suppression, while also allowing for surround facilitation under conditions of weak stimulation. It also mimics the perceptual salience produced by simple displays, and leads to readily testable predictions. Our results provide a principled account of orientation-based contextual modulation in early vision and its sensitivity to the homogeneity and spatial arrangement of inputs, and lends statistical support to the theory that V1 computes visual salience. PMID:22396635
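
    The computation that the model generalizes, divisive normalization, can be sketched compactly in Python. The drives, surround weights, and constants below are illustrative; the paper's statistically gated variant, in which the surround's influence depends on its inferred dependence with the target, is not reproduced here.

        import numpy as np

        def divisive_normalization(drives, weights, sigma=1.0, exponent=2.0):
            """Canonical normalization: each unit's drive is divided by a weighted
            pool of surrounding drives plus a saturation constant."""
            drives = np.asarray(drives, dtype=float) ** exponent
            pool = weights @ drives
            return drives / (sigma ** exponent + pool)

        if __name__ == "__main__":
            # Five hypothetical orientation-tuned units; unit 2 is the "target".
            drives = [1.0, 2.0, 5.0, 2.0, 1.0]
            # Uniform surround weights (no self-weight on the diagonal).
            w = np.full((5, 5), 0.2) - np.diag([0.2] * 5)
            print("normalized responses:", divisive_normalization(drives, w).round(3))
            # A stronger surround pool suppresses the target more, mimicking
            # surround suppression; a weak pool leaves responses near-unnormalized.
            print("stronger surround:   ", divisive_normalization(drives, 3 * w).round(3))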

  5. Electrical test prediction using hybrid metrology and machine learning

    NASA Astrophysics Data System (ADS)

    Breton, Mary; Chao, Robin; Muthinti, Gangadhara Raja; de la Peña, Abraham A.; Simon, Jacques; Cepler, Aron J.; Sendelbach, Matthew; Gaudiello, John; Emans, Susan; Shifrin, Michael; Etzioni, Yoav; Urenski, Ronen; Lee, Wei Ti

    2017-03-01

    Electrical test measurement in the back-end of line (BEOL) is crucial for wafer and die sorting as well as for comparing intended process splits. Any in-line, nondestructive technique in the process flow that can accurately predict these measurements can significantly improve mean-time-to-detect (MTTD) of defects and improve cycle times for yield and process learning. Measuring after BEOL metallization is commonly done for process control and learning, particularly with scatterometry (also called OCD, Optical Critical Dimension), which can solve for multiple profile parameters such as metal line height or sidewall angle and does so within patterned regions. This gives scatterometry an advantage over inline microscopy-based techniques, which provide top-down information, since such techniques can be insensitive to sidewall variations hidden under the metal fill of the trench. But when faced with correlation to electrical test measurements that are specific to the BEOL processing, both techniques face the additional challenge of sampling. Microscopy-based techniques are sampling-limited by their small probe size, while scatterometry is traditionally limited (for microprocessors) to scribe targets that mimic device ground rules but are not necessarily designed to be electrically testable. A solution to this sampling challenge lies in a fast reference-based machine learning capability that allows direct OCD measurement of the electrically testable structures, even when they are not OCD-compatible. By incorporating such direct OCD measurements, correlation to, and therefore prediction of, the resistance of BEOL electrical test structures is significantly improved. Improvements in prediction capability for multiple types of in-die electrically testable device structures are demonstrated. To further improve the quality of the prediction of the electrical resistance measurements, hybrid metrology using the OCD measurements as well as X-ray fluorescence (XRF) metrology is used. Hybrid metrology is the practice of combining information from multiple sources in order to enable or improve the measurement of one or more critical parameters. Here, the XRF measurements are used to detect subtle changes in barrier layer composition and thickness that can have second-order effects on the electrical resistance of the test structures. By accounting for such effects with the aid of the X-ray-based measurements, further improvement in the OCD correlation to electrical test measurements is achieved. Using both types of solution (incorporation of fast reference-based machine learning on non-OCD-compatible test structures, and hybrid metrology combining OCD with XRF), improvement in BEOL cycle-time learning could be accomplished through improved prediction capability.
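
    The idea of predicting electrical resistance from combined OCD and XRF measurements can be illustrated with a toy ridge regression in Python. The synthetic features stand in for OCD profile parameters (line height, width, sidewall angle) and XRF barrier measurements, and the generating equation is invented; the actual work uses a fast reference-based machine learning engine on real test structures, so this only shows why adding the XRF channel can tighten the resistance prediction.

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic data: OCD features (height, width, sidewall angle) and XRF
        # features (barrier thickness, composition proxy) -> measured resistance.
        n = 200
        ocd = rng.normal([50.0, 20.0, 88.0], [2.0, 1.0, 1.5], size=(n, 3))
        xrf = rng.normal([3.0, 0.5], [0.2, 0.05], size=(n, 2))
        # Hypothetical ground truth: resistance falls with conductor cross-section
        # and rises with barrier thickness (a second-order effect XRF can capture).
        resistance = (1000.0 / (ocd[:, 0] * ocd[:, 1])
                      + 0.05 * xrf[:, 0] + rng.normal(0, 0.01, n))

        def ridge_fit(X, y, lam=1e-3):
            """Ordinary ridge regression with an unpenalized intercept column."""
            Xb = np.column_stack([np.ones(len(X)), X])
            I = np.eye(Xb.shape[1])
            I[0, 0] = 0.0
            return np.linalg.solve(Xb.T @ Xb + lam * I, Xb.T @ y)

        def predict(beta, X):
            return np.column_stack([np.ones(len(X)), X]) @ beta

        if __name__ == "__main__":
            cases = [("OCD only     ", ocd), ("hybrid OCD+XRF", np.hstack([ocd, xrf]))]
            for name, X in cases:
                beta = ridge_fit(X, resistance)
                resid = resistance - predict(beta, X)
                print(name, "RMS error:", round(float(np.sqrt(np.mean(resid ** 2))), 4))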

  6. A parsimonious modular approach to building a mechanistic belowground carbon and nitrogen model

    NASA Astrophysics Data System (ADS)

    Abramoff, Rose Z.; Davidson, Eric A.; Finzi, Adrien C.

    2017-09-01

    Soil decomposition models range from simple empirical functions to those that represent physical, chemical, and biological processes. Here we develop a parsimonious, modular C and N cycle model, the Dual Arrhenius Michaelis-Menten-Microbial Carbon and Nitrogen Physiology model (DAMM-MCNiP), that generates testable hypotheses regarding the effect of temperature, moisture, and substrate supply on C and N cycling. We compared this model to DAMM alone and to an empirical model of heterotrophic respiration based on Harvard Forest data. We show that while different model structures explain similar amounts of variation in respiration, they differ in their ability to infer processes that affect C flux. We applied DAMM-MCNiP to explain an observed seasonal hysteresis in the relationship between respiration and temperature and show, using an exudation simulation, that the strength of the priming effect depended on the stoichiometry of the inputs. Low C:N inputs stimulated priming of soil organic matter decomposition, but high C:N inputs were preferentially utilized by microbes as a C source with limited priming. DAMM-MCNiP's simultaneous representation of temperature, moisture, substrate supply, enzyme activity, and microbial growth processes is unique among microbial physiology models, yet the model is sufficiently parsimonious that it could be incorporated into larger-scale models of C and N cycling.
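
    The DAMM core of the model combines an Arrhenius temperature response with dual Michaelis-Menten kinetics for soluble substrate and oxygen, each modulated by soil moisture. A Python sketch of that flux follows, with illustrative parameter values; the MCNiP extensions (enzyme and microbial biomass pools and the nitrogen coupling) are not reproduced here.

        import numpy as np

        R_GAS = 8.314e-3  # kJ mol^-1 K^-1

        def damm_flux(temp_c, soil_moisture, s_total=0.05, o2_air=0.209,
                      alpha=5.0e10, ea=65.0, km_s=1.0e-4, km_o2=0.1,
                      d_liq=3.17, d_gas=1.67, porosity=0.6):
            """Dual Arrhenius / Michaelis-Menten decomposition flux (illustrative values).

            Vmax follows an Arrhenius temperature response; substrate at the reaction
            site scales with soil moisture (diffusion in water films), and O2 with
            air-filled pore space (diffusion in the gas phase)."""
            temp_k = temp_c + 273.15
            vmax = alpha * np.exp(-ea / (R_GAS * temp_k))
            substrate = s_total * d_liq * soil_moisture ** 3
            o2 = o2_air * d_gas * (porosity - soil_moisture) ** (4.0 / 3.0)
            return vmax * (substrate / (km_s + substrate)) * (o2 / (km_o2 + o2))

        if __name__ == "__main__":
            for t in (5.0, 15.0, 25.0):
                for m in (0.2, 0.35, 0.5):
                    print(f"T={t:4.1f} C, moisture={m:.2f} -> "
                          f"relative flux {damm_flux(t, m):.3f}")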

  7. Color vision deficiency in preschool children: the multi-ethnic pediatric eye disease study.

    PubMed

    Xie, John Z; Tarczy-Hornoch, Kristina; Lin, Jesse; Cotter, Susan A; Torres, Mina; Varma, Rohit

    2014-07-01

    To determine the sex- and ethnicity-specific prevalence of color vision deficiency (CVD) in black, Asian, Hispanic, and non-Hispanic white preschool children. Population-based, cross-sectional study. The Multi-Ethnic Pediatric Eye Disease Study is a population-based evaluation of the prevalence of vision disorders in children in Southern California. A total of 5960 subjects 30 to 72 months of age were recruited for the study, of whom 4177 were able to complete color vision testing (1265 black, 812 Asian, 1280 Hispanic, and 820 non-Hispanic white). Color vision testing was performed using Color Vision Testing Made Easy color plates (Home Vision Care, Gulf Breeze, FL), and diagnostic confirmatory testing was performed using the Waggoner HRR Diagnostic Test color plates (Home Vision Care). Testability of color vision in preschool children between 30 and 72 months of age and prevalence of CVD stratified by age, sex, and ethnicity. Testability was 17% in children younger than 37 months of age, increasing to 57% in children 37 to 48 months of age, 89% in children 49 to 60 months of age, and 98% in children 61 to 72 months of age. The prevalence of CVD among boys was 1.4% for black, 3.1% for Asian, 2.6% for Hispanic, and 5.6% for non-Hispanic white children; the prevalence in girls was 0.0% to 0.5% for all ethnicities. The ethnic difference in CVD was statistically significant between black and non-Hispanic white children (P = 0.0003) and between Hispanic and non-Hispanic white children (P = 0.02). In boys, most CVD cases were either deutan (51%) or protan (34%); 32% were classified as mild, 15% as moderate, and 41% as severe. Testability for CVD in preschool children is high by 4 years of age. The prevalence of CVD in preschool boys varies by ethnicity, with the highest prevalence in non-Hispanic white and lowest in black children. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  8. Exploring Operational Test and Evaluation of Unmanned Aircraft Systems: A Qualitative Case Study

    NASA Astrophysics Data System (ADS)

    Saliceti, Jose A.

    The purpose of this qualitative case study was to explore and identify strategies that may potentially remedy operational test and evaluation procedures used to evaluate Unmanned Aircraft Systems (UAS) technology. The sample for analysis consisted of organizations testing and evaluating UASs (e.g., U.S. Air Force, U.S. Navy, U.S. Army, U.S. Marine Corps, U.S. Coast Guard, and Customs and Border Protection). A purposeful sampling technique was used to select 15 subject matter experts in the field of operational test and evaluation of UASs. A questionnaire was provided to participants to construct a descriptive and robust study. Analysis of responses revealed themes related to each research question. Findings revealed operational testers utilized requirements documents to extrapolate measures for testing UAS technology and develop critical operational issues. The requirements documents were (a) developed without the contribution of stakeholders and operational testers, (b) developed with vague or unrealistic measures, and (c) developed without a systematic method to derive requirements from mission tasks. Four approaches are recommended to develop testable operational requirements and assist operational testers: (a) use a mission task analysis tool to derive requirements for mission essential tasks for the system, (b) exercise collaboration among stakeholders and testers to ensure testable operational requirements based on mission tasks, (c) ensure testable measures are used in requirements documents, and (d) create a repository list of critical operational issues by mission areas. The preparation of operational test and evaluation processes for UAS technology is not uniform across testers. The processes in place are not standardized; thus, test plan preparation and reporting differ among participants. A standard method should therefore be used when preparing and reporting on UAS technology tests. Using a systematic process, such as mission-based test design, resonated among participants as an analytical method to link UAS mission tasks and measures of performance to the capabilities of the system under test when developing operational test plans. Further research should examine systems engineering designs for a system requirements traceability matrix of mission tasks and subtasks while using an analysis tool that adequately evaluates UASs with an acceptable level of confidence in the results.

  9. Trajectory Recognition as the Basis for Object Individuation: A Functional Model of Object File Instantiation and Object-Token Encoding

    PubMed Central

    Fields, Chris

    2011-01-01

    The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially disconnected stimuli as continuously existing objects. Based on relevant anatomical, functional, and developmental data, a functional model is constructed that bases visual object individuation on the recognition of temporal sequences of apparent center-of-mass positions that are specifically identified as trajectories by dedicated “trajectory recognition networks” downstream of the medial–temporal motion-detection area. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual differences in the recognition, abstraction, and encoding of trajectory information are expected to generate distinct object persistence judgments and object recognition abilities. Dominance of trajectory information over feature information in stored object tokens during early infancy, in particular, is expected to disrupt the ability to re-identify human and other individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders. PMID:21716599

  10. Pro-sustainability choices and child deaths averted: from project experience to investment strategy.

    PubMed

    Sarriot, Eric G; Swedberg, Eric A; Ricca, James G

    2011-05-01

    The pursuit of the Millennium Development Goals and advancing the 'global health agenda' demand the achievement of health impact at scale through efficient investments. We have previously offered that sustainability-a necessary condition for successful expansion of programmes-can be addressed in practical terms. Based on benchmarks from actual child survival projects, we assess the expected impact of translating pro-sustainability choices into investment strategies. We review the experience of Save the Children US in Guinea in terms of investment, approach to sustainability and impact. It offers three benchmarks for impact: Entry project (21 lives saved of children under age five per US$100 000), Expansion project (37 LS/US$100k), and Continuation project (100 LS/US$100k). Extrapolating this experience, we model the impact of a traditional investment scenario against a pro-sustainability scenario and compare the deaths averted per dollar spent over five project cycles. The impact per dollar spent on a pro-sustainability strategy is 3.4 times that of a traditional one over the long run (range from 2.2 to 5.7 times in a sensitivity analysis). This large efficiency differential between two investment approaches offers a testable hypothesis for large-scale/long-term studies. The 'bang for the buck' of health programmes could be greatly increased by following a pro-sustainability investment strategy.
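
    The comparison can be made concrete with back-of-envelope arithmetic using the three benchmarks quoted above (21, 37, and 100 lives saved per US$100,000 for entry, expansion, and continuation projects). How the five cycles are sequenced under each strategy is an assumption made here for illustration, so the resulting ratio differs from the paper's 3.4.

        # Benchmarks quoted in the abstract: lives saved per US$100,000 invested.
        ENTRY, EXPANSION, CONTINUATION = 21, 37, 100

        # Hypothetical sequencing of five equal US$100k cycles.
        # Traditional: keep starting entry/expansion projects in new areas.
        traditional = [ENTRY, EXPANSION, ENTRY, EXPANSION, ENTRY]
        # Pro-sustainability: shift to continuation once local systems are in place.
        pro_sustainability = [ENTRY, EXPANSION, CONTINUATION, CONTINUATION, CONTINUATION]

        if __name__ == "__main__":
            trad, pro = sum(traditional), sum(pro_sustainability)
            print(f"traditional strategy:        {trad} lives saved per US$500k")
            print(f"pro-sustainability strategy: {pro} lives saved per US$500k")
            print(f"efficiency ratio: {pro / trad:.2f}x")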

  11. Towards a universal trait-based model of terrestrial primary production

    NASA Astrophysics Data System (ADS)

    Wang, H.; Prentice, I. C.; Cornwell, W.; Keenan, T. F.; Davis, T.; Wright, I. J.; Evans, B. J.; Peng, C.

    2015-12-01

    Systematic variations of plant traits along environmental gradients have been observed for decades. For example, the tendencies of leaf nitrogen per unit area to increase, and of the leaf-internal to ambient CO2 concentration ratio (ci:ca) to decrease, with aridity are well established. But ecosystem models typically represent trait variation based purely on empirical relationships, or on untested conjectures, or not at all. Neglect of quantitative trait variation and its adaptive significance probably contributes to the persistent large uncertainties among models in predicting the response of the carbon cycle to environmental change. However, advances in ecological theory and the accumulation of extensive data sets during recent decades suggest that theoretically based and testable predictions of trait variation could be achieved. Based on well-established ecophysiological principles and consideration of the adaptive significance of traits, we propose universal relationships between photosynthetic traits (ci:ca, carbon fixation capacity, and the ratio of electron transport capacity to carbon fixation capacity) and primary environmental variables, which capture observed trait variations both within and between plant functional types. Moreover, incorporating these traits into the standard model of C3 photosynthesis allows gross primary production (GPP) of natural vegetation to be predicted by a single equation with just two free parameters, which can be estimated from independent observations. The resulting model performs as well as much more complex models. Our results provide a fresh perspective with potentially high reward: the possibility of a deeper understanding of the relationships between plant traits and environment, simpler and more robust and reliable representation of land processes in Earth system models, and thus improved predictability for biosphere-atmosphere interactions and climate feedbacks.

  12. Tests of the Giant Impact Hypothesis

    NASA Technical Reports Server (NTRS)

    Jones, J. H.

    1998-01-01

    The giant impact hypothesis has gained popularity as a means of explaining a volatile-depleted Moon that still has a chemical affinity to the Earth. As Taylor's Axiom decrees, the best models of lunar origin are testable, but this is difficult with the giant impact model. The energy associated with the impact would be sufficient to totally melt and partially vaporize the Earth, and this means that there should be no geological vestige of earlier times. Accordingly, it is important to devise tests that may be used to evaluate the giant impact hypothesis. Three such tests are discussed here. None of these is supportive of the giant impact model, but neither do they disprove it.

  13. The polyadenylation code: a unified model for the regulation of mRNA alternative polyadenylation*

    PubMed Central

    Davis, Ryan; Shi, Yongsheng

    2014-01-01

    The majority of eukaryotic genes produce multiple mRNA isoforms with distinct 3′ ends through a process called mRNA alternative polyadenylation (APA). Recent studies have demonstrated that APA is dynamically regulated during development and in response to environmental stimuli. A number of mechanisms have been described for APA regulation. In this review, we attempt to integrate all the known mechanisms into a unified model. This model not only explains most previous results, but also provides testable predictions that will improve our understanding of the mechanistic details of APA regulation. Finally, we briefly discuss the known and putative functions of APA regulation. PMID:24793760

  14. Systems Engineering and Integration (SE and I)

    NASA Technical Reports Server (NTRS)

    Chevers, ED; Haley, Sam

    1990-01-01

    The issue of technology advancement and future space transportation vehicles is addressed. The challenge is to develop systems which can be evolved and improved in small incremental steps, where each increment reduces present cost, improves reliability, or does neither but sets the stage for a second incremental upgrade that does. Future requirements are interface standards for commercial off-the-shelf products to aid in the development of integrated facilities; an enhanced automated code generation system coupled to specification and design documentation; modeling tools that support data flow analysis; and shared project databases consisting of technical characteristics, cost information, measurement parameters, and reusable software programs. Topics addressed include: advanced avionics development strategy; risk analysis and management; tool quality management; low-cost avionics; cost estimation and benefits; computer-aided software engineering; computer systems and software safety; system testability; advanced avionics laboratories; and rapid prototyping. This presentation is represented by viewgraphs only.

  15. The ancestral flower of angiosperms and its early diversification

    PubMed Central

    Sauquet, Hervé; von Balthazar, Maria; Magallón, Susana; Doyle, James A.; Endress, Peter K.; Bailes, Emily J.; Barroso de Morais, Erica; Bull-Hereñu, Kester; Carrive, Laetitia; Chartier, Marion; Chomicki, Guillaume; Coiro, Mario; Cornette, Raphaël; El Ottra, Juliana H. L.; Epicoco, Cyril; Foster, Charles S. P.; Jabbour, Florian; Haevermans, Agathe; Haevermans, Thomas; Hernández, Rebeca; Little, Stefan A.; Löfstrand, Stefan; Luna, Javier A.; Massoni, Julien; Nadot, Sophie; Pamperl, Susanne; Prieu, Charlotte; Reyes, Elisabeth; dos Santos, Patrícia; Schoonderwoerd, Kristel M.; Sontag, Susanne; Soulebeau, Anaëlle; Staedler, Yannick; Tschan, Georg F.; Wing-Sze Leung, Amy; Schönenberger, Jürg

    2017-01-01

    Recent advances in molecular phylogenetics and a series of important palaeobotanical discoveries have revolutionized our understanding of angiosperm diversification. Yet, the origin and early evolution of their most characteristic feature, the flower, remains poorly understood. In particular, the structure of the ancestral flower of all living angiosperms is still uncertain. Here we report model-based reconstructions for ancestral flowers at the deepest nodes in the phylogeny of angiosperms, using the largest data set of floral traits ever assembled. We reconstruct the ancestral angiosperm flower as bisexual and radially symmetric, with more than two whorls of three separate perianth organs each (undifferentiated tepals), more than two whorls of three separate stamens each, and more than five spirally arranged separate carpels. Although uncertainty remains for some of the characters, our reconstruction allows us to propose a new plausible scenario for the early diversification of flowers, leading to new testable hypotheses for future research on angiosperms. PMID:28763051

  16. Mammalian DNA single-strand break repair: an X-ra(y)ted affair.

    PubMed

    Caldecott, K W

    2001-05-01

    The genetic stability of living cells is continuously threatened by the presence of endogenous reactive oxygen species and other genotoxic molecules. Of particular threat are the thousands of DNA single-strand breaks that arise in each cell, each day, both directly from disintegration of damaged sugars and indirectly from the excision repair of damaged bases. If un-repaired, single-strand breaks can be converted into double-strand breaks during DNA replication, potentially resulting in chromosomal rearrangement and genetic deletion. Consequently, cells have adopted multiple pathways to ensure the rapid and efficient removal of single-strand breaks. A general feature of these pathways appears to be the extensive employment of protein-protein interactions to stimulate both the individual component steps and the overall repair reaction. Our current understanding of DNA single-strand break repair is discussed, and testable models for the architectural coordination of this important process are presented. Copyright 2001 John Wiley & Sons, Inc.

  17. Testing Nonassociative Quantum Mechanics.

    PubMed

    Bojowald, Martin; Brahma, Suddhasattwa; Büyükçam, Umut

    2015-11-27

    The familiar concepts of state vectors and operators in quantum mechanics rely on associative products of observables. However, these notions do not apply to some exotic systems such as magnetic monopoles, which have long been known to lead to nonassociative algebras. Their quantum physics has remained obscure. This Letter presents the first derivation of potentially testable physical results in nonassociative quantum mechanics, based on effective potentials. They imply new effects which cannot be mimicked in usual quantum mechanics with standard magnetic fields.

  18. Almost periodic cellular neural networks with neutral-type proportional delays

    NASA Astrophysics Data System (ADS)

    Xiao, Songlin

    2018-03-01

    This paper presents a new result on the existence, uniqueness and generalised exponential stability of almost periodic solutions for cellular neural networks with neutral-type proportional delays and D operator. Based on some novel differential inequality techniques, a testable condition is derived to ensure that all the state trajectories of the system converge to an almost periodic solution with a positive exponential convergence rate. The effectiveness of the obtained result is illustrated by a numerical example.

  19. Continuous variation caused by genes with graduated effects.

    PubMed Central

    Matthysse, S; Lange, K; Wagener, D K

    1979-01-01

    The classical polygenic theory of inheritance postulates a large number of genes with small, and essentially similar, effects. We propose instead a model with genes of gradually decreasing effects. The resulting phenotypic distribution is not normal; if the gene effects are geometrically decreasing, it can be triangular. The joint distribution of parent and offspring genic value is calculated. The most readily testable difference between the two models is that, in the decreasing-effect model, the variance of the offspring distribution from given parents depends on the parents' genic values. The more the parents deviate from the mean, the smaller the variance of the offspring should be. In the equal-effect model the offspring variance is independent of the parents' genic values. PMID:288073
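
    The model's most readily testable prediction, that offspring variance shrinks as the parents' genic values deviate from the mean, can be checked with a small simulation in Python. The number of loci, the halving of effect sizes from locus to locus, and the allele frequencies are illustrative choices, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        N_LOCI = 10
        EFFECTS = 0.5 ** np.arange(N_LOCI)     # geometrically decreasing locus effects
        MEAN_VALUE = EFFECTS.sum()             # expected diploid genic value at p = 0.5

        def random_parent():
            """Diploid parent: two gametes, each a 0/1 allele vector."""
            return (rng.random((2, N_LOCI)) < 0.5).astype(int)

        def genic_value(diploid):
            return float((diploid.sum(axis=0) * EFFECTS).sum())

        def gamete_from(parent):
            """One gamete: a randomly chosen allele from the parent at every locus."""
            pick = rng.integers(0, 2, N_LOCI)
            return parent[pick, np.arange(N_LOCI)]

        def offspring_sd(pa, pb, n=300):
            kids = [genic_value(np.stack([gamete_from(pa), gamete_from(pb)]))
                    for _ in range(n)]
            return float(np.std(kids))

        if __name__ == "__main__":
            rows = []
            for _ in range(150):
                pa, pb = random_parent(), random_parent()
                deviation = abs((genic_value(pa) + genic_value(pb)) / 2 - MEAN_VALUE)
                rows.append((deviation, offspring_sd(pa, pb)))
            rows.sort()                        # order parent pairs by mid-parent deviation
            near = np.mean([sd for _, sd in rows[:40]])
            far = np.mean([sd for _, sd in rows[-40:]])
            print(f"offspring SD, parents near the population mean: {near:.3f}")
            print(f"offspring SD, parents far from the mean:        {far:.3f}")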

  20. Dynamical consequences of mantle heterogeneity in two-phase models of mid-ocean ridges

    NASA Astrophysics Data System (ADS)

    Katz, R. F.

    2010-12-01

    The mid-ocean ridge system, over 50,000 km in length, samples the magmatic products of a large swath of the asthenosphere. It provides our best means to assess the heterogeneity structure of the upper mantle. Interpretation of the diverse array of observations of MOR petrology, geochemistry, tomography, etc requires models that can map heterogeneity structure onto predictions testable by comparison with these observations. I report on progress to this end; in particular, I describe numerical models of coupled magma/mantle dynamics at mid-ocean ridges [1,2]. These models incorporate heterogeneity in terms of a simple, two-component thermochemical system with specified amplitude and spatial distribution. They indicate that mantle heterogeneity has significant fluid-dynamical consequences for both mantle and magmatic flow. Models show that the distribution of enrichment can lead to asymmetry in the strength of upwelling across the ridge-axis and channelised magmatic transport to the axis. Furthermore, heterogeneity can cause off-axis upwelling of partially molten diapirs, trapping of enriched melts off-axis, and re-fertilization of the mantle by pooled and refrozen melts. Predicted consequences of geochemical heterogeneity may also be considered. References: [1] Katz, RF, (2008); Magma dynamics with the Enthalpy Method: Benchmark Solutions and Magmatic Focusing at Mid-ocean Ridges. Journal of Petrology, doi: 10.1093/petrology/egn058. [2] Katz RF, (2010); Porosity-driven convection and asymmetry beneath mid-ocean ridges. Submitted to G3.

  1. Electronic design of a multichannel programmable implant for neuromuscular electrical stimulation.

    PubMed

    Arabi, K; Sawan, M A

    1999-06-01

    An advanced stimulator for neuromuscular stimulation of spinal cord injured patients has been developed. The stimulator is externally controlled and powered by a single encoded radio frequency carrier and has four independently controlled bipolar stimulation channels. It offers a wide range of reprogrammability and flexibility, and can be used in many neuromuscular electrical stimulation applications. The implant system is adaptable to the patient's needs and to future developments in stimulation algorithms by reprogramming the stimulator. The stimulator is capable of generating a wide range of stimulation waveforms and stimulation patterns and is therefore very suitable for selective nerve stimulation techniques. The reliability of the implant has been increased by using a forward error detection and correction communication protocol and by designing the chip for structural testability based on a scan-test approach. The implemented testability scheme makes it possible to verify the complete functionality of the implant before and after implantation. The stimulator's architecture is designed to be modular and therefore its different blocks can be reused as standard building blocks in the design and implementation of other neuromuscular prostheses. Design-for-low-power techniques have also been employed to reduce the power consumption of the electronic circuitry.

  2. Engineering Strategies to Decode and Enhance the Genomes of Coral Symbionts.

    PubMed

    Levin, Rachel A; Voolstra, Christian R; Agrawal, Shobhit; Steinberg, Peter D; Suggett, David J; van Oppen, Madeleine J H

    2017-01-01

    Elevated sea surface temperatures from a severe and prolonged El Niño event (2014-2016) fueled by climate change have resulted in mass coral bleaching (loss of dinoflagellate photosymbionts, Symbiodinium spp., from coral tissues) and subsequent coral mortality, devastating reefs worldwide. Genetic variation within and between Symbiodinium species strongly influences the bleaching tolerance of corals, thus recent papers have called for genetic engineering of Symbiodinium to elucidate the genetic basis of bleaching-relevant Symbiodinium traits. However, while Symbiodinium has been intensively studied for over 50 years, genetic transformation of Symbiodinium has seen little success likely due to the large evolutionary divergence between Symbiodinium and other model eukaryotes rendering standard transformation systems incompatible. Here, we integrate the growing wealth of Symbiodinium next-generation sequencing data to design tailored genetic engineering strategies. Specifically, we develop a testable expression construct model that incorporates endogenous Symbiodinium promoters, terminators, and genes of interest, as well as an internal ribosomal entry site from a Symbiodinium virus. Furthermore, we assess the potential for CRISPR/Cas9 genome editing through new analyses of the three currently available Symbiodinium genomes. Finally, we discuss how genetic engineering could be applied to enhance the stress tolerance of Symbiodinium , and in turn, coral reefs.

  3. Current challenges in fundamental physics

    NASA Astrophysics Data System (ADS)

    Egana Ugrinovic, Daniel

    The discovery of the Higgs boson at the Large Hadron Collider completed the Standard Model of particle physics. The Standard Model is a remarkably successful theory of fundamental physics, but it suffers from severe problems. It does not provide an explanation for the origin or stability of the electroweak scale nor for the origin and structure of flavor and CP violation. It predicts vanishing neutrino masses, in disagreement with experimental observations. It also fails to explain the matter-antimatter asymmetry of the universe, and it does not provide a particle candidate for dark matter. In this thesis we provide experimentally testable solutions for most of these problems and we study their phenomenology.

  4. Cancer stem cells: impact, heterogeneity, and uncertainty

    PubMed Central

    Magee, Jeffrey A.; Piskounova, Elena; Morrison, Sean J.

    2015-01-01

    The differentiation of tumorigenic cancer stem cells into non-tumorigenic cancer cells confers heterogeneity to some cancers beyond that explained by clonal evolution or environmental differences. In such cancers, functional differences between tumorigenic and non-tumorigenic cells influence response to therapy and prognosis. However, it remains uncertain whether the model applies to many, or few, cancers due to questions about the robustness of cancer stem cell markers and the extent to which existing assays underestimate the frequency of tumorigenic cells. In cancers with rapid genetic change, reversible changes in cell states, or biological variability among patients, the stem cell model may not be readily testable. PMID:22439924

  5. Emergent quantum mechanics without wavefunctions

    NASA Astrophysics Data System (ADS)

    Mesa Pascasio, J.; Fussy, S.; Schwabl, H.; Grössing, G.

    2016-03-01

    We present our model of an Emergent Quantum Mechanics which can be characterized by “realism without pre-determination”. This is illustrated by our analytic description and corresponding computer simulations of Bohmian-like “surreal” trajectories, which are obtained classically, i.e. without the use of any quantum mechanical tool such as wavefunctions. However, these trajectories do not necessarily represent ontological paths of particles but rather mappings of the probability density flux in a hydrodynamical sense. Modelling emergent quantum mechanics in a high-low intensity double slit scenario gives rise to the “quantum sweeper effect” with a characteristic intensity pattern. This phenomenon should be experimentally testable via weak measurement techniques.

  6. Loss Aversion and Time-Differentiated Electricity Pricing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spurlock, C. Anna

    2015-06-01

    I develop a model of loss aversion over electricity expenditure, from which I derive testable predictions for household electricity consumption while on combination time-of-use (TOU) and critical peak pricing (CPP) plans. Testing these predictions results in evidence consistent with loss aversion: (1) spillover effects - positive expenditure shocks resulted in significantly more peak consumption reduction for several weeks thereafter; and (2) clustering - disproportionate probability of consuming such that expenditure would be equal between the TOU-CPP and standard flat-rate pricing structures. This behavior is inconsistent with a purely neoclassical utility model, and has important implications for the application of time-differentiated electricity pricing.
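
    The reference-dependent preferences underlying such predictions are usually represented with a piecewise value function that weights losses more heavily than gains. A minimal Python sketch follows; the reference bill, the loss-aversion coefficient, and the example expenditures are assumptions, not estimates from the paper.

        def gain_loss_value(expenditure, reference, loss_aversion=2.25):
            """Piecewise-linear gain-loss value of an electricity bill.

            Spending below the reference bill is coded as a gain; spending above it
            is coded as a loss weighted `loss_aversion` times more heavily."""
            x = reference - expenditure          # positive = bill lower than expected
            return x if x >= 0 else loss_aversion * x

        if __name__ == "__main__":
            reference_bill = 120.0               # hypothetical expected monthly bill, US$
            for bill in (100.0, 120.0, 140.0):
                value = gain_loss_value(bill, reference_bill)
                print(f"bill ${bill:6.2f}: gain-loss value {value:7.2f}")
            # The disutility of a $20 overrun exceeds the utility of a $20 saving,
            # which is the mechanism behind clustering of consumption at the point
            # where TOU-CPP expenditure equals the flat-rate bill.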

  7. Expert knowledge as a foundation for the management of secretive species and their habitat

    USGS Publications Warehouse

    Drew, C. Ashton; Collazo, Jaime

    2012-01-01

    In this chapter, we share lessons learned during the elicitation and application of expert knowledge in the form of a belief network model for the habitat of a waterbird, the King Rail (Rallus elegans). A belief network is a statistical framework used to graphically represent and evaluate hypothesized cause and effect relationships among variables. Our model was a pilot project to explore the value of such a model as a tool to help the US Fish and Wildlife Service (USFWS) conserve species that lack sufficient empirical data to guide management decisions. Many factors limit the availability of empirical data that can support landscape-scale conservation planning. Globally, most species simply have not yet been subject to empirical study (Wilson 2000). Even for well-studied species, data are often restricted to specific geographic extents, to particular seasons, or to specific segments of a species’ life history. The USFWS mandates that the agency’s conservation actions (1) be coordinated across regional landscapes, (2) be founded on the best available science (with testable assumptions), and (3) support adaptive management through monitoring and assessment of action outcomes. Given limits on the available data, the concept of “best available science” in the context of conservation planning generally includes a mix of empirical data and expert knowledge (Sullivan et al. 2006).
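
    The mechanics of a belief network, elicited conditional probabilities combined into a probability for the outcome node, can be shown with a deliberately tiny example in Python. The variables, states, and probabilities below are invented for illustration and bear no relation to the actual King Rail model.

        from itertools import product

        # Hypothetical two-parent belief network:
        #   WaterDepth, EmergentVegetation -> HabitatSuitable
        p_water_shallow = 0.6                      # P(WaterDepth = shallow)
        p_veg_dense = 0.5                          # P(EmergentVegetation = dense)

        # Elicited conditional probability table: P(suitable | water, vegetation).
        cpt_suitable = {
            ("shallow", "dense"): 0.85,
            ("shallow", "sparse"): 0.40,
            ("deep", "dense"): 0.30,
            ("deep", "sparse"): 0.05,
        }

        def prior(water, veg):
            pw = p_water_shallow if water == "shallow" else 1 - p_water_shallow
            pv = p_veg_dense if veg == "dense" else 1 - p_veg_dense
            return pw * pv

        def p_suitable(evidence=None):
            """Marginal P(suitable), optionally conditioned on observed parent states."""
            evidence = evidence or {}
            num = den = 0.0
            for water, veg in product(("shallow", "deep"), ("dense", "sparse")):
                if evidence.get("water", water) != water or evidence.get("veg", veg) != veg:
                    continue
                joint = prior(water, veg)
                num += joint * cpt_suitable[(water, veg)]
                den += joint
            return num / den

        if __name__ == "__main__":
            print("P(suitable), no evidence:        ", round(p_suitable(), 3))
            print("P(suitable | shallow water):     ", round(p_suitable({"water": "shallow"}), 3))
            print("P(suitable | deep, sparse cover):", round(p_suitable({"water": "deep", "veg": "sparse"}), 3))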

  8. The ABC Model and its Applicability to Basal Angiosperms

    PubMed Central

    Soltis, Douglas E.; Chanderbali, André S.; Kim, Sangtae; Buzgo, Matyas; Soltis, Pamela S.

    2007-01-01

    Background Although the flower is the central feature of the angiosperms, little is known of its origin and subsequent diversification. The ABC model has long been the unifying paradigm for floral developmental genetics, but it is based on phylogenetically derived eudicot models. Synergistic research involving phylogenetics, classical developmental studies, genomics and developmental genetics has afforded valuable new insights into floral evolution in general, and the early flower in particular. Scope and Conclusions Genomic studies indicate that basal angiosperms, and by inference the earliest angiosperms, had a rich tool kit of floral genes. Homologues of the ABCE floral organ identity genes are also present in basal angiosperm lineages; however, C-, E- and particularly B-function genes are more broadly expressed in basal lineages. There is no single model of floral organ identity that applies to all angiosperms; there are multiple models that apply depending on the phylogenetic position and floral structure of the group in question. The classic ABC (or ABCE) model may work well for most eudicots. However, modifications are needed for basal eudicots and, the focus of this paper, basal angiosperms. We offer ‘fading borders’ as a testable hypothesis for the basal-most angiosperms and, by inference, perhaps some of the earliest (now extinct) angiosperms. PMID:17616563

  9. Synchronous versus asynchronous modeling of gene regulatory networks.

    PubMed

    Garg, Abhishek; Di Cara, Alessandro; Xenarios, Ioannis; Mendoza, Luis; De Micheli, Giovanni

    2008-09-01

    In silico modeling of gene regulatory networks has gained some momentum recently due to increased interest in analyzing the dynamics of biological systems. This has been further facilitated by the increasing availability of experimental data on gene-gene, protein-protein and gene-protein interactions. The two dynamical properties that are often experimentally testable are perturbations and stable steady states. Although a lot of work has been done on the identification of steady states, not much work has been reported on in silico modeling of cellular differentiation processes. In this manuscript, we provide algorithms based on reduced ordered binary decision diagrams (ROBDDs) for Boolean modeling of gene regulatory networks. Algorithms for synchronous and asynchronous transition models have been proposed and their corresponding computational properties have been analyzed. These algorithms allow users to compute cyclic attractors of large networks that are currently not feasible using existing software. Hereby we provide a framework to analyze the effect of multiple gene perturbation protocols, and their effect on cell differentiation processes. These algorithms were validated on the T-helper model showing the correct steady state identification and Th1-Th2 cellular differentiation process. The software binaries for Windows and Linux platforms can be downloaded from http://si2.epfl.ch/~garg/genysis.html.
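
    The distinction between synchronous and asynchronous Boolean updating can be made concrete with a toy three-gene network in Python. The update rules are invented, and attractors are found here by brute-force simulation over the eight possible states rather than with the ROBDD machinery the paper describes.

        from itertools import product

        # Toy 3-gene Boolean network (rules invented for illustration).
        def update_rules(state):
            a, b, c = state
            return (
                int(not c),          # A is repressed by C
                int(a),              # B follows A
                int(a and b),        # C needs both A and B
            )

        def sync_step(state):
            """Synchronous update: every gene recomputed from the same snapshot."""
            return update_rules(state)

        def async_successors(state):
            """Asynchronous update: one gene changes at a time (all choices returned)."""
            target = update_rules(state)
            succ = set()
            for i in range(3):
                if target[i] != state[i]:
                    succ.add(tuple(target[i] if j == i else state[j] for j in range(3)))
            return succ or {state}   # steady states map to themselves

        def sync_attractor(start, max_steps=20):
            """Follow synchronous dynamics until a state repeats; return the cycle."""
            seen, state = [], start
            while state not in seen and len(seen) < max_steps:
                seen.append(state)
                state = sync_step(state)
            return seen[seen.index(state):]

        if __name__ == "__main__":
            for start in product((0, 1), repeat=3):
                cycle = sync_attractor(start)
                label = "steady state" if len(cycle) == 1 else f"cycle of length {len(cycle)}"
                print(f"sync from {start}: {label} {cycle}")
            print("async successors of (1, 0, 1):", async_successors((1, 0, 1)))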

  10. Brain mechanisms for perceptual and reward-related decision-making.

    PubMed

    Deco, Gustavo; Rolls, Edmund T; Albantakis, Larissa; Romo, Ranulfo

    2013-04-01

    Phenomenological models of decision-making, including the drift-diffusion and race models, are compared with mechanistic, biologically plausible models, such as integrate-and-fire attractor neuronal network models. The attractor network models show how decision confidence is an emergent property; and make testable predictions about the neural processes (including neuronal activity and fMRI signals) involved in decision-making which indicate that the medial prefrontal cortex is involved in reward value-based decision-making. Synaptic facilitation in these models can help to account for sequential vibrotactile decision-making, and for how postponed decision-related responses are made. The randomness in the neuronal spiking-related noise that makes the decision-making probabilistic is shown to be increased by the graded firing rate representations found in the brain, to be decreased by the diluted connectivity, and still to be significant in biologically large networks with thousands of synapses onto each neuron. The stability of these systems is shown to be influenced in different ways by glutamatergic and GABAergic efficacy, leading to a new field of dynamical neuropsychiatry with applications to understanding schizophrenia and obsessive-compulsive disorder. The noise in these systems is shown to be advantageous, and to apply to similar attractor networks involved in short-term memory, long-term memory, attention, and associative thought processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Measurement of a model of implementation for health care: toward a testable theory

    PubMed Central

    2012-01-01

    Background: Greenhalgh et al. used a considerable evidence base to develop a comprehensive model of implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a first step in operationalizing Greenhalgh et al.’s model by providing background, rationale, working definitions, and measurement of key constructs. Methods: A systematic review of the literature was conducted for key words representing 53 separate sub-constructs from six of the model’s broad constructs. Using an iterative process, we reviewed existing measures and utilized or adapted items. Where no one measure was deemed appropriate, we developed other items to measure the constructs through consensus. Results: The review and iterative process of team consensus identified three types of data that can be used to operationalize the constructs in the model: survey items, interview questions, and administrative data. Specific examples of each of these are reported. Conclusion: Despite limitations, the mixed-methods approach to measurement using the survey, interview measure, and administrative data can facilitate research on implementation by providing investigators with a measurement tool that captures most of the constructs identified by the Greenhalgh model. These measures are currently being used to collect data concerning the implementation of two evidence-based psychotherapies disseminated nationally within the Department of Veterans Affairs. Testing of psychometric properties and subsequent refinement should enhance the utility of the measures. PMID:22759451

  12. Causes and consequences of reduced blood volume in space flight - A multi-discipline modeling study

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1983-01-01

    A group of mathematical models of various physiological systems has been developed and applied to study problems associated with adaptation to weightlessness. One biomedical issue which could be addressed by at least three of these models from varying perspectives was the reduction in blood volume that universally occurs in astronauts. Accordingly, models of fluid-electrolyte, erythropoiesis, and cardiovascular regulation were employed to study the causes and consequences of blood volume loss during space flight. This analysis confirms the notion that alterations of blood volume are central to an understanding of adaptation to prolonged space flight. More importantly, the modeling studies resulted in specific hypotheses accounting for plasma volume and red cell mass losses and testable predictions concerning the behavior of the circulatory system.

  13. The development of guided inquiry-based learning devices on photosynthesis and respiration matter to train science literacy skills

    NASA Astrophysics Data System (ADS)

    Choirunnisak; Ibrahim, M.; Yuliani

    2018-01-01

    The purpose of this research was to develop guided inquiry-based learning devices on photosynthesis and respiration matter that are feasible (valid, practical, and effective) for training students’ science literacy. The research used the 4D development model and was tested on 15 biology education students (2016 cohort) at the State University of Surabaya using a one-group pretest-posttest design. The learning devices developed include (a) Semester Lesson Plan, (b) Lecture Schedule, (c) Student Activity Sheet, (d) Student Textbook, and (e) a science literacy test. Research data were obtained through validation, observation, test, and questionnaire methods. The results were analyzed descriptively, both quantitatively and qualitatively. Gains in science literacy were analyzed by n-gain. The results showed that (a) the learning devices developed were categorized as very valid, (b) the learning activities were performed very well, (c) students’ science literacy skills improved, with gains in the moderate category, and (d) students responded very positively to the learning. Based on the analysis and discussion, it is concluded that the guided inquiry-based learning devices developed on photosynthesis and respiration matter are feasible for training students’ science literacy skills.

  14. cit: hypothesis testing software for mediation analysis in genomic applications.

    PubMed

    Millstein, Joshua; Chen, Gary K; Breton, Carrie V

    2016-08-01

    The challenges of successfully applying causal inference methods include: (i) satisfying underlying assumptions, (ii) limitations in data/models accommodated by the software and (iii) low power of common multiple testing approaches. The causal inference test (CIT) is based on hypothesis testing rather than estimation, allowing the testable assumptions to be evaluated in the determination of statistical significance. A user-friendly software package provides P-values and optionally permutation-based FDR estimates (q-values) for potential mediators. It can handle single and multiple binary and continuous instrumental variables, binary or continuous outcome variables and adjustment covariates. Also, the permutation-based FDR option provides a non-parametric implementation. Simulation studies demonstrate the validity of the cit package and show a substantial advantage of permutation-based FDR over other common multiple testing strategies. The cit open-source R package is freely available from the CRAN website (https://cran.r-project.org/web/packages/cit/index.html) with embedded C ++ code that utilizes the GNU Scientific Library, also freely available (http://www.gnu.org/software/gsl/). joshua.millstein@usc.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
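
    As a hedged illustration of the permutation-based FDR idea mentioned above (not the cit package's own implementation, which is distributed as R with embedded C++), the sketch below estimates the FDR at a p-value threshold from permuted null p-values.

    ```python
    # Hedged sketch: a generic permutation-based FDR estimate.
    # The data below are simulated for illustration only.
    import random

    def permutation_fdr(observed_p, permuted_p, threshold):
        """Estimate FDR at a threshold as
           (mean # permuted p-values <= threshold) / (# observed p-values <= threshold)."""
        n_perm_sets = len(permuted_p)
        null_hits = sum(sum(p <= threshold for p in perm) for perm in permuted_p) / n_perm_sets
        obs_hits = sum(p <= threshold for p in observed_p)
        return min(1.0, null_hits / obs_hits) if obs_hits else 0.0

    if __name__ == "__main__":
        rng = random.Random(0)
        # 20 "signal-like" small p-values plus 80 uniform nulls, and 50 permuted sets.
        observed = [rng.random() * 0.1 for _ in range(20)] + [rng.random() for _ in range(80)]
        permutations = [[rng.random() for _ in range(100)] for _ in range(50)]
        print(f"estimated FDR at p<=0.05: {permutation_fdr(observed, permutations, 0.05):.2f}")
    ```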

  15. A Systems Biology Approach Reveals Converging Molecular Mechanisms that Link Different POPs to Common Metabolic Diseases.

    PubMed

    Ruiz, Patricia; Perlina, Ally; Mumtaz, Moiz; Fowler, Bruce A

    2016-07-01

    A number of epidemiological studies have identified statistical associations between persistent organic pollutants (POPs) and metabolic diseases, but testable hypotheses regarding underlying molecular mechanisms to explain these linkages have not been published. We assessed the underlying mechanisms of POPs that have been associated with metabolic diseases; three well-known POPs [2,3,7,8-tetrachlorodibenzodioxin (TCDD), 2,2´,4,4´,5,5´-hexachlorobiphenyl (PCB 153), and 4,4´-dichlorodiphenyldichloroethylene (p,p´-DDE)] were studied. We used advanced database search tools to delineate testable hypotheses and to guide laboratory-based research studies into underlying mechanisms by which this POP mixture could produce or exacerbate metabolic diseases. For our searches, we used proprietary systems biology software (MetaCore™/MetaDrug™) to conduct advanced search queries for the underlying interactions database, followed by directional network construction to identify common mechanisms for these POPs within two or fewer interaction steps downstream of their primary targets. These common downstream pathways belong to various cytokine and chemokine families with experimentally well-documented causal associations with type 2 diabetes. Our systems biology approach allowed identification of converging pathways leading to activation of common downstream targets. To our knowledge, this is the first study to propose an integrated global set of step-by-step molecular mechanisms for a combination of three common POPs using a systems biology approach, which may link POP exposure to diseases. Experimental evaluation of the proposed pathways may lead to development of predictive biomarkers of the effects of POPs, which could translate into disease prevention and effective clinical treatment strategies. Ruiz P, Perlina A, Mumtaz M, Fowler BA. 2016. A systems biology approach reveals converging molecular mechanisms that link different POPs to common metabolic diseases. Environ Health Perspect 124:1034-1041; http://dx.doi.org/10.1289/ehp.1510308.

  16. Causal Reasoning on Biological Networks: Interpreting Transcriptional Changes

    NASA Astrophysics Data System (ADS)

    Chindelevitch, Leonid; Ziemek, Daniel; Enayetallah, Ahmed; Randhawa, Ranjit; Sidders, Ben; Brockel, Christoph; Huang, Enoch

    Over the past decade gene expression data sets have been generated at an increasing pace. In addition to ever increasing data generation, the biomedical literature is growing exponentially. The PubMed database (Sayers et al., 2010) comprises more than 20 million citations as of October 2010. The goal of our method is the prediction of putative upstream regulators of observed expression changes based on a set of over 400,000 causal relationships. The resulting putative regulators constitute directly testable hypotheses for follow-up.
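
    The sketch below gives a hedged, simplified version of the kind of consistency scoring such causal reasoning relies on: each candidate upstream regulator is scored by how well its signed downstream relationships agree with the observed expression changes. The edge set, sign convention, and scoring rule are illustrative assumptions, not the paper's exact statistic.

    ```python
    # Hedged sketch: score putative upstream regulators against observed expression
    # changes using signed causal edges. Edges and observations are hypothetical.
    CAUSAL_EDGES = {
        "TF1": [("geneA", +1), ("geneB", -1), ("geneC", +1)],
        "TF2": [("geneA", -1), ("geneD", +1)],
    }

    def score_regulator(regulator, observed_changes, direction=+1):
        """Count downstream predictions consistent minus inconsistent with the data,
        assuming the regulator is activated (direction=+1) or inhibited (-1)."""
        score = 0
        for gene, sign in CAUSAL_EDGES.get(regulator, []):
            obs = observed_changes.get(gene)
            if obs is None:
                continue
            score += 1 if direction * sign == obs else -1
        return score

    if __name__ == "__main__":
        observed = {"geneA": +1, "geneB": -1, "geneC": -1, "geneD": +1}
        for tf in CAUSAL_EDGES:
            print(tf, score_regulator(tf, observed))
    ```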

  17. VirtualLeaf: an open-source framework for cell-based modeling of plant tissue growth and development.

    PubMed

    Merks, Roeland M H; Guravage, Michael; Inzé, Dirk; Beemster, Gerrit T S

    2011-02-01

    Plant organs, including leaves and roots, develop by means of a multilevel cross talk between gene regulation, patterned cell division and cell expansion, and tissue mechanics. The multilevel regulatory mechanisms complicate classic molecular genetics or functional genomics approaches to biological development, because these methodologies implicitly assume a direct relation between genes and traits at the level of the whole plant or organ. Instead, understanding gene function requires insight into the roles of gene products in regulatory networks, the conditions of gene expression, etc. This interplay is impossible to understand intuitively. Mathematical and computer modeling allows researchers to design new hypotheses and produce experimentally testable insights. However, the required mathematics and programming experience makes modeling poorly accessible to experimental biologists. Problem-solving environments provide biologically intuitive in silico objects ("cells", "regulation networks") required for setting up a simulation and present those to the user in terms of familiar, biological terminology. Here, we introduce the cell-based computer modeling framework VirtualLeaf for plant tissue morphogenesis. The current version defines a set of biologically intuitive C++ objects, including cells, cell walls, and diffusing and reacting chemicals, that provide useful abstractions for building biological simulations of developmental processes. We present a step-by-step introduction to building models with VirtualLeaf, providing basic example models of leaf venation and meristem development. VirtualLeaf-based models provide a means for plant researchers to analyze the function of developmental genes in the context of the biophysics of growth and patterning. VirtualLeaf is an ongoing open-source software project (http://virtualleaf.googlecode.com) that runs on Windows, Mac, and Linux.

  18. The Hubble Web: The Dark Matter Problem and Cosmic Strings

    NASA Astrophysics Data System (ADS)

    Alexander, Stephon

    2009-07-01

    I propose a reinterpretation of cosmic dark matter in which a rigid network of cosmic strings formed at the end of inflation. The cosmic strings fulfill three functions: At recombination they provide an accretion mechanism for virializing baryonic and warm dark matter into disks. These cosmic strings survive as configurations which thread spiral and elliptical galaxies leading to the observed flatness of rotation curves and the Tully-Fisher relation. We find a relationship between the rotational velocity of the galaxy and the string tension and discuss the testability of this model.

  19. Possibility of dying as a unified explanation of why we discount the future, get weaker with age, and display risk-aversion.

    PubMed

    Chowdhry, Bhagwan

    2011-01-01

    I formulate a simple and parsimonious evolutionary model showing that, because most species face a possibility of dying from external factors (called extrinsic mortality in the biology literature), this possibility can simultaneously explain (a) why we discount the future, (b) why we get weaker with age, and (c) why we display risk-aversion. The paper suggests that testable restrictions among time preference, aging, and risk-aversion, whether across species, across time, or across genders, could be analyzed in a simple framework.

  20. Designing for competence: spaces that enhance collaboration readiness in healthcare.

    PubMed

    Lamb, Gerri; Shraiky, James

    2013-09-01

    Many universities in the United States are investing in classrooms and campuses designed to increase collaboration and teamwork among the health professions. To date, we know little about whether these learning spaces are having the intended impact on student performance. Recent advances in the identification of interprofessional teamwork competencies provide a much-needed step toward a defined outcome metric. Rigorous study of the relationship between design and student competence in collaboration also requires clear specification of design concepts and development of testable frameworks. Such theory-based evaluation is crucial for design to become an integral part of interprofessional education strategies and initiatives. Current classroom and campus designs were analyzed for common themes and features in collaborative spaces as a starting place for specification of design concepts and model development. Four major themes were identified: flexibility, visual transparency/proximity, technology and environmental infrastructure. Potential models linking this preliminary set of design concepts to student competencies are proposed and used to generate hypotheses for future study of the impact of collaborative design spaces on student outcomes.

  1. Steps in the bacterial flagellar motor.

    PubMed

    Mora, Thierry; Yu, Howard; Sowa, Yoshiyuki; Wingreen, Ned S

    2009-10-01

    The bacterial flagellar motor is a highly efficient rotary machine used by many bacteria to propel themselves. It has recently been shown that at low speeds its rotation proceeds in steps. Here we propose a simple physical model, based on the storage of energy in protein springs, that accounts for this stepping behavior as a random walk in a tilted corrugated potential that combines torque and contact forces. We argue that the absolute angular position of the rotor is crucial for understanding step properties and show this hypothesis to be consistent with the available data, in particular the observation that backward steps are smaller on average than forward steps. We also predict a sublinear speed versus torque relationship for fixed load at low torque, and a peak in rotor diffusion as a function of torque. Our model provides a comprehensive framework for understanding and analyzing stepping behavior in the bacterial flagellar motor and proposes novel, testable predictions. More broadly, the storage of energy in protein springs by the flagellar motor may provide useful general insights into the design of highly efficient molecular machines.
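
    A hedged toy version of the stepping picture described above, a biased random walk in a tilted corrugated ("washboard") potential, is sketched below; the number of wells, barrier height, and torque are illustrative assumptions, not fitted values from the paper.

    ```python
    # Hedged sketch: Metropolis dynamics in a tilted washboard potential.
    # All parameters are illustrative (energies in units of kT).
    import math, random

    N_WELLS = 26       # number of potential wells per revolution (assumed)
    BARRIER = 3.0      # corrugation amplitude (assumed)
    TORQUE = 1.0       # constant tilt, kT per radian (assumed)

    def potential(theta):
        """Tilted washboard: periodic corrugation minus the work done by the torque."""
        return BARRIER * math.cos(N_WELLS * theta) - TORQUE * theta

    def simulate(n_iter=200000, d_theta=0.01, seed=0):
        """Metropolis random walk; returns the trajectory of rotor angles."""
        rng = random.Random(seed)
        theta, traj = 0.0, [0.0]
        for _ in range(n_iter):
            trial = theta + rng.choice([-d_theta, d_theta])
            dE = potential(trial) - potential(theta)
            if dE <= 0 or rng.random() < math.exp(-dE):
                theta = trial
            traj.append(theta)
        return traj

    if __name__ == "__main__":
        traj = simulate()
        print(f"net rotation: {traj[-1] / (2 * math.pi):.2f} revolutions")
    ```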

  2. Modeling the attenuation and failure of action potentials in the dendrites of hippocampal neurons.

    PubMed Central

    Migliore, M

    1996-01-01

    We modeled two different mechanisms, a shunting conductance and a slow sodium inactivation, to test whether they could modulate the active propagation of a train of action potentials in a dendritic tree. Computer simulations, using a compartmental model of a pyramidal neuron, suggest that each of these two mechanisms could account for the activity-dependent attenuation and failure of the action potentials in the dendrites during the train. Each mechanism is shown to be in good qualitative agreement with experimental findings on somatic or dendritic stimulation and on the effects of hyperpolarization. The conditions under which branch point failures can be observed, and a few experimentally testable predictions, are presented and discussed. PMID:8913580

  3. Initial eccentricity fluctuations and their relation to higher-order flow harmonics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lacey, R.; Wei,R.; Jia,J.

    2011-06-01

    Monte Carlo simulations are used to compute the centrality dependence of the participant eccentricities (ε_n) in Au+Au collisions for the two primary models currently employed for eccentricity estimates, the Glauber and the factorized Kharzeev-Levin-Nardi (fKLN) models. They suggest specific testable predictions for the magnitude and centrality dependence of the flow coefficients v_n, respectively measured relative to the event planes Ψ_n. They also indicate that the ratios of several of these coefficients may provide an additional constraint for distinguishing between the models. Such a constraint could be important for a more precise determination of the specific viscosity of the matter produced in heavy ion collisions.
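
    For reference, a commonly used definition of the participant eccentricities and flow coefficients referred to above is given below; conventions and weights vary between analyses, so this is a hedged sketch rather than the paper's exact definition.

    ```latex
    % Hedged sketch: common definitions; averages run over participant nucleons
    % (for \varepsilon_n) and over emitted particles (for v_n).
    \varepsilon_n = \frac{\sqrt{\langle r^n \cos n\varphi\rangle^2
                              + \langle r^n \sin n\varphi\rangle^2}}
                         {\langle r^n \rangle},
    \qquad
    v_n = \big\langle \cos\!\left[\, n\,(\phi - \Psi_n) \right] \big\rangle .
    ```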

  4. Majorana dark matter with B+L gauge symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao

    Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements, as well as the relic density and direct detection of the dark matter. Our numerical results reveal that the pseudo-vector coupling of the dark matter with Z and its Yukawa coupling with the SM Higgs are highly constrained by the latest results of LUX, while there is viable parameter space that could satisfy all the constraints and give testable predictions.

  5. Majorana dark matter with B+L gauge symmetry

    DOE PAGES

    Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao

    2017-04-07

    Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements, as well as the relic density and direct detection of the dark matter. Our numerical results reveal that the pseudo-vector coupling of the dark matter with Z and its Yukawa coupling with the SM Higgs are highly constrained by the latest results of LUX, while there is viable parameter space that could satisfy all the constraints and give testable predictions.

  6. Chromosomes, conflict, and epigenetics: chromosomal speciation revisited.

    PubMed

    Brown, Judith D; O'Neill, Rachel J

    2010-01-01

    Since Darwin first noted that the process of speciation was indeed the "mystery of mysteries," scientists have tried to develop testable models for the development of reproductive incompatibilities-the first step in the formation of a new species. Early theorists proposed that chromosome rearrangements were implicated in the process of reproductive isolation; however, the chromosomal speciation model has recently been questioned. In addition, recent data from hybrid model systems indicates that simple epistatic interactions, the Dobzhansky-Muller incompatibilities, are more complex. In fact, incompatibilities are quite broad, including interactions among heterochromatin, small RNAs, and distinct, epigenetically defined genomic regions such as the centromere. In this review, we will examine both classical and current models of chromosomal speciation and describe the "evolving" theory of genetic conflict, epigenetics, and chromosomal speciation.

  7. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. The two features of flexibility and size pose new problems in control system design. Since large-scale structures are not testable in ground-based facilities, decisions on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at the problem that the modelling knowledge available prior to operation is inadequate for achieving peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter adaptive control is a promising method that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? And how well must the mathematical model used in on-board analytic redundancy be known, and what are reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed-component environment?

  8. Noah, Joseph and Convex Hulls

    NASA Astrophysics Data System (ADS)

    Watkins, N. W.; Chau, Y.; Chapman, S. C.

    2010-12-01

    The idea of describing animal movement by mathematical models based on diffusion and Brownian motion has a long heritage. It has thus been natural to account for those aspects of motion that depart from Brownian motion by the use of models incorporating long memory & subdiffusion (“the Joseph effect”) and/or heavy tails & superdiffusion (“the Noah effect”). My own interest in this problem was originally from a geoscience perspective, and was triggered by the need to model time series in space physics where both effects coincide. Subsequently I have been involved in animal foraging studies [e.g. Edwards et al, Nature, 2007]. I will describe some recent work [Watkins et al, PRE, 2009] which studies how fixed-timestep and variable-timestep formulations of anomalous diffusion are related in the presence of heavy tails and long-range memory (stable processes versus the CTRW). Quantities for which different scaling relations are predicted between the two approaches are of particular interest, to aid testability. I will also present some work in progress on the convex hull of anomalously diffusing walkers, inspired by its possible relevance to the idea of home range in biology, and by Randon-Furling et al’s recent analytical results in the Brownian case [PRL, 2009].

  9. A neuroscience perspective on sexual risk behavior in adolescence and emerging adulthood

    PubMed Central

    VICTOR, ELIZABETH C.; HARIRI, AHMAD R.

    2016-01-01

    Late adolescence and emerging adulthood (specifically ages 15–24) represent a period of heightened sexual risk taking resulting in the greatest annual rates of sexually transmitted infections and unplanned pregnancies in the US population. Ongoing efforts to prevent such negative consequences are likely to benefit from a deepening of our understanding of biological mechanisms through which sexual risk taking emerges and biases decision making during this critical window. Here we present a neuroscience framework from which a mechanistic examination of sexual risk taking can be advanced. Specifically, we adapt the neurodevelopmental triadic model, which outlines how motivated behavior is governed by three systems: approach, avoidance, and regulation, to sexual decision making and subsequent risk behavior. We further propose a testable hypothesis of the triadic model, wherein relatively decreased threat-related amygdala reactivity and increased reward-related ventral striatum reactivity leads to sexual risk taking, which is particularly exaggerated during adolescence and young adulthood when there is an overexpression of dopaminergic neurons coupled with immature top-down prefrontal cortex regulation. We conclude by discussing how future research based on our adapted triadic model can inform ongoing efforts to improve intervention and prevention efforts. PMID:26611719

  10. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model

    PubMed Central

    Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.

    2017-01-01

    Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187

  11. The evolution of social and semantic networks in epistemic communities

    NASA Astrophysics Data System (ADS)

    Margolin, Drew Berkley

    This study describes and tests a model of scientific inquiry as an evolving, organizational phenomenon. Arguments are derived from organizational ecology and evolutionary theory. The empirical subject of study is an epistemic community of scientists publishing on a research topic in physics: the string theoretic concept of "D-branes." The study uses evolutionary theory as a means of predicting change in the way members of the community choose concepts to communicate acceptable knowledge claims. It is argued that the pursuit of new knowledge is risky, because the reliability of a novel knowledge claim cannot be verified until after substantial resources have been invested. Using arguments from both philosophy of science and organizational ecology, it is suggested that scientists can mitigate and sensibly share the risks of knowledge discovery within the community by articulating their claims in legitimate forms, i.e., forms that are testable within and relevant to the community. Evidence from empirical studies of semantic usage suggests that the legitimacy of a knowledge claim is influenced by the characteristics of the concepts in which it is articulated. A model of conceptual retention, variation, and selection is then proposed for predicting the usage of concepts and conceptual co-occurrences in the future publications of the community, based on its past. Results substantially supported hypothesized retention and selection mechanisms. Future concept usage was predictable from previous concept usage, but was limited by conceptual carrying capacity as predicted by density dependence theory. Also as predicted, retention was stronger when the community showed a more cohesive social structure. Similarly, concepts that showed structural signatures of high testability and relevance were more likely to be selected after previous usage frequency was controlled for. By contrast, hypotheses for variation mechanisms were not supported. Surprisingly, concepts whose structural position suggested they would be easiest to discover through search processes were used less frequently, once previous usage frequency was controlled for. The study also makes a theoretical contribution by suggesting ways that evolutionary theory can be used to integrate findings from the study of science with insights from organizational communication. A variety of concrete directions for future studies of social and semantic network evolution are also proposed.

  12. High-throughput metabarcoding of eukaryotic diversity for environmental monitoring of offshore oil-drilling activities.

    PubMed

    Lanzén, Anders; Lekang, Katrine; Jonassen, Inge; Thompson, Eric M; Troedsson, Christofer

    2016-09-01

    As global exploitation of available resources increases, operations extend towards sensitive and previously protected ecosystems. It is important to monitor such areas in order to detect, understand and remediate environmental responses to stressors. The natural heterogeneity and complexity of communities means that accurate monitoring requires high resolution, both temporally and spatially, as well as more complete assessments of taxa. Increased resolution and taxonomic coverage is economically challenging using current microscopy-based monitoring practices. Alternatively, DNA sequencing-based methods have been suggested for cost-efficient monitoring, offering additional insights into ecosystem function and disturbance. Here, we applied DNA metabarcoding of eukaryotic communities in marine sediments, in areas of offshore drilling on the Norwegian continental shelf. Forty-five samples, collected from seven drilling sites in the Troll/Oseberg region, were assessed, using the small subunit ribosomal RNA gene as a taxonomic marker. In agreement with results based on classical morphology-based monitoring, we were able to identify changes in sediment communities surrounding oil platforms. In addition to overall changes in community structure, we identified several potential indicator taxa, responding to pollutants associated with drilling fluids. These included the metazoan orders Macrodasyida, Macrostomida and Ceriantharia, as well as several ciliates and other protist taxa, typically not targeted by environmental monitoring programmes. Analysis of a co-occurrence network to study the distribution of taxa across samples provided a framework for better understanding the impact of anthropogenic activities on the benthic food web, generating novel, testable hypotheses of trophic interactions structuring benthic communities. © 2016 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.

  13. Econometrics of exhaustible resource supply: a theory and an application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epple, D.

    1983-01-01

    This report takes a major step toward developing a fruitful approach to empirical analysis of resource supply. It is the first empirical application of resource theory that has successfully integrated the effects of depletion of nonrenewable resources with the effects of uncertainty about future costs and prices on supply behavior. Thus, the model is a major improvement over traditional engineering-optimization models that assume complete certainty, and over traditional econometric models that are only implicitly related to the theory of resource supply. The model is used to test hypotheses about interdependence of oil and natural gas discoveries, depletion, ultimate recovery, and the role of price expectations. This paper demonstrates the feasibility of using exhaustible resource theory in the development of empirically testable models. 19 refs., 1 fig., 5 tabs.

  14. Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)

    1998-01-01

    The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. The real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability centered maintenance. Predictive maintenance can reduce turn-around time and cost and increase safety as compared to conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine was developed to utilize the multi-signal models by Qualtech Systems, Inc. Capability of predicting the maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations for feature recognition updates identified an otherwise undetectable Main Propulsion System 12 inch prevalve degradation. The algorithms were loaded into the Propulsion Checkout and Control System for further development and are the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real-time, meeting the required performance goal of 1 second cycle time.
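
    As a hedged illustration of how a multi-signal dependency model supports this kind of diagnosis, the sketch below isolates faults by matching observed test outcomes against a Boolean fault-test dependency matrix; the matrix, fault names, and tests are illustrative assumptions, not the IPTD or TEAMS-RT model.

    ```python
    # Hedged sketch: fault isolation with a Boolean fault-test dependency matrix
    # (D-matrix), the data structure behind multi-signal testability models.
    # The matrix below is hypothetical and purely illustrative.
    D_MATRIX = {
        #              test1  test2  test3
        "valve_stuck": (1,     0,     1),
        "sensor_bias": (0,     1,     1),
        "leak":        (1,     1,     0),
    }

    def isolate(test_results):
        """Return faults whose dependency signature matches the observed fail pattern."""
        return [f for f, sig in D_MATRIX.items() if tuple(test_results) == sig]

    def detection_rate():
        """Fraction of faults detected by at least one test (a crude detection-rate analogue)."""
        detected = sum(1 for sig in D_MATRIX.values() if any(sig))
        return detected / len(D_MATRIX)

    if __name__ == "__main__":
        print(isolate([1, 0, 1]))   # -> ['valve_stuck']
        print(detection_rate())     # -> 1.0
    ```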

  15. Neutrino mass, dark matter, and Baryon asymmetry via TeV-scale physics without fine-tuning.

    PubMed

    Aoki, Mayumi; Kanemura, Shinya; Seto, Osamu

    2009-02-06

    We propose an extended version of the standard model, in which neutrino oscillation, dark matter, and the baryon asymmetry of the Universe can be simultaneously explained by the TeV-scale physics without assuming a large hierarchy among the mass scales. Tiny neutrino masses are generated at the three-loop level due to the exact Z2 symmetry, by which the stability of the dark matter candidate is guaranteed. The extra Higgs doublet is required not only for the tiny neutrino masses but also for successful electroweak baryogenesis. The model provides discriminative predictions especially in Higgs phenomenology, so that it is testable at current and future collider experiments.

  16. Reliability/maintainability/testability design for dormancy

    NASA Astrophysics Data System (ADS)

    Seman, Robert M.; Etzl, Julius M.; Purnell, Arthur W.

    1988-05-01

    This document has been prepared as a tool for designers of dormant military equipment and systems. The purpose of this handbook is to provide design engineers with Reliability/Maintainability/Testability design guidelines for systems which spend significant portions of their life cycle in a dormant state. The dormant state is defined as a nonoperating mode where a system experiences very little or no electrical stress. The guidelines in this report present design criteria in the following categories: (1) Part Selection and Control; (2) Derating Practices; (3) Equipment/System Packaging; (4) Transportation and Handling; (5) Maintainability Design; (6) Testability Design; (7) Evaluation Methods for In-Plant and Field Evaluation; and (8) Product Performance Agreements. Wherever applicable, design guidelines for operating systems were included with the dormant design guidelines. This was done in an effort to produce design guidelines for a more complete life cycle. Although dormant systems spend significant portions of their life cycle in a nonoperating mode, the designer must design the system for the complete life cycle, including nonoperating as well as operating modes. The guidelines are primarily intended for use in the design of equipment composed of electronic parts and components. However, they can also be used for the design of systems which encompass both electronic and nonelectronic parts, as well as for the modification of existing systems.

  17. Delay test generation for synchronous sequential circuits

    NASA Astrophysics Data System (ADS)

    Devadas, Srinivas

    1989-05-01

    We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan, and synthesis algorithms are presented.

  18. Ocular biometric parameters among 3-year-old Chinese children: testability, distribution and association with anthropometric parameters

    PubMed Central

    Huang, Dan; Chen, Xuejuan; Gong, Qi; Yuan, Chaoqun; Ding, Hui; Bai, Jing; Zhu, Hui; Fu, Zhujun; Yu, Rongbin; Liu, Hu

    2016-01-01

    This survey was conducted to determine the testability, distribution and associations of ocular biometric parameters in Chinese preschool children. Ocular biometric examinations, including the axial length (AL) and corneal radius of curvature (CR), were conducted on 1,688 3-year-old subjects by using an IOLMaster in August 2015. Anthropometric parameters, including height and weight, were measured according to a standardized protocol, and body mass index (BMI) was calculated. The testability was 93.7% for the AL and 78.6% for the CR overall, and both measures improved with age. Girls performed slightly better in AL measurements (P = 0.08), and the difference in CR was statistically significant (P < 0.05). The AL distribution was normal in girls (P = 0.12), whereas it was not in boys (P < 0.05). For CR1, all subgroups presented normal distributions (P = 0.16 for boys; P = 0.20 for girls), but the distribution varied when the subgroups were combined (P < 0.05). CR2 presented a normal distribution (P = 0.11), whereas the AL/CR ratio was abnormal (P < 0.001). Boys exhibited a significantly longer AL, a greater CR and a greater AL/CR ratio than girls (all P < 0.001). PMID:27384307

  19. Modelling the molecular mechanisms of aging

    PubMed Central

    Mc Auley, Mark T.; Guimera, Alvaro Martinez; Hodgson, David; Mcdonald, Neil; Mooney, Kathleen M.; Morgan, Amy E.

    2017-01-01

    The aging process is driven at the cellular level by random molecular damage that slowly accumulates with age. Although cells possess mechanisms to repair or remove damage, they are not 100% efficient and their efficiency declines with age. There are many molecular mechanisms involved and exogenous factors such as stress also contribute to the aging process. The complexity of the aging process has stimulated the use of computational modelling in order to increase our understanding of the system, test hypotheses and make testable predictions. As many different mechanisms are involved, a wide range of models have been developed. This paper gives an overview of the types of models that have been developed, the range of tools used, modelling standards and discusses many specific examples of models that have been grouped according to the main mechanisms that they address. We conclude by discussing the opportunities and challenges for future modelling in this field. PMID:28096317

  20. A discrete control model of PLANT

    NASA Technical Reports Server (NTRS)

    Mitchell, C. M.

    1985-01-01

    A model of the PLANT system using the discrete control modeling techniques developed by Miller is described. Discrete control models attempt to represent in a mathematical form how a human operator might decompose a complex system into simpler parts and how the control actions and system configuration are coordinated so that acceptable overall system performance is achieved. Basic questions include knowledge representation, information flow, and decision making in complex systems. The structure of the model is a general hierarchical/heterarchical scheme which structurally accounts for coordination and dynamic focus of attention. Mathematically, the discrete control model is defined in terms of a network of finite state systems. Specifically, the discrete control model accounts for how specific control actions are selected from information about the controlled system, the environment, and the context of the situation. The objective is to provide a plausible and empirically testable accounting and, if possible, explanation of control behavior.

  1. Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe

    2013-05-01

    Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision-making. We argue that the so-called Random Field Ising Model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called "detailed-balance" violating decision rules is needed to decide whether conclusions based on current models (that all assume detailed-balance) are indeed robust and generic.
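
    For concreteness, the canonical RFIM-style binary decision rule underlying this class of models can be written as follows; the notation is generic, a hedged sketch rather than the paper's exact formulation.

    ```latex
    % S_i = +/-1 is agent i's choice, f_i an idiosyncratic preference (the random
    % field), J_{ij} the strength of peer influence, and F(t) a common external
    % incentive; herding corresponds to large J relative to the spread of f_i.
    S_i = \operatorname{sign}\!\Big( f_i + \sum_{j} J_{ij}\, S_j + F(t) \Big)
    ```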

  2. Evolutionary Perspectives on Genetic and Environmental Risk Factors for Psychiatric Disorders.

    PubMed

    Keller, Matthew C

    2018-05-07

    Evolutionary medicine uses evolutionary theory to help elucidate why humans are vulnerable to disease and disorders. I discuss two different types of evolutionary explanations that have been used to help understand human psychiatric disorders. First, a consistent finding is that psychiatric disorders are moderately to highly heritable, and many, such as schizophrenia, are also highly disabling and appear to decrease Darwinian fitness. Models used in evolutionary genetics to understand why genetic variation exists in fitness-related traits can be used to understand why risk alleles for psychiatric disorders persist in the population. The usual explanation for species-typical adaptations-natural selection-is less useful for understanding individual differences in genetic risk to disorders. Rather, two other types of models, mutation-selection-drift and balancing selection, offer frameworks for understanding why genetic variation in risk to psychiatric (and other) disorders exists, and each makes predictions that are now testable using whole-genome data. Second, species-typical capacities to mount reactions to negative events are likely to have been crafted by natural selection to minimize fitness loss. The pain reaction to tissue damage is almost certainly such an example, but it has been argued that the capacity to experience depressive symptoms such as sadness, anhedonia, crying, and fatigue in the face of adverse life situations may have been crafted by natural selection as well. I review the rationale and strength of evidence for this hypothesis. Evolutionary hypotheses of psychiatric disorders are important not only for offering explanations for why psychiatric disorders exist, but also for generating new, testable hypotheses and understanding how best to design studies and analyze data.

  3. Population-Based in Vitro Hazard and Concentration–Response Assessment of Chemicals: The 1000 Genomes High-Throughput Screening Study

    PubMed Central

    Abdo, Nour; Xia, Menghang; Brown, Chad C.; Kosyk, Oksana; Huang, Ruili; Sakamuru, Srilatha; Zhou, Yi-Hui; Jack, John R.; Gallins, Paul; Xia, Kai; Li, Yun; Chiu, Weihsueh A.; Motsinger-Reif, Alison A.; Austin, Christopher P.; Tice, Raymond R.

    2015-01-01

    Background: Understanding of human variation in toxicity to environmental chemicals remains limited, so human health risk assessments still largely rely on a generic 10-fold factor (10^(1/2) each for toxicokinetics and toxicodynamics) to account for sensitive individuals or subpopulations. Objectives: We tested a hypothesis that population-wide in vitro cytotoxicity screening can rapidly inform both the magnitude of and molecular causes for interindividual toxicodynamic variability. Methods: We used 1,086 lymphoblastoid cell lines from the 1000 Genomes Project, representing nine populations from five continents, to assess variation in cytotoxic response to 179 chemicals. Analysis included assessments of population variation and heritability, and genome-wide association mapping, with attention to phenotypic relevance to human exposures. Results: For about half the tested compounds, cytotoxic response in the 1% most “sensitive” individual occurred at concentrations within a factor of 10^(1/2) (i.e., approximately 3) of that in the median individual; however, for some compounds, this factor was > 10. Genetic mapping suggested important roles for variation in membrane and transmembrane genes, with a number of chemicals showing association with SNP rs13120371 in the solute carrier SLC7A11, previously implicated in chemoresistance. Conclusions: This experimental approach fills critical gaps unaddressed by recent large-scale toxicity testing programs, providing quantitative, experimentally based estimates of human toxicodynamic variability, and also testable hypotheses about mechanisms contributing to interindividual variation. Citation: Abdo N, Xia M, Brown CC, Kosyk O, Huang R, Sakamuru S, Zhou YH, Jack JR, Gallins P, Xia K, Li Y, Chiu WA, Motsinger-Reif AA, Austin CP, Tice RR, Rusyn I, Wright FA. 2015. Population-based in vitro hazard and concentration–response assessment of chemicals: the 1000 Genomes high-throughput screening study. Environ Health Perspect 123:458–466; http://dx.doi.org/10.1289/ehp.1408775 PMID:25622337

  4. Earthquake forecasts for the CSEP Japan experiment based on the RI algorithm

    NASA Astrophysics Data System (ADS)

    Nanjo, K. Z.

    2011-03-01

    An earthquake forecast testing experiment for Japan, the first of its kind, is underway within the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) under a controlled environment. Here we give an overview of the earthquake forecast models, based on the RI algorithm, which we have submitted to the CSEP Japan experiment. Models have been submitted to a total of 9 categories, corresponding to 3 testing classes (3 years, 1 year, and 3 months) and 3 testing regions. The RI algorithm is originally a binary forecast system based on the working assumption that large earthquakes are more likely to occur in the future at locations of higher seismicity in the past. It is based on simple counts of the number of past earthquakes, which is called the Relative Intensity (RI) of seismicity. To improve its forecast performance, we first expand the RI algorithm by introducing spatial smoothing. We then convert the RI representation from a binary system to a CSEP-testable model that produces forecasts for the number of earthquakes of predefined magnitudes. We use information on past seismicity to tune the parameters. The final submittal consists of 36 executable computer codes: 4 variants corresponding to different smoothing parameters for each of the 9 categories. They will help to elucidate which categories and which smoothing parameters are the most meaningful for the RI hypothesis. The main purpose of our participation in the experiment is to better understand the significance of the relative intensity of seismicity for earthquake forecastability in Japan.
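
    A hedged, minimal version of the RI-style forecast construction described above is sketched below: past epicenters are counted on a grid, smoothed spatially, and normalized to an expected number of target earthquakes. Cell size, smoothing length, and catalog format are illustrative assumptions, not the submitted models' settings.

    ```python
    # Hedged sketch: a Relative-Intensity-style forecast built from counts of past
    # earthquakes with Gaussian spatial smoothing. All values are illustrative.
    import math
    from collections import defaultdict

    def ri_forecast(catalog, cell=0.1, sigma=0.3, n_expected=100.0):
        """catalog: iterable of (lon, lat) epicenters of past earthquakes."""
        counts = defaultdict(float)
        for lon, lat in catalog:
            counts[(round(lon / cell), round(lat / cell))] += 1.0

        # Gaussian spatial smoothing of the raw counts over a small neighborhood.
        smoothed = defaultdict(float)
        for (i, j), c in counts.items():
            for di in range(-3, 4):
                for dj in range(-3, 4):
                    d2 = (di * cell) ** 2 + (dj * cell) ** 2
                    smoothed[(i + di, j + dj)] += c * math.exp(-d2 / (2 * sigma ** 2))

        total = sum(smoothed.values())
        # Scale so the map sums to the expected number of target earthquakes.
        return {k: n_expected * v / total for k, v in smoothed.items()}

    if __name__ == "__main__":
        toy_catalog = [(139.7, 35.7), (139.8, 35.6), (141.0, 38.3)]
        rates = ri_forecast(toy_catalog)
        print(max(rates.items(), key=lambda kv: kv[1]))
    ```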

  5. Is health care infected by Baumol's cost disease? Test of a new model.

    PubMed

    Atanda, Akinwande; Menclova, Andrea Kutinova; Reed, W Robert

    2018-05-01

    Rising health care costs are a policy concern across the Organisation for Economic Co-operation and Development, and relatively little consensus exists concerning their causes. One explanation that has received revived attention is Baumol's cost disease (BCD). However, developing a theoretically appropriate test of BCD has been a challenge. In this paper, we construct a 2-sector model firmly based on Baumol's axioms. We then derive several testable propositions. In particular, the model predicts that (a) the share of total labor employed in the health care sector and (b) the relative price index of the health and non-health care sectors should both be positively related to economy-wide productivity. The model also predicts that (c) the share of labor in the health sector will be negatively related and (d) the ratio of prices in the health and non-health sectors unrelated, to the demand for non-health services. Using annual data from 28 Organisation for Economic Co-operation and Development countries over the years 1995-2016 and from 14 U.S. industry groups over the years 1947-2015, we find little evidence to support the predictions of BCD once we address spurious correlation due to coincident trending and other econometric issues. Copyright © 2018 John Wiley & Sons, Ltd.
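
    For orientation, the canonical two-sector Baumol setup behind such tests can be written as follows; this is generic notation and a hedged sketch, not necessarily the authors' own specification. Labor productivity grows at rate g in the progressive sector and is stagnant in the health sector, wages track progressive-sector productivity, and the relative price of health care therefore rises with economy-wide productivity.

    ```latex
    % Hedged sketch of the canonical Baumol two-sector relations.
    y_{p,t} = a\,e^{g t} L_{p,t}, \qquad y_{h,t} = b\, L_{h,t}, \qquad w_t = a\,e^{g t},
    \qquad
    \frac{p_{h,t}}{p_{p,t}} \;=\; \frac{w_t / b}{\,w_t / (a\,e^{g t})\,} \;=\; \frac{a}{b}\, e^{g t}.
    ```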

  6. `The Wildest Speculation of All': Lemaître and the Primeval-Atom Universe

    NASA Astrophysics Data System (ADS)

    Kragh, Helge

    Although there is no logical connection between the expanding universe and the idea of a big bang, from a historical perspective the two concepts were intimately connected. Four years after his pioneering work on the expanding universe, Lemaître suggested that the entire universe had originated in a kind of explosive act from what he called a primeval atom and which he likened to a huge atomic nucleus. His theory of 1931 was the first realistic finite-age model based upon relativistic cosmology, but it presupposed a material proto-universe and thus avoided an initial singularity. What were the sources of Lemaître's daring proposal? Well aware that his new cosmological model needed to have testable consequences, he argued that the cosmic rays were fossils of the original radioactive explosion. However, this hypothesis turned out to be untenable. The first big-bang model ever was received with a mixture of indifference and hostility. Why? The answer is not that contemporary cosmologists failed to recognize Lemaître's genius, but rather that his model was scientifically unconvincing. Although Lemaître was indeed the father of big-bang cosmology, his brilliant idea was only turned into a viable cosmological theory by later physicists.

  7. Stalk model of membrane fusion: solution of energy crisis.

    PubMed Central

    Kozlovsky, Yonathan; Kozlov, Michael M

    2002-01-01

    Membrane fusion proceeds via formation of intermediate nonbilayer structures. The stalk model of fusion intermediate is commonly recognized to account for the major phenomenology of the fusion process. However, in its current form, the stalk model poses a challenge. On one hand, it is able to describe qualitatively the modulation of the fusion reaction by the lipid composition of the membranes. On the other, it predicts very large values of the stalk energy, so that the related energy barrier for fusion cannot be overcome by membranes within a biologically reasonable span of time. We suggest a new structure for the fusion stalk, which resolves the energy crisis of the model. Our approach is based on a combined deformation of the stalk membrane including bending of the membrane surface and tilt of the hydrocarbon chains of lipid molecules. We demonstrate that the energy of the fusion stalk is a few times smaller than those predicted previously and the stalks are feasible in real systems. We account quantitatively for the experimental results on dependence of the fusion reaction on the lipid composition of different membrane monolayers. We analyze the dependence of the stalk energy on the distance between the fusing membranes and provide the experimentally testable predictions for the structural features of the stalk intermediates. PMID:11806930

  8. Cardiac rehabilitation delivery model for low-resource settings.

    PubMed

    Grace, Sherry L; Turk-Adawi, Karam I; Contractor, Aashish; Atrey, Alison; Campbell, Norm; Derman, Wayne; Melo Ghisi, Gabriela L; Oldridge, Neil; Sarkar, Bidyut K; Yeo, Tee Joo; Lopez-Jimenez, Francisco; Mendis, Shanthi; Oh, Paul; Hu, Dayi; Sarrafzadegan, Nizal

    2016-09-15

    Cardiovascular disease is a global epidemic, which is largely preventable. Cardiac rehabilitation (CR) is demonstrated to be cost-effective and efficacious in high-income countries. CR could represent an important approach to mitigate the epidemic of cardiovascular disease in lower-resource settings. The purpose of this consensus statement was to review low-cost approaches to delivering the core components of CR, to propose a testable model of CR which could feasibly be delivered in middle-income countries. A literature review regarding delivery of each core CR component, namely: (1) lifestyle risk factor management (ie, physical activity, diet, tobacco and mental health), (2) medical risk factor management (eg, lipid control, blood pressure control), (3) education for self-management and (4) return to work, in low-resource settings was undertaken. Recommendations were developed based on identified articles, using a modified GRADE approach where evidence in a low-resource setting was available, or consensus where evidence was not. Available data on cost of CR delivery in low-resource settings suggests it is not feasible to deliver CR in low-resource settings as is delivered in high-resource ones. Strategies which can be implemented to deliver all of the core CR components in low-resource settings were summarised in practice recommendations, and approaches to patient assessment proffered. It is suggested that CR be adapted by delivery by non-physician healthcare workers, in non-clinical settings. Advocacy to achieve political commitment for broad delivery of adapted CR services in low-resource settings is needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  9. Oculomotor learning revisited: a model of reinforcement learning in the basal ganglia incorporating an efference copy of motor actions

    PubMed Central

    Fee, Michale S.

    2012-01-01

    In its simplest formulation, reinforcement learning is based on the idea that if an action taken in a particular context is followed by a favorable outcome, then, in the same context, the tendency to produce that action should be strengthened, or reinforced. While reinforcement learning forms the basis of many current theories of basal ganglia (BG) function, these models do not incorporate distinct computational roles for signals that convey context, and those that convey what action an animal takes. Recent experiments in the songbird suggest that vocal-related BG circuitry receives two functionally distinct excitatory inputs. One input is from a cortical region that carries context information about the current “time” in the motor sequence. The other is an efference copy of motor commands from a separate cortical brain region that generates vocal variability during learning. Based on these findings, I propose here a general model of vertebrate BG function that combines context information with a distinct motor efference copy signal. The signals are integrated by a learning rule in which efference copy inputs gate the potentiation of context inputs (but not efference copy inputs) onto medium spiny neurons in response to a rewarded action. The hypothesis is described in terms of a circuit that implements the learning of visually guided saccades. The model makes testable predictions about the anatomical and functional properties of hypothesized context and efference copy inputs to the striatum from both thalamic and cortical sources. PMID:22754501

  10. Oculomotor learning revisited: a model of reinforcement learning in the basal ganglia incorporating an efference copy of motor actions.

    PubMed

    Fee, Michale S

    2012-01-01

    In its simplest formulation, reinforcement learning is based on the idea that if an action taken in a particular context is followed by a favorable outcome, then, in the same context, the tendency to produce that action should be strengthened, or reinforced. While reinforcement learning forms the basis of many current theories of basal ganglia (BG) function, these models do not incorporate distinct computational roles for signals that convey context, and those that convey what action an animal takes. Recent experiments in the songbird suggest that vocal-related BG circuitry receives two functionally distinct excitatory inputs. One input is from a cortical region that carries context information about the current "time" in the motor sequence. The other is an efference copy of motor commands from a separate cortical brain region that generates vocal variability during learning. Based on these findings, I propose here a general model of vertebrate BG function that combines context information with a distinct motor efference copy signal. The signals are integrated by a learning rule in which efference copy inputs gate the potentiation of context inputs (but not efference copy inputs) onto medium spiny neurons in response to a rewarded action. The hypothesis is described in terms of a circuit that implements the learning of visually guided saccades. The model makes testable predictions about the anatomical and functional properties of hypothesized context and efference copy inputs to the striatum from both thalamic and cortical sources.

  11. Reliability Analysis of a Green Roof Under Different Storm Scenarios

    NASA Astrophysics Data System (ADS)

    William, R. K.; Stillwell, A. S.

    2015-12-01

    Urban environments continue to face the challenges of localized flooding and decreased water quality brought on by the increasing amount of impervious area in the built environment. Green infrastructure provides an alternative to conventional storm sewer design by using natural processes to filter and store stormwater at its source. However, there are currently few consistent standards available in North America to ensure that installed green infrastructure is performing as expected. This analysis offers a method for characterizing green roof failure using a visual aid commonly used in earthquake engineering: fragility curves. We adapted the concept of the fragility curve based on the efficiency in runoff reduction provided by a green roof compared to a conventional roof under different storm scenarios. We used the 2D distributed surface water-groundwater coupled model MIKE SHE to simulate the impact that a real green roof might have on runoff in different storm events, and then employed a multiple regression analysis to generate an algebraic demand model that was input into the Matlab-based reliability analysis model FERUM, which was used to calculate the probability of failure. The use of reliability analysis as a part of green infrastructure design codes can provide insights into green roof weaknesses and areas for improvement. It also supports the design of codes that are more resilient than current standards and are easily testable for failure. Finally, the understanding of the reliability of a single green roof module under different scenarios can support holistic testing of system reliability.
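
    As an illustration of the fragility-curve idea adapted here from earthquake engineering, the snippet below evaluates a lognormal fragility function giving the probability that a roof fails to meet a runoff-reduction target at a given storm intensity. The functional form, parameter values and storm intensities are placeholders; in the study the demand model comes from MIKE SHE simulations and the failure probability from FERUM, not from this closed form.

      import numpy as np
      from scipy.stats import norm

      def fragility(intensity, median, beta):
          """Lognormal fragility curve: P(failure to meet the runoff-reduction
          target) as a function of storm intensity (e.g. mm/h). `median` and
          `beta` would be fitted to simulation output; values are illustrative."""
          return norm.cdf(np.log(np.asarray(intensity, dtype=float) / median) / beta)

      storms = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # hypothetical intensities
      print(fragility(storms, median=35.0, beta=0.5))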

  12. A Parameterized Model of Amylopectin Synthesis Provides Key Insights into the Synthesis of Granular Starch

    PubMed Central

    Wu, Alex Chi; Morell, Matthew K.; Gilbert, Robert G.

    2013-01-01

    A core set of genes involved in starch synthesis has been defined by genetic studies, but the complexity of starch biosynthesis has frustrated attempts to elucidate the precise functional roles of the enzymes encoded. The chain-length distribution (CLD) of amylopectin in cereal endosperm is modeled here on the basis that the CLD is produced by concerted actions of three enzyme types: starch synthases, branching and debranching enzymes, including their respective isoforms. The model, together with fitting to experiment, provides four key insights. (1) To generate crystalline starch, defined restrictions on particular ratios of enzymatic activities apply. (2) An independent confirmation of the conclusion, previously reached solely from genetic studies, of the absolute requirement for debranching enzyme in crystalline amylopectin synthesis. (3) The model provides a mechanistic basis for understanding how successive arrays of crystalline lamellae are formed, based on the identification of two independent types of long amylopectin chains, one type remaining in the amorphous lamella, while the other propagates into, and is integral to the formation of, an adjacent crystalline lamella. (4) The model provides a means by which a small number of key parameters defining the core enzymatic activities can be derived from the amylopectin CLD, providing the basis for focusing studies on the enzymatic requirements for generating starches of a particular structure. The modeling approach provides both a new tool to accelerate efforts to understand granular starch biosynthesis and a basis for focusing efforts to manipulate starch structure and functionality using a series of testable predictions based on a robust mechanistic framework. PMID:23762422

  13. The evolution of mimicry under constraints.

    PubMed

    Holen, Øistein Haugsten; Johnstone, Rufus A

    2004-11-01

    The resemblance between mimetic organisms and their models varies from near perfect to very crude. One possible explanation, which has received surprisingly little attention, is that evolution can improve mimicry only at some cost to the mimetic organism. In this article, an evolutionary game theory model of mimicry is presented that incorporates such constraints. The model generates novel and testable predictions. First, Batesian mimics that are very common and/or mimic very weakly defended models should evolve either inaccurate mimicry (by stabilizing selection) or mimetic polymorphism. Second, Batesian mimics that are very common and/or mimic very weakly defended models are more likely to evolve mimetic polymorphism if they encounter predators at high rates and/or are bad at evading predator attacks. The model also examines how cognitive constraints acting on signal receivers may help determine evolutionarily stable levels of mimicry. Surprisingly, improved discrimination abilities among signal receivers may sometimes select for less accurate mimicry.

  14. The evolution of dispersal in a Levins' type metapopulation model.

    PubMed

    Jansen, Vincent A A; Vitalis, Renaud

    2007-10-01

    We study the evolution of the dispersal rate in a metapopulation model with extinction and colonization dynamics, akin to the model as originally described by Levins. To do so we extend the metapopulation model with a description of the within patch dynamics. By means of a separation of time scales we analytically derive a fitness expression from first principles for this model. The fitness function can be written as an inclusive fitness equation (Hamilton's rule). By recasting this equation in a form that emphasizes the effects of competition we show the effect of the local competition and the local population size on the evolution of dispersal. We find that the evolution of dispersal cannot be easily interpreted in terms of avoidance of kin competition, but rather that increased dispersal reduces the competitive ability. Our model also yields a testable prediction in terms of relatedness and life-history parameters.
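
    For reference, Hamilton's rule in its generic form (the specific inclusive-fitness expression derived in the paper is not reproduced in this record) states that a social trait such as increased dispersal is selectively favoured when

      r b - c > 0,

    where r is the relatedness between actor and recipients, b the fitness benefit conferred on recipients, and c the fitness cost to the actor.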

  15. Minimal model linking two great mysteries: Neutrino mass and dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farzan, Yasaman

    2009-10-01

    We present an economic model that establishes a link between neutrino masses and properties of the dark matter candidate. The particle content of the model can be divided into two groups: light particles with masses lighter than the electroweak scale and heavy particles. The light particles, which also include the dark matter candidate, are predicted to show up in low energy experiments such as K → l + missing energy, making the model testable. The heavy sector can show up at the LHC and may give rise to Br(l_i → l_j γ) close to the present bounds. In principle, the new couplings of the model can independently be derived from the data from the LHC and from the information on neutrino masses and lepton flavor violating rare decays, providing the possibility of an intensive cross-check of the model.

  16. The proper treatment of language acquisition and change in a population setting.

    PubMed

    Niyogi, Partha; Berwick, Robert C

    2009-06-23

    Language acquisition maps linguistic experience, primary linguistic data (PLD), onto linguistic knowledge, a grammar. Classically, computational models of language acquisition assume a single target grammar and one PLD source, the central question being whether the target grammar can be acquired from the PLD. However, real-world learners confront populations with variation, i.e., multiple target grammars and PLDs. Removing this idealization has inspired a new class of population-based language acquisition models. This paper contrasts 2 such models. In the first, iterated learning (IL), each learner receives PLD from one target grammar but different learners can have different targets. In the second, social learning (SL), each learner receives PLD from possibly multiple targets, e.g., from 2 parents. We demonstrate that these 2 models have radically different evolutionary consequences. The IL model is dynamically deficient in 2 key respects. First, the IL model admits only linear dynamics and so cannot describe phase transitions, attested rapid changes in languages over time. Second, the IL model cannot properly describe the stability of languages over time. In contrast, the SL model leads to nonlinear dynamics, bifurcations, and possibly multiple equilibria and so suffices to model both the case of stable language populations, mixtures of more than 1 language, as well as rapid language change. The 2 models also make distinct, empirically testable predictions about language change. Using historical data, we show that the SL model more faithfully replicates the dynamics of the evolution of Middle English.

  17. The Synchrotron Shock Model Confronts a "Line of Death" in the BATSE Gamma-Ray Burst Data

    NASA Technical Reports Server (NTRS)

    Preece, Robert D.; Briggs, Michael S.; Mallozzi, Robert S.; Pendleton, Geoffrey N.; Paciesas, W. S.; Band, David L.

    1998-01-01

    The synchrotron shock model (SSM) for gamma-ray burst emission makes a testable prediction: that the observed low-energy power-law photon number spectral index cannot exceed -2/3 (where the photon model is defined with a positive index: dN/dE ∝ E^α). We have collected time-resolved spectral fit parameters for over 100 bright bursts observed by the Burst And Transient Source Experiment on board the Compton Gamma Ray Observatory. Using this database, we find 23 bursts in which the spectral index limit of the SSM is violated. We discuss elements of the analysis methodology that affect the robustness of this result, as well as some of the escape hatches left for the SSM by theory.
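
    The "line of death" test itself is a simple threshold comparison; a sketch under assumed inputs (a list of fitted low-energy photon indices, with the photon model written as dN/dE ∝ E^α) is given below. The index values are made up for illustration.

      import numpy as np

      def violates_ssm(alpha, limit=-2.0/3.0):
          """Flag low-energy photon indices that exceed the synchrotron shock
          model limit alpha <= -2/3."""
          return np.asarray(alpha, dtype=float) > limit

      alphas = [-1.1, -0.5, -0.8, -0.2]      # hypothetical time-resolved fits
      print(violates_ssm(alphas))            # [False  True False  True]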

  18. Complexity, Testability, and Fault Analysis of Digital, Analog, and Hybrid Systems.

    DTIC Science & Technology

    1984-09-30

    E. Moret Table 2. Decision Table. Example 3 ond and third rules are inconsistent, since Raining? Yes No No both could apparently apply when it is...misclassification; a similar to be approach based on game theory was de- p.. - 0.134. scribed in SLAG71 and a third in KULK76. When the class assignments are...to a change from (X, Y)=(I,1) to (X,Y)=(O,O); the second corresponds to a change in the value of the function g=X+Y; and the third corresponds to a

  19. Genetic models of homosexuality: generating testable predictions

    PubMed Central

    Gavrilets, Sergey; Rice, William R

    2006-01-01

    Homosexuality is a common occurrence in humans and other species, yet its genetic and evolutionary basis is poorly understood. Here, we formulate and study a series of simple mathematical models for the purpose of predicting empirical patterns that can be used to determine the form of selection that leads to polymorphism of genes influencing homosexuality. Specifically, we develop theory to make contrasting predictions about the genetic characteristics of genes influencing homosexuality including: (i) chromosomal location, (ii) dominance among segregating alleles and (iii) effect sizes that distinguish between the two major models for their polymorphism: the overdominance and sexual antagonism models. We conclude that the measurement of the genetic characteristics of quantitative trait loci (QTLs) found in genomic screens for genes influencing homosexuality can be highly informative in resolving the form of natural selection maintaining their polymorphism. PMID:17015344

  20. Computational model for living nematic

    NASA Astrophysics Data System (ADS)

    Genkin, Mikhail; Sokolov, Andrey; Lavrentovich, Oleg; Aranson, Igor

    A realization of an active system has been conceived by combining swimming bacteria and a lyotropic nematic liquid crystal. Here, by coupling the well-established and validated model of nematic liquid crystals with the bacterial dynamics, we developed a computational model describing intricate properties of such a living nematic. In faithful agreement with the experiment, the model reproduces the onset of periodic undulation of the nematic director and consequent proliferation of topological defects with the increase in bacterial concentration. It yields a testable prediction on the accumulation and transport of bacteria in the cores of +1/2 topological defects and depletion of bacteria in the cores of -1/2 defects. Our new experiment on motile bacteria suspended in a free-standing liquid crystalline film fully confirmed this prediction. This effect can be used for the capture and manipulation of small amounts of bacteria.

  1. CROSS-DISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: A new mammalian circadian oscillator model including the cAMP module

    NASA Astrophysics Data System (ADS)

    Wang, Jun-Wei; Zhou, Tian-Shou

    2009-12-01

    In this paper, we develop a new mathematical model for the mammalian circadian clock, which incorporates both transcriptional/translational feedback loops (TTFLs) and a cAMP-mediated feedback loop. The model shows that TTFLs and cAMP signalling cooperatively drive the circadian rhythms. It reproduces typical experimental observations with qualitative similarities, e.g. circadian oscillations in constant darkness and entrainment to light-dark cycles. In addition, it can explain the phenotypes of cAMP-mutant and Rev-erbα−/− mutant mice, and help us make an experimentally testable prediction: oscillations may be rescued when arrhythmic mice with constitutively low concentrations of cAMP are crossed with Rev-erbα−/− mutant mice. The model enhances our understanding of the mammalian circadian clockwork from the viewpoint of the entire cell.
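
    A toy set of coupled differential equations may help indicate the structure being described (a transcription-translation feedback loop coupled to a cAMP-like variable). This is only a generic Goodwin-type sketch with arbitrary parameters and variable names, not the published model equations.

      import numpy as np
      from scipy.integrate import solve_ivp

      def clock(t, y, v=1.0, K=1.0, n=8, k=0.4, kc=0.3):
          """m: clock-gene mRNA, p: protein, r: nuclear repressor, c: cAMP-like
          signal. The repressor closes the TTFL; c feeds back on transcription."""
          m, p, r, c = y
          dm = v * K**n / (K**n + r**n) * (1 + kc * c) - k * m
          dp = m - k * p
          dr = p - k * r
          dc = kc * r - k * c
          return [dm, dp, dr, dc]

      sol = solve_ivp(clock, (0.0, 200.0), [0.1, 0.1, 0.1, 0.1], dense_output=True)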

  2. A Review of Diagnostic Techniques for ISHM Applications

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, Ann; Biswas, Gautam; Aaseng, Gordon; Narasimhan, Sriam; Pattipati, Krishna

    2005-01-01

    System diagnosis is an integral part of any Integrated System Health Management application. Diagnostic applications make use of system information from the design phase, such as safety and mission assurance analysis, failure modes and effects analysis, hazards analysis, functional models, fault propagation models, and testability analysis. In modern process control and equipment monitoring systems, topological and analytic models of the nominal system, derived from design documents, are also employed for fault isolation and identification. Depending on the complexity of the monitored signals from the physical system, diagnostic applications may involve straightforward trending and feature extraction techniques to retrieve the parameters of importance from the sensor streams. They also may involve very complex analysis routines, such as signal processing, learning or classification methods to derive the parameters of importance to diagnosis. The process that is used to diagnose anomalous conditions from monitored system signals varies widely across the different approaches to system diagnosis. Rule-based expert systems, case-based reasoning systems, model-based reasoning systems, learning systems, and probabilistic reasoning systems are examples of the many diverse approaches to diagnostic reasoning. Many engineering disciplines have specific approaches to modeling, monitoring and diagnosing anomalous conditions. Therefore, there is no "one-size-fits-all" approach to building diagnostic and health monitoring capabilities for a system. For instance, the conventional approaches to diagnosing failures in rotorcraft applications are very different from those used in communications systems. Further, online and offline automated diagnostic applications are integrated into an operations framework with flight crews, flight controllers and maintenance teams. While the emphasis of this paper is automation of health management functions, striking the correct balance between automated and human-performed tasks is a vital concern.

  3. Emergent Spatial Patterns of Excitatory and Inhibitory Synaptic Strengths Drive Somatotopic Representational Discontinuities and their Plasticity in a Computational Model of Primary Sensory Cortical Area 3b

    PubMed Central

    Grajski, Kamil A.

    2016-01-01

    Mechanisms underlying the emergence and plasticity of representational discontinuities in the mammalian primary somatosensory cortical representation of the hand are investigated in a computational model. The model consists of an input lattice organized as a three-digit hand forward-connected to a lattice of cortical columns each of which contains a paired excitatory and inhibitory cell. Excitatory and inhibitory synaptic plasticity of feedforward and lateral connection weights is implemented as a simple covariance rule and competitive normalization. Receptive field properties are computed independently for excitatory and inhibitory cells and compared within and across columns. Within digit representational zones intracolumnar excitatory and inhibitory receptive field extents are concentric, single-digit, small, and unimodal. Exclusively in representational boundary-adjacent zones, intracolumnar excitatory and inhibitory receptive field properties diverge: excitatory cell receptive fields are single-digit, small, and unimodal; and the paired inhibitory cell receptive fields are bimodal, double-digit, and large. In simulated syndactyly (webbed fingers), boundary-adjacent intracolumnar receptive field properties reorganize to within-representation type; divergent properties are reacquired following syndactyly release. This study generates testable hypotheses for assessment of cortical laminar-dependent receptive field properties and plasticity within and between cortical representational zones. For computational studies, present results suggest that concurrent excitatory and inhibitory plasticity may underlie novel emergent properties. PMID:27504086
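
    The plasticity scheme named in the abstract (a covariance rule followed by competitive normalization) can be written compactly; the lattice sizes and learning rate below are arbitrary, and the sketch omits the excitatory/inhibitory pairing and the feedforward/lateral distinction of the full model.

      import numpy as np

      def covariance_update(W, pre, post, lr=0.01):
          """Covariance rule: weight change proportional to the product of pre-
          and postsynaptic deviations from their mean activities."""
          return W + lr * np.outer(post - post.mean(), pre - pre.mean())

      def competitive_normalize(W, total=1.0):
          """Competitive normalization: rescale each row so that the summed
          synaptic strength onto every cortical unit stays constant."""
          return W * (total / np.abs(W).sum(axis=1, keepdims=True))

      rng = np.random.default_rng(1)
      W = rng.random((20, 48))               # hypothetical columns x input sites
      pre, post = rng.random(48), rng.random(20)
      W = competitive_normalize(covariance_update(W, pre, post))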

  4. N400 ERPs for actions: building meaning in context

    PubMed Central

    Amoruso, Lucía; Gelormini, Carlos; Aboitiz, Francisco; Alvarez González, Miguel; Manes, Facundo; Cardona, Juan F.; Ibanez, Agustín

    2013-01-01

    Converging neuroscientific evidence suggests the existence of close links between language and sensorimotor cognition. Accordingly, during the comprehension of meaningful actions, our brain would recruit semantic-related operations similar to those associated with the processing of language information. Consistent with this view, electrophysiological findings show that the N400 component, traditionally linked to the semantic processing of linguistic material, can also be elicited by action-related material. This review outlines recent data from N400 studies that examine the understanding of action events. We focus on three specific domains, including everyday action comprehension, co-speech gesture integration, and the semantics involved in motor planning and execution. Based on the reviewed findings, we suggest that both negativities (the N400 and the action-N400) reflect a common neurocognitive mechanism involved in the construction of meaning through the expectancies created by previous experiences and current contextual information. To shed light on how this process is instantiated in the brain, a testable contextual fronto-temporo-parietal model is proposed. PMID:23459873

  5. The effect of analytic and experiential modes of thought on moral judgment.

    PubMed

    Kvaran, Trevor; Nichols, Shaun; Sanfey, Alan

    2013-01-01

    According to dual-process theories, moral judgments are the result of two competing processes: a fast, automatic, affect-driven process and a slow, deliberative, reason-based process. Accordingly, these models make clear and testable predictions about the influence of each system. Although a small number of studies have attempted to examine each process independently in the context of moral judgment, no study has yet tried to experimentally manipulate both processes within a single study. In this chapter, a well-established "mode-of-thought" priming technique was used to place participants in either an experiential/emotional or analytic mode while completing a task in which participants provide judgments about a series of moral dilemmas. We predicted that individuals primed analytically would make more utilitarian responses than control participants, while emotional priming would lead to less utilitarian responses. Support was found for both of these predictions. Implications of these findings for dual-process theories of moral judgment will be discussed. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Local Circumnuclear Magnetar Solution to Extragalactic Fast Radio Bursts

    NASA Astrophysics Data System (ADS)

    Pen, Ue-Li; Connor, Liam

    2015-07-01

    We synthesize the known information about fast radio bursts (FRBs) and radio magnetars, and describe an allowed origin near nuclei of external, but non-cosmological, galaxies. This places them at z ≪ 1, within a few hundred megaparsecs. In this scenario, the high dispersion measure (DM) is dominated by the environment of the FRB, modeled on the known properties of the Milky Way center, whose innermost 100 pc provides 1000 pc cm^-3. A radio loud magnetar is known to exist in our galactic center, within ~2 arcsec of Sgr A*. Based on the polarization, DM, and scattering properties of this known magnetar, we extrapolate its properties to those of Crab-like giant pulses and SGR flares and point out their consistency with observed FRBs. We conclude that galactic center magnetars could be the source of FRBs. This scenario is readily testable with very long baseline interferometry measurements as well as with flux count statistics from large surveys such as CHIME or UTMOST.

  7. Application of Adverse Outcome Pathways (AOPs) in Human ...

    EPA Pesticide Factsheets

    The adverse outcome pathway (AOP) framework was developed to help organize and disseminate existing knowledge concerning the means through which specific perturbations of biological pathways can lead to adverse outcomes considered relevant to risk-based regulatory decision-making. Because many fundamental molecular and cellular pathways are conserved across taxa, data from assays that screen chemicals for their ability to interact with specific biomolecular targets can often be credibly applied to a broad range of species, even if the apical outcomes of those perturbations may differ. Information concerning the different trajectories of adversity that molecular initiating events may take in different taxa, life stages, and sexes of organisms can be captured in the form of an AOP network. As an example, AOPs documenting divergent consequences of thyroid peroxidase (TPO) and deiodinase (DIO) inhibition in mammals, amphibians, and fish have been developed. These AOPs provide the foundation for using data from common in vitro assays for TPO or DIO activity to inform both human health and ecological risk assessments. They also provide the foundation for an integrated approach to testing and assessment, where available information and biological understanding can be integrated in order to formulate plausible and testable hypotheses which can be used to target in vivo testing on the endpoints of greatest concern. Application of this AOP knowledge in several different r

  8. Application of adverse outcome pathways (AOPs) in human ...

    EPA Pesticide Factsheets

    The adverse outcome pathway (AOP) framework was developed to help organize and disseminate existing knowledge concerning the means through which specific perturbations of biological pathways can lead to adverse outcomes considered relevant to risk-based regulatory decision-making. Because many fundamental molecular and cellular pathways are conserved across taxa, data from assays that screen chemicals for their ability to interact with specific biomolecular targets can often be credibly applied to a broad range of species, even if the apical outcomes of those perturbations may differ. Information concerning the different trajectories of adversity that molecular initiating events may take in different taxa, life stages, and sexes of organisms can be captured in the form of an AOP network. As an example, AOPs documenting divergent consequences of thyroid peroxidase (TPO) and deiodinase (DIO) inhibition in mammals, amphibians, and fish have been developed. These AOPs provide the foundation for using data from common in vitro assays for TPO or DIO activity to inform both human health and ecological risk assessments. They also provide the foundation for an integrated approach to testing and assessment, where available information and biological understanding can be integrated in order to formulate plausible and testable hypotheses which can be used to target in vivo testing on the endpoints of greatest concern. Application of this AOP knowledge in several different r

  9. Technology for pressure-instrumented thin airfoil models

    NASA Technical Reports Server (NTRS)

    Wigley, David A.

    1988-01-01

    A novel method of airfoil model construction was developed. This Laminated Sheet technique uses 0.8 mm thick sheets of A286 containing a network of pre-formed channels which are vacuum brazed together to form the airfoil. A 6.25 percent model of the X29A canard, which has a 5 percent thick section, was built using this technique. The model contained a total of 96 pressure orifices, 56 in three chordwise rows on the upper surface and 37 in three similar rows on the lower surface. It was tested in the NASA Langley 0.3 m Transonic Cryogenic Tunnel. Unique aerodynamic data was obtained over the full range of temperature and pressure. Part of the data was at transonic Mach numbers and flight Reynolds number. A larger two dimensional model of the NACA 64a-105 airfoil section was also fabricated. Scale up presented some problems, but a testable airfoil was fabricated.

  10. Stochastic and deterministic model of microbial heat inactivation.

    PubMed

    Corradini, Maria G; Normand, Mark D; Peleg, Micha

    2010-03-01

    Microbial inactivation is described by a model based on the changing survival probabilities of individual cells or spores. It is presented in a stochastic and discrete form for small groups, and as a continuous deterministic model for larger populations. If the underlying mortality probability function remains constant throughout the treatment, the model generates first-order ("log-linear") inactivation kinetics. Otherwise, it produces survival patterns that include Weibullian ("power-law") with upward or downward concavity, tailing with a residual survival level, complete elimination, flat "shoulder" with linear or curvilinear continuation, and sigmoid curves. In both forms, the same algorithm or model equation applies to isothermal and dynamic heat treatments alike. Constructing the model does not require assuming a kinetic order or knowledge of the inactivation mechanism. The general features of its underlying mortality probability function can be deduced from the experimental survival curve's shape. Once identified, the function's coefficients, the survival parameters, can be estimated directly from the experimental survival ratios by regression. The model is testable in principle but matching the estimated mortality or inactivation probabilities with those of the actual cells or spores can be a technical challenge. The model is not intended to replace current models to calculate sterility. Its main value, apart from connecting the various inactivation patterns to underlying probabilities at the cellular level, might be in simulating the irregular survival patterns of small groups of cells and spores. In principle, it can also be used for nonthermal methods of microbial inactivation and their combination with heat.
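
    The discrete stochastic form of such a model is easy to state: each surviving cell dies independently at each time step with a (possibly time-dependent) mortality probability. The sketch below is a generic illustration with made-up probabilities, showing how a constant probability yields first-order kinetics while a time-varying one yields curved, non-log-linear survival patterns.

      import numpy as np

      rng = np.random.default_rng(0)

      def survivors(n0, p_mortality, steps=100):
          """Simulate the surviving count when every cell dies independently
          with probability p_mortality(t) at step t."""
          n, counts = n0, [n0]
          for t in range(steps):
              n = rng.binomial(n, 1.0 - p_mortality(t))
              counts.append(n)
          return np.array(counts)

      log_linear = survivors(10_000, lambda t: 0.05)                 # constant -> first order
      curved     = survivors(10_000, lambda t: min(0.002 * t, 0.5))  # time-dependent -> non-linear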

  11. Modelling toehold-mediated RNA strand displacement.

    PubMed

    Šulc, Petr; Ouldridge, Thomas E; Romano, Flavio; Doye, Jonathan P K; Louis, Ard A

    2015-03-10

    We study the thermodynamics and kinetics of an RNA toehold-mediated strand displacement reaction with a recently developed coarse-grained model of RNA. Strand displacement, during which a single strand displaces a different strand previously bound to a complementary substrate strand, is an essential mechanism in active nucleic acid nanotechnology and has also been hypothesized to occur in vivo. We study the rate of displacement reactions as a function of the length of the toehold and temperature and make two experimentally testable predictions: that the displacement is faster if the toehold is placed at the 5' end of the substrate; and that the displacement slows down with increasing temperature for longer toeholds. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  12. Fast proton decay

    NASA Astrophysics Data System (ADS)

    Li, Tianjun; Nanopoulos, Dimitri V.; Walker, Joel W.

    2010-10-01

    We consider proton decay in the testable flipped SU(5)×U(1)X models with TeV-scale vector-like particles which can be realized in free fermionic string constructions and F-theory model building. We significantly improve upon the determination of light threshold effects from prior studies, and perform a fresh calculation of the second loop for the process p→eπ from the heavy gauge boson exchange. The cumulative result is comparatively fast proton decay, with a majority of the most plausible parameter space within reach of the future Hyper-Kamiokande and DUSEL experiments. Because the TeV-scale vector-like particles can be produced at the LHC, we predict a strong correlation between the most exciting particle physics experiments of the coming decade.

  13. Elevated carbon dioxide is predicted to promote coexistence among competing species in a trait-based model

    DOE PAGES

    Ali, Ashehad A.; Medlyn, Belinda E.; Aubier, Thomas G.; ...

    2015-10-06

    Differential species responses to atmospheric CO2 concentration (Ca) could lead to quantitative changes in competition among species and community composition, with flow-on effects for ecosystem function. However, there has been little theoretical analysis of how elevated Ca (eCa) will affect plant competition, or how composition of plant communities might change. Such theoretical analysis is needed for developing testable hypotheses to frame experimental research. Here, we investigated theoretically how plant competition might change under eCa by implementing two alternative competition theories, resource use theory and resource capture theory, in a plant carbon and nitrogen cycling model. The model makes several novel predictions for the impact of eCa on plant community composition. Using resource use theory, the model predicts that eCa is unlikely to change species dominance in competition, but is likely to increase coexistence among species. Using resource capture theory, the model predicts that eCa may increase community evenness. Collectively, both theories suggest that eCa will favor coexistence and hence that species diversity should increase with eCa. Our theoretical analysis leads to a novel hypothesis for the impact of eCa on plant community composition. In this study, the hypothesis has potential to help guide the design and interpretation of eCa experiments.

  14. Theory of cortical function

    PubMed Central

    Heeger, David J.

    2017-01-01

    Most models of sensory processing in the brain have a feedforward architecture in which each stage comprises simple linear filtering operations and nonlinearities. Models of this form have been used to explain a wide range of neurophysiological and psychophysical data, and many recent successes in artificial intelligence (with deep convolutional neural nets) are based on this architecture. However, neocortex is not a feedforward architecture. This paper proposes a first step toward an alternative computational framework in which neural activity in each brain area depends on a combination of feedforward drive (bottom-up from the previous processing stage), feedback drive (top-down context from the next stage), and prior drive (expectation). The relative contributions of feedforward drive, feedback drive, and prior drive are controlled by a handful of state parameters, which I hypothesize correspond to neuromodulators and oscillatory activity. In some states, neural responses are dominated by the feedforward drive and the theory is identical to a conventional feedforward model, thereby preserving all of the desirable features of those models. In other states, the theory is a generative model that constructs a sensory representation from an abstract representation, like memory recall. In still other states, the theory combines prior expectation with sensory input, explores different possible perceptual interpretations of ambiguous sensory inputs, and predicts forward in time. The theory, therefore, offers an empirically testable framework for understanding how the cortex accomplishes inference, exploration, and prediction. PMID:28167793
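
    At its most schematic level, the combination rule described above can be written as a state-weighted sum of the three drives. The sketch below is only that schematic reduction (the theory itself includes dynamics and nonlinearities), and the weight values are placeholders for the hypothesized neuromodulatory state parameters.

      import numpy as np

      def cortical_response(feedforward, feedback, prior, state=(1.0, 0.0, 0.0)):
          """Neural activity as a state-weighted combination of feedforward,
          feedback and prior drive; state = (a, b, c) are the state parameters."""
          a, b, c = state
          return a * feedforward + b * feedback + c * prior

      x = np.array([0.2, 0.8, 0.1])
      # pure feedforward state: behaves like a conventional feedforward model
      print(cortical_response(x, feedback=np.zeros(3), prior=np.full(3, 0.5), state=(1.0, 0.0, 0.0)))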

  15. Description and Validation of a Dynamical Systems Model of Presynaptic Serotonin Function: Genetic Variation, Brain Activation and Impulsivity

    PubMed Central

    Stoltenberg, Scott F.; Nag, Parthasarathi

    2010-01-01

    Despite more than a decade of empirical work on the role of genetic polymorphisms in the serotonin system on behavior, the details across levels of analysis are not well understood. We describe a mathematical model of the genetic control of presynaptic serotonergic function that is based on control theory, implemented using systems of differential equations, and focused on better characterizing pathways from genes to behavior. We present the results of model validation tests that include the comparison of simulation outcomes with empirical data on genetic effects on brain response to affective stimuli and on impulsivity. Patterns of simulated neural firing were consistent with recent findings of additive effects of serotonin transporter and tryptophan hydroxylase-2 polymorphisms on brain activation. In addition, simulated levels of cerebral spinal fluid 5-hydroxyindoleacetic acid (CSF 5-HIAA) were negatively correlated with Barratt Impulsiveness Scale (Version 11) Total scores in college students (r = −.22, p = .002, N = 187), which is consistent with the well-established negative correlation between CSF 5-HIAA and impulsivity. The results of the validation tests suggest that the model captures important aspects of the genetic control of presynaptic serotonergic function and behavior via brain activation. The proposed model can be: (1) extended to include other system components, neurotransmitter systems, behaviors and environmental influences; (2) used to generate testable hypotheses. PMID:20111992

  16. Sting, Carry and Stock: How Corpse Availability Can Regulate De-Centralized Task Allocation in a Ponerine Ant Colony

    PubMed Central

    Schmickl, Thomas; Karsai, Istvan

    2014-01-01

    We develop a model to produce plausible patterns of task partitioning in the ponerine ant Ectatomma ruidum based on the availability of living prey and prey corpses. The model is based on the organizational capabilities of a “common stomach” through which the colony utilizes the availability of a natural (food) substance as a major communication channel to regulate the income and expenditure of the very same substance. This communication channel also has a central role in regulating task partitioning of collective hunting behavior in a supply-and-demand-driven manner. Our model shows that task partitioning of the collective hunting behavior in E. ruidum can be explained by regulation due to a common stomach system. The saturation of the common stomach provides accessible information to individual ants so that they can adjust their hunting behavior accordingly by engaging in or abandoning stinging or transporting tasks. The common stomach is able to establish and to keep stabilized an effective mix of workforce to exploit the prey population and to transport food into the nest. This system is also able to react to external perturbations in a de-centralized homeostatic way, such as to changes in the prey density or to accumulation of food in the nest. In case of stable conditions the system develops towards an equilibrium concerning colony size and prey density. Our model shows that organization of work through a common stomach system can allow Ectatomma ruidum to collectively forage for food in a robust, reactive and reliable way. The model is compared to previously published models that followed a different modeling approach. Based on our model analysis we also suggest a series of experiments for which our model gives plausible predictions. These predictions are used to formulate a set of testable hypotheses that should be investigated empirically in future experimentation. PMID:25493558

  17. On testing VLSI chips for the big Viterbi decoder

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.

    1989-01-01

    A general technique that can be used in testing very large scale integrated (VLSI) chips for the Big Viterbi Decoder (BVD) system is described. The test technique is divided into functional testing and fault-coverage testing. The purpose of functional testing is to verify that the design works functionally. Functional test vectors are converted from outputs of software simulations which simulate the BVD functionally. Fault-coverage testing is used to detect and, in some cases, to locate faulty components caused by bad fabrication. This type of testing is useful in screening out bad chips. Finally, design for testability, which is included in the BVD VLSI chip design, is described in considerable detail. Both the observability and controllability of a VLSI chip are greatly enhanced by including the design for the testability feature.

  18. Assessing quality of care for migraineurs: a model health plan measurement set.

    PubMed

    Leas, Brian F; Gagne, Joshua J; Goldfarb, Neil I; Rupnow, Marcia F T; Silberstein, Stephen

    2008-08-01

    Quality of care measures are increasingly important to health plans, purchasers, physicians, and patients. Appropriate measures can be used to assess quality and evaluate improvement and are necessary components of pay-for-performance programs. Despite the broad scope of activity in the development of quality measures, migraine headache has received little attention. Given the enormous costs associated with migraine, especially in terms of lost productivity and preventable health care utilization, health plans could gain from a structured approach to measuring the quality of migraine care their beneficiaries receive. A potential migraine quality measurement set was developed through a review of migraine care literature and guidelines, interviews with leaders in migraine care, health care purchasing, and managed care, and the assembly of an advisory board. The board discussed candidate measures and established consensus on a testable measurement set. Twenty measures were developed, focused primarily on diagnosis and utilization. Areas of utilization include physician visits, emergency department visits, hospitalizations, and imaging. Use of both acute and preventive medications is included. More complex aspects of migraine care are also addressed, including triptan overuse, the relationship between acute and preventive medications, and follow-up after emergency department visits. The measures are currently being tested in health plans to assess their feasibility and value. A compelling case can be made for the development of migraine-specific quality measures for health plans. This effort to develop and test a starter set of measures should lead to new and innovative efforts to assess and improve quality of care for migraineurs.

  19. Modeling extreme drought impacts on terrestrial ecosystems when thresholds are exceeded

    NASA Astrophysics Data System (ADS)

    Holm, J. A.; Rammig, A.; Smith, B.; Medvigy, D.; Lichstein, J. W.; Dukes, J. S.; Allen, C. D.; Beier, C.; Larsen, K. S.; Ficken, C. D.; Pockman, W.; Anderegg, W.; Luo, Y.

    2016-12-01

    Recent IPCC Assessment Reports suggest that, with predicted climate change, future precipitation- and heat-related extreme events will become stronger and more frequent, with potential for prolonged droughts. To prepare for these changes and their impacts, we need to develop a better understanding of terrestrial ecosystem responses to extreme drought events. In particular, we focus here on large-extent and long-lasting extreme drought events with noticeable impacts on the functioning of forested ecosystems. While most ecosystem manipulative experiments have been motivated by ongoing and predicted climate change, the majority only applied relatively moderate droughts, not addressing the "very" extreme tail of these scenarios, i.e. "extreme extremes (EEs)". We explore the response of forest ecosystems to EEs using two demographic-based dynamic global vegetation models (DGVMs) (i.e. ED2, LPJ-GUESS) in which the abundances of different plant functional types, as well as tree size- and age-class structure, are emergent properties of resource competition. We evaluate the models' capabilities to represent extreme drought scenarios (i.e., 50% and 90% reduction in precipitation for 1-year, 2-year, and 4-year drought scenarios) at two dry forested sites: Palo Verde, Costa Rica (i.e. tropical) and EucFACE, Australia (i.e. temperate). Through the DGVM modeling outcomes we determine the following five testable hypotheses for future experiments: 1) Responses to EEs cannot be extrapolated from mild extremes due to plant plasticity and functional composition. 2) Response to EEs depends on functional diversity, trait combinations, and phenology, such that both models predicted that plant biomass did not recover even after 100 years. 3) Mortality from drought reduces the pressure on resources and prevents further damage by subsequent years of drought. 4) Early successional stands are more vulnerable to extreme droughts while older stands are more resilient. 5) Elevated atmospheric CO2 alleviates impacts of extreme droughts while increased temperature exacerbates mortality. This study highlighted a number of questions about our current understanding of EEs and their corresponding thresholds and tipping points, and provides an analysis of confidence in model representation and accuracy of processes related to EEs.

  20. How cognitive heuristics can explain social interactions in spatial movement.

    PubMed

    Seitz, Michael J; Bode, Nikolai W F; Köster, Gerta

    2016-08-01

    The movement of pedestrian crowds is a paradigmatic example of collective motion. The precise nature of individual-level behaviours underlying crowd movements has been subject to a lively debate. Here, we propose that pedestrians follow simple heuristics rooted in cognitive psychology, such as 'stop if another step would lead to a collision' or 'follow the person in front'. In other words, our paradigm explicitly models individual-level behaviour as a series of discrete decisions. We show that our cognitive heuristics produce realistic emergent crowd phenomena, such as lane formation and queuing behaviour. Based on our results, we suggest that pedestrians follow different cognitive heuristics that are selected depending on the context. This differs from the widely used approach of capturing changes in behaviour via model parameters and leads to testable hypotheses on changes in crowd behaviour for different motivation levels. For example, we expect that rushed individuals more often evade to the side and thus display distinct emergent queue formations in front of a bottleneck. Our heuristics can be ranked according to the cognitive effort that is required to follow them. Therefore, our model establishes a direct link between behavioural responses and cognitive effort and thus facilitates a novel perspective on collective behaviour. © 2016 The Author(s).
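
    Because the paradigm models individual behaviour as a series of discrete decisions, the heuristics quoted above translate almost directly into code. The step length, minimum gap and positions below are invented for illustration; the published model is more detailed.

      import numpy as np

      def next_step(position, goal, others, step=0.4, min_gap=0.5):
          """Take one step toward the goal, but 'stop if another step would lead
          to a collision' with any other pedestrian (2-D positions)."""
          direction = goal - position
          direction = direction / np.linalg.norm(direction)
          candidate = position + step * direction
          if any(np.linalg.norm(candidate - o) < min_gap for o in others):
              return position            # stop heuristic
          return candidate               # otherwise advance

      p = next_step(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                    others=[np.array([0.3, 0.0]), np.array([4.0, 1.0])])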

  1. How cognitive heuristics can explain social interactions in spatial movement

    PubMed Central

    Köster, Gerta

    2016-01-01

    The movement of pedestrian crowds is a paradigmatic example of collective motion. The precise nature of individual-level behaviours underlying crowd movements has been subject to a lively debate. Here, we propose that pedestrians follow simple heuristics rooted in cognitive psychology, such as ‘stop if another step would lead to a collision’ or ‘follow the person in front’. In other words, our paradigm explicitly models individual-level behaviour as a series of discrete decisions. We show that our cognitive heuristics produce realistic emergent crowd phenomena, such as lane formation and queuing behaviour. Based on our results, we suggest that pedestrians follow different cognitive heuristics that are selected depending on the context. This differs from the widely used approach of capturing changes in behaviour via model parameters and leads to testable hypotheses on changes in crowd behaviour for different motivation levels. For example, we expect that rushed individuals more often evade to the side and thus display distinct emergent queue formations in front of a bottleneck. Our heuristics can be ranked according to the cognitive effort that is required to follow them. Therefore, our model establishes a direct link between behavioural responses and cognitive effort and thus facilitates a novel perspective on collective behaviour. PMID:27581483

  2. Mineralogical evidence of reduced East Asian summer monsoon rainfall on the Chinese loess plateau during the early Pleistocene interglacials

    NASA Astrophysics Data System (ADS)

    Meng, Xianqiang; Liu, Lianwen; Wang, Xingchen T.; Balsam, William; Chen, Jun; Ji, Junfeng

    2018-03-01

    The East Asian summer monsoon (EASM) is an important component of the global climate system. A better understanding of EASM rainfall variability in the past can help constrain climate models and better predict the response of EASM to ongoing global warming. The warm early Pleistocene, a potential analog of future climate, is an important period to study EASM dynamics. However, existing monsoon proxies for reconstruction of EASM rainfall during the early Pleistocene fail to disentangle monsoon rainfall changes from temperature variations, complicating the comparison of these monsoon records with climate models. Here, we present three 2.6 million-year-long EASM rainfall records from the Chinese Loess Plateau (CLP) based on carbonate dissolution, a novel proxy for rainfall intensity. These records show that the interglacial rainfall on the CLP was lower during the early Pleistocene and then gradually increased with global cooling during the middle and late Pleistocene. These results are contrary to previous suggestions that a warmer climate leads to higher monsoon rainfall on tectonic timescales. We propose that the lower interglacial EASM rainfall during the early Pleistocene was caused by reduced sea surface temperature gradients across the equatorial Pacific, providing a testable hypothesis for climate models.

  3. On the IceCube spectral anomaly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palladino, Andrea; Vissani, Francesco; Spurio, Maurizio, E-mail: andrea.palladino@gssi.infn.it, E-mail: maurizio.spurio@bo.infn.it, E-mail: francesco.vissani@lngs.infn.it

    Recently it was noted that different IceCube datasets are not consistent with the same power law spectrum of the cosmic neutrinos: this is the IceCube spectral anomaly, which suggests that they observe a multicomponent spectrum. In this work, the main possibilities to enhance the description in terms of a single extragalactic neutrino component are examined. The hypothesis of a sizable contribution of Galactic high-energy neutrino events distributed as E^-2.7 [Astrophys. J. 826 (2016) 185] is critically analyzed and its natural generalization is considered. The stability of the expectations is studied by introducing free parameters, motivated by theoretical considerations and observational facts. The upgraded model here examined has 1) a Galactic component with different normalization and shape E^-2.4; 2) an extragalactic neutrino spectrum based on new data; 3) a non-zero prompt component of atmospheric neutrinos. The two key predictions of the model concern the 'high-energy starting events' collected from the Southern sky. The Galactic component produces a softer spectrum and a testable angular anisotropy. A second, radically different class of models, where the second component is instead isotropic, plausibly extragalactic and with a relatively soft spectrum, is disfavored instead by existing observations of muon neutrinos from the Northern sky and below a few 100 TeV.

  4. On the IceCube spectral anomaly

    NASA Astrophysics Data System (ADS)

    Palladino, Andrea; Spurio, Maurizio; Vissani, Francesco

    2016-12-01

    Recently it was noted that different IceCube datasets are not consistent with the same power law spectrum of the cosmic neutrinos: this is the IceCube spectral anomaly, which suggests that they observe a multicomponent spectrum. In this work, the main possibilities to enhance the description in terms of a single extragalactic neutrino component are examined. The hypothesis of a sizable contribution of Galactic high-energy neutrino events distributed as E^-2.7 [Astrophys. J. 826 (2016) 185] is critically analyzed and its natural generalization is considered. The stability of the expectations is studied by introducing free parameters, motivated by theoretical considerations and observational facts. The upgraded model here examined has 1) a Galactic component with different normalization and shape E^-2.4; 2) an extragalactic neutrino spectrum based on new data; 3) a non-zero prompt component of atmospheric neutrinos. The two key predictions of the model concern the `high-energy starting events' collected from the Southern sky. The Galactic component produces a softer spectrum and a testable angular anisotropy. A second, radically different class of models, where the second component is instead isotropic, plausibly extragalactic and with a relatively soft spectrum, is disfavored instead by existing observations of muon neutrinos from the Northern sky and below a few 100 TeV.

  5. A computational approach to negative priming

    NASA Astrophysics Data System (ADS)

    Schrobsdorff, H.; Ihrke, M.; Kabisch, B.; Behrendt, J.; Hasselhorn, M.; Herrmann, J. Michael

    2007-09-01

    Priming is characterized by a sensitivity of reaction times to the sequence of stimuli in psychophysical experiments. The reduction of the reaction time observed in positive priming is well-known and experimentally understood (Scarborough et al., J. Exp. Psychol.: Hum. Percept. Perform., 3, pp. 1-17, 1977). Negative priming—the opposite effect—is experimentally less tangible (Fox, Psychonom. Bull. Rev., 2, pp. 145-173, 1995). Its dependence on subtle parameter changes (such as the response-stimulus interval) varies across studies. The sensitivity of the negative priming effect bears great potential for applications in research in fields such as memory, selective attention, and ageing effects. We develop and analyse a computational realization, CISAM, of a recent psychological model for action decision making, the ISAM (Kabisch, PhD thesis, Friedrich-Schiller-Universität, 2003), which is sensitive to priming conditions. With the dynamical systems approach of the CISAM, we show that a single adaptive threshold mechanism is sufficient to explain both positive and negative priming effects. This is achieved by comparing results obtained by the computational modelling with experimental data from our laboratory. The implementation provides a rich base from which testable predictions can be derived, e.g. with respect to hitherto untested stimulus combinations (e.g. single-object trials).

  6. "Role magnets"? An empirical investigation of popularity trajectories for life-course persistent individuals during adolescence.

    PubMed

    Young, Jacob T N

    2014-01-01

    Recent scholarship has focused on the role of social status in peer groups to explain the fact that delinquency is disproportionately committed during adolescence. Yet, the precise mechanism linking adolescence, social status, and antisocial behavior is not well understood. Dual-taxonomy postulates a testable mechanism that links the sudden increase in risky behavior among adolescents to the social magnetism of a small group of persistently antisocial individuals, referred to here as the "role magnet" hypothesis. Using semi-parametric group-based trajectory modeling and growth-curve modeling, this study provides the first test of this hypothesis by examining physical violence and popularity trajectories for 1,845 male respondents age 11-32 from a nationally representative sample (54 % non-Hispanic White; 21 % non-Hispanic African American; 17 % Hispanic; 8 % Asian). Individuals assigned to a "chronic violence" trajectory group showed consistently lower average levels of popularity from 11 to 19. But, these same individuals experienced increases in popularity during early adolescence and subsequent declines in late adolescence. These findings are linked to current research examining social status as a mechanism generating antisocial behavior during adolescence and the consequences of delayed entry into adult roles.

  7. The genomic response of skeletal muscle to methylprednisolone using microarrays: tailoring data mining to the structure of the pharmacogenomic time series

    PubMed Central

    DuBois, Debra C; Piel, William H; Jusko, William J

    2008-01-01

    High-throughput data collection using gene microarrays has great potential as a method for addressing the pharmacogenomics of complex biological systems. Similarly, mechanism-based pharmacokinetic/pharmacodynamic modeling provides a tool for formulating quantitative testable hypotheses concerning the responses of complex biological systems. As the response of such systems to drugs generally entails cascades of molecular events in time, a time series design provides the best approach to capturing the full scope of drug effects. A major problem in using microarrays for high-throughput data collection is sorting through the massive amount of data in order to identify probe sets and genes of interest. Due to its inherent redundancy, a rich time series containing many time points and multiple samples per time point allows for the use of less stringent criteria of expression, expression change and data quality for initial filtering of unwanted probe sets. The remaining probe sets can then become the focus of more intense scrutiny by other methods, including temporal clustering, functional clustering and pharmacokinetic/pharmacodynamic modeling, which provide additional ways of identifying the probes and genes of pharmacological interest. PMID:15212590

  8. The Long and Viscous Road: Uncovering Nuclear Diffusion Barriers in Closed Mitosis

    PubMed Central

    Zavala, Eder; Marquez-Lago, Tatiana T.

    2014-01-01

    Diffusion barriers are effective means for constraining protein lateral exchange in cellular membranes. In Saccharomyces cerevisiae, they have been shown to sustain parental identity through asymmetric segregation of ageing factors during closed mitosis. Even though barriers have been extensively studied in the plasma membrane, their identity and organization within the nucleus remain poorly understood. Based on different lines of experimental evidence, we present a model of the composition and structural organization of a nuclear diffusion barrier during anaphase. By means of spatial stochastic simulations, we propose how specialised lipid domains, protein rings, and morphological changes of the nucleus may coordinate to restrict protein exchange between mother and daughter nuclear lobes. We explore distinct, plausible configurations of these diffusion barriers and offer testable predictions regarding their protein exclusion properties and the diffusion regimes they generate. Our model predicts that, while a specialised lipid domain and an immobile protein ring at the bud neck can compartmentalize the nucleus during early anaphase, a specialised lipid domain spanning the elongated bridge between lobes would be entirely sufficient during late anaphase. Our work shows how complex nuclear diffusion barriers in closed mitosis may arise from simple nanoscale biophysical interactions. PMID:25032937

  9. Higher-order Fourier analysis over finite fields and applications

    NASA Astrophysics Data System (ADS)

    Hatami, Pooya

    Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where the traditional Fourier analysis comes short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N, is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and assuming that |F_p| is sufficiently large, they essentially are equivalent to either a Gowers norm or L_p norms.
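
    For reference, the Gowers uniformity norm that motivates the final part of the abstract has the standard definition (quoted from the general literature, not from this thesis): for f : F_p^n → C,

      \[
        \|f\|_{U^k}^{2^k} \;=\; \mathbb{E}_{x,\,h_1,\dots,h_k \in \mathbb{F}_p^n}\;
        \prod_{\omega \in \{0,1\}^k} \mathcal{C}^{|\omega|}\,
        f\!\left(x + \omega_1 h_1 + \cdots + \omega_k h_k\right),
      \]

    where \mathcal{C} denotes complex conjugation and |\omega| = \omega_1 + \cdots + \omega_k counts the ones in \omega. The "norms defined by systems of linear forms" studied at the end of the abstract replace the parallelepiped forms x + \omega_1 h_1 + \cdots + \omega_k h_k by a general system of linear forms in the averaged variables.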

  10. Vacuum stability and naturalness in type-II seesaw

    DOE PAGES

    Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika; ...

    2016-06-16

    Here, we study the vacuum stability and perturbativity conditions in the minimal type-II seesaw model. These conditions give characteristic constraints to the model parameters. In the model, there is an SU(2)_L triplet scalar field, which could cause a large Higgs mass correction. From the naturalness point of view, heavy Higgs masses should be lower than 350 GeV, which may be testable by the LHC Run-II results. Due to the effects of the triplet scalar field, the branching ratios of the Higgs decay (h → γγ, Zγ) deviate from the standard model, and a large parameter region is excluded by the recent ATLAS and CMS combined analysis of h → γγ. Our result for the signal strength of h → γγ is R_γγ ≲ 1.1, but its deviation is too small to observe at the LHC experiment.

  11. Strength and Vulnerability Integration (SAVI): A Model of Emotional Well-Being Across Adulthood

    PubMed Central

    Charles, Susan Turk

    2010-01-01

    The following paper presents the theoretical model of Strength and Vulnerability Integration (SAVI) to explain factors that influence emotion regulation and emotional well-being across adulthood. The model posits that trajectories of adult development are marked by age-related enhancement in the use of strategies that serve to avoid or limit exposure to negative stimuli, but age-related vulnerabilities in situations that elicit high levels of sustained emotional arousal. When older adults avoid or reduce exposure to emotional distress, they often respond better than younger adults; when they experience high levels of sustained emotional arousal, however, age-related advantages in emotional well-being are attenuated, and older adults are hypothesized to have greater difficulties returning to homeostasis. SAVI provides a testable model to understand the literature on emotion and aging and to predict trajectories of emotional experience across the adult life span. PMID:21038939

  12. Strength and vulnerability integration: a model of emotional well-being across adulthood.

    PubMed

    Charles, Susan Turk

    2010-11-01

    The following article presents the theoretical model of strength and vulnerability integration (SAVI) to explain factors that influence emotion regulation and emotional well-being across adulthood. The model posits that trajectories of adult development are marked by age-related enhancement in the use of strategies that serve to avoid or limit exposure to negative stimuli but by age-related vulnerabilities in situations that elicit high levels of sustained emotional arousal. When older adults avoid or reduce exposure to emotional distress, they often respond better than younger adults; when they experience high levels of sustained emotional arousal, however, age-related advantages in emotional well-being are attenuated, and older adults are hypothesized to have greater difficulties returning to homeostasis. SAVI provides a testable model to understand the literature on emotion and aging and to predict trajectories of emotional experience across the adult life span.

  13. Random blebbing motion: A simple model linking cell structural properties to migration characteristics.

    PubMed

    Woolley, Thomas E; Gaffney, Eamonn A; Goriely, Alain

    2017-07-01

    If the plasma membrane of a cell is able to delaminate locally from its actin cortex, a cellular bleb can be produced. Blebs are pressure-driven protrusions, which are noteworthy for their ability to produce cellular motion. Starting from a general continuum mechanics description, we restrict ourselves to considering cell and bleb shapes that maintain approximately spherical forms. From this assumption, we obtain a tractable algebraic system for bleb formation. By including cell-substrate adhesions, we can model blebbing cell motility. Further, by considering mechanically isolated blebbing events, which are randomly distributed over the cell, we can derive equations linking the macroscopic migration characteristics to the microscopic structural parameters of the cell. This multiscale modeling framework is then used to provide parameter estimates, which are in agreement with current experimental data. In summary, the construction of the mathematical model provides testable relationships between the bleb size and cell motility.

  14. Reheating predictions in gravity theories with derivative coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalianis, Ioannis; Koutsoumbas, George; Ntrekis, Konstantinos

    2017-02-01

    We investigate the inflationary predictions of a simple Horndeski theory where the inflaton scalar field has a non-minimal derivative coupling (NMDC) to the Einstein tensor. The NMDC is well motivated for the construction of successful models of inflation; nevertheless, its inflationary predictions are not observationally distinct. We show that it is possible to probe the effects of the NMDC on the CMB observables by taking into account both the dynamics of the inflationary slow-roll phase and the subsequent reheating. We perform a comparative study between representative inflationary models with canonical fields minimally coupled to gravity and models with NMDC. We find that the inflation models with dominant NMDC generically predict a higher reheating temperature and a different range for the tilt of the scalar perturbation spectrum n_s and the tensor-to-scalar ratio r, potentially testable by current and future CMB experiments.

  15. A simple testable model of baryon number violation: Baryogenesis, dark matter, neutron-antineutron oscillation and collider signals

    NASA Astrophysics Data System (ADS)

    Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar

    2018-04-01

    We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low and high-energy processes, and in particular, predicts a sizable rate for the neutron-antineutron (n-n̄) oscillation at low energy and the monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, ratio of dark matter and baryon abundances, n-n̄ oscillation lifetime and the LHC monojet signal. There are regions in the parameter space where the n-n̄ oscillation lifetime is found to be more constraining than the LHC constraints, which illustrates the importance of the next-generation n-n̄ oscillation experiments.

  16. Neuromechanics of crawling in D. melanogaster larvae

    NASA Astrophysics Data System (ADS)

    Pehlevan, Cengiz; Paoletti, Paolo; Mahadevan, L.

    2015-03-01

    Nervous system, body and environment interact in non-trivial ways to generate locomotion and thence behavior in an organism. Here we present a minimal integrative mathematical model to describe the simple behavior of forward crawling in Drosophila larvae. Our model couples the excitation-inhibition circuits in the nervous system to force production in the muscles and body movement in a frictional environment, which in turn leads to a proprioceptive signal that feeds back to the nervous system. Our results explain the basic observed phenomenology of crawling with or without proprioception, and elucidate the stabilizing role of proprioception in crawling with respect to external and internal perturbations. Our integrated approach allows us to make testable predictions on the effect of changing body-environment interactions on crawling, and serves as a substrate for the development of hierarchical models linking cellular processes to behavior.

  17. Coexistence of Reward and Unsupervised Learning During the Operant Conditioning of Neural Firing Rates

    PubMed Central

    Kerr, Robert R.; Grayden, David B.; Thomas, Doreen A.; Gilson, Matthieu; Burkitt, Anthony N.

    2014-01-01

    A fundamental goal of neuroscience is to understand how cognitive processes, such as operant conditioning, are performed by the brain. Typical and well studied examples of operant conditioning, in which the firing rates of individual cortical neurons in monkeys are increased using rewards, provide an opportunity for insight into this. Studies of reward-modulated spike-timing-dependent plasticity (RSTDP), and of other models such as R-max, have reproduced this learning behavior, but they have assumed that no unsupervised learning is present (i.e., no learning occurs without, or independent of, rewards). We show that these models cannot elicit firing rate reinforcement while exhibiting both reward learning and ongoing, stable unsupervised learning. To fix this issue, we propose a new RSTDP model of synaptic plasticity based upon the observed effects that dopamine has on long-term potentiation and depression (LTP and LTD). We show, both analytically and through simulations, that our new model can exhibit unsupervised learning and lead to firing rate reinforcement. This requires that the strengthening of LTP by the reward signal is greater than the strengthening of LTD and that the reinforced neuron exhibits irregular firing. We show the robustness of our findings to spike-timing correlations, to the synaptic weight dependence that is assumed, and to changes in the mean reward. We also consider our model in the differential reinforcement of two nearby neurons. Our model aligns more strongly with experimental studies than previous models and makes testable predictions for future experiments. PMID:24475240
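
    The asymmetry the abstract argues for, reward strengthening LTP more than LTD, can be sketched with a pair-based update rule (Python). This is a generic illustration of reward-modulated STDP, not the authors' specific model; the kernel amplitudes, time constants, and the k_ltp/k_ltd gains are invented.

      import numpy as np

      def stdp_kernel(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
          """Classic pair-based STDP: dt = t_post - t_pre (ms)."""
          if dt >= 0:
              return +a_plus * np.exp(-dt / tau)    # pre-before-post -> LTP
          return -a_minus * np.exp(dt / tau)        # post-before-pre -> LTD

      def reward_modulated_update(dt, reward, k_ltp=1.5, k_ltd=0.5):
          """Reward scales LTP more strongly than LTD (the asymmetry the
          abstract argues is needed for firing-rate reinforcement)."""
          dw = stdp_kernel(dt)
          gain = 1.0 + (k_ltp if dw > 0 else k_ltd) * reward
          return gain * dw

      for dt in (+10.0, -10.0):
          for reward in (0.0, 1.0):
              print(f"dt={dt:+5.1f} ms, reward={reward}: dw = {reward_modulated_update(dt, reward):+.5f}")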

  18. Beyond small molecule SAR – using the dopamine D3 receptor crystal structure to guide drug design

    PubMed Central

    Keck, Thomas M.; Burzynski, Caitlin; Shi, Lei; Newman, Amy Hauck

    2016-01-01

    The dopamine D3 receptor is a target of pharmacotherapeutic interest in a variety of neurological disorders including schizophrenia, restless leg syndrome, and drug addiction. The high protein sequence homology between the D3 and D2 receptors has posed a challenge to developing D3 receptor-selective ligands whose behavioral actions can be attributed to D3 receptor engagement, in vivo. However, through primarily small molecule structure-activity relationship (SAR) studies, a variety of chemical scaffolds have been discovered over the past two decades that have resulted in several D3 receptor-selective ligands with high affinity and in vivo activity. Nevertheless, viable clinical candidates remain limited. The recent determination of the high-resolution crystal structure of the D3 receptor has invigorated structure-based drug design, providing refinements to the molecular dynamic models and testable predictions about receptor-ligand interactions. This review will highlight recent preclinical and clinical studies demonstrating potential utility of D3 receptor-selective ligands in the treatment of addiction. In addition, new structure-based rational drug design strategies for D3 receptor-selective ligands that complement traditional small molecule SAR to improve the selectivity and directed efficacy profiles are examined. PMID:24484980

  19. Improvements to the design process for a real-time passive millimeter-wave imager to be used for base security and helicopter navigation in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Anderton, Rupert N.; Cameron, Colin D.; Burnett, James G.; Güell, Jeff J.; Sanders-Reed, John N.

    2014-06-01

    This paper discusses the design of an improved passive millimeter wave imaging system intended to be used for base security in degraded visual environments. The discussion starts with the selection of the optimum frequency band. The trade-offs between requirements on detection, recognition and identification ranges and optical aperture are discussed with reference to the Johnson Criteria. It is shown that these requirements also affect image sampling, receiver numbers and noise temperature, frame rate, field of view, focusing requirements and mechanisms, and tolerance budgets. The effect of image quality degradation is evaluated and a single testable metric is derived that best describes the effects of degradation on meeting the requirements. The discussion is extended to tolerance budgeting constraints if significant degradation is to be avoided, including surface roughness, receiver position errors and scan conversion errors. Although the reflective twist-polarization imager design proposed is potentially relatively low cost and high performance, there is a significant problem with obscuration of the beam by the receiver array. Methods of modeling this accurately and thus designing for best performance are given.

  20. Reply to ``Comment on `Quantum time-of-flight distribution for cold trapped atoms' ''

    NASA Astrophysics Data System (ADS)

    Ali, Md. Manirul; Home, Dipankar; Majumdar, A. S.; Pan, Alok K.

    2008-02-01

    In their comment, Gomes et al. [Phys. Rev. A 77, 026101 (2008)] have questioned the possibility of empirically testable differences existing between the semiclassical time of flight distribution for cold trapped atoms and a quantum distribution discussed by us recently [Ali et al., Phys. Rev. A 75, 042110 (2007)]. We argue that their criticism is based on a semiclassical treatment having restricted applicability for a particular trapping potential. Their claim does not preclude, in general, the possibility of differences between the semiclassical calculations and fully quantum results for the arrival time distribution of freely falling atoms.

  1. Reply to 'Comment on 'Quantum time-of-flight distribution for cold trapped atoms''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Md. Manirul; Home, Dipankar; Pan, Alok K.

    2008-02-15

    In their comment Gomes et al. [Phys. Rev. A 77, 026101 (2008)] have questioned the possibility of empirically testable differences existing between the semiclassical time of flight distribution for cold trapped atoms and a quantum distribution discussed by us recently [Ali et al., Phys. Rev. A 75, 042110 (2007).]. We argue that their criticism is based on a semiclassical treatment having restricted applicability for a particular trapping potential. Their claim does not preclude, in general, the possibility of differences between the semiclassical calculations and fully quantum results for the arrival time distribution of freely falling atoms.

  2. Nonlinear Fault Diagnosis,

    DTIC Science & Technology

    1981-05-01

    obtained if one first generates a data base in terms of the nominal circuit parameters and then extracts the appropriate symbolic transfer function from... Here we initially allow V0, IC1, VRA, and IE to be taken as test outputs. The measure of testability δmin is used to extract... [Fig. 4, Direct-Coupled Two-Stage Amplifier, and the component-connection equations for Mode 2 analysis appear as matrices in the original document; their numeric entries did not survive text extraction.]

  3. A Unified Theoretical Framework for Cognitive Sequencing.

    PubMed

    Savalia, Tejas; Shukla, Anuj; Bapi, Raju S

    2016-01-01

    The capacity to sequence information is central to human performance. Sequencing ability forms the foundation stone for higher order cognition related to language and goal-directed planning. Information related to the order of items, their timing, chunking and hierarchical organization are important aspects in sequencing. Past research on sequencing has emphasized two distinct and independent dichotomies: implicit vs. explicit and goal-directed vs. habits. We propose a theoretical framework unifying these two streams. Our proposal relies on the brain's ability to implicitly extract statistical regularities from the stream of stimuli and, with attentional engagement, to organize sequences explicitly and hierarchically. Similarly, sequences that need to be assembled purposively to accomplish a goal require engagement of attentional processes. With repetition, these goal-directed plans become habits with concomitant disengagement of attention. Thus, attention and awareness play a crucial role in the implicit-to-explicit transition as well as in how goal-directed plans become automatic habits. Cortico-subcortical loops (basal ganglia-frontal cortex and hippocampus-frontal cortex) mediate the transition process. We show how the computational principles of model-free and model-based learning paradigms, along with a pivotal role for attention and awareness, offer a unifying framework for these two dichotomies. Based on this framework, we make testable predictions related to the potential influence of response-to-stimulus interval (RSI) on developing awareness in implicit learning tasks.

  4. A Unified Theoretical Framework for Cognitive Sequencing

    PubMed Central

    Savalia, Tejas; Shukla, Anuj; Bapi, Raju S.

    2016-01-01

    The capacity to sequence information is central to human performance. Sequencing ability forms the foundation stone for higher order cognition related to language and goal-directed planning. Information related to the order of items, their timing, chunking and hierarchical organization are important aspects in sequencing. Past research on sequencing has emphasized two distinct and independent dichotomies: implicit vs. explicit and goal-directed vs. habits. We propose a theoretical framework unifying these two streams. Our proposal relies on the brain's ability to implicitly extract statistical regularities from the stream of stimuli and, with attentional engagement, to organize sequences explicitly and hierarchically. Similarly, sequences that need to be assembled purposively to accomplish a goal require engagement of attentional processes. With repetition, these goal-directed plans become habits with concomitant disengagement of attention. Thus, attention and awareness play a crucial role in the implicit-to-explicit transition as well as in how goal-directed plans become automatic habits. Cortico-subcortical loops (basal ganglia-frontal cortex and hippocampus-frontal cortex) mediate the transition process. We show how the computational principles of model-free and model-based learning paradigms, along with a pivotal role for attention and awareness, offer a unifying framework for these two dichotomies. Based on this framework, we make testable predictions related to the potential influence of response-to-stimulus interval (RSI) on developing awareness in implicit learning tasks. PMID:27917146

  5. Actinobacteria phylogenomics, selective isolation from an iron oligotrophic environment and siderophore functional characterization, unveil new desferrioxamine traits.

    PubMed

    Cruz-Morales, Pablo; Ramos-Aboites, Hilda E; Licona-Cassani, Cuauhtémoc; Selem-Mójica, Nelly; Mejía-Ponce, Paulina M; Souza-Saldívar, Valeria; Barona-Gómez, Francisco

    2017-09-01

    Desferrioxamines are hydroxamate siderophores widely conserved in both aquatic and soil-dwelling Actinobacteria. While the genetic and enzymatic bases of siderophore biosynthesis and their transport in model families of this phylum are well understood, evolutionary studies are lacking. Here, we perform a comprehensive desferrioxamine-centric (des genes) phylogenomic analysis, which includes the genomes of six novel strains isolated from an iron- and phosphorus-depleted oasis in the Chihuahuan desert of Mexico. Our analyses reveal previously unnoticed desferrioxamine evolutionary patterns, involving both biosynthetic and transport genes, likely to be related to desferrioxamine chemical diversity. The identified patterns were used to postulate experimentally testable hypotheses after phenotypic characterization, including profiling of siderophore production and growth stimulation of co-cultures under iron deficiency. Based on our results, we propose a novel des gene, which we term desG, as responsible for incorporation of phenylacetyl moieties during biosynthesis of previously reported arylated desferrioxamines. Moreover, a genomic-based classification of the siderophore-binding proteins responsible for specific and generalist siderophore assimilation is postulated. This report provides a much-needed evolutionary framework, with specific insights supported by experimental data, to direct the future ecological and functional analysis of desferrioxamines in the environment. © FEMS 2017.

  6. CANcer-specific Evaluation System (CANES): a high-accuracy platform, for preclinical single/multi-biomarker discovery

    PubMed Central

    Kwon, Min-Seok; Nam, Seungyoon; Lee, Sungyoung; Ahn, Young Zoo; Chang, Hae Ryung; Kim, Yon Hui; Park, Taesung

    2017-01-01

    The recent creation of enormous, cancer-related “Big Data” public depositories represents a powerful means for understanding tumorigenesis. However, a consistently accurate system for clinically evaluating single/multi-biomarkers remains lacking, and it has been asserted that oft-failed clinical advancement of biomarkers occurs within the very early stages of biomarker assessment. To address these challenges, we developed a clinically testable, web-based tool, CANcer-specific single/multi-biomarker Evaluation System (CANES), to evaluate biomarker effectiveness, across 2,134 whole transcriptome datasets, from 94,147 biological samples (from 18 tumor types). For user-provided single/multi-biomarkers, CANES evaluates the performance of single/multi-biomarker candidates, based on four classification methods, support vector machine, random forest, neural networks, and classification and regression trees. In addition, CANES offers several advantages over earlier analysis tools, including: 1) survival analysis; 2) evaluation of mature miRNAs as markers for user-defined diagnostic or prognostic purposes; and 3) provision of a “pan-cancer” summary view, based on each single marker. We believe that such “landscape” evaluation of single/multi-biomarkers, for diagnostic therapeutic/prognostic decision-making, will be highly valuable for the discovery and “repurposing” of existing biomarkers (and their specific targeted therapies), leading to improved patient therapeutic stratification, a key component of targeted therapy success for the avoidance of therapy resistance. PMID:29050243
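
    As an illustration of the kind of evaluation described, the sketch below (Python, scikit-learn) scores a synthetic five-gene candidate marker panel with the four classifier families named in the abstract, using cross-validated AUC. The data, parameters, and scoring choice are assumptions for illustration; this is not the CANES interface.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.tree import DecisionTreeClassifier

      # Synthetic "expression" data: 200 samples x 5 candidate marker genes.
      X, y = make_classification(n_samples=200, n_features=5, n_informative=3, random_state=0)

      classifiers = {
          "SVM": SVC(),
          "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
          "Neural network": MLPClassifier(max_iter=2000, random_state=0),
          "CART": DecisionTreeClassifier(random_state=0),
      }

      for name, clf in classifiers.items():
          auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
          print(f"{name:>15}: mean cross-validated AUC = {auc:.3f}")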

  7. Model of the songbird nucleus HVC as a network of central pattern generators

    PubMed Central

    Abarbanel, Henry D. I.

    2016-01-01

    We propose a functional architecture of the adult songbird nucleus HVC in which the core element is a “functional syllable unit” (FSU). In this model, HVC is organized into FSUs, each of which provides the basis for the production of one syllable in vocalization. Within each FSU, the inhibitory neuron population takes one of two operational states: 1) simultaneous firing wherein all inhibitory neurons fire simultaneously, and 2) competitive firing of the inhibitory neurons. Switching between these basic modes of activity is accomplished via changes in the synaptic strengths among the inhibitory neurons. The inhibitory neurons connect to excitatory projection neurons such that during state 1 the activity of projection neurons is suppressed, while during state 2 patterns of sequential firing of projection neurons can occur. The latter state is stabilized by feedback from the projection to the inhibitory neurons. Song composition for specific species is distinguished by the manner in which different FSUs are functionally connected to each other. Ours is a computational model built with biophysically based neurons. We illustrate that many observations of HVC activity are explained by the dynamics of the proposed population of FSUs, and we identify aspects of the model that are currently testable experimentally. In addition, and standing apart from the core features of an FSU, we propose that the transition between modes may be governed by the biophysical mechanism of neuromodulation. PMID:27535375

  8. A forecast experiment of earthquake activity in Japan under Collaboratory for the Study of Earthquake Predictability (CSEP)

    NASA Astrophysics Data System (ADS)

    Hirata, N.; Yokoi, S.; Nanjo, K. Z.; Tsuruoka, H.

    2012-04-01

    One major focus of the current Japanese earthquake prediction research program (2009-2013), which is now integrated with the research program for prediction of volcanic eruptions, is to move toward creating testable earthquake forecast models. For this purpose we started an experiment of forecasting earthquake activity in Japan under the framework of the Collaboratory for the Study of Earthquake Predictability (CSEP) through an international collaboration. We established the CSEP Testing Centre, an infrastructure to encourage researchers to develop testable models for Japan, and to conduct verifiable prospective tests of their model performance. We started the 1st earthquake forecast testing experiment in Japan within the CSEP framework. We use the earthquake catalogue maintained and provided by the Japan Meteorological Agency (JMA). The experiment consists of 12 categories, with 4 testing classes with different time spans (1 day, 3 months, 1 year, and 3 years) and 3 testing regions called "All Japan," "Mainland," and "Kanto." A total of 105 models were submitted, and are currently under the CSEP official suite of tests for evaluating the performance of forecasts. The experiments were completed for 92 rounds for the 1-day, 6 rounds for the 3-month, and 3 rounds for the 1-year classes. For the 1-day testing class, all models passed the CSEP evaluation tests in more than 90% of rounds. The results of the 3-month testing class also gave us new knowledge concerning statistical forecasting models. All models showed a good performance for magnitude forecasting. On the other hand, the observed spatial distribution is hardly consistent with most models when many earthquakes occur at a single spot. We are now preparing a 3-D forecasting experiment with a depth range of 0 to 100 km in the Kanto region. The testing center is improving the evaluation system for the 1-day class so that forecasting and testing results are completed within one day. The first part of a special issue, titled "Earthquake Forecast Testing Experiment in Japan," was published in Earth, Planets and Space, Vol. 63, No. 3, in March 2011. The second part of this issue, now online, will be published soon. An outline of the experiment and the activities of the Japanese Testing Center are published on our web site: http://wwweic.eri.u-tokyo.ac.jp/ZISINyosoku/wiki.en/wiki.cgi

  9. Off-line, built-in test techniques for VLSI circuits

    NASA Technical Reports Server (NTRS)

    Buehler, M. G.; Sievers, M. W.

    1982-01-01

    It is shown that the use of redundant on-chip circuitry improves the testability of an entire VLSI circuit. In the study described here, five techniques applied to a two-bit ripple carry adder are compared. The techniques considered are self-oscillation, self-comparison, partition, scan path, and built-in logic block observer. It is noted that both classical stuck-at faults and nonclassical faults, such as bridging faults (shorts), stuck-on x faults where x may be 0, 1, or vary between the two, and parasitic flip-flop faults occur in IC structures. To simplify the analysis of the testing techniques, however, a stuck-at fault model is assumed.

  10. Functional test generation for digital circuits described with a declarative language: LUSTRE

    NASA Astrophysics Data System (ADS)

    Almahrous, Mazen

    1990-08-01

    A functional approach to the test generation problem, starting from a high-level description, is proposed. The circuit under test is modeled using the LUSTRE high-level data-flow description language. The different LUSTRE primitives are translated to a SATAN-format graph in order to evaluate the testability of the circuit and to generate test sequences. A further method is defined for testing complex circuits comprising an operative part and a control part; it consists of checking experiments for the control part observed through the operative part. This method was applied to the automata generated from a LUSTRE description of the circuit.

  11. Shaping Gene Expression by Landscaping Chromatin Architecture: Lessons from a Master.

    PubMed

    Sartorelli, Vittorio; Puri, Pier Lorenzo

    2018-05-19

    Since its discovery as a skeletal muscle-specific transcription factor able to reprogram somatic cells into differentiated myofibers, MyoD has provided an instructive model to understand how transcription factors regulate gene expression. Reciprocally, studies of other transcriptional regulators have provided testable hypotheses to further understand how MyoD activates transcription. Using MyoD as a reference, in this review, we discuss the similarities and differences in the regulatory mechanisms employed by tissue-specific transcription factors to access DNA and regulate gene expression by cooperatively shaping the chromatin landscape within the context of cellular differentiation. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. The Central Role of Tether-Cutting Reconnection in the Production of CMEs

    NASA Technical Reports Server (NTRS)

    Moore, Ron; Sterling, Alphonse; Suess, Steve

    2007-01-01

    This viewgraph presentation describes tether-cutting reconnection in the production of Coronal Mass Ejections (CMEs). The topics include: 1) Birth and Release of the CME Plasmoid; 2) Resulting CME in Outer Corona; 3) Governing Role of Surrounding Field; 4) Testable Prediction of the Standard Scenario Magnetic Bubble CME Model; 5) Lateral Pressure in Outer Corona; 6) Measured Angular Widths of 3 CMEs; 7) LASCO Image of each CME at Final Width; 8) Source of the CME of 2002 May 20; 9) Source of the CME of 1999 Feb 9; 10) Source of the CME of 2003 Nov 4; and 11) Test Results.

  13. A SEU-Hard Flip-Flop for Antifuse FPGAs

    NASA Technical Reports Server (NTRS)

    Katz, R.; Wang, J. J.; McCollum, J.; Cronquist, B.; Chan, R.; Yu, D.; Kleyner, I.; Day, John H. (Technical Monitor)

    2001-01-01

    A single event upset (SEU)-hardened flip-flop has been designed and developed for antifuse Field Programmable Gate Array (FPGA) application. Design and application issues, testability, test methods, simulation, and results are discussed.

  14. Gene-centric approach to integrating environmental genomics and biogeochemical models.

    PubMed

    Reed, Daniel C; Algar, Christopher K; Huber, Julie A; Dick, Gregory J

    2014-02-04

    Rapid advances in molecular microbial ecology have yielded an unprecedented amount of data about the evolutionary relationships and functional traits of microbial communities that regulate global geochemical cycles. Biogeochemical models, however, are trailing in the wake of the environmental genomics revolution, and such models rarely incorporate explicit representations of bacteria and archaea, nor are they compatible with nucleic acid or protein sequence data. Here, we present a functional gene-based framework for describing microbial communities in biogeochemical models by incorporating genomics data to provide predictions that are readily testable. To demonstrate the approach in practice, nitrogen cycling in the Arabian Sea oxygen minimum zone (OMZ) was modeled to examine key questions about cryptic sulfur cycling and dinitrogen production pathways in OMZs. Simulations support previous assertions that denitrification dominates over anammox in the central Arabian Sea, which has important implications for the loss of fixed nitrogen from the oceans. Furthermore, cryptic sulfur cycling was shown to attenuate the secondary nitrite maximum often observed in OMZs owing to changes in the composition of the chemolithoautotrophic community and dominant metabolic pathways. Results underscore the need to explicitly integrate microbes into biogeochemical models rather than just the metabolisms they mediate. By directly linking geochemical dynamics to the genetic composition of microbial communities, the method provides a framework for achieving mechanistic insights into patterns and biogeochemical consequences of marine microbes. Such an approach is critical for informing our understanding of the key role microbes play in modulating Earth's biogeochemistry.
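
    The functional gene-based framework can be caricatured in a few lines: reaction rates scale with the abundance of the functional gene that mediates them, and that abundance in turn grows with the energy the reaction yields. The toy below (Python) is an invented single-reaction illustration of that coupling, not the Arabian Sea OMZ model; every name and number is an assumption.

      # Gene-centric toy: a microbially mediated reaction rate scales with the
      # abundance of its functional gene; gene abundance grows with the energy
      # the reaction yields.  All names and numbers are invented.
      dt, steps = 0.01, 5000
      substrate, gene = 10.0, 0.1            # e.g. a nitrate pool and a denitrification-gene pool
      k_cat, yield_, decay = 0.5, 0.2, 0.02

      for _ in range(steps):
          rate = k_cat * gene * substrate / (substrate + 1.0)   # Michaelis-Menten, scaled by gene copies
          substrate += -rate * dt
          gene += (yield_ * rate - decay * gene) * dt           # growth tied to catabolic energy
      print(f"substrate left: {substrate:.2f}, functional-gene abundance: {gene:.2f}")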

  15. The changing features of the body-mind problem.

    PubMed

    Agassi, Joseph

    2007-01-01

    The body-mind problem invites scientific study, since mental events are repeated and repeatable and invite testable explanations. They seemed troublesome because of the classical theory of substance that failed to solve its own central problems. These are soluble with the aid of the theory of the laws of nature, particularly in its emergentist version [Bunge, M., 1980. The Body-mind Problem, Pergamon, Oxford] that invites refutable explanations [Popper, K.R., 1959. The Logic of Scientific Discovery, Hutchinson, London]. The view of mental properties as emergent is a modification of the two chief classical views, materialism and dualism. As this view invites testable explanations of events of the inner world, it is better than the quasi-behaviorist view of self-awareness as computer-style self-monitoring [Minsky, M., Laske, O., 1992. A conversation with Marvin Minsky. AI Magazine 13 (3), 31-45].

  16. Evaluation of phylogenetic footprint discovery for predicting bacterial cis-regulatory elements and revealing their evolution.

    PubMed

    Janky, Rekin's; van Helden, Jacques

    2008-01-23

    The detection of conserved motifs in promoters of orthologous genes (phylogenetic footprints) has become a common strategy to predict cis-acting regulatory elements. Several software tools are routinely used to raise hypotheses about regulation. However, these tools are generally used as black boxes, with default parameters. A systematic evaluation of optimal parameters for a footprint discovery strategy can bring a sizeable improvement to the predictions. We evaluate the performances of a footprint discovery approach based on the detection of over-represented spaced motifs. This method is particularly suitable for (but not restricted to) Bacteria, since such motifs are typically bound by factors containing a Helix-Turn-Helix domain. We evaluated footprint discovery in 368 Escherichia coli K12 genes with annotated sites, under 40 different combinations of parameters (taxonomical level, background model, organism-specific filtering, operon inference). Motifs are assessed both at the levels of correctness and significance. We further report a detailed analysis of 181 bacterial orthologs of the LexA repressor. Distinct motifs are detected at various taxonomical levels, including the 7 previously characterized taxon-specific motifs. In addition, we highlight a significantly stronger conservation of half-motifs in Actinobacteria, relative to Firmicutes, suggesting an intermediate state in specificity switching between the two Gram-positive phyla, and thereby revealing the on-going evolution of LexA auto-regulation. The footprint discovery method proposed here shows excellent results with E. coli and can readily be extended to predict cis-acting regulatory signals and propose testable hypotheses in bacterial genomes for which nothing is known about regulation.

  17. Insight and psychosis: comparing the perspectives of patient, entourage and clinician.

    PubMed

    Tranulis, Constantin; Corin, Ellen; Kirmayer, Laurence J

    2008-05-01

    The construct of insight in psychosis assumes congruence between patient and clinician views of the meaning of symptoms and experience. Current definitions and measures of insight do not give systematic attention to the impact of interpersonal, cultural and socio-economic contexts. We hypothesized that socio-cultural factors influence insight in patients with schizophrenia. We tested this hypothesis through comparison of insight in 18 triads, each composed of a patient, a family member and a clinician. The sample consisted of patients who were first diagnosed with psychosis in the last two years, and who were either immigrants from Africa or the Caribbean Islands, or Canadian born. Insight was assessed by analysis of narratives collected from patients, family members and clinicians for a research project on the negotiation of treatment. Each narrative was scored for insight along multiple dimensions with the Extracted Insight Scale (EIS), developed for this project. There was a significant correlation of insight on the EIS between patients and family members (r= 0.51, p= 0.03) but not between patient and clinician or family and clinician. The mean levels of insight across the three groups were comparable. Qualitative analysis of the illness narratives suggested that insight was based on the meanings constructed around psychotic experiences and that the process of interpreting and attributing psychotic experiences reflected each person's cultural background, life experiences, and other social determinants, especially stigma. Forms of insight can occur in the context of discordance or disagreement with the clinician's opinion. We present a testable model of the sociocultural determinants of insight that can guide future studies.

  18. Self-organization in the limb: a Turing mechanism for digit development.

    PubMed

    Cooper, Kimberly L

    2015-06-01

    The statistician George E. P. Box stated, 'Essentially all models are wrong, but some are useful.' (Box GEP, Draper NR: Empirical Model-Building and Response Surfaces. Wiley; 1987). Modeling biological processes is challenging for many of the reasons classically trained developmental biologists often resist the idea that black and white equations can explain the grayscale subtleties of living things. Although a simplified mathematical model of development will undoubtedly fall short of precision, a good model is exceedingly useful if it raises at least as many testable questions as it answers. Self-organizing Turing models that simulate the pattern of digits in the hand replicate events that have not yet been explained by classical approaches. The union of theory and experimentation has recently identified and validated the minimal components of a Turing network for digit pattern and triggered a cascade of questions that will undoubtedly be well-served by the continued merging of disciplines. Copyright © 2015 Elsevier Ltd. All rights reserved.
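
    The minimal ingredients of such a Turing network, short-range self-activation plus long-range inhibition, can be checked with a linear dispersion-relation sketch (Python). The Jacobian and diffusivities below are generic illustrative numbers, not the validated digit-patterning parameters; the calculation shows a band of spatial wavenumbers that grows even though the well-mixed system is stable.

      import numpy as np

      # Linearized two-species reaction-diffusion system around a steady state:
      # a self-activating species u and a fast-diffusing inhibitor v.
      J = np.array([[1.0, -1.0],      # u activates itself, v inhibits u
                    [2.0, -1.5]])     # u activates v, v decays
      D = np.diag([0.05, 1.0])        # inhibitor diffuses much faster than activator

      ks = np.linspace(0.0, 5.0, 501)
      growth = np.array([np.linalg.eigvals(J - k**2 * D).real.max() for k in ks])

      assert growth[0] < 0, "steady state must be stable without diffusion"
      unstable = ks[growth > 0]
      k_star = ks[growth.argmax()]
      print(f"Turing-unstable wavenumbers: {unstable.min():.2f} .. {unstable.max():.2f}")
      print(f"fastest-growing wavelength ~ {2 * np.pi / k_star:.2f} (sets the pattern spacing)")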

  19. Mechanisms underlying REBT in mood disordered patients: predicting depression from the hybrid model of learning.

    PubMed

    Jackson, Chris J; Izadikah, Zahra; Oei, Tian P S

    2012-06-01

    Jackson's (2005, 2008a) hybrid model of learning identifies a number of learning mechanisms that lead to the emergence and maintenance of the balance between rationality and irrationality. We test a general hypothesis that Jackson's model will predict depressive symptoms, such that poor learning is related to depression. We draw comparisons between Jackson's model and Ellis' (2004) Rational Emotive Behavior Therapy and Theory (REBT) and thereby provide a set of testable learning mechanisms potentially underlying REBT. Eighty patients diagnosed with depression completed the Learning Styles Profiler (LSP; Jackson, 2005) and two measures of depression. Results provide support for the proposed model of learning and further evidence that low rationality is a key predictor of depression. We conclude that the hybrid model of learning has the potential to explain some of the learning and cognitive processes related to the development and maintenance of irrational beliefs and depression. Copyright © 2011. Published by Elsevier B.V.

  20. Modeling T-cell activation using gene expression profiling and state-space models.

    PubMed

    Rangel, Claudia; Angus, John; Ghahramani, Zoubin; Lioumi, Maria; Sotheran, Elizabeth; Gaiba, Alessia; Wild, David L; Falciani, Francesco

    2004-06-12

    We have used state-space models to reverse engineer transcriptional networks from highly replicated gene expression profiling time series data obtained from a well-established model of T-cell activation. State space models are a class of dynamic Bayesian networks that assume that the observed measurements depend on some hidden state variables that evolve according to Markovian dynamics. These hidden variables can capture effects that cannot be measured in a gene expression profiling experiment, e.g. genes that have not been included in the microarray, levels of regulatory proteins, the effects of messenger RNA and protein degradation, etc. Bootstrap confidence intervals are developed for parameters representing 'gene-gene' interactions over time. Our models represent the dynamics of T-cell activation and provide a methodology for the development of rational and experimentally testable hypotheses. Supplementary data and Matlab computer source code will be made available on the web at the URL given below. http://public.kgi.edu/~wild/LDS/index.htm
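
    A minimal linear-Gaussian state-space model of the kind described, hidden regulatory states with Markovian dynamics observed through a noisy linear readout, can be sketched as follows (Python). The dimensions, matrices, and the closing "gene-gene" summary are illustrative assumptions, not the parameters or the exact construction used in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      n_hidden, n_genes, n_times = 3, 10, 20

      A = 0.8 * np.eye(n_hidden) + 0.05 * rng.standard_normal((n_hidden, n_hidden))  # hidden-state dynamics
      C = rng.standard_normal((n_genes, n_hidden))                                   # state -> expression readout
      Q, R = 0.05, 0.1                                                               # process / measurement noise

      x = np.zeros(n_hidden)
      Y = np.empty((n_times, n_genes))
      for t in range(n_times):
          x = A @ x + np.sqrt(Q) * rng.standard_normal(n_hidden)     # hidden regulatory state evolves
          Y[t] = C @ x + np.sqrt(R) * rng.standard_normal(n_genes)   # observed expression profile

      # One illustrative summary of "gene-gene" influence one step later
      # (not necessarily the paper's exact construction).
      interactions = C @ A @ np.linalg.pinv(C)
      print("simulated expression matrix:", Y.shape, "implied gene-gene matrix:", interactions.shape)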

  1. Simple neural substrate predicts complex rhythmic structure in duetting birds

    NASA Astrophysics Data System (ADS)

    Amador, Ana; Trevisan, M. A.; Mindlin, G. B.

    2005-09-01

    Horneros (Furnarius rufus) are South American birds well known for their oven-like nests and their ability to sing in pairs. Previous work has analyzed the rhythmic organization of the duets, unveiling a mathematical structure behind the songs. In this work we analyze in detail an extended database of duets. The rhythms of the songs are compatible with the dynamics presented by a wide class of dynamical systems: forced excitable systems. Compatible with this nonlinear rule, we build a biologically inspired model for how the neural and the anatomical elements may interact to produce the observed rhythmic patterns. This model allows us to synthesize songs presenting the acoustic and rhythmic features observed in real songs. We also make testable predictions in order to support our hypothesis.
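
    Forced excitable systems of the kind invoked here can be illustrated with a periodically driven FitzHugh-Nagumo unit (Python); counting threshold crossings per forcing cycle gives the locking ratios that organize such rhythms. This is a generic stand-in with invented drive amplitudes and frequencies, not the authors' biologically inspired song model.

      import numpy as np

      def forced_fitzhugh_nagumo(freq, amp=0.5, T=300.0, dt=0.005):
          """Count responses of an excitable unit driven by a periodic forcing."""
          v, w, spikes, above = -1.2, -0.62, 0, False
          for i in range(int(T / dt)):
              I = amp * np.sin(2 * np.pi * freq * i * dt)   # periodic drive
              dv = v - v**3 / 3 - w + I
              dw = 0.08 * (v + 0.7 - 0.8 * w)
              v, w = v + dv * dt, w + dw * dt
              if v > 1.0 and not above:                      # upward threshold crossing = one response
                  spikes, above = spikes + 1, True
              elif v < 0.0:
                  above = False
          return spikes, spikes / (T * freq)                 # responses per forcing cycle

      for f in (0.01, 0.02, 0.05):
          n, ratio = forced_fitzhugh_nagumo(f)
          print(f"forcing freq {f}: {n} responses, {ratio:.2f} per cycle (locking ratio)")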

  2. Observational exclusion of a consistent loop quantum cosmology scenario

    NASA Astrophysics Data System (ADS)

    Bolliet, Boris; Barrau, Aurélien; Grain, Julien; Schander, Susanne

    2016-06-01

    It is often argued that inflation erases all the information about what took place before it started. Quantum gravity, relevant in the Planck era, seems therefore mostly impossible to probe with cosmological observations. In general, only very ad hoc scenarios or hyper fine-tuned initial conditions can lead to observationally testable theories. Here we consider a well-defined and well-motivated candidate quantum cosmology model that predicts inflation. Using the most recent observational constraints on the cosmic microwave background B-modes, we show that the model is excluded for all its parameter space, without any tuning. Some important consequences are drawn for the deformed algebra approach to loop quantum cosmology. We emphasize that neither loop quantum cosmology in general nor loop quantum gravity are disfavored by this study but their falsifiability is established.

  3. Kalman filter control of a model of spatiotemporal cortical dynamics

    PubMed Central

    Schiff, Steven J; Sauer, Tim

    2007-01-01

    Recent advances in Kalman filtering to estimate system state and parameters in nonlinear systems have offered the potential to apply such approaches to spatiotemporal nonlinear systems. We here adapt the nonlinear method of unscented Kalman filtering to observe the state and estimate parameters in a computational spatiotemporal excitable system that serves as a model for cerebral cortex. We demonstrate the ability to track spiral wave dynamics, and to use an observer system to calculate control signals delivered through applied electrical fields. We demonstrate how this strategy can control the frequency of such a system, or quench the wave patterns, while minimizing the energy required for such results. These findings are readily testable in experimental applications, and have the potential to be applied to the treatment of human disease. PMID:18310806
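
    The observe-then-control loop can be illustrated with a deliberately simplified linear example (Python): a Kalman filter estimates the state of a noisy oscillator from position measurements alone, and a feedback signal computed from the estimate suppresses the oscillation. The paper's method is the nonlinear unscented variant applied to a spatiotemporal cortical model; the system, gains, and noise levels below are invented.

      import numpy as np

      rng = np.random.default_rng(2)
      dt, omega = 0.01, 2 * np.pi
      A = np.array([[1.0, dt], [-omega**2 * dt, 1.0]])   # discretized noisy oscillator
      B = np.array([0.0, dt])                            # control enters as an acceleration
      H = np.array([[1.0, 0.0]])                         # only position is observed
      Q, R = 1e-5 * np.eye(2), np.array([[0.05]])

      x_true, x_est, P = np.array([1.0, 0.0]), np.zeros(2), np.eye(2)
      positions = []
      for step in range(2000):
          u = -8.0 * x_est[1]                            # damping feedback from the *estimated* velocity
          x_true = A @ x_true + B * u + rng.multivariate_normal(np.zeros(2), Q)
          z = H @ x_true + np.sqrt(R[0, 0]) * rng.standard_normal(1)

          x_est = A @ x_est + B * u                      # Kalman predict
          P = A @ P @ A.T + Q
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman update
          x_est, P = x_est + K @ (z - H @ x_est), (np.eye(2) - K @ H) @ P
          positions.append(x_true[0])

      print("rms amplitude, first vs last 200 steps:",
            round(float(np.sqrt(np.mean(np.square(positions[:200])))), 3),
            round(float(np.sqrt(np.mean(np.square(positions[-200:])))), 3))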

  4. Debates—Hypothesis testing in hydrology: Introduction

    NASA Astrophysics Data System (ADS)

    Blöschl, Günter

    2017-03-01

    This paper introduces the papers in the "Debates—Hypothesis testing in hydrology" series. The four articles in the series discuss whether and how the process of testing hypotheses leads to progress in hydrology. Repeated experiments with controlled boundary conditions are rarely feasible in hydrology. Research is therefore not easily aligned with the classical scientific method of testing hypotheses. Hypotheses in hydrology are often enshrined in computer models which are tested against observed data. Testability may be limited due to model complexity and data uncertainty. All four articles suggest that hypothesis testing has contributed to progress in hydrology and is needed in the future. However, the procedure is usually not as systematic as the philosophy of science suggests. A greater emphasis on a creative reasoning process on the basis of clues and explorative analyses is therefore needed.

  5. Metabolic network flux analysis for engineering plant systems.

    PubMed

    Shachar-Hill, Yair

    2013-04-01

    Metabolic network flux analysis (NFA) tools have proven themselves to be powerful aids to metabolic engineering of microbes by providing quantitative insights into the flows of material and energy through cellular systems. The development and application of NFA tools to plant systems has advanced in recent years and are yielding significant insights and testable predictions. Plants present substantial opportunities for the practical application of NFA but they also pose serious challenges related to the complexity of plant metabolic networks and to deficiencies in our knowledge of their structure and regulation. By considering the tools available and selected examples, this article attempts to assess where and how NFA is most likely to have a real impact on plant biotechnology. Copyright © 2013 Elsevier Ltd. All rights reserved.
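
    The core computation behind network flux analysis, maximizing a target flux subject to steady-state stoichiometry and capacity bounds, fits in a few lines (Python, SciPy). The three-reaction network below is invented for illustration and is not drawn from any plant model discussed in the article.

      import numpy as np
      from scipy.optimize import linprog

      # Toy network: R1 takes up a substrate into an intermediate, R2 converts the
      # intermediate into a biomass precursor, R3 drains the precursor into "growth".
      #             R1   R2   R3
      S = np.array([[ 1, -1,  0],    # intermediate metabolite balance
                    [ 0,  1, -1]])   # biomass-precursor balance
      bounds = [(0, 10), (0, 8), (0, None)]   # uptake and enzyme capacity limits
      c = np.array([0, 0, -1])                # maximize R3 (linprog minimizes, hence the sign)

      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
      print("optimal growth flux:", res.x[2], "with flux distribution", res.x)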

  6. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

    The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  7. Establishing a Reproducible Hypertrophic Scar following Thermal Injury: A Porcine Model

    PubMed Central

    Rapp, Scott J.; Rumberg, Aaron; Visscher, Marty; Billmire, David A.; Schwentker, Ann S.

    2015-01-01

    Background: Our complete understanding of hypertrophic scarring is still deficient, as portrayed by the poor clinical outcomes when treating them. To address the need for alternative treatment strategies, we assess the swine animal burn model as an initial approach for immature scar evaluation and therapeutic application. Methods: Thermal contact burns were created on the dorsum of 3 domestic swine with the use of a branding iron at 170°F for 20 seconds. Deep partial-thickness burns were cared for with absorptive dressings over 10 weeks and wounds evaluated with laser and negative pressure transduction, histology, photographic analysis, and RNA isolation. Results: Overall average stiffness (mm Hg/mm) increased and elasticity (mm) decreased in the scars from the initial burn injury to 8 weeks when compared with normal skin (P < 0.01). Scars were thicker, more erythematous, and uniform in the caudal dorsum. The percent change of erythema in wounds increased from weeks 6 to 10. Histology demonstrated loss of dermal papillae, increased myofibroblast presence, vertically oriented vessels, epidermal and dermal hypercellularity, and parallel-layered collagen deposition. Immature scars remained elevated at 10 weeks, and minimal RNA was able to be isolated from the tissue. Conclusions: Deep partial-thickness thermal injury to the back of domestic swine produces an immature hypertrophic scar by 10 weeks following burn with thickness appearing to coincide with the location along the dorsal axis. With minimal pig to pig variation, we describe our technique to provide a testable immature scar model. PMID:25750848

  8. Traces on the 'Ubaidian Shore: Mid-Holocene Eustasis, Marine Transgression, and Urbanization in the Mesopotamian Delta (Iraq)

    NASA Astrophysics Data System (ADS)

    Pournelle, J. R.; Smith, J. R.; Hritz, C.; Nsf Hrrpaa 1045974

    2011-12-01

    Development and floruit of pre-urban and urban complex societies of southern Mesopotamia (Iraq) during the mid-Holocene took place in the context of Tigris-Euphrates and Karun-Karkheh deltaic progradation on one hand, and marine transgression at the head of the Gulf on the other. Understanding these processes has profound implications for assessing likely resource bioavailability, resource extraction and transport options, population distribution and density, and labour requirements for intensification/extensification of extraction and production activities during this critical formative period. Multiple attempts have been made to reconstruct the Gulf "shoreline" at various pre-historic and historical periods. Because no systematic coring operations have been undertaken in the region, these attempts have been hampered by the paucity of direct geologic evidence. Conflicting hypotheses based on models of deltaic subsidence, tectonic uplift, and/or eustatic change were barely testable against scant available cores and archaeologically-derived proxies from a few sites on the western "shore," such as H3, Eridu, Ur, Uruk, and Tell al Oueli. Recently published coring operations in the Iranian Karun-Karkheh delta add considerably to the available corpus of archaeological and geomorphologic data useful for reconstructing the timeline and extent of these processes, especially on the eastern "shore," but these are also bounded in spatial and temporal extent. Multi-scale, multi-sensor processing of remote sensing data and imagery makes possible a fuller interpretation of geomorphologic and artifactual evidence bearing on overall shoreline reconstruction from approximately 6,000-3,000 BCE. This paper reports the results of combining interpreted LANDSAT, ASTER, SPOT, CORONA, Digital Globe, and other imagery with multiple derived Digital Elevation Models, thus providing stochastic boundaries for re-interpreting geological and archaeological point data, as well as new pilot data collected in 2010-2011. The result is better understanding of the likely location, extent, and impact of maximum mid-Holocene marine incursion into lower Mesopotamia and Khuzistan associated with deltaic geomorphological and ecological evolution, with implications for assessing site locations, agricultural potential, and water transport routes available to the world's oldest-known cities.

  9. Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays.

    PubMed

    Li, Hongfei; Jiang, Haijun; Hu, Cheng

    2016-03-01

    In this paper, we investigate a class of memristor-based BAM neural networks with time-varying delays. Under the framework of Filippov solutions, boundedness and ultimate boundedness of solutions of memristor-based BAM neural networks are guaranteed by the chain rule and inequality techniques. Moreover, a new method involving a Yoshizawa-like theorem is employed to establish the existence of a periodic solution. By applying the theory of set-valued maps and functional differential inclusions, an available Lyapunov functional and some new testable algebraic criteria are derived for ensuring the uniqueness and global exponential stability of the periodic solution of memristor-based BAM neural networks. The obtained results expand and complement some previous work on memristor-based BAM neural networks. Finally, a numerical example is provided to show the applicability and effectiveness of our theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A conceptual model for generating and validating in-session clinical judgments

    PubMed Central

    Jacinto, Sofia B.; Lewis, Cara C.; Braga, João N.; Scott, Kelli

    2016-01-01

    Objective Little attention has been paid to the nuanced and complex decisions made in the clinical session context and how these decisions influence therapy effectiveness. Despite decades of research on the dual-processing systems, it remains unclear when and how intuitive and analytical reasoning influence the direction of the clinical session. Method This paper puts forth a testable conceptual model, guided by an interdisciplinary integration of the literature, that posits that the clinical session context moderates the use of intuitive versus analytical reasoning. Results A synthesis of studies examining professional best practices in clinical decision-making, empirical evidence from clinical judgment research, and the application of decision science theories indicate that intuitive and analytical reasoning may have profoundly different impacts on clinical practice and outcomes. Conclusions The proposed model is discussed with respect to its implications for clinical practice and future research. PMID:27088962

  11. How did the swiss cheese plant get its holes?

    PubMed

    Muir, Christopher D

    2013-02-01

    Adult leaf fenestration in "Swiss cheese" plants (Monstera Adans.) is an unusual leaf shape trait lacking a convincing evolutionary explanation. Monstera are secondary hemiepiphytes that inhabit the understory of tropical rainforests, where photosynthesis from sunflecks often makes up a large proportion of daily carbon assimilation. Here I present a simple model of leaf-level photosynthesis and whole-plant canopy dynamics in a stochastic light environment. The model demonstrates that leaf fenestration can reduce the variance in plant growth and thereby increase geometric mean fitness. This growth-variance hypothesis also suggests explanations for conspicuous ontogenetic changes in leaf morphology (heteroblasty) in Monstera, as well as the absence of leaf fenestration in co-occurring juvenile tree species. The model provides a testable hypothesis of the adaptive significance of a unique leaf shape and illustrates how variance in growth rate could be an important factor shaping plant morphology and physiology.
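
    The growth-variance argument can be illustrated in a few lines of code. The sketch below is not the paper's canopy model; it simply compares two hypothetical growth strategies with the same mean annual growth factor but different year-to-year variance (all numbers invented) and shows that the lower-variance strategy attains the higher geometric mean growth, the quantity that matters for long-term multiplicative fitness.

        import numpy as np

        rng = np.random.default_rng(0)
        n_years = 10_000

        # Two hypothetical strategies with equal arithmetic mean growth factor but
        # different year-to-year variance; values are illustrative, not from the paper.
        low_var = np.clip(rng.normal(1.05, 0.05, n_years), 0.01, None)
        high_var = np.clip(rng.normal(1.05, 0.25, n_years), 0.01, None)

        def geometric_mean(growth_factors):
            # Long-run multiplicative growth rate (geometric mean fitness)
            return np.exp(np.mean(np.log(growth_factors)))

        print("arithmetic means:", low_var.mean(), high_var.mean())
        print("geometric means :", geometric_mean(low_var), geometric_mean(high_var))
        # The lower-variance strategy has the higher geometric mean, illustrating
        # why reducing growth variance can raise long-term fitness.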

  12. Positive versus negative perfectionism in psychopathology: a comment on Slade and Owens's dual process model.

    PubMed

    Flett, Gordon L; Hewitt, Paul L

    2006-07-01

    This article reviews the concepts of positive and negative perfectionism and the dual process model of perfectionism outlined by Slade and Owens (1998). The authors acknowledge that the dual process model represents a conceptual advance in the study of perfectionism and that Slade and Owens should be commended for identifying testable hypotheses and future research directions. However, the authors take issue with the notion that there are two types of perfectionism, with one type of perfectionism representing a "normal" or "healthy" form of perfectionism. They suggest that positive perfectionism is motivated, at least in part, by an avoidance orientation and fear of failure, and recent attempts to define and conceptualize positive perfectionism may have blurred the distinction between perfectionism and conscientiousness. Research findings that question the adaptiveness of positive forms of perfectionism are highlighted, and key issues for future research are identified.

  13. Insights about Striatal Circuit Function and Schizophrenia from a Mouse Model of D2 Receptor Upregulation

    PubMed Central

    Simpson, Eleanor H.; Kellendonk, Christoph

    2016-01-01

    The dopamine hypothesis of schizophrenia is supported by a large number of imaging studies that have identified an increase in dopamine binding at the D2 receptor selectively in the striatum. Here we review a decade of work using a regionally restricted and temporally regulated transgenic mouse model to investigate the behavioral, molecular, electrophysiological, and anatomical consequences of selective D2 receptor upregulation in the striatum. These studies have identified new and potentially important biomarkers at the circuit and molecular level that can now be explored in patients with schizophrenia. They provide an example of how animal models and their detailed level of neurobiological analysis allow a deepening of our understanding of the relationship between neuronal circuit function and symptoms of schizophrenia, and as a consequence generate new hypotheses that are testable in patients. PMID:27720388

  14. Unified TeV scale picture of baryogenesis and dark matter.

    PubMed

    Babu, K S; Mohapatra, R N; Nasri, Salah

    2007-04-20

    We present a simple extension of the minimal supersymmetric standard model which provides a unified picture of cosmological baryon asymmetry and dark matter. Our model introduces a gauge singlet field N and a color triplet field X which couple to the right-handed quark fields. The out-of-equilibrium decay of the Majorana fermion N mediated by the exchange of the scalar field X generates adequate baryon asymmetry for MN approximately 100 GeV and MX approximately TeV. The scalar partner of N (denoted N1) is naturally the lightest SUSY particle as it has no gauge interactions and plays the role of dark matter. The model is experimentally testable in (i) neutron-antineutron oscillations with a transition time estimated to be around 10(10)sec, (ii) discovery of colored particles X at LHC with mass of order TeV, and (iii) direct dark matter detection with a predicted cross section in the observable range.

  15. Mathematical and Statistical Techniques for Systems Medicine: The Wnt Signaling Pathway as a Case Study.

    PubMed

    MacLean, Adam L; Harrington, Heather A; Stumpf, Michael P H; Byrne, Helen M

    2016-01-01

    The last decade has seen an explosion in models that describe phenomena in systems medicine. Such models are especially useful for studying signaling pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to showcase current mathematical and statistical techniques that enable modelers to gain insight into (models of) gene regulation and generate testable predictions. We introduce a range of modeling frameworks, but focus on ordinary differential equation (ODE) models since they remain the most widely used approach in systems biology and medicine and continue to offer great potential. We present methods for the analysis of a single model, comprising applications of standard dynamical systems approaches such as nondimensionalization, steady state, asymptotic and sensitivity analysis, and more recent statistical and algebraic approaches to compare models with data. We present parameter estimation and model comparison techniques, focusing on Bayesian analysis and coplanarity via algebraic geometry. Our intention is that this (non-exhaustive) review may serve as a useful starting point for the analysis of models in systems medicine.
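
    As an indication of what the single-model analyses listed above involve, the toy calculation below (not taken from the chapter; a generic production-degradation ODE is assumed) shows the nondimensionalization and steady-state steps applied to the simplest possible protein balance:

        \frac{\mathrm{d}x}{\mathrm{d}t} = k - \delta x
        \quad \Longrightarrow \quad
        u = \frac{\delta x}{k}, \ \tau = \delta t
        \quad \Longrightarrow \quad
        \frac{\mathrm{d}u}{\mathrm{d}\tau} = 1 - u,
        \qquad u^{*} = 1 \ \ (\text{i.e. } x^{*} = k/\delta).

    Rescaling removes the two dimensional parameters and exposes a single, parameter-free relaxation to the steady state; the same procedure applied to full pathway models reduces the number of parameters that must be estimated from data.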

  16. GUT Model Hierarchies from Intersecting Branes

    NASA Astrophysics Data System (ADS)

    Kokorelis, Christos

    2002-08-01

    By employing D6-branes intersecting at angles in D = 4 type I strings, we construct the first examples of three generation string GUT models (PS-A class) that contain at low energy exactly the standard model spectrum with no extra matter and/or extra gauge group factors. They are based on the group SU(4)_C × SU(2)_L × SU(2)_R. The models are non-supersymmetric, even though SUSY is unbroken in the bulk. Baryon number is gauged and its anomalies are cancelled through a generalized Green-Schwarz mechanism. We also discuss models (PS-B class) which at low energy have the standard model augmented by an anomaly free U(1) symmetry and show that multibrane wrappings correspond to a trivial redefinition of the surviving global U(1) at low energies. There are no colour triplet couplings to mediate proton decay and the proton is stable. The models are compatible with a low string energy scale of less than 650 GeV and are directly testable at present or future accelerators as they predict the existence of light left handed weak fermion doublets at energies between 90 and 246 GeV. The neutrinos get a mass through an unconventional see-saw mechanism. The mass relation m_e = m_d at the GUT scale is recovered. Imposing supersymmetry at particular intersections generates non-zero Majorana masses for right handed neutrinos as well as providing the necessary singlets needed to break the surviving anomaly free U(1), thus suggesting a gauge symmetry breaking method that can be applied in general left-right symmetric models.

  17. Hybrid regulatory models: a statistically tractable approach to model regulatory network dynamics.

    PubMed

    Ocone, Andrea; Millar, Andrew J; Sanguinetti, Guido

    2013-04-01

    Computational modelling of the dynamics of gene regulatory networks is a central task of systems biology. For networks of small/medium scale, the dominant paradigm is represented by systems of coupled non-linear ordinary differential equations (ODEs). ODEs afford great mechanistic detail and flexibility, but calibrating these models to data is often an extremely difficult statistical problem. Here, we develop a general statistical inference framework for stochastic transcription-translation networks. We use a coarse-grained approach, which represents the system as a network of stochastic (binary) promoter and (continuous) protein variables. We derive an exact inference algorithm and an efficient variational approximation that allows scalable inference and learning of the model parameters. We demonstrate the power of the approach on two biological case studies, showing that the method allows a high degree of flexibility and is capable of generating testable novel biological predictions. Software is available at http://homepages.inf.ed.ac.uk/gsanguin/software.html. Supplementary data are available at Bioinformatics online.
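
    To make the coarse-grained model class concrete, here is a minimal forward simulation of the kind of system such an inference framework targets: a binary promoter that switches stochastically between OFF and ON (a telegraph process) driving a continuously valued protein. All rate constants are invented for illustration, and this sketch implements only the generative model, not the exact or variational inference algorithms described in the abstract.

        import numpy as np

        rng = np.random.default_rng(1)

        # Illustrative rate constants (made up for this sketch, not from the paper)
        k_on, k_off = 0.05, 0.02     # promoter switching rates (1/min)
        beta, delta = 2.0, 0.05      # protein production and degradation rates
        dt, t_end = 0.1, 2000.0

        n_steps = int(t_end / dt)
        promoter = 0                 # binary promoter state
        protein = 0.0                # continuous protein level
        trajectory = np.empty((n_steps, 2))

        for i in range(n_steps):
            # Stochastic telegraph switching of the promoter (small-dt approximation)
            if promoter == 0 and rng.random() < k_on * dt:
                promoter = 1
            elif promoter == 1 and rng.random() < k_off * dt:
                promoter = 0
            # Continuous protein dynamics driven by the current promoter state
            protein += (beta * promoter - delta * protein) * dt
            trajectory[i] = (promoter, protein)

        print("mean promoter activity:", trajectory[:, 0].mean())
        print("mean protein level    :", trajectory[:, 1].mean())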

  18. Anatomical correlates to the bands seen in the outer retina by optical coherence tomography: literature review and model.

    PubMed

    Spaide, Richard F; Curcio, Christine A

    2011-09-01

    To evaluate the validity of commonly used anatomical designations for the four hyperreflective outer retinal bands seen in current-generation optical coherence tomography, a scale model of outer retinal morphology was created using published information for direct comparison with optical coherence tomography scans. Articles and books concerning histology of the outer retina from 1900 until 2009 were evaluated, and data were used to create a scale model drawing. Boundaries between outer retinal tissue compartments described by the model were compared with intensity variations of representative spectral-domain optical coherence tomography scans using longitudinal reflectance profiles to determine the region of origin of the hyperreflective outer retinal bands. This analysis showed a high likelihood that the spectral-domain optical coherence tomography bands attributed to the external limiting membrane (the first, innermost band) and to the retinal pigment epithelium (the fourth, outermost band) are correctly attributed. Comparative analysis showed that the second band, often attributed to the boundary between inner and outer segments of the photoreceptors, actually aligns with the ellipsoid portion of the inner segments. The third band corresponded to an ensheathment of the cone outer segments by apical processes of the retinal pigment epithelium in a structure known as the contact cylinder. Anatomical attributions and subsequent pathophysiologic assessments pertaining to the second and third outer retinal hyperreflective bands may not be correct. This analysis has identified testable hypotheses for the actual correlates of the second and third bands. Nonretinal pigment epithelium contributions to the fourth band (e.g., Bruch membrane) remain to be determined.

  19. Work-Centered Technology Development (WTD)

    DTIC Science & Technology

    2005-03-01

    theoretical, testable, inductive, and repeatable foundations of science. o Theoretical foundations include notions such as statistical versus analytical...Human Factors and Ergonomics Society, 263-267. 179 Eggleston, R. G. (2005). Coursebook : Work-Centered Design (WCD). AFRL/HECS WCD course training

  20. Writing testable software requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knirk, D.

    1997-11-01

    This tutorial identifies common problems in analyzing problem requirements and in constructing a written specification of what the software is to do. It deals with two main problem areas: identifying and describing problem requirements, and analyzing and describing behavior specifications.

  1. All pure bipartite entangled states can be self-tested

    PubMed Central

    Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio

    2017-01-01

    Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states. PMID:28548093
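
    The prototypical self-testing scenario mentioned here, the singlet state certified through the CHSH inequality, is easy to verify numerically. The snippet below is an illustration only (it is not the construction introduced in the paper): it evaluates the CHSH combination for the singlet with the standard optimal measurement settings and recovers the quantum maximum 2√2, the correlation value that identifies the singlet up to local isometries.

        import numpy as np

        # Pauli matrices and the two-qubit singlet state (|01> - |10>)/sqrt(2)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

        def expval(A, B):
            # <psi| A (x) B |psi> for the singlet
            return np.real(singlet.conj() @ np.kron(A, B) @ singlet)

        # Standard optimal CHSH settings for the singlet
        A0, A1 = Z, X
        B0, B1 = -(Z + X) / np.sqrt(2), (X - Z) / np.sqrt(2)

        S = expval(A0, B0) + expval(A0, B1) + expval(A1, B0) - expval(A1, B1)
        print(round(S, 6), round(2 * np.sqrt(2), 6))   # both ~2.828427: the Tsirelson bound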

  2. All pure bipartite entangled states can be self-tested

    NASA Astrophysics Data System (ADS)

    Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio

    2017-05-01

    Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states.

  3. All pure bipartite entangled states can be self-tested.

    PubMed

    Coladangelo, Andrea; Goh, Koon Tong; Scarani, Valerio

    2017-05-26

    Quantum technologies promise advantages over their classical counterparts in the fields of computation, security and sensing. It is thus desirable that classical users are able to obtain guarantees on quantum devices, even without any knowledge of their inner workings. That such classical certification is possible at all is remarkable: it is a consequence of the violation of Bell inequalities by entangled quantum systems. Device-independent self-testing refers to the most complete such certification: it enables a classical user to uniquely identify the quantum state shared by uncharacterized devices by simply inspecting the correlations of measurement outcomes. Self-testing was first demonstrated for the singlet state and a few other examples of self-testable states were reported in recent years. Here, we address the long-standing open question of whether every pure bipartite entangled state is self-testable. We answer it affirmatively by providing explicit self-testing correlations for all such states.

  4. How drugs get into cells: tested and testable predictions to help discriminate between transporter-mediated uptake and lipoidal bilayer diffusion

    PubMed Central

    Kell, Douglas B.; Oliver, Stephen G.

    2014-01-01

    One approach to experimental science involves creating hypotheses, then testing them by varying one or more independent variables, and assessing the effects of this variation on the processes of interest. We use this strategy to compare the intellectual status and available evidence for two models or views of mechanisms of transmembrane drug transport into intact biological cells. One (BDII) asserts that lipoidal phospholipid Bilayer Diffusion Is Important, while a second (PBIN) proposes that in normal intact cells Phospholipid Bilayer diffusion Is Negligible (i.e., may be neglected quantitatively), because evolution selected against it, and with transmembrane drug transport being effected by genetically encoded proteinaceous carriers or pores, whose “natural” biological roles, and substrates are based in intermediary metabolism. Despite a recent review elsewhere, we can find no evidence able to support BDII as we can find no experiments in intact cells in which phospholipid bilayer diffusion was either varied independently or measured directly (although there are many papers where it was inferred by seeing a covariation of other dependent variables). By contrast, we find an abundance of evidence showing cases in which changes in the activities of named and genetically identified transporters led to measurable changes in the rate or extent of drug uptake. PBIN also has considerable predictive power, and accounts readily for the large differences in drug uptake between tissues, cells and species, in accounting for the metabolite-likeness of marketed drugs, in pharmacogenomics, and in providing a straightforward explanation for the late-stage appearance of toxicity and of lack of efficacy during drug discovery programmes despite macroscopically adequate pharmacokinetics. Consequently, the view that Phospholipid Bilayer diffusion Is Negligible (PBIN) provides a starting hypothesis for assessing cellular drug uptake that is much better supported by the available evidence, and is both more productive and more predictive. PMID:25400580

  5. Modeling integrated cellular machinery using hybrid Petri-Boolean networks.

    PubMed

    Berestovsky, Natalie; Zhou, Wanding; Nagrath, Deepak; Nakhleh, Luay

    2013-01-01

    The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological functions, this circuitry is divided into three types of networks, each encoding for a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally enabled taming computational complexity dealing with the entire system, allowed for using modeling techniques that are specific to each of the components, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes loss of information and power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM) that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network, and can be perturbed to generate testable hypotheses. Our model is qualitative and is mostly built upon knowledge from the literature and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction makes it a very good candidate for modeling integrated networks from qualitative data. The results it produces can guide the practitioner to zoom into components and interconnections and investigate them using such more detailed mathematical models.

  6. An overview of methods for developing bioenergetic and life history models for rare and endangered species

    USGS Publications Warehouse

    Petersen, J.H.; DeAngelis, D.L.; Paukert, C.P.

    2008-01-01

    Many fish species are at risk to some degree, and conservation efforts are planned or underway to preserve sensitive populations. For many imperiled species, models could serve as useful tools for researchers and managers as they seek to understand individual growth, quantify predator-prey dynamics, and identify critical sources of mortality. Development and application of models for rare species, however, have been constrained by small population sizes, difficulty in obtaining sampling permits, limited opportunities for funding, and regulations on how endangered species can be used in laboratory studies. Bioenergetic and life history models should help with endangered species-recovery planning since these types of models have been used successfully in the last 25 years to address management problems for many commercially and recreationally important fish species. In this paper we discuss five approaches to developing models and parameters for rare species. Borrowing model functions and parameters from related species is simple, but uncorroborated results can be misleading. Directly estimating parameters with laboratory studies may be possible for rare species that have locally abundant populations. Monte Carlo filtering can be used to estimate several parameters by means of performing simple laboratory growth experiments to first determine test criteria. Pattern-oriented modeling (POM) is a new and developing field of research that uses field-observed patterns to build, test, and parameterize models. Models developed using the POM approach are closely linked to field data, produce testable hypotheses, and require a close working relationship between modelers and empiricists. Artificial evolution in individual-based models can be used to gain insight into adaptive behaviors for poorly understood species and thus can fill in knowledge gaps. © Copyright by the American Fisheries Society 2008.
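
    Of the approaches listed, Monte Carlo filtering is the easiest to sketch in code. The toy example below assumes a deliberately simple growth model and an invented acceptance window standing in for laboratory growth criteria: parameter sets are sampled from broad ranges and retained only if the simulated final weight falls inside the window, yielding plausible parameter ranges without needing to fit the rare species directly.

        import numpy as np

        rng = np.random.default_rng(2)

        def simulate_growth(c_max, activity, days=60, w0=5.0, ration=0.8):
            # Toy bioenergetics balance: daily gain = consumption - metabolic cost.
            # Functional forms and constants are invented for illustration only.
            w = w0
            for _ in range(days):
                consumption = ration * c_max * w ** 0.7
                metabolism = 0.03 * activity * w
                w = max(w + consumption - metabolism, 0.0)
            return w

        # Hypothetical acceptance criterion from a simple laboratory growth trial
        weight_window = (20.0, 30.0)          # acceptable final weight (g)

        # Monte Carlo filtering: sample parameters, keep those meeting the criterion
        samples = rng.uniform([0.05, 0.5], [0.5, 2.0], size=(10_000, 2))
        accepted = np.array([p for p in samples
                             if weight_window[0] <= simulate_growth(*p) <= weight_window[1]])

        print(f"accepted {len(accepted)} of {len(samples)} parameter sets")
        if len(accepted):
            print("c_max range   :", accepted[:, 0].min(), accepted[:, 0].max())
            print("activity range:", accepted[:, 1].min(), accepted[:, 1].max())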

  7. Modeling Integrated Cellular Machinery Using Hybrid Petri-Boolean Networks

    PubMed Central

    Berestovsky, Natalie; Zhou, Wanding; Nagrath, Deepak; Nakhleh, Luay

    2013-01-01

    The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological functions, this circuitry is divided into three types of networks, each encoding for a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally enabled taming computational complexity dealing with the entire system, allowed for using modeling techniques that are specific to each of the components, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes loss of information and power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM) that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network, and can be perturbed to generate testable hypotheses. Our model is qualitative and is mostly built upon knowledge from the literature and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction makes it a very good candidate for modeling integrated networks from qualitative data. The results it produces can guide the practitioner to zoom into components and interconnections and investigate them using such more detailed mathematical models. PMID:24244124
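
    A toy coupling of the two formalisms may help fix ideas. The fragment below is not the IHM itself, only an invented miniature in which a synchronously updated Boolean layer (signal activates a transcription factor, which activates an enzyme gene) gates a stochastic Petri-net transition that moves metabolite tokens from a substrate place to a product place.

        import random
        random.seed(0)

        # Boolean layer: a tiny regulatory cascade (toy rules, not the IHM)
        signal, tf, enzyme = True, False, False

        # Petri-net layer: token counts for a substrate and a product metabolite
        places = {"substrate": 20, "product": 0}

        for step in range(30):
            # Synchronous Boolean update: signal -> transcription factor -> enzyme
            tf, enzyme = signal, tf
            # Stochastic Petri transition: the enzyme converts substrate to product
            if enzyme and places["substrate"] > 0 and random.random() < 0.5:
                places["substrate"] -= 1
                places["product"] += 1

        print(places)   # part of the substrate pool is converted once the enzyme is ON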

  8. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions.

    PubMed

    Mohr, David C; Cuijpers, Pim; Lehman, Kenneth

    2011-03-10

    The effectiveness of and adherence to eHealth interventions is enhanced by human support. However, human support has largely not been manualized and has usually not been guided by clear models. The objective of this paper is to develop a clear theoretical model, based on relevant empirical literature, that can guide research into human support components of eHealth interventions. A review of the literature revealed little relevant information from clinical sciences. Applicable literature was drawn primarily from organizational psychology, motivation theory, and computer-mediated communication (CMC) research. We have developed a model, referred to as "Supportive Accountability." We argue that human support increases adherence through accountability to a coach who is seen as trustworthy, benevolent, and having expertise. Accountability should involve clear, process-oriented expectations that the patient is involved in determining. Reciprocity in the relationship, through which the patient derives clear benefits, should be explicit. The effect of accountability may be moderated by patient motivation. The more intrinsically motivated patients are, the less support they likely require. The process of support is also mediated by the communications medium (eg, telephone, instant messaging, email). Different communications media each have their own potential benefits and disadvantages. We discuss the specific components of accountability, motivation, and CMC medium in detail. The proposed model is a first step toward understanding how human support enhances adherence to eHealth interventions. Each component of the proposed model is a testable hypothesis. As we develop viable human support models, these should be manualized to facilitate dissemination.

  9. Brane-World Gravity.

    PubMed

    Maartens, Roy; Koyama, Kazuya

    2010-01-01

    The observable universe could be a (1+3)-surface (the "brane") embedded in a (1+3+d)-dimensional spacetime (the "bulk"), with Standard Model particles and fields trapped on the brane while gravity is free to access the bulk. At least one of the d extra spatial dimensions could be very large relative to the Planck scale, which lowers the fundamental gravity scale, possibly even down to the electroweak (∼ TeV) level. This revolutionary picture arises in the framework of recent developments in M theory. The (1+10)-dimensional M theory encompasses the known (1+9)-dimensional superstring theories, and is widely considered to be a promising potential route to quantum gravity. At low energies, gravity is localized at the brane and general relativity is recovered, but at high energies gravity "leaks" into the bulk, behaving in a truly higher-dimensional way. This introduces significant changes to gravitational dynamics and perturbations, with interesting and potentially testable implications for high-energy astrophysics, black holes, and cosmology. Brane-world models offer a phenomenological way to test some of the novel predictions and corrections to general relativity that are implied by M theory. This review analyzes the geometry, dynamics and perturbations of simple brane-world models for cosmology and astrophysics, mainly focusing on warped 5-dimensional brane-worlds based on the Randall-Sundrum models. We also cover the simplest brane-world models in which 4-dimensional gravity on the brane is modified at low energies - the 5-dimensional Dvali-Gabadadze-Porrati models. Then we discuss co-dimension two branes in 6-dimensional models.
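
    For orientation, the warped geometry underlying the Randall-Sundrum constructions discussed in this review can be summarised by the standard five-dimensional metric (quoted from the general literature, not derived here),

        \mathrm{d}s^{2} = e^{-2k|y|}\, \eta_{\mu\nu}\, \mathrm{d}x^{\mu} \mathrm{d}x^{\nu} + \mathrm{d}y^{2},

    where y is the coordinate along the extra dimension and k is the bulk curvature scale set by the negative bulk cosmological constant; the exponential warp factor is what localises low-energy gravity on the brane.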

  10. Towards a theory of PACS deployment: an integrative PACS maturity framework.

    PubMed

    van de Wetering, Rogier; Batenburg, Ronald

    2014-06-01

    Owing to the large financial investments that go along with picture archiving and communication system (PACS) deployments, and to inconsistent PACS performance evaluations, there is a pressing need for a better understanding of the implications of PACS deployment in hospitals. We claim that there is a gap in the research field, both theoretically and empirically, in explaining the success of PACS deployment and maturity in hospitals. Theoretical principles relevant to PACS performance, maturity, and alignment are reviewed from a system and complexity perspective. A conceptual model to explain PACS performance and a set of testable hypotheses are then developed. Then, structural equation modeling (SEM), i.e. causal modeling, is applied to validate the model and hypotheses based on a research sample of 64 hospitals that use PACS, i.e. 70% of all hospitals in the Netherlands. Outcomes of the SEM analyses substantiate that the measurements of all constructs are reliable and valid. PACS alignment, modeled as a higher-order construct of five complementary organizational dimensions and maturity levels, has a significant positive impact on PACS performance. This result is robust and stable for various sub-samples and segments. This paper presents a conceptual model that explains how alignment in deploying PACS in hospitals is positively related to the perceived performance of PACS. The conceptual model is extended with tools such as checklists to systematically identify the improvement areas for hospitals in the PACS domain. The holistic approach towards PACS alignment and maturity provides a framework for clinical practice.

  11. Psychological stress and fibromyalgia: a review of the evidence suggesting a neuroendocrine link

    PubMed Central

    Gupta, Anindya; Silman, Alan J

    2004-01-01

    The present review attempts to reconcile the dichotomy that exists in the literature in relation to fibromyalgia, in that it is considered either a somatic response to psychological stress or a distinct organically based syndrome. Specifically, the hypothesis explored is that the link between chronic stress and the subsequent development of fibromyalgia can be explained by one or more abnormalities in neuroendocrine function. There are several such abnormalities recognised that both occur as a result of chronic stress and are observed in fibromyalgia. Whether such abnormalities have an aetiologic role remains uncertain but should be testable by well-designed prospective studies. PMID:15142258

  12. Prediction and typicality in multiverse cosmology

    NASA Astrophysics Data System (ADS)

    Azhar, Feraz

    2014-02-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue, contrary to recent claims, that it is not clear one can either dispense with notions of typicality altogether or simply presume typicality when comparing resulting probability distributions with observations. We show, in a concrete top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios.

  13. NOXclass: prediction of protein-protein interaction types.

    PubMed

    Zhu, Hongbo; Domingues, Francisco S; Sommer, Ingolf; Lengauer, Thomas

    2006-01-19

    Structural models determined by X-ray crystallography play a central role in understanding protein-protein interactions at the molecular level. Interpretation of these models requires the distinction between non-specific crystal packing contacts and biologically relevant interactions. This has been investigated previously and classification approaches have been proposed. However, less attention has been devoted to distinguishing different types of biological interactions. These interactions are classified as obligate and non-obligate according to the effect of the complex formation on the stability of the protomers. So far no automatic classification methods for distinguishing obligate, non-obligate and crystal packing interactions have been made available. Six interface properties have been investigated on a dataset of 243 protein interactions. The six properties have been combined using a support vector machine algorithm, resulting in NOXclass, a classifier for distinguishing obligate, non-obligate and crystal packing interactions. We achieve an accuracy of 91.8% for the classification of these three types of interactions using a leave-one-out cross-validation procedure. NOXclass allows the interpretation and analysis of protein quaternary structures. In particular, it generates testable hypotheses regarding the nature of protein-protein interactions, when experimental results are not available. We expect this server will benefit the users of protein structural models, as well as protein crystallographers and NMR spectroscopists. A web server based on the method and the datasets used in this study are available at http://noxclass.bioinf.mpi-inf.mpg.de/.
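
    The classification setup described here (six interface properties, three classes, leave-one-out cross-validation) maps directly onto a few lines of scikit-learn. The sketch below uses random placeholder features and labels purely to show the shape of the computation; the real NOXclass features, labels, and tuned kernel parameters would have to come from the published dataset and server.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(3)

        # Placeholder data: 243 interfaces x 6 interface properties.
        # Replace with the published NOXclass dataset for a real evaluation.
        X = rng.normal(size=(243, 6))
        y = rng.integers(0, 3, size=243)   # 0: obligate, 1: non-obligate, 2: crystal packing

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
        print(f"leave-one-out accuracy: {scores.mean():.3f}")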

  14. The Structure of the Poliovirus 135S Cell Entry Intermediate at 10-Angstrom Resolution Reveals the Location of an Externalized Polypeptide That Binds to Membranes†

    PubMed Central

    Bubeck, Doryen; Filman, David J.; Cheng, Naiqian; Steven, Alasdair C.; Hogle, James M.; Belnap, David M.

    2005-01-01

    Poliovirus provides a well-characterized system for understanding how nonenveloped viruses enter and infect cells. Upon binding its receptor, poliovirus undergoes an irreversible conformational change to the 135S cell entry intermediate. This transition involves shifts of the capsid protein β barrels, accompanied by the externalization of VP4 and the N terminus of VP1. Both polypeptides associate with membranes and are postulated to facilitate entry by forming a translocation pore for the viral RNA. We have calculated cryo-electron microscopic reconstructions of 135S particles that permit accurate placement of the β barrels, loops, and terminal extensions of the capsid proteins. The reconstructions and resulting models indicate that each N terminus of VP1 exits the capsid though an opening in the interface between VP1 and VP3 at the base of the canyon that surrounds the fivefold axis. Comparison with reconstructions of 135S particles in which the first 31 residues of VP1 were proteolytically removed revealed that the externalized N terminus is located near the tips of propeller-like features surrounding the threefold axes rather than at the fivefold axes, as had been proposed in previous models. These observations have forced a reexamination of current models for the role of the 135S particle in transmembrane pore formation and suggest testable alternatives. PMID:15919927

  15. Informed maintenance for next generation space transportation systems

    NASA Astrophysics Data System (ADS)

    Fox, Jack J.

    2001-02-01

    Perhaps the most substantial single obstacle to progress of space exploration and utilization of space for human benefit is the safety & reliability and the inherent cost of launching to, and returning from, space. The primary influence in the high costs of current launch systems (the same is true for commercial and military aircraft and most other reusable systems) is the operations, maintenance and infrastructure portion of the program's total life cycle costs. Reusable Launch Vehicle (RLV) maintenance and design have traditionally been two separate engineering disciplines with often conflicting objectives - maximizing ease of maintenance versus optimizing performance, size and cost. Testability analysis, an element of Informed Maintenance (IM), has been an ad hoc, manual effort, in which maintenance engineers attempt to identify an efficient method of troubleshooting for the given product, with little or no control over product design. Therefore, testability deficiencies in the design cannot be rectified. It is now widely recognized that IM must be engineered into the product at the design stage itself, so that an optimal compromise is achieved between system maintainability and performance. The elements of IM include testability analysis, diagnostics/prognostics, automated maintenance scheduling, automated logistics coordination, paperless documentation and data mining. IM derives its heritage from complementary NASA science, space and aeronautic enterprises such as the on-board autonomous Remote Agent Architecture recently flown on NASA's Deep Space 1 Probe as well as commercial industries that employ quick turnaround operations. Commercial technologies and processes supporting NASA's IM initiatives include condition based maintenance technologies from Boeing's Commercial 777 Aircraft and Lockheed-Martin's F-22 Fighter, automotive computer diagnostics and autonomous controllers that enable 100,000 mile maintenance free operations, and locomotive monitoring system software. This paper will summarize NASA's long-term strategy, development, and implementation plans for Informed Maintenance for next generation RLVs. This will be done by converging into a single IM vision the work being performed throughout NASA, industry and academia. Additionally, a current status of IM development throughout NASA programs such as the Space Shuttle, X-33, X-34 and X-37 will be provided and will conclude with an overview of near-term work that is being initiated in FY00 to support NASA's 2nd Generation Reusable Launch Vehicle Program.

  16. Informed maintenance for next generation reusable launch systems

    NASA Astrophysics Data System (ADS)

    Fox, Jack J.; Gormley, Thomas J.

    2001-03-01

    Perhaps the most substantial single obstacle to progress of space exploration and utilization of space for human benefit is the safety & reliability and the inherent cost of launching to, and returning from, space. The primary influence in the high costs of current launch systems (the same is true for commercial and military aircraft and most other reusable systems) is the operations, maintenance and infrastructure portion of the program's total life cycle costs. Reusable Launch Vehicle (RLV) maintenance and design have traditionally been two separate engineering disciplines with often conflicting objectives - maximizing ease of maintenance versus optimizing performance, size and cost. Testability analysis, an element of Informed Maintenance (IM), has been an ad hoc, manual effort, in which maintenance engineers attempt to identify an efficient method of troubleshooting for the given product, with little or no control over product design. Therefore, testability deficiencies in the design cannot be rectified. It is now widely recognized that IM must be engineered into the product at the design stage itself, so that an optimal compromise is achieved between system maintainability and performance. The elements of IM include testability analysis, diagnostics/prognostics, automated maintenance scheduling, automated logistics coordination, paperless documentation and data mining. IM derives its heritage from complementary NASA science, space and aeronautic enterprises such as the on-board autonomous Remote Agent Architecture recently flown on NASA's Deep Space 1 Probe as well as commercial industries that employ quick turnaround operations. Commercial technologies and processes supporting NASA's IM initiatives include condition based maintenance technologies from Boeing's Commercial 777 Aircraft and Lockheed-Martin's F-22 Fighter, automotive computer diagnostics and autonomous controllers that enable 100,000 mile maintenance free operations, and locomotive monitoring system software. This paper will summarize NASA's long-term strategy, development, and implementation plans for Informed Maintenance for next generation RLVs. This will be done by converging into a single IM vision the work being performed throughout NASA, industry and academia. Additionally, a current status of IM development throughout NASA programs such as the Space Shuttle, X-33, X-34 and X-37 will be provided and will conclude with an overview of near-term work that is being initiated in FY00 to support NASA's 2nd Generation Reusable Launch Vehicle Program.

  17. Dissecting Leishmania infantum Energy Metabolism - A Systems Perspective

    PubMed Central

    Subramanian, Abhishek; Jhawar, Jitesh; Sarkar, Ram Rup

    2015-01-01

    Leishmania infantum, causative agent of visceral leishmaniasis in humans, illustrates a complex lifecycle pertaining to two extreme environments, namely, the gut of the sandfly vector and human macrophages. Leishmania is capable of dynamically adapting and tactically switching between these critically hostile situations. The possible metabolic routes ventured by the parasite to achieve this exceptional adaptation to its varying environments are still poorly understood. In this study, we present an extensively reconstructed energy metabolism network of Leishmania infantum as an attempt to identify certain strategic metabolic routes preferred by the parasite to optimize its survival in such dynamic environments. The reconstructed network consists of 142 genes encoding for enzymes performing 237 reactions distributed across five distinct model compartments. We annotated the subcellular locations of different enzymes and their reactions on the basis of strong literature evidence and sequence-based detection of cellular localization signal within a protein sequence. To explore the diverse features of parasite metabolism the metabolic network was implemented and analyzed as a constraint-based model. Using a systems-based approach, we also put forth an extensive set of lethal reaction knockouts; some of which were validated using published data on Leishmania species. Performing a robustness analysis, the model was rigorously validated and tested for the secretion of overflow metabolites specific to Leishmania under varying extracellular oxygen uptake rate. Further, the fate of important non-essential amino acids in L. infantum metabolism was investigated. Stage-specific scenarios of L. infantum energy metabolism were incorporated in the model and key metabolic differences were outlined. Analysis of the model revealed the essentiality of glucose uptake, succinate fermentation, glutamate biosynthesis and an active TCA cycle as driving forces for parasite energy metabolism and its optimal growth. Finally, through our in silico knockout analysis, we could identify possible therapeutic targets that provide experimentally testable hypotheses. PMID:26367006
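
    Constraint-based analysis of the kind applied to this network reduces, at its core, to a linear programme: maximise a biomass (growth) flux subject to steady-state mass balance and flux bounds, and repeat with individual reactions forced to zero to test for lethality. The fragment below is a deliberately tiny three-reaction illustration of that recipe, not the L. infantum model itself; all reactions and bounds are invented.

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix (rows: metabolites A, B; columns: reactions)
        #   R1: uptake      -> A
        #   R2: A -> B      (lumped internal conversion)
        #   R3: B -> biomass
        S = np.array([[1, -1,  0],
                      [0,  1, -1]])

        # Steady state S v = 0, uptake capped at 10 units, all fluxes irreversible here
        bounds = [(0, 10), (0, None), (0, None)]
        c = np.array([0, 0, -1.0])          # maximize biomass flux (minimize its negative)

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal fluxes:", res.x)     # expected: [10, 10, 10]

        # Simple in silico knockout: force R2 to zero and re-optimize
        ko_bounds = [(0, 10), (0, 0), (0, None)]
        ko = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=ko_bounds, method="highs")
        print("biomass after knockout:", ko.x[2])   # 0 -> the reaction is essential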

  18. Constant-roll (quasi-)linear inflation

    NASA Astrophysics Data System (ADS)

    Karam, A.; Marzola, L.; Pappas, T.; Racioppi, A.; Tamvakis, K.

    2018-05-01

    In constant-roll inflation, the scalar field that drives the accelerated expansion of the Universe is rolling down its potential at a constant rate. Within this framework, we highlight the relations between the Hubble slow-roll parameters and the potential ones, studying in detail the case of a single-field Coleman-Weinberg model characterised by a non-minimal coupling of the inflaton to gravity. With respect to the exact constant-roll predictions, we find that assuming an approximate slow-roll behaviour yields a difference of Δr = 0.001 in the tensor-to-scalar ratio prediction. Such a discrepancy is in principle testable by future satellite missions. As for the scalar spectral index n_s, we find that the existing 2-σ bound constrains the value of the non-minimal coupling to ξ_φ ~ 0.29–0.31 in the model under consideration.
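
    For reference, constant-roll inflation is usually defined (in one common convention; sign and normalisation choices vary between papers, and the specific convention of this work is not reproduced here) by requiring the inflaton acceleration to track the Hubble friction term at a fixed rate,

        \ddot{\phi} = \beta\, H\, \dot{\phi}, \qquad \beta = \text{const},

    so that β = 0 recovers the standard slow-roll limit while β = -3 corresponds to ultra-slow-roll on an exactly flat potential.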

  19. Stochastic recruitment leads to symmetry breaking in foraging populations

    NASA Astrophysics Data System (ADS)

    Biancalani, Tommaso; Dyson, Louise; McKane, Alan

    2014-03-01

    When an ant colony is faced with two identical equidistant food sources, the foraging ants are found to concentrate more on one source than the other. Analogous symmetry-breaking behaviours have been reported in various population systems (such as queueing or stock market trading), suggesting the existence of a simple universal mechanism. Past studies have neglected the effect of demographic noise and required rather complicated models to qualitatively reproduce this behaviour. I will show how including the effects of demographic noise leads to a radically different conclusion. The symmetry-breaking arises solely due to the process of recruitment and ceases to occur for large population sizes. The latter fact provides a testable prediction for a real system.
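
    The mechanism is simple enough to reproduce with a short stochastic simulation. The sketch below is a Gillespie-style implementation of a generic two-source recruitment model with spontaneous switching (rate r) and pairwise recruitment (rate a/N per pair); parameter values are illustrative and not taken from the talk. Deterministically the two recruitment flows cancel and the 50/50 split is stable, but for small colonies the demographic noise generated by recruitment keeps the population committed to one source most of the time, and the effect disappears as N grows.

        import numpy as np

        rng = np.random.default_rng(4)

        def simulate(N, r=0.01, a=1.0, t_end=2_000.0):
            # Gillespie simulation of the two-source model; returns the
            # time-averaged asymmetry |n1/N - 1/2| of the foraging population.
            n1 = N // 2
            t, asym, weight = 0.0, 0.0, 0.0
            while t < t_end:
                n2 = N - n1
                rate_12 = r * n1 + (a / N) * n1 * n2   # an ant leaves source 1
                rate_21 = r * n2 + (a / N) * n1 * n2   # an ant leaves source 2
                total = rate_12 + rate_21
                dt = rng.exponential(1.0 / total)
                asym += abs(n1 / N - 0.5) * dt
                weight += dt
                t += dt
                if rng.random() < rate_12 / total:
                    n1 -= 1
                else:
                    n1 += 1
            return asym / weight

        for N in (20, 1000):
            print(N, simulate(N))
        # Small colonies spend most of the time strongly committed to one source
        # (asymmetry near 0.5); large colonies stay close to a 50/50 split.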

  20. Stochastic stability properties of jump linear systems

    NASA Technical Reports Server (NTRS)

    Feng, Xiangbo; Loparo, Kenneth A.; Ji, Yuandong; Chizeck, Howard J.

    1992-01-01

    Jump linear systems are defined as a family of linear systems with randomly jumping parameters (usually governed by a Markov jump process) and are used to model systems subject to failures or changes in structure. The authors study stochastic stability properties in jump linear systems and the relationship among various moment and sample path stability properties. It is shown that all second moment stability properties are equivalent and are sufficient for almost sure sample path stability, and a testable necessary and sufficient condition for second moment stability is derived. The Lyapunov exponent method for the study of almost sure sample stability is discussed, and a theorem which characterizes the Lyapunov exponents of jump linear systems is presented.
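
    In the discrete-time formulation most often used for such results (stated here generically; the paper's own notation is not reproduced), a jump linear system and the second moment (mean-square) stability property read

        x_{k+1} = A_{\sigma_k}\, x_k, \qquad \sigma_k \in \{1,\dots,m\}\ \text{a finite-state Markov chain},

        \text{mean-square stability:}\quad
        \lim_{k \to \infty} \mathbb{E}\!\left[ \lVert x_k \rVert^{2} \right] = 0
        \ \ \text{for every } x_0 \text{ and every initial distribution of } \sigma_0,

    which is the property the abstract states to be equivalent across the various second moment notions and sufficient for almost sure sample path stability.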

  1. Availability, fermentability, and energy value of resistant maltodextrin: modeling of short-term indirect calorimetric measurements in healthy adults.

    PubMed

    Goda, Toshinao; Kajiya, Yuya; Suruga, Kazuhito; Tagami, Hiroyuki; Livesey, Geoffrey

    2006-06-01

    Determination of the metabolizable (ME) and net metabolizable (NME) energy of total carbohydrate requires estimation of its available (AC) and fermentable (FC) carbohydrate content. Modeling of indirect calorimetric observations (respiratory gas exchange) and breath hydrogen would appear to make it possible to estimate noninvasively these nutritional quantities and the approximate time-course of availability. We assessed the time-course of metabolism and energy availability from resistant maltodextrin (RMD) by modeling of respiratory gases after a single oral dose. Seventeen healthy adults (13 M, 4 F; aged 25-46 y) were randomly assigned to treatments (water, maltodextrin, or RMD) in a multiple-crossover, single-blinded trial with ≥ 7 d washout. We monitored 8-h nitrogen-corrected oxygen and carbon dioxide exchanges and breath hydrogen. All treatment groups took low-carbohydrate meals at 3 and 6 h. Indirect calorimetry alone provided only qualitative information about the nutritional values of carbohydrate. In contrast, modeling of gaseous exchanges along with the use of central assumptions showed that 17 ± 2% of RMD was AC and 40 ± 4% was FC. As compared with 17 kJ gross energy/g RMD, mean (± SE) energy values were 7.3 ± 0.6 kJ ME/g and 6.3 ± 0.5 kJ NME/g. The fiber fraction of RMD provided 5.2 ± 0.7 kJ ME/g and 4.1 ± 0.6 kJ NME/g. Modeling with the use of this noninvasive and widely available respiratory gas-monitoring technique yields nutritional values for carbohydrate that are supported by enzymatic, microbial, and animal studies and human fecal collection studies. Improvement in this approach is likely and testable across laboratories.

  2. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases

    PubMed Central

    Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10^5 compounds and several functional relations among 1.67 × 10^5 proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent experimental validations as found post-facto in the literature. PMID:26735851

  3. Bayesian inferences suggest that Amazon Yunga Natives diverged from Andeans less than 5000 ybp: implications for South American prehistory.

    PubMed

    Scliar, Marilia O; Gouveia, Mateus H; Benazzo, Andrea; Ghirotto, Silvia; Fagundes, Nelson J R; Leal, Thiago P; Magalhães, Wagner C S; Pereira, Latife; Rodrigues, Maira R; Soares-Souza, Giordano B; Cabrera, Lilia; Berg, Douglas E; Gilman, Robert H; Bertorelle, Giorgio; Tarazona-Santos, Eduardo

    2014-09-30

    Archaeology reports millenary cultural contacts between Peruvian Coast-Andes and the Amazon Yunga, a rainforest transitional region between Andes and Lower Amazonia. To clarify the relationships between cultural and biological evolution of these populations, in particular between Amazon Yungas and Andeans, we used DNA-sequence data, a model-based Bayesian approach and several statistical validations to infer a set of demographic parameters. We found that the genetic diversity of the Shimaa (an Amazon Yunga population) is a subset of that of Quechuas from Central-Andes. Using the Isolation-with-Migration population genetics model, we inferred that the Shimaa ancestors were a small subgroup that split less than 5300 years ago (after the development of complex societies) from an ancestral Andean population. After the split, the most plausible scenario compatible with our results is that the ancestors of Shimaas moved toward the Peruvian Amazon Yunga and incorporated the culture and language of some of their neighbors, but not a substantial amount of their genes. We validated our results using Approximate Bayesian Computations, posterior predictive tests and the analysis of pseudo-observed datasets. We presented a case study in which model-based Bayesian approaches, combined with necessary statistical validations, shed light on the prehistoric demographic relationship between Andeans and a population from the Amazon Yunga. Our results offer a testable model for the peopling of this large transitional environmental region between the Andes and the Lower Amazonia. However, studies on larger samples and involving more populations of these regions are necessary to confirm if the predominant Andean biological origin of the Shimaas is the rule, and not the exception.
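
    The Approximate Bayesian Computation step used for validation follows a simple rejection recipe that is easy to sketch. The toy example below is not the Isolation-with-Migration analysis of the paper: it assumes an invented summary statistic (a differentiation measure that saturates with divergence time) and an invented observed value, and simply keeps the prior draws whose simulated statistic lands within a tolerance of the observation.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy "model": pairwise differentiation grows with divergence time.
        # The functional form and constants are invented for illustration; a real
        # analysis would replace this with coalescent simulations of the sequence data.
        def simulate_fst(split_time_years, n_loci=100):
            expected = 1 - np.exp(-split_time_years / 40_000)
            return np.mean(expected + rng.normal(0, 0.02, size=n_loci))

        fst_observed = 0.12                      # hypothetical observed summary statistic
        prior_draws = rng.uniform(0, 20_000, size=50_000)
        tolerance = 0.005

        # ABC rejection: keep parameter draws whose simulated summary is close to the data
        accepted = np.array([t for t in prior_draws
                             if abs(simulate_fst(t) - fst_observed) < tolerance])

        print(f"accepted {len(accepted)} draws")
        print("posterior median split time:", np.median(accepted), "years")
        print("95% credible interval:", np.percentile(accepted, [2.5, 97.5]))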

  4. A Multilayer Network Approach for Guiding Drug Repositioning in Neglected Diseases.

    PubMed

    Berenstein, Ariel José; Magariños, María Paula; Chernomoretz, Ariel; Agüero, Fernán

    2016-01-01

    Drug development for neglected diseases has been historically hampered due to lack of market incentives. The advent of public domain resources containing chemical information from high throughput screenings is changing the landscape of drug discovery for these diseases. In this work we took advantage of data from extensively studied organisms like human, mouse, E. coli and yeast, among others, to develop a novel integrative network model to prioritize and identify candidate drug targets in neglected pathogen proteomes, and bioactive drug-like molecules. We modeled genomic (proteins) and chemical (bioactive compounds) data as a multilayer weighted network graph that takes advantage of bioactivity data across 221 species, chemical similarities between 1.7 × 10^5 compounds and several functional relations among 1.67 × 10^5 proteins. These relations comprised orthology, sharing of protein domains, and shared participation in defined biochemical pathways. We showcase the application of this network graph to the problem of prioritization of new candidate targets, based on the information available in the graph for known compound-target associations. We validated this strategy by performing a cross validation procedure for known mouse and Trypanosoma cruzi targets and showed that our approach outperforms classic alignment-based approaches. Moreover, our model provides additional flexibility as two different network definitions could be considered, finding in both cases qualitatively different but sensible candidate targets. We also showcase the application of the network to suggest targets for orphan compounds that are active against Plasmodium falciparum in high-throughput screens. In this case our approach provided a reduced prioritization list of target proteins for the query molecules and showed the ability to propose new testable hypotheses for each compound. Moreover, we found that some predictions highlighted by our network model were supported by independent experimental validations as found post-facto in the literature.

  5. Models of baryogenesis

    NASA Astrophysics Data System (ADS)

    Auriemma, G.

    2005-06-01

    The origin of the asymmetry between matter and antimatter that is evident in our part of the Universe is one of the open questions in cosmology, because the CPT symmetry between matter and antimatter seems to be absolutely conserved at the microscopic level. We repeat here the classical proofs that exclude the viability of a Universe that is baryon symmetric on average, or of the observed asymmetry as a mere initial condition. The current understanding is that the asymmetry should have been dynamically generated before nucleosynthesis, by B, C, and CP-violating processes, acting out of thermodynamical equilibrium, as suggested by Sakharov in the 1970s. The physical realizations of these conditions would be possible, in principle, also in the framework of the Standard Model of elementary particles, but the present limits on the mass of the Higgs particle exclude this possibility. Finally we present the model of baryogenesis through leptogenesis, which is allowed by a minimal extension of the Standard Model and has the appeal of being testable in future long-baseline neutrino oscillation experiments.

  6. Deciphering Epithelial–Mesenchymal Transition Regulatory Networks in Cancer through Computational Approaches

    PubMed Central

    Burger, Gerhard A.; Danen, Erik H. J.; Beltman, Joost B.

    2017-01-01

    Epithelial–mesenchymal transition (EMT), the process by which epithelial cells can convert into motile mesenchymal cells, plays an important role in development and wound healing but is also involved in cancer progression. It is increasingly recognized that EMT is a dynamic process involving multiple intermediate or “hybrid” phenotypes rather than an “all-or-none” process. However, the role of EMT in various cancer hallmarks, including metastasis, is debated. Given the complexity of EMT regulation, computational modeling has proven to be an invaluable tool for cancer research, i.e., to resolve apparent conflicts in experimental data and to guide experiments by generating testable hypotheses. In this review, we provide an overview of computational modeling efforts that have been applied to regulation of EMT in the context of cancer progression and its associated tumor characteristics. Moreover, we identify possibilities to bridge different modeling approaches and point out outstanding questions in which computational modeling can contribute to advance our understanding of pathological EMT. PMID:28824874

  7. Finite Cosmology and a CMB Cold Spot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adler, R. J. (Stanford U., HEPL); Bjorken, J. D.

    2006-03-20

    The standard cosmological model posits a spatially flat universe of infinite extent. However, no observation, even in principle, could verify that the matter extends to infinity. In this work we model the universe as a finite spherical ball of dust and dark energy, and obtain a lower limit estimate of its mass and present size: the mass is at least 5 × 10²³ M☉ and the present radius is at least 50 Gly. If we are not too far from the dust-ball edge we might expect to see a cold spot in the cosmic microwave background, and there might be suppression of the low multipoles in the angular power spectrum. Thus the model may be testable, at least in principle. We also obtain and discuss the geometry exterior to the dust ball; it is Schwarzschild-de Sitter with a naked singularity, and provides an interesting picture of cosmogenesis. Finally we briefly sketch how radiation and inflation eras may be incorporated into the model.

  8. Exploring the role of internal friction in the dynamics of unfolded proteins using simple polymer models.

    PubMed

    Cheng, Ryan R; Hawk, Alexander T; Makarov, Dmitrii E

    2013-02-21

    Recent experiments showed that the reconfiguration dynamics of unfolded proteins are often adequately described by simple polymer models. In particular, the Rouse model with internal friction (RIF) captures internal friction effects as observed in single-molecule fluorescence correlation spectroscopy (FCS) studies of a number of proteins. Here we use RIF, and its non-free draining analog, Zimm model with internal friction, to explore the effect of internal friction on the rate with which intramolecular contacts can be formed within the unfolded chain. Unlike the reconfiguration times inferred from FCS experiments, which depend linearly on the solvent viscosity, the first passage times to form intramolecular contacts are shown to display a more complex viscosity dependence. We further describe scaling relationships obeyed by contact formation times in the limits of high and low internal friction. Our findings provide experimentally testable predictions that can serve as a framework for the analysis of future studies of contact formation in proteins.
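
    Schematically, the internal-friction picture described above is often summarized by an additive reconfiguration time (generic notation, not the paper's):

      \[
        \tau_{\mathrm{rec}}(\eta) \;=\; \tau_{\mathrm{solvent}}\,\frac{\eta}{\eta_{0}} \;+\; \tau_{\mathrm{int}},
      \]

    so the reconfiguration time extrapolates to a nonzero intercept \(\tau_{\mathrm{int}}\) at vanishing solvent viscosity, whereas the contact-formation (first-passage) times discussed in the abstract are predicted to deviate from this simple linear dependence.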

  9. A quantitative systems physiology model of renal function and blood pressure regulation: Model description.

    PubMed

    Hallow, K M; Gebremichael, Y

    2017-06-01

    Renal function plays a central role in cardiovascular, kidney, and multiple other diseases, and many existing and novel therapies act through renal mechanisms. Even with decades of accumulated knowledge of renal physiology, pathophysiology, and pharmacology, the dynamics of renal function remain difficult to understand and predict, often resulting in unexpected or counterintuitive therapy responses. Quantitative systems pharmacology modeling of renal function integrates this accumulated knowledge into a quantitative framework, allowing evaluation of competing hypotheses, identification of knowledge gaps, and generation of new experimentally testable hypotheses. Here we present a model of renal physiology and control mechanisms involved in maintaining sodium and water homeostasis. This model represents the core renal physiological processes involved in many research questions in drug development. The model runs in R and the code is made available. In a companion article, we present a case study using the model to explore mechanisms and pharmacology of salt-sensitive hypertension. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.

  10. Bayesian Network Webserver: a comprehensive tool for biological network modeling.

    PubMed

    Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan

    2013-11-01

    The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by current version of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW). ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.
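
    A minimal sketch of the same workflow outside the webserver, assuming the pgmpy library is available: score-based structure learning on a small discrete dataset with a structural constraint supplied as a black list. The column names and the constraint are placeholders, and BNW's own learning engine and options differ in detail.

      import pandas as pd
      from pgmpy.estimators import HillClimbSearch, BicScore

      # Placeholder discrete dataset: a genotype column and two trait columns.
      data = pd.DataFrame({
          "genotype": [0, 0, 1, 1, 0, 1, 1, 0, 1, 0],
          "trait_a":  [0, 0, 1, 1, 0, 1, 0, 0, 1, 0],
          "trait_b":  [1, 0, 1, 1, 0, 1, 1, 0, 1, 0],
      })

      # Structural constraint analogous to BNW's constraint interface:
      # forbid edges pointing into the genotype node (genotype is upstream).
      forbidden = [("trait_a", "genotype"), ("trait_b", "genotype")]

      search = HillClimbSearch(data)
      best_dag = search.estimate(scoring_method=BicScore(data), black_list=forbidden)

      print("learned edges:", list(best_dag.edges()))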

  11. Pre-service Teachers Learn the Nature of Science in Simulated Worlds

    NASA Astrophysics Data System (ADS)

    Marshall, Jill

    2007-10-01

    Although the Texas Essential Knowledge and Skills include an understanding of the nature of science as an essential goal of every high school science course, few students report opportunities to explore essential characteristics of science in their previous classes. A simulated-world environment (Erickson, 2005) allows students to function as working scientists and discover these essential elements for themselves (i.e. that science is evidence-based and involves testable conjectures, that theories have limitations and are constantly being modified based on new discoveries to more closely reflect the natural world.) I will report on pre-service teachers' exploration of two simulated worlds and resulting changes in their descriptions of the nature of science. Erickson (2005). Simulating the Nature of Science. Presentation at the 2005 Summer AAPT Meeting, Salt Lake City, UT.

  12. Microbial endocrinology and the microbiota-gut-brain axis.

    PubMed

    Lyte, Mark

    2014-01-01

    Microbial endocrinology is defined as the study of the ability of microorganisms to both produce and recognize neurochemicals that originate either within the microorganisms themselves or within the host they inhabit. As such, microbial endocrinology represents the intersection of the fields of microbiology and neurobiology. Neurochemical-based cell-to-cell signaling mechanisms in eukaryotic organisms are believed to have been acquired through late horizontal gene transfer from prokaryotic microorganisms. When considered in the context of the microbiota's ability to influence host behavior, microbial endocrinology with its theoretical basis rooted in shared neuroendocrine signaling mechanisms provides for testable experiments with which to understand the role of the microbiota in host behavior and as importantly the ability of the host to influence the microbiota through neuroendocrine-based mechanisms.

  13. Estimating outflow facility through pressure dependent pathways of the human eye

    PubMed Central

    Gardiner, Bruce S.

    2017-01-01

    We develop and test a new theory for pressure dependent outflow from the eye. The theory comprises three main parameters: (i) a constant hydraulic conductivity, (ii) an exponential decay constant and (iii) a no-flow intraocular pressure, from which the total pressure dependent outflow, average outflow facilities and local outflow facilities for the whole eye may be evaluated. We use a new notation to specify precisely the meaning of model parameters and so model outputs. Drawing on a range of published data, we apply the theory to animal eyes, enucleated eyes and in vivo human eyes, and demonstrate how to evaluate model parameters. It is shown that the theory can fit high quality experimental data remarkably well. The new theory predicts that outflow facilities and total pressure dependent outflow for the whole eye are more than twice as large as estimates based on the Goldman equation and fluorometric analysis of anterior aqueous outflow. It appears likely that this discrepancy can be largely explained by pseudofacility and aqueous flow through the retinal pigmented epithelium, while any residual discrepancy may be due to pathological processes in aged eyes. The model predicts that if the hydraulic conductivity is too small, or the exponential decay constant is too large, then intraocular eye pressure may become unstable when subjected to normal circadian changes in aqueous production. The model also predicts relationships between variables that may be helpful when planning future experiments, and the model generates many novel testable hypotheses. With additional research, the analysis described here may find application in the differential diagnosis, prognosis and monitoring of glaucoma. PMID:29261696
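
    One plausible way to combine the three parameters described above (a hydraulic conductivity \(C_{0}\), an exponential decay constant \(\beta\) and a no-flow pressure \(P_{0}\)) is sketched below; the exact functional form used by the authors may differ, so this should be read as an illustration of how such parameters fit together rather than as the paper's equation:

      \[
        C(P) \;=\; C_{0}\, e^{-\beta (P - P_{0})},
        \qquad
        Q(P) \;=\; \int_{P_{0}}^{P} C(p)\, \mathrm{d}p
              \;=\; \frac{C_{0}}{\beta}\left(1 - e^{-\beta (P - P_{0})}\right),
      \]

    where \(C(P)\) is a local outflow facility at intraocular pressure \(P\), \(Q(P)\) is the total pressure-dependent outflow, and \(Q(P)/(P - P_{0})\) gives an average facility between \(P_{0}\) and \(P\).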

  14. Dual Roles for Spike Signaling in Cortical Neural Populations

    PubMed Central

    Ballard, Dana H.; Jehee, Janneke F. M.

    2011-01-01

    A prominent feature of signaling in cortical neurons is that of randomness in the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate repeatedly have been shown to be correlated with stimuli. However while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ frequency range multi-cell action potential correlations, together with spike timing dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798

  15. Training Signaling Pathway Maps to Biochemical Data with Constrained Fuzzy Logic: Quantitative Analysis of Liver Cell Responses to Inflammatory Stimuli

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.

    2011-01-01

    Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212
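
    A minimal sketch of the graded-logic idea: each edge applies a normalized transfer function (a Hill-type curve is one common choice in constrained fuzzy logic implementations) and gates combine inputs with min/max as fuzzy AND/OR. The tiny two-input pathway and all parameter values are placeholders, not the trained liver-cell model.

      def hill(x, ec50=0.5, n=3.0):
          # Hill curve rescaled so that hill(0) = 0 and hill(1) = 1,
          # mapping a normalized input activity to a graded output activity.
          return (x ** n) * (ec50 ** n + 1.0) / (ec50 ** n + x ** n)

      def fuzzy_and(*inputs):
          return min(inputs)   # AND gate: minimum of the graded activities

      def fuzzy_or(*inputs):
          return max(inputs)   # OR gate: maximum of the graded activities

      # Toy two-step pathway (placeholder species names):
      # cytokine -> kinase_A, which together with kinase_B activates output_TF.
      cytokine = 0.7
      kinase_a = hill(cytokine, ec50=0.4, n=2.0)
      kinase_b = 0.3                               # treated as a fixed input here
      output_tf = fuzzy_and(hill(kinase_a), hill(kinase_b))
      print(f"predicted graded activation of the output node: {output_tf:.3f}")

    Training then amounts to fitting the transfer-function parameters (and pruning edges) so that the predicted graded activities match the measured phosphoprotein data.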

  16. A Computational Framework for 3D Mechanical Modeling of Plant Morphogenesis with Cellular Resolution

    PubMed Central

    Gilles, Benjamin; Hamant, Olivier; Boudaoud, Arezki; Traas, Jan; Godin, Christophe

    2015-01-01

    The link between genetic regulation and the definition of form and size during morphogenesis remains largely an open question in both plant and animal biology. This is partially due to the complexity of the process, involving extensive molecular networks, multiple feedbacks between different scales of organization and physical forces operating at multiple levels. Here we present a conceptual and modeling framework aimed at generating an integrated understanding of morphogenesis in plants. This framework is based on the biophysical properties of plant cells, which are under high internal turgor pressure, and are prevented from bursting because of the presence of a rigid cell wall. To control cell growth, the underlying molecular networks must interfere locally with the elastic and/or plastic extensibility of this cell wall. We present a model in the form of a three dimensional (3D) virtual tissue, where growth depends on the local modulation of wall mechanical properties and turgor pressure. The model shows how forces generated by turgor-pressure can act both cell autonomously and non-cell autonomously to drive growth in different directions. We use simulations to explore lateral organ formation at the shoot apical meristem. Although different scenarios lead to similar shape changes, they are not equivalent and lead to different, testable predictions regarding the mechanical and geometrical properties of the growing lateral organs. Using flower development as an example, we further show how a limited number of gene activities can explain the complex shape changes that accompany organ outgrowth. PMID:25569615

  17. Temporal Processing in the Visual Cortex of the Awake and Anesthetized Rat.

    PubMed

    Aasebø, Ida E J; Lepperød, Mikkel E; Stavrinou, Maria; Nøkkevangen, Sandra; Einevoll, Gaute; Hafting, Torkel; Fyhn, Marianne

    2017-01-01

    The activity pattern and temporal dynamics within and between neuron ensembles are essential features of information processing and believed to be profoundly affected by anesthesia. Much of our general understanding of sensory information processing, including computational models aimed at mathematically simulating sensory information processing, relies on parameters derived from recordings conducted on animals under anesthesia. Due to the high variety of neuronal subtypes in the brain, population-based estimates of the impact of anesthesia may conceal unit- or ensemble-specific effects of the transition between states. Using tetrodes chronically implanted in the primary visual cortex (V1) of rats, we conducted extracellular recordings of single units and followed the same cell ensembles in the awake and anesthetized states. We found that the transition from wakefulness to anesthesia involves unpredictable changes in temporal response characteristics. The latency of single-unit responses to visual stimulation was delayed in anesthesia, with large individual variations between units. Pair-wise correlations between units increased under anesthesia, indicating more synchronized activity. Further, the units within an ensemble show reproducible temporal activity patterns in response to visual stimuli that change between states, suggesting state-dependent sequences of activity. The current dataset, with recordings from the same neural ensembles across states, is well suited for validating and testing computational network models. This can lead to testable predictions, bring a deeper understanding of the experimental findings and improve models of neural information processing. Here, we exemplify such a workflow using a Brunel network model.

  18. Temporal Processing in the Visual Cortex of the Awake and Anesthetized Rat

    PubMed Central

    Aasebø, Ida E. J.; Stavrinou, Maria; Nøkkevangen, Sandra; Einevoll, Gaute

    2017-01-01

    The activity pattern and temporal dynamics within and between neuron ensembles are essential features of information processing and believed to be profoundly affected by anesthesia. Much of our general understanding of sensory information processing, including computational models aimed at mathematically simulating sensory information processing, relies on parameters derived from recordings conducted on animals under anesthesia. Due to the high variety of neuronal subtypes in the brain, population-based estimates of the impact of anesthesia may conceal unit- or ensemble-specific effects of the transition between states. Using tetrodes chronically implanted in the primary visual cortex (V1) of rats, we conducted extracellular recordings of single units and followed the same cell ensembles in the awake and anesthetized states. We found that the transition from wakefulness to anesthesia involves unpredictable changes in temporal response characteristics. The latency of single-unit responses to visual stimulation was delayed in anesthesia, with large individual variations between units. Pair-wise correlations between units increased under anesthesia, indicating more synchronized activity. Further, the units within an ensemble show reproducible temporal activity patterns in response to visual stimuli that change between states, suggesting state-dependent sequences of activity. The current dataset, with recordings from the same neural ensembles across states, is well suited for validating and testing computational network models. This can lead to testable predictions, bring a deeper understanding of the experimental findings and improve models of neural information processing. Here, we exemplify such a workflow using a Brunel network model. PMID:28791331

  19. Strategies for evaluating the assumptions of the regression discontinuity design: a case study using a human papillomavirus vaccination programme.

    PubMed

    Smith, Leah M; Lévesque, Linda E; Kaufman, Jay S; Strumpf, Erin C

    2017-06-01

    The regression discontinuity design (RDD) is a quasi-experimental approach used to avoid confounding bias in the assessment of new policies and interventions. It is applied specifically in situations where individuals are assigned to a policy/intervention based on whether they are above or below a pre-specified cut-off on a continuously measured variable, such as birth date, income or weight. The strength of the design is that, provided individuals do not manipulate the value of this variable, assignment to the policy/intervention is considered as good as random for individuals close to the cut-off. Despite its popularity in fields like economics, the RDD remains relatively unknown in epidemiology where its application could be tremendously useful. In this paper, we provide a practical introduction to the RDD for health researchers, describe four empirically testable assumptions of the design and offer strategies that can be used to assess whether these assumptions are met in a given study. For illustrative purposes, we implement these strategies to assess whether the RDD is appropriate for a study of the impact of human papillomavirus vaccination on cervical dysplasia. We found that, whereas the assumptions of the RDD were generally satisfied in our study context, birth timing had the potential to confound our effect estimate in an unexpected way and therefore needed to be taken into account in the analysis. Our findings underscore the importance of assessing the validity of the assumptions of this design, testing them when possible and making adjustments as necessary to support valid causal inference. © The Author 2016. Published by Oxford University Press on behalf of the International Epidemiological Association
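
    A minimal sketch of the basic regression-discontinuity estimate: fit separate local linear regressions on either side of the eligibility cut-off and take the difference of their predictions at the cut-off. The simulated data, bandwidth and outcome are placeholders; the assumption checks described in the paper (covariate balance, density tests, placebo cut-offs) would be layered on top of such an estimate.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated running variable (e.g. birth date centered at the eligibility
      # cut-off) and an outcome with a true jump of 0.30 at the cut-off.
      x = rng.uniform(-1.0, 1.0, 5000)
      treated = (x >= 0.0).astype(float)
      y = 0.5 + 0.2 * x + 0.30 * treated + rng.normal(0.0, 0.25, x.size)

      def rdd_estimate(x, y, cutoff=0.0, bandwidth=0.25):
          """Difference of local linear fits evaluated at the cut-off."""
          left = (x < cutoff) & (x >= cutoff - bandwidth)
          right = (x >= cutoff) & (x <= cutoff + bandwidth)
          # polyfit returns the highest-degree coefficient first; evaluate at cutoff.
          fit_left = np.polyval(np.polyfit(x[left], y[left], 1), cutoff)
          fit_right = np.polyval(np.polyfit(x[right], y[right], 1), cutoff)
          return fit_right - fit_left

      print(f"estimated effect at the cut-off: {rdd_estimate(x, y):.3f}")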

  20. ToxCast Chemical Landscape: Paving the Road to 21st Century Toxicology.

    PubMed

    Richard, Ann M; Judson, Richard S; Houck, Keith A; Grulke, Christopher M; Volarath, Patra; Thillainadarajah, Inthirany; Yang, Chihae; Rathman, James; Martin, Matthew T; Wambaugh, John F; Knudsen, Thomas B; Kancherla, Jayaram; Mansouri, Kamel; Patlewicz, Grace; Williams, Antony J; Little, Stephen B; Crofton, Kevin M; Thomas, Russell S

    2016-08-15

    The U.S. Environmental Protection Agency's (EPA) ToxCast program is testing a large library of Agency-relevant chemicals using in vitro high-throughput screening (HTS) approaches to support the development of improved toxicity prediction models. Launched in 2007, Phase I of the program screened 310 chemicals, mostly pesticides, across hundreds of ToxCast assay end points. In Phase II, the ToxCast library was expanded to 1878 chemicals, culminating in the public release of screening data at the end of 2013. Subsequent expansion in Phase III has resulted in more than 3800 chemicals actively undergoing ToxCast screening, 96% of which are also being screened in the multi-Agency Tox21 project. The chemical library underpinning these efforts plays a central role in defining the scope and potential application of ToxCast HTS results. The history of the phased construction of EPA's ToxCast library is reviewed, followed by a survey of the library contents from several different vantage points. CAS Registry Numbers are used to assess ToxCast library coverage of important toxicity, regulatory, and exposure inventories. Structure-based representations of ToxCast chemicals are then used to compute physicochemical properties, substructural features, and structural alerts for toxicity and biotransformation. Cheminformatics approaches using these varied representations are applied to defining the boundaries of HTS testability, evaluating chemical diversity, and comparing the ToxCast library to potential target application inventories, such as those used in EPA's Endocrine Disruptor Screening Program (EDSP). Through several examples, the ToxCast chemical library is demonstrated to provide comprehensive coverage of the knowledge domains and target inventories of potential interest to EPA. Furthermore, the varied representations and approaches presented here define local chemistry domains potentially worthy of further investigation (e.g., not currently covered in the testing library or defined by toxicity "alerts") to strategically support data mining and predictive toxicology modeling moving forward.
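
    A small sketch of the kind of structure-based library profiling mentioned above, assuming RDKit is available: compute Morgan fingerprints for a few placeholder SMILES strings and their pairwise Tanimoto similarities, a basic building block for diversity analysis and for asking whether a new inventory falls inside an HTS-testable chemistry domain.

      from rdkit import Chem, DataStructs
      from rdkit.Chem import AllChem

      # Placeholder SMILES; a real analysis would iterate over the full library.
      smiles = {
          "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
          "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
          "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
      }

      fps = {}
      for name, smi in smiles.items():
          mol = Chem.MolFromSmiles(smi)
          # 2048-bit Morgan (ECFP4-like) fingerprint with radius 2.
          fps[name] = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

      names = list(fps)
      for i, a in enumerate(names):
          for b in names[i + 1:]:
              sim = DataStructs.TanimotoSimilarity(fps[a], fps[b])
              print(f"Tanimoto({a}, {b}) = {sim:.2f}")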

  1. Ares I-X Ground Diagnostic Prototype

    NASA Technical Reports Server (NTRS)

    Schwabacher, Mark A.; Martin, Rodney Alexander; Waterman, Robert D.; Oostdyk, Rebecca Lynn; Ossenfort, John P.; Matthews, Bryan

    2010-01-01

    The automation of pre-launch diagnostics for launch vehicles offers three potential benefits: improving safety, reducing cost, and reducing launch delays. The Ares I-X Ground Diagnostic Prototype demonstrated anomaly detection, fault detection, fault isolation, and diagnostics for the Ares I-X first-stage Thrust Vector Control and for the associated ground hydraulics while the vehicle was in the Vehicle Assembly Building at Kennedy Space Center (KSC) and while it was on the launch pad. The prototype combines three existing tools. The first tool, TEAMS (Testability Engineering and Maintenance System), is a model-based tool from Qualtech Systems Inc. for fault isolation and diagnostics. The second tool, SHINE (Spacecraft Health Inference Engine), is a rule-based expert system that was developed at the NASA Jet Propulsion Laboratory. We developed SHINE rules for fault detection and mode identification, and used the outputs of SHINE as inputs to TEAMS. The third tool, IMS (Inductive Monitoring System), is an anomaly detection tool that was developed at NASA Ames Research Center. The three tools were integrated and deployed to KSC, where they were interfaced with live data. This paper describes how the prototype performed during the period of time before the launch, including accuracy and computer resource usage. The paper concludes with some of the lessons that we learned from the experience of developing and deploying the prototype.

  2. A Physiologically Based Model of Orexinergic Stabilization of Sleep and Wake

    PubMed Central

    Fulcher, Ben D.; Phillips, Andrew J. K.; Postnova, Svetlana; Robinson, Peter A.

    2014-01-01

    The orexinergic neurons of the lateral hypothalamus (Orx) are essential for regulating sleep-wake dynamics, and their loss causes narcolepsy, a disorder characterized by severe instability of sleep and wake states. However, the mechanisms through which Orx stabilize sleep and wake are not well understood. In this work, an explanation of the stabilizing effects of Orx is presented using a quantitative model of important physiological connections between Orx and the sleep-wake switch. In addition to Orx and the sleep-wake switch, which is composed of mutually inhibitory wake-active monoaminergic neurons in brainstem and hypothalamus (MA) and the sleep-active ventrolateral preoptic neurons of the hypothalamus (VLPO), the model also includes the circadian and homeostatic sleep drives. It is shown that Orx stabilizes prolonged waking episodes via its excitatory input to MA and by relaying a circadian input to MA, thus sustaining MA firing activity during the circadian day. During sleep, both Orx and MA are inhibited by the VLPO, and the subsequent reduction in Orx input to the MA indirectly stabilizes sustained sleep episodes. Simulating a loss of Orx, the model produces dynamics resembling narcolepsy, including frequent transitions between states, reduced waking arousal levels, and a normal daily amount of total sleep. The model predicts a change in sleep timing with differences in orexin levels, with higher orexin levels delaying the normal sleep episode, suggesting that individual differences in Orx signaling may contribute to chronotype. Dynamics resembling sleep inertia also emerge from the model as a gradual sleep-to-wake transition on a timescale that varies with that of Orx dynamics. The quantitative, physiologically based model developed in this work thus provides a new explanation of how Orx stabilizes prolonged episodes of sleep and wake, and makes a range of experimentally testable predictions, including a role for Orx in chronotype and sleep inertia. PMID:24651580
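
    A toy sketch (not the authors' published equations) of the mutual-inhibition "flip-flop" idea underlying the model: a wake-active and a sleep-active population inhibit each other, a constant orexin-like drive excites the wake-active population, and a crude 24-hour drive pushes the switch back and forth. All parameters are invented to produce switching behaviour only.

      import numpy as np
      from scipy.integrate import solve_ivp

      def sleep_wake(t, state, orexin_drive=0.6):
          ma, vlpo = state   # wake-active (MA) and sleep-active (VLPO) activities
          sleep_pressure = 1.2 * np.sin(2 * np.pi * t / 24.0)   # crude 24 h drive
          # Each population relaxes toward its drive minus inhibition by the other.
          d_ma = -ma + max(0.0, 1.0 + orexin_drive - 2.5 * vlpo - sleep_pressure)
          d_vlpo = -vlpo + max(0.0, 1.0 + sleep_pressure - 2.5 * ma)
          return [d_ma, d_vlpo]

      sol = solve_ivp(sleep_wake, (0.0, 72.0), [1.0, 0.0], max_step=0.1)
      awake_fraction = np.mean(sol.y[0] > sol.y[1])
      print(f"fraction of time spent in the wake-dominant state: {awake_fraction:.2f}")

    In this cartoon, lowering the orexin-like drive weakens the wake-active side of the switch and makes transitions easier, which is the qualitative intuition behind the narcolepsy-like dynamics described in the abstract.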

  3. Modeling evapotranspiration based on plant hydraulic theory can predict spatial variability across an elevation gradient and link to biogeochemical fluxes

    NASA Astrophysics Data System (ADS)

    Mackay, D. S.; Frank, J.; Reed, D.; Whitehouse, F.; Ewers, B. E.; Pendall, E.; Massman, W. J.; Sperry, J. S.

    2012-04-01

    In woody plant systems transpiration is often the dominant component of total evapotranspiration, and so it is key to understanding water and energy cycles. Moreover, transpiration is tightly coupled to carbon and nutrient fluxes, and so it is also vital to understanding spatial variability of biogeochemical fluxes. However, the spatial variability of transpiration and its links to biogeochemical fluxes, within- and among-ecosystems, has been a challenge to constrain because of complex feedbacks between physical and biological controls. Plant hydraulics provides an emerging theory with the rigor needed to develop testable hypotheses and build useful models for scaling these coupled fluxes from individual plants to regional scales. This theory predicts that vegetative controls over water, energy, carbon, and nutrient fluxes can be determined from the limitation of plant water transport through the soil-xylem-stomata pathway. Limits to plant water transport can be predicted from measurable plant structure and function (e.g., vulnerability to cavitation). We present a next-generation coupled transpiration-biogeochemistry model based on this emerging theory. The model, TREEScav, is capable of predicting transpiration, along with carbon and nutrient flows, constrained by plant structure and function. The model incorporates tightly coupled mechanisms of the demand and supply of water through the soil-xylem-stomata system, with the feedbacks to photosynthesis and utilizable carbohydrates. The model is evaluated by testing it against transpiration and carbon flux data along an elevation gradient of woody plants comprising sagebrush steppe, mid-elevation lodgepole pine forests, and subalpine spruce/fir forests in the Rocky Mountains. The model accurately predicts transpiration and carbon fluxes as measured from gas exchange, sap flux, and eddy covariance towers. The results of this work demonstrate that credible spatial predictions of transpiration and related biogeochemical fluxes will be possible at regional scales using relatively easily obtained vegetation structural and functional information.
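
    Schematically, the supply-side constraint of plant hydraulic theory can be written with a vulnerability curve for xylem conductance; the Weibull form below is one widely used choice, and the notation is generic rather than taken from TREEScav:

      \[
        k(\Psi) \;=\; k_{\max}\, \exp\!\left[-\left(\frac{-\Psi}{b}\right)^{c}\right],
        \qquad
        E_{\mathrm{supply}}(\Psi_{\mathrm{canopy}}) \;=\; \int_{\Psi_{\mathrm{canopy}}}^{\Psi_{\mathrm{soil}}} k(\Psi)\,\mathrm{d}\Psi,
      \]

    so transpiration is bounded by the canopy water potential at which the integrand collapses (hydraulic failure), and stomatal regulation is represented as keeping demand below this measurable, structure-dependent supply limit.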

  4. Reduced tyrosine kinase inhibitor dose is predicted to be as effective as standard dose in chronic myeloid leukemia: A simulation study based on phase 3 trial data.

    PubMed

    Fassoni, Artur C; Baldow, Christoph; Roeder, Ingo; Glauche, Ingmar

    2018-06-28

    Continuing tyrosine kinase inhibitor-mediated targeting of the BCR-ABL1 oncoprotein is the standard therapy for chronic myeloid leukemia and allows for sustained disease control in the majority of patients. While therapy cessation appeared to be a safe option for about half of the optimally responding patients, a systematic assessment of long-term tyrosine kinase inhibitor dose de-escalation is missing. We use a mathematical model to analyze and consistently describe biphasic treatment responses of tyrosine kinase inhibitor-treated patients in two independent phase-3 clinical trials. Scale estimates reveal that drug efficiency determines the initial response while the long-term behavior is limited by the rare activation of leukemic stem cells. We use this mathematical framework to investigate the influence of different dosing regimens on the treatment outcome. We provide strong evidence suggesting that tyrosine kinase inhibitor dose de-escalation (at least 50%) does not lead to a reduction of long-term treatment efficiency for most patients who have already achieved sustained remission, and maintains the secondary decline of BCR-ABL1 levels. We demonstrate that continuous BCR-ABL1 monitoring provides patient-specific predictions of an optimal reduced dose that does not decrease the anti-leukemic effect on residual leukemic stem cells. Our results are consistent with the interim results of the DESTINY trial and provide clinically testable predictions. Our results suggest that dose halving should be considered as a long-term treatment option for well-responding chronic myeloid leukemia patients under continuing maintenance therapy with tyrosine kinase inhibitors. We emphasize the clinical potential of this approach to reduce treatment-related side-effects and therapy costs. Copyright © 2018, Ferrata Storti Foundation.
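
    The biphasic response described above is commonly summarized by a biexponential decline of the tumor-load marker; written schematically (generic notation, not taken from the trials),

      \[
        \mathrm{BCR\text{-}ABL1}(t) \;\approx\; A\, e^{-\alpha t} + B\, e^{-\beta t},
        \qquad \alpha \gg \beta > 0,
      \]

    where the fast slope \(\alpha\) reflects drug-dependent eradication of bulk leukemic cells and the slow slope \(\beta\) reflects the rare activation and removal of residual leukemic stem cells; the dose-reduction argument is that, for well-responding patients, halving the dose mainly rescales the fast phase while leaving the long-term slope largely unchanged.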

  5. Munitions integrity and corrosion features observed during the HUMMA deep-sea munitions disposal site investigations

    NASA Astrophysics Data System (ADS)

    Silva, Jeff A. K.; Chock, Taylor

    2016-06-01

    An evaluation of the current condition of sea-disposed military munitions observed during the 2009 Hawaii Undersea Military Munitions Assessment Project investigation is presented. The 69 km2 study area is located south of Pearl Harbor, Oahu, Hawaii, and is positioned within a former deep-sea disposal area designated as Hawaii-05 or HI-05 by the United States Department of Defense. HI-05 is known to contain both conventional and chemical munitions that were sea-disposed between 1920 and 1951. Digital images and video reconnaissance logs collected during six remotely operated vehicle and 16 human-occupied vehicle surveys were used to classify the integrity and state of corrosion of the 1842 discarded military munitions (DMM) objects encountered. Of these, 5% (or 90 individual DMM objects) were found to exhibit a mild-moderate degree of corrosion. The majority (66% or 1222 DMM objects) were observed to be significantly corroded, but visually intact on the seafloor. The remaining 29% of DMM encountered were found to be severely corroded and breached, with their contents exposed. Chemical munitions were not identified during the 2009 investigation. In general, identified munitions known to have been constructed with thicker casings were better preserved. Unusual corrosion features were also observed, including what are termed here as 'corrosion skirts' that resembled the flow and cementation of corrosion products at and away from the base of many munitions, and 'corrosion pedestal' features resembling a combination of cemented corrosion products and seafloor sediments that were observed to be supporting munitions above the surface of the seafloor. The origin of these corrosion features could not be determined due to the lack of physical samples collected. However, a microbial-mediated formation hypothesis is presented, based on visual analysis, which can serve as a testable model for future field programs.

  6. Co-regulation of the atrial natriuretic factor and cardiac myosin light chain-2 genes during alpha-adrenergic stimulation of neonatal rat ventricular cells. Identification of cis sequences within an embryonic and a constitutive contractile protein gene which mediate inducible expression.

    PubMed

    Knowlton, K U; Baracchini, E; Ross, R S; Harris, A N; Henderson, S A; Evans, S M; Glembotski, C C; Chien, K R

    1991-04-25

    To study the mechanisms which mediate the transcriptional activation of cardiac genes during alpha adrenergic stimulation, the present study examined the regulated expression of three cardiac genes, a ventricular embryonic gene (atrial natriuretic factor, ANF), a constitutively expressed contractile protein gene (cardiac MLC-2), and a cardiac sodium channel gene. alpha 1-Adrenergic stimulation activates the expression and release of ANF from neonatal ventricular cells. As assessed by RNase protection analyses, treatment with alpha-adrenergic agonists increases the steady-state levels of ANF mRNA by greater than 15-fold. However, a rat cardiac sodium channel gene mRNA is not induced, indicating that alpha-adrenergic stimulation does not lead to an increase in the expression of all cardiac genes. Studies employing a series of rat ANF luciferase and rat MLC-2 luciferase fusion genes identify 315- and 92-base pair cis regulatory sequences within an embryonic gene (ANF) and a constitutively expressed contractile protein gene (MLC-2), respectively, which mediate alpha-adrenergic-inducible gene expression. Transfection of various ANF luciferase reporters into neonatal rat ventricular cells demonstrated that upstream sequences which mediate tissue-specific expression (-3003 to -638) can be segregated from those responsible for inducibility. The lack of inducibility of a cardiac Na+ channel gene, and the segregation of ANF gene sequences which mediate cardiac specific from those which mediate inducible expression, provides further insight into the relationship between muscle-specific and inducible expression during cardiac myocyte hypertrophy. Based on these results, a testable model is proposed for the induction of embryonic cardiac genes and constitutively expressed contractile protein genes and the noninducibility of a subset of cardiac genes during alpha-adrenergic stimulation of neonatal rat ventricular cells.

  7. Automated Testability Decision Tool

    DTIC Science & Technology

    1991-09-01

    Vol. 16, 1968, pp. 538-558. Bertsekas, D. P., "Constrained Optimization and Lagrange Multiplier Methods," Academic Press, New York. McLeavey, D. W.; McLeavey, J. A., "Parallel Optimization Methods in Standby Reliability," University of Connecticut, School of Business Administration, Bureau of Business

  8. Role of the Epistemic Subject in Piaget's Genetic Epistemology and Its Importance for Science Education.

    ERIC Educational Resources Information Center

    Niaz, Mansoor

    1991-01-01

    Discusses differences between the epistemic and the psychological subject, the relationship between the epistemic subject and the ideal gas law, the development of general cognitive operations, and the empirical testability of Piaget's epistemic subject. (PR)

  9. Small Town in Mass Society Revisited.

    ERIC Educational Resources Information Center

    Young, Frank W.

    1996-01-01

    A 1958 New York community study dramatized the thesis that macro forces (urbanization, industrialization, bureaucratization) have undermined all small communities' autonomy. Such "oppositional case studies" succeed when they render the dominant view immediately obsolete, have plausible origins, are testable, and generate new research.…

  10. Ethnic Enclaves and the Earnings of Immigrants

    PubMed Central

    Xie, Yu; Gough, Margaret

    2011-01-01

    A large literature in sociology concerns the implications of immigrants’ participation in ethnic enclaves for their economic and social well-being. The “enclave thesis” speculates that immigrants benefit from working in ethnic enclaves. Previous research concerning the effects of enclave participation on immigrants’ economic outcomes has come to mixed conclusions as to whether enclave effects are positive or negative. In this article, we seek to extend and improve upon past work by formulating testable hypotheses based on the enclave thesis and testing them with data from the 2003 New Immigrant Survey (NIS), employing both residence-based and workplace-based measures of the ethnic enclave. We compare the economic outcomes of immigrants working in ethnic enclaves with those of immigrants working in the mainstream economy. Our research yields minimal support for the enclave thesis. Our results further indicate that for some immigrant groups, ethnic enclave participation actually has a negative effect on economic outcomes. PMID:21863367

  11. A Motor-Gradient and Clustering Model of the Centripetal Motility of MTOCs in Meiosis I of Mouse Oocytes

    PubMed Central

    2016-01-01

    Asters nucleated by Microtubule (MT) organizing centers (MTOCs) converge on chromosomes during spindle assembly in mouse oocytes undergoing meiosis I. Time-lapse imaging suggests that this centripetal motion is driven by a biased ‘search-and-capture’ mechanism. Here, we develop a model of a random walk in a drift field to test the nature of the bias and the spatio-temporal dynamics of the search process. The model is used to optimize the spatial field of drift in simulations, by comparison to experimental motility statistics. In a second step, this optimized gradient is used to determine the location of immobilized dynein motors and MT polymerization parameters, since these are hypothesized to generate the gradient of forces needed to move MTOCs. We compare these scenarios to self-organized mechanisms by which asters have been hypothesized to find the cell center: MT pushing at the cell boundary and clustering motor complexes. By minimizing the error between simulation outputs and experiments, we find that a model of “pulling” by a gradient of dynein motors alone can drive the centripetal motility. Interestingly, models of passive MT based “pushing” at the cortex, clustering by cross-linking motors and MT-dynamic instability gradients alone, by themselves do not result in the observed motility. The model predicts the sensitivity of the results to motor density and stall force, but not MTs per aster. A hybrid model combining a chromatin-centered immobilized dynein gradient, diffusible minus-end directed clustering motors and pushing at the cell cortex, is required to comprehensively explain the available data. The model makes experimentally testable predictions of a spatial bias and self-organized mechanisms by which MT asters can find the center of a large cell. PMID:27706163
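
    A minimal sketch of a random walk in a centripetal drift field of the kind used here, in two dimensions with invented step sizes and drift strength; comparing such simulated tracks to experimental motility statistics is the optimization step the abstract refers to.

      import numpy as np

      rng = np.random.default_rng(2)

      def simulate_mtoc_track(n_steps=500, dt=1.0, diffusion=0.05,
                              drift_gain=0.02, start=(10.0, 0.0)):
          """Overdamped random walk with a drift pointing toward the origin
          (a stand-in for a chromatin-centered dynein pulling gradient)."""
          pos = np.array(start, dtype=float)
          track = [pos.copy()]
          for _ in range(n_steps):
              drift = -drift_gain * pos                       # centripetal pull
              noise = np.sqrt(2 * diffusion * dt) * rng.normal(size=2)
              pos = pos + drift * dt + noise
              track.append(pos.copy())
          return np.array(track)

      track = simulate_mtoc_track()
      radial_distance = np.linalg.norm(track, axis=1)
      print(f"initial distance from center: {radial_distance[0]:.1f}; "
            f"final distance: {radial_distance[-1]:.1f}")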

  12. Computational analysis of an autophagy/translation switch based on mutual inhibition of MTORC1 and ULK1

    DOE PAGES

    Szymańska, Paulina; Martin, Katie R.; MacKeigan, Jeffrey P.; ...

    2015-03-11

    We constructed a mechanistic, computational model for regulation of (macro)autophagy and protein synthesis (at the level of translation). The model was formulated to study the system-level consequences of interactions among the following proteins: two key components of MTOR complex 1 (MTORC1), namely the protein kinase MTOR (mechanistic target of rapamycin) and the scaffold protein RPTOR; the autophagy-initiating protein kinase ULK1; and the multimeric energy-sensing AMP-activated protein kinase (AMPK). Inputs of the model include intrinsic AMPK kinase activity, which is taken as an adjustable surrogate parameter for cellular energy level or AMP:ATP ratio, and rapamycin dose, which controls MTORC1 activity. Outputs of the model include the phosphorylation level of the translational repressor EIF4EBP1, a substrate of MTORC1, and the phosphorylation level of AMBRA1 (activating molecule in BECN1-regulated autophagy), a substrate of ULK1 critical for autophagosome formation. The model incorporates reciprocal regulation of MTORC1 and ULK1 by AMPK, mutual inhibition of MTORC1 and ULK1, and ULK1-mediated negative feedback regulation of AMPK. Through analysis of the model, we find that these processes may be responsible, depending on conditions, for graded responses to stress inputs, for bistable switching between autophagy and protein synthesis, or for relaxation oscillations comprising alternating periods of autophagy and protein synthesis. A sensitivity analysis indicates that the prediction of oscillatory behavior is robust to changes of the parameter values of the model. The model provides testable predictions about the behavior of the AMPK-MTORC1-ULK1 network, which plays a central role in maintaining cellular energy and nutrient homeostasis.
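
    A toy sketch of the mutual-inhibition motif at the heart of this model (not the published rate equations): a ULK1-like and an MTORC1-like activity repress one another through Hill terms, an AMPK-like stress input promotes the first and suppresses the second, and simulating from two different initial conditions exposes the window in which both states coexist. All parameters are illustrative.

      import numpy as np
      from scipy.integrate import solve_ivp

      def switch(t, state, stress):
          ulk1, mtorc1 = state
          # ULK1-like activity is driven by stress and repressed by MTORC1;
          # MTORC1-like activity is repressed by both ULK1 and stress.
          d_ulk1 = 3.0 * stress / (1.0 + mtorc1 ** 4) - ulk1
          d_mtorc1 = 3.0 / ((1.0 + ulk1 ** 4) * (1.0 + stress ** 2)) - mtorc1
          return [d_ulk1, d_mtorc1]

      def steady_state(stress, start):
          sol = solve_ivp(switch, (0.0, 200.0), start, args=(stress,))
          return sol.y[:, -1]

      # Low stress gives a translation-dominant state, high stress an
      # autophagy-dominant state, and an intermediate value is bistable:
      # the outcome there depends on where the system starts.
      for stress in (0.3, 0.6, 1.0):
          from_translation = steady_state(stress, [0.1, 2.0])
          from_autophagy = steady_state(stress, [2.0, 0.1])
          print(f"stress={stress:.1f}  ULK1 from translation start = "
                f"{from_translation[0]:.2f}, from autophagy start = "
                f"{from_autophagy[0]:.2f}")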

  13. A Motor-Gradient and Clustering Model of the Centripetal Motility of MTOCs in Meiosis I of Mouse Oocytes.

    PubMed

    Khetan, Neha; Athale, Chaitanya A

    2016-10-01

    Asters nucleated by Microtubule (MT) organizing centers (MTOCs) converge on chromosomes during spindle assembly in mouse oocytes undergoing meiosis I. Time-lapse imaging suggests that this centripetal motion is driven by a biased 'search-and-capture' mechanism. Here, we develop a model of a random walk in a drift field to test the nature of the bias and the spatio-temporal dynamics of the search process. The model is used to optimize the spatial field of drift in simulations, by comparison to experimental motility statistics. In a second step, this optimized gradient is used to determine the location of immobilized dynein motors and MT polymerization parameters, since these are hypothesized to generate the gradient of forces needed to move MTOCs. We compare these scenarios to self-organized mechanisms by which asters have been hypothesized to find the cell center: MT pushing at the cell boundary and clustering motor complexes. By minimizing the error between simulation outputs and experiments, we find that a model of "pulling" by a gradient of dynein motors alone can drive the centripetal motility. Interestingly, models of passive MT based "pushing" at the cortex, clustering by cross-linking motors and MT-dynamic instability gradients alone, by themselves do not result in the observed motility. The model predicts the sensitivity of the results to motor density and stall force, but not MTs per aster. A hybrid model combining a chromatin-centered immobilized dynein gradient, diffusible minus-end directed clustering motors and pushing at the cell cortex, is required to comprehensively explain the available data. The model makes experimentally testable predictions of a spatial bias and self-organized mechanisms by which MT asters can find the center of a large cell.

  14. Ecosystem function in complex mountain terrain: Combining models and long-term observations to advance process-based understanding

    NASA Astrophysics Data System (ADS)

    Wieder, William R.; Knowles, John F.; Blanken, Peter D.; Swenson, Sean C.; Suding, Katharine N.

    2017-04-01

    Abiotic factors structure plant community composition and ecosystem function across many different spatial scales. Often, such variation is considered at regional or global scales, but here we ask whether ecosystem-scale simulations can be used to better understand landscape-level variation that might be particularly important in complex terrain, such as high-elevation mountains. We performed ecosystem-scale simulations by using the Community Land Model (CLM) version 4.5 to better understand how the increased length of growing seasons may impact carbon, water, and energy fluxes in an alpine tundra landscape. The model was forced with meteorological data and validated with observations from the Niwot Ridge Long Term Ecological Research Program site. Our results demonstrate that CLM is capable of reproducing the observed carbon, water, and energy fluxes for discrete vegetation patches across this heterogeneous ecosystem. We subsequently accelerated snowmelt and increased spring and summer air temperatures in order to simulate potential effects of climate change in this region. We found that vegetation communities that were characterized by different snow accumulation dynamics showed divergent biogeochemical responses to a longer growing season. Contrary to expectations, wet meadow ecosystems showed the strongest decreases in plant productivity under extended summer scenarios because of disruptions in hydrologic connectivity. These findings illustrate how Earth system models such as CLM can be used to generate testable hypotheses about the shifting nature of energy, water, and nutrient limitations across space and through time in heterogeneous landscapes; these hypotheses may ultimately guide further experimental work and model development.

  15. The possible consequences for cognitive functions of external electric fields at power line frequency on hippocampal CA1 pyramidal neurons.

    PubMed

    Migliore, Rosanna; De Simone, Giada; Leinekugel, Xavier; Migliore, Michele

    2017-04-01

    The possible effects on cognitive processes of external electric fields, such as those generated by power line pillars and household appliances, are of increasing public concern. They are difficult to study experimentally, and the relatively scarce and contradictory evidence makes it difficult to clearly assess these effects. In this study, we investigate how, why and to what extent external perturbations of the intrinsic neuronal activity, such as those that can be caused by generation, transmission and use of electrical energy, can affect neuronal activity during cognitive processes. For this purpose, we used a morphologically and biophysically realistic three-dimensional model of CA1 pyramidal neurons. The simulation findings suggest that an electric field oscillating at power line frequency, and of environmentally measured strength, can significantly alter both the average firing rate and temporal spike distribution properties of a hippocampal CA1 pyramidal neuron. This effect strongly depends on the specific and instantaneous relative spatial location of the neuron with respect to the field, and on the synaptic input properties. The model makes experimentally testable predictions on the possible functional consequences for normal hippocampal functions such as object recognition and spatial navigation. The results suggest that, although EF effects on cognitive processes may be unlikely to occur in everyday life, their functional consequences deserve some consideration, especially when they constitute a systematic presence in living environments. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Cognitive Scientists Prefer Theories and Testable Principles with Teeth

    ERIC Educational Resources Information Center

    Graesser, Arthur C.

    2009-01-01

    Alexander, Schallert, and Reynolds (2009/this issue) proposed a definition and landscape of learning that included 9 principles and 4 dimensions ("what," "who," "where," "when"). This commentary reflects on the utility of this definition and 4-dimensional landscape from the standpoint of educational…

  17. Experimental Test of Compatibility-Loophole-Free Contextuality with Spatially Separated Entangled Qutrits.

    PubMed

    Hu, Xiao-Min; Chen, Jiang-Shan; Liu, Bi-Heng; Guo, Yu; Huang, Yun-Feng; Zhou, Zong-Quan; Han, Yong-Jian; Li, Chuan-Feng; Guo, Guang-Can

    2016-10-21

    The physical impact and the testability of the Kochen-Specker (KS) theorem is debated because of the fact that perfect compatibility in a single quantum system cannot be achieved in practical experiments with finite precision. Here, we follow the proposal of A. Cabello and M. T. Cunha [Phys. Rev. Lett. 106, 190401 (2011)], and present a compatibility-loophole-free experimental violation of an inequality of noncontextual theories by two spatially separated entangled qutrits. A maximally entangled qutrit-qutrit state with a fidelity as high as 0.975±0.001 is prepared and distributed to separated spaces, and these two photons are then measured locally, providing the compatibility requirement. The results show that the inequality for noncontextual theory is violated by 31 standard deviations. Our experiments pave the way to close the debate about the testability of the KS theorem. In addition, the method to generate high-fidelity and high-dimension entangled states will provide significant advantages in high-dimension quantum encoding and quantum communication.

  18. What is a delusion? Epistemological dimensions.

    PubMed

    Leeser, J; O'Donohue, W

    1999-11-01

    Although the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association, 1994) clearly indicates delusions have an epistemic dimension, it fails to accurately identify the epistemic properties of delusions. The authors explicate the regulative causes of belief revision for rational agents and argue that delusions are unresponsive to these. They argue that delusions are (a) protected beliefs made unfalsifiable either in principle or because the agent refuses to admit anything as a potential falsifier; (b) the protected belief is not typically considered a "properly basic" belief; (c) the belief is not of the variety of protected scientific beliefs; (d) in response to an apparent falsification, the subject posits not a simple, testable explanation for the inconsistency but one that is more complicated, less testable, and provides no new corroborations; (e) the subject has a strong emotional attachment to the belief; and (f) the belief is typically supported by (or originates from) trivial occurrences that are interpreted by the subject as highly unusual, significant, having personal reference, or some combination of these.

  19. Insights into Mechanisms of Chronic Neurodegeneration

    PubMed Central

    Diack, Abigail B.; Alibhai, James D.; Barron, Rona; Bradford, Barry; Piccardo, Pedro; Manson, Jean C.

    2016-01-01

    Chronic neurodegenerative diseases such as Alzheimer’s disease (AD), Parkinson’s disease (PD), and prion diseases are characterised by the accumulation of abnormal conformers of a host encoded protein in the central nervous system. The process leading to neurodegeneration is still poorly defined and thus development of early intervention strategies is challenging. Unique amongst these diseases are Transmissible Spongiform Encephalopathies (TSEs) or prion diseases, which have the ability to transmit between individuals. The infectious nature of these diseases has permitted in vivo and in vitro modelling of the time course of the disease process in a highly reproducible manner, thus early events can be defined. Recent evidence has demonstrated that the cell-to-cell spread of protein aggregates by a “prion-like mechanism” is common among the protein misfolding diseases. Thus, the TSE models may provide insights into disease mechanisms and testable hypotheses for disease intervention, applicable to a number of these chronic neurodegenerative diseases. PMID:26771599

  20. pp → A → Zh and the wrong-sign limit of the two-Higgs-doublet model

    NASA Astrophysics Data System (ADS)

    Ferreira, Pedro M.; Liebler, Stefan; Wittbrodt, Jonas

    2018-03-01

    We point out the importance of the decay channels A → Zh and H → VV in the wrong-sign limit of the two-Higgs-doublet model (2HDM) of type II. They can be the dominant decay modes at moderate values of tan β, even if the (pseudo)scalar mass is above the threshold where the decay into a pair of top quarks is kinematically open. Accordingly, large cross sections pp → A → Zh and pp → H → VV are obtained and currently probed by the LHC experiments, yielding conclusive statements about the remaining parameter space of the wrong-sign limit. In addition, mild excesses, as recently found in the ATLAS analysis bb̄ → A → Zh, could be explained. The wrong-sign limit makes other important testable predictions for the light Higgs boson couplings.

  1. COAGULATION CALCULATIONS OF ICY PLANET FORMATION AT 15-150 AU: A CORRELATION BETWEEN THE MAXIMUM RADIUS AND THE SLOPE OF THE SIZE DISTRIBUTION FOR TRANS-NEPTUNIAN OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenyon, Scott J.; Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu

    2012-03-15

    We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Belt objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.
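
    For reference only (this is the conventional definition of the quoted "slope", not a result taken from the cited calculations), the size distribution is usually described by a power law of the form

      \[
      \frac{dN}{dR} \propto R^{-q}
      \quad\Longrightarrow\quad
      N(>R) \propto R^{\,1-q} \qquad (q > 1),
      \]

    so the reported correlation relates the radius of the largest object to the exponent q fitted over sizes of roughly 0.1 km and larger.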

  2. Rice-arsenate interactions in hydroponics: a three-gene model for tolerance.

    PubMed

    Norton, Gareth J; Nigar, Meher; Williams, Paul N; Dasgupta, Tapash; Meharg, Andrew A; Price, Adam H

    2008-01-01

    In this study, the genetic mapping of the tolerance of root growth to 13.3 μM arsenate [As(V)] using the Bala×Azucena population is improved, and candidate genes for further study are identified. A remarkable three-gene model of tolerance is advanced, which appears to involve epistatic interaction between three major genes, two on chromosome 6 and one on chromosome 10. Any combination of two of these genes inherited from the tolerant parent leads to the plant having tolerance. Lists of potential positional candidate genes are presented. These are then refined using whole genome transcriptomics data and bioinformatics. Physiological evidence is also provided that genes related to phosphate transport are unlikely to be behind the genetic loci conferring tolerance. These results offer testable hypotheses for genes related to As(V) tolerance that might offer strategies for mitigating arsenic (As) accumulation in consumed rice.

  3. Rice–arsenate interactions in hydroponics: a three-gene model for tolerance

    PubMed Central

    Norton, Gareth J.; Nigar, Meher; Dasgupta, Tapash; Meharg, Andrew A.; Price, Adam H.

    2008-01-01

    In this study, the genetic mapping of the tolerance of root growth to 13.3 μM arsenate [As(V)] using the Bala×Azucena population is improved, and candidate genes for further study are identified. A remarkable three-gene model of tolerance is advanced, which appears to involve epistatic interaction between three major genes, two on chromosome 6 and one on chromosome 10. Any combination of two of these genes inherited from the tolerant parent leads to the plant having tolerance. Lists of potential positional candidate genes are presented. These are then refined using whole genome transcriptomics data and bioinformatics. Physiological evidence is also provided that genes related to phosphate transport are unlikely to be behind the genetic loci conferring tolerance. These results offer testable hypotheses for genes related to As(V) tolerance that might offer strategies for mitigating arsenic (As) accumulation in consumed rice. PMID:18453529

  4. Sequential pattern formation governed by signaling gradients

    NASA Astrophysics Data System (ADS)

    Jörg, David J.; Oates, Andrew C.; Jülicher, Frank

    2016-10-01

    Rhythmic and sequential segmentation of the embryonic body plan is a vital developmental patterning process in all vertebrate species. However, a theoretical framework capturing the emergence of dynamic patterns of gene expression from the interplay of cell oscillations with tissue elongation and shortening and with signaling gradients is still missing. Here we show that a set of coupled genetic oscillators in an elongating tissue that is regulated by diffusing and advected signaling molecules can account for segmentation as a self-organized patterning process. This system can form a finite number of segments, and the dynamics of segmentation and the total number of segments formed depend strongly on the kinetic parameters describing tissue elongation and the signaling molecules. The model accounts for existing experimental perturbations to signaling gradients, and makes testable predictions about novel perturbations. The variety of different patterns formed in our model can account for the variability of segmentation between different animal species.
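
    As a rough, self-contained illustration of this class of models (coupled genetic oscillators arrested by a moving signaling front), the sketch below uses a simple phase-oscillator description; the sigmoidal frequency profile, parameter values, and variable names are illustrative assumptions and not the published model of the cited paper.

      import numpy as np

      # Minimal "clock and front" sketch: cells carry an oscillator phase theta_i;
      # the intrinsic frequency is high in the posterior (ahead of a moving front)
      # and drops to zero behind it, freezing the phase into a striped pattern.
      N = 300                 # cells along the anterior-posterior axis
      dt = 0.01               # time step
      T = 60.0                # total simulated time
      omega0 = 2.0 * np.pi    # posterior oscillation frequency (rad per time unit)
      K = 0.5                 # nearest-neighbour coupling strength
      v_front = 4.0           # speed of the arrest front (cells per time unit)

      x = np.arange(N)
      rng = np.random.default_rng(0)
      theta = 0.1 * rng.standard_normal(N)   # small initial phase disorder

      for step in range(int(T / dt)):
          front = v_front * step * dt
          # frequency profile: ~omega0 ahead of the front, ~0 behind it
          omega = omega0 / (1.0 + np.exp(-(x - front)))
          # nearest-neighbour phase coupling with zero-flux boundaries
          left = np.roll(theta, 1)
          left[0] = theta[0]
          right = np.roll(theta, -1)
          right[-1] = theta[-1]
          coupling = K * (np.sin(left - theta) + np.sin(right - theta))
          theta += dt * (omega + coupling)

      # Behind the front the phase is frozen; stripes of sin(theta) mark segments.
      boundaries = np.sum(np.diff(np.sign(np.sin(theta))) != 0)
      print(f"approximate number of segment boundaries: {boundaries}")

    Varying v_front or omega0 changes the arrested wavelength (roughly 2*pi*v_front/omega0 cells per segment), which is the kind of dependence on kinetic parameters the abstract describes.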

  5. Mercury's magnetic field - A thermoelectric dynamo?

    NASA Technical Reports Server (NTRS)

    Stevenson, D. J.

    1987-01-01

    Permanent magnetism and conventional dynamo theory are possible but problematic explanations for the magnitude of the Mercurian magnetic field. A new model is proposed in which thermoelectric currents driven by temperature differences at a bumpy core-mantle boundary are responsible for the (unobserved) toroidal field, and the helicity of convective motions in a thin outer core (thickness of about 100 km) induces the observed poloidal field from the toroidal field. The observed field of about 3 × 10⁻⁷ T can be reproduced provided the electrical conductivity of Mercury's semiconducting mantle approaches 1000 ohm⁻¹ m⁻¹. This model may be testable by future missions to Mercury because it predicts a more complicated field geometry than conventional dynamo theories. However, it is argued that polar wander may cause the core-mantle topography to migrate so that some aspects of the rotational symmetry may be reflected in the observed field.

  6. Study designs for identifying risk compensation behavior among users of biomedical HIV prevention technologies: balancing methodological rigor and research ethics.

    PubMed

    Underhill, Kristen

    2013-10-01

    The growing evidence base for biomedical HIV prevention interventions - such as oral pre-exposure prophylaxis, microbicides, male circumcision, treatment as prevention, and eventually prevention vaccines - has given rise to concerns about the ways in which users of these biomedical products may adjust their HIV risk behaviors based on the perception that they are protected from infection. Known as risk compensation, this behavioral adjustment draws on the theory of "risk homeostasis," which has previously been applied to phenomena as diverse as Lyme disease vaccination, insurance mandates, and automobile safety. Little rigorous evidence exists to answer risk compensation concerns in the biomedical HIV prevention literature, in part because the field has not systematically evaluated the study designs available for testing these behaviors. The goals of this Commentary are to explain the origins of risk compensation behavior in risk homeostasis theory, to reframe risk compensation as a testable response to the perception of reduced risk, and to assess the methodological rigor and ethical justification of study designs aiming to isolate risk compensation responses. Although the most rigorous methodological designs for assessing risk compensation behavior may be unavailable due to ethical flaws, several strategies can help investigators identify potential risk compensation behavior during Phase II, Phase III, and Phase IV testing of new technologies. Where concerns arise regarding risk compensation behavior, empirical evidence about the incidence, types, and extent of these behavioral changes can illuminate opportunities to better support the users of new HIV prevention strategies. This Commentary concludes by suggesting a new way to conceptualize risk compensation behavior in the HIV prevention context. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Study designs for identifying risk compensation behavior among users of biomedical HIV prevention technologies: Balancing methodological rigor and research ethics

    PubMed Central

    Underhill, Kristen

    2014-01-01

    The growing evidence base for biomedical HIV prevention interventions – such as oral pre-exposure prophylaxis, microbicides, male circumcision, treatment as prevention, and eventually prevention vaccines – has given rise to concerns about the ways in which users of these biomedical products may adjust their HIV risk behaviors based on the perception that they are protected from infection. Known as risk compensation, this behavioral adjustment draws on the theory of “risk homeostasis,” which has previously been applied to phenomena as diverse as Lyme disease vaccination, insurance mandates, and automobile safety. Little rigorous evidence exists to answer risk compensation concerns in the biomedical HIV prevention literature, in part because the field has not systematically evaluated the study designs available for testing these behaviors. The goals of this Commentary are to explain the origins of risk compensation behavior in risk homeostasis theory, to reframe risk compensation as a testable response to the perception of reduced risk, and to assess the methodological rigor and ethical justification of study designs aiming to isolate risk compensation responses. Although the most rigorous methodological designs for assessing risk compensation behavior may be unavailable due to ethical flaws, several strategies can help investigators identify potential risk compensation behavior during Phase II, Phase III, and Phase IV testing of new technologies. Where concerns arise regarding risk compensation behavior, empirical evidence about the incidence, types, and extent of these behavioral changes can illuminate opportunities to better support the users of new HIV prevention strategies. This Commentary concludes by suggesting a new way to conceptualize risk compensation behavior in the HIV prevention context. PMID:23597916

  8. Creativity, information, and consciousness: The information dynamics of thinking.

    PubMed

    Wiggins, Geraint A

    2018-05-07

    This paper presents a theory of the basic operation of mind, Information Dynamics of Thinking, which is intended for computational implementation and thence empirical testing. It is based on the information theory of Shannon, and treats the mind/brain as an information processing organ that aims to be information-efficient, in that it predicts its world, so as to use information efficiently, and regularly re-represents it, so as to store information efficiently. The theory is presented in context of a background review of various research areas that impinge upon its development. Consequences of the theory and testable hypotheses arising from it are discussed. Copyright © 2018. Published by Elsevier B.V.
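
    The information-theoretic quantities the theory builds on are the standard Shannon ones, reproduced here only for orientation (the paper's own notation may differ):

      \[
      h(x) = -\log_2 P(x), \qquad H(X) = -\sum_{x} P(x)\,\log_2 P(x),
      \]

    i.e. the surprisal of an individual event and its expectation over the distribution; on this reading, "using information efficiently" corresponds to prediction that keeps expected surprisal low, and "storing information efficiently" to compact re-representation.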

  9. Understanding and Targeting the ALT Pathway in Human Breast Cancer

    DTIC Science & Technology

    2014-09-01

    [Figure 5B: Rif1 and Actin blots across a panel of cell lines, including HeLa 1.3, WI38-VA13/2RA, U2OS, BJ hTERT SV40, and a series of T.1 clones.] ... stability of replication forks, an increased sensitivity to replication inhibitors, as well as changes in gene transcription [32, 33, 34, 31, 35, 36, 24] ... down to 46 genes of interest based on the presence of mutations in two distinct panels of ALT cell lines, thereby excluding likely SNPs

  10. Are perytons signatures of ball lightning?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dodin, I. Y.; Fisch, N. J.

    2014-10-20

    The enigmatic downchirped signals, called 'perytons', that are detected by radio telescopes in the GHz frequency range may be produced by an atmospheric phenomenon known as ball lightning (BL). If BLs act as nonstationary radio frequency cavities, their characteristic emission frequencies and evolution timescales are consistent with peryton observations, and so are general patterns in which BLs are known to occur. Based on this evidence, testable predictions are made that can confirm or rule out a causal connection between perytons and BLs. In either case, how perytons are searched for in observational data may warrant reconsideration because existing procedures may be discarding events that have the same nature as known perytons.

  11. Black hole spin inferred from 3:2 epicyclic resonance model of high-frequency quasi-periodic oscillations

    NASA Astrophysics Data System (ADS)

    Šrámková, E.; Török, G.; Kotrlová, A.; Bakala, P.; Abramowicz, M. A.; Stuchlík, Z.; Goluchová, K.; Kluźniak, W.

    2015-06-01

    Estimations of black hole spin in the three Galactic microquasars GRS 1915+105, GRO J1655-40, and XTE J1550-564 have been carried out based on spectral and timing X-ray measurements and various theoretical concepts. Among others, a non-linear resonance between axisymmetric epicyclic oscillation modes of an accretion disc around a Kerr black hole has been considered as a model for the observed high-frequency quasi-periodic oscillations (HF QPOs). Estimates of spin predicted by this model have been derived based on the geodesic approximation of the accreted fluid motion. Here we assume accretion flow described by the model of a pressure-supported torus and carry out related corrections to the mass-spin estimates. We find that for dimensionless black hole spin a ≡ cJ/GM² ≲ 0.9, the resonant eigenfrequencies are very close to those calculated for the geodesic motion. Their values slightly grow with increasing torus thickness. These findings agree well with results of a previous study carried out in the pseudo-Newtonian approximation. The situation becomes different for a ≳ 0.9, in which case the resonant eigenfrequencies rapidly decrease as the torus thickness increases. We conclude that the assumed non-geodesic effects shift the lower limit of the spin, implied for the three microquasars by the epicyclic model and independently measured masses, from a ~ 0.7 to a ~ 0.6. Their consideration furthermore confirms compatibility of the model with the rapid spin of GRS 1915+105 and provides highly testable predictions of the QPO frequencies. Individual sources with a moderate spin (a ≲ 0.9) should exhibit a smaller spread of the measured 3:2 QPO frequencies than sources with a near-extreme spin (a ~ 1). This should be further examined using the large amount of high-resolution data expected to become available with the next generation of X-ray instruments, such as the proposed Large Observatory for X-ray Timing (LOFT).
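
    For orientation, the epicyclic resonance model referred to above is usually formulated with the standard frequencies of perturbed prograde circular geodesics around a Kerr black hole (r in units of GM/c², a the dimensionless spin); these textbook expressions, not results of the cited paper, are

      \[
      \nu_\phi = \frac{c^3}{2\pi G M}\,\frac{1}{r^{3/2} + a}, \qquad
      \nu_\theta^2 = \nu_\phi^2\left(1 - \frac{4a}{r^{3/2}} + \frac{3a^2}{r^2}\right), \qquad
      \nu_r^2 = \nu_\phi^2\left(1 - \frac{6}{r} + \frac{8a}{r^{3/2}} - \frac{3a^2}{r^2}\right).
      \]

    The 3:2 model identifies the twin HF QPOs with ν_θ and ν_r at the radius where ν_θ : ν_r = 3 : 2; the pressure-supported torus considered in the abstract shifts these eigenfrequencies away from their geodesic values, which is what drives the revised spin estimates.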

  12. Probing leptogenesis

    NASA Astrophysics Data System (ADS)

    Chun, E. J.; Cvetič, G.; Dev, P. S. B.; Drewes, M.; Fong, C. S.; Garbrecht, B.; Hambye, T.; Harz, J.; Hernández, P.; Kim, C. S.; Molinaro, E.; Nardi, E.; Racker, J.; Rius, N.; Zamora-Saa, J.

    2018-02-01

    The focus of this paper lies on the possible experimental tests of leptogenesis scenarios. We consider both leptogenesis generated from oscillations, as well as leptogenesis from out-of-equilibrium decays. As the Akhmedov-Rubakov-Smirnov (ARS) mechanism allows for heavy neutrinos in the GeV range, this opens up a plethora of possible experimental tests, e.g. at neutrino oscillation experiments, neutrinoless double beta decay, and direct searches for neutral heavy leptons at future facilities. In contrast, testing leptogenesis from out-of-equilibrium decays is a quite difficult task. We comment on the necessary conditions for having successful leptogenesis at the TeV scale. We further discuss possible realizations and their model-specific testability in extended seesaw models, models with extended gauge sectors, and supersymmetric leptogenesis. Not being able to test high-scale leptogenesis directly, we present a way to falsify such scenarios by focusing on their washout processes. This is discussed specifically for the left-right symmetric model and the observation of a heavy W_R, as well as model independently when measuring ΔL = 2 washout processes at the LHC or neutrinoless double beta decay.

  13. Understanding the Yellowstone magmatic system using 3D geodynamic inverse models

    NASA Astrophysics Data System (ADS)

    Kaus, B. J. P.; Reuber, G. S.; Popov, A.; Baumann, T.

    2017-12-01

    The Yellowstone magmatic system is one of the largest magmatic systems on Earth. Recent seismic tomography suggests that two distinct magma chambers exist: a shallow, presumably felsic chamber and a deeper, much larger, partially molten chamber above the Moho. Why melt stalls at different depth levels above the Yellowstone plume, whereas dikes cross-cut the whole lithosphere in the nearby Snake River Plain, is unclear. Partly this is caused by our incomplete understanding of lithospheric-scale melt ascent processes from the upper mantle to the shallow crust, which requires better constraints on the mechanics and material properties of the lithosphere. Here, we employ lithospheric-scale 2D and 3D geodynamic models adapted to Yellowstone to better understand magmatic processes in active arcs. The models have a number of (uncertain) input parameters, such as the temperature and viscosity structure of the lithosphere and the geometry and melt fraction of the magmatic system, while the melt content and rock densities are obtained by consistent thermodynamic modelling of whole-rock data of the Yellowstone stratigraphy. As all of these parameters affect the dynamics of the lithosphere, we use the simulations to derive testable model predictions such as gravity anomalies, surface deformation rates, and lithospheric stresses, and compare them with observations. We incorporate these forward simulations within an inversion method and perform 3D geodynamic inverse models of the Yellowstone magmatic system. An adjoint-based method is used to derive the key model parameters and the factors that affect the stress field around the Yellowstone plume, the locations of enhanced diking, and melt accumulations. Results suggest that the plume and the magma chambers are connected with each other and that magma chamber overpressure is required to explain the surface displacement in phases of high activity above the Yellowstone magmatic system.
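
    For context, an adjoint-based inversion of the kind mentioned above computes parameter sensitivities at the cost of a single extra linear solve per misfit function; in a generic discretized form (the notation here is illustrative, not taken from the cited work), with forward residual r(u, p) = 0 and misfit J(u, p),

      \[
      \left(\frac{\partial r}{\partial u}\right)^{\!\top}\lambda = -\left(\frac{\partial J}{\partial u}\right)^{\!\top},
      \qquad
      \frac{dJ}{dp} = \frac{\partial J}{\partial p} + \lambda^{\top}\frac{\partial r}{\partial p},
      \]

    which makes gradient-based recovery of many material parameters (viscosities, densities, melt fractions) feasible even for expensive 3D forward models.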

  14. CP violations in the Universe

    NASA Astrophysics Data System (ADS)

    Auriemma, Giulio

    2003-12-01

    The origin of the asymmetry between matter and antimatter that is evident in our part of the Universe is one of the open questions in cosmology, because the CPT symmetry between matter and antimatter seems to be exactly conserved at the microscopic level. We repeat here the classical proofs which exclude the viability of a Universe that is baryon symmetric on average, or of the observed asymmetry being merely an initial condition. The current understanding is that the asymmetry should have been dynamically generated before nucleosynthesis by B, C, and CP violating processes acting out of thermodynamical equilibrium, as suggested by Sakharov in the 1970s. Physical realizations of these conditions would, in principle, also be possible in the framework of the Standard Model of elementary particles, but the present limits on the mass of the Higgs particle exclude this possibility. Finally, we present the model of baryogenesis through leptogenesis, which is allowed by a minimal extension of the Standard Model and has the appeal of being testable in future long-baseline neutrino oscillation experiments.

  15. Dynamic allostery of protein alpha helical coiled-coils

    PubMed Central

    Hawkins, Rhoda J; McLeish, Tom C.B

    2005-01-01

    Alpha helical coiled-coils appear in many important allosteric proteins such as the dynein molecular motor and bacterial chemotaxis transmembrane receptors. As a mechanism for transmitting the information of ligand binding to a distant site across an allosteric protein, an alternative to conformational change in the mean static structure is an induced change in the pattern of the internal dynamics of the protein. We explore how ligand binding may change the intramolecular vibrational free energy of a coiled-coil, using parameterized coarse-grained models, treating the case of dynein in detail. The models predict that coupling of slide, bend and twist modes of the coiled-coil transmits an allosteric free energy of ∼2 k_BT, consistent with experimental results. A further prediction is a quantitative increase in the effective stiffness of the coiled-coil without any change in inherent flexibility of the individual helices. The model provides a possible and experimentally testable mechanism for transmission of information through the alpha helical coiled-coil of dynein. PMID:16849225
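
    For context, the "dynamic" route to allostery invoked above can be summarized with standard statistical mechanics (this is the generic classical-limit expression, not a derivation from the cited paper): if ligand binding shifts the harmonic normal-mode frequencies ω_α of the coiled-coil to ω'_α, the vibrational free energy changes by

      \[
      \Delta F_{\mathrm{vib}} = k_B T \sum_{\alpha} \ln\frac{\omega_\alpha'}{\omega_\alpha},
      \]

    so modest stiffening of a few slide, bend, and twist modes is enough to transmit a free energy of order 2 k_BT with no change in the mean structure.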

  16. Yukawa unification in an SO(10) SUSY GUT: SUSY on the edge

    NASA Astrophysics Data System (ADS)

    Poh, Zijie; Raby, Stuart

    2015-07-01

    In this paper we analyze Yukawa unification in a three-family SO(10) SUSY GUT. We perform a global χ² analysis and show that supersymmetry (SUSY) effects do not decouple even though the universal scalar mass parameter at the grand unified theory (GUT) scale, m_16, is found to lie between 15 and 30 TeV, with the best fit given for m_16 ≈ 25 TeV. Note, SUSY effects do not decouple since stops and bottoms have mass of order 5 TeV, due to renormalization group running from M_GUT. The model has many testable predictions. Gauginos are the lightest sparticles and the light Higgs boson is very much standard model-like. The model is consistent with flavor and CP observables, with BR(μ → eγ) close to the experimental upper bound. With such a large value of m_16 we clearly cannot be considered "natural" SUSY nor are we "split" SUSY. We are thus in the region in between, or "SUSY on the edge."
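
    For reference, a global χ² analysis of this kind minimizes the usual figure of merit over the model parameters p (the specific set of observables and uncertainties used in the fit is not reproduced here):

      \[
      \chi^2(p) = \sum_i \frac{\bigl(O_i^{\mathrm{th}}(p) - O_i^{\mathrm{exp}}\bigr)^2}{\sigma_i^2},
      \]

    summed over the fitted observables, which for Yukawa unification typically include the third-family fermion masses together with the flavor and CP observables mentioned above.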

  17. Testing sterile neutrino extensions of the Standard Model at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Fischer, Oliver

    2015-05-01

    Extending the Standard Model (SM) with sterile ("right-handed") neutrinos is one of the best motivated ways to account for the observed neutrino masses. We discuss the expected sensitivity of future lepton collider experiments for probing such extensions. An interesting testable scenario is given by "symmetry protected seesaw models", which theoretically allow for sterile neutrino masses around the electroweak scale with up to order one mixings with the light (SM) neutrinos. In addition to indirect tests, e.g. via electroweak precision observables, sterile neutrinos with masses around the electroweak scale can also be probed by direct searches, e.g. via sterile neutrino decays at the Z pole, deviations from the SM cross section for four lepton final states at and beyond the WW threshold and via Higgs boson decays. We study the present bounds on sterile neutrino properties from LEP and LHC as well as the expected sensitivities of possible future lepton colliders such as ILC, CEPC and FCC-ee (TLEP).

  18. A Revamped Science Expo

    ERIC Educational Resources Information Center

    Barth, Lorna

    2007-01-01

    By changing the venue from festival to a required academic exposition, the traditional science fair was transformed into a "Science Expo" wherein students were guided away from cookbook experiments toward developing a question about their environment into a testable and measurable experiment. The revamped "Science Expo" became a night for students…

  19. Leveraging Rigorous Local Evaluations to Understand Contradictory Findings

    ERIC Educational Resources Information Center

    Boulay, Beth; Martin, Carlos; Zief, Susan; Granger, Robert

    2013-01-01

    Contradictory findings from "well-implemented" rigorous evaluations invite researchers to identify the differences that might explain the contradictions, helping to generate testable hypotheses for new research. This panel will examine efforts to ensure that the large number of local evaluations being conducted as part of four…

  20. Adolescent Pregnancy and Its Delay.

    ERIC Educational Resources Information Center

    Bell, Lloyd H.

    This paper examines some probable reasons for the black adolescent male's contribution to increased pregnancy in the black community. Using a situation analysis, it presents the following testable suppositions: (1) black males' fear of retribution for impregnating a girl has diminished, leading to increased sexual intercourse and ultimately to…
