Sample records for core component-based modelling

  1. Hirabayashi, Satoshi; Kroll, Charles N.; Nowak, David J. 2011. Component-based development and sensitivity analyses of an air pollutant dry deposition model. Environmental Modelling & Software. 26(6): 804-816.

    Treesearch

    Satoshi Hirabayashi; Chuck Kroll; David Nowak

    2011-01-01

    The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...
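
The component separation this record describes (user interface, data input/output, and core model functions as independent pieces) can be sketched generically. The class names and numbers below are illustrative assumptions, not the actual UFORE-D interfaces; only the flux relation (deposition flux = deposition velocity × concentration) is standard in dry deposition modeling.

```python
# Hypothetical sketch of a component-based model layout: I/O, core model,
# and reporting are separate components wired through narrow interfaces.

class CsvInputComponent:
    """Data input component: parses hourly records (here, from a list)."""
    def read(self, rows):
        return [{"vd_m_s": float(v), "conc_ug_m3": float(c)} for v, c in rows]

class DryDepositionCore:
    """Core model component: flux = deposition velocity x concentration."""
    def hourly_flux(self, record):
        return record["vd_m_s"] * record["conc_ug_m3"]  # ug m^-2 s^-1

class ReportComponent:
    """Output component: formats results without touching model logic."""
    def summarize(self, fluxes):
        return {"n_hours": len(fluxes), "total_flux": sum(fluxes)}

# Components are swappable: a database reader or a GUI could replace
# CsvInputComponent or ReportComponent without changing DryDepositionCore.
records = CsvInputComponent().read([("0.005", "30.0"), ("0.004", "25.0")])
core = DryDepositionCore()
report = ReportComponent().summarize([core.hourly_flux(r) for r in records])
print(report)
```

Swapping any one component while leaving the others untouched is exactly what lets a single core model serve several applications, as the abstract notes.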

  2. Nurses' fidelity to theory-based core components when implementing Family Health Conversations - a qualitative inquiry.

    PubMed

    Östlund, Ulrika; Bäckström, Britt; Lindh, Viveca; Sundin, Karin; Saveman, Britt-Inger

    2015-09-01

    A family systems nursing intervention, Family Health Conversation, has been developed in Sweden by adapting the Calgary Family Assessment and Intervention Models and the Illness Beliefs Model. The intervention has several theoretical assumptions, and one way to translate the theory into practice is to identify core components. This may produce higher levels of fidelity to the intervention. Besides providing information about how to implement an intervention in accordance with how it was developed, it is important to evaluate whether it was actually implemented as intended. Accordingly, we describe the nurses' fidelity to the identified core components of Family Health Conversation. Six nurses, working in alternating pairs, conducted Family Health Conversations with seven families in which a family member younger than 65 had suffered a stroke. The intervention consisted of a series of three 1-hour conversations held at 2-3 week intervals. The nurses followed a conversation structure based on 12 core components identified from theoretical assumptions. The transcripts of the 21 conversations were analysed using manifest qualitative content analysis with a deductive approach. The core components seemed to be useful even though the nurses' fidelity varied among them. Some components were followed relatively well, but others were not. This indicates that the process for achieving fidelity to the intervention can be improved, and that it is necessary for nurses to continually learn theory and to practise family systems nursing. We suggest this can be accomplished through reflections, role play and training on the core components. Furthermore, as in this study, joint reflections on how the core components have been implemented can lead to deeper understanding and knowledge of how Family Health Conversation can be delivered as intended. © 2014 Nordic College of Caring Science.

  3. Observation Data Model Core Components, its Implementation in the Table Access Protocol Version 1.1

    NASA Astrophysics Data System (ADS)

    Louys, Mireille; Tody, Doug; Dowler, Patrick; Durand, Daniel; Michel, Laurent; Bonnarel, François; Micol, Alberto; IVOA DataModel Working Group

    2017-05-01

    This document defines the core components of the Observation data model that are necessary to perform data discovery when querying data centers for astronomical observations of interest. It exposes use-cases to be carried out, explains the model and provides guidelines for its implementation as a data access service based on the Table Access Protocol (TAP). It aims at providing a simple model that is easy to understand and implement for data providers who wish to publish their data into the Virtual Observatory. This interface integrates data modeling and data access aspects in a single service and is named ObsTAP. It will be referenced as such in the IVOA registries. In this document, the Observation Data Model Core Components (ObsCoreDM) defines the core components of queryable metadata required for global discovery of observational data. It is meant to allow a single query to be posed to TAP services at multiple sites to perform global data discovery without having to understand the details of the services present at each site. It defines a minimal set of basic metadata and thus allows for a reasonable cost of implementation by data providers. The combination of the ObsCoreDM with TAP is referred to as an ObsTAP service. As with most of the VO Data Models, ObsCoreDM makes use of STC, Utypes, Units and UCDs. The ObsCoreDM can be serialized as a VOTable. ObsCoreDM can make reference to more complete data models such as Characterisation DM, Spectrum DM or Simple Spectral Line Data Model (SSLDM). ObsCore shares a large set of common concepts with the DataSet Metadata Data Model (Cresitello-Dittmar et al. 2016), which binds together most of the data model concepts from the above models in a comprehensive and more general framework. The present specification, by contrast, provides guidelines for implementing these concepts using the TAP protocol and answering ADQL queries. It is dedicated to global discovery.
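
As a concrete illustration of the single-query discovery that ObsTAP enables, the helper below assembles an ADQL cone search over the standard ivoa.obscore table and its standard columns; a TAP client (for example pyvo) would submit the resulting string to each service's TAP endpoint. The coordinates and the helper name are invented for this sketch.

```python
# Build an ObsCore discovery query in ADQL. The table name (ivoa.obscore)
# and column names (dataproduct_type, s_ra, s_dec, obs_id, obs_collection,
# access_url) are defined by the ObsCore standard; the function itself is
# just an illustrative string builder.

def obscore_discovery_query(ra_deg, dec_deg, radius_deg, product_type):
    """ADQL cone search over the standard ObsCore columns."""
    return (
        "SELECT obs_id, obs_collection, access_url "
        "FROM ivoa.obscore "
        f"WHERE dataproduct_type = '{product_type}' "
        f"AND CONTAINS(POINT('ICRS', s_ra, s_dec), "
        f"CIRCLE('ICRS', {ra_deg}, {dec_deg}, {radius_deg})) = 1"
    )

# The same string can be posed unchanged to any ObsTAP service,
# which is the point of standardizing the core metadata.
adql = obscore_discovery_query(83.63, 22.01, 0.5, "image")
print(adql)
```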

  4. Nonlinear seismic analysis of a reactor structure impact between core components

    NASA Technical Reports Server (NTRS)

    Hill, R. G.

    1975-01-01

    The seismic analysis of the FFTF-PIOTA (Fast Flux Test Facility-Postirradiation Open Test Assembly), subjected to a horizontal DBE (Design Basis Earthquake), is presented. The PIOTA is the first in a set of open test assemblies to be designed for the FFTF. Employing the direct method of transient analysis, the governing differential equations describing the motion of the system are set up directly and are implicitly integrated numerically in time. A simple lumped-mass beam model of the FFTF which includes small clearances between core components is used as a "driver" for a fine mesh model of the PIOTA. The nonlinear forces due to the impact of the core components and their effect on the PIOTA are computed.
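
The clearance-and-impact nonlinearity this record describes can be sketched with a single degree of freedom: a mass coasts across a small gap and strikes a stiff stop modeled as a one-sided contact spring. For brevity a semi-implicit (symplectic Euler) scheme stands in for the paper's implicit integrator, and all parameters are invented for illustration, not FFTF/PIOTA values.

```python
# Single-DOF sketch of impact across a clearance: contact force switches
# on only once the gap closes, which is the source of the nonlinearity.

m = 1.0            # mass, kg
gap = 0.01         # clearance to the stop, m
k_contact = 1.0e6  # contact stiffness once the gap closes, N/m
dt = 1.0e-5        # time step, s

x, v = 0.0, 1.0    # initial position and velocity toward the stop
max_x = x
for _ in range(2000):  # 0.02 s of simulated time
    f = -k_contact * (x - gap) if x > gap else 0.0  # one-sided spring
    v += dt * f / m    # semi-implicit Euler: update velocity first...
    x += dt * v        # ...then position with the new velocity
    max_x = max(max_x, x)

# Peak penetration past the gap is roughly v0 * sqrt(m / k_contact) = 1 mm,
# and the mass rebounds with nearly its incoming speed.
print(f"max displacement = {max_x:.4f} m, final velocity = {v:.3f} m/s")
```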

  5. Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fok, Alex

    2013-10-30

    The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.
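
The Monte Carlo step mentioned in this record can be sketched simply: sample material strength from an assumed distribution and estimate the probability that it falls below the applied stress. The distribution and its parameters are illustrative placeholders, not graphite data.

```python
# Hedged sketch of Monte Carlo failure-probability estimation under
# material-property variability (an assumed normal strength distribution
# stands in for real, e.g. Weibull-type, graphite statistics).
import random

random.seed(1)  # fixed seed for a reproducible estimate

def failure_probability(applied_stress_mpa, n_samples=100_000):
    """Estimate P(strength < applied stress) with strength ~ Normal(mu, sd)."""
    mu, sd = 25.0, 3.0  # assumed mean and spread of tensile strength, MPa
    failures = sum(
        1 for _ in range(n_samples)
        if random.gauss(mu, sd) < applied_stress_mpa
    )
    return failures / n_samples

p_fail = failure_probability(20.0)
print(f"estimated failure probability: {p_fail:.4f}")
```

For these assumed parameters the exact value is Φ((20 − 25)/3) ≈ 0.048, so the sampled estimate should land nearby.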

  6. Solvent-free nanofluid with three structure models based on the composition of a MWCNT/SiO2 core and its adsorption capacity of CO2

    NASA Astrophysics Data System (ADS)

    Yang, R. L.; Zheng, Y. P.; Wang, T. Y.; Li, P. P.; Wang, Y. D.; Yao, D. D.; Chen, L. X.

    2018-01-01

    A series of core/shell nanoparticle organic/inorganic hybrid materials (NOHMs) with different weight ratios of the two core components, multi-walled carbon nanotubes (MWCNTs) and silicon dioxide (SiO2), were synthesized. The NOHMs display a liquid-like state in the absence of solvent at room temperature. The five NOHMs were categorized into three structure states based on the weight ratio of the two components in the core, named the power strip model, the critical model and the collapse model. The capture capacities of these NOHMs for CO2 were investigated at 298 K and CO2 pressures ranging from 0 to 5 MPa. Compared with NOHMs having a neat MWCNT core, NOHMs with the power strip model showed better adsorption capacity toward CO2 owing to their lower viscosity and larger number of reactive groups that can react with CO2. In contrast, the capture capacities of NOHMs with the critical model were worse than that of the neat MWCNT-based NOHM. This result is attributed to the aggregation of SiO2 in these samples, which may cause the consumption and hindrance of reactive groups. The capture capacity of NOHMs with the collapse model was the worst of all the NOHMs, owing to their lowest content of reactive groups and the hollow structure of the MWCNTs. In addition, the MWCNTs and SiO2 in these samples showed no aggregation and did not interfere with each other.

  7. Two-component Gaussian core model: Strong-coupling limit, Bjerrum pairs, and gas-liquid phase transition.

    PubMed

    Frydel, Derek; Levin, Yan

    2018-01-14

    In the present work, we investigate a gas-liquid transition in a two-component Gaussian core model, where particles of the same species repel and those of different species attract. Unlike a similar transition in a one-component system with particles having attractive interactions at long separations and repulsive interactions at short separations, a transition in the two-component system is not driven solely by interactions but by a specific feature of the interactions, the correlations. This leads to an extremely low critical temperature, as correlations are dominant in the strong-coupling limit. By carrying out various approximations based on standard liquid-state methods, we show that a gas-liquid transition of the two-component system poses a challenging theoretical problem.
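
The interactions this record describes can be written down directly: bounded Gaussian pair potentials, repulsive for like pairs and attractive for unlike pairs. The energy and length scales below are set to 1 purely for illustration.

```python
# Pair potential of the two-component Gaussian core model:
# u_ij(r) = +/- eps * exp(-(r/sigma)^2), with "+" for like species
# (repulsion) and "-" for unlike species (attraction).
import math

def u(r, same_species, eps=1.0, sigma=1.0):
    """Gaussian core pair potential for like or unlike species."""
    sign = 1.0 if same_species else -1.0
    return sign * eps * math.exp(-((r / sigma) ** 2))

# The potential stays finite even at full overlap (r = 0), the defining
# feature of the Gaussian core model: unlike pairs can therefore collapse
# into tightly bound pairs at strong coupling (low temperature).
print(u(0.0, True), u(0.0, False), u(3.0, False))
```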

  8. Two-component Gaussian core model: Strong-coupling limit, Bjerrum pairs, and gas-liquid phase transition

    NASA Astrophysics Data System (ADS)

    Frydel, Derek; Levin, Yan

    2018-01-01

    In the present work, we investigate a gas-liquid transition in a two-component Gaussian core model, where particles of the same species repel and those of different species attract. Unlike a similar transition in a one-component system with particles having attractive interactions at long separations and repulsive interactions at short separations, a transition in the two-component system is not driven solely by interactions but by a specific feature of the interactions, the correlations. This leads to an extremely low critical temperature, as correlations are dominant in the strong-coupling limit. By carrying out various approximations based on standard liquid-state methods, we show that a gas-liquid transition of the two-component system poses a challenging theoretical problem.

  9. The Invasive Species Forecasting System

    NASA Technical Reports Server (NTRS)

    Schnase, John; Most, Neal; Gill, Roger; Ma, Peter

    2011-01-01

    The Invasive Species Forecasting System (ISFS) provides computational support for the generic work processes found in many regional-scale ecosystem modeling applications. Decision support tools built using ISFS allow a user to load point occurrence field sample data for a plant species of interest and quickly generate habitat suitability maps for geographic regions of management concern, such as a national park, monument, forest, or refuge. This type of decision product helps resource managers plan invasive species protection, monitoring, and control strategies for the lands they manage. Until now, scientists and resource managers have lacked the data-assembly and computing capabilities to produce these maps quickly and cost-efficiently. ISFS focuses on regional-scale habitat suitability modeling for invasive terrestrial plants. ISFS's component architecture emphasizes simplicity and adaptability. Its core services can be easily adapted to produce model-based decision support tools tailored to particular parks, monuments, forests, refuges, and related management units. ISFS can be used to build standalone run-time tools that require no connection to the Internet, as well as fully Internet-based decision support applications. ISFS provides the core data structures, operating system interfaces, network interfaces, and inter-component constraints comprising the canonical workflow for habitat suitability modeling. The predictors, analysis methods, and geographic extents involved in any particular model run are elements of the user space and arbitrarily configurable by the user. ISFS provides small, lightweight, readily hardened core components of general utility. These components can be adapted to unanticipated uses, are tailorable, and require at most a loosely coupled, nonproprietary connection to the Web. Users can invoke capabilities from a command line; programmers can integrate ISFS's core components into more complex systems and services.
Taken together, these features enable a degree of decentralization and distributed ownership that have helped other types of scientific information services succeed in recent years.
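
The canonical workflow this record describes (occurrence points plus predictor layers in, a suitability surface out) can be sketched with a simple climate-envelope rule, one of the analysis methods such a system might make user-configurable. The grids, values, and variable names below are invented for illustration.

```python
# Toy habitat-suitability run: build a per-predictor envelope from the
# values observed at occurrence cells, then mark each grid cell suitable
# (1) only if every predictor falls inside its envelope.

predictors = {                      # 2 x 2 grids of predictor values
    "temp_c": [[10.0, 12.0], [20.0, 30.0]],
    "precip_mm": [[500.0, 600.0], [100.0, 50.0]],
}
occurrences = [(0, 0), (0, 1)]      # (row, col) cells with known presence

# Envelope: per-predictor (min, max) over the occurrence cells
envelope = {
    name: (min(grid[r][c] for r, c in occurrences),
           max(grid[r][c] for r, c in occurrences))
    for name, grid in predictors.items()
}

suitability = [
    [int(all(lo <= predictors[name][r][c] <= hi
             for name, (lo, hi) in envelope.items()))
     for c in range(2)]
    for r in range(2)
]
print(suitability)
```

Swapping in a different analysis method or predictor set without touching the surrounding workflow is the kind of user-space configurability the abstract attributes to ISFS.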

  10. Overcoming roadblocks: current and emerging reimbursement strategies for integrated mental health services in primary care.

    PubMed

    O'Donnell, Allison N; Williams, Mark; Kilbourne, Amy M

    2013-12-01

    The Chronic Care Model (CCM) has been shown to improve medical and psychiatric outcomes for persons with mental disorders in primary care settings, and has been proposed as a model to integrate mental health care in the patient-centered medical home under healthcare reform. However, the CCM has not been widely implemented in primary care settings, primarily because of a lack of a comprehensive reimbursement strategy to compensate providers for day-to-day provision of its core components, including care management and provider decision support. Drawing upon the existing literature and regulatory guidelines, we provide a critical analysis of challenges and opportunities in reimbursing CCM components under the current fee-for-service system, and describe an emerging financial model involving bundled payments to support core CCM components to integrate mental health treatment into primary care settings. Ultimately, for the CCM to be used and sustained over time to integrate physical and mental health care, effective reimbursement models will need to be negotiated across payers and providers. Such payments should provide sufficient support for primary care providers to implement practice redesigns around core CCM components, including care management, measurement-based care, and mental health specialist consultation.

  11. The caCORE Software Development Kit: streamlining construction of interoperable biomedical information services.

    PubMed

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-06

    Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. 
caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development.

  12. VERAIn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan

    2015-02-16

    CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.
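
The parser role this record describes (one common input deck converted to XML consumed by several codes) can be sketched in a few lines. The deck format and XML tags below are invented for the example, not the actual VERA input schema.

```python
# Toy common-input parser: turn a simple key-value deck into XML,
# analogous in role (not in format) to what VERAIn does for VERA input.
import xml.etree.ElementTree as ET

deck = """\
title pwr_cycle1
power 3411.0
ncycles 2""".splitlines()

root = ET.Element("ParameterList", name="CASEID")
for line in deck:
    key, value = line.split(maxsplit=1)
    ET.SubElement(root, "Parameter", name=key, value=value)

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Keeping one parser in front of many physics codes means each code reads the same validated XML rather than re-implementing deck parsing.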

  13. An ontology for component-based models of water resource systems

    NASA Astrophysics Data System (ADS)

    Elag, Mostafa; Goodall, Jonathan L.

    2013-08-01

    Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
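
One use the record claims for the WRC ontology, encouraging proper model coupling, can be sketched without any OWL machinery: represent minimal component metadata and check that an upstream output satisfies a downstream input. The component names, variables, and fields below are invented for this example and are not the actual WRC concepts.

```python
# Illustrative metadata-driven coupling check between two hypothetical
# water-resources model components: an input is satisfied only when an
# upstream output matches it in both variable name and unit.

rainfall_runoff = {
    "name": "RainfallRunoff",
    "inputs": {"precipitation": "mm h-1"},
    "outputs": {"streamflow": "m3 s-1"},
}
routing = {
    "name": "ChannelRouting",
    "inputs": {"streamflow": "m3 s-1"},
    "outputs": {"stage": "m"},
}

def can_couple(upstream, downstream):
    """Return the downstream inputs satisfied by the upstream outputs."""
    return [
        var for var, unit in downstream["inputs"].items()
        if upstream["outputs"].get(var) == unit
    ]

print(can_couple(rainfall_runoff, routing))   # streamflow matches
print(can_couple(routing, rainfall_runoff))   # nothing matches
```

A shared ontology plays the same role at community scale: it makes these name-and-unit comparisons meaningful across modeling frameworks instead of within one script.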

  14. Three-dimensional NDE of VHTR core components via simulation-based testing. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzina, Bojan; Kunerth, Dennis

    2014-09-30

    A next generation, simulation-driven-and-enabled testing platform is developed for the 3D detection and characterization of defects and damage in nuclear graphite and composite structures in Very High Temperature Reactors (VHTRs). The proposed work addresses the critical need for the development of high-fidelity Non-Destructive Examination (NDE) technologies for as-manufactured and replaceable in-service VHTR components. Centered around the novel use of elastic (sonic and ultrasonic) waves, this project deploys a robust, non-iterative inverse solution for the 3D defect reconstruction together with a non-contact, laser-based approach to the measurement of experimental waveforms in VHTR core components. In particular, this research (1) deploys three-dimensional Scanning Laser Doppler Vibrometry (3D SLDV) as a means to accurately and remotely measure 3D displacement waveforms over the accessible surface of a VHTR core component excited by a mechanical vibratory source; (2) implements a powerful new inverse technique, based on the concept of Topological Sensitivity (TS), for non-iterative elastic waveform tomography of internal defects that permits robust 3D detection, reconstruction and characterization of discrete damage (e.g. holes and fractures) in nuclear graphite from limited-aperture NDE measurements; (3) implements a state-of-the-art computational (finite element) model for accurately simulating elastic wave propagation in 3D blocks of nuclear graphite; (4) integrates the SLDV testing methodology with the TS imaging algorithm into a non-contact, high-fidelity NDE platform for the 3D reconstruction and characterization of defects and damage in VHTR core components; and (5) applies the proposed methodology to VHTR core component samples (both two- and three-dimensional) with a priori induced, discrete damage in the form of holes and fractures.
Overall, the newly established SLDV-TS testing platform represents a next-generation NDE tool that surpasses existing techniques for the 3D ultrasonic imaging of material damage from non-contact, limited-aperture waveform measurements. Outlook: the next stage in the development of this technology includes items such as (a) non-contact generation of mechanical vibrations in VHTR components via thermal expansion created by a high-intensity laser; (b) development and incorporation of the Synthetic Aperture Focusing Technique (SAFT) for elevating the accuracy of 3D imaging in highly noisy environments with minimal accessible surface; (c) further analytical and computational developments to facilitate the reconstruction of diffuse damage (e.g. microcracks) in nuclear graphite, since such damage leads to the dispersion of elastic waves; (d) the concept of model updating for accurate tracking of the evolution of material damage via periodic inspections; (e) adoption of the Bayesian framework to obtain information on the certainty of obtained images; and (f) optimization of the computational scheme toward real-time, model-based imaging of damage in VHTR core components.

  15. The caCORE Software Development Kit: Streamlining construction of interoperable biomedical information services

    PubMed Central

    Phillips, Joshua; Chilukuri, Ram; Fragoso, Gilberto; Warzel, Denise; Covitz, Peter A

    2006-01-01

    Background Robust, programmatically accessible biomedical information services that syntactically and semantically interoperate with other resources are challenging to construct. Such systems require the adoption of common information models, data representations and terminology standards as well as documented application programming interfaces (APIs). The National Cancer Institute (NCI) developed the cancer common ontologic representation environment (caCORE) to provide the infrastructure necessary to achieve interoperability across the systems it develops or sponsors. The caCORE Software Development Kit (SDK) was designed to provide developers both within and outside the NCI with the tools needed to construct such interoperable software systems. Results The caCORE SDK requires a Unified Modeling Language (UML) tool to begin the development workflow with the construction of a domain information model in the form of a UML Class Diagram. Models are annotated with concepts and definitions from a description logic terminology source using the Semantic Connector component. The annotated model is registered in the Cancer Data Standards Repository (caDSR) using the UML Loader component. System software is automatically generated using the Codegen component, which produces middleware that runs on an application server. The caCORE SDK was initially tested and validated using a seven-class UML model, and has been used to generate the caCORE production system, which includes models with dozens of classes. The deployed system supports access through object-oriented APIs with consistent syntax for retrieval of any type of data object across all classes in the original UML model. The caCORE SDK is currently being used by several development teams, including by participants in the cancer biomedical informatics grid (caBIG) program, to create compatible data services. 
caBIG compatibility standards are based upon caCORE resources, and thus the caCORE SDK has emerged as a key enabling technology for caBIG. Conclusion The caCORE SDK substantially lowers the barrier to implementing systems that are syntactically and semantically interoperable by providing workflow and automation tools that standardize and expedite modeling, development, and deployment. It has gained acceptance among developers in the caBIG program, and is expected to provide a common mechanism for creating data service nodes on the data grid that is under development. PMID:16398930

  16. Feature-based component model for design of embedded systems

    NASA Astrophysics Data System (ADS)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, combining the flexibility of software with the real-time performance of hardware. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., the Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variations of a prototype whose members are derived by replacing certain virtual components with ones having different features. A case study example is discussed to illustrate the embedded system model.

  17. Selective interface transparency in graphene nanoribbon based molecular junctions.

    PubMed

    Dou, K P; Kaun, C C; Zhang, R Q

    2018-03-08

    A clear understanding of electrode-molecule interfaces is a prerequisite for the rational engineering of future generations of nanodevices that will rely on single-molecule coupling between components. With a model system, we reveal a peculiar dependence on interfaces in all-graphene nanoribbon-based carbon molecular junctions. The effect can be classified into two types depending on the intrinsic feature of the embedded core graphene nanoflake (GNF). For metallic GNFs with |N_A - N_B| = 1, good/poor contact transparency occurs when the core device aligns with the center/edge of the electrode. The situation is reversed when a semiconducting GNF is the device, where N_A = N_B. These results may shed light on the design of real connecting components in graphene-based nanocircuits.

  18. Simulation Based Optimization of Complex Monolithic Composite Structures Using Cellular Core Technology

    NASA Astrophysics Data System (ADS)

    Hickmott, Curtis W.

    Cellular core tooling is a new technology which has the capability to manufacture complex integrated monolithic composite structures. This novel tooling method utilizes thermoplastic cellular cores as inner tooling. The semi-rigid nature of the cellular cores makes them convenient for lay-up, and under autoclave temperature and pressure they soften and expand providing uniform compaction on all surfaces including internal features such as ribs and spar tubes. This process has the capability of developing fully optimized aerospace structures by reducing or eliminating assembly using fasteners or bonded joints. The technology is studied in the context of evaluating its capabilities, advantages, and limitations in developing high quality structures. The complex nature of these parts has led to development of a model using the Finite Element Analysis (FEA) software Abaqus and the plug-in COMPRO Common Component Architecture (CCA) provided by Convergent Manufacturing Technologies. This model utilizes a "virtual autoclave" technique to simulate temperature profiles, resin flow paths, and ultimately deformation from residual stress. A model has been developed simulating the temperature profile during curing of composite parts made with the cellular core technology. While modeling of composites has been performed in the past, this project will look to take this existing knowledge and apply it to this new manufacturing method capable of building more complex parts and develop a model designed specifically for building large, complex components with a high degree of accuracy. The model development has been carried out in conjunction with experimental validation. A double box beam structure was chosen for analysis to determine the effects of the technology on internal ribs and joints. Double box beams were manufactured and sectioned into T-joints for characterization. 
Mechanical testing of the T-joints was performed using the T-joint pull-off test, and the results were compared to those from traditional tooling methods. Components made with the cellular core tooling method showed improved strength at the joints. It is expected that this knowledge will help optimize the processing of complex, integrated structures and benefit applications in aerospace, where lighter, structurally efficient components would be advantageous.

  19. Moving from Pathology to Possibility: Integrating Strengths-Based Interventions in Child Welfare Provision

    ERIC Educational Resources Information Center

    Sabalauskas, Kara L.; Ortolani, Charles L.; McCall, Matthew J.

    2014-01-01

    Child welfare providers are increasingly required to demonstrate that strengths-based, evidence-informed practices are central to their intervention methodology. This case study describes how a large child welfare agency instituted cognitive behavioural therapy (CBT) as the core component of a strength-based practice model with the goal of…

  20. An analysis of the adaptability of a professional development program in public health: results from the ALPS Study.

    PubMed

    Richard, Lucie; Torres, Sara; Tremblay, Marie-Claude; Chiocchio, François; Litvak, Éric; Fortin-Pellerin, Laurence; Beaudet, Nicole

    2015-06-14

    Professional development is a key component of effective public health infrastructures. To be successful, professional development programs in public health and health promotion must adapt to practitioners' complex real-world practice settings while preserving the core components of those programs' models and theoretical bases. An appropriate balance must be struck between implementation fidelity, defined as respecting the core nature of the program that underlies its effects, and adaptability to context to maximize benefit in specific situations. This article presents a professional development pilot program, the Health Promotion Laboratory (HPL), and analyzes how it was adapted to three different settings while preserving its core components. An exploratory analysis was also conducted to identify team and contextual factors that might have been at play in the emergence of implementation profiles at each site. This paper describes the program, its core components and adaptive features, along with three implementation experiences in local public health teams in Quebec, Canada. For each setting, documentary sources were analyzed to trace the implementation of activities, including temporal patterns throughout the project for each program component. Information about teams and their contexts/settings was obtained through documentary analysis and semi-structured interviews with HPL participants, colleagues and managers from each organization. While each team developed a unique pattern of implementing the activities, all the program's core components were implemented. Differences in implementation were observed in the numbers and percentages of activities related to different components of the program, as well as in the patterns of activities over time. It is plausible that organizational characteristics influencing, for example, work schedule flexibility or learning culture might have played a role in the HPL implementation process.
This paper shows how a professional development program model can be adapted to different contexts while preserving its core components. Capturing the heterogeneity of the intervention's exposure, as was done here, will make possible in-depth impact analyses involving, for example, the testing of program-context interactions to identify predictors of program outcomes. Such work is essential to advance knowledge on the action mechanisms of professional development programs.

  1. Role of core excitation in (d,p) transfer reactions

    NASA Astrophysics Data System (ADS)

    Deltuva, A.; Ross, A.; Norvaišas, E.; Nunes, F. M.

    2016-10-01

    Background: Recent work found that core excitations can be important in extracting structure information from (d,p) reactions. Purpose: Our objective is to systematically explore the role of core excitation in (d,p) reactions and to understand the origin of the dynamical effects. Method: Based on the particle-rotor model of n+10Be, we generate a number of models with a range of separation energies (Sn = 0.1–5.0 MeV), while maintaining a significant core excited component. We then apply the latest extension of the momentum-space-based Faddeev method, including dynamical core excitation in the reaction mechanism to all orders, to the 10Be(d,p)11Be-like reactions, and study the excitation effects for beam energies Ed = 15–90 MeV. Results: We study the resulting angular distributions and the differences between the spectroscopic factor that would be extracted from the cross sections, when including dynamical core excitation in the reaction, and that of the original structure model. We also explore how different partial waves affect the final cross section. Conclusions: Our results show a strong beam-energy dependence of the extracted spectroscopic factors that become smaller for intermediate beam energies. This dependence increases for loosely bound systems.

  2. Transient thermohydraulic heat pipe modeling

    NASA Astrophysics Data System (ADS)

    Hall, Michael L.; Doster, Joseph M.

    Many space-based reactor designs employ heat pipes as a means of conveying heat. In these designs, thermal radiation is the principal means of rejecting waste heat from the reactor system, making it desirable to operate at high temperatures. Lithium is generally the working fluid of choice as it undergoes a liquid-vapor transformation at the preferred operating temperature. The nature of remote startup, restart, and reaction to threats necessitates an accurate, detailed transient model of the heat pipe operation. A model of the vapor core region of the heat pipe is outlined; it is part of a larger model of the entire heat pipe thermal response. The vapor core is modeled using the area-averaged Navier-Stokes equations in one dimension, which take into account the effects of mass, energy and momentum transfer. The core model is single phase (gaseous), but contains two components: lithium gas and a noncondensible vapor. The vapor core model consists of the continuity equations for the mixture and noncondensible, as well as mixture equations for internal energy and momentum.
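
    The area-averaged, single-phase, two-component formulation described above can be sketched in generic form. The equations below are an illustrative reconstruction of a standard 1D area-averaged model (mixture density ρ, noncondensible partial density ρₙ, velocity u, pressure p, specific internal energy e, flow area A, wall mass source ṁ′), not necessarily the authors' exact closure:

```latex
% Mixture continuity (wick evaporation/condensation source \dot{m}' per unit length)
\frac{\partial(\rho A)}{\partial t} + \frac{\partial(\rho u A)}{\partial z} = \dot{m}'
% Noncondensible continuity (the noncondensible undergoes no phase change)
\frac{\partial(\rho_n A)}{\partial t} + \frac{\partial(\rho_n u A)}{\partial z} = 0
% Mixture momentum (wall shear \tau_w acting over wetted perimeter P_w)
\frac{\partial(\rho u A)}{\partial t} + \frac{\partial(\rho u^2 A)}{\partial z}
  = -A\,\frac{\partial p}{\partial z} - \tau_w P_w + \dot{m}'\,u_i
% Mixture internal energy (interface enthalpy h_i, wall heat input q' per unit length)
\frac{\partial(\rho e A)}{\partial t} + \frac{\partial(\rho e u A)}{\partial z}
  = -p\,\frac{\partial(u A)}{\partial z} + \dot{m}'\,h_i + q'
```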

  3. Identifying Core Mobile Learning Faculty Competencies Based Integrated Approach: A Delphi Study

    ERIC Educational Resources Information Center

    Elbarbary, Rafik Said

    2015-01-01

    This study is based on the integrated approach as a concept framework to identify, categorize, and rank the key components of mobile learning core competencies for Egyptian faculty members in higher education. The field investigation used a four-round Delphi technique to determine the importance rating of each component of the core competencies…

  4. Core Intervention Components: Identifying and Operationalizing What Makes Programs Work. ASPE Research Brief

    ERIC Educational Resources Information Center

    Blase, Karen; Fixsen, Dean

    2013-01-01

    This brief is part of a series that explores key implementation considerations. It focuses on the importance of identifying, operationalizing, and implementing the "core components" of evidence-based and evidence-informed interventions that likely are critical to producing positive outcomes. The brief offers a definition of "core components",…

  5. Low back pain in 17 countries, a Rasch analysis of the ICF core set for low back pain.

    PubMed

    Røe, Cecilie; Bautz-Holter, Erik; Cieza, Alarcos

    2013-03-01

    Previous studies indicate that a worldwide measurement tool may be developed based on the International Classification of Functioning, Disability and Health (ICF) Core Sets for chronic conditions. The aim of the present study was to explore the possibility of constructing a cross-cultural measurement of functioning for patients with low back pain (LBP) on the basis of the Comprehensive ICF Core Set for LBP and to evaluate the properties of the ICF Core Set. The Comprehensive ICF Core Set for LBP was scored by health professionals for 972 patients with LBP from 17 countries. Qualifier levels of the categories; invariance across age, sex and countries; construct validity; and the ordering of the categories in the components of body function, body structure, and activities and participation were explored by Rasch analysis. The item-trait χ2-statistics showed that the 53 categories in the ICF Core Set for LBP did not fit the Rasch model (P<0.001). The main challenge was the invariance in the responses according to country. Analysis of the four countries with the largest sample sizes indicated that the data from Germany fit the Rasch model, and that the data from Norway, Serbia and Kuwait also fit the model for the components of body functions and activities and participation. The components of body functions and of activities and participation had negative mean locations, -2.19 (SD 1.19) and -2.98 (SD 1.07), respectively. The negative location indicates that the ICF Core Set reflects patients with a lower level of function than the present patient sample. The present results indicate that it may be possible to construct a clinical measure of function on the basis of the Comprehensive ICF Core Set for LBP by calculating country-specific scores before pooling the data.
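
    The Rasch analysis above was run on polytomous ICF qualifier levels with dedicated software; as a simplified sketch of the underlying measurement model only, the dichotomous Rasch form (person and item parameters here are hypothetical):

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of endorsing an item.
    theta: person location (level of functioning), b: item difficulty."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_score(theta, difficulties):
    """Expected raw score for a person across a set of ICF-like items."""
    return sum(rasch_prob(theta, b) for b in difficulties)
```

    Under this model, a person located below the items' mean difficulty (compare the negative mean locations reported above) has expected scores in the lower part of the scale, which is how targeting of a Core Set to a sample is judged.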

  6. Next Generation Community Based Unified Global Modeling System Development and Operational Implementation Strategies at NCEP

    NASA Astrophysics Data System (ADS)

    Tallapragada, V.

    2017-12-01

    NOAA's Next Generation Global Prediction System (NGGPS) has provided the unique opportunity to develop and implement a non-hydrostatic global model based on the Geophysical Fluid Dynamics Laboratory (GFDL) Finite-Volume Cubed-Sphere (FV3) dynamical core at the National Centers for Environmental Prediction (NCEP), making a major advancement in seamless prediction capabilities across all spatial and temporal scales. Model development efforts are centralized with unified model development in the NOAA Environmental Modeling System (NEMS) infrastructure based on the Earth System Modeling Framework (ESMF). A more sophisticated coupling among various earth system components is being enabled within NEMS following National Unified Operational Prediction Capability (NUOPC) standards. The eventual goal of unifying global and regional models will enable operational global models operating at convection-resolving scales. Apart from the advanced non-hydrostatic dynamical core and coupling to various earth system components, advanced physics and data assimilation techniques are essential for improved forecast skill. NGGPS is spearheading ambitious physics and data assimilation strategies, concentrating on creation of a Common Community Physics Package (CCPP) and the Joint Effort for Data assimilation Integration (JEDI). Both initiatives are expected to be community developed, with emphasis on research transitioning to operations (R2O). The unified modeling system is being built to support the needs of both operations and research. Different layers of community partners are also established with specific roles/responsibilities for researchers, core development partners, trusted super-users, and operations. Stakeholders are engaged at all stages to help drive the direction of development, resource allocation, and prioritization.
This talk presents the current and future plans of unified model development at NCEP for weather, sub-seasonal, and seasonal climate prediction applications with special emphasis on implementation of NCEP FV3 Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS) into operations by 2019.

  7. Mean electromotive force generated by asymmetric fluid flow near the surface of earth's outer core

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Archana

    1992-10-01

    The phi component of the mean electromotive force (EMF) generated by asymmetric flow of fluid just beneath the core-mantle boundary (CMB) is obtained using a geomagnetic field model. This analysis is based on the supposition that the axisymmetric part of the fluid flow beneath the CMB is tangentially geostrophic and toroidal. For all the epochs studied, the computed phi component is stronger in the Southern Hemisphere than in the Northern Hemisphere. Assuming a linear relationship between the EMF and the azimuthally averaged magnetic field (AAMF), the only nonzero off-diagonal components of the pseudotensor relating the EMF to the AAMF are estimated as functions of colatitude, and the physical implications of the results are discussed.

  8. Making the CARE Comprehensive Geriatric Assessment as the Core of a Total Mobile Long Term Care Support System in China.

    PubMed

    Cui, Yanyan; Gong, Dongwei; Yang, Bo; Chen, Hua; Tu, Ming-Hsiang; Zhang, Chaonan; Li, Huan; Liang, Naiwen; Jiang, Liping; Chang, Polun

    2018-01-01

    Comprehensive Geriatric Assessments (CGAs) have been recommended for better monitoring the health status of elderly residents and providing quality care. This study reports how nurses perceived the usability of the CGA component of a mobile integrated-care long-term care support system developed in China. We used the Continuity Assessment Record and Evaluation (CARE), developed in the US, as the core CGA component of our Android-based support system, in which apps were designed for all key stakeholders involved in delivering quality long-term care. A convenience sample of 18 subjects from local long-term care facilities in Shanghai, China were invited to assess the CGA component in terms of the Technology Acceptance Model for Mobile, based on a real field trial. All (100%) were satisfied with the mobile CGA component. 88.9% perceived the system as easy to learn and use. 99.4% expressed willingness to use it for their work. We conclude that it is technically feasible to implement a CGA-based mobile integrated care support system in China.

  9. Active Learning through Modeling: Introduction to Software Development in the Business Curriculum

    ERIC Educational Resources Information Center

    Roussev, Boris; Rousseva, Yvonna

    2004-01-01

    Modern software practices call for the active involvement of business people in the software process. Therefore, programming has become an indispensable part of the information systems component of the core curriculum at business schools. In this paper, we present a model-based approach to teaching introduction to programming to general business…

  10. Phenomenological model of nuclear primary air showers

    NASA Technical Reports Server (NTRS)

    Tompkins, D. R., Jr.; Saterlie, S. F.

    1976-01-01

    The development of proton primary air showers is described in terms of a model based on a hadron core plus an electromagnetic cascade. The muon component is neglected. The model uses three parameters: a rate at which hadron core energy is converted into electromagnetic cascade energy and a two-parameter sea-level shower-age function. By assuming an interaction length for the primary nucleus, the model is extended to nuclear primaries. Both models are applied over the energy range from 10 to the 13th power to 10 to the 21st power eV. Both models describe the size and age structure (neglecting muons) from a depth of 342 to 2052 g/sq cm.
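
    The paper's own two-parameter sea-level age function is not reproduced in the abstract; as an illustrative stand-in for the electromagnetic-cascade part of such a model, a standard Greisen-type longitudinal profile can be sketched (the critical energy is a textbook approximation, not the paper's value):

```python
import math

EC_MEV = 87.0  # approximate critical energy of electrons in air (illustrative)

def shower_age(depth, e0_mev):
    """Shower age s at slant depth `depth` (in radiation lengths)
    for a primary of energy e0_mev; s = 1 at shower maximum."""
    beta0 = math.log(e0_mev / EC_MEV)
    return 3.0 * depth / (depth + 2.0 * beta0)

def greisen_size(depth, e0_mev):
    """Greisen-type parameterization of electromagnetic shower size N(t)."""
    beta0 = math.log(e0_mev / EC_MEV)
    s = shower_age(depth, e0_mev)
    return 0.31 / math.sqrt(beta0) * math.exp(depth * (1.0 - 1.5 * math.log(s)))
```

    Shower maximum falls at the depth where s = 1, i.e. at t = ln(E0/Ec) radiation lengths, which is the kind of size/age structure the abstract describes over 10^13 to 10^21 eV.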

  11. Advanced and secure architectural EHR approaches.

    PubMed

    Blobel, Bernd

    2006-01-01

    Electronic Health Records (EHRs) provided as a lifelong patient record are advancing towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and be model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model of Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives contain the enterprise view (business process, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all the views mentioned have to be established for enabling both application and communication security services as an integral part of the system's architecture. Besides decomposition and simplification of systems regarding the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both the structure and the behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles.
In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, as well as the HL7 version 3 activities, are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's Clinical Document Architecture (CDA), and the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining models to their underlying concepts. The future-proof EHR architecture as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks has to follow advanced architectural paradigms.

  12. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability.

    PubMed

    Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A

    2008-02-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).

  13. caCORE version 3: Implementation of a model driven, service-oriented architecture for semantic interoperability

    PubMed Central

    Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.

    2008-01-01

    One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259

  14. Role of core excitation in (d,p) transfer reactions

    DOE PAGES

    Deltuva, A.; Ross, A.; Norvaišas, E.; ...

    2016-10-24

    In our recent work we found that core excitations can be important in extracting structure information from (d,p) reactions. Our objective is to systematically explore the role of core excitation in (d,p) reactions and to understand the origin of the dynamical effects. Based on the particle-rotor model of n+10Be, we generate a number of models with a range of separation energies (Sn = 0.1–5.0 MeV), while maintaining a significant core excited component. We then apply the latest extension of the momentum-space-based Faddeev method, including dynamical core excitation in the reaction mechanism to all orders, to the 10Be(d,p)11Be-like reactions, and study the excitation effects for beam energies Ed = 15–90 MeV. We study the resulting angular distributions and the differences between the spectroscopic factor that would be extracted from the cross sections, when including dynamical core excitation in the reaction, and that of the original structure model. We also explore how different partial waves affect the final cross section. Our results show a strong beam-energy dependence of the extracted spectroscopic factors that become smaller for intermediate beam energies. Finally, this dependence increases for loosely bound systems.

  15. Nonplanar core structure of the screw dislocations in tantalum from the improved Peierls-Nabarro theory

    NASA Astrophysics Data System (ADS)

    Hu, Xiangsheng; Wang, Shaofeng

    2018-02-01

    The extended structure of the 1/2⟨111⟩ screw dislocation in Ta has been studied theoretically using the improved Peierls-Nabarro model combined with first-principles calculations. An instructive way to derive the fundamental equation for dislocations with the nonplanar structure is presented. The full γ-surface of the {110} plane in tantalum is evaluated from first principles. In order to compare the energy of the screw dislocation with different structures, a structure parameter is introduced to describe the core configuration. Each kind of screw dislocation is described by an overall-shape component and a core component. Far from the dislocation centre, the asymptotic behaviour of the dislocation is uniquely controlled by the overall-shape component. Near the dislocation centre, the structural detail is described by the core component. The dislocation energy is explicitly plotted as a function of the core parameter for the nonplanar dislocation as well as for the planar dislocation. It is found that in the physical regime of the core parameter, the sixfold nonplanar structure always has the lowest energy. Our result clearly confirms that the sixfold nonplanar structure is the most stable. Furthermore, the pressure effect on the dislocation structure is explored up to 100 GPa. The stability of the sixfold nonplanar structure is not changed by the applied pressure. The equilibrium structure and the related stress field are calculated, and a possible mechanism of dislocation movement is discussed briefly based on the structure deformation caused by the external stress.

  16. Sensitivity of the Geomagnetic Octupole to a Stably Stratified Layer in the Earth's Core

    NASA Astrophysics Data System (ADS)

    Yan, C.; Stanley, S.

    2017-12-01

    The presence of a stably stratified layer at the top of the core has long been proposed for Earth, based on evidence from seismology and geomagnetic secular variation. Geodynamo modeling offers a unique window to inspect the properties and dynamics in Earth's core. For example, numerical simulations have shown that magnetic field morphology is sensitive to the presence of stably stratified layers in a planet's core. Here we use the mMoSST numerical dynamo model to investigate the effects of a thin stably stratified layer at the top of Earth's fluid outer core on the resulting large-scale geomagnetic field morphology. We find that the existence of a stable layer has significant influence on the octupolar component of the magnetic field in our models, whereas the quadrupole does not show an obvious trend. This suggests that observations of the geomagnetic field can be used to provide information about the properties of this plausible stable layer, such as how thick and how stable it could be. Furthermore, we have examined whether the dominant thermal signature from mantle tomography at the core-mantle boundary (CMB) (a degree & order 2 spherical harmonic) can influence our results. We found that this heat flux pattern at the CMB has no significant effect on the quadrupole and octupole magnetic field components. Our studies suggest that if there is a stably stratified layer at the top of the Earth's core, it must be limited in terms of stability and thickness, in order to be compatible with the observed paleomagnetic record.

  17. Control of the shell structural properties and cavity diameter of hollow magnesium fluoride particles.

    PubMed

    Nandiyanto, Asep Bayu Dani; Ogi, Takashi; Okuyama, Kikuo

    2014-03-26

    Control of the shell structural properties [i.e., thickness (8-25 nm) and morphology (dense and raspberry)] and cavity diameter (100-350 nm) of hollow particles was investigated experimentally, and the results were qualitatively explained based on available theory. We found that the selective deposition size and formation of the shell component on the surface of a core template played important roles in controlling the structure of the resulting shell. To achieve this selective deposition size and formation of the shell component, various process parameters (i.e., reaction temperature and the charge, size, and composition of the core template and shell components) were tested. Magnesium fluoride (MgF2) and polystyrene spheres were used as models for the shell and core components, respectively. MgF2 was selected because, to the best of our knowledge, reported approaches to date have been limited to the synthesis of MgF2 in film and particle forms only. Therefore, understanding how to control the formation of MgF2 with various structures (both thickness and morphology) opens prospects for advanced lens synthesis and applications.

  18. A Model-based Approach to Reactive Self-Configuring Systems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Nayak, P. Pandurang

    1996-01-01

    This paper describes Livingstone, an implemented kernel for a self-reconfiguring autonomous system that is reactive and uses component-based declarative models. The paper presents a formal characterization of the representation formalism used in Livingstone, and reports on our experience with the implementation in a variety of domains. Livingstone's representation formalism achieves broad coverage of hybrid software/hardware systems by coupling the concurrent transition system models underlying concurrent reactive languages with the discrete qualitative representations developed in model-based reasoning. We achieve a reactive system that performs significant deductions in the sense/response loop by drawing on our past experience at building fast propositional conflict-based algorithms for model-based diagnosis, and by framing a model-based configuration manager as a propositional, conflict-based feedback controller that generates focused, optimal responses. Livingstone automates all these tasks using a single model and a single core deductive engine, thus making significant progress towards achieving a central goal of model-based reasoning. Livingstone, together with the HSTS planning and scheduling engine and the RAPS executive, has been selected as the core autonomy architecture for Deep Space One, the first spacecraft for NASA's New Millennium program.
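
    Livingstone's engine performs focused, conflict-based search over propositional component models; as a much simpler illustration of the underlying idea of consistency-based diagnosis (the two-inverter example and brute-force enumeration here are ours, not Livingstone's algorithm), a diagnosis finds mode assignments consistent with an observation, preferring the fewest faults:

```python
from itertools import product

# Hypothetical two-inverter chain: expected behavior out = not(not(inp)).
# A component in mode "ok" implements its function; "faulty" is unconstrained.

def predict(modes, inp):
    """Predicted output of the chain, or None once a faulty component
    makes the downstream value unconstrained."""
    v = inp
    for mode in modes:            # each stage is an inverter
        if mode == "ok":
            v = 1 - v
        else:
            return None           # faulty: any output is consistent
    return v

def diagnoses(inp, observed, n=2):
    """All mode assignments consistent with the observation,
    ordered by number of faults (fewest faults first)."""
    consistent = [m for m in product(["ok", "faulty"], repeat=n)
                  if predict(m, inp) in (None, observed)]
    return sorted(consistent, key=lambda m: m.count("faulty"))
```

    With input 1 and observed output 0, the all-ok assignment is inconsistent and the minimal diagnoses are the two single-fault candidates; Livingstone reaches the same kind of answer without enumeration by tracking conflicts.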

  19. International Classification of Functioning, Disability and Health Core Sets for cerebral palsy, autism spectrum disorder, and attention-deficit-hyperactivity disorder.

    PubMed

    Schiariti, Verónica; Mahdi, Soheil; Bölte, Sven

    2018-05-30

    Capturing functional information is crucial in childhood disability. The International Classification of Functioning, Disability and Health (ICF) Core Sets promote assessments of functional abilities and disabilities in clinical practice regarding circumscribed diagnoses. However, the specificity of ICF Core Sets for childhood-onset disabilities has been doubted. This study aimed to identify content commonalities and differences among the ICF Core Sets for cerebral palsy (CP) and the newly developed Core Sets for autism spectrum disorder (ASD) and attention-deficit-hyperactivity disorder (ADHD). The categories within each Core Set were aggregated at the ICF component and chapter levels. Content comparison was conducted using descriptive analyses. The activities and participation component of the ICF was the most covered across all Core Sets. Main differences included representation of ICF components and coverage of ICF chapters within each component. CP included all ICF components, while ADHD and ASD predominantly focused on activities and participation. Environmental factors were highly represented in the ADHD Core Sets (40.5%) compared to the ASD (28%) and CP (27%) Core Sets. The International Classification of Functioning, Disability and Health Core Sets for CP, ASD, and ADHD capture both common and unique functional information, showing the importance of creating condition-specific, ICF-based tools to build functional profiles of individuals with childhood-onset disabilities. The ICF Core Sets for CP, ASD, and ADHD include unique functional information. The ICF-based tools for CP, ASD, and ADHD differ in terms of representation and coverage of ICF components and ICF chapters.
Representation of environmental factors uniquely influences functioning and disability across ICF Core Sets for CP, ASD and ADHD. © 2018 Mac Keith Press.

  20. Multi-tiered S-SOA, Parameter-Driven New Islamic Syariah Products of Holistic Islamic Banking System (HiCORE): Virtual Banking Environment

    NASA Astrophysics Data System (ADS)

    Halimah, B. Z.; Azlina, A.; Sembok, T. M.; Sufian, I.; Sharul Azman, M. N.; Azuraliza, A. B.; Zulaiha, A. O.; Nazlia, O.; Salwani, A.; Sanep, A.; Hailani, M. T.; Zaher, M. Z.; Azizah, J.; Nor Faezah, M. Y.; Choo, W. O.; Abdullah, Chew; Sopian, B.

    The Holistic Islamic Banking System (HiCORE), a banking system suitable for a virtual banking environment, was created through a university-industry collaboration initiative between Universiti Kebangsaan Malaysia (UKM) and Fuziq Software Sdn Bhd. HiCORE was modeled on a multi-tiered Simple Services-Oriented Architecture (S-SOA), using a parameter-based semantic approach. HiCORE's existence is timely, as the financial world is looking for a new approach to creating banking and financial products that are interest-free or based on Islamic Syariah principles and jurisprudence. An interest-free banking system has currently caught the interest of bankers and financiers all over the world. HiCORE's parameter-based module houses the Customer Information File (CIF), Deposit and Financing components, and represents the third tier of the multi-tiered Simple SOA approach. This paper highlights the multi-tiered, parameter-driven approach to the creation of new Islamic products based on the 'dalil' (Quran), 'syarat' (rules) and 'rukun' (procedures) required by Syariah principles and jurisprudence, reflected in the semantic ontology embedded in the parameter module of the system.

  1. Extended Day Treatment: A Comprehensive Model of after School Behavioral Health Services for Youth

    ERIC Educational Resources Information Center

    Vanderploeg, Jeffrey J.; Franks, Robert P.; Plant, Robert; Cloud, Marilyn; Tebes, Jacob Kraemer

    2009-01-01

    Extended day treatment (EDT) is an innovative intermediate-level service for children and adolescents with serious emotional and behavioral disorders delivered during the after school hours. This paper describes the core components of the EDT model of care within the context of statewide systems of care, including its core service components,…

  2. JAMS - a software platform for modular hydrological modelling

    NASA Astrophysics Data System (ADS)

    Kralisch, Sven; Fischer, Christian

    2015-04-01

    Current challenges of understanding and assessing the impacts of climate and land use changes on environmental systems demand an ever-increasing integration of data and process knowledge in the corresponding simulation models. Software frameworks that allow for the seamless creation of integrated models from less complex components (domain models, process simulation routines) have therefore gained increasing attention during the last decade. JAMS is an open-source software framework that has been designed specifically to cope with the challenges of eco-hydrological modelling. This is reflected by (i) its flexible approach to representing time and space, (ii) a strong separation of process simulation components from the declarative description of more complex models in domain-specific XML, (iii) powerful analysis and visualization functions for spatial and temporal input and output data, and (iv) parameter optimization and uncertainty analysis functions commonly used in environmental modelling. Based on JAMS, different hydrological and nutrient-transport simulation models have been implemented and successfully applied in recent years. We will present the JAMS core concepts and give an overview of the models, simulation components, and support tools available for the framework. Sample applications will be used to underline the advantages of component-based model designs and to show how JAMS can be used to address the challenges of integrated hydrological modelling.
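The component-based design described above can be sketched as follows. This is a minimal illustration assuming a generic component interface, not the actual JAMS API; the `Precipitation` and `Runoff` components and their numbers are invented.

```python
# Sketch of component-based model composition (not the JAMS API): components
# share a uniform interface, and a model is an ordered list of components
# executed against a shared context once per time step.

class Component:
    def init(self, context):
        pass
    def run(self, context):
        pass

class Precipitation(Component):
    def run(self, context):
        context["rain"] = 2.0  # mm per step, fixed for this sketch

class Runoff(Component):
    def run(self, context):
        # Half of each step's rain becomes discharge (invented coefficient).
        context["discharge"] = context.get("discharge", 0.0) + 0.5 * context["rain"]

def simulate(components, steps):
    context = {}
    for c in components:
        c.init(context)
    for _ in range(steps):
        for c in components:
            c.run(context)
    return context

result = simulate([Precipitation(), Runoff()], steps=10)
# accumulates 0.5 * 2.0 mm over 10 steps -> discharge of 10.0
```

Swapping a process description then means swapping a component, while the declarative model definition (in JAMS, domain-specific XML) only lists which components run in which order.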

  3. The Core Components of Reading Instruction in Chinese

    ERIC Educational Resources Information Center

    Ho, Connie Suk-Han; Wong, Yau-Kai; Yeung, Pui-Sze; Chan, David Wai-ock; Chung, Kevin Kien-Hoa; Lo, Sau-Ching; Luan, Hui

    2012-01-01

    The present study aimed at identifying core components of reading instruction in Chinese within the framework of the tiered intervention model. A curriculum with four teaching components of cognitive-linguistic skills was implemented in a Program school for 3 years since Grade 1. The findings showed that the Tier 1 intervention was effective in…

  4. DETERMINATION OF CENTRAL ENGINE POSITION AND ACCRETION DISK STRUCTURE IN NGC 4261 BY CORE SHIFT MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haga, Takafumi; Doi, Akihiro; Murata, Yasuhiro

    2015-07-01

    We report multifrequency phase-referenced observations of the nearby radio galaxy NGC 4261, which has prominent two-sided jets, using the Very Long Baseline Array at 1.4–43 GHz. We measured radio core positions showing observing-frequency dependences (known as “core shift”) in both the approaching jet and the counterjet. The limit of the core position as the frequency approaches infinity, which suggests a jet base, is separated by 82 ± 16 μas upstream in projection from the 43 GHz core in the approaching jet, corresponding to a deprojected distance of (310 ± 60) R_s (R_s: Schwarzschild radius). In addition, the innermost component on the counterjet side appeared to approach the same position in the infinite-frequency limit, indicating that the cores on both sides converge to the same position, suggesting a spatial coincidence with the central engine. Applying a phase-referencing technique, we also obtained spectral index maps, which indicate that emission from the counterjet is affected by free-free absorption (FFA). The core shift profile on the counterjet side also requires FFA because the core positions at 5–15 GHz cannot be explained by a simple core shift model based on synchrotron self-absorption (SSA). Our result is consistent with the SSA core shift plus an additional disk-like absorber over the counterjet side. Core shift and opacity profiles on the counterjet side suggest a two-component accretion: a radiatively inefficient accretion flow in the inner region and a truncated thin disk in the outer region. We propose a possible solution for the density and temperature profiles in the outer disk on the basis of the radio observations.
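Core-shift fits of this kind are commonly based on the form r(nu) = r_inf + A * nu**(-1/k), with k = 1 for an equipartition SSA jet; the asymptote r_inf marks the jet base. The sketch below fits that form to synthetic positions (not the authors' code or data; the frequencies and numbers are invented).

```python
import numpy as np

# Sketch of a standard SSA core-shift fit: for fixed k the model is linear in
# (r_inf, A), so ordinary least squares recovers the jet-base asymptote.

def fit_core_shift(nu_ghz, r_uas, k=1.0):
    """Least-squares fit of r = r_inf + A * nu**(-1/k); returns (r_inf, A)."""
    X = np.column_stack([np.ones_like(nu_ghz), nu_ghz ** (-1.0 / k)])
    (r_inf, amp), *_ = np.linalg.lstsq(X, r_uas, rcond=None)
    return r_inf, amp

nu = np.array([1.4, 2.3, 5.0, 8.4, 15.4, 23.8, 43.2])  # GHz (illustrative)
r = 80.0 + 500.0 / nu                                   # synthetic positions, uas
r_inf, amp = fit_core_shift(nu, r)
# on these noiseless data the fit recovers r_inf = 80 uas, A = 500 uas GHz
```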

  5. Teaching Personal and Social Responsibility and Transfer of Learning: Opportunities and Challenges for Teachers and Coaches

    ERIC Educational Resources Information Center

    Gordon, Barrie; Doyle, Stephanie

    2015-01-01

    The transfer of learning from the gym to other areas of participants' lives has always been a core component of the Teaching Personal and Social Responsibility Model. The degree to which transfer of learning is successfully facilitated in the reality of Teaching Personal and Social Responsibility Model-based teaching and coaching is, however,…

  6. A Case Study Examining the Career Academy Model at a Large Urban Public High School

    ERIC Educational Resources Information Center

    Ho, Howard

    2013-01-01

    This study focused on how career academies were implemented at a large, urban, public high school. Research shows that the career academy model should consist of 3 core components: (a) a small learning community (SLC), (b) a theme-based curriculum, and (c) business partnerships (Stern, Dayton, & Raby, 2010). The purpose of this qualitative…

  7. Use of an Existing Airborne Radon Data Base in the Verification of the NASA/AEAP Core Model

    NASA Technical Reports Server (NTRS)

    Kritz, Mark A.

    1998-01-01

    The primary objective of this project was to apply tropospheric atmospheric radon (Rn-222) measurements to the development and verification of the global 3-D atmospheric chemical transport model under development by NASA's Atmospheric Effects of Aviation Project (AEAP). The AEAP project had two principal components: (1) a modeling effort, whose goal was to create, test, and apply an elaborate three-dimensional atmospheric chemical transport model (the NASA/AEAP Core model) to an evaluation of the possible short- and long-term effects of aircraft emissions on atmospheric chemistry and climate; and (2) a measurement effort, whose goal was to obtain a focused set of atmospheric measurements that would provide some of the observational data used in the modeling effort. My activity in this project was confined to the first of these components. Both atmospheric transport and atmospheric chemical reactions (as well as the input and removal of chemical species) are accounted for in the NASA/AEAP Core model. Thus, for example, in assessing the effect of aircraft effluents on the chemistry of a given region of the upper troposphere, the model must keep track not only of the chemical reactions of the effluent species emitted by aircraft flying in this region, but also of the transport into the region of these (and other) species from other, remote sources--for example, via the vertical convection of boundary layer air to the upper troposphere. Radon, because of its known surface source, known radioactive half-life, freedom from chemical production or loss, and freedom from removal from the atmosphere by physical scavenging, is a recognized and valuable tool for testing the transport components of global transport and circulation models.
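Radon's value as a transport tracer rests on its fixed radioactive half-life (about 3.8 days for Rn-222): the surviving fraction along an air-parcel trajectory acts as a clock on the transport time. A minimal sketch of the dating idea:

```python
import math

# Rn-222 decays with a half-life of ~3.82 days and is chemically inert, so the
# fraction surviving transport depends only on elapsed time, not chemistry.

HALF_LIFE_DAYS = 3.82

def surviving_fraction(days):
    """Fraction of Rn-222 remaining after `days` of transport."""
    return math.exp(-math.log(2.0) * days / HALF_LIFE_DAYS)

# One half-life leaves half the radon; a week of transport leaves ~28%,
# which is why mismatched model radon profiles expose transport errors.
f_half_life = surviving_fraction(3.82)
f_week = surviving_fraction(7.0)
```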

  8. A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods

    PubMed Central

    Luo, Guangchun; Qin, Ke

    2014-01-01

    Searchable encryption technique enables the users to securely store and search their documents over the remote semitrusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders the functional extensions. We prove that asymmetric searchable structure could be converted to symmetric structure, and functions could be modeled separately apart from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component based on either symmetric or asymmetric setting are converted to some uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In such a way, all functional components could directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric scheme) and range query (previously only available in asymmetric scheme). PMID:24719565
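The layered separation of the core searchable structure from the functional components can be sketched in plaintext as follows (no cryptography shown; the index, scores, and component names are invented for illustration):

```python
# Conceptual sketch of the layered searchable encryption (LSE) idea: the core
# searchable component, whatever its encryption setting, emits a uniform
# mapping (here, a plain list of document ids), which loosely coupled
# functional components then filter without knowing that setting.

def core_search(index, keyword):
    """Core component: resolves a keyword to a uniform list of doc ids."""
    return index.get(keyword, [])

def rank_component(doc_ids, scores):
    """Functional component: ranks ids, independent of the core's setting."""
    return sorted(doc_ids, key=lambda d: scores.get(d, 0), reverse=True)

index = {"cloud": ["d1", "d2", "d3"]}          # stand-in for the encrypted index
scores = {"d1": 0.2, "d2": 0.9, "d3": 0.5}     # stand-in for relevance scores
ranked = rank_component(core_search(index, "cloud"), scores)
# -> ["d2", "d3", "d1"]
```

Because the ranking layer only ever sees the uniform mapping, the same functional component serves both symmetric and asymmetric cores, which is the compatibility claim of the scheme.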

  9. Can we teach core clinical obstetrics and gynaecology skills using low fidelity simulation in an interprofessional setting?

    PubMed

    Kumar, Arunaz; Gilmour, Carole; Nestel, Debra; Aldridge, Robyn; McLelland, Gayle; Wallace, Euan

    2014-12-01

    Core clinical skills acquisition is an essential component of undergraduate medical and midwifery education. Although interprofessional education is an increasingly common format for learning efficient teamwork in clinical medicine, its value in undergraduate education is less clear. We present a collaborative effort from the medical and midwifery schools of Monash University, Melbourne, towards the development of an educational package centred on a core skills-based workshop using low-fidelity simulation models in an interprofessional setting. Detailed feedback on the package was positive regarding the relevance of the teaching content, how well the topic was taught using task trainers and simulation models, the pitch of the teaching, and the confidence participants gained in performing the skill on a real patient after attending the workshop. Overall, interprofessional core skills training using low-fidelity simulation models introduced at an undergraduate level in medicine and midwifery was well accepted. © 2014 The Royal Australian and New Zealand College of Obstetricians and Gynaecologists.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salko, Robert K; Sung, Yixing; Kucukboyaci, Vefa

    The Virtual Environment for Reactor Applications core simulator (VERA-CS) being developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL) includes coupled neutronics, thermal-hydraulics, and fuel temperature components with an isotopic depletion capability. The neutronics capability is based on MPACT, a three-dimensional (3-D) whole-core transport code. The thermal-hydraulics and fuel temperature models are provided by the COBRA-TF (CTF) subchannel code. As part of the CASL development program, the VERA-CS (MPACT/CTF) code system was applied to model and simulate the reactor core response with respect to the departure from nucleate boiling ratio (DNBR) at the limiting time step of a postulated pressurized water reactor (PWR) main steamline break (MSLB) event initiated at hot zero power (HZP), either with offsite power available and the reactor coolant pumps in operation (high-flow case) or without offsite power, where the reactor core is cooled through natural circulation (low-flow case). The VERA-CS simulation was based on core boundary conditions from RETRAN-02 system transient calculations and STAR-CCM+ computational fluid dynamics (CFD) core inlet distribution calculations. The evaluation indicated that the VERA-CS code system is capable of modeling and simulating the quasi-steady-state reactor core response under the steamline break (SLB) accident condition, that the results are insensitive to uncertainties in the inlet flow distributions from the CFD simulations, and that the high-flow case is more DNB limiting than the low-flow case.

  11. Impact of the dynamical core on the direct simulation of tropical cyclones in a high-resolution global model

    DOE PAGES

    Reed, K. A.; Bacmeister, J. T.; Rosenbloom, N. A.; ...

    2015-05-13

    Our paper examines the impact of the dynamical core on the simulation of tropical cyclone (TC) frequency, distribution, and intensity. The dynamical core, the central fluid flow component of any general circulation model (GCM), is often overlooked in the analysis of a model's ability to simulate TCs compared to the impact of more commonly documented components (e.g., physical parameterizations). The Community Atmosphere Model version 5 is configured with multiple dynamics packages. This analysis demonstrates that the dynamical core has a significant impact on storm intensity and frequency, even in the presence of similar large-scale environments. In particular, the spectral element core produces stronger TCs and more hurricanes than the finite-volume core using very similar parameterization packages, despite the latter having a slightly more favorable TC environment. Furthermore, these results suggest that more detailed investigations into the impact of the GCM dynamical core on TC climatology are needed to fully understand these uncertainties. Key points: (i) the impact of the GCM dynamical core is often overlooked in TC assessments; (ii) the CAM5 dynamical core has a significant impact on TC frequency and intensity; (iii) a larger effort is needed to better understand this uncertainty.

  12. A Delphi study to identify the core components of nurse to nurse handoff.

    PubMed

    O'Rourke, Jennifer; Abraham, Joanna; Riesenberg, Lee Ann; Matson, Jeff; Lopez, Karen Dunn

    2018-03-08

    The aim of this study was to identify the core components of nurse-nurse handoffs. Patient handoffs involve a process of passing information, responsibility and control from one caregiver to the next during care transitions. Around the globe, ineffective handoffs have serious consequences resulting in wrong treatments, delays in diagnosis, longer stays, medication errors, patient falls and patient deaths. To date, the core components of nurse-nurse handoff have not been identified. This lack of identification is a significant gap in moving towards a standardized approach for nurse-nurse handoff. Mixed methods design using the Delphi technique. From May 2016 - October 2016, using a series of iterative steps, a panel of handoff experts gave feedback on the nurse-nurse handoff core components and the content in each component to be passed from one nurse to the next during a typical unit-based shift handoff. Consensus was defined as 80% agreement or higher. After three rounds of participant review, 17 handoff experts with backgrounds in clinical nursing practice, academia and handoff research came to consensus on the core components of handoff: patient summary, action plan and nurse-nurse synthesis. This is the first study to identify the core components of nurse-nurse handoff. Subsequent testing of the core components will involve evaluating the handoff approach in a simulated and then actual patient care environment. Our long-term goal is to improve patient safety outcomes by validating an evidence-based handoff framework and handoff curriculum for pre-licensure nursing programmes that strengthen the quality of their handoff communication as they enter clinical practice. © 2018 John Wiley & Sons Ltd.

  13. MODELING EXTRAGALACTIC EXTINCTION THROUGH GAMMA-RAY BURST AFTERGLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zonca, Alberto; Mulas, Giacomo; Casu, Silvia

    We analyze extragalactic extinction profiles derived through gamma-ray burst afterglows, using a dust model specifically constructed on the assumption that dust grains are not immutable but respond, time-dependently, to the local physics. The model includes core-mantle spherical particles of mixed chemical composition (a silicate core with sp² and sp³ carbonaceous layers) and an additional molecular component in the form of free-flying polycyclic aromatic hydrocarbons. We fit most of the observed extinction profiles; failures occur for lines of sight presenting remarkable rises blueward of the bump. We find a tendency for the carbon chemical structure to become more aliphatic with galactic activity, and to some extent with increasing redshift. Moreover, the contribution of the molecular component to the total extinction is more important in younger objects. The results of the fitting procedure (both successes and failures) may be naturally interpreted through an evolutionary prescription based on the carbon cycle in the interstellar medium of galaxies.

  14. Core losses of a permanent magnet synchronous motor with an amorphous stator core under inverter and sinusoidal excitations

    NASA Astrophysics Data System (ADS)

    Yao, Atsushi; Sugimoto, Takaya; Odawara, Shunya; Fujisaki, Keisuke

    2018-05-01

    We report the core loss properties of permanent magnet synchronous motors (PMSMs) with an amorphous magnetic material (AMM) stator core under inverter and sinusoidal excitations. To put the core loss properties of the AMM core in context, a comparison with a non-oriented (NO) core is also performed. In addition, based on both experiments and numerical simulations, we estimate the higher (time and space) harmonic components of the core losses under inverter and sinusoidal excitations. The core losses of the PMSM are reduced by about 59% by using an AMM stator core instead of an NO core under sinusoidal excitation. We show that the average decrease obtained by using AMM instead of NO in the stator core is about 94% in the time harmonic components.
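Separating the fundamental from higher time-harmonic content is commonly done with an FFT. A minimal sketch on a synthetic inverter-like waveform (not the authors' measurement chain; the 0.2-amplitude fifth harmonic stands in for switching ripple):

```python
import numpy as np

# Decompose a flux-density-like waveform into its fundamental and a higher
# time harmonic, then compare their amplitudes.

t = np.linspace(0.0, 1.0, 1024, endpoint=False)          # one fundamental period
wave = np.sin(2 * np.pi * t) + 0.2 * np.sin(2 * np.pi * 5 * t)

# Normalize rfft magnitudes so each bin reports the sinusoid's amplitude.
spectrum = np.abs(np.fft.rfft(wave)) / (len(t) / 2)
fundamental = spectrum[1]   # 1st harmonic amplitude -> 1.0
harmonic_5 = spectrum[5]    # 5th harmonic amplitude -> 0.2
```

Loss contributions of individual time harmonics can then be attributed bin by bin, which is the kind of decomposition the 94% reduction figure refers to.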

  15. Indoorgml - a Standard for Indoor Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Li, Ki-Joune

    2016-06-01

    With the recent progress of mobile devices and indoor positioning technologies, it has become possible to provide location-based services in indoor space as well as outdoor space, either seamlessly across indoor and outdoor spaces or independently for indoor space alone. However, we cannot simply apply spatial models developed for outdoor space to indoor space, owing to their differences. For example, coordinate reference systems are employed to indicate a specific position in outdoor space, whereas a location in indoor space is typically specified by a cell identifier such as a room number. Unlike outdoor space, the distance between two points in indoor space is determined not by the length of the straight line between them but by the constraints imposed by indoor components such as walls, stairs, and doors. For this reason, we need to establish a new framework for indoor space, from its fundamental theoretical basis and indoor spatial data models to information systems that store, manage, and analyse indoor spatial data. To provide this framework, an international standard called IndoorGML has been developed and published by the OGC (Open Geospatial Consortium). The standard is based on a cellular notion of space, which considers an indoor space as a set of non-overlapping cells. It consists of two types of modules: a core module and extension modules. While the core module consists of four basic conceptual and implementation modeling components (a geometric model for cells, topology between cells, a semantic model of cells, and a multi-layered space model), extension modules may be defined on top of the core module to support a particular application area. In the first version of the standard, an extension for indoor navigation is provided.
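The cellular notion of space can be sketched as a cell-adjacency graph: cells are named spaces, topology is door-mediated adjacency, and indoor "distance" is a path through that graph rather than a straight line. The floor plan below is invented, and this is an illustration of the concept, not the IndoorGML schema.

```python
from collections import deque

# Cells and their door-mediated adjacency (hypothetical floor plan).
adjacency = {
    "room101": ["corridor"],
    "room102": ["corridor"],
    "corridor": ["room101", "room102", "stairs"],
    "stairs": ["corridor"],
}

def cell_path(start, goal):
    """Breadth-first route through the cell adjacency graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adjacency[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

route = cell_path("room101", "stairs")
# -> ["room101", "corridor", "stairs"]: two rooms may be geometrically
# adjacent yet far apart if no door connects them.
```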

  16. Building Interoperable FHIR-Based Vocabulary Mapping Services: A Case Study of OHDSI Vocabularies and Mappings.

    PubMed

    Jiang, Guoqian; Kiefer, Richard; Prud'hommeaux, Eric; Solbrig, Harold R

    2017-01-01

    The OHDSI Common Data Model (CDM) is a deep information model in which the vocabulary component plays a critical role in enabling consistent coding and querying of clinical data. The objective of this study is to create methods and tools that expose the OHDSI vocabularies and mappings as vocabulary mapping services using two HL7 FHIR core terminology resources, ConceptMap and ValueSet. We discuss the benefits and challenges of building these FHIR-based terminology services.
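A ConceptMap-style lookup of the kind such services expose can be sketched as follows. The structure below is a heavily trimmed, ConceptMap-shaped dictionary with made-up codes and URIs, and the lookup merely mimics what a $translate-style operation returns; it is not the authors' implementation.

```python
# Trimmed, ConceptMap-shaped structure (hypothetical codes and systems).
concept_map = {
    "resourceType": "ConceptMap",
    "group": [{
        "source": "http://example.org/source-vocab",
        "target": "http://example.org/target-vocab",
        "element": [
            {"code": "A10", "target": [{"code": "T42", "equivalence": "equivalent"}]},
        ],
    }],
}

def translate(cmap, code):
    """Return the target codes mapped from `code` across all groups."""
    matches = []
    for group in cmap["group"]:
        for element in group["element"]:
            if element["code"] == code:
                matches.extend(t["code"] for t in element["target"])
    return matches

result = translate(concept_map, "A10")
# -> ["T42"]
```

Exposing OHDSI's mapping tables in this shape lets any FHIR-aware client resolve source-vocabulary codes without knowing the CDM's internal table layout.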

  17. Strong-lensing analysis of MACS J0717.5+3745 from Hubble Frontier Fields observations: How well can the mass distribution be constrained?

    NASA Astrophysics Data System (ADS)

    Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.

    2016-04-01

    We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. 
We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.

  18. Conceptual bases of Christian, faith-based substance abuse rehabilitation programs: qualitative analysis of staff interviews.

    PubMed

    McCoy, Lisa K; Hermos, John A; Bokhour, Barbara G; Frayne, Susan M

    2004-09-01

    Faith-based substance abuse rehabilitation programs provide residential treatment for many substance abusers. To determine the key governing concepts of such programs, we conducted semi-structured interviews with a sample of eleven clinical and administrative staff referred to us by program directors at six Evangelical Christian, faith-based, residential rehabilitation programs representing two large, nationwide networks. Qualitative analysis using grounded theory methods examined how spirituality is incorporated into treatment and elicited key theories of addiction and recovery. Although the programs contain comprehensive secular components, their core activities are strongly rooted in a Christian belief system that informs their understanding of addiction and recovery and drives the treatment format. These governing conceptions, that addiction stems from attempts to fill a spiritual void through substance use and that recovery comes through salvation and a long-term relationship with God, provide an explicit, theory-driven model upon which the programs base their core treatment activities. Knowledge of these core concepts and practices should be helpful to clinicians in considering referrals to faith-based recovery programs.

  19. A Model-Driven, Science Data Product Registration Service

    NASA Astrophysics Data System (ADS)

    Hardman, S.; Ramirez, P.; Hughes, J. S.; Joyner, R.; Cayanan, M.; Lee, H.; Crichton, D. J.

    2011-12-01

    The Planetary Data System (PDS) has undertaken an effort to overhaul the PDS data architecture (including the data model, data structures, data dictionary, etc.) and to deploy an upgraded software system (including data services, a distributed data catalog, etc.) that fully embraces the PDS federation as an integrated system while taking advantage of modern innovations in information technology (including networking capabilities, processing speeds, and software breakthroughs). A core component of this new system is the Registry Service, which provides functionality for tracking, auditing, locating, and maintaining artifacts within the system. These artifacts range from data and label files to schemas, dictionary definitions for objects and elements, documents, and services. This service offers a single reference implementation of the registry capabilities detailed in the Consultative Committee for Space Data Systems (CCSDS) Registry Reference Model White Book. The CCSDS Reference Model in turn relies heavily on the Electronic Business using eXtensible Markup Language (ebXML) standards for registry services and the registry information model, managed by the OASIS consortium. Registries are pervasive components in most information systems. For example, data dictionaries, service registries, LDAP directory services, and even databases provide registry-like services. These all keep an account of informational items used in large-scale information systems, ranging from data values such as names and codes to vocabularies, services, and software components. The problem is that many of these registry-like services were designed with their own data models tied to the specific type of artifact they track, and each has its own specific interface for interacting with the service.
This Registry Service implements the data model specified in the ebXML Registry Information Model (RIM) specification that supports the various artifacts above as well as offering the flexibility to support customer-defined artifacts. Key features for the Registry Service include: - Model-based configuration specifying customer-defined artifact types, metadata attributes to capture for each artifact type, supported associations and classification schemes. - A REST-based external interface that is accessible via the Hypertext Transfer Protocol (HTTP). - Federation of Registry Service instances allowing associations between registered artifacts across registries as well as queries for artifacts across those same registries. A federation also enables features such as replication and synchronization if desired for a given deployment. In addition to its use as a core component of the PDS, the generic implementation of the Registry Service facilitates its applicability as a core component in any science data archive or science data system.
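The register/associate/query capabilities described above can be sketched as a minimal in-memory registry. This is illustrative only, not the PDS implementation; the identifiers, artifact types, and relation names are invented.

```python
# Minimal registry sketch: artifacts of customer-defined types are registered
# under globally unique ids, associated with one another, and queried by type.

class Registry:
    def __init__(self):
        self.artifacts = {}      # guid -> {"type": ..., metadata...}
        self.associations = []   # (source_guid, relation, target_guid)

    def register(self, guid, artifact_type, **metadata):
        self.artifacts[guid] = {"type": artifact_type, **metadata}
        return guid

    def associate(self, source, target, relation):
        self.associations.append((source, relation, target))

    def query(self, artifact_type):
        return [g for g, a in self.artifacts.items() if a["type"] == artifact_type]

reg = Registry()
reg.register("urn:data:001", "Product", label="image table")
reg.register("urn:doc:001", "Document", label="user guide")
reg.associate("urn:doc:001", "urn:data:001", "describes")
# reg.query("Product") -> ["urn:data:001"]
```

In the real service these operations sit behind the REST interface over HTTP, and federation links associations across registry instances rather than within one in-memory store.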

  20. Research on the equivalence between digital core and rock physics models

    NASA Astrophysics Data System (ADS)

    Yin, Xingyao; Zheng, Ying; Zong, Zhaoyun

    2017-06-01

    In this paper, we calculate the elastic moduli of 3D digital cores using the finite element method, systematically study the equivalence between the digital core model and various rock physics models, and carefully analyze the conditions under which the equivalence relationships hold. The influences of the pore aspect ratio and the consolidation coefficient on the equivalence relationships are also further refined. Theoretical analysis indicates that the finite element simulation based on the digital core is equivalent to the boundary theory and the Gassmann model. For pure sandstones, the effective medium theory models (SCA and DEM) and the digital core model are equivalent when the pore aspect ratio is within a certain range, and the dry frame models (Nur and Pride models) and the digital core model are equivalent when the consolidation coefficient takes a specific value. Given these equivalence relationships, comparing the elastic moduli from effective medium theory with those from digital rock physics is an effective approach for predicting the pore aspect ratio. Furthermore, the traditional digital core models with two components (pores and matrix) are extended to multiple minerals to more precisely characterize the features and mineral compositions of rocks in underground reservoirs. This paper studies the effects of shale content on the elastic moduli of shaly sandstones. When structural shale is present in the sandstone, the elastic moduli of the digital cores are in reasonable agreement with the DEM model. However, when dispersed shale is present in the sandstone, the Hill model cannot precisely describe the changes in the stiffness of the pore space. Digital rock physics captures rock features such as pore aspect ratio, consolidation coefficient, and rock stiffness. Therefore, digital core technology can, to some extent, replace theoretical rock physics models because its results are more accurate than those of the theoretical models.
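The Gassmann model the digital cores are tested against is the standard fluid-substitution relation. A minimal implementation (the moduli and porosity below are illustrative values in GPa, not the paper's data):

```python
# Gassmann fluid substitution:
#   K_sat = K_dry + (1 - K_dry/K_min)**2
#           / (phi/K_fl + (1 - phi)/K_min - K_dry/K_min**2)
# where K_dry, K_min, K_fl are the dry-frame, mineral, and fluid bulk moduli
# and phi is porosity. Shear modulus is unchanged by the fluid.

def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative sandstone: 12 GPa dry frame, quartz mineral 37 GPa,
# water 2.25 GPa, 20% porosity.
k_sat = gassmann_k_sat(k_dry=12.0, k_min=37.0, k_fl=2.25, phi=0.2)
# the fluid stiffens the rock: k_dry < k_sat < k_min
```

A digital-core finite element result that matches this K_sat for the same inputs is what the paper's equivalence claim amounts to in the Gassmann case.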

  1. Enhanced Core Noise Modeling for Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.

    2011-01-01

    This report describes work performed by MTC Technologies (MTCT) for NASA Glenn Research Center (GRC) under Contract NAS3-00178, Task Order No. 15. MTCT previously developed a first-generation empirical model that correlates the core/combustion noise of four GE engines, the CF6, CF34, CFM56, and GE90 for General Electric (GE) under Contract No. 200-1X-14W53048, in support of GRC Contract NAS3-01135. MTCT has demonstrated in earlier noise modeling efforts that the improvement of predictive modeling is greatly enhanced by an iterative approach, so in support of NASA's Quiet Aircraft Technology Project, GRC sponsored this effort to improve the model. Since the noise data available for correlation are total engine noise spectra, it is total engine noise that must be predicted. Since the scope of this effort was not sufficient to explore fan and turbine noise, the most meaningful comparisons must be restricted to frequencies below the blade passage frequency. Below the blade passage frequency and at relatively high power settings jet noise is expected to be the dominant source, and comparisons are shown that demonstrate the accuracy of the jet noise model recently developed by MTCT for NASA under Contract NAS3-00178, Task Order No. 10. At lower power settings the core noise became most apparent, and these data corrected for the contribution of jet noise were then used to establish the characteristics of core noise. There is clearly more than one spectral range where core noise is evident, so the spectral approach developed by von Glahn and Krejsa in 1982 wherein four spectral regions overlap, was used in the GE effort. Further analysis indicates that the two higher frequency components, which are often somewhat masked by turbomachinery noise, can be treated as one component, and it is on that basis that the current model is formulated. The frequency scaling relationships are improved and are now based on combustor and core nozzle geometries. 
In conjunction with the Task Order No. 10 jet noise model, this core noise model is shown to provide statistical accuracy comparable to the jet noise model for frequencies below blade passage. This model is incorporated in the NASA FOOTPR code, and a user's guide is provided.
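Because only total engine noise spectra are available for correlation, component predictions (jet noise plus core noise) must be summed energetically before comparison with data. A minimal sketch of decibel summation for independent sources; `combine_spectra_db` is a hypothetical helper, not a function from the report:

```python
import math

def combine_spectra_db(levels_db):
    """Energetic (power) sum of sound pressure levels in dB from
    independent noise components, e.g. jet noise plus core noise
    at one frequency band."""
    total_power = sum(10 ** (lvl / 10) for lvl in levels_db)
    return 10 * math.log10(total_power)

# Two equal 90 dB components combine to about 93 dB, since
# doubling the acoustic power adds 10*log10(2) ~ 3 dB.
print(round(combine_spectra_db([90.0, 90.0]), 1))  # → 93.0
```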

  2. Rotation of a rigid satellite with a fluid component: a new light onto Titan's obliquity

    NASA Astrophysics Data System (ADS)

    Boué, Gwenaël; Rambaux, Nicolas; Richard, Andy

    2017-12-01

    We revisit the rotation dynamics of a rigid satellite with either a liquid core or a global subsurface ocean. In both problems, the flow of the fluid component is assumed inviscid. The study of a hollow satellite with a liquid core is based on the Poincaré-Hough model, which provides exact equations of motion. We introduce an approximation when the ellipticity of the cavity is low. This simplification allows both types of satellite to be modelled in the same manner. The analysis of their rotation is done in a non-canonical Hamiltonian formalism closely related to Poincaré's "forme nouvelle des équations de la mécanique". In the case of a satellite with a global ocean, we obtain a seven-degree-of-freedom system. Six of the degrees of freedom account for the motion of the two rigid components, and the last is associated with the fluid layer. We apply our model to Titan, for which the origin of the obliquity is still a debated question. We show that the observed value is compatible with Titan slightly departing from hydrostatic equilibrium and being in a Cassini equilibrium state.

  3. Design, fabrication and test of a trace contaminant control system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A trace contaminant control system was designed, fabricated, and evaluated to determine suitability of the system concept to future manned spacecraft. Two different models were considered. The load model initially required by the contract was based on the Space Station Prototype (SSP) general specifications SVSK HS4655, reflecting a change from a 9-man crew to a 6-man crew relative to the model developed in previous phases of this effort. Trade studies and a system preliminary design were accomplished based on this contaminant load, including computer analyses to define the optimum system configuration in terms of component arrangements, flow rates and component sizing. At the completion of the preliminary design effort a revised contaminant load model was developed for the SSP. Additional analyses were then conducted to define the impact of this new contaminant load model on the system configuration. A full scale foam-core mock-up with the appropriate SSP system interfaces was also fabricated.

  4. Molecular Determinants and Dynamics of Hepatitis C Virus Secretion

    PubMed Central

    Coller, Kelly E.; Heaton, Nicholas S.; Berger, Kristi L.; Cooper, Jacob D.; Saunders, Jessica L.; Randall, Glenn

    2012-01-01

    The current model of hepatitis C virus (HCV) production involves the assembly of virions on or near the surface of lipid droplets, envelopment at the ER in association with components of VLDL synthesis, and egress via the secretory pathway. However, the cellular requirements for and a mechanistic understanding of HCV secretion are incomplete at best. We combined an RNA interference (RNAi) analysis of host factors for infectious HCV secretion with the development of live cell imaging of HCV core trafficking to gain a detailed understanding of HCV egress. RNAi studies identified multiple components of the secretory pathway, including ER to Golgi trafficking, lipid and protein kinases that regulate budding from the trans-Golgi network (TGN), VAMP1 vesicles and adaptor proteins, and the recycling endosome. Our results support a model wherein HCV is infectious upon envelopment at the ER and exits the cell via the secretory pathway. We next constructed infectious HCV with a tetracysteine (TC) tag insertion in core (TC-core) to monitor the dynamics of HCV core trafficking in association with its cellular cofactors. In order to isolate core protein movements associated with infectious HCV secretion, only trafficking events that required the essential HCV assembly factor NS2 were quantified. TC-core traffics to the cell periphery along microtubules and this movement can be inhibited by nocodazole. Sub-populations of TC-core localize to the Golgi and co-traffic with components of the recycling endosome. Silencing of the recycling endosome component Rab11a results in the accumulation of HCV core at the Golgi. The majority of dynamic core traffics in association with apolipoprotein E (ApoE) and VAMP1 vesicles. This study identifies many new host cofactors of HCV egress, while presenting dynamic studies of HCV core trafficking in infected cells. PMID:22241992

  5. Decadal variability in core surface flows deduced from geomagnetic observatory monthly means

    NASA Astrophysics Data System (ADS)

    Whaler, K. A.; Olsen, N.; Finlay, C. C.

    2016-10-01

    Monthly means of the magnetic field measurements at ground observatories are a key data source for studying temporal changes of the core magnetic field. However, when they are calculated in the usual way, contributions of external (magnetospheric and ionospheric) origin may remain, which make them less favourable for studying the field generated by dynamo action in the core. We remove external field predictions, including a new way of characterizing the magnetospheric ring current, from the data and then calculate revised monthly means using robust methods. The geomagnetic secular variation (SV) is calculated as the first annual differences of these monthly means, which also removes the static crustal field. SV time-series based on revised monthly means are much less scattered than those calculated from ordinary monthly means, and their variances and correlations between components are smaller. On the annual to decadal timescale, the SV is generated primarily by advection in the fluid outer core. We demonstrate the utility of the revised monthly means by calculating models of the core surface advective flow between 1997 and 2013 directly from the SV data. One set of models assumes flow that is constant over three months; such models exhibit large and rapid temporal variations. For models of this type, the same fit can be achieved with less complex flows when using SV derived from revised monthly means rather than from ordinary monthly means. However, those obtained from ordinary monthly means are able to follow excursions in SV that are likely to be external field contamination rather than core signals. Having established that we can find models that fit the data adequately, we then assess how much temporal variability is required. Previous studies have suggested that the flow is consistent with torsional oscillations (TO), solid body-like oscillations of fluid on concentric cylinders with axes aligned along the Earth's rotation axis.
TO have been proposed to explain decadal timescale changes in the length-of-day. We invert for flow models where the only temporal changes are consistent with TO, but such models have an unacceptably large data misfit. However, if we relax the TO constraint to allow a little more temporal variability, we can fit the data as well as with flows assumed constant over three months, demonstrating that rapid SV changes can be reproduced by rather small flow changes. Although the flow itself changes slowly, its time derivative can be locally (temporally and spatially) large, in particular when and where core surface secular acceleration peaks. Spherical harmonic expansion coefficients of the flows are not well resolved, and many of them are strongly correlated. Averaging functions, a measure of our ability to determine the flow at a given location from the data distribution available, are poor approximations to the ideal, even when centred on points of the core surface below areas of high observatory density. Both resolution and averaging functions are noticeably worse for the toroidal flow component, which dominates the flow, than the poloidal flow component, except around the magnetic equator where averaging functions for both components are poor.
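The construction described above (SV as first annual differences of monthly means, which cancels the static crustal field) is simple enough to sketch; the series below is synthetic, not observatory data:

```python
def secular_variation(monthly_means):
    """First annual differences: SV[i] = x[i+12] - x[i] (nT/yr).
    Any constant (static crustal) contribution cancels in the
    difference, leaving the time-varying core signal."""
    return [monthly_means[i + 12] - monthly_means[i]
            for i in range(len(monthly_means) - 12)]

# A constant crustal offset plus a 1 nT/month core trend gives
# a flat SV of 12 nT/yr.
series = [500.0 + 1.0 * m for m in range(36)]
print(secular_variation(series)[:3])  # → [12.0, 12.0, 12.0]
```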

  6. A probabilistic based failure model for components fabricated from anisotropic graphite

    NASA Astrophysics Data System (ADS)

    Xiao, Chengfeng

    The nuclear moderator for high temperature nuclear reactors is fabricated from graphite. During reactor operations graphite components are subjected to complex stress states arising from structural loads, thermal gradients, neutron irradiation damage, and seismic events. Graphite is a quasi-brittle material. Two aspects of nuclear grade graphite, i.e., material anisotropy and different behavior in tension and compression, are explicitly accounted for in this effort. Fracture mechanics methods are useful for metal alloys, but they are problematic for anisotropic materials with a microstructure that makes it difficult to identify a "critical" flaw. In fact, cracking in a graphite core component does not necessarily result in the loss of integrity of a nuclear graphite core assembly. A phenomenological failure criterion that does not rely on flaw detection has been derived that accounts for the material behaviors mentioned. The probability of failure of components fabricated from graphite is governed by the scatter in strength. The design protocols being proposed by international code agencies recognize that design and analysis of reactor core components must be based upon probabilistic principles. The reliability models proposed herein for isotropic graphite and for graphite that can be characterized as transversely isotropic are another set of design tools for the next generation very high temperature reactors (VHTR) as well as molten salt reactors. The work begins with a review of phenomenologically based deterministic failure criteria. A number of failure models of this genre are compared with recent multiaxial nuclear grade failure data, and aspects of each are shown to be lacking. The basic behavior of different failure strengths in tension and compression is exhibited by failure models derived for concrete, but attempts to extend these concrete models to anisotropy were unsuccessful. The phenomenological models are directly dependent on stress invariants.
A set of invariants, known as an integrity basis, was developed for a non-linear elastic constitutive model. This integrity basis allowed the non-linear constitutive model to exhibit different behavior in tension and compression and moreover, the integrity basis was amenable to being augmented and extended to anisotropic behavior. This integrity basis served as the starting point in developing both an isotropic reliability model and a reliability model for transversely isotropic materials. At the heart of the reliability models is a failure function very similar in nature to the yield functions found in classic plasticity theory. The failure function is derived and presented in the context of a multiaxial stress space. States of stress inside the failure envelope denote safe operating states. States of stress on or outside the failure envelope denote failure. The phenomenological strength parameters associated with the failure function are treated as random variables. There is a wealth of failure data in the literature that supports this notion. The mathematical integration of a joint probability density function that is dependent on the random strength variables over the safe operating domain defined by the failure function provides a way to compute the reliability of a state of stress in a graphite core component fabricated from graphite. The evaluation of the integral providing the reliability associated with an operational stress state can only be carried out using a numerical method. Monte Carlo simulation with importance sampling was selected to make these calculations. The derivation of the isotropic reliability model and the extension of the reliability model to anisotropy are provided in full detail. Model parameters are cast in terms of strength parameters that can (and have been) characterized by multiaxial failure tests. 
Model predictions are compared with failure data, and a brief comparison is made to the reliability predictions called for in the ASME Boiler and Pressure Vessel Code. Future work is identified that would provide further verification and augmentation of the numerical methods used to evaluate model predictions.
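The closing computational step (integrating a joint density of random strength parameters over the safe domain bounded by the failure function) can be illustrated with plain Monte Carlo. The record uses importance sampling and a multiaxial failure envelope; this scalar version with made-up numbers is only a sketch of the idea:

```python
import random

def reliability_mc(stress, strength_mean, strength_cov, n=100_000, seed=1):
    """Monte Carlo reliability estimate: sample the random strength
    variable and count draws for which the stress state lies inside
    the failure envelope (here simply stress < strength)."""
    rng = random.Random(seed)
    sd = strength_cov * strength_mean
    inside = sum(stress < rng.gauss(strength_mean, sd) for _ in range(n))
    return inside / n

# Mean strength 30 MPa with 10% scatter under a 20 MPa stress:
# the stress sits 3.3 standard deviations below the mean strength,
# so the estimated reliability should be near 0.9996.
print(reliability_mc(20.0, 30.0, 0.1))
```

Importance sampling would replace the draws from the strength density with draws concentrated near the failure envelope, reweighting each sample by the density ratio, which matters when the failure probability is very small.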

  7. Melting in super-earths.

    PubMed

    Stixrude, Lars

    2014-04-28

    We examine the possible extent of melting in rock-iron super-earths, focusing on those in the habitable zone. We consider the energetics of accretion and core formation, the timescale of cooling and its dependence on viscosity and partial melting, thermal regulation via the temperature dependence of viscosity, and the melting curves of rock and iron components at the ultra-high pressures characteristic of super-earths. We find that the efficiency of kinetic energy deposition during accretion increases with planetary mass; considering the likely role of giant impacts and core formation, we find that super-earths probably complete their accretionary phase in an entirely molten state. Considerations of thermal regulation lead us to propose model temperature profiles of super-earths that are controlled by silicate melting. We estimate melting curves of iron and rock components up to the extreme pressures characteristic of super-earth interiors based on existing experimental and ab initio results and scaling laws. We construct super-earth thermal models by solving the equations of mass conservation and hydrostatic equilibrium, together with equations of state of rock and iron components. We set the potential temperature at the core-mantle boundary and at the surface to the local silicate melting temperature. We find that ancient (∼4 Gyr) super-earths may be partially molten at the top and bottom of their mantles, and that mantle convection is sufficiently vigorous to sustain dynamo action over the whole range of super-earth masses.
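The interior-structure step described above (solving mass conservation and hydrostatic equilibrium together with an equation of state) can be sketched with the simplest possible closure, a uniform-density planet; the actual models use pressure-dependent equations of state for the rock and iron components:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def central_pressure(rho, R, n=200_000):
    """Integrate mass conservation dm/dr = 4*pi*r^2*rho outward while
    accumulating the hydrostatic pressure integral of G*m(r)*rho/r^2 dr.
    For uniform density the analytic answer is (2*pi/3)*G*rho^2*R^2."""
    dr = R / n
    m, P = 0.0, 0.0
    for i in range(1, n + 1):
        r = i * dr
        m += 4 * math.pi * r * r * rho * dr   # mass conservation
        P += G * m * rho / (r * r) * dr       # hydrostatic equilibrium
    return P

# Earth-like values (rho ~ 5500 kg/m^3, R ~ 6370 km) give a central
# pressure on the order of 1.7e11 Pa.
print(central_pressure(5500.0, 6.37e6))
```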

  8. Seven Activities for Enhancing the Replicability of Evidence-Based Practices. Research-to-Results Brief. Publication #2007-30

    ERIC Educational Resources Information Center

    Metz, Allison J. R.; Bowie, Lillian; Blase, Karen

    2007-01-01

    This brief will define program replication, describe the critical role of "core components" in program replication, and outline seven activities that program developers and researchers can conduct to enhance the replicability of effective program models and facilitate their adoption by other organizations and programs. Outlined are seven specific…

  9. XLS (c9orf142) is a new component of mammalian DNA double-stranded break repair.

    PubMed

    Craxton, A; Somers, J; Munnur, D; Jukes-Jones, R; Cain, K; Malewicz, M

    2015-06-01

    Repair of double-stranded DNA breaks (DSBs) in mammalian cells primarily occurs by the non-homologous end-joining (NHEJ) pathway, which requires seven core proteins (Ku70/Ku86, DNA-PKcs (DNA-dependent protein kinase catalytic subunit), Artemis, XRCC4-like factor (XLF), XRCC4 and DNA ligase IV). Here we show using combined affinity purification and mass spectrometry that DNA-PKcs co-purifies with all known core NHEJ factors. Furthermore, we have identified a novel, evolutionarily conserved protein associated with DNA-PKcs, c9orf142. Computer-based modelling of c9orf142 predicted a structure very similar to XRCC4; hence we have named c9orf142 XLS (XRCC4-like small protein). Depletion of c9orf142/XLS in cells impaired DSB repair, consistent with a defect in NHEJ. Furthermore, c9orf142/XLS interacted with other core NHEJ factors. These results demonstrate the existence of a new component of the NHEJ DNA repair pathway in mammalian cells.

  10. The message development tool: a case for effective operationalization of messaging in social marketing practice.

    PubMed

    Mattson, Marifran; Basu, Ambar

    2010-07-01

    That messages are essential, if not the most critical component of any communicative process, seems like an obvious claim. More so when the communication is about health--one of the most vital and elemental of human experiences (Babrow & Mattson, 2003). Any communication campaign that aims to change a target audience's health behaviors needs to centralize messages. Even though messaging strategies are an essential component of social marketing and are a widely used campaign model, health campaigns based on this framework have not always been able to effectively operationalize this key component, leading to cases where initiating and sustaining prescribed health behavior has been difficult (MacStravic, 2000). Based on an examination of the VERB campaign and an Australian breastfeeding promotion campaign, we propose a message development tool within the ambit of the social marketing framework that aims to extend the framework and ensure that the messaging component of the model is contextualized at the core of planning, implementation, and evaluation efforts.

  11. Estimating a Preference-Based Index from the Clinical Outcomes in Routine Evaluation–Outcome Measure (CORE-OM)

    PubMed Central

    Brazier, John E.; Rowen, Donna; Barkham, Michael

    2013-01-01

    Background. The Clinical Outcomes in Routine Evaluation–Outcome Measure (CORE-OM) is used to evaluate the effectiveness of psychological therapies in people with common mental disorders. The objective of this study was to estimate a preference-based index for this population using CORE-6D, a health state classification system derived from the CORE-OM consisting of a 5-item emotional component and a physical item, and to demonstrate a novel method for generating states that are not orthogonal. Methods. Rasch analysis was used to identify 11 emotional health states from CORE-6D that were frequently observed in the study population and are, thus, plausible (in contrast, conventional statistical design might generate implausible states). Combined with the 3 response levels of the physical item of CORE-6D, they generate 33 plausible health states, 18 of which were selected for valuation. A valuation survey of 220 members of the public in South Yorkshire, United Kingdom, was undertaken using the time tradeoff (TTO) method. Regression analysis was subsequently used to predict values for all possible states described by CORE-6D. Results. A number of multivariate regression models were built to predict values for the 33 health states of CORE-6D, using the Rasch logit value of the emotional state and the response level of the physical item as independent variables. A cubic model with high predictive value (adjusted R2 = 0.990) was selected to predict TTO values for all 729 CORE-6D health states. Conclusion. The CORE-6D preference-based index will enable the assessment of cost-effectiveness of interventions for people with common mental disorders using existing and prospective CORE-OM data sets. The new method for generating states may be useful for other instruments with highly correlated dimensions. PMID:23178639
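The regression step (a cubic in the Rasch logit value predicting TTO values) can be sketched with an ordinary least-squares polynomial fit; the data below are fabricated to exercise the fit and are not CORE-6D valuations:

```python
def fit_cubic(xs, ys):
    """Least-squares cubic fit by solving the normal equations
    (X^T X) b = X^T y for the basis [1, x, x^2, x^3], using Gaussian
    elimination with partial pivoting."""
    A = [[float(sum(x ** (i + j) for x in xs)) for j in range(4)]
         for i in range(4)]
    b = [float(sum(y * x ** i for x, y in zip(xs, ys))) for i in range(4)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 4):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * 4
    for r in range(3, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, 4))) / A[r][r]
    return coef  # [intercept, linear, quadratic, cubic]

# Noiseless samples of 0.9 - 0.1*x - 0.02*x^3 are recovered
# (to rounding error):
xs = [i / 2 for i in range(-6, 7)]
ys = [0.9 - 0.1 * x - 0.02 * x ** 3 for x in xs]
print([round(c, 4) for c in fit_cubic(xs, ys)])
```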

  12. What is reflection? A conceptual analysis of major definitions and a proposal of a five-component model.

    PubMed

    Nguyen, Quoc Dinh; Fernandez, Nicolas; Karsenti, Thierry; Charlin, Bernard

    2014-12-01

    Although reflection is considered a significant component of medical education and practice, the literature does not provide a consensual definition or model for it. Because reflection has taken on multiple meanings, it remains difficult to operationalise. A standard definition and model are needed to improve the development of practical applications of reflection. This study was conducted in order to identify, explore and analyse the most influential conceptualisations of reflection, and to develop a new theory-informed and unified definition and model of reflection. A systematic review was conducted to identify the 15 most cited authors in papers on reflection published during the period from 2008 to 2012. The authors' definitions and models were extracted. An exploratory thematic analysis was carried out and identified seven initial categories. Categories were clustered and reworded to develop an integrative definition and model of reflection, which feature core components that define reflection and extrinsic elements that influence instances of reflection. Following our review and analysis, five core components of reflection and two extrinsic elements were identified as characteristics of the reflective thinking process. Reflection is defined as the process of engaging the self (S) in attentive, critical, exploratory and iterative (ACEI) interactions with one's thoughts and actions (TA), and their underlying conceptual frame (CF), with a view to changing them and a view on the change itself (VC). Our conceptual model consists of the defining core components, supplemented with the extrinsic elements that influence reflection. This article presents a new theory-informed, five-component definition and model of reflection. We believe these have advantages over previous models in terms of helping to guide the further study, learning, assessment and teaching of reflection. © 2014 John Wiley & Sons Ltd.

  13. A model of heat flow in the sheep exposed to high levels of solar radiation.

    PubMed

    Vera, R R; Koong, L J; Morris, J G

    1975-08-01

    The fleece is an important component in thermoregulation of sheep exposed to high levels of solar radiation. A model written in CSMP has been developed which represents the flow of energy between the sheep and its environment. This model is based on a set of differential equations which describe the flux of heat between the components of the system--fleece, tip, skin, body and environment. It requires as input parameters location, date, time of day, temperature, relative humidity, cloud cover, wind movement, animal weight and linear measurements and fleece length. At each integration interval incoming solar radiation and its components, the heat arising from the animal's metabolism and the heat exchange by long-wave radiation, convection, conduction and evaporative cooling are computed. Temperatures at the fleece tip, skin and body core are monitored.
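The structure described (a set of differential equations for heat flux between fleece, skin, body and environment, evaluated at each integration interval) can be sketched in miniature with two nodes and forward Euler. All parameter values here are invented for illustration and are not from the paper, which was written in CSMP and also includes radiation and evaporative-cooling terms:

```python
def simulate(T_env, hours=24.0, dt=1e-3):
    """Forward-Euler integration of a toy two-node heat balance:
    body core (metabolic heat in, conduction out through the fleece)
    and fleece (conduction in from the core, convection out to the
    environment). Units: degC, W, Wh/degC; dt is in hours."""
    T_core, T_fleece = 39.0, 30.0   # initial temperatures, degC
    C_core, C_fleece = 3.5, 0.2    # heat capacities, Wh/degC (assumed)
    k_cf, k_fe = 2.0, 5.0          # conductances, W/degC (assumed)
    Q_met = 4.0                    # metabolic heat production, W (assumed)
    for _ in range(int(hours / dt)):
        q_cf = k_cf * (T_core - T_fleece)   # core -> fleece flux
        q_fe = k_fe * (T_fleece - T_env)    # fleece -> environment flux
        T_core += dt * (Q_met - q_cf) / C_core
        T_fleece += dt * (q_cf - q_fe) / C_fleece
    return T_core, T_fleece

# At steady state all 4 W of metabolic heat flows out, so
# T_fleece -> T_env + 4/5 and T_core -> T_fleece + 4/2.
print(simulate(35.0))
```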

  14. Bringing Back the Social Affordances of the Paper Memo to Aerospace Systems Engineering Work

    NASA Technical Reports Server (NTRS)

    Davidoff, Scott; Holloway, Alexandra

    2014-01-01

    Model-based systems engineering (MBSE) is a relatively new field that brings together the interdisciplinary study of technological components of a project (systems engineering) with a model-based ontology to express the hierarchical and behavioral relationships between the components (computational modeling). Despite the compelling promises of the benefits of MBSE, such as improved communication and productivity due to an underlying language and data model, we observed hesitation to its adoption at the NASA Jet Propulsion Laboratory. To investigate, we conducted a six-month ethnographic field investigation and needs validation with 19 systems engineers. This paper contributes our observations of a generational shift in one of JPL's core technologies. We report on a cultural misunderstanding between communities of practice that bolsters the existing technology drag. Given the high cost of failure, we springboard our observations into a design hypothesis - an intervention that blends the social affordances of the narrative-based work flow with the rich technological advantages of explicit data references and relationships of the model-based approach. We provide a design rationale, and the results of our evaluation.

  15. CHEMICAL AND PHYSICAL CHARACTERIZATION OF COLLAPSING LOW-MASS PRESTELLAR DENSE CORES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hincelin, U.; Commerçon, B.; Wakelam, V.

    The first hydrostatic core, also called the first Larson core, is one of the first steps in low-mass star formation as predicted by theory. With recent and future high-performance telescopes, the details of these first phases are becoming accessible, and observations may confirm theory and even present new challenges for theoreticians. In this context, from a theoretical point of view, we study the chemical and physical evolution of the collapse of prestellar cores until the formation of the first Larson core, in order to better characterize this early phase in the star formation process. We couple a state-of-the-art hydrodynamical model with full gas-grain chemistry, using different assumptions for the magnetic field strength and orientation. We extract the different components of each collapsing core (i.e., the central core, the outflow, the disk, the pseudodisk, and the envelope) to highlight their specific physical and chemical characteristics. Each component often presents a specific physical history, as well as a specific chemical evolution. For some species, the components can clearly be differentiated. The different core models can also be chemically differentiated. Our simulation suggests that some chemical species act as tracers of the different components of a collapsing prestellar dense core, and as tracers of the magnetic field characteristics of the core. From this result, we pinpoint promising key chemical species to be observed.

  16. Reactive Aggregate Model Protecting Against Real-Time Threats

    DTIC Science & Technology

    2014-09-01

    on the underlying functionality of three core components.
    • MS SQL Server 2008 backend database.
    • Microsoft IIS running on Windows Server 2008...services.
    The capstone tested a Linux-based Apache web server with the following software implementations:
    • MySQL as a Linux-based backend server for...malicious compromise.
    1. Assumptions
    • GINA could connect to a backend MS SQL database through proper configuration of DotNetNuke.
    • GINA had access

  17. Tungsten isotope evidence that mantle plumes contain no contribution from the Earth's core

    NASA Astrophysics Data System (ADS)

    Scherstén, Anders; Elliott, Tim; Hawkesworth, Chris; Norman, Marc

    2004-01-01

    Osmium isotope ratios provide important constraints on the sources of ocean-island basalts, but two very different models have been put forward to explain such data. One model interprets 187Os-enrichments in terms of a component of recycled oceanic crust within the source material. The other model infers that interaction of the mantle with the Earth's outer core produces the isotope anomalies and, as a result of coupled 186Os-187Os anomalies, puts time constraints on inner-core formation. Like osmium, tungsten is a siderophile (`iron-loving') element that preferentially partitioned into the Earth's core during core formation but is also `incompatible' during mantle melting (it preferentially enters the melt phase), which makes it further depleted in the mantle. Tungsten should therefore be a sensitive tracer of core contributions in the source of mantle melts. Here we present high-precision tungsten isotope data from the same set of Hawaiian rocks used to establish the previously interpreted 186Os-187Os anomalies and on selected South African rocks, which have also been proposed to contain a core contribution. None of the samples that we have analysed have a negative tungsten isotope value, as predicted from the core-contribution model. This rules out a simple core-mantle mixing scenario and suggests that the radiogenic osmium in ocean-island basalts can better be explained by the source of such basalts containing a component of recycled crust.

  18. The development of a healing model of care for an Indigenous drug and alcohol residential rehabilitation service: a community-based participatory research approach.

    PubMed

    Munro, Alice; Shakeshaft, Anthony; Clifford, Anton

    2017-12-04

    Given the well-established evidence of disproportionately high rates of substance-related morbidity and mortality after release from incarceration for Indigenous Australians, access to comprehensive, effective and culturally safe residential rehabilitation treatment will likely assist in reducing recidivism to both prison and substance dependence for this population. In the absence of methodologically rigorous evidence, the delivery of Indigenous drug and alcohol residential rehabilitation services vary widely, and divergent views exist regarding the appropriateness and efficacy of different potential treatment components. One way to increase the methodological quality of evaluations of Indigenous residential rehabilitation services is to develop partnerships with researchers to better align models of care with the client's, and the community's, needs. An emerging research paradigm to guide the development of high quality evidence through a number of sequential steps that equitably involves services, stakeholders and researchers is community-based participatory research (CBPR). The purpose of this study is to articulate an Indigenous drug and alcohol residential rehabilitation service model of care, developed in collaboration between clients, service providers and researchers using a CBPR approach. This research adopted a mixed methods CBPR approach to triangulate collected data to inform the development of a model of care for a remote Indigenous drug and alcohol residential rehabilitation service. Four iterative CBPR steps of research activity were recorded during the 3-year research partnership. As a direct outcome of the CBPR framework, the service and researchers co-designed a Healing Model of Care that comprises six core treatment components, three core organisational components and is articulated in two program logics. 
The program logics were designed to specifically align each component and outcome with the mechanism of change for the client or organisation, to improve data collection and program evaluation. The description of the CBPR process and the Healing Model of Care offers one possible approach to providing better care for the large and growing population of Indigenous people with substance.

  19. Free energy change of a dislocation due to a Cottrell atmosphere

    DOE PAGES

    Sills, R. B.; Cai, W.

    2018-03-07

    The free energy reduction of a dislocation due to a Cottrell atmosphere of solutes is computed using a continuum model. In this work, we show that the free energy change is composed of near-core and far-field components. The far-field component can be computed analytically using the linearized theory of solid solutions. Near the core the linearized theory is inaccurate, and the near-core component must be computed numerically. The influence of interactions between solutes in neighbouring lattice sites is also examined using the continuum model. We show that this model is able to reproduce atomistic calculations of the nickel–hydrogen system, predicting hydride formation on dislocations. The formation of these hydrides leads to dramatic reductions in the free energy. Lastly, the influence of the free energy change on a dislocation’s line tension is examined by computing the equilibrium shape of a dislocation shear loop and the activation stress for a Frank–Read source using discrete dislocation dynamics.

  20. Free energy change of a dislocation due to a Cottrell atmosphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sills, R. B.; Cai, W.

    The free energy reduction of a dislocation due to a Cottrell atmosphere of solutes is computed using a continuum model. In this work, we show that the free energy change is composed of near-core and far-field components. The far-field component can be computed analytically using the linearized theory of solid solutions. Near the core the linearized theory is inaccurate, and the near-core component must be computed numerically. The influence of interactions between solutes in neighbouring lattice sites is also examined using the continuum model. We show that this model is able to reproduce atomistic calculations of the nickel–hydrogen system, predicting hydride formation on dislocations. The formation of these hydrides leads to dramatic reductions in the free energy. Lastly, the influence of the free energy change on a dislocation’s line tension is examined by computing the equilibrium shape of a dislocation shear loop and the activation stress for a Frank–Read source using discrete dislocation dynamics.

  1. Free energy change of a dislocation due to a Cottrell atmosphere

    NASA Astrophysics Data System (ADS)

    Sills, R. B.; Cai, W.

    2018-06-01

    The free energy reduction of a dislocation due to a Cottrell atmosphere of solutes is computed using a continuum model. We show that the free energy change is composed of near-core and far-field components. The far-field component can be computed analytically using the linearized theory of solid solutions. Near the core the linearized theory is inaccurate, and the near-core component must be computed numerically. The influence of interactions between solutes in neighbouring lattice sites is also examined using the continuum model. We show that this model is able to reproduce atomistic calculations of the nickel-hydrogen system, predicting hydride formation on dislocations. The formation of these hydrides leads to dramatic reductions in the free energy. Finally, the influence of the free energy change on a dislocation's line tension is examined by computing the equilibrium shape of a dislocation shear loop and the activation stress for a Frank-Read source using discrete dislocation dynamics.
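
    Schematically, the decomposition described in this abstract can be written as follows (the notation here is illustrative, not the authors'):

```latex
\Delta F_{\mathrm{tot}}
  = \underbrace{\Delta F_{\mathrm{near\text{-}core}}}_{\text{computed numerically}}
  + \underbrace{\Delta F_{\mathrm{far\text{-}field}}}_{\text{analytic, linearized solid-solution theory}}
```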

  2. The Role of Goal Pursuit in the Interaction between Psychosocial Work Environment and Occupational Well-Being

    ERIC Educational Resources Information Center

    Hyvonen, Katriina; Feldt, Taru; Tolvanen, Asko; Kinnunen, Ulla

    2010-01-01

    The relation of the core components of the Effort-Reward Imbalance model (ERI; Siegrist, 1996) to goal pursuit was investigated. Goal pursuit was studied through categories of goal contents--competency, progression, well-being, job change, job security, organization, finance, or no work goal--based on the personal work goals of managers (Hyvonen,…

  3. DCMIP2016: a review of non-hydrostatic dynamical core design and intercomparison of participating models

    NASA Astrophysics Data System (ADS)

    Ullrich, Paul A.; Jablonowski, Christiane; Kent, James; Lauritzen, Peter H.; Nair, Ramachandran; Reed, Kevin A.; Zarzycki, Colin M.; Hall, David M.; Dazlich, Don; Heikes, Ross; Konor, Celal; Randall, David; Dubos, Thomas; Meurdesoif, Yann; Chen, Xi; Harris, Lucas; Kühnlein, Christian; Lee, Vivian; Qaddouri, Abdessamad; Girard, Claude; Giorgetta, Marco; Reinert, Daniel; Klemp, Joseph; Park, Sang-Hun; Skamarock, William; Miura, Hiroaki; Ohno, Tomoki; Yoshida, Ryuji; Walko, Robert; Reinecke, Alex; Viner, Kevin

    2017-12-01

    Atmospheric dynamical cores are a fundamental component of global atmospheric modeling systems and are responsible for capturing the dynamical behavior of the Earth's atmosphere via numerical integration of the Navier-Stokes equations. These systems have existed in one form or another for over half a century, with the earliest discretizations having now evolved into a complex ecosystem of algorithms and computational strategies. In essence, no two dynamical cores are alike, and their individual successes suggest that no perfect model exists. To better understand modern dynamical cores, this paper aims to provide a comprehensive review of 11 non-hydrostatic dynamical cores, drawn from modeling centers and groups that participated in the 2016 Dynamical Core Model Intercomparison Project (DCMIP) workshop and summer school. For each system, the review covers the choice of model grid, variable placement, vertical coordinate, prognostic equations, temporal discretization, and the diffusion, stabilization, filters, and fixers employed.

  4. Discovery and Broad Relevance May Be Insignificant Components of Course-Based Undergraduate Research Experiences (CUREs) for Non-Biology Majors.

    PubMed

    Ballen, Cissy J; Thompson, Seth K; Blum, Jessamina E; Newstrom, Nicholas P; Cotner, Sehoya

    2018-01-01

    Course-based undergraduate research experiences (CUREs) are a type of laboratory learning environment associated with a science course, in which undergraduates participate in novel research. According to Auchincloss et al. (CBE Life Sci Educ 2014; 13:29-40), CUREs are distinct from other laboratory learning environments because they possess five core design components, and while national calls to improve STEM education have led to an increase in CURE programs, less work has specifically focused on which core components are critical to achieving desired student outcomes. Here we use a backward elimination experimental design to test the importance of two CURE components for a population of non-biology majors: the experience of discovery and the production of data broadly relevant to the scientific or local community. We found nonsignificant impacts of either laboratory component on students' academic performance, science self-efficacy, sense of project ownership, and perceived value of the laboratory experience. Our results challenge the assumption that all core components of CUREs are essential to achieve positive student outcomes when applied at scale.

  5. Galactic cold cores. VII. Filament formation and evolution: Methods and observational constraints

    NASA Astrophysics Data System (ADS)

    Rivera-Ingraham, A.; Ristorcelli, I.; Juvela, M.; Montillaud, J.; Men'shchikov, A.; Malinen, J.; Pelkonen, V.-M.; Marston, A.; Martin, P. G.; Pagani, L.; Paladini, R.; Paradis, D.; Ysard, N.; Ward-Thompson, D.; Bernard, J.-P.; Marshall, D. J.; Montier, L.; Tóth, L. V.

    2016-06-01

    Context. The association of filaments with protostellar objects has made these structures a priority target in star formation studies. However, little is known about the link between filament properties and their local environment. Aims: The datasets from the Herschel Galactic Cold Cores key programme allow for a statistical study of filaments with a wide range of intrinsic and environmental characteristics. Characterisation of this sample can therefore be used to identify key physical parameters and quantify the role of the environment in the formation of supercritical filaments. These results are necessary to constrain theoretical models of filament formation and evolution. Methods: Filaments were extracted from fields at distances D < 500 pc with the getfilaments algorithm and characterised according to their column density profiles and intrinsic properties. Each profile was fitted with a beam-convolved Plummer-like function, and the filament structure was quantified based on the relative contributions from the filament "core", represented by a Gaussian, and the "wing" component, dominated by the power-law behaviour of the Plummer-like function. These filament parameters were examined for populations associated with different background levels. Results: Filaments increase their core (M_line,core) and wing (M_line,wing) contributions while increasing their total linear mass density (M_line,tot). Both components appear to be linked to the local environment, with filaments in higher backgrounds having systematically more massive M_line,core and M_line,wing. This dependence on the environment supports an accretion-based model of filament evolution in the local neighbourhood (D ≤ 500 pc). Structures located in the highest backgrounds develop the highest central A_V, M_line,core, and M_line,wing as M_line,tot increases with time, favoured by the local availability of material and the enhanced gravitational potential. Our results indicate that filaments acquiring a significantly massive central region with M_line,core ≳ M_crit/2 may become supercritical and form stars. This translates into a need for filaments to become at least moderately self-gravitating to undergo localised star formation or become star-forming filaments. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
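
    The core/wing decomposition described above can be sketched numerically. The following Python fragment evaluates a Gaussian "core" plus a Plummer-like "wing" and compares their contributions to the linear mass density; all parameter values are invented for illustration, and no beam convolution is applied in this sketch:

```python
import numpy as np

def gaussian_core(r, n0, sigma):
    """Gaussian inner part of the profile (the "core" component)."""
    return n0 * np.exp(-r**2 / (2 * sigma**2))

def plummer_wing(r, n0, r_flat, p):
    """Plummer-like profile whose power-law tail dominates the "wing"."""
    return n0 / (1.0 + (r / r_flat) ** 2) ** ((p - 1) / 2)

r = np.linspace(-1.0, 1.0, 2001)            # offsets from the crest (pc)
dr = r[1] - r[0]
core = gaussian_core(r, n0=5.0, sigma=0.05)
wing = plummer_wing(r, n0=1.0, r_flat=0.1, p=2.0)

# Line-mass contributions are proportional to the areas under each component.
m_core = core.sum() * dr
m_wing = wing.sum() * dr
frac_core = m_core / (m_core + m_wing)
```

    With these made-up parameters the Gaussian core and the power-law wing contribute comparable line masses, which is the kind of balance the M_line,core versus M_line,wing comparison quantifies.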

  6. A systematic review and qualitative analysis to inform the development of a new emergency department-based geriatric case management model.

    PubMed

    Sinha, Samir K; Bessman, Edward S; Flomenbaum, Neal; Leff, Bruce

    2011-06-01

    We inform the future development of a new geriatric emergency management practice model. We perform a systematic review of the existing evidence for emergency department (ED)-based case management models designed to improve the health, social, and health service utilization outcomes for noninstitutionalized older patients within the context of an index ED visit. This was a systematic review of English-language articles indexed in MEDLINE and CINAHL (1966 to 2010), describing ED-based case management models for older adults. Bibliographies of the retrieved articles were reviewed to identify additional references. A systematic qualitative case study analytic approach was used to identify the core operational components and outcome measures of the described clinical interventions. The authors of the included studies were also invited to verify our interpretations of their work. The determined patterns of component adherence were then used to postulate the relative importance and effect of the presence or absence of a particular component in influencing the overall effectiveness of their respective interventions. Eighteen of 352 studies (reported in 20 articles) met study criteria. Qualitative analyses identified 28 outcome measures and 8 distinct model characteristic components that included having an evidence-based practice model, nursing clinical involvement or leadership, high-risk screening processes, focused geriatric assessments, the initiation of care and disposition planning in the ED, interprofessional and capacity-building work practices, post-ED discharge follow-up with patients, and evaluation and monitoring processes. Of the 15 positive study results, 6 had all 8 characteristic components and 9 were found to be lacking at least 1 component. Two studies with positive results lacked 2 characteristic components and none lacked more than 2 components. 
Of the 3 studies with negative results demonstrating no positive effects based on any outcome tested, one lacked 2, one lacked 3, and one lacked 4 of the 8 model components. Successful models of ED-based case management models for older adults share certain key characteristics. This study builds on the emerging literature in this area and leverages the differences in these models and their associated outcomes to support the development of an evidence-based normative and effective geriatric emergency management practice model designed to address the special care needs and thereby improve the health and health service utilization outcomes of older patients. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.

  7. AN ADVANCED LEAKAGE SCHEME FOR NEUTRINO TREATMENT IN ASTROPHYSICAL SIMULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perego, A.; Cabezón, R. M.; Käppeli, R., E-mail: albino.perego@physik.tu-darmstadt.de

    We present an Advanced Spectral Leakage (ASL) scheme to model neutrinos in the context of core-collapse supernovae (CCSNe) and compact binary mergers. Based on previous gray leakage schemes, the ASL scheme computes the neutrino cooling rates by interpolating local production and diffusion rates (relevant in optically thin and thick regimes, respectively) separately for discretized values of the neutrino energy. Neutrino trapped components are also modeled, based on equilibrium and timescale arguments. The better accuracy achieved by the spectral treatment allows a more reliable computation of neutrino heating rates in optically thin conditions. The scheme has been calibrated and tested against Boltzmann transport in the context of Newtonian spherically symmetric models of CCSNe. ASL shows very good qualitative and partial quantitative agreement for key quantities from collapse to a few hundred milliseconds after core bounce. We have proved the adaptability and flexibility of our ASL scheme, coupling it to an axisymmetric Eulerian and to a three-dimensional smoothed particle hydrodynamics code to simulate core collapse. Therefore, the neutrino treatment presented here is ideal for large parameter-space explorations, parametric studies, high-resolution tests, code developments, and long-term modeling of asymmetric configurations, where more detailed neutrino treatments are not available or are currently computationally too expensive.

  8. Density-based cluster algorithms for the identification of core sets

    NASA Astrophysics Data System (ADS)

    Lemke, Oliver; Keller, Bettina G.

    2016-10-01

    The core-set approach is a discretization method for Markov state models of complex molecular dynamics. Core sets are disjoint metastable regions in the conformational space, which need to be known prior to the construction of the core-set model. We propose to use density-based cluster algorithms to identify the cores. We compare three different density-based cluster algorithms: the CNN, the DBSCAN, and the Jarvis-Patrick algorithm. While the core-set models based on the CNN and DBSCAN clustering are well-converged, constructing core-set models based on the Jarvis-Patrick clustering cannot be recommended. In a well-converged core-set model, the number of core sets is up to an order of magnitude smaller than the number of states in a conventional Markov state model with comparable approximation error. Moreover, using the density-based clustering one can extend the core-set method to systems which are not strongly metastable. This is important for the practical application of the core-set method because most biologically interesting systems are only marginally metastable. The key point is to perform a hierarchical density-based clustering while monitoring the structure of the metric matrix which appears in the core-set method. We test this approach on a molecular-dynamics simulation of a highly flexible 14-residue peptide. The resulting core-set models have a high spatial resolution and can distinguish between conformationally similar yet chemically different structures, such as register-shifted hairpin structures.
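
    As a concrete illustration of the density-based clustering step, here is a minimal, self-contained DBSCAN-style sketch in Python (a naive O(n²) implementation with invented toy data; it is not the authors' code, and not the CNN variant they favour):

```python
from math import dist

def dbscan_cores(points, eps, min_pts):
    """Return one label per point: a cluster id, or -1 for noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points) if dist(points[i], q) <= eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # provisional noise
            continue
        labels[i] = cid                 # i is a core point: start a cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cid         # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:      # j is itself dense: expand the cluster
                seeds.extend(jn)
        cid += 1
    return labels

# Two dense groups separated by a sparse point: expect two clusters plus noise.
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
       (5, 5), (5.1, 5), (5, 5.1), (5.1, 5.1),
       (2.5, 2.5)]
labels = dbscan_cores(pts, eps=0.3, min_pts=3)
```

    The dense regions found this way play the role of the disjoint metastable cores; in the paper's setting the clustering is run on conformational-space samples rather than on 2-D toy points.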

  9. The General Assessment of Personality Disorder (GAPD) as an instrument for assessing the core features of personality disorders.

    PubMed

    Berghuis, Han; Kamphuis, Jan H; Verheul, Roel; Larstone, Roseann; Livesley, John

    2013-01-01

    This study presents a psychometric evaluation of the General Assessment of Personality Disorder (GAPD), a self-report questionnaire for assessing the core components of personality dysfunction on the basis of Livesley's (2003) adaptive failure model. Analysis of samples from a general (n = 196) and a clinical population (n = 280) from Canada and the Netherlands, respectively, found a very similar two-component structure consistent with the two core components of personality dysfunction proposed by the model, namely, self-pathology and interpersonal dysfunction. Moreover, the GAPD discriminated between patients diagnosed with and without Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV-TR) personality disorder(s) and demonstrated discriminative power in detecting the severity of personality pathology. Correlations with a DSM-IV symptom measure and a pathological traits model suggest partial conceptual overlap. Although further testing is indicated, the present findings suggest the GAPD is suitable for assessing the core components of personality dysfunction. It may contribute to a two-step integrated assessment of personality pathology that assesses both personality dysfunction and personality traits. The core features of personality disorder can be defined as disorders in the self and in the capacity for interpersonal functioning. A clinically useful operationalization of disordered functioning of personality is needed to determine the maladaptivity of personality traits. An integrated assessment of personality (dys)functioning and personality traits provides a more comprehensive clinical picture of the patient, which may aid treatment planning. Copyright © 2012 John Wiley & Sons, Ltd.

  10. Cardiac rehabilitation delivery model for low-resource settings

    PubMed Central

    Grace, Sherry L; Turk-Adawi, Karam I; Contractor, Aashish; Atrey, Alison; Campbell, Norm; Derman, Wayne; Melo Ghisi, Gabriela L; Oldridge, Neil; Sarkar, Bidyut K; Yeo, Tee Joo; Lopez-Jimenez, Francisco; Mendis, Shanthi; Oh, Paul; Hu, Dayi; Sarrafzadegan, Nizal

    2016-01-01

    Objective Cardiovascular disease is a global epidemic, which is largely preventable. Cardiac rehabilitation (CR) is demonstrated to be cost-effective and efficacious in high-income countries. CR could represent an important approach to mitigate the epidemic of cardiovascular disease in lower-resource settings. The purpose of this consensus statement was to review low-cost approaches to delivering the core components of CR, to propose a testable model of CR which could feasibly be delivered in middle-income countries. Methods A literature review regarding delivery of each core CR component, namely: (1) lifestyle risk factor management (ie, physical activity, diet, tobacco and mental health), (2) medical risk factor management (eg, lipid control, blood pressure control), (3) education for self-management and (4) return to work, in low-resource settings was undertaken. Recommendations were developed based on identified articles, using a modified GRADE approach where evidence in a low-resource setting was available, or consensus where evidence was not. Results Available data on cost of CR delivery in low-resource settings suggests it is not feasible to deliver CR in low-resource settings as is delivered in high-resource ones. Strategies which can be implemented to deliver all of the core CR components in low-resource settings were summarised in practice recommendations, and approaches to patient assessment proffered. It is suggested that CR be adapted by delivery by non-physician healthcare workers, in non-clinical settings. Conclusions Advocacy to achieve political commitment for broad delivery of adapted CR services in low-resource settings is needed. PMID:27181874

  11. Assessing Fidelity of Core Components in a Mindfulness and Yoga Intervention for Urban Youth: Applying the CORE Process

    ERIC Educational Resources Information Center

    Gould, Laura Feagans; Mendelson, Tamar; Dariotis, Jacinda K.; Ancona, Matthew; Smith, Ali S. R.; Gonzalez, Andres A.; Smith, Atman A.; Greenberg, Mark T.

    2014-01-01

    In the past years, the number of mindfulness-based intervention and prevention programs has increased steadily. In order to achieve the intended program outcomes, program implementers need to understand the essential and indispensable components that define a program's success. This chapter describes the complex process of identifying the core…

  12. Approach to numerical safety guidelines based on a core melt criterion. [PWR; BWR]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azarm, M.A.; Hall, R.E.

    1982-01-01

    A plausible approach is proposed for translating a single-level criterion into a set of numerical guidelines. The criterion for core melt probability is used to set numerical guidelines for the various core melt sequences and for system and component unavailabilities. These guidelines can be used as a means for deciding whether a component should be replaced or part of a safety system improved. The approach is applied to estimate a set of numerical guidelines for the various core melt sequences analyzed in the Reactor Safety Study for the Peach Bottom Nuclear Power Plant.
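
    The apportionment idea, translating a single core-melt criterion into per-sequence guidelines, can be illustrated with toy numbers (the criterion and contribution fractions below are invented, not values from the Reactor Safety Study):

```python
# Allowed core-melt probability per reactor-year (illustrative figure).
criterion = 1e-4

# Hypothetical relative contributions of each core-melt sequence class.
contrib = {"transient": 0.5, "small_LOCA": 0.3, "large_LOCA": 0.2}

# Each sequence receives a guideline in proportion to its contribution,
# so the per-sequence guidelines sum back to the top-level criterion.
guidelines = {seq: frac * criterion for seq, frac in contrib.items()}
```

    A component-level decision then amounts to checking whether an unavailability change keeps its sequence within its allocated guideline.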

  13. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  14. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  15. Neutron Radiation Damage Estimation in the Core Structure Base Metal of RSG GAS

    NASA Astrophysics Data System (ADS)

    Santa, S. A.; Suwoto

    2018-02-01

    Radiation damage in the core structure of the Indonesian RSG GAS multipurpose reactor, resulting from reactions of fast and thermal neutrons with the core structure material, was investigated for the first time after almost 30 years of operation. The aim is to analyze the degradation level of the critical components of the RSG GAS reactor so that the remaining life of these components can be estimated. The evaluated remaining life of the critical components will be used as supporting data for the submission of a reactor operating permit extension. Material damage due to neutron radiation was analyzed for the core structure components made of AlMg3 and for the reinforcement bolts of the core structure made of SUS304. Material damage was evaluated for Al and Fe as the base metals of AlMg3 and SUS304, respectively. Neutron fluences were evaluated from neutron flux calculations for the U3Si8-Al equilibrium core operated at a rated power of 15 MW. Calculation results using the CITATION module of the SRAC2006 code show a maximum total neutron flux of 2.537E+14 n/cm2/s and a maximum flux >0.1 MeV of 3.376E+13 n/cm2/s, located at the CIP core center close to the fuel element. After operation up to the end of core configuration #89, the total neutron fluence and the fluence >0.1 MeV reached 9.063E+22 and 1.269E+22 n/cm2, respectively, corresponding to material damage in Al and Fe of 17.91 and 10.06 dpa, respectively. Referring to the lifetime of Al-1100 irradiated in a neutron field with a thermal-to-total flux ratio of 1.7, which can sustain material damage up to 250 dpa, it was concluded that the RSG GAS reactor core structure has used 7.16% of its operating life span. This means the core structure of the RSG GAS reactor is still capable of receiving a total neutron fluence of 9.637E+22 n/cm2 or a fluence >0.1 MeV of 5.672E+22 n/cm2.
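
    The quoted life-span fraction follows from simple arithmetic on the reported damage values, which can be checked directly (values copied from the abstract):

```python
# Life-fraction check: displacement damage reached so far divided by the
# adopted damage limit for Al-1100 in a comparable neutron spectrum.
dpa_reached = 17.91      # Al base metal after core configuration #89
dpa_limit = 250.0        # limit accepted by Al-1100 per the cited reference
life_fraction = dpa_reached / dpa_limit
print(f"{life_fraction:.2%}")   # 7.16%
```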

  16. An active monitoring method for flood events

    NASA Astrophysics Data System (ADS)

    Chen, Zeqiang; Chen, Nengcheng; Du, Wenying; Gong, Jianya

    2018-07-01

    Timely and active detection and monitoring of flood events are critical for quick response, effective decision-making, and disaster reduction. To this end, this paper proposes an active service framework for flood monitoring based on Sensor Web services, together with an active model for the concrete implementation of the framework. The framework consists of two core components: active warning and active planning. The active warning component is based on a publish-subscribe mechanism implemented with the Sensor Event Service. The active planning component employs the Sensor Planning Service to control the execution of schemes and models and to plan the model input data. The active model, called SMDSA, defines a quantitative calculation method for five elements (scheme, model, data, sensor, and auxiliary information) as well as their associations. Experimental monitoring of the Liangzi Lake flood in the summer of 2010 was conducted to test the proposed framework and model. The results show that (1) the proposed active service framework is efficient for timely and automated flood monitoring; (2) the active model, SMDSA, provides a quantitative calculation method that moves flood monitoring from manual intervention to automatic computation; and (3) as much preliminary work as possible should be done to take full advantage of the active service framework and the active model.
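
    The publish-subscribe core of the active warning component can be sketched in a few lines of Python; the class and the flood-level rule below are illustrative stand-ins, not the Sensor Event Service API:

```python
class EventBus:
    """Toy publish-subscribe bus: subscribers register a predicate and a callback."""

    def __init__(self):
        self._subs = []

    def subscribe(self, predicate, callback):
        self._subs.append((predicate, callback))

    def publish(self, reading):
        # Notify only subscribers whose rule fires on this sensor reading.
        for predicate, callback in self._subs:
            if predicate(reading):
                callback(reading)

alerts = []
bus = EventBus()
# Warn when the gauged water level exceeds a (made-up) flood threshold in metres.
bus.subscribe(lambda r: r["water_level_m"] > 21.0,
              lambda r: alerts.append(("FLOOD_WARNING", r["station"])))
bus.publish({"station": "Liangzi-01", "water_level_m": 20.4})  # below threshold
bus.publish({"station": "Liangzi-01", "water_level_m": 21.7})  # triggers warning
```

    In the framework described above, the warning event would then drive the active planning component rather than simply append to a list.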

  17. XLS (c9orf142) is a new component of mammalian DNA double-stranded break repair

    PubMed Central

    Craxton, A; Somers, J; Munnur, D; Jukes-Jones, R; Cain, K; Malewicz, M

    2015-01-01

    Repair of double-stranded DNA breaks (DSBs) in mammalian cells primarily occurs by the non-homologous end-joining (NHEJ) pathway, which requires seven core proteins (Ku70/Ku86, DNA-PKcs (DNA-dependent protein kinase catalytic subunit), Artemis, XRCC4-like factor (XLF), XRCC4 and DNA ligase IV). Here we show using combined affinity purification and mass spectrometry that DNA-PKcs co-purifies with all known core NHEJ factors. Furthermore, we have identified a novel evolutionary conserved protein associated with DNA-PKcs—c9orf142. Computer-based modelling of c9orf142 predicted a structure very similar to XRCC4, hence we have named c9orf142—XLS (XRCC4-like small protein). Depletion of c9orf142/XLS in cells impaired DSB repair consistent with a defect in NHEJ. Furthermore, c9orf142/XLS interacted with other core NHEJ factors. These results demonstrate the existence of a new component of the NHEJ DNA repair pathway in mammalian cells. PMID:25941166

  18. Landscape-scale spatial abundance distributions discriminate core from random components of boreal lake bacterioplankton.

    PubMed

    Niño-García, Juan Pablo; Ruiz-González, Clara; Del Giorgio, Paul A

    2016-12-01

    Aquatic bacterial communities harbour thousands of coexisting taxa. To meet the challenge of discriminating between a 'core' and a sporadically occurring 'random' component of these communities, we explored the spatial abundance distribution of individual bacterioplankton taxa across 198 boreal lakes and their associated fluvial networks (188 rivers). We found that all taxa could be grouped into four distinct categories based on model statistical distributions (normal like, bimodal, logistic and lognormal). The distribution patterns across lakes and their associated river networks showed that lake communities are composed of a core of taxa whose distribution appears to be linked to in-lake environmental sorting (normal-like and bimodal categories), and a large fraction of mostly rare bacteria (94% of all taxa) whose presence appears to be largely random and linked to downstream transport in aquatic networks (logistic and lognormal categories). These rare taxa are thus likely to reflect species sorting at upstream locations, providing a perspective of the conditions prevailing in entire aquatic networks rather than only in lakes. © 2016 John Wiley & Sons Ltd/CNRS.
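
    One way to sort taxa by the shape of their spatial abundance distribution, loosely echoing the four categories above, is to compare skewness before and after a log transform. This Python sketch (with simulated abundances, not the survey data) separates only "normal-like" from "lognormal-like" taxa:

```python
import numpy as np

def skewness(x):
    """Sample skewness (population-std normalisation)."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)

def classify(abundances):
    """Call a taxon lognormal-like if logging its abundances reduces skew."""
    a = np.asarray(abundances, dtype=float)
    raw = abs(skewness(a))
    logged = abs(skewness(np.log(a)))
    return "lognormal-like" if logged < raw else "normal-like"

rng = np.random.default_rng(0)
taxon_common = rng.normal(100.0, 10.0, size=198).clip(min=1.0)  # core-like taxon
taxon_rare = rng.lognormal(mean=0.0, sigma=1.5, size=198)       # rare, transported taxon
label_rare = classify(taxon_rare)
```

    The full analysis fits four candidate statistical distributions per taxon rather than this two-way skewness test, but the principle is the same: the distribution shape across sites is what discriminates core from random community members.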

  19. One-zone synchrotron self-Compton model for the core emission of Centaurus A revisited

    NASA Astrophysics Data System (ADS)

    Petropoulou, M.; Lefa, E.; Dimitrakoudis, S.; Mastichiadis, A.

    2014-02-01

    Aims: We investigate the role of the second synchrotron self-Compton (SSC) photon generation in the multiwavelength emission from the compact regions of sources that are characterized as misaligned blazars. For this, we focus on the nearest high-energy-emitting radio galaxy, Centaurus A, and we revisit the one-zone SSC model for its core emission. Methods: We calculated analytically the peak luminosities of the first and second SSC components by first deriving the steady-state electron distribution in the presence of synchrotron and SSC cooling, and then by using appropriate expressions for the positions of the spectral peaks. We also tested our analytical results against those derived from a numerical code where the full emissivities and cross-sections were used. Results: We show that the one-zone SSC model cannot account for the core emission of Centaurus A above a few GeV, where the peak of the second SSC component appears. We thus propose an alternative explanation for the origin of the high-energy (≳0.4 GeV) and TeV emission, where these are attributed to the radiation emitted by a relativistic proton component through photohadronic interactions with the photons produced by the primary leptonic component. We show that the required proton luminosities are not extremely high, i.e. ~10^43 erg/s, provided that the injection spectra are modelled by a power law with a high value of the lower energy cutoff. Finally, we find that the contribution of the core emitting region of Cen A to the observed neutrino and ultra-high-energy cosmic-ray fluxes is negligible.

  20. IPSL-CM5A2. An Earth System Model designed to run long simulations for past and future climates.

    NASA Astrophysics Data System (ADS)

    Sepulchre, Pierre; Caubel, Arnaud; Marti, Olivier; Hourdin, Frédéric; Dufresne, Jean-Louis; Boucher, Olivier

    2017-04-01

    The IPSL-CM5A model was developed and released in 2013 "to study the long-term response of the climate system to natural and anthropogenic forcings as part of the 5th Phase of the Coupled Model Intercomparison Project (CMIP5)" [Dufresne et al., 2013]. Although this model has also been used for numerous paleoclimate studies, a major limitation was its computation time, which averaged 10 model-years/day on 32 cores of the Curie supercomputer (at the TGCC computing center, France). Such performance was compatible with the experimental designs of intercomparison projects (e.g. CMIP, PMIP) but became limiting for modelling activities involving several multi-millennial experiments, which are typical of Quaternary or "deep-time" paleoclimate studies, in which a fully equilibrated deep ocean is mandatory. Here we present the Earth System model IPSL-CM5A2. Starting from IPSL-CM5A, technical developments have been performed both on separate components and on the coupling system in order to speed up the whole coupled model. These developments include the integration of hybrid MPI-OpenMP parallelization in the LMDz atmospheric component, the use of a new input-output library to perform parallel asynchronous input/output by using computing cores as "IO servers", and the use of a parallel coupling library between the ocean and the atmospheric components. Running on 304 cores, the model can now simulate 55 years per day, opening the way to multi-millennial simulations. Beyond improved computing performance, one aim of setting up IPSL-CM5A2 was also to overcome the cold bias in global surface air temperature (t2m) seen in IPSL-CM5A. We present the tuning strategy used to overcome this bias as well as the main characteristics (including biases) of the pre-industrial climate simulated by IPSL-CM5A2.
Lastly, we briefly present paleoclimate simulations run with this model, for the Holocene and for deeper timescales in the Cenozoic, for which the particular continental configuration was accommodated by a new design of the ocean tripolar grid.
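As a back-of-envelope check, the throughput figures quoted above imply the following scaling (a sketch using only the abstract's numbers; the per-core efficiency metric and the 3000-year spin-up example are my own illustration, not from the paper):

```python
# Scaling check using the throughput figures quoted above.
# IPSL-CM5A:  10 model-years/day on 32 cores
# IPSL-CM5A2: 55 model-years/day on 304 cores
old_years_per_day, old_cores = 10.0, 32
new_years_per_day, new_cores = 55.0, 304

speedup = new_years_per_day / old_years_per_day   # 5.5x faster wall-clock
core_ratio = new_cores / old_cores                # 9.5x more cores
# Throughput per core relative to the original configuration:
relative_efficiency = speedup / core_ratio        # ~0.58

# Wall-clock days for a hypothetical 3000-year deep-ocean spin-up:
days_old = 3000 / old_years_per_day               # 300 days
days_new = 3000 / new_years_per_day               # ~55 days
print(round(relative_efficiency, 2), days_old, round(days_new, 1))
```

The per-core efficiency below 1 is expected: the extra cores buy wall-clock time (the quantity that matters for multi-millennial runs) at some cost in parallel efficiency.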

  1. Self-powered information measuring wireless networks using the distribution of tasks within multicore processors

    NASA Astrophysics Data System (ADS)

    Zhuravska, Iryna M.; Koretska, Oleksandra O.; Musiyenko, Maksym P.; Surtel, Wojciech; Assembay, Azat; Kovalev, Vladimir; Tleshova, Akmaral

    2017-08-01

    The article presents basic approaches to developing self-powered information-measuring wireless networks (SPIM-WNs) for critical applications using the distribution of tasks within multicore processors, based on the interaction of movable components, both for data transmission and for wireless transfer of energy harvested from polymetric sensors. A basic mathematical model for scheduling tasks in multiprocessor systems was extended to schedule and allocate tasks between the cores of a system-on-chip (SoC) in order to increase the energy efficiency of SPIM-WN objects.
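A minimal sketch of the kind of energy-aware task allocation the abstract describes (the greedy heuristic, the per-core energy costs, and the task sizes are all invented for illustration and are not the paper's model):

```python
# Hedged sketch: greedily assign each task to the core where its
# incremental energy cost is lowest, subject to a per-core load limit.
# Core energy costs (nJ/cycle) and task sizes are hypothetical.

def allocate(tasks, cores, load_limit):
    """tasks: list of (name, cycles); cores: dict name -> energy/cycle.
    Returns dict core -> list of assigned task names."""
    load = {c: 0 for c in cores}
    plan = {c: [] for c in cores}
    for name, cycles in sorted(tasks, key=lambda t: -t[1]):  # big tasks first
        candidates = [c for c in cores if load[c] + cycles <= load_limit]
        best = min(candidates, key=lambda c: cores[c] * cycles)
        plan[best].append(name)
        load[best] += cycles
    return plan

cores = {"big": 2.0, "little": 1.0}   # hypothetical energy per cycle
tasks = [("sense", 30), ("encode", 50), ("tx", 40)]
plan = allocate(tasks, cores, load_limit=90)
print(plan)  # the cheap core fills up first; overflow goes to "big"
```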

  2. AGAMA: Action-based galaxy modeling framework

    NASA Astrophysics Data System (ADS)

    Vasiliev, Eugene

    2018-05-01

    The AGAMA library models galaxies. It computes gravitational potentials and forces, performs orbit integration and analysis, and can convert between position/velocity and action/angle coordinates. It offers a framework for finding the best-fit parameters of a model from data, supports self-consistent multi-component galaxy models, and contains useful auxiliary utilities such as various mathematical routines. The core of the library is written in C++, and there are Python and Fortran interfaces. AGAMA may be used as a plugin for the stellar-dynamical software packages galpy (ascl:1411.008), AMUSE (ascl:1107.007), and NEMO (ascl:1010.051).

  3. The Effect of Core Configuration on Thermal Barrier Thermal Performance

    NASA Technical Reports Server (NTRS)

    DeMange, Jeffrey J.; Bott, Robert H.; Druesedow, Anne S.

    2015-01-01

    Thermal barriers and seals are integral components in the thermal protection systems (TPS) of nearly all aerospace vehicles. They are used to minimize heat transfer through interfaces and gaps and protect underlying temperature-sensitive components. The core insulation has a significant impact on both the thermal and mechanical properties of compliant thermal barriers. Proper selection of an appropriate core configuration to mitigate conductive, convective and radiative heat transfer through the thermal barrier is challenging. Additionally, optimization of the thermal barrier for thermal performance may have counteracting effects on mechanical performance. Experimental evaluations have been conducted to better understand the effect of insulation density on permeability and leakage performance, which can significantly impact the resistance to convective heat transfer. The effect of core density on mechanical performance was also previously investigated and will be reviewed. Simple thermal models were also developed to determine the impact of various core parameters on downstream temperatures. An extended understanding of these factors can improve the ability to design and implement these critical TPS components.
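A simple 1-D conduction calculation of the kind the "simple thermal models" above could represent can be sketched as follows (layer thicknesses and conductivities are illustrative values, not the paper's data; convection and radiation are ignored):

```python
# Minimal 1-D steady-conduction sketch for a layered thermal barrier.
# The temperature drop across layers in series is distributed in
# proportion to each layer's thermal resistance R = thickness/conductivity
# (per unit area), so a low-conductivity core dominates the drop.

def interface_temperatures(t_hot, t_cold, layers):
    """layers: list of (thickness_m, conductivity_W_per_mK), hot to cold.
    Returns interface temperatures from the hot side to the cold side."""
    resistances = [t / k for t, k in layers]
    q = (t_hot - t_cold) / sum(resistances)  # heat flux, W/m^2
    temps = [t_hot]
    for r in resistances:
        temps.append(temps[-1] - q * r)
    return temps

# Hypothetical 3-layer barrier: dense facesheet, low-density core, facesheet
layers = [(0.001, 1.0), (0.010, 0.05), (0.001, 1.0)]
temps = interface_temperatures(1200.0, 300.0, layers)
print([round(t, 1) for t in temps])  # the core carries nearly the full drop
```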

  4. Detection of a turbulent gas component associated with a starless core with subthermal turbulence in the Orion A cloud

    NASA Astrophysics Data System (ADS)

    Ohashi, Satoshi; Tatematsu, Ken'ichi; Sanhueza, Patricio; Luong, Quang Nguyen; Hirota, Tomoya; Choi, Minho; Mizuno, Norikazu

    2016-07-01

    We report the detection of a wing component in NH3 emission towards the starless core TUKH122 with subthermal turbulence in the Orion A cloud. This NH3 core is suggested to be on the verge of star formation because the turbulence inside the NH3 core is almost completely dissipated, and also because it is surrounded by CCS, which resembles the prestellar core L1544 in Taurus showing infall motions. Observations were carried out with the Nobeyama 45-m telescope at 0.05 km s-1 velocity resolution. We find that the NH3 line profile consists of two components. The quiescent main component has a small linewidth of 0.3 km s-1 dominated by thermal motion, and the red-shifted wing component has a large linewidth of 1.36 km s-1 representing turbulent motion. These components show kinetic temperatures of 11 and <30 K, respectively. Furthermore, there is a clear velocity offset between the NH3 quiescent gas (Local Standard of Rest velocity = 3.7 km s-1) and the turbulent gas (4.4 km s-1). The centroid velocity of the turbulent gas corresponds to that of the surrounding gas traced by the 13CO (J = 1-0) and CS (J = 2-1) lines. Large Velocity Gradient (LVG) model calculations for CS and CO show that the turbulent gas has a temperature of 8-13 K and an H2 density of ˜104 cm-3, suggesting that the temperature of the turbulent component is also ˜10 K. The detections of both NH3 quiescent and wing components may indicate a sharp transition from the turbulent parent cloud to the quiescent dense core.
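The two-component decomposition described above can be illustrated with a toy line-profile model (the centroids and linewidths are the quoted values; the component amplitudes are assumptions for illustration, not fitted values from the paper):

```python
import math

# Toy two-component NH3 line profile: a narrow quiescent component
# (FWHM 0.30 km/s at v = 3.7 km/s) plus a broad red-shifted wing
# (FWHM 1.36 km/s at v = 4.4 km/s). Amplitudes 1.0 and 0.2 are invented.

def gaussian(v, amp, v0, fwhm):
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    return amp * math.exp(-0.5 * ((v - v0) / sigma) ** 2)

def profile(v):
    return gaussian(v, 1.0, 3.7, 0.30) + gaussian(v, 0.2, 4.4, 1.36)

# Sample at 0.05 km/s resolution, as in the Nobeyama observations
velocities = [3.0 + 0.05 * i for i in range(41)]  # 3.0 .. 5.0 km/s
spectrum = [profile(v) for v in velocities]
peak_v = velocities[spectrum.index(max(spectrum))]
print(round(peak_v, 2))  # the peak sits at the quiescent component
```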

  5. Energy consumption estimation of an OMAP-based Android operating system

    NASA Astrophysics Data System (ADS)

    González, Gabriel; Juárez, Eduardo; Castro, Juan José; Sanz, César

    2011-05-01

    System-level energy optimization of battery-powered multimedia embedded systems has recently become a design goal. The poor operational time of multimedia terminals makes computationally demanding applications impractical in real scenarios. For instance, the so-called smart-phones are currently unable to remain in operation longer than several hours. The OMAP3530 processor basically consists of two processing cores, a General Purpose Processor (GPP) and a Digital Signal Processor (DSP). The former, an ARM Cortex-A8 processor, is intended to run a generic Operating System (OS), while the latter, a DSP core based on the C64x+, has an architecture optimized for video processing. The BeagleBoard, a commercial prototyping board based on the OMAP processor, has been used to test the Android Operating System and measure its performance. The board has 128 MB of SDRAM external memory, 256 MB of Flash external memory and several interfaces. Note that the clock frequencies of the ARM and DSP OMAP cores are 600 MHz and 430 MHz, respectively. This paper describes the energy consumption estimation of the processes and multimedia applications of an Android v1.6 (Donut) OS on the OMAP3530-based BeagleBoard. In addition, tools to communicate between the two processing cores have been employed. A test-bench to profile the OS resource usage has been developed. As far as the energy estimates are concerned, the OMAP processor energy consumption model provided by the manufacturer has been used. The model is basically divided into two energy components. The former, the baseline core energy, describes the energy consumption that is independent of any chip activity. The latter, the module active energy, describes the energy consumed by the active modules depending on resource usage.
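The two-term energy model described above can be sketched as a simple sum of a baseline term and per-module active terms (the power coefficients and utilisation figures below are placeholders, not the manufacturer's published values):

```python
# Hedged sketch of a two-term energy model: total energy = baseline
# energy (independent of chip activity) + per-module active energy
# weighted by resource usage. All numbers are hypothetical.

def total_energy(duration_s, baseline_power_w, modules):
    """modules: dict name -> (active_power_w, utilisation in 0..1).
    Returns energy in joules over the given duration."""
    baseline = baseline_power_w * duration_s
    active = sum(p * u * duration_s for p, u in modules.values())
    return baseline + active

modules = {
    "arm_cortex_a8": (0.35, 0.60),  # hypothetical active power / load
    "c64x_dsp":      (0.25, 0.80),
    "sdram":         (0.10, 0.40),
}
energy_j = total_energy(10.0, 0.12, modules)
print(round(energy_j, 2))  # joules over a 10 s profiling window
```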

  6. Core Competencies for Medical Teachers (KLM)--A Position Paper of the GMA Committee on Personal and Organizational Development in Teaching.

    PubMed

    Görlitz, Anja; Ebert, Thomas; Bauer, Daniel; Grasl, Matthäus; Hofer, Matthias; Lammerding-Köppel, Maria; Fabry, Götz

    2015-01-01

    Recent developments in medical education have created increasing challenges for medical teachers, which is why the majority of German medical schools already offer educational and instructional skills trainings for their teaching staff. However, to date no framework for educational core competencies for medical teachers exists that might serve as guidance for the qualification of the teaching faculty. Against the background of the discussion about competency-based medical education and based upon the international literature, the GMA Committee for Faculty and Organizational Development in Teaching developed a model of core teaching competencies for medical teachers. This framework is designed not only to provide guidance with regard to individual qualification profiles but also to support further advancement of the content, training formats and evaluation of faculty development initiatives and thus, to establish uniform quality criteria for such initiatives in German-speaking medical schools. The model comprises a framework of six competency fields, subdivided into competency components and learning objectives. Additional examples of their use in medical teaching scenarios illustrate and clarify each specific teaching competency. The model has been designed for routine application in medical schools and is expected to be complemented in a future step by additional competencies for teachers with special duties and responsibilities.

  7. Inverse Problems for Nonlinear Delay Systems

    DTIC Science & Technology

    2011-03-15

    population dynamics. We consider the delay between birth and adulthood for neonate pea aphids and present a mathematical model that treats this delay as...which there is currently no known cure. For HIV, the core of the virus is composed of single-stranded viral RNA and protein components. As depicted in...at a CD4 receptor site and the viral core is injected into the cell. Once inside, the protein components enable transcription and integration of the

  8. Ground Vehicle Condition Based Maintenance

    DTIC Science & Technology

    2010-10-04

    Diagnostic Process Map 32 FMEAs Developed : • Diesel Engine • Transmission • Alternators Analysis : • Identify failure modes • Derive design factors and...S&T Initiatives  TARDEC P&D Process Map  Component Testing  ARL CBM Research  AMSAA SDC & Terrain Modeling UNCLASSIFIED 3 CBM+ Overview...UNCLASSIFIED 4 RCM and CBM are core processes for CBM+ System Development • Army Regulation 750-1, 20 Sep 2007, p. 79 - Reliability Centered Maintenance (RCM

  9. Domestic dog roaming patterns in remote northern Australian indigenous communities and implications for disease modelling.

    PubMed

    Hudson, Emily G; Brookes, Victoria J; Dürr, Salome; Ward, Michael P

    2017-10-01

    Although Australia is canine rabies free, the Northern Peninsula Area (NPA), Queensland and other northern Australian communities are at risk of an incursion due to proximity to rabies infected islands of Indonesia and existing disease spread pathways. Northern Australia also has large populations of free-roaming domestic dogs, presenting a risk of rabies establishment and maintenance should an incursion occur. Agent-based rabies spread models are being used to predict potential outbreak size and identify effective control strategies to aid incursion preparedness. A key component of these models is knowledge of dog roaming patterns to inform contact rates. However, a comprehensive understanding of how dogs utilise their environment and the heterogeneity of their movements to estimate contact rates is lacking. Using a novel simulation approach - and GPS data collected from 21 free-roaming domestic dogs in the NPA in 2014 and 2016 - we characterised the roaming patterns within this dog population. Multiple subsets from each individual dog's GPS dataset were selected representing different monitoring durations and a utilisation distribution (UD) and derived core (50%) and extended (95%) home ranges (HR) were estimated for each duration. Three roaming patterns were identified, based on changes in mean HR over increased monitoring durations, supported by assessment of maps of daily UDs of each dog. Stay-at-home dogs consolidated their HR around their owner's residence, resulting in a decrease in mean HR (both core and extended) as monitoring duration increased (median peak core and extended HR 0.336 and 3.696 ha, respectively). Roamer dogs consolidated their core HR but their extended HR increased with longer monitoring durations, suggesting that their roaming patterns based on place of residence were more variable (median peak core and extended HR 0.391 and 6.049 ha, respectively). 
Explorer dogs demonstrated large variability in their roaming patterns, with both core and extended HR increasing as monitoring duration increased (median peak core and extended HR 0.650 and 9.520 ha, respectively). These findings are likely driven by multiple factors that have not been further investigated within this study. Different roaming patterns suggest heterogeneous contact rates between dogs in this population. These findings will be incorporated into disease-spread modelling to more realistically represent roaming patterns and improve model predictions. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. An early geodynamo driven by exsolution of mantle components from Earth’s core

    PubMed Central

    Badro, James; Siebert, Julien; Nimmo, Francis

    2016-01-01

    Terrestrial core formation occurred in the early molten Earth by gravitational segregation of immiscible metal and silicate melts, stripping iron-loving elements from the silicate mantle to the metallic core [1-3] and leaving rock-loving components behind. Here we performed experiments showing that at high enough temperature, Earth's major rock-loving component, magnesium oxide, can also dissolve in core-forming metallic melts. Our data clearly point to a dissolution reaction, and are in agreement with recent DFT calculations [4]. Using core formation models [5], we further show that a high-temperature event during Earth's accretion (such as the Moon-forming giant impact [6]) can contribute significant amounts of magnesium to the early core. As it subsequently cools, the ensuing exsolution [7] of buoyant magnesium oxide generates a substantial amount of gravitational energy. This energy is comparable to, if not significantly higher than, that produced by inner core solidification [8], the primary driver of the Earth's current magnetic field [9-11]. Since the inner core is too young [12] to explain the existence of an ancient field prior to ~1 billion years, our results solve the conundrum posed by the recent paleomagnetic observation [13] of an ancient field at least 3.45 Gyr old. PMID:27437583

  11. A comparison between plaque-based and vessel-based measurement for plaque component using volumetric intravascular ultrasound radiofrequency data analysis.

    PubMed

    Shin, Eun-Seok; Garcia-Garcia, Hector M; Garg, Scot; Serruys, Patrick W

    2011-04-01

    Although percent plaque components derived from plaque-based measurement have traditionally been used in previous studies, the impact of vessel-based measurement on percent plaque components has yet to be studied. The purpose of this study was therefore to correlate percent plaque components derived by plaque- and vessel-based measurement using intravascular ultrasound virtual histology (IVUS-VH). The patient cohort comprised 206 patients with de novo coronary artery lesions who were imaged with IVUS-VH. Age ranged from 35 to 88 years, and 124 patients were male. Whole pullback analysis was used to calculate plaque volume, vessel volume, and absolute and percent volumes of fibrous, fibrofatty, necrotic core, and dense calcium. The plaque and vessel volumes were well correlated (r = 0.893, P < 0.001). There was a strong correlation between percent plaque component volumes calculated by plaque volume and those calculated by vessel volume (fibrous: r = 0.927, P < 0.001; fibrofatty: r = 0.972, P < 0.001; necrotic core: r = 0.964, P < 0.001; dense calcium: r = 0.980, P < 0.001). Plaque and vessel volumes correlated well with the overall plaque burden. For percent plaque component volume, plaque-based measurement was also highly correlated with vessel-based measurement. Therefore, the percent plaque component volume calculated by vessel volume could be used instead of the conventional percent plaque component volume calculated by plaque volume.

  12. Construction of Martian Interior Model

    NASA Astrophysics Data System (ADS)

    Zharkov, V. N.; Gudkova, T. V.

    2005-09-01

    We present the results of extensive numerical modeling of the Martian interior. Yoder et al. in 2003 reported a mean moment of inertia of Mars that was somewhat smaller than the previously used value, together with the Love number k2 obtained from observations of solar tides on Mars. These values of k2 and the mean moment of inertia impose a strong new constraint on the model of the planet. The models of the Martian interior are elastic, while k2 contains both elastic and inelastic components. We thoroughly examined the problem of partitioning the Love number k2 into elastic and inelastic components. The information necessary to construct models of the planet (observation data, choice of a chemical model, and the cosmogonic aspect of the problem) is discussed in the introduction. The model of the planet comprises four submodels: a model of the outer porous layer, a model of the consolidated crust, a model of the silicate mantle, and a core model. We estimated the possible content of hydrogen in the core of Mars. The following parameters were varied while constructing the models: the ferric number of the mantle (Fe#) and the sulfur and hydrogen content in the core. We used experimental data concerning the pressure and temperature dependence of elastic properties of minerals and the information about the behavior of Fe (γ-Fe), FeS, FeH, and their mixtures at high P and T. The model density, pressure, temperature, and compressional and shear velocities are given as functions of the planetary radius. The trial model M13 has the following parameters: Fe# = 0.20; 14 wt % of sulfur in the core; 50 mol % of hydrogen in the core; the core mass is 20.9 wt %; the core radius is 1699 km; the pressure at the mantle-core boundary is 20.4 GPa; the crust thickness is 50 km; Fe is 25.6 wt %; the Fe/Si weight ratio is 1.58; and there is no perovskite layer. The model gives a radius of the Martian core within 1600-1820 km when ≥30 mol % of hydrogen is incorporated into the core. 
When the inelasticity of the Martian interior is taken into account, the Love number k2 increases by several thousandths; therefore, the model radius of the planetary core increases as well. The prognostic value of the Chandler period of Mars is 199.5 days, including one day due to inelasticity. Finally, we calculated parameters of the equilibrium figure of Mars for the M13 model: J2^0 = 1.82 × 10^-3, J4^0 = -7.79 × 10^-6, and e^D_c-m = 1/242.3 (the dynamical flattening of the core-mantle boundary).
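The layered structure of such interior models can be illustrated with a toy mass integration over concentric shells (the radii and densities below are illustrative round numbers, not the M13 parameters):

```python
import math

# Toy four-layer spherical model in the spirit of the Mars models above:
# integrate mass shell by shell and recover the core mass fraction.
# Densities and radii are invented for illustration.

def shell_mass(r_inner, r_outer, density):
    return density * 4.0 / 3.0 * math.pi * (r_outer**3 - r_inner**3)

# (outer radius in m, density in kg/m^3), listed core -> surface
layers = [
    (1.70e6, 7000.0),   # Fe-S-H core
    (3.33e6, 3500.0),   # silicate mantle
    (3.38e6, 2900.0),   # consolidated crust
    (3.39e6, 2000.0),   # porous outer layer
]
masses, r_prev = [], 0.0
for r_outer, rho in layers:
    masses.append(shell_mass(r_prev, r_outer, rho))
    r_prev = r_outer
total = sum(masses)
core_fraction = masses[0] / total
print(round(core_fraction, 3))  # compare the ~0.21 core mass of model M13
```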

  13. X-rays from Eta Carinae

    NASA Technical Reports Server (NTRS)

    Chlebowski, T.; Seward, F. D.; Swank, J.; Szymkowiak, A.

    1984-01-01

    X-ray observations of Eta Car obtained with the high-resolution imager and solid-state spectrometer of the Einstein observatory are reported and interpreted in terms of a two-shell model. A soft component with temperature 5 million K is located in the expanding outer shell, and the hard core component with temperature 80 million K is attributed to the interaction of a high-velocity stellar wind from the massive central object with the inner edge of a dust shell. Model calculations based on comparison with optical and IR data permit estimation of the mass of the outer shell (0.004 solar mass), the mass of the dust shell (3 solar mass), and the total shell expansion energy (less than 2 x 10 to the 49th ergs).

  14. Flow Analysis of a Gas Turbine Low- Pressure Subsystem

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.

    1997-01-01

    The NASA Lewis Research Center is coordinating a project to numerically simulate aerodynamic flow in the complete low-pressure subsystem (LPS) of a gas turbine engine. The numerical model solves the three-dimensional Navier-Stokes flow equations through all components within the low-pressure subsystem as well as the external flow around the engine nacelle. The Advanced Ducted Propfan Analysis Code (ADPAC), which is being developed jointly by Allison Engine Company and NASA, is the Navier-Stokes flow code being used for LPS simulation. The majority of the LPS project is being done under a NASA Lewis contract with Allison. Other contributors to the project are NYMA and the University of Toledo. For this project, the Energy Efficient Engine designed by GE Aircraft Engines is being modeled. This engine includes a low-pressure system and a high-pressure system. An inlet, a fan, a booster stage, a bypass duct, a lobed mixer, a low-pressure turbine, and a jet nozzle comprise the low-pressure subsystem within this engine. The tightly coupled flow analysis evaluates aerodynamic interactions between all components of the LPS. The high-pressure core of this engine is simulated with a one-dimensional thermodynamic cycle code in order to provide boundary conditions to the detailed LPS model. This core engine consists of a high-pressure compressor, a combustor, and a high-pressure turbine. The three-dimensional LPS flow model is coupled to the one-dimensional core engine model to provide a "hybrid" flow model of the complete gas turbine Energy Efficient Engine. The resulting hybrid engine model evaluates the detailed interaction between the LPS components at design and off-design engine operating conditions while considering the lumped-parameter performance of the core engine.

  15. Establishing ecological networks for habitat conservation in the case of Çeşme-Urla Peninsula, Turkey.

    PubMed

    Hepcan, Ciğdem Coşkun; Ozkan, Mehmet Bülent

    2011-03-01

    The study involves the Çeşme-Urla Peninsula, where habitat fragmentation and loss, which threaten biological diversity, have become an urgent matter of concern in recent decades. The study area has been subjected to anthropogenic pressures and alterations due to ongoing and impending land uses. Therefore, ecological networks, as an appropriate way to deal with habitat fragmentation and loss and to improve ecological quality, were identified in the study area as one of the early attempts in the country to maintain its rich biodiversity. In this sense, core areas and ecological linkages as primary components of ecological networks were established on the basis of sustaining natural habitats. A GIS-based model was created to identify core areas and to facilitate the ecological connectivity. The modeling process for core areas and corridors combined 14 and 21 different variables, respectively. The variables were used as environmental inputs in the model, and all analyses were materialized in ArcGIS 9.2 using grid functions of image analysis and spatial analyst modules. As a result, six core areas and 36 corridor alternatives were materialized. Furthermore, some recommendations for the implementation and management of the proposed ecological networks were revealed and discussed.
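A weighted grid overlay of the kind the GIS-based model above performs can be sketched in a few lines (the layers, values, and weights are invented for illustration; the actual study combined 14 variables for core areas and 21 for corridors in ArcGIS):

```python
# Hedged sketch of a weighted-overlay suitability model: each input
# raster holds 0..1 suitability scores, and the output is a weighted
# sum per cell, higher = better core-area candidate. All values invented.

def weighted_overlay(layers, weights):
    """layers: list of equally sized 2-D grids; weights sum to 1."""
    rows, cols = len(layers[0]), len(layers[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for grid, w in zip(layers, weights):
        for i in range(rows):
            for j in range(cols):
                out[i][j] += w * grid[i][j]
    return out

habitat   = [[0.9, 0.2], [0.8, 0.1]]   # hypothetical input rasters
landcover = [[0.7, 0.3], [0.9, 0.2]]
slope     = [[0.5, 0.5], [0.6, 0.4]]
score = weighted_overlay([habitat, landcover, slope], [0.5, 0.3, 0.2])
print(score)  # left-column cells score highest as core-area candidates
```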

  16. Low-Velocity Impact Response of Sandwich Beams with Functionally Graded Core

    NASA Technical Reports Server (NTRS)

    Apetre, N. A.; Sankar, B. V.; Ambur, D. R.

    2006-01-01

    The problem of low-speed impact of a one-dimensional sandwich panel by a rigid cylindrical projectile is considered. The core of the sandwich panel is functionally graded such that the density, and hence its stiffness, vary through the thickness. The problem is a combination of a static contact problem and the dynamic response of the sandwich panel obtained via a simple nonlinear spring-mass model (quasi-static approximation). The variation of core Young's modulus is represented by a polynomial in the thickness coordinate, but the Poisson's ratio is kept constant. The two-dimensional elasticity equations for the plane sandwich structure are solved using a combination of Fourier series and the Galerkin method. The contact problem is solved using the assumed contact stress distribution method. For the impact problem we used a simple dynamic model based on quasi-static behavior of the panel - the sandwich beam was modeled as a combination of two springs, a linear spring to account for the global deflection and a nonlinear spring to represent the local indentation effects. Results indicate that the contact stiffness of the beam with graded core increases, causing the contact stresses and other stress components in the vicinity of contact to increase. However, the values of maximum strains corresponding to the maximum impact load are reduced considerably due to grading of the core properties. For a better comparison, the thickness of the functionally graded cores was chosen such that the flexural stiffness was equal to that of a beam with homogeneous core. The results indicate that functionally graded cores can be used effectively to mitigate or completely prevent impact damage in sandwich composites.
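The two-spring quasi-static idealization described above can be sketched as an energy balance at peak load (all parameter values are invented; the Hertzian 1.5-power contact law is a common assumption for the nonlinear spring, not necessarily the paper's exact form):

```python
# Hedged sketch: a linear spring (global bending, F = kg*x) in series
# with a nonlinear contact spring (F = kc*d**1.5, local indentation).
# At peak load all kinetic energy is stored in the two springs:
#   0.5*m*v**2 = F**2/(2*kg) + (2/5) * kc**(-2/3) * F**(5/3)

def stored_energy(force, kg, kc):
    bending = force**2 / (2.0 * kg)
    d = (force / kc) ** (2.0 / 3.0)   # indentation from F = kc*d**1.5
    contact = 0.4 * kc * d**2.5       # integral of F dd
    return bending + contact

def peak_impact_force(mass, speed, kg, kc, tol=1e-9):
    """Solve the energy balance for the peak force by bisection."""
    target = 0.5 * mass * speed**2
    lo, hi = 0.0, 1.0
    while stored_energy(hi, kg, kc) < target:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if stored_energy(mid, kg, kc) < target else (lo, mid)
    return 0.5 * (lo + hi)

# Hypothetical projectile and panel stiffnesses (SI units)
f_peak = peak_impact_force(mass=0.1, speed=3.0, kg=1.0e5, kc=1.0e8)
print(round(f_peak, 1))  # peak contact force in newtons
```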

  17. Entering the Historical Problem Space: Whole-Class Text-Based Discussion in History Class

    ERIC Educational Resources Information Center

    Reisman, Abby

    2015-01-01

    Background/Context: The Common Core State Standards Initiative reveals how little we understand about the components of effective discussion-based instruction in disciplinary history. Although the case for classroom discussion as a core method for subject matter learning stands on stable theoretical and empirical ground, to date, none of the…

  18. Equation-oriented specification of neural models for simulations

    PubMed Central

    Stimberg, Marcel; Goodman, Dan F. M.; Benichoux, Victor; Brette, Romain

    2013-01-01

    Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator. PMID:24550820
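The equation-oriented idea above, writing model dynamics as plain mathematical text that the simulator parses, can be illustrated with a toy parser (the "dx/dt = expr : unit" syntax loosely imitates Brian 2's equation strings, but this parser is a minimal sketch, not Brian's implementation):

```python
import re

# Minimal sketch: parse textual model descriptions of the form
# "dv/dt = (v_rest - v) / tau : volt" into (variable, rhs, unit) entries,
# rather than selecting from a library of predefined components.

EQ = re.compile(r"d(?P<var>\w+)/dt\s*=\s*(?P<rhs>[^:]+):\s*(?P<unit>\w+)")

def parse_model(text):
    eqs = {}
    for line in text.strip().splitlines():
        m = EQ.match(line.strip())
        if m:
            eqs[m.group("var")] = (m.group("rhs").strip(), m.group("unit"))
    return eqs

model = """
dv/dt = (v_rest - v) / tau : volt
dw/dt = (a*(v - v_rest) - w) / tau_w : amp
"""
eqs = parse_model(model)
print(sorted(eqs))  # state variables found in the description
```

From such a parsed representation, a simulator can generate executable code for different target languages, which is the portability argument the abstract makes.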

  19. Structured Therapeutic Games for Nonoffending Caregivers of Children Who Have Experienced Sexual Abuse.

    PubMed

    Springer, Craig I; Colorado, Giselle; Misurell, Justin R

    2015-01-01

    The game-based cognitive-behavioral therapy group model for nonoffending caregivers utilizes structured therapeutic games to assist parents following child sexual abuse. It is a manualized group treatment approach that integrates evidence-based cognitive-behavioral therapy components with structured play therapy to teach parenting and coping skills, provide psychoeducation, and process trauma. Structured therapeutic games were designed to allow nonoffending caregivers to process their children's abuse experiences and learn the skills necessary to overcome trauma in a nonthreatening, fun, and engaging manner. The implementation of these techniques allows clinicians to address a variety of psychosocial difficulties that are commonly found among nonoffending caregivers of children who have experienced sexual abuse. In addition, structured therapeutic games help caregivers develop strengths and abilities that they can use to help their children cope with abuse and trauma, and facilitate the development of positive posttraumatic growth. Techniques and procedures for treatment delivery along with a description of core components and therapeutic modules are discussed. An illustrative case study is provided.

  20. Toroidal-Core Microinductors Biased by Permanent Magnets

    NASA Technical Reports Server (NTRS)

    Lieneweg, Udo; Blaes, Brent

    2003-01-01

    The designs of microscopic toroidal-core inductors in integrated circuits of DC-to-DC voltage converters would be modified, according to a proposal, by filling the gaps in the cores with permanent magnets that would apply bias fluxes (see figure). The magnitudes and polarities of the bias fluxes would be tailored to counteract the DC fluxes generated by the DC components of the currents in the inductor windings, such that it would be possible to either reduce the sizes of the cores or increase the AC components of the currents in the cores without incurring adverse effects. Reducing the sizes of the cores could save significant amounts of space on integrated circuits because relative to other integrated-circuit components, microinductors occupy large areas - of the order of a square millimeter each. An important consideration in the design of such an inductor is preventing magnetic saturation of the core at current levels up to the maximum anticipated operating current. The requirement to prevent saturation, as well as other requirements and constraints upon the design of the core are expressed by several equations based on the traditional magnetic-circuit approximation. The equations involve the core and gap dimensions and the magnetic-property parameters of the core and magnet materials. The equations show that, other things remaining equal, as the maximum current is increased, one must increase the size of the core to prevent the flux density from rising to the saturation level. By using a permanent bias flux to oppose the flux generated by the DC component of the current, one would reduce the net DC component of flux in the core, making it possible to reduce the core size needed to prevent the total flux density (sum of DC and AC components) from rising to the saturation level. 
Alternatively, one could take advantage of the reduction of the net DC component of flux by increasing the allowable AC component of flux and the corresponding AC component of current. In either case, permanent-magnet material and the slant (if any) and thickness of the gap must be chosen according to the equations to obtain the required bias flux. In modifying the design of the inductor, one must ensure that the inductance is not altered. The simplest way to preserve the original value of inductance would be to leave the gap dimensions unchanged and fill the gap with a permanent- magnet material that, fortuitously, would produce just the required bias flux. A more generally applicable alternative would be to partly fill either the original gap or a slightly enlarged gap with a suitable permanent-magnet material (thereby leaving a small residual gap) so that the reluctance of the resulting magnetic circuit would yield the desired inductance.
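The magnetic-circuit approximation described above can be sketched numerically (the dimensions, permeability, and bias flux density below are invented for illustration; they are not the proposal's design values):

```python
# Hedged magnetic-circuit sketch of the biased toroidal core: flux from
# the winding is Phi = N*I / (R_core + R_gap), with reluctance
# R = length / (mu0 * mu_r * area); a permanent magnet in the gap
# contributes an opposing bias so the net DC flux density stays lower.

MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, H/m

def net_flux_density(n_turns, i_dc, core_len, core_mur, gap_len, area, bias_b):
    r_core = core_len / (MU0 * core_mur * area)
    r_gap = gap_len / (MU0 * area)           # air gap: mu_r = 1
    b_drive = n_turns * i_dc / ((r_core + r_gap) * area)
    return b_drive - bias_b                  # net DC flux density, tesla

# Hypothetical microinductor: 20 turns, 0.5 A DC, 5 mm core path,
# mu_r = 800, 20 um gap, 0.1 mm^2 cross-section
b_unbiased = net_flux_density(20, 0.5, 5e-3, 800.0, 20e-6, 1e-7, 0.0)
b_biased = net_flux_density(20, 0.5, 5e-3, 800.0, 20e-6, 1e-7, 0.25)
print(round(b_unbiased, 3), round(b_biased, 3))
```

With the bias flux opposing the winding's DC flux, the same core stays further from saturation, which is exactly the headroom the proposal would spend on a smaller core or a larger AC current component.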

  1. HPMA-based block copolymers promote differential drug delivery kinetics for hydrophobic and amphiphilic molecules.

    PubMed

    Tomcin, Stephanie; Kelsch, Annette; Staff, Roland H; Landfester, Katharina; Zentel, Rudolf; Mailänder, Volker

    2016-04-15

    We describe a method by which polymeric nanoparticles stabilized with (2-hydroxypropyl)methacrylamide (HPMA)-based block copolymers are used as drug delivery systems for a fast release of a hydrophobic and a controlled release of an amphiphilic molecule. The versatile miniemulsion solvent-evaporation technique was used to prepare polystyrene (PS) as well as poly-d/l-lactide (PDLLA) nanoparticles. Covalently bound or physically adsorbed fluorescent dyes labeled the particles' core and their block copolymer corona. Confocal laser scanning microscopy (CLSM) in combination with flow cytometry measurements was applied to demonstrate the burst release of a fluorescent hydrophobic drug model without the necessity of nanoparticle uptake. In addition, CLSM studies and quantitative calculations using the image processing program Volocity® show the intracellular detachment of the amphiphilic block copolymer from the particles' core after uptake. Our findings offer the possibility to combine the advantages of a fast release for a hydrophobic and a controlled release for an amphiphilic molecule, pointing to the possibility of 'multi-step and multi-site' targeting by one nanocarrier. We describe thoroughly how different components of a nanocarrier end up in cells. This enables different cargos of a nanocarrier to have a consecutive release and delivery of distinct components. Most interestingly, we demonstrate individual kinetics of distinct components of such a system: first, the release of a fluorescent hydrophobic drug model at contact with the cell membrane without the necessity of nanoparticle uptake. Secondly, the intracellular detachment of the amphiphilic block copolymer from the particles' core after uptake occurs. 
This offers the possibility to combine the advantages of a fast release for a hydrophobic substance at the time of interaction of the nanoparticle with the cell surface and a controlled release for an amphiphilic molecule later on therefore pointing to the possibility to a 'multi-step and multisite' targeting by one nanocarrier. We therefore feel that this could be used for many cellular systems where the combined and orchestrated delivery of components is prerequisite in order to obtain the highest efficiency. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  2. Regulation of the X-ray luminosity of clusters of galaxies by cooling and supernova feedback.

    PubMed

    Voit, G M; Bryan, G L

    2001-11-22

    Clusters of galaxies are thought to contain about ten times as much dark matter as baryonic matter. The dark component therefore dominates the gravitational potential of a cluster, and the baryons confined by this potential radiate X-rays with a luminosity that depends mainly on the gas density in the cluster's core. Predictions of the X-rays' properties based on models of cluster formation do not, however, agree with the observations. If the models ignore the condensation of cooling gas into stars and feedback from the associated supernovae, they overestimate the X-ray luminosity because the density of the core gas is too high. An early episode of uniformly distributed supernova feedback could rectify this by heating the uncondensed gas and therefore making it harder to compress into the core, but such a process seems to require an implausibly large number of supernovae. Here we show how radiative cooling of intergalactic gas and subsequent supernova heating conspire to eliminate highly compressible low-entropy gas from the intracluster medium. This brings the core entropy and X-ray luminosities of clusters into agreement with the observations, in a way that depends little on the efficiency of supernova heating in the early Universe.

  3. Galactic cold cores. VIII. Filament formation and evolution: Filament properties in context with evolutionary models

    NASA Astrophysics Data System (ADS)

    Rivera-Ingraham, A.; Ristorcelli, I.; Juvela, M.; Montillaud, J.; Men'shchikov, A.; Malinen, J.; Pelkonen, V.-M.; Marston, A.; Martin, P. G.; Pagani, L.; Paladini, R.; Paradis, D.; Ysard, N.; Ward-Thompson, D.; Bernard, J.-P.; Marshall, D. J.; Montier, L.; Tóth, L. V.

    2017-05-01

    Context. The onset of star formation is intimately linked with the presence of massive unstable filamentary structures. These filaments are therefore key for theoretical models that aim to reproduce the observed characteristics of the star formation process in the Galaxy. Aims: As part of the filament study carried out by the Herschel Galactic Cold Cores Key Programme, here we study and discuss the filament properties presented in GCC VII (Paper I) in context with theoretical models of filament formation and evolution. Methods: A conservatively selected sample of filaments located at a distance D < 500 pc was extracted from the GCC fields with the getfilaments algorithm. The physical structure of the filaments was quantified according to two main components: the central (Gaussian) region of the filament (core component), and the power-law-like region dominating the filament column density profile at larger radii (wing component). The properties and behaviour of these components relative to the total linear mass density of the filament and the column density of its environment were compared with the predictions from theoretical models describing the evolution of filaments under gravity-dominated conditions. Results: The feasibility of a transition from a subcritical to supercritical state by accretion at any given time is dependent on the combined effect of filament intrinsic properties and environmental conditions. Reasonably self-gravitating (high Mline,core) filaments in dense environments (AV ≳ 3 mag) can become supercritical on timescales of t ≲ 1 Myr by accreting mass at constant or decreasing width. The trend of increasing Mline,tot (Mline,core and Mline,wing) and ridge AV with background for the filament population also indicates that the precursors of star-forming filaments evolve coevally with their environment. 
The simultaneous increase of environment and filament AV explains the observed association between dense environments and high Mline,core values, and it argues against filaments remaining in constant single-pressure equilibrium states. The simultaneous growth of filament and background in locations with efficient mass assembly, predicted in numerical models of filaments in collapsing clouds, presents a suitable scenario for the fulfillment of the combined filament mass-environment criterion that is in quantitative agreement with Herschel observations. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
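    The subcritical-to-supercritical transition discussed above is conventionally measured against the critical line mass of an isothermal filament, M_line,crit = 2c_s²/G (the standard Ostriker criterion; the abstract does not quote the formula itself). A minimal sketch, assuming a mean molecular weight of 2.33 for molecular gas:

```python
import math

# Physical constants (SI)
K_B = 1.380649e-23      # Boltzmann constant, J/K
M_H = 1.6735575e-27     # hydrogen atom mass, kg
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
PARSEC = 3.0857e16      # metres per parsec

def critical_line_mass(T, mu=2.33):
    """Critical line mass of an isothermal filament, M_crit = 2*c_s^2/G,
    returned in M_sun/pc. mu = 2.33 is an assumed standard mean
    molecular weight for molecular gas."""
    c_s = math.sqrt(K_B * T / (mu * M_H))   # isothermal sound speed, m/s
    m_line = 2.0 * c_s ** 2 / G             # critical line mass, kg/m
    return m_line * PARSEC / M_SUN

# A 10 K filament becomes supercritical near ~16-17 M_sun/pc
print(round(critical_line_mass(10.0), 1))
```

A filament whose Mline,tot exceeds this threshold is gravitationally unstable; the abstract's point is that accretion from a dense background can push Mline,core across it within roughly a megayear.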

  4. Prospect of Using Numerical Dynamo Model for Prediction of Geomagnetic Secular Variation

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2003-01-01

    Modeling of the Earth's core has reached a level of maturity where the incorporation of observations into the simulations through data assimilation has become feasible. Data assimilation is a method by which observations of a system are combined with a model output (or forecast) to obtain a best guess of the state of the system, called the analysis. The analysis is then used as an initial condition for the next forecast. Through assimilation, we shall not only be able to partially predict the secular variation of the core field, but also use observations to further our understanding of the dynamical state of the Earth's core. One of the first steps in the development of an assimilation system is a comparison between the observations and the model solution. The highly turbulent nature of core dynamics, along with the absence of any regular external forcing and constraint (which occurs in atmospheric dynamics, for example) means that short time comparisons (approx. 1000 years) cannot be made between model and observations. In order to make sensible comparisons, a direct insertion assimilation method has been implemented. In this approach, magnetic field observations at the Earth's surface have been substituted into the numerical model, such that the ratio of the multipole components to the dipole component is adjusted to the observed value at the core-mantle boundary and extended to the interior of the core, while the total magnetic energy remains unchanged. This adjusted magnetic field is then used as the initial field for a new simulation. In this way, a time-tagged simulation is created which can then be compared directly with observations. We present numerical solutions with and without data insertion and discuss their implications for the development of a more rigorous assimilation system.
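    The two constraints of the direct-insertion step (impose the observed multipole-to-dipole ratio, conserve total magnetic energy) can be sketched as follows. `direct_insertion` and the flat coefficient list are illustrative stand-ins, not the authors' code, which operates on spherical-harmonic coefficients at the core-mantle boundary.

```python
import math

def direct_insertion(model_coeffs, obs_ratio):
    """Hypothetical sketch of the direct-insertion adjustment: rescale
    the multipole (non-dipole) coefficients so their amplitude ratio to
    the dipole matches the observed value, then rescale the whole field
    so the total magnetic energy (sum of squares) is unchanged.
    model_coeffs[0] plays the role of the dipole; the rest stand in
    for the higher multipoles."""
    dipole, multipoles = model_coeffs[0], model_coeffs[1:]
    e_total = sum(c * c for c in model_coeffs)
    # impose the observed multipole/dipole amplitude ratio
    m_norm = math.sqrt(sum(c * c for c in multipoles))
    scaled = [c * obs_ratio * abs(dipole) / m_norm for c in multipoles]
    adjusted = [dipole] + scaled
    # rescale everything so the total magnetic energy is conserved
    s = math.sqrt(e_total / sum(c * c for c in adjusted))
    return [c * s for c in adjusted]
```

The adjusted field then serves as the initial condition for the next forecast run.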

  5. The 57Fe hyperfine interactions in human liver ferritin and its iron-polymaltose analogues: the heterogeneous iron core model

    NASA Astrophysics Data System (ADS)

    Oshtrakh, M. I.; Alenkina, I. V.; Semionkin, V. A.

    2016-12-01

    Human liver ferritin and its iron-polymaltose pharmaceutical analogues Ferrum Lek, Maltofer® and Ferrifol® were studied using Mössbauer spectroscopy at 295 and 90 K. The Mössbauer spectra were fitted on the basis of a new model of heterogeneous iron core structure using five quadrupole doublets. These components were related to the corresponding more or less close-packed iron core layers/regions demonstrating some variations in the 57Fe hyperfine parameters for the studied samples.
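    The fitting model implied above, a spectrum built from five quadrupole doublets, can be sketched as a sum of paired Lorentzian absorption lines. The parameter names are illustrative; the actual fits involve sample-specific isomer shifts, splittings, and areas.

```python
def lorentzian(v, center, fwhm, depth):
    """Single Lorentzian absorption line at velocity v (mm/s)."""
    hw = fwhm / 2.0
    return depth * hw ** 2 / ((v - center) ** 2 + hw ** 2)

def doublet(v, shift, splitting, fwhm, depth):
    """Quadrupole doublet: two equal lines centred on the isomer
    shift, separated by the quadrupole splitting."""
    return (lorentzian(v, shift - splitting / 2.0, fwhm, depth)
            + lorentzian(v, shift + splitting / 2.0, fwhm, depth))

def spectrum(v, doublets):
    """Total absorption from a list of (shift, splitting, fwhm, depth)
    tuples -- five of them in the heterogeneous iron core model, one
    per more or less close-packed core layer/region."""
    return sum(doublet(v, *p) for p in doublets)
```

Each fitted doublet's hyperfine parameters are then compared across samples, as in the study above.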

  6. Sulfur- and Oxygen(?)-Rich Cores of Large Icy Satellites

    NASA Astrophysics Data System (ADS)

    McKinnon, W. B.

    2008-12-01

    The internal structures of Jupiter's large moons Io, Europa, Ganymede, and Callisto, and of Titan once Cassini data are sufficiently analyzed, can be usefully compared with those of the terrestrial planets. With sufficient heating we expect not only separation of rock from ice, but also metal from rock. The internally generated dipole magnetic field of Ganymede is perhaps the strongest evidence for this separation, but the gravity field of Io also implies a metallic core. Nevertheless, the evolutionary paths to differentiation taken (or avoided, in the case of Callisto) by these worlds are quite different from those presumed to have governed the differentiation of the terrestrial planets, major asteroids, and iron meteorite parent bodies. Several aspects stand out. Slow accretion in gas-starved protosatellite nebulae implies that neither giant, magma-forming impacts were likely, nor were short-lived radiogenic nuclei in sufficient abundance to drive prompt differentiation. Rather, differentiation would have relied on quotidian long-lived radionuclide heating and/or, in the cases of Io, Europa, and possibly Ganymede, tidal heating in mean-motion resonances. The best a priori estimate for the composition of the "rock" component near Jupiter and Saturn is solar, and it is this material that is fed into the accretion disks around Jupiter and Saturn, across the gaps the planets likely created in the solar nebula. Solar composition rock implies a sulfur abundance close to the Fe-FeS eutectic (at appropriate pressures). The rocky component of these worlds was likely highly oxidized as well, based on carbonaceous meteorite analogues, implying relatively low Mg#s (by terrestrial standards), lower amounts of Fe metal available for core formation, or even oxidized Fe3O4 as a potential core component. The latter may be important, as an Fe-S-O melt wets silicate grains readily, and thus can easily percolate downward, Elsasser style, to form a core. 
Nevertheless, the amount of FeS alone available to form a core may have been considerable, and a picture emerges of large, relatively low-density cores (a far greater proportion of "light alloying elements" than in the Earth's core), and relatively iron-rich rock mantles. Ganymede, and possibly Europa, may even retain residual solid FeS in their rock mantles, depending on the tidal heating history of each. Large, dominantly fluid cores imply enhanced mantle tidal deformation and heating. Published models have claimed that the Galilean satellites are depleted in Fe compared to rock, and in the case of Ganymede, that it is either depleted or enhanced in Fe. Obviously Ganymede cannot be both, and detailed structural models show that the Galilean satellites can be explained in terms of solar composition, once one allows for abundant sulfur and hot (liquid) cores.

  7. E-novo: an automated workflow for efficient structure-based lead optimization.

    PubMed

    Pearce, Bradley C; Langley, David R; Kang, Jia; Huang, Hongwei; Kulkarni, Amit

    2009-07-01

    An automated E-Novo protocol designed as a structure-based lead optimization tool was prepared through Pipeline Pilot with existing CHARMm components in Discovery Studio. A scaffold core having 3D binding coordinates of interest is generated from a ligand-bound protein structural model. Ligands of interest are generated from the scaffold using an R-group fragmentation/enumeration tool within E-Novo, with their cores aligned. The ligand side chains are conformationally sampled and are subjected to core-constrained protein docking, using a modified CHARMm-based CDOCKER method to generate top poses along with CDOCKER energies. In the final stage of E-Novo, a physics-based binding energy scoring function ranks the top ligand CDOCKER poses using a more accurate Molecular Mechanics-Generalized Born with Surface Area method. Correlation of the calculated ligand binding energies with experimental binding affinities was used to validate protocol performance. Inhibitors of Src tyrosine kinase, CDK2 kinase, beta-secretase, factor Xa, HIV protease, and thrombin were used to test the protocol using published ligand crystal structure data within reasonably defined binding sites. In-house Respiratory Syncytial Virus inhibitor data were used as a more challenging test set using a hand-built binding model. Least squares fits for all data sets suggested reasonable validation of the protocol within the context of observed ligand binding poses. The E-Novo protocol provides a convenient all-in-one structure-based design process for rapid assessment and scoring of lead optimization libraries.
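    The final validation step (least-squares correlation of calculated binding energies against experimental affinities) can be sketched independently of the Pipeline Pilot machinery. `least_squares_fit` is an illustrative helper, not part of E-Novo; a real workflow would feed in the energies emitted by the MM-GBSA rescoring stage.

```python
def least_squares_fit(x, y):
    """Ordinary least-squares fit y = a*x + b, plus the Pearson
    correlation coefficient r, as used to check that computed binding
    energies track experimental affinities."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx                       # slope
    b = my - a * mx                     # intercept
    r = sxy / (sxx * syy) ** 0.5        # Pearson correlation
    return a, b, r
```

A slope and correlation near the ideal values indicate the scoring function ranks the library in the same order as experiment.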

  8. OMIP contribution to CMIP6: experimental and diagnostic protocol for the physical component of the Ocean Model Intercomparison Project

    NASA Astrophysics Data System (ADS)

    Griffies, Stephen M.; Danabasoglu, Gokhan; Durack, Paul J.; Adcroft, Alistair J.; Balaji, V.; Böning, Claus W.; Chassignet, Eric P.; Curchitser, Enrique; Deshayes, Julie; Drange, Helge; Fox-Kemper, Baylor; Gleckler, Peter J.; Gregory, Jonathan M.; Haak, Helmuth; Hallberg, Robert W.; Heimbach, Patrick; Hewitt, Helene T.; Holland, David M.; Ilyina, Tatiana; Jungclaus, Johann H.; Komuro, Yoshiki; Krasting, John P.; Large, William G.; Marsland, Simon J.; Masina, Simona; McDougall, Trevor J.; Nurser, A. J. George; Orr, James C.; Pirani, Anna; Qiao, Fangli; Stouffer, Ronald J.; Taylor, Karl E.; Treguier, Anne Marie; Tsujino, Hiroyuki; Uotila, Petteri; Valdivieso, Maria; Wang, Qiang; Winton, Michael; Yeager, Stephen G.

    2016-09-01

    The Ocean Model Intercomparison Project (OMIP) is an endorsed project in the Coupled Model Intercomparison Project Phase 6 (CMIP6). OMIP addresses CMIP6 science questions, investigating the origins and consequences of systematic model biases. It does so by providing a framework for evaluating (including assessment of systematic biases), understanding, and improving ocean, sea-ice, tracer, and biogeochemical components of climate and earth system models contributing to CMIP6. Among the WCRP Grand Challenges in climate science (GCs), OMIP primarily contributes to the regional sea level change and near-term (climate/decadal) prediction GCs. OMIP provides (a) an experimental protocol for global ocean/sea-ice models run with a prescribed atmospheric forcing; and (b) a protocol for ocean diagnostics to be saved as part of CMIP6. We focus here on the physical component of OMIP, with a companion paper (Orr et al., 2016) detailing methods for the inert chemistry and interactive biogeochemistry. The physical portion of the OMIP experimental protocol follows the interannual Coordinated Ocean-ice Reference Experiments (CORE-II). Since 2009, CORE-I (Normal Year Forcing) and CORE-II (Interannual Forcing) have become the standard methods to evaluate global ocean/sea-ice simulations and to examine mechanisms for forced ocean climate variability. The OMIP diagnostic protocol is relevant for any ocean model component of CMIP6, including the DECK (Diagnostic, Evaluation and Characterization of Klima experiments), historical simulations, FAFMIP (Flux Anomaly Forced MIP), C4MIP (Coupled Carbon Cycle Climate MIP), DAMIP (Detection and Attribution MIP), DCPP (Decadal Climate Prediction Project), ScenarioMIP, HighResMIP (High Resolution MIP), as well as the ocean/sea-ice OMIP simulations.

  9. A volatile-rich Earth's core inferred from melting temperature of core materials

    NASA Astrophysics Data System (ADS)

    Morard, G.; Andrault, D.; Antonangeli, D.; Nakajima, Y.; Auzende, A. L.; Boulard, E.; Clark, A. N.; Lord, O. T.; Cervera, S.; Siebert, J.; Garbarino, G.; Svitlyk, V.; Mezouar, M.

    2016-12-01

    Planetary cores are mainly constituted of iron and nickel, alloyed with lighter elements (Si, O, C, S or H). Understanding how these elements affect the physical and chemical properties of solid and liquid iron provides stringent constraints on the composition of the Earth's core. In particular, melting curves of iron alloys are key parameters for establishing the temperature profile in the Earth's core, and for assessing the potential occurrence of partial melting at the Core-Mantle Boundary. Core formation models based on metal-silicate equilibration suggest that Si and O are the major light element components [1-4], while the abundance of other elements such as S, C and H is constrained by arguments based on their volatility during planetary accretion [5,6]. Each compositional model implies a specific thermal state for the core, due to the different effect that light elements have on the melting behaviour of Fe. We recently measured melting temperatures in Fe-C and Fe-O systems at high pressures, which complete the data sets available both for pure Fe [7] and other binary alloys [8]. Compositional models with an O- and Si-rich outer core are suggested to be compatible with seismological constraints on density and sound velocity [9]. However, their crystallization temperatures of 3650-4050 K at the CMB pressure of 136 GPa are very close to, if not higher than, the melting temperature of the silicate mantle, and yet mantle melting above the CMB is not a ubiquitous feature. This observation requires significant amounts of volatile elements (S, C or H) in the outer core to further reduce the crystallisation temperature of the core alloy below that of the lower mantle. References: 1. Wood, B. J., et al. Nature 441, 825-833 (2006). 2. Siebert, J., et al. Science 339, 1194-7 (2013). 3. Corgne, A., et al. Earth Planet. Sc. Lett. 288, 108-114 (2009). 4. Fischer, R. A., et al. Geochim. Cosmochim. Acta 167, 177-194 (2015). 5. Dreibus, G. & Palme, H. Geochim. Cosmochim. Acta 60, 1125-1130 (1995). 6. 
McDonough, W. F. Treatise on Geochemistry 2, 547-568 (2003). 7. Anzellini, S., et al. Science 340, 464-6 (2013). 8. Morard, G., et al. Phys. Chem. Miner. 38, 767-776 (2011). 9. Badro, J., et al. Proc. Natl. Acad. Sci. U. S. A. 111, 7542-5 (2014).

  10. Determining Greenland Ice Sheet Accumulation Rates from Radar Remote Sensing

    NASA Technical Reports Server (NTRS)

    Jezek, Kenneth C.

    2002-01-01

    An important component of NASA's Program for Arctic Regional Climate Assessment (PARCA) is a mass balance investigation of the Greenland Ice Sheet. The mass balance is calculated by taking the difference between the areally integrated snow accumulation and the net ice discharge of the ice sheet. Uncertainties in this calculation include the snow accumulation rate, which has traditionally been determined by interpolating data from ice core samples taken from isolated spots across the ice sheet. The sparse data associated with ice cores, juxtaposed against the high spatial and temporal resolution provided by remote sensing, has motivated scientists to investigate relationships between accumulation rate and microwave observations as an option for obtaining spatially contiguous estimates. The objective of this PARCA continuation proposal was to complete an estimate of surface accumulation rate on the Greenland Ice Sheet derived from C-band radar backscatter data compiled in the ERS-1 SAR mosaic of data acquired during September-November 1992. An empirical equation, based on elevation and latitude, is used to determine the mean annual temperature. We examine the influence of accumulation rate and mean annual temperature on C-band radar backscatter using a forward model, which incorporates snow metamorphosis and radar backscatter components. Our model is run over a range of accumulation and temperature conditions. Based on the model results, we generate a look-up table, which uniquely maps the measured radar backscatter and mean annual temperature to accumulation rate. Our results compare favorably with in situ accumulation rate measurements falling within our study area.
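    The look-up table inversion described above (run the forward model over a grid of conditions, then find the entry that best matches a measurement) might be sketched as follows. The toy `forward_model` interface and the nearest-neighbour matching metric are assumptions for illustration, not the authors' implementation.

```python
def build_table(forward_model, accum_values, temp_values):
    """Tabulate the forward model: for each (accumulation, temperature)
    pair on the grid, store the predicted backscatter. forward_model is
    a stand-in for the snow-metamorphosis + backscatter model in the
    text; rows are (predicted_sigma0, temperature, accumulation)."""
    return [(forward_model(a, t), t, a)
            for a in accum_values for t in temp_values]

def invert(table, sigma0, temp):
    """Invert by nearest neighbour: return the accumulation rate of the
    table entry whose predicted backscatter and temperature lie closest
    to the measured values."""
    return min(table, key=lambda row: (row[0] - sigma0) ** 2
                                      + (row[1] - temp) ** 2)[2]
```

With the mean annual temperature supplied by the empirical elevation/latitude equation, each SAR backscatter pixel maps to a single accumulation-rate estimate.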

  11. A Comprehensive Plan for the Long-Term Calibration and Validation of Oceanic Biogeochemical Satellite Data

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B.; McClain, Charles R.; Mannino, Antonio

    2007-01-01

    The primary objective of this planning document is to establish a long-term capability for calibrating and validating oceanic biogeochemical satellite data. It is a pragmatic solution to a practical problem based primarily on the lessons learned from prior satellite missions. All of the plan's elements are seen to be interdependent, so a horizontal organizational scheme is anticipated wherein the overall leadership comes from the NASA Ocean Biology and Biogeochemistry (OBB) Program Manager and the entire enterprise is split into two components of equal stature: calibration and validation plus satellite data processing. The detailed elements of the activity are based on the basic tasks of the two main components plus the current objectives of the Carbon Cycle and Ecosystems Roadmap. The former is distinguished by an internal core set of responsibilities and the latter is facilitated through an external connecting-core ring of competed or contracted activities. The core elements for the calibration and validation component include a) publish protocols and performance metrics; b) verify uncertainty budgets; c) manage the development and evaluation of instrumentation; and d) coordinate international partnerships. The core elements for the satellite data processing component are e) process and reprocess multisensor data; f) acquire, distribute, and archive data products; and g) implement new data products. Both components have shared responsibilities for initializing and temporally monitoring satellite calibration. Connecting-core elements include (but are not restricted to) atmospheric correction and characterization, standards and traceability, instrument and analysis round robins, field campaigns and vicarious calibration sites, in situ database, bio-optical algorithm (and product) validation, satellite characterization and vicarious calibration, and image processing software. 
The plan also includes an accountability process, creating a Calibration and Validation Team (to help manage the activity), and a discussion of issues associated with the plan's scientific focus.

  12. SBML Level 3 package: Groups, Version 1 Release 1

    PubMed Central

    Hucka, Michael; Smith, Lucian P.

    2017-01-01

    Biological models often contain components that have relationships with each other, or that modelers want to treat as belonging to groups with common characteristics or shared metadata. The SBML Level 3 Version 1 Core specification does not provide an explicit mechanism for expressing such relationships, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Groups package for SBML Level 3 adds the necessary features to SBML to allow grouping of model components to be expressed. Such groups do not affect the mathematical interpretation of a model, but they do provide a way to add information that can be useful for modelers and software tools. The SBML Groups package enables a modeler to include definitions of groups and nested groups, each of which may be annotated to convey why that group was created, and what it represents. PMID:28187406
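    The grouping semantics described above (named groups, nesting via group-valued members, free-form annotation, no effect on the model's mathematics) can be illustrated with a plain data structure. This is not the libsbml Groups API, just a sketch of the shape of information the package encodes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Group:
    """Illustrative stand-in for an SBML Level 3 Groups <group>: a named
    collection of member ids (which may themselves be group ids, giving
    nesting) plus an annotation saying what the group represents."""
    group_id: str
    members: List[str] = field(default_factory=list)
    annotation: str = ""

def flatten(groups, group_id):
    """Resolve a group to its leaf member ids, following nested groups.
    Grouping never changes the model's maths, so this is purely an
    organizational query."""
    index = {g.group_id: g for g in groups}
    out = []
    for m in index[group_id].members:
        out.extend(flatten(groups, m) if m in index else [m])
    return out
```

For example, a "metabolism" group can contain a nested "glycolysis" group alongside individual reaction ids, and tools can flatten it when they need the concrete members.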

  13. X-ray edge singularity in resonant inelastic x-ray scattering (RIXS)

    NASA Astrophysics Data System (ADS)

    Markiewicz, Robert; Rehr, John; Bansil, Arun

    2013-03-01

    We develop a lattice model based on the theory of Mahan, Nozières, and de Dominicis for x-ray absorption to explore the effect of the core hole on the RIXS cross section. The dominant part of the spectrum can be described in terms of the dynamic structure function S (q , ω) dressed by matrix element effects, but there is also a weak background associated with multi-electron-hole pair excitations. The model reproduces the decomposition of the RIXS spectrum into well- and poorly-screened components. An edge singularity arises at the threshold of both components. Fairly large lattice sizes are required to describe the continuum limit. Supported by DOE Grant DE-FG02-07ER46352 and facilitated by the DOE CMCSN, under grant number DE-SC0007091.

  14. Evaluating core technology capacity based on an improved catastrophe progression method: the case of automotive industry

    NASA Astrophysics Data System (ADS)

    Zhao, Shijia; Liu, Zongwei; Wang, Yue; Zhao, Fuquan

    2017-01-01

    Subjectivity usually causes large fluctuations in evaluation results. Many scholars attempt to establish new mathematical methods to make evaluation results consistent with actual objective situations. An improved catastrophe progression method (ICPM) is constructed to overcome the defects of the original method. The improved method combines the merits of principal component analysis (information coherence) and the catastrophe progression method (no subjective index weights), and so has the advantage of highly objective comprehensive evaluation. Through systematic analysis of the factors influencing the automotive industry's core technology capacity, a comprehensive evaluation model is established according to the different roles that different indices play in evaluating the overall goal within a hierarchical structure. Moreover, ICPM is applied to evaluate the automotive industry's core technology capacity for seven representative countries, which demonstrates the effectiveness of the method.
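    The baseline that ICPM improves on, the ordinary catastrophe progression method, normalizes each control variable by a progressively higher root and aggregates without explicit index weights. A sketch of that baseline follows (the paper's PCA-based improvement is not reproduced here):

```python
def catastrophe_membership(values, complementary=True):
    """Standard catastrophe-progression normalization: the i-th control
    variable (already normalized to [0, 1] and ordered by importance)
    is raised to the power 1/(i + 2): square root, cube root, fourth
    root, and so on. Complementary indicators are averaged;
    non-complementary ones take the minimum. No subjective index
    weights appear anywhere."""
    progressed = [v ** (1.0 / (i + 2)) for i, v in enumerate(values)]
    if complementary:
        return sum(progressed) / len(progressed)
    return min(progressed)
```

Two normalized indicators correspond to a cusp catastrophe, three to a swallowtail, four to a butterfly; the hierarchy in the paper applies this bottom-up to reach the overall capacity score.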

  15. Coarse-grained component concurrency in Earth system modeling: parallelizing atmospheric radiative transfer in the GFDL AM3 model using the Flexible Modeling System coupling framework

    NASA Astrophysics Data System (ADS)

    Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac

    2016-10-01

    Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. 
Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
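    The coarse-grained concurrency idea above (radiative transfer running in parallel with a composite of every other atmospheric component, with the coupler merging results) can be caricatured with threads. FMS actually distributes components across processor sets, so this is an analogy only; the component functions and state layout are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def radiative_transfer(state):
    """Stand-in radiation component: returns a heating tendency."""
    return {"temp_tendency_radiation": state["temp"] * 0.001}

def other_atmos_physics(state):
    """Stand-in composite of all remaining atmospheric components
    (dynamics plus the other physics)."""
    return {"temp_tendency_other": -state["temp"] * 0.0005}

def step(state):
    """One coupled step in the CCC spirit: both components run
    concurrently on the same input state, and their tendencies are
    merged afterwards, so neither blocks the other."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        rad = pool.submit(radiative_transfer, state)
        rest = pool.submit(other_atmos_physics, state)
        tendencies = {**rad.result(), **rest.result()}
    return {"temp": state["temp"] + sum(tendencies.values())}
```

Because radiation reads the same input state as the composite component rather than its output, the two can execute simultaneously, which is the source of the extra scalability discussed above.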

  16. Cardiac rehabilitation delivery model for low-resource settings.

    PubMed

    Grace, Sherry L; Turk-Adawi, Karam I; Contractor, Aashish; Atrey, Alison; Campbell, Norm; Derman, Wayne; Melo Ghisi, Gabriela L; Oldridge, Neil; Sarkar, Bidyut K; Yeo, Tee Joo; Lopez-Jimenez, Francisco; Mendis, Shanthi; Oh, Paul; Hu, Dayi; Sarrafzadegan, Nizal

    2016-09-15

    Cardiovascular disease is a global epidemic, which is largely preventable. Cardiac rehabilitation (CR) is demonstrated to be cost-effective and efficacious in high-income countries. CR could represent an important approach to mitigate the epidemic of cardiovascular disease in lower-resource settings. The purpose of this consensus statement was to review low-cost approaches to delivering the core components of CR and to propose a testable model of CR which could feasibly be delivered in middle-income countries. A literature review regarding delivery of each core CR component, namely: (1) lifestyle risk factor management (ie, physical activity, diet, tobacco and mental health), (2) medical risk factor management (eg, lipid control, blood pressure control), (3) education for self-management and (4) return to work, in low-resource settings was undertaken. Recommendations were developed based on identified articles, using a modified GRADE approach where evidence in a low-resource setting was available, or consensus where evidence was not. Available data on the cost of CR delivery in low-resource settings suggest it is not feasible to deliver CR in low-resource settings as it is delivered in high-resource ones. Strategies which can be implemented to deliver all of the core CR components in low-resource settings were summarised in practice recommendations, and approaches to patient assessment proffered. It is suggested that CR be adapted for delivery by non-physician healthcare workers in non-clinical settings. Advocacy to achieve political commitment for broad delivery of adapted CR services in low-resource settings is needed. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  17. Models of magnetic field generation in partly stable planetary cores: Applications to Mercury and Saturn

    NASA Astrophysics Data System (ADS)

    Christensen, Ulrich R.; Wicht, Johannes

    2008-07-01

    A substantial part of Mercury's iron core may be stably stratified because the temperature gradient is subadiabatic. A dynamo would operate only in a deep sublayer. We show that such a situation arises for a wide range of values for the heat flow and the sulfur content in the core. In Saturn the upper part of the metallic hydrogen core could be stably stratified because of helium depletion. The magnetic field is unusually weak in the case of Mercury and unusually axisymmetric at Saturn. We study numerical dynamo models in rotating spherical shells with a stable outer region. The control parameters are chosen such that the magnetic Reynolds number is in the range of expected Mercury values. Because of its slow rotation, Mercury may be in a regime where the dipole contribution to the internal magnetic field is weak. Most of our models are in this regime, where the dynamo field consists mainly of rapidly varying higher multipole components. They can hardly pass the stable conducting layer because of the skin effect. The weak low-degree components vary more slowly and control the structure of the field outside the core, whose strength matches the observed field strength at Mercury. In some models the axial dipole dominates at the planet's surface and in others the axial quadrupole is dominant. Differential rotation in the stable layer, representing a thermal wind, is important for attenuating non-axisymmetric components in the exterior field. In some models that we relate to Saturn the axial dipole is intrinsically strong inside the dynamo. The surface field strength is much larger than in the other cases, but the stable layer eliminates non-axisymmetric modes. The Messenger and Bepi Colombo space missions can test our predictions that Mercury's field is large-scaled, fairly axisymmetric, and shows no secular variations on the decadal time scale.
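    The skin-effect attenuation invoked above can be estimated with the textbook plane-layer formula: a component of angular frequency ω crossing a conducting layer of thickness d is damped by exp(-d/δ), with skin depth δ = √(2η/ω). The value η ≈ 1 m²/s (a typical magnetic diffusivity for liquid iron) and the plane-layer geometry are assumptions; the paper's models are spherical shells.

```python
import math

SECONDS_PER_YEAR = 3.156e7

def skin_attenuation(period_yr, layer_km, eta=1.0):
    """Attenuation factor exp(-d/delta) for a field component with the
    given period crossing a stably stratified conducting layer of
    thickness d, where delta = sqrt(2*eta/omega) is the skin depth and
    eta is the magnetic diffusivity (assumed ~1 m^2/s here)."""
    omega = 2.0 * math.pi / (period_yr * SECONDS_PER_YEAR)  # rad/s
    delta = math.sqrt(2.0 * eta / omega)                    # skin depth, m
    return math.exp(-layer_km * 1e3 / delta)

# Rapidly varying multipoles are damped far more strongly than slowly
# varying low-degree components crossing the same 100 km stable layer.
fast = skin_attenuation(10.0, 100.0)     # decade-scale variation
slow = skin_attenuation(10000.0, 100.0)  # ten-millennium-scale variation
```

This is why the rapidly varying higher multipoles barely pass the stable layer while the slowly varying low-degree components control the external field.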

  18. Accounting for crustal magnetization in models of the core magnetic field

    NASA Technical Reports Server (NTRS)

    Jackson, Andrew

    1990-01-01

    The problem of determining the magnetic field originating in the earth's core in the presence of remanent and induced magnetization is considered. The effect of remanent magnetization in the crust on satellite measurements of the core magnetic field is investigated. The crust is modelled as a zero-mean stationary Gaussian random process, using an idea proposed by Parker (1988). It is shown that the matrix of second-order statistics is proportional to the Gram matrix, which depends only on the inner products of the appropriate Green's functions, and that at a typical satellite altitude of 400 km the data are correlated out to an angular separation of approximately 15 deg. Accurate and efficient means of calculating the matrix elements are given. It is shown that the variance of measurements of the radial component of a magnetic field due to the crust is expected to be approximately twice that of the horizontal components.

  19. Residue-level global and local ensemble-ensemble comparisons of protein domains.

    PubMed

    Clark, Sarah A; Tronrud, Dale E; Karplus, P Andrew

    2015-09-01

    Many methods of protein structure generation such as NMR-based solution structure determination and template-based modeling do not produce a single model, but an ensemble of models consistent with the available information. Current strategies for comparing ensembles lose information because they use only a single representative structure. Here, we describe the ENSEMBLATOR and its novel strategy to directly compare two ensembles containing the same atoms to identify significant global and local backbone differences between them on per-atom and per-residue levels, respectively. The ENSEMBLATOR has four components: eePREP (ee for ensemble-ensemble), which selects atoms common to all models; eeCORE, which identifies atoms belonging to a cutoff-distance dependent common core; eeGLOBAL, which globally superimposes all models using the defined core atoms and calculates for each atom the two intraensemble variations, the interensemble variation, and the closest approach of members of the two ensembles; and eeLOCAL, which performs a local overlay of each dipeptide and, using a novel measure of local backbone similarity, reports the same four variations as eeGLOBAL. The combination of eeGLOBAL and eeLOCAL analyses identifies the most significant differences between ensembles. We illustrate the ENSEMBLATOR's capabilities by showing how using it to analyze NMR ensembles and to compare NMR ensembles with crystal structures provides novel insights compared to published studies. One of these studies leads us to suggest that a "consistency check" of NMR-derived ensembles may be a useful analysis step for NMR-based structure determinations in general. The ENSEMBLATOR 1.0 is available as a first generation tool to carry out ensemble-ensemble comparisons. © 2015 The Protein Society.

  20. Residue-level global and local ensemble-ensemble comparisons of protein domains

    PubMed Central

    Clark, Sarah A; Tronrud, Dale E; Andrew Karplus, P

    2015-01-01

    Many methods of protein structure generation such as NMR-based solution structure determination and template-based modeling do not produce a single model, but an ensemble of models consistent with the available information. Current strategies for comparing ensembles lose information because they use only a single representative structure. Here, we describe the ENSEMBLATOR and its novel strategy to directly compare two ensembles containing the same atoms to identify significant global and local backbone differences between them on per-atom and per-residue levels, respectively. The ENSEMBLATOR has four components: eePREP (ee for ensemble-ensemble), which selects atoms common to all models; eeCORE, which identifies atoms belonging to a cutoff-distance dependent common core; eeGLOBAL, which globally superimposes all models using the defined core atoms and calculates for each atom the two intraensemble variations, the interensemble variation, and the closest approach of members of the two ensembles; and eeLOCAL, which performs a local overlay of each dipeptide and, using a novel measure of local backbone similarity, reports the same four variations as eeGLOBAL. The combination of eeGLOBAL and eeLOCAL analyses identifies the most significant differences between ensembles. We illustrate the ENSEMBLATOR's capabilities by showing how using it to analyze NMR ensembles and to compare NMR ensembles with crystal structures provides novel insights compared to published studies. One of these studies leads us to suggest that a “consistency check” of NMR-derived ensembles may be a useful analysis step for NMR-based structure determinations in general. The ENSEMBLATOR 1.0 is available as a first generation tool to carry out ensemble-ensemble comparisons. PMID:26032515

  1. Spontaneous symmetry breaking in coupled parametrically driven waveguides.

    PubMed

    Dror, Nir; Malomed, Boris A

    2009-01-01

    We introduce a system of linearly coupled, parametrically driven, damped nonlinear Schrödinger equations, which models a laser based on a nonlinear dual-core waveguide with parametric amplification applied symmetrically to both cores. The model may also be realized in terms of parallel ferromagnetic films, in which the parametric gain is provided by an external field. We analyze spontaneous symmetry breaking (SSB) of fundamental and multiple solitons in this system, which had not previously been studied systematically in linearly coupled dissipative systems with intrinsic nonlinearity. For fundamental solitons, the analysis reveals three distinct SSB scenarios. Unlike the standard dual-core-fiber model, the present system gives rise to a vast bistability region, which may be relevant to applications. Other noteworthy findings are restabilization of the symmetric soliton after it has been destabilized by the SSB bifurcation, and the existence of a generic situation in which all solitons are unstable in the single-component (decoupled) model, while both symmetric and asymmetric solitons may be stable in the coupled system. The stability of the asymmetric solitons is identified via direct simulations, while for symmetric and antisymmetric ones the stability is also verified through the computation of stability eigenvalues; families of antisymmetric solitons are entirely unstable. In this way, full stability maps for the symmetric solitons are produced. We also investigate the SSB bifurcation of two-soliton bound states (it breaks the symmetry between the two components, while the two peaks in the shape of the soliton remain mutually symmetric). The family of asymmetric double-peak states may decouple from its symmetric counterpart, no longer being connected to it by the bifurcation, with a large portion of the asymmetric family remaining stable.

  2. A thermodynamic recipe for baking the Earth's lower mantle and core as a whole

    NASA Astrophysics Data System (ADS)

    Tirone, Max; Faak, Kathi

    2016-04-01

    A rigorous understanding of the thermal and dynamic evolution of the core and of its interaction with the silicate mantle cannot do without a non-empirical petrological description of the problem, which takes the form of a thermodynamic model. Because the Earth's core is predominantly made of iron, such a model may seem relatively straightforward, simply delivering a representation of the phase transformations in P,T space. However, due to well-known geophysical considerations, a certain amount of light elements should be added. With Occam's razor in mind, potential candidates are the most abundant and easily accessible elements in the mantle: O, Si and Mg. Given these premises, the challenging problems in developing this type of model are: (a) a thermodynamic formulation should not simply describe phase equilibrium relations, at least in the Fe-Si-O system (a formidable task in itself), but should also be consistently applicable to evaluate thermophysical properties of liquid components and solid phases at extreme conditions (P = 500-2000 kbar, T = 1000-5000 K); presently these properties are unknown for certain mineral and liquid components, or only partially available from scattered sources; (b) experimental data on the phase relations for iron-rich liquids are extremely difficult to obtain and cannot cover the entire P,T,X spectrum; (c) interaction of the outer core with the silicate mantle requires a melt model capable of describing a vast range of compositions, from metal-rich liquids to silicate liquids. The compound energy formalism for liquids with variable tendency to ionization, developed by Hillert and coworkers, is a sublattice model with varying stoichiometry that includes vacancies and neutral species on one site; it represents the ideal candidate for the task at hand. Unfortunately, the thermodynamic model is rather complex, a detailed description of the formulation for practical applications such as chemical equilibrium calculations is nowhere to be found, and the model is accessible only in a few commercial thermodynamic programs. The latest developments on all of these issues will be discussed in this contribution. In particular, some self-consistent but preliminary results will be presented addressing the following topics: details of the implementation of the liquid model for Gibbs free energy minimizations; the physically consistent behavior of the thermodynamic properties of certain solid phases, such as (Fe,O,Si) BCC, FCC and HCP, and of liquid components; selected phase diagrams at core conditions in the Fe-Si-O system; derived geotherms linking the inner-outer core boundary with the core-mantle boundary; and a brief outline of future geodynamic applications.

  3. A study of bending effect on the femtosecond-pulse inscribed fiber Bragg gratings in a dual-core fiber

    NASA Astrophysics Data System (ADS)

    Yakushin, Sergey S.; Wolf, Alexey A.; Dostovalov, Alexandr V.; Skvortsov, Mikhail I.; Wabnitz, Stefan; Babin, Sergey A.

    2018-07-01

    Fiber Bragg gratings (FBGs) with different reflection wavelengths have been inscribed in the two cores of a dual-core fiber section. The effect of fiber bending on the FBG reflection spectra has been studied. Various interrogation schemes are presented, including a single-ended scheme based on cross-talk between the cores that uses only standard optical components. Simultaneous interrogation of the FBGs in both cores achieves a bending sensitivity of 12.8 pm/m-1 while remaining insensitive to temperature and strain. The technology enables the development of real-time bending sensors with high spatial resolution, based on series of FBGs with different wavelengths inscribed along the multi-core fiber.
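
The reported figure implies a simple linear calibration between fiber curvature (in m-1) and Bragg wavelength shift (in pm). A minimal sketch of applying and inverting that calibration; the zero-offset linear form is our assumption for illustration:

```python
SENSITIVITY_PM_PER_INV_M = 12.8  # pm of wavelength shift per m^-1 of curvature, as reported

def wavelength_shift_pm(curvature_inv_m):
    """Bragg wavelength shift (pm) for a given fiber curvature (m^-1)."""
    return SENSITIVITY_PM_PER_INV_M * curvature_inv_m

def curvature_from_shift(shift_pm):
    """Invert the linear calibration to recover curvature (m^-1)."""
    return shift_pm / SENSITIVITY_PM_PER_INV_M

# A bend of radius 2 m corresponds to a curvature of 0.5 m^-1.
shift = wavelength_shift_pm(0.5)
print(shift)                        # 6.4 pm
print(curvature_from_shift(shift))  # 0.5 m^-1
```

Because the differential measurement between the two cores cancels temperature and axial strain (which shift both FBGs equally), only the curvature term survives in this relation.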

  4. QSO Broad Emission Line Asymmetries: Evidence of Gravitational Redshift?

    NASA Astrophysics Data System (ADS)

    Corbin, Michael R.

    1995-07-01

    The broad optical and ultraviolet emission lines of QSOs and active galactic nuclei (AGNs) display both redward and blueward asymmetries. This result is particularly well established for Hβ and C IV λ1549, and it has been found that Hβ becomes increasingly redward asymmetric with increasing soft X-ray luminosity. Two models for the origin of these asymmetries are investigated: (1) anisotropic line emission from an ensemble of radially moving clouds, and (2) two-component profiles consisting of a core of intermediate (˜1000-4000 km s-1) velocity width and a very broad (˜5000-20,000 km s-1) base, in which the asymmetries arise from a velocity difference between the centroids of the components. The second model is motivated by the evidence that the traditional broad-line region is actually composed of an intermediate-line region (ILR) of optically thick clouds and a very broad line region (VBLR) of optically thin clouds lying closer to the central continuum source. Line profiles produced by model (1) are found to be inconsistent with those observed, being asymmetric mainly in their cores, whereas the asymmetries of actual profiles arise mainly from excess emission in their wings. By contrast, numerical fitting to actual Hβ and C IV λ1549 line profiles reveals that the majority can be accurately modeled by two components, either two Gaussians or the combination of a Gaussian base and a logarithmic core. The profile asymmetries in Hβ can be interpreted as arising from a shift of the base component over a range ˜6300 km s-1 relative to the systemic velocity as defined by the position of the [O III] λ5007 line. A similar model appears to apply to C IV λ1549. The correlation between Hβ asymmetry and X-ray luminosity may thus be interpreted as a progressive redshift of the VBLR velocity centroid relative to the systemic velocity with increasing X-ray luminosity. This in turn suggests that the underlying effect is gravitational redshift, as soft X-ray emission arises from a region ˜light-minutes in size and arguably traces the mass of the putative supermassive black hole. Depending on the size of the VBLR and the exact amount of its profile centroid shift, central masses in the range 10^9-10^10 Msun are implied for the objects displaying the strongest redward profile asymmetries, consistent with other estimates. The largest VBLR velocity dispersions measured from the two-component modeling are ˜20,000 km s-1, which also yields a virial mass of ˜10^9 Msun for a VBLR size of 0.1 pc. The gravitational redshift model does not explain the origin of the blueshift of the VBLR emission among low X-ray luminosity sources, however. This must be interpreted as arising from a competing effect, such as electron scattering of line photons in the vicinity of the VBLR. On average, radio-loud objects have redward-asymmetric broad-line profiles and stronger intermediate- and narrow-line emission than radio-quiet objects of comparable optical luminosity. Under the gravitational redshift model these differences may be interpreted as the result of black hole and host galaxy masses that are on average larger among the former class, consistent with the evidence that they are merger products.

  5. Reviewing Core Kindergarten and First-Grade Reading Programs in Light of No Child Left Behind: An Exploratory Study

    ERIC Educational Resources Information Center

    Al Otaiba, Stephanie; Kosanovich-Grek, Marcia L.; Torgesen, Joseph K.; Hassler, Laura; Wahl, Michelle

    2005-01-01

    This article describes the findings of our review process for core reading programs and provides a preliminary rubric emanating from this process for rating core reading programs. To our knowledge, this is the first published review of the current "Reading First" guidelines and includes all five components of scientifically based reading…

  6. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at the Register Transfer (RT) level and synthesized for implementation on field-programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules run in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption of the designed system. The proposed FPGA-based machine vision system offers a high frame rate, low latency and much lower power consumption than commercially available smart camera solutions.
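
The timing figures can be sanity-checked against the frame period: at 75 frames per second each frame allows about 13.3 ms, so the 13 ms hardware stage fits within one frame period and the 2 ms software stage within the next. The pipelining interpretation below is our assumption; the sketch only reproduces the budget arithmetic:

```python
def frame_period_ms(fps):
    """Time available per frame at a given frame rate."""
    return 1000.0 / fps

fps = 75
hw_latency_ms = 13.0  # component labeling + feature extraction (parallel hardware)
sw_latency_ms = 2.0   # distance/angle computation on the MicroBlaze

period = frame_period_ms(fps)
total = hw_latency_ms + sw_latency_ms
print(round(period, 2))  # 13.33 ms per frame

# Each stage individually fits within one frame period, so with the hardware
# and software stages pipelined the system can sustain the full 75 fps even
# though the end-to-end latency (~15 ms) slightly exceeds one frame period.
print(hw_latency_ms <= period and sw_latency_ms <= period)  # True
```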

  7. The Component Model of Infrastructure: A Practical Approach to Understanding Public Health Program Infrastructure

    PubMed Central

    Snyder, Kimberly; Rieker, Patricia P.

    2014-01-01

    Functioning program infrastructure is necessary for achieving public health outcomes. It is what supports program capacity, implementation, and sustainability. The public health program infrastructure model presented in this article is grounded in data from a broader evaluation of 18 state tobacco control programs and previous work. The newly developed Component Model of Infrastructure (CMI) addresses the limitations of a previous model and contains 5 core components (multilevel leadership, managed resources, engaged data, responsive plans and planning, networked partnerships) and 3 supporting components (strategic understanding, operations, contextual influences). The CMI is a practical, implementation-focused model applicable across public health programs, enabling linkages to capacity, sustainability, and outcome measurement. PMID:24922125

  8. Statistical-dynamical modeling of the cloud-to-ground lightning activity in Portugal

    NASA Astrophysics Data System (ADS)

    Sousa, J. F.; Fragoso, M.; Mendes, S.; Corte-Real, J.; Santos, J. A.

    2013-10-01

    The present study employs a dataset of cloud-to-ground discharges over Portugal, collected by the Portuguese lightning detection network in the period 2003-2009, to identify dynamically coherent lightning regimes in Portugal and to implement a statistical-dynamical modeling of the daily discharges over the country. For this purpose, the high-resolution MERRA reanalysis is used. Three lightning regimes are identified for Portugal: WREG, WREM and SREG. WREG is a typical cold-core cut-off low. WREM is connected to strong frontal systems driven by remote low pressure systems at higher latitudes over the North Atlantic. SREG is a combination of an inverted trough and a mid-tropospheric cold core near Portugal. The statistical-dynamical modeling is based on logistic regressions (statistical component) developed for each regime separately (dynamical component). It is shown that the strength of the lightning activity (either strong or weak) for each regime is consistently modeled by a set of suitable dynamical predictors (65-70% efficiency). The difference of the equivalent potential temperature in the 700-500 hPa layer is the best predictor for all three regimes, while the best 4-layer lifted index is still important for all regimes, but with much weaker significance. Six other predictors are more suitable for a specific regime. For the purpose of validating the modeling approach, a regional-scale climate model simulation is carried out for a very intense lightning episode.

  9. Reliability prediction of ontology-based service compositions using Petri net and time series models.

    PubMed

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether the process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing the non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of the service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as the input of the NMSPN model and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy.
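
The time-series step of the framework (fit past response times, forecast forward, convert to firing rates) can be sketched with a minimal AR(1) model standing in for the full ARMA fit used in the paper; the data and the firing-rate-as-reciprocal-response-time convention are illustrative assumptions:

```python
from statistics import mean

def fit_ar1(series):
    """Least-squares fit of the recurrence x[t] = c + phi * x[t-1]."""
    x_prev, x_next = series[:-1], series[1:]
    mx, my = mean(x_prev), mean(x_next)
    cov = sum((a - mx) * (b - my) for a, b in zip(x_prev, x_next))
    var = sum((a - mx) ** 2 for a in x_prev)
    phi = cov / var
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    """Iterate the fitted recurrence forward from the last observation."""
    out, x = [], series[-1]
    for _ in range(steps):
        x = c + phi * x
        out.append(x)
    return out

# Historical response times (seconds) of one service component (made-up data).
history = [0.52, 0.55, 0.50, 0.58, 0.54, 0.56, 0.53, 0.57]
c, phi = fit_ar1(history)
predicted = forecast(history, 3, c, phi)

# Predicted firing rates for the Petri-net transitions: here taken as the
# reciprocal of the predicted response time (an assumed convention).
rates = [1.0 / t for t in predicted]
print(rates)
```

A production version would select the ARMA order by an information criterion and refit as new response-time observations arrive, feeding the updated rates into the NMSPN evaluation.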

  10. A single factor underlies the metabolic syndrome: a confirmatory factor analysis.

    PubMed

    Pladevall, Manel; Singal, Bonita; Williams, L Keoki; Brotons, Carlos; Guyer, Heidi; Sadurni, Josep; Falces, Carles; Serrano-Rios, Manuel; Gabriel, Rafael; Shaw, Jonathan E; Zimmet, Paul Z; Haffner, Steven

    2006-01-01

    Confirmatory factor analysis (CFA) was used to test the hypothesis that the components of the metabolic syndrome are manifestations of a single common factor. Three different datasets were used to test and validate the model. The Spanish and Mauritian studies included 207 men and 203 women, and 1,411 men and 1,650 women, respectively. A third analytical dataset of 847 men was obtained from a previously published CFA of a U.S. population. The one-factor model included the metabolic syndrome core components (central obesity, insulin resistance, blood pressure, and lipid measurements). We also tested an expanded one-factor model that included uric acid and leptin levels. Finally, we used CFA to compare the goodness of fit of the one-factor models with that of two previously published four-factor models. The simplest one-factor model showed the best goodness-of-fit indexes (comparative fit index 1.00, root mean-square error of approximation 0.00). Comparisons of the one-factor with the four-factor models in the three datasets favored the one-factor model structure. The selection of variables to represent the different metabolic syndrome components and the model specification explain why previous exploratory and confirmatory factor analyses, respectively, failed to identify a single factor for the metabolic syndrome. These analyses support the current clinical definition of the metabolic syndrome, as well as the existence of a single factor that links all of the core components.
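
The single-common-factor idea can be illustrated by extracting the dominant eigenvector of a correlation matrix among the core components: if one factor underlies them all, every component loads positively on it. The sketch below is a toy exploratory analogue (power iteration on a made-up correlation matrix), not the authors' CFA estimation procedure:

```python
import math

# Illustrative correlation matrix for four metabolic syndrome components:
# central obesity, insulin resistance, blood pressure, lipids (values made up).
R = [
    [1.00, 0.45, 0.30, 0.35],
    [0.45, 1.00, 0.28, 0.40],
    [0.30, 0.28, 1.00, 0.25],
    [0.35, 0.40, 0.25, 1.00],
]

def dominant_eigenvector(matrix, iterations=200):
    """Power iteration: repeated multiplication converges to the eigenvector
    of the largest eigenvalue, interpretable here as loadings on one factor."""
    v = [1.0] * len(matrix)
    for _ in range(iterations):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

loadings = dominant_eigenvector(R)
# All four components load positively on the single dominant factor.
print(loadings)
```

An actual CFA additionally estimates residual variances and tests the one-factor structure against the data (e.g. via CFI and RMSEA, the fit indexes quoted in the abstract), which requires a dedicated structural equation modeling package.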

  11. Cloud-Based Tools to Support High-Resolution Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Jones, N.; Nelson, J.; Swain, N.; Christensen, S.

    2013-12-01

    The majority of watershed models developed to support decision-making by water management agencies are simple, lumped-parameter models. Maturity in research codes and advances in the computational power from multi-core processors on desktop machines, commercial cloud-computing resources, and supercomputers with thousands of cores have created new opportunities for employing more accurate, high-resolution distributed models for routine use in decision support. The barriers to using such models on a more routine basis include the massive amounts of spatial data that must be processed for each new scenario and the lack of efficient visualization tools. In this presentation we review a current NSF-funded project called CI-WATER that is intended to overcome many of the roadblocks associated with high-resolution modeling. We are developing a suite of tools that will make it possible to deploy customized web-based apps for running custom scenarios for high-resolution models with minimal effort. These tools are based on a software stack that includes 52°North, MapServer, PostGIS, HTCondor, CKAN, and Python. This open-source stack provides a simple scripting environment for quickly configuring new custom applications for running high-resolution models as geoprocessing workflows. The HTCondor component facilitates simple access to local distributed computers or commercial cloud resources when necessary for stochastic simulations. The CKAN framework provides a powerful suite of tools for hosting such workflows in a web-based environment that includes visualization tools and storage of model simulations in a database for archiving, querying, and sharing of model results. Prototype applications including land use change, snow melt, and burned area analysis will be presented. This material is based upon work supported by the National Science Foundation under Grant No. 1135482.

  12. Application of Powder Diffraction Methods to the Analysis of the Atomic Structure of Nanocrystals: The Concept of the Apparent Lattice Parameter (ALP)

    NASA Technical Reports Server (NTRS)

    Palosz, B.; Grzanka, E.; Gierlotka, S.; Stelmakh, S.; Pielaszek, R.; Bismayer, U.; Weber, H.-P.; Palosz, W.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    The applicability of standard methods of elaboration of powder diffraction data for determination of the structure of nano-size crystallites is analysed. Based on our theoretical calculations of powder diffraction data we show that the assumption of an infinite crystal lattice is not justified for nanocrystals smaller than 20 nm in size. Application of conventional tools developed for the elaboration of powder diffraction data, like the Rietveld method, may lead to erroneous interpretation of the experimental results. An alternative evaluation of diffraction data of nanoparticles, based on the so-called 'apparent lattice parameter' (alp), is introduced. We assume a model of a nanocrystal having a grain core with a well-defined crystal structure, surrounded by a surface shell with an atomic structure similar to that of the core but under a strain (compressive or tensile). The two structural components, the core and the shell, form essentially a composite crystal with interfering, inseparable diffraction properties. Because the structure of such a nanocrystal is not uniform, it defies the basic definition of an unambiguous crystallographic phase. Consequently, a set of lattice parameters used for characterization of simple crystal phases is insufficient for a proper description of the complex structure of nanocrystals. We developed a method of evaluation of powder diffraction data of nanocrystals which refers to a core-shell model and is based on the 'apparent lattice parameter' methodology. For a given diffraction pattern, the alp values are calculated for every individual Bragg reflection. For nanocrystals the alp values depend on the diffraction vector Q. By modeling different atomic structures of nanocrystals and calculating the corresponding theoretical diffraction patterns using the Debye functions, we showed that alp-Q plots have characteristic shapes which can be used for evaluation of the atomic structure of the core-shell system. We show that, using a simple model of a nanocrystal with spherical shape and centro-symmetric strain in the surface shell, we obtain theoretical alp-Q values which match very well the alp-Q plots determined experimentally for SiC, GaN, and diamond nanopowders. The theoretical models are defined by the lattice parameter of the grain core, the thickness of the surface shell, and the magnitude and distribution of the strain field in the surface shell. According to our calculations, the part of the diffraction pattern measured at relatively low diffraction vectors Q (below 10 Å^-1) provides information on the surface strain, while determination of the lattice parameters in the grain core requires measurements at large Q values (above 15-20 Å^-1).
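
For a cubic lattice the apparent lattice parameter of each reflection follows directly from the Bragg peak position in reciprocal space, alp = 2π·sqrt(h²+k²+l²)/Q. A minimal sketch of building an alp-Q plot from peak positions; the peak list below is synthetic, generated from an ideal strain-free lattice:

```python
import math

def alp_cubic(q, hkl):
    """Apparent lattice parameter (in the length unit of 1/q) for a cubic
    reflection (h, k, l) observed at diffraction-vector magnitude q."""
    h, k, l = hkl
    return 2.0 * math.pi * math.sqrt(h * h + k * k + l * l) / q

# Synthetic example: peaks of an ideal cubic crystal with a0 = 3.567 Å
# (diamond-like). An ideal crystal gives a flat alp-Q plot; a core-shell
# nanocrystal would instead show alp drifting systematically with Q.
a0 = 3.567
reflections = [(1, 1, 1), (2, 2, 0), (3, 1, 1), (4, 0, 0)]
peaks = [2.0 * math.pi * math.sqrt(h*h + k*k + l*l) / a0
         for h, k, l in reflections]
alp_values = [alp_cubic(q, hkl) for q, hkl in zip(peaks, reflections)]
print(alp_values)  # all equal a0 for the ideal (strain-free) crystal
```

In the core-shell analysis described above, the measured alp(Q) curve is compared against curves computed from Debye-function simulations of candidate core-shell models rather than against a single constant.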

  13. Self-consistent model of the interstellar pickup protons, Alfvenic turbulence, and core solar wind in the outer heliosphere

    DOE PAGES

    Gamayunov, Konstantin V.; Zhang, Ming; Pogorelov, Nikolai V.; ...

    2012-09-05

    In this study, a self-consistent model of the interstellar pickup protons, the slab component of the Alfvénic turbulence, and core solar wind (SW) protons is presented for r ≥ 1 AU, along with initial results and a comparison with Voyager 2 (V2) observations. Two kinetic equations are used for the pickup proton distribution and the Alfvénic power spectral density, and a third equation governs the SW temperature, including the source due to Alfvén wave energy dissipation. The fraction of the pickup proton free energy, fD, that is actually released in wave form during isotropization is taken from the quasi-linear consideration without preexisting turbulence, whereas we use observations to specify the strength of the large-scale driving, Csh, for turbulence. The main conclusions of our study can be summarized as follows. (1) For Csh ≈ 1-1.5 and fD ≈ 0.7-1, the model slab component agrees well with the V2 observations of the total transverse magnetic fluctuations starting from ~8 AU. This indicates that the slab component at low latitudes makes up a majority of the transverse magnetic fluctuations beyond 8-10 AU. (2) The model core SW temperature agrees well with the V2 observations for r ≳ 20 AU if fD ≈ 0.7-1. (3) A combined effect of the Wentzel-Kramers-Brillouin attenuation, large-scale driving, and pickup-proton-generated waves results in an energy sink in the region r ≲ 10 AU, while wave energy is pumped into the turbulence beyond 10 AU. Without energy pumping, the nonlinear energy cascade is suppressed for r ≲ 10 AU, supplying only a small energy fraction into the k-region of dissipation by the core SW protons. A similar situation takes place for the two-dimensional turbulence. (4) The energy source due to resonant Alfvén wave damping by the core SW protons is small at heliocentric distances r ≲ 10 AU for both the slab and the two-dimensional turbulent components. As a result, adiabatic cooling mostly controls the model SW temperature in this region, and the model temperature disagrees with the V2 observations for r ≲ 20 AU.

  14. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other, similar inverse problems such as controlled-source EM (CSEM). The modular structure of the code provides a framework useful for the implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes the code compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, the results of inversion for two complex models are presented.

  15. Magnetohydrodynamic Convection in the Outer Core and its Geodynamic Consequences

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Chao, Benjamin F.; Fang, Ming

    2004-01-01

    The Earth's fluid outer core has been in vigorous convection through much of the Earth's history. In addition to generating and maintaining the Earth's time-varying magnetic field (the geodynamo), core convection also generates mass redistribution in the core and a dynamical pressure field on the core-mantle boundary (CMB). All of these result in various core-mantle interactions and contribute to surface geodynamic observables. For example, electromagnetic core-mantle coupling arises from the finite electrical conductivity of the lower mantle; gravitational interaction occurs between the core and the heterogeneous mantle; mechanical coupling may also occur when the CMB topography is aspherical. Besides changing the mantle rotation via the coupling torques, the mass redistribution in the core produces a spatial-temporal gravity anomaly. Numerical modeling of the core dynamical processes contributes to several geophysical disciplines. It helps explain the physical causes of surface geodynamic observables obtained via space geodetic techniques and other means, e.g. the Earth's rotation variation on decadal time scales and secular time-variable gravity. Conversely, identification of the sources of the observables can provide additional insights into the dynamics of the fluid core, leading to better constraints on the physics in the numerical modeling. In the past few years, our core dynamics modeling efforts with our MoSST model have made significant progress in understanding individual geophysical consequences. However, integrated studies are desirable, not only because of more mature numerical core dynamics models, but also because of the inter-correlation among the geophysical phenomena; e.g. mass redistribution in the outer core produces not only time-variable gravity, but also gravitational core-mantle coupling and thus Earth's rotation variation. Such studies are expected to further facilitate multidisciplinary investigation of core dynamics and of the interactions of the core with other components of the Earth.

  16. Excitation transfer and trapping kinetics in plant photosystem I probed by two-dimensional electronic spectroscopy.

    PubMed

    Akhtar, Parveen; Zhang, Cheng; Liu, Zhengtang; Tan, Howe-Siang; Lambrev, Petar H

    2018-03-01

Photosystem I is a robust and highly efficient biological solar engine. Its capacity to utilize virtually every absorbed photon's energy in a photochemical reaction generates great interest in the kinetics and mechanisms of excitation energy transfer and charge separation. In this work, we have employed room-temperature coherent two-dimensional electronic spectroscopy and time-resolved fluorescence spectroscopy to follow exciton equilibration and excitation trapping in intact Photosystem I complexes as well as core complexes isolated from Pisum sativum. We performed two-dimensional electronic spectroscopy measurements with low excitation pulse energies to record excited-state kinetics free from singlet-singlet annihilation. Global lifetime analysis resolved energy transfer and trapping lifetimes closely matching the time-correlated single-photon counting data. Exciton energy equilibration in the core antenna occurred on a timescale of 0.5 ps. We further observed a spectral equilibration component in the core complex with a 3-4 ps lifetime between the bulk Chl states and a state absorbing at 700 nm. Trapping in the core complex occurred with a 20 ps lifetime, which in the supercomplex split into two lifetimes, 16 ps and 67-75 ps. The experimental data could be modelled with two alternative models resulting in equally good fits: a transfer-to-trap-limited model and a trap-limited model. However, the former model is only possible if the 3-4 ps component is ascribed to equilibration with a "red" core antenna pool absorbing at 700 nm. Conversely, if these low-energy states are identified with the P700 reaction centre, the transfer-to-trap-limited model is ruled out in favour of a trap-limited model.

  17. Open access support groups for people experiencing personality disorders: do group members' experiences reflect the theoretical foundations of the SUN project?

    PubMed

    Gillard, Steve; White, Rachel; Miller, Steve; Turner, Kati

    2015-03-01

The SUN Project is an innovative, open access support group, based in the community, for people experiencing personality disorders, developed in response to UK Department of Health policy advocating improvements in personality disorders services. The aim of this article is to critically explore where and how the theoretically informed model underpinning the SUN Project is reflected in the views and experiences of people attending the project. This article reports an in-depth, qualitative interview-based study employing a critical realist approach. As part of a larger study about self-care and mental health, in-depth qualitative interviews were held with 38 people new to the SUN Project, and again 9 months later. Data were extracted that were relevant to core components of the project model and were subjected to thematic analysis. The critical realist approach was used to move back and forth between empirical data and the theory underpinning the SUN Project, providing critical insight into the model. Participant accounts were broadly concordant with core components of the SUN Project's underlying model: open access and self-referral; group therapeutic processes; community-based support; service users as staff. There were some tensions between interviewee accounts and theoretical aspects of the model, notably around the challenges that group processes presented for some individuals. The model underlying the SUN Project is useful in informing good practice in therapeutic, community-based peer support groups for people experiencing personality disorders. Careful consideration should be given to a limited multi-modal approach, providing focused one-to-one support for vulnerable individuals who find it hard to engage in group processes. Facilitated peer support groups based in the community may act as a powerful therapeutic resource for people experiencing personality disorders. 
Promoting open access and self-referral to support groups may increase feelings of empowerment and engagement for people experiencing personality disorders. Some individuals experiencing personality disorders who could potentially benefit from therapeutic groups may need focused one-to-one support to do so. © 2014 The British Psychological Society.

  18. The Toxoplasma Acto-MyoA Motor Complex Is Important but Not Essential for Gliding Motility and Host Cell Invasion

    PubMed Central

    Jackson, Allison J.; Whitelaw, Jamie A.; Pall, Gurman; Black, Jennifer Ann; Ferguson, David J. P.; Tardieux, Isabelle; Mogilner, Alex; Meissner, Markus

    2014-01-01

    Apicomplexan parasites are thought to actively invade the host cell by gliding motility. This movement is powered by the parasite's own actomyosin system, and depends on the regulated polymerisation and depolymerisation of actin to generate the force for gliding and host cell penetration. Recent studies demonstrated that Toxoplasma gondii can invade the host cell in the absence of several core components of the invasion machinery, such as the motor protein myosin A (MyoA), the microneme proteins MIC2 and AMA1 and actin, indicating the presence of alternative invasion mechanisms. Here the roles of MyoA, MLC1, GAP45 and Act1, core components of the gliding machinery, are re-dissected in detail. Although important roles of these components for gliding motility and host cell invasion are verified, mutant parasites remain invasive and do not show a block of gliding motility, suggesting that other mechanisms must be in place to enable the parasite to move and invade the host cell. A novel, hypothetical model for parasite gliding motility and invasion is presented based on osmotic forces generated in the cytosol of the parasite that are converted into motility. PMID:24632839

  19. A new method for teaching physical examination to junior medical students.

    PubMed

    Sayma, Meelad; Williams, Hywel Rhys

    2016-01-01

    Teaching effective physical examination is a key component in the education of medical students. Preclinical medical students often have insufficient clinical knowledge to apply to physical examination recall, which may hinder their learning when taught through certain understanding-based models. This pilot project aimed to develop a method to teach physical examination to preclinical medical students using "core clinical cases", overcoming the need for "rote" learning. This project was developed utilizing three cycles of planning, action, and reflection. Thematic analysis of feedback was used to improve this model, and ensure it met student expectations. A model core clinical case developed in this project is described, with gout as the basis for a "foot and ankle" examination. Key limitations and difficulties encountered on implementation of this pilot are discussed for future users, including the difficulty encountered in "content overload". This approach aims to teach junior medical students physical examination through understanding, using a simulated patient environment. Robust research is now required to demonstrate efficacy and repeatability in the physical examination of other systems.

  20. Preliminary LOCA analysis of the westinghouse small modular reactor using the WCOBRA/TRAC-TF2 thermal-hydraulics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, J.; Kucukboyaci, V. N.; Nguyen, L.

    2012-07-01

The Westinghouse Small Modular Reactor (SMR) is an 800 MWt (> 225 MWe) integral pressurized water reactor (iPWR) with all primary components, including the steam generator and the pressurizer, located inside the reactor vessel. The reactor core is based on a partial-height 17x17 fuel assembly design used in the AP1000® reactor core. The Westinghouse SMR utilizes passive safety systems and proven components from the AP1000 plant design with a compact containment that houses the integral reactor vessel and the passive safety systems. A preliminary loss of coolant accident (LOCA) analysis of the Westinghouse SMR has been performed using the WCOBRA/TRAC-TF2 code, simulating a transient caused by a double ended guillotine (DEG) break in the direct vessel injection (DVI) line. WCOBRA/TRAC-TF2 is a new-generation Westinghouse LOCA thermal-hydraulics code evolving from the US NRC licensed WCOBRA/TRAC code. It is designed to simulate PWR LOCA events from the smallest break size to the largest (DEG cold leg). A significant number of fluid dynamics and heat transfer models were developed or improved in WCOBRA/TRAC-TF2. A large number of separate effects and integral effects tests were performed for a rigorous code assessment and validation. WCOBRA/TRAC-TF2 was introduced into the Westinghouse SMR design phase to assist a quick and robust passive cooling system design and to identify thermal-hydraulic phenomena for the development of the SMR Phenomena Identification Ranking Table (PIRT). The LOCA analysis of the Westinghouse SMR demonstrates that the DEG DVI break LOCA is mitigated by the injection and venting from the Westinghouse SMR passive safety systems without core heat-up, achieving long-term core cooling. (authors)

  1. A new method for evaluating impacts of data assimilation with respect to tropical cyclone intensity forecast problem

    NASA Astrophysics Data System (ADS)

    Vukicevic, T.; Uhlhorn, E.; Reasor, P.; Klotz, B.

    2012-12-01

A significant potential for improving numerical model forecast skill of tropical cyclone (TC) intensity by assimilation of airborne inner-core observations in high-resolution models has been demonstrated in recent studies. Although encouraging, the results so far have not provided clear guidance on the critical information added by inner-core data assimilation with respect to intensity forecast skill. Better understanding of the relationship between the intensity forecast and the value added by the assimilation is required to further the progress, including the assimilation of satellite observations. One of the major difficulties in evaluating such a relationship is the forecast verification metric of TC intensity: the maximum one-minute sustained wind speed at 10 m above the surface. The difficulty results from two issues: 1) the metric refers to a practically unobservable quantity, since it is an extreme value in a highly turbulent and spatially extensive wind field, and 2) model- and observation-based estimates of this measure are not compatible in terms of spatial and temporal scales, even in high-resolution models. Although the need for predicting the extreme value of near-surface wind is well justified, and the observation-based estimates used in practice are sound, a revised intensity metric is proposed for evaluating numerical forecasts and the impacts of data assimilation on them. The metric should enable a robust, observation- and model-resolvable, phenomenologically based evaluation of the impacts. It is shown that the maximum intensity can be represented by decomposition into deterministic and stochastic components of the wind field. Using the vortex-centric cylindrical reference frame, the deterministic component is defined as the sum of amplitudes of azimuthal wavenumbers 0 and 1 at the radius of maximum wind, whereas the stochastic component is represented by a non-Gaussian PDF. 
This decomposition is exact and fully independent of individual TC properties. The decomposition of the maximum wind intensity was first evaluated using several sources of data, including Stepped Frequency Microwave Radiometer surface wind speeds from NOAA and Air Force reconnaissance flights, NOAA P-3 Tail Doppler Radar measurements, and best track maximum intensity estimates, as well as simulations from Hurricane WRF Ensemble Data Assimilation System (HEDAS) experiments for 83 real data cases. The results confirmed the validity of the method: the stochastic component of the maximum exhibited a non-Gaussian PDF with small mean amplitude and a variance comparable to the known best track error estimates. The results of the decomposition were then used to evaluate the impact of the improved initial conditions on the forecast. It was shown that the errors in the deterministic component of the intensity had the dominant effect on the forecast skill for the studied cases. This result suggests that data assimilation of inner-core observations could focus primarily on improving the analysis of the wavenumber 0 and 1 initial structure and on the mechanisms responsible for forcing the evolution of this low-wavenumber structure. For the latter analysis, the assimilation of airborne and satellite remote sensing observations could play a significant role.
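The proposed decomposition can be illustrated with a minimal sketch (not the authors' code): the deterministic component is the sum of the azimuthal wavenumber-0 and wavenumber-1 amplitudes of the wind sampled on a ring at the radius of maximum wind, extracted here with an FFT on synthetic data.

```python
import numpy as np

def deterministic_intensity(wind_ring):
    """Deterministic part of the maximum intensity: sum of the azimuthal
    wavenumber-0 and wavenumber-1 amplitudes of the wind field sampled on
    a ring at the radius of maximum wind."""
    coeffs = np.fft.rfft(wind_ring) / len(wind_ring)
    wn0 = np.abs(coeffs[0])       # azimuthal-mean wind
    wn1 = 2 * np.abs(coeffs[1])   # wavenumber-1 asymmetry amplitude
    return wn0 + wn1

# Synthetic ring: 40 m/s mean plus a 5 m/s wavenumber-1 asymmetry
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ring = 40 + 5 * np.cos(theta)
print(deterministic_intensity(ring))  # ≈ 45.0
```

The residual between the observed maximum wind and this deterministic part would then supply samples of the stochastic component's (non-Gaussian) PDF.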

  2. Automatic threshold selection for multi-class open set recognition

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2017-05-01

Multi-class open set recognition is the problem of supervised classification with additional unknown classes encountered after a model has been trained. An open set classifier often has two core components. The first is a base classifier, which estimates the most likely class of a given example. The second consists of open set logic, which estimates whether the example is truly a member of the candidate class. Such a system is operated in a feed-forward fashion: a candidate label is first estimated by the base classifier, and the true membership of the example in the candidate class is estimated afterward. Previous works have developed an iterative threshold selection algorithm for rejecting examples from classes that were not present at training time. In those studies, a Platt-calibrated SVM was used as the base classifier, and the thresholds were applied to class posterior probabilities for rejection. In this work, we investigate the effectiveness of other base classifiers when paired with the threshold selection algorithm and compare their performance with the original SVM solution.
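The feed-forward operation described above can be sketched as follows. This is an illustration only: the class names, thresholds, and posterior values are hypothetical, and the base classifier is abstracted away as a precomputed posterior matrix.

```python
import numpy as np

def open_set_predict(posteriors, thresholds, classes):
    """Feed-forward open-set decision: the base classifier picks the most
    likely class; the open-set logic then accepts the candidate only if
    its posterior clears that class's threshold, otherwise 'unknown'."""
    labels = []
    for p in posteriors:
        k = int(np.argmax(p))  # candidate class from the base classifier
        labels.append(classes[k] if p[k] >= thresholds[k] else "unknown")
    return labels

classes = ["car", "truck", "plane"]
thresholds = [0.7, 0.6, 0.8]               # e.g. tuned per class
posteriors = np.array([[0.90, 0.05, 0.05],  # confident -> accepted
                       [0.40, 0.35, 0.25]]) # ambiguous -> rejected
print(open_set_predict(posteriors, thresholds, classes))
# → ['car', 'unknown']
```

Swapping in a different base classifier only changes how the posterior matrix is produced; the rejection logic is unchanged.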

  3. MCore: A High-Order Finite-Volume Dynamical Core for Atmospheric General Circulation Models

    NASA Astrophysics Data System (ADS)

    Ullrich, P.; Jablonowski, C.

    2011-12-01

The desire for increasingly accurate predictions of the atmosphere has driven numerical models to smaller and smaller resolutions, while exponentially driving up their computational cost. Even with the modern rapid advancement of computational performance, it is estimated that it will take more than twenty years before existing models approach the scales needed to resolve atmospheric convection. However, smarter numerical methods may allow us to glimpse the types of results we would expect from these fine-scale simulations while requiring only a fraction of the computational cost. The next generation of atmospheric models will likely need to rely on both high-order accuracy and adaptive mesh refinement in order to properly capture features of interest. We present our ongoing research on developing a set of "smart" numerical methods for simulating the global non-hydrostatic fluid equations which govern atmospheric motions. We have harnessed a high-order finite-volume approach in developing an atmospheric dynamical core on the cubed sphere. This type of method is desirable for applications involving adaptive grids, since it has been shown that spuriously reflected wave modes are intrinsically damped out under this approach. The model further makes use of an implicit-explicit Runge-Kutta-Rosenbrock (IMEX-RKR) time integrator for accurate and efficient coupling of the horizontal and vertical model components. We survey the algorithmic development of the model and present results from idealized dynamical core test cases, as well as give a glimpse of future work with our model.

  4. Developing comprehensive and Brief ICF core sets for morbid obesity for disability assessment in Taiwan: a preliminary study.

    PubMed

    Lin, Y-N; Chang, K-H; Lin, C-Y; Hsu, M-I; Chen, H-C; Chen, H-H; Liou, T-H

    2014-04-01

The International Classification of Functioning, Disability, and Health (ICF) provides a framework for measuring functioning and disability based on a biopsychosocial model. The aim of this observational study was to develop comprehensive and brief ICF core sets for morbid obesity for disability assessment in Taiwan. The questionnaire contained 112 obesity-relevant, second-level ICF categories. Using a 5-point Likert scale, the participants rated the significance of the effects of each category on the health status of people with obesity. Correlation between an individual's score and the average score of the group indicated consensus. Categories were selected for the comprehensive core set for obesity if more than 50% of the experts rated them as "important" in the third round of the Delphi exercise, and for the brief core set if more than 80% of the experts rated them "very important." Twenty-nine experts participated in the study: 18 physicians, 4 dieticians, 3 physical therapists, 2 nurses, and 2 ICF experts. The comprehensive core set for morbid obesity contained 61 categories: 26 from the component body functions, 8 from body structures, 18 from activities and participation, and 9 from environmental factors. The brief core set for obesity disability contained 29 categories: 19 from body functions, 3 from body structures, 6 from activities and participation, and 1 from environmental factors. The comprehensive and brief ICF core sets provide comprehensive information on the health effects of morbid obesity and concise information for clinical practice. The core sets were created after three rounds of the Delphi technique. Further validation of these core sets by applying them to patients with morbid obesity is needed. 
The comprehensive ICF core set for morbid obesity provides comprehensive information on the health effects of morbid obesity; the brief core set can provide concise information for clinical practice.
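The two selection thresholds (>50% rating "important" for the comprehensive set, >80% rating "very important" for the brief set) can be sketched in a few lines. The mapping of Likert points to "important" (≥4) and "very important" (=5), and the ICF category codes and ratings below, are illustrative assumptions, not the study's data.

```python
def select_core_sets(ratings, n_experts):
    """Delphi-style selection: a category enters the comprehensive core
    set if >50% of experts rated it 'important' (assumed >=4 on a
    5-point scale) and the brief core set if >80% rated it 'very
    important' (assumed =5). Ratings here are hypothetical."""
    comprehensive, brief = [], []
    for category, scores in ratings.items():
        if sum(s >= 4 for s in scores) / n_experts > 0.5:
            comprehensive.append(category)
        if sum(s == 5 for s in scores) / n_experts > 0.8:
            brief.append(category)
    return comprehensive, brief

ratings = {"b530": [5, 5, 5, 5, 4],   # hypothetical expert ratings
           "e110": [4, 4, 3, 2, 3]}
print(select_core_sets(ratings, 5))
# → (['b530'], [])
```

With these toy ratings, "b530" clears the 50% bar for the comprehensive set but, at exactly 80% "very important", does not clear the strict >80% bar for the brief set.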

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohanty, Subhasish; Majumdar, Saurindranath

    Irradiation creep plays a major role in the structural integrity of the graphite components in high temperature gas cooled reactors. Finite element procedures combined with a suitable irradiation creep model can be used to simulate the time-integrated structural integrity of complex shapes, such as the reactor core graphite reflector and fuel bricks. In the present work a comparative study was undertaken to understand the effect of linear and nonlinear irradiation creep on results of finite element based stress analysis. Numerical results were generated through finite element simulations of a typical graphite reflector.

  6. THE ROLE OF AFFECTIVE EXPERIENCE IN WORK MOTIVATION

    PubMed Central

    SEO, MYEONG-GU; BARRETT, LISA FELDMAN; BARTUNEK, JEAN M.

    2005-01-01

    Based on psychological and neurobiological theories of core affective experience, we identify a set of direct and indirect paths through which affective feelings at work affect three dimensions of behavioral outcomes: direction, intensity, and persistence. First, affective experience may influence these behavioral outcomes indirectly by affecting goal level and goal commitment, as well as three key judgment components of work motivation: expectancy judgments, utility judgments, and progress judgments. Second, affective experience may also affect these behavioral outcomes directly. We discuss implications of our model. PMID:16871321

  7. Evaluation of shrinking core model in leaching process of Pomalaa nickel laterite using citric acid as leachant at atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Wanta, K. C.; Perdana, I.; Petrus, H. T. B. M.

    2016-11-01

Most kinetics studies of leaching processes use the shrinking core model to describe the physical phenomena of the process. Generally, the model is developed in connection with transport and/or reaction of the reactant components. In this study, the commonly used internal-diffusion-controlled shrinking core model was evaluated for the leaching of Pomalaa nickel laterite using citric acid as leachant. Particle size was varied at 60-70, 100-120, and -200 mesh, while the operating temperature was kept constant at 358 K, citric acid concentration at 0.1 M, and pulp density at 20% w/v; the leaching time was 120 minutes. Simulation results showed that the shrinking core model was inadequate to closely approach the experimental data. Meanwhile, the experimental data indicated that the leaching process was determined by the mobility of product molecules in the ash layer pores. In the case of leaching that yields large product molecules, a mathematical model involving both reaction and product-diffusion steps might be more appropriate.
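For reference, the internal-diffusion-controlled shrinking core model evaluated here is commonly written as g(X) = 1 - 3(1-X)^(2/3) + 2(1-X) = kt, where X is the conversion and k an apparent rate constant. A minimal fitting sketch, with hypothetical conversion-time data (not the paper's measurements):

```python
import numpy as np

def g_diffusion(X):
    """Internal-diffusion-controlled shrinking core model:
    g(X) = 1 - 3(1-X)^(2/3) + 2(1-X) = k*t."""
    X = np.asarray(X, dtype=float)
    return 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)

# Hypothetical fractional-extraction data versus time
t = np.array([15, 30, 60, 90, 120])           # minutes
X = np.array([0.10, 0.15, 0.22, 0.27, 0.31])  # conversion

# Least-squares fit of k through the origin: g(X) = k*t
k = float(np.sum(g_diffusion(X) * t) / np.sum(t * t))
print(f"k = {k:.2e} 1/min")
```

A poor linear fit of g(X) against t, as reported in the study, is the signal that this rate-controlling step does not describe the data.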

  8. Structural response of 1/20-scale models of the Clinch River Breeder Reactor to a simulated hypothetical core-disruptive accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romander, C M; Cagliostro, D J

Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-s hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, and an upper internals structure (UIS).

  9. Design, Analysis and Fabrication of Secondary Structural Components for the Habitat Demonstration Unit-Deep Space Habitat

    NASA Technical Reports Server (NTRS)

    Smith, Russell W.; Langford, William M.

    2012-01-01

In support of NASA's Habitat Demonstration Unit - Deep Space Habitat prototype, a number of evolved structural sections were designed, fabricated, analyzed, and installed in the 5-meter-diameter prototype. The hardware consisted of three principal structural sections and included the development of novel fastener insert concepts. The articles developed consisted of: 1) 1/8th of the primary flooring section, 2) an inner-radius floor beam support which interfaced with and supported (1), 3) two upper hatch section prototypes, and 4) novel insert designs for mechanical fastener attachments. Advanced manufacturing approaches were utilized in the fabrication of the components. The structural components were developed using current commercial aircraft constructions as a baseline (for both the flooring components and their associated mechanical fastener inserts). The structural sections utilized honeycomb sandwich panels. The core consisted of Nomex with a 1/8-inch cell size and a density of 9 lb/cu ft, and was 0.66 inches thick. The facesheets had 3 plies each, with a thickness of 0.010 inches per ply, made from woven E-glass with epoxy reinforcement. Analysis activities consisted of analytical models as well as initial closed-form calculations. Testing was conducted to help verify analysis model inputs and to facilitate correlation between testing and analysis. Test activities consisted of 4-point bending tests as well as compressive core-crush sequences. This paper presents an overview of this activity and discusses issues encountered during the various phases of the applied research effort and its relevance to future space-based habitats.

  10. Anisotropic Velocities of Gas Hydrate-Bearing Sediments in Fractured Reservoirs

    USGS Publications Warehouse

    Lee, Myung W.

    2009-01-01

During the Indian National Gas Hydrate Program Expedition 01 (NGHP-01), one of the richest marine gas hydrate accumulations was discovered at drill site NGHP-01-10 in the Krishna-Godavari Basin, offshore of southeast India. The occurrence of concentrated gas hydrate at this site is primarily controlled by the presence of fractures. Gas hydrate saturations estimated from P- and S-wave velocities, assuming that gas hydrate-bearing sediments (GHBS) are isotropic, are much higher than those estimated from the pressure cores. To reconcile this difference, an anisotropic GHBS model is developed and applied to estimate gas hydrate saturations. Gas hydrate saturations estimated from the P-wave velocities, assuming high-angle fractures, agree well with saturations estimated from the cores. An anisotropic GHBS model assuming two-component laminated media, in which one component is a fracture filled with 100-percent gas hydrate and the other is isotropic water-saturated sediment, adequately predicts the anisotropic velocities at the research site.

  11. Evaluation of Counter-Based Dynamic Load Balancing Schemes for Massive Contingency Analysis on Over 10,000 Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Huang, Zhenyu; Rice, Mark J.

Contingency analysis studies are necessary to assess the impact of possible power system component failures. The results of the contingency analysis are used to ensure grid reliability, and in power market operation for the feasibility test of market solutions. Currently, these studies are performed in real time based on the current operating conditions of the grid with a pre-selected contingency list, which might overlook some critical contingencies caused by variable system status. To have a complete picture of a power grid, more contingencies need to be studied to improve grid reliability. High-performance computing techniques hold the promise of being able to perform the analysis for more contingency cases within a much shorter time frame. This paper evaluates the performance of counter-based dynamic load balancing schemes for a massive contingency analysis program on 10,000+ cores. One million N-2 contingency analysis cases with a Western Electricity Coordinating Council power grid model have been used to demonstrate the performance. Speedups of 3964 on 4096 cores and 7877 on 10,240 cores were obtained. This paper reports the performance of the load balancing scheme with a single counter and with two counters, describes disk I/O issues, and discusses other potential techniques for further improving the performance.
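A single-counter dynamic load-balancing scheme of the kind evaluated can be sketched in miniature: workers atomically fetch-and-increment a shared counter to claim the next contingency case, so faster workers naturally process more cases. This toy uses Python threads in place of MPI ranks, and the "solver" is a stand-in.

```python
import threading

def run_contingencies(n_cases, n_workers, solve):
    """Single-counter dynamic load balancing: each worker atomically
    grabs the next case index from a shared counter until all cases
    are claimed."""
    counter = {"next": 0}
    lock = threading.Lock()
    results = [None] * n_cases

    def worker():
        while True:
            with lock:                 # atomic fetch-and-increment
                i = counter["next"]
                if i >= n_cases:
                    return
                counter["next"] += 1
            results[i] = solve(i)      # analyze contingency case i

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Toy 'solver': square the case index
print(run_contingencies(8, 3, lambda i: i * i))
# → [0, 1, 4, 9, 16, 25, 36, 49]
```

A two-counter variant, as discussed in the paper, would partition the case range and serve it from both ends to reduce contention on a single counter.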

  12. CHAMP Magnetic Anomalies of the Antarctic Crust

    NASA Technical Reports Server (NTRS)

    Kim, Hyung Rae; Gaya-Pique, Luis R.; vonFrese, Ralph R. B.; Taylor, Patrick T.; Kim, Jeong Woo

    2003-01-01

Regional magnetic signals of the crust are strongly masked by the core field and its secular-variation components and hence are difficult to isolate in satellite measurements. In particular, the un-modeled effects of the strong auroral external fields and the complicated behavior of the core field near the geomagnetic poles conspire to greatly reduce the crustal magnetic signal-to-noise ratio in the polar regions relative to the rest of the Earth. We can, however, use spectral correlation theory to filter the static lithospheric and core field components from the dynamic external field effects. To help isolate regional lithospheric from core field components, the correlations between CHAMP magnetic anomalies and the pseudo-magnetic effects inferred from gravity-derived crustal thickness variations can also be exploited. Employing these procedures, we processed the CHAMP magnetic observations for an improved magnetic anomaly map of the Antarctic crust. Relative to the much higher altitude Ørsted and noisier Magsat observations, the CHAMP magnetic anomalies at 400 km altitude reveal new details of the effects of intracrustal magnetic features and crustal thickness variations of the Antarctic.

  13. Predictable Particle Engineering: Programming the Energy Level, Carrier Generation, and Conductivity of Core-Shell Particles.

    PubMed

    Yuan, Conghui; Wu, Tong; Mao, Jie; Chen, Ting; Li, Yuntong; Li, Min; Xu, Yiting; Zeng, Birong; Luo, Weiang; Yu, Lingke; Zheng, Gaofeng; Dai, Lizong

    2018-06-20

Core-shell structures are of particular interest in the development of advanced composite materials, as they can efficiently bring different components together at the nanoscale. The advantage of this structure relies greatly on the careful design of both core and shell, thus achieving an intercomponent synergistic effect. In this report, we show that decorating semiconductor nanocrystals with a boronate polymer shell can easily achieve programmable core-shell interactions. Taking ZnO and anatase TiO2 nanocrystals as inner-core examples, the effective core-shell interactions can narrow the band gap of the semiconductor nanocrystals, change the HOMO and LUMO levels of the boronate polymer shell, and significantly improve the carrier density of the core-shell particles. The hole mobility of the core-shell particles can be improved by almost 9 orders of magnitude in comparison with the net boronate polymer, while the conductivity of the core-shell particles is at most 30-fold that of the nanocrystals. The particle engineering strategy is based on two driving forces, catechol-surface binding and B-N dative bonding, and affords a high degree of control and predictability over the shell thickness. This approach is also applicable to various inorganic nanoparticles with different components, sizes, and shapes.

  14. Application of sandwich honeycomb carbon/glass fiber-honeycomb composite in the floor component of electric car

    NASA Astrophysics Data System (ADS)

    Sukmaji, I. C.; Wijang, W. R.; Andri, S.; Bambang, K.; Teguh, T.

    2017-01-01

Composites are nowadays superior materials for automotive components due to their outstanding mechanical behavior. In the present research, a sandwich of polypropylene honeycomb core with carbon/glass fiber composite skins (SHCG) is investigated as the base material for the floor component of an electric car. In sandwich structure form, it can absorb noise better than conventional materials [1]. A Finite Element Analysis (FEA) of SHCG as the base material for the floor component of the electric car is also presented. The sandwich skins contain a uniform carbon fiber layer and a mixed, non-uniform carbon/glass fiber layer in both the upper and lower skin. Between the skins is a polypropylene honeycomb core, which has good flexibility to conform to the die profile. The carbon/glass fiber volume fraction ratios in the SHCG skins are 20/80%, 30/70%, and 50/50%. The SHCG specimens were tested in a universal testing machine by the three-point bending method per ASTM C393 and ASTM C365. The intersection of the tensile strength versus carbon/glass volume fraction curve with the cost ratio line identifies the material with good mechanical performance at reasonable cost: the 30/70 carbon/glass fiber volume fraction. The experimental test results then become the input material properties for the sandwich structure model in the FEA simulation. The FEA simulation is used to find the critical strength and safety factor of the complex floor geometry of the electric car under varied distributed passenger loads of 80, 100, 150, 200, 250, and 300 kg.

  15. Generation of surface-wave microwave microplasmas in hollow-core photonic crystal fiber based on a split-ring resonator.

    PubMed

    Vial, Florian; Gadonna, Katell; Debord, Benoît; Delahaye, Frédéric; Amrani, Foued; Leroy, Olivier; Gérôme, Frédéric; Benabid, Fetah

    2016-05-15

    We report on a new and highly compact scheme for the generation and sustainment of microwave-driven plasmas inside the core of an inhibited-coupling Kagome hollow-core photonic crystal fiber. The microwave plasma generator consists of a split-ring resonator that efficiently couples the microwave field into the gas-filled fiber. This coupling induces the concomitant generation of a microwave surface wave at the fiber core surround and a stable plasma column confined in the fiber core. The scheme allowed the generation of several-centimeter-long argon microplasma columns with a very low excitation power threshold. This result represents an important step toward highly compact plasma lasers or plasma-based photonic components.

  16. SERS-fluorescence joint spectral encoded magnetic nanoprobes for multiplex cancer cell separation.

    PubMed

    Wang, Zhuyuan; Zong, Shenfei; Chen, Hui; Wang, Chunlei; Xu, Shuhong; Cui, Yiping

    2014-11-01

    A new kind of cancer cell separation method is demonstrated, using surface-enhanced Raman scattering (SERS) and fluorescence dual-encoded magnetic nanoprobes. The designed nanoprobes can realize SERS-fluorescence joint spectral encoding (SFJSE) and greatly improve the multiplexing ability. The nanoprobes have four main components, that is, the magnetic core, SERS generator, fluorescent agent, and targeting antibody. These components are assembled with a multi-layered structure to form the nanoprobes. Specifically, silica-coated magnetic nanobeads (MBs) are used as the inner core. Au core-Ag shell nanorods (Au@Ag NRs) are employed as the SERS generators and attached on the silica-coated MBs. After burying these Au@Ag NRs with another silica layer, CdTe quantum dots (QDs), that is, the fluorescent agent, are anchored onto the silica layer. Finally, antibodies are covalently linked to CdTe QDs. SFJSE is fulfilled by using different Raman molecules and QDs with different emission wavelengths. By utilizing four human cancer cell lines and one normal cell line as the model cells, the nanoprobes can specifically and simultaneously separate target cancer cells from the normal ones. This SFJSE-based method greatly facilitates the multiplex, rapid, and accurate cancer cell separation, and has a prosperous potential in high-throughput analysis and cancer diagnosis. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Work-related musculoskeletal disorders (WMDs) risk assessment at core assembly production of electronic components manufacturing company

    NASA Astrophysics Data System (ADS)

    Yahya, N. M.; Zahid, M. N. O.

    2018-03-01

    This study was conducted to assess work-related musculoskeletal disorders (WMDs) among workers at the core assembly production line of an electronic components manufacturing company located in Pekan, Pahang, Malaysia, and to identify the WMD risk factors and risk levels. A questionnaire survey based on the modified Nordic Musculoskeletal Disorder Questionnaire was distributed to the workers to identify WMD risk factors. Postural analysis was then conducted to measure the corresponding WMD risk levels, based on two ergonomic assessment tools: Rapid Upper Limb Assessment (RULA) and Rapid Entire Body Assessment (REBA). The study found that 30 of 36 respondents suffered from WMDs, especially in the shoulders, wrists and lower back. WMD risks were identified in the unloading, pressing and winding processes. In terms of risk level, the REBA and RULA assessment tools indicated a high risk for the unloading and pressing processes. The study thus established the WMD risk factors and risk levels for core assembly production at an electronic components manufacturing company in the Malaysian context.

  18. ISTIMES Integrated System for Transport Infrastructures Surveillance and Monitoring by Electromagnetic Sensing

    NASA Astrophysics Data System (ADS)

    Argenti, M.; Giannini, V.; Averty, R.; Bigagli, L.; Dumoulin, J.

    2012-04-01

    The EC FP7 ISTIMES project has the goal of realizing an ICT-based system exploiting distributed and local sensors for non-destructive electromagnetic monitoring, in order to make critical transport infrastructures more reliable and safe. Higher situation awareness, thanks to real-time, detailed information and images of the monitored infrastructure status, improves decision-making capabilities for emergency management stakeholders. Web-enabled sensors and a service-oriented approach form the core of the architecture, providing a system that adopts open standards (e.g. OGC SWE, OGC CSW) and strives for full interoperability with other GMES and European Spatial Data Infrastructure initiatives as well as compliance with INSPIRE. The system exploits an open, easily scalable network architecture to accommodate a wide range of sensors, integrated with a set of tools for handling, analyzing and processing large data volumes from different organizations with different data models. Situation awareness tools are also integrated in the system. The definition of sensor observations and services follows a metadata model based on the ISO 19115 core set of metadata elements and the O&M model of OGC SWE. The ISTIMES infrastructure is based on an e-Infrastructure for geospatial data sharing, with a Data Catalog that implements the discovery services for sensor data retrieval, acting as a broker through static connections based on standard SOS and WNS interfaces; a Decision Support component that helps decision makers by providing support for data fusion, inference and the generation of situation indexes; a Presentation component that implements system-user interaction services for information publication and rendering by means of a web portal using SOA design principles; and a security framework using the Shibboleth open-source middleware, based on the Security Assertion Markup Language and supporting Single Sign-On (SSO).
ACKNOWLEDGEMENT - The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement n° 225663

  19. Structural response of 1/20-scale models of the Clinch River Breeder Reactor to a simulated hypothetical core disruptive accident. Technical report 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romander, C. M.; Cagliostro, D. J.

    Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-sec hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, an upper internals structure (UIS), and, in the more complex models SM 4 and SM 5, a Ni 200 thermal liner and core support structure. Water simulated the liquid sodium coolant and a low-density explosive simulated the HCDA loads.

  20. Physics-Based Crystal Plasticity Modeling of Single Crystal Niobium

    NASA Astrophysics Data System (ADS)

    Maiti, Tias

    Crystal plasticity models based on thermally activated dislocation kinetics have been successful in predicting the deformation behavior of crystalline materials, particularly face-centered cubic (fcc) metals. In body-centered cubic (bcc) metals, success has been limited owing to ill-defined slip planes. The flow stress of a bcc metal is strongly dependent on temperature and orientation due to the non-planar splitting of a/2<111> screw dislocations. As a consequence, bcc metals show two unique deformation characteristics: (a) thermally activated glide of screw dislocations: the motion of screw components with their non-planar core structure occurs even at low stress through the nucleation (assisted by thermal activation) and lateral propagation of dislocation kink pairs; and (b) breakdown of the Schmid law, which states that dislocation slip is driven only by the resolved shear stress. Since the split dislocation core has to constrict for a kink pair to form and propagate, the non-planarity of bcc screw dislocation cores entails an influence on their mobility of (shear) stress components acting on planes other than the primary glide plane. Another consequence of the asymmetric core splitting on the glide plane is a direction-sensitive slip resistance, termed the twinning/antitwinning sense of shear, which should be taken into account when developing constitutive models. Modeling thermally activated flow including the above-mentioned non-Schmid effects in bcc metals has been the subject of much work, starting in the 1980s and gaining increased interest in recent times. The majority of these works focus on single-crystal deformation of commonly used metals such as iron (Fe), molybdenum (Mo), and tungsten (W), while very few published studies address the deformation behavior of niobium (Nb).
Most of the work on Nb revolves around fitting parameters of phenomenological descriptions, which do not adequately capture, from a physical point of view, the macroscopic multi-stage hardening behavior and the evolution of crystallographic texture. We therefore aim to develop a physics-based crystal plasticity model that can capture these effects as a function of grain orientation, microstructure parameters, and temperature. To achieve this goal, a new dilatational constitutive model is first developed for simulating the deformation of non-compact geometries (foams or geometries with free surfaces) using the spectral method. The model has been used to mimic the void-growth behavior of a biaxially loaded plate with a circular inclusion; the results show that the proposed formulation provides a much better description of void-like behavior than a purely elastic treatment of the voids. Using the developed dilatational framework, the periodic boundary conditions arising from the spectral solver have been relaxed to study the tensile deformation behavior of dogbone-shaped Nb single crystals. Second, a dislocation density-based constitutive model with storage and recovery laws derived from Discrete Dislocation Dynamics (DDD) is implemented to model multi-stage strain hardening. The influence of pre-existing dislocation content, dislocation interaction strengths, and mean free path on stage II hardening is then simulated and compared with in-situ tensile experiments.
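
    As a rough illustration of the thermally activated, kink-pair-controlled flow described above, a Kocks-type activation enthalpy gives the characteristic decay of flow stress with temperature; the functional form is standard in the literature, but all parameter values below are illustrative and not fitted to Nb.

    ```python
    import math

    def kink_pair_flow_stress(T, gdot=1e-3, gdot0=1e7, dG0=0.8, tau0=400.0, p=0.5, q=1.5):
        """Thermal component of the flow stress (MPa) from a Kocks-type
        activation enthalpy dG(tau) = dG0 * (1 - (tau/tau0)^p)^q, inverted
        from gdot = gdot0 * exp(-dG/(kB*T)).

        T: temperature [K]; gdot, gdot0: applied and reference strain rates [1/s];
        dG0: total activation energy [eV]; tau0: 0 K Peierls stress [MPa].
        All parameter values are illustrative, not fitted to Nb."""
        kB = 8.617e-5  # Boltzmann constant, eV/K
        s = kB * T / dG0 * math.log(gdot0 / gdot)
        if s >= 1.0:
            return 0.0  # above the knee temperature the lattice friction vanishes
        return tau0 * (1.0 - s ** (1.0 / q)) ** (1.0 / p)

    # Flow stress falls steeply with temperature, a hallmark of bcc plasticity:
    for T in (77.0, 200.0, 300.0):
        print(T, round(kink_pair_flow_stress(T), 1))
    ```

    A physics-based model of the kind the thesis develops replaces these fixed parameters with quantities tied to dislocation density, interaction strengths, and mean free path.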

  1. Palaeomagnetic constraints on the evolution of the Atlantis Massif oceanic core complex (Mid-Atlantic Ridge, 30°N)

    NASA Astrophysics Data System (ADS)

    Morris, Antony; Pressling, Nicola; Gee, Jeffrey; John, Barbara; MacLeod, Christopher

    2010-05-01

    Oceanic core complexes expose lower crustal and upper mantle rocks on the seafloor by tectonic unroofing in the footwalls of large-slip detachment faults. They represent a fundamental component of the seafloor spreading system at slow and ultraslow axes. For example, recent analyses suggest that detachment faults may underlie more than 50% of the Mid Atlantic Ridge (MAR) and may take up most of the overall plate divergence at times when magma supply to the ridge system is reduced. The most extensively studied oceanic core complex is Atlantis Massif, located at 30°N on the MAR. This forms an inside-corner bathymetric high at the intersection of the Atlantis Transform Fault and the MAR. The central dome of the massif exposes the corrugated detachment fault surface and was drilled during IODP Expedition 304/305. This sampled a 1.4 km faulted and complexly layered footwall section dominated by gabbroic lithologies with minor ultramafic rocks. The core (Hole U1309D) reflects the interplay between magmatism and deformation prior to, during, and subsequent to a period of footwall displacement and denudation associated with slip on the detachment fault. Palaeomagnetic analyses demonstrate that the gabbroic sequences at Atlantis Massif carry highly stable remanent magnetizations that provide valuable information on the evolution of the section. Thermal demagnetization experiments recover high unblocking temperature components of reversed polarity (R1) throughout the gabbroic sequences. In a number of intervals, however, the gabbros exhibit a complex remanence structure with the presence of intermediate temperature normal (N1) and lower temperature reversed (R2) polarity components, suggesting an extended period of remanence acquisition during different polarity intervals. Sharp break-points between different polarity components suggest that they were acquired by a thermal mechanism. 
There appears to be no correlation between remanence structure and either the igneous stratigraphy or the distribution of alteration in the core. Instead, the remanence data are more consistent with a model in which the lower crustal section acquired magnetizations of different polarity during a protracted cooling history spanning two geomagnetic reversals. Differences in the width of blocking temperature spectra between samples appear to control the number of components present; samples with narrow and high temperature spectra record only R1 components, whereas those with broader blocking temperature spectra record multicomponent (R1-N1 and R1-N1-R2) remanences. The common occurrence of detachment faults in slow and ultra-slow spreading oceanic crust suggests they accommodate a significant component of plate divergence. However, the sub-surface geometry of oceanic detachment faults remains unclear. Competing models involve either: (a) displacement on planar, low-angle faults with little tectonic rotation; or (b) progressive shallowing by rotation of initially steeply dipping faults as a result of flexural unloading (the "rolling-hinge" model). We resolve this debate using paleomagnetic remanences as a marker for tectonic rotation of the Atlantis Massif footwall. Previous ODP/IODP palaeomagnetic studies have been restricted to analysis of magnetic inclination data, since hard-rock core pieces are azimuthally unoriented and free to rotate in the core barrel. For the first time we have overcome this limitation by independently reorienting core pieces to a true geographic reference frame by correlating structures in individual pieces with those identified from oriented imagery of the borehole wall. This allows reorientation of paleomagnetic data and subsequent tectonic interpretation without the need for a priori assumptions on the azimuth of the rotation axis. 
Results indicate a 46°±6° counterclockwise rotation of the footwall around a MAR-parallel horizontal axis trending 011°±6°. This provides unequivocal confirmation of the key prediction of flexural, rolling-hinge models for oceanic core complexes, whereby faults initiate at higher dips and rotate to their present day low angle geometries.
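
    The reorientation arithmetic behind such a tectonic-rotation estimate can be sketched with a Rodrigues rotation of a remanence direction about a horizontal axis; the axis trend and rotation angle below follow the abstract, while the starting direction is invented for illustration.

    ```python
    import numpy as np

    def dir_to_vec(dec, inc):
        """Declination/inclination (degrees) -> unit vector in (N, E, Down) coords."""
        d, i = np.radians(dec), np.radians(inc)
        return np.array([np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)])

    def vec_to_dir(v):
        """Unit vector (N, E, Down) -> (declination, inclination) in degrees."""
        dec = np.degrees(np.arctan2(v[1], v[0])) % 360.0
        inc = np.degrees(np.arcsin(v[2] / np.linalg.norm(v)))
        return dec, inc

    def rotate(v, trend, angle):
        """Rodrigues rotation of v about a horizontal axis with the given
        trend (degrees clockwise from north), by `angle` degrees."""
        t, a = np.radians(trend), np.radians(angle)
        k = np.array([np.cos(t), np.sin(t), 0.0])  # horizontal rotation axis
        return (v * np.cos(a) + np.cross(k, v) * np.sin(a)
                + k * np.dot(k, v) * (1.0 - np.cos(a)))

    # Hypothetical example: apply a 46 deg rotation about an axis trending
    # 011 deg (values from the abstract; the starting direction is invented).
    v = dir_to_vec(dec=350.0, inc=20.0)
    restored = rotate(v, trend=11.0, angle=46.0)
    print(vec_to_dir(restored))
    ```

    Comparing the rotated direction with the expected geocentric axial dipole direction for the site is what constrains the sense and magnitude of footwall rotation.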

  2. TOGA: A TOUGH code for modeling three-phase, multi-component, and non-isothermal processes involved in CO2-based Enhanced Oil Recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Lehua; Oldenburg, Curtis M.

    TOGA is a numerical reservoir simulator for modeling non-isothermal flow and transport of water, CO2, multicomponent oil, and related gas components for applications including CO2-enhanced oil recovery (CO2-EOR) and geologic carbon sequestration in depleted oil and gas reservoirs. TOGA uses an approach based on the Peng-Robinson equation of state (PR-EOS) to calculate the thermophysical properties of the gas and oil phases including the gas/oil components dissolved in the aqueous phase, and uses a mixing model to estimate the thermophysical properties of the aqueous phase. The phase behavior (e.g., occurrence and disappearance of the three phases, gas + oil + aqueous) and the partitioning of non-aqueous components (e.g., CO2, CH4, and n-oil components) between coexisting phases are modeled using K-values derived from assumptions of equal fugacity that have been demonstrated to be very accurate by comparison to measured data. Models for saturated (water) vapor pressure and water solubility (in the oil phase) are used to calculate the partitioning of the water (H2O) component between the gas and oil phases. All components (e.g., CO2, H2O, and n hydrocarbon components) are allowed to be present in all phases (aqueous, gaseous, and oil). TOGA uses a multiphase version of Darcy’s Law to model flow and transport through porous media of mixtures with up to three phases over a range of pressures and temperatures appropriate to hydrocarbon recovery and geologic carbon sequestration systems. Transport of the gaseous and dissolved components is by advection and Fickian molecular diffusion. New methods for phase partitioning and thermophysical property modeling in TOGA have been validated against experimental data published in the literature for describing phase partitioning and phase behavior. Flow and transport have been verified by testing against related TOUGH2 EOS modules and CMG.
The code has also been validated against a CO2-EOR experimental core flood involving flow of three phases and 12 components. Results of simulations of a hypothetical 3D CO2-EOR problem involving three phases and multiple components are presented to demonstrate the field-scale capabilities of the new code. This user guide provides instructions for use and sample problems for verification and demonstration.
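
    The K-value partitioning described above reduces, for a simple two-phase (gas/oil) case, to the classical Rachford-Rice flash; this minimal sketch is not TOGA's implementation, and the overall composition and K-values below are invented for illustration.

    ```python
    def rachford_rice(z, K, tol=1e-10, itmax=200):
        """Solve for the vapor fraction V in a two-phase flash, given overall
        mole fractions z_i and K-values K_i = y_i / x_i, by bisection on
        f(V) = sum_i z_i * (K_i - 1) / (1 + V * (K_i - 1)) = 0."""
        f = lambda V: sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                          for zi, Ki in zip(z, K))
        lo, hi = 0.0, 1.0
        for _ in range(itmax):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
            if hi - lo < tol:
                break
        V = 0.5 * (lo + hi)
        x = [zi / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid phase
        y = [Ki * xi for Ki, xi in zip(K, x)]                      # vapor phase
        return V, x, y

    # Hypothetical three-component feed (not from the TOGA user guide):
    V, x, y = rachford_rice([0.5, 0.3, 0.2], [3.0, 0.8, 0.1])
    print(round(V, 3))
    ```

    In an equal-fugacity formulation like TOGA's, the K-values themselves come from the PR-EOS rather than being fixed constants, but the material-balance solve has this same shape.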

  3. Fiber Diffraction Data Indicate a Hollow Core for the Alzheimer’s Aβ Three-fold Symmetric Fibril

    PubMed Central

    McDonald, Michele; Box, Hayden; Bian, Wen; Kendall, Amy; Tycko, Robert; Stubbs, Gerald

    2012-01-01

    Amyloid β protein (Aβ), the principal component of the extracellular plaques found in the brains of Alzheimer’s disease patients, forms fibrils well suited to structural study by X-ray fiber diffraction. Fiber diffraction patterns from the 40-residue form Aβ(1–40) confirm a number of features of a three-fold symmetric Aβ model from solid state NMR, but suggest that the fibrils have a hollow core, not present in the original ssNMR models. Diffraction patterns calculated from a revised hollow three-fold model with a more regular β-sheet structure are in much better agreement with the observed diffraction data than patterns calculated from the original ssNMR model. Refinement of a hollow-core model against ssNMR data led to a revised ssNMR model, similar to the fiber diffraction model. PMID:22903058

  4. Neutron flux and power in RTP core-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rabir, Mohamad Hairie, E-mail: m-hairie@nuclearmalaysia.gov.my; Zin, Muhammad Rawi Md; Usang, Mark Dennis

    The PUSPATI TRIGA Reactor achieved initial criticality on June 28, 1982. The reactor is designed to effectively support various fields of basic nuclear research, manpower training, and production of radioisotopes. This paper describes the calculation of reactor parameters for the PUSPATI TRIGA REACTOR (RTP), focusing on the application of the developed 3D reactor model for criticality calculation and analysis of the power and neutron flux distribution of the TRIGA core. The 3D continuous-energy Monte Carlo code MCNP was used to develop a versatile and accurate full model of the TRIGA reactor. The model represents in detail all important components of the core with essentially no physical approximation. The consistency and accuracy of the developed RTP MCNP model were established by comparing calculations to the available experimental results and to TRIGLAV code calculations.

  5. OpenDanubia - An integrated, modular simulation system to support regional water resource management

    NASA Astrophysics Data System (ADS)

    Muerth, M.; Waldmann, D.; Heinzeller, C.; Hennicker, R.; Mauser, W.

    2012-04-01

    The completed, multi-disciplinary research project GLOWA-Danube developed a regional-scale, integrated modeling system, which was successfully applied to the 77,000 km2 Upper Danube basin to investigate the impact of Global Change on both the natural and the anthropogenic water cycle. At the end of the last project phase, the integrated modeling system was transferred into the open-source project OpenDanubia, which now provides both the core system and all major model components to the general public. First, this enables decision makers from government, business and management to use OpenDanubia as a tool for proactive management of water resources in the context of global change. Second, the model framework supporting integrated simulations and all simulation models developed for OpenDanubia in the scope of GLOWA-Danube remain available for future developments and research questions. OpenDanubia allows for the investigation of water-related scenarios considering different ecological and economic aspects, supporting both scientists and policy makers in designing policies for sustainable environmental management. OpenDanubia is designed as a framework-based, distributed system. The model system couples spatially distributed physical and socio-economic processes at run-time, taking into account their mutual influence. To simulate the potential future impacts of Global Change on agriculture, industrial production, water supply, households and tourism businesses, so-called deep actor models are implemented in OpenDanubia. All important water-related fluxes and storages in the natural environment are implemented in OpenDanubia as spatially explicit, process-based modules. This includes the land-surface water and energy balance, dynamic plant water uptake, groundwater recharge and flow, as well as river routing and reservoirs.
Although the complete system is relatively demanding in its data and hardware requirements, the modular structure and the generic core system (Core Framework, Actor Framework) allow application in new regions and the selection of a reduced set of modules for simulation. As part of the open-source initiative in GLOWA-Danube (opendanubia.glowa-danube.de), comprehensive documentation for system installation was created, and the program code of both the framework and all major components is licensed under the GNU General Public License. In addition, some helpful programs and scripts necessary for the operation and for processing input and result data sets are provided.

  6. Curriculum Providing Cognitive Knowledge and Problem-Solving Skills for Anesthesia Systems-Based Practice

    PubMed Central

    Wachtel, Ruth E.; Dexter, Franklin

    2010-01-01

    Background Residency programs accredited by the ACGME are required to teach core competencies, including systems-based practice (SBP). Projects are important for satisfying this competency, but the level of knowledge and problem-solving skills required presupposes a basic understanding of the field. The responsibilities of anesthesiologists include the coordination of patient flow in the surgical suite. Familiarity with this topic is crucial for many improvement projects. Intervention A course in operations research for surgical services was originally developed for hospital administration students. It satisfies 2 of the Institute of Medicine's core competencies for health professionals: evidence-based practice and work in interdisciplinary teams. The course lasts 3.5 days (e.g., 2 weekends) and consists of 45 cognitive objectives taught using 7 published articles, 10 lectures, and 156 computer-assisted problem-solving exercises based on 17 case studies. We tested the hypothesis that the cognitive objectives of the curriculum provide the knowledge and problem-solving skills necessary to perform projects that satisfy the SBP competency. Standardized terminology was used to define each component of the SBP competency at the minimum level of knowledge needed. The 8 components of the competency were examined independently. Findings Most cognitive objectives contributed to at least 4 of the 8 core components of the SBP competency. Each component of SBP is addressed at the minimum requirement level of "exemplify" by at least 6 objectives. There is at least 1 cognitive objective at the level of "summarize" for each SBP component. Conclusions A curriculum in operating room management can provide the knowledge and problem-solving skills anesthesiologists need for participation in projects that satisfy the SBP competency. PMID:22132289

  7. Going beyond the second virial coefficient in the hadron resonance gas model

    NASA Astrophysics Data System (ADS)

    Bugaev, K. A.; Sagun, V. V.; Ivanytskyi, A. I.; Yakimenko, I. P.; Nikonov, E. G.; Taranenko, A. V.; Zinovjev, G. M.

    2018-02-01

    We develop a novel formulation of the hadron resonance gas model which, besides a hard-core repulsion, explicitly accounts for the surface tension induced by the interaction between the particles. Such an equation of state allows us to go beyond the Van der Waals approximation for any number of different hard-core radii. A comparison with the Carnahan-Starling equation of state shows that the new model is valid up to packing fractions of 0.2-0.22, while the usual Van der Waals model is inapplicable at packing fractions above 0.1-0.11. Moreover, it is shown that the equation of state with induced surface tension is softer than that of hard spheres and remains causal at higher particle densities. The great advantage of our model is that only two equations need to be solved, and neither their number nor their form depends on the values of the hard-core radii used for the different hadronic resonances. This leads to a significant mathematical simplification compared to other versions of truly multi-component hadron resonance gas models. Using this equation of state we obtain a high-quality fit of the ALICE hadron multiplicities measured at a center-of-mass energy of 2.76 TeV per nucleon, and we find that the dependence of χ2/ndf on the temperature has a single global minimum in the traditional hadron resonance gas model with multi-component hard-core repulsion. We also find two local minima of χ2/ndf in the model in which the proper volume of each hadron is proportional to its mass; however, in the latter model the second local minimum, located at higher temperatures, always appears far above the limit of the model's applicability.
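
    The breakdown of the plain excluded-volume treatment near packing fraction 0.1 can be illustrated by comparing compressibility factors; this sketch shows only the Van der Waals versus Carnahan-Starling contrast quoted above, not the induced-surface-tension model itself.

    ```python
    def Z_vdw(eta):
        """Compressibility factor p/(n*T) of a Boltzmann gas with the Van der
        Waals excluded-volume correction b = 4 * (particle volume), expressed
        in terms of the packing fraction eta: Z = 1 / (1 - 4*eta)."""
        return 1.0 / (1.0 - 4.0 * eta)

    def Z_cs(eta):
        """Carnahan-Starling compressibility factor for hard spheres."""
        return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta)**3

    # The two agree at low density but diverge quickly; the VdW form even
    # develops a spurious pole at eta = 0.25.
    for eta in (0.05, 0.10, 0.20):
        print(eta, round(Z_vdw(eta), 3), round(Z_cs(eta), 3))
    ```

    Already at eta = 0.2 the Van der Waals pressure is roughly double the Carnahan-Starling one, which is the quantitative sense in which the usual model becomes inapplicable above packing fractions of about 0.1.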

  8. Innovative use of wood-plastic-composites (WPC) as a core material in the sandwich injection molding process

    NASA Astrophysics Data System (ADS)

    Moritzer, Elmar; Martin, Yannick

    2016-03-01

    The demand for materials based on renewable raw materials has risen steadily in recent years. With society's increasing interest in climate protection and sustainability, natural-based materials such as wood-plastic-composites (WPC) have gained market share thanks to their positive reputation. Due to advantages over unreinforced plastics, such as cost reduction and weight savings, WPC can be used in a wide range of applications. Additionally, the fibers increase mechanical properties such as rigidity and strength compared to unreinforced polymers. The combination of plastic and wood joins the positive properties of both components in an innovative material. Despite the many positive properties of wood-plastic-composites, there are also negative characteristics that prevent the use of WPC in many product areas, such as automotive interiors. In particular, increased water uptake, which may result in swelling of near-surface particles, increased odor emissions, poor surface texture and distortion of the components are unacceptable for many applications. The sandwich injection molding process can improve this situation by enclosing the WPC in a pure polymer, eliminating its negative properties. A layered structure of skin and core material is produced, wherein the core component is completely enclosed by the skin component. The suitability of WPC as the core component in sandwich injection molding has not yet been investigated. In this study, the possibilities and limitations of the use of WPC are presented. The analysis focuses on different fiber types, fiber contents, skin materials and their effect on the filling behavior.

  9. Reliability Prediction of Ontology-Based Service Compositions Using Petri Net and Time Series Models

    PubMed Central

    Li, Jia; Xia, Yunni; Luo, Xin

    2014-01-01

    OWL-S, one of the most important Semantic Web service ontologies proposed to date, provides a core ontological framework and guidelines for describing the properties and capabilities of web services in an unambiguous, computer-interpretable form. Predicting the reliability of composite service processes specified in OWL-S allows service users to decide whether a process meets their quantitative quality requirements. In this study, we consider the runtime quality of services to be fluctuating and introduce a dynamic framework to predict the runtime reliability of services specified in OWL-S, employing a non-Markovian stochastic Petri net (NMSPN) and a time series model. The framework includes the following steps: obtaining the historical response-time series of individual service components; fitting these series with an autoregressive moving-average (ARMA) model and predicting the future firing rates of service components; mapping the OWL-S process into an NMSPN model; and employing the predicted firing rates as input to the NMSPN model and calculating the normal completion probability as the reliability estimate. In the case study, a comparison between the static model and our approach based on experimental data is presented, and it is shown that our approach achieves higher prediction accuracy. PMID:24688429
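
    The response-time forecasting step can be sketched in a simplified form; the framework fits full ARMA models, but a least-squares AR(1) fit on a synthetic response-time history conveys the idea (all data below are invented, and the model order is a deliberate simplification).

    ```python
    import numpy as np

    def fit_ar1(series):
        """Least-squares fit of the recurrence x_t = c + phi * x_{t-1} + noise."""
        x = np.asarray(series, dtype=float)
        X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
        c, phi = np.linalg.lstsq(X, x[1:], rcond=None)[0]
        return c, phi

    def forecast(c, phi, last, steps):
        """Iterate the fitted recurrence to predict future response times."""
        out = []
        for _ in range(steps):
            last = c + phi * last
            out.append(last)
        return out

    # Hypothetical response-time history (ms) for one service component,
    # generated from a known AR(1) process so the fit can be sanity-checked.
    rng = np.random.default_rng(0)
    hist = [100.0]
    for _ in range(200):
        hist.append(20.0 + 0.8 * hist[-1] + rng.normal(0.0, 2.0))
    c, phi = fit_ar1(hist)
    print(round(c, 1), round(phi, 2))  # should recover roughly c=20, phi=0.8
    ```

    In the paper's pipeline the forecast values are then converted into NMSPN transition firing rates, so that the reliability estimate tracks the fluctuating runtime quality rather than a static average.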

  10. Engineering Promoter Architecture in Oleaginous Yeast Yarrowia lipolytica.

    PubMed

    Shabbir Hussain, Murtaza; Gambill, Lauren; Smith, Spencer; Blenner, Mark A

    2016-03-18

    Eukaryotic promoters have a complex architecture controlling both the strength and the timing of gene transcription, spanning up to thousands of bases from the initiation site. This complexity makes rational fine-tuning of promoters in fungi difficult to predict; however, this very same complexity enables multiple possible strategies for engineering promoter strength. Here, we studied promoter architecture in the oleaginous yeast Yarrowia lipolytica. While recent studies have focused on upstream activating sequences, we systematically examined several promoter components common in fungal promoters, including upstream activating sequences, proximal promoter sequences, core promoters, and the TATA box, in autonomously replicating expression plasmids and integrated into the genome. Our findings show that promoter strength can be fine-tuned through engineering of the TATA box sequence, core promoter, and upstream activating sequences. Additionally, we identified a previously unreported oleic acid-responsive transcription enhancement in the XPR2 upstream activating sequences, which illustrates the complexity of fungal promoters. The promoters engineered here provide new genetic tools for metabolic engineering in Y. lipolytica and suggest promoter engineering strategies that may be useful in other non-model fungal systems.

  11. Modeling and studying of white light emitting diodes based on CdS/ZnS spherical quantum dots

    NASA Astrophysics Data System (ADS)

    Hasanirokh, K.; Asgari, A.

    2018-07-01

    In this paper, we propose a quantum dot (QD) based white light emitting diode (WLED) structure and theoretically study the material gain and quantum efficiency of the system. We consider spherical QDs with a II-VI semiconductor core (CdS) covered with a wider band gap semiconductor shell (ZnS). In order to generate a white light spectrum, we use layers with different dot sizes that emit blue, green and red colors. The blue emission originating from the CdS core combines with green/orange components originating from the ZnS shell to create an efficient white light emission. To model this device, we first solve the Schrödinger and Poisson equations self-consistently and obtain the eigenenergies and wave functions. Then, we calculate the optical gain and internal quantum efficiency (IQE) of a CdS/ZnS LED sample. We investigate the effects of structural parameters on the optical properties of the WLED. The numerical results show that the gain profile and IQE curves depend strongly on structural parameters such as dot size, carrier density and volume scaling parameter. The gain profile becomes higher and wider with increasing core radius, while it becomes lower and narrower with increasing shell thickness. Furthermore, it is found that the volume scaling parameter can control the system quantum efficiency.
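
The size dependence of the emission reported above can be illustrated with a textbook estimate, far cruder than the paper's self-consistent Schrödinger-Poisson solve: the ground-state energy of a carrier confined in an infinite spherical well, E1 = ħ²π²/(2 m* R²). The CdS electron effective mass (~0.18 m0) is a literature value assumed here, not taken from the study.

```python
# Back-of-the-envelope confinement energy for a spherical quantum dot:
# smaller core radius -> larger confinement energy -> blue-shifted emission.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0 = 9.1093837015e-31    # electron rest mass, kg
EV = 1.602176634e-19     # J per eV

def confinement_energy_ev(radius_nm, m_eff=0.18):
    """Ground-state energy (eV) of a particle in an infinite spherical well."""
    r = radius_nm * 1e-9
    return (HBAR * math.pi) ** 2 / (2 * m_eff * M0 * r * r) / EV

for r_nm in (2.0, 3.0, 4.0):
    print(f"R = {r_nm} nm: E1 = {confinement_energy_ev(r_nm):.3f} eV")
```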

  12. Parameters of oscillation generation regions in open star cluster models

    NASA Astrophysics Data System (ADS)

    Danilov, V. M.; Putkov, S. I.

    2017-07-01

    We determine the masses and radii of central regions of open star cluster (OCL) models with small or zero entropy production and estimate the masses of oscillation generation regions in cluster models based on the phase-space coordinates of stars. The radii of such regions are close to the core radii of the OCL models. We develop a new method for estimating total OCL masses based on the cluster core mass, the cluster and cluster core radii, and the radial distribution of stars. This method yields estimates of the dynamical masses of the Pleiades, Praesepe, and M67, which agree well with estimates of the total masses of the corresponding clusters based on proper motions and spectroscopic data for cluster stars. We construct the spectra and dispersion curves of the oscillations of the field of azimuthal velocities v_φ in OCL models. Weak, low-amplitude unstable oscillations of v_φ develop in cluster models near the cluster core boundary, and weak damped oscillations of v_φ often develop at frequencies close to those of more powerful oscillations, which may reduce the degree of non-stationarity in OCL models. We determine the number and parameters of such oscillations near the core boundaries of cluster models. Such oscillations point to the possible role that gradient instability near the cores of cluster models plays in the decrease of the mass of the oscillation generation regions and the production of entropy in the cores of OCL models with massive extended cores.

  13. Mathematical Modeling: A Structured Process

    ERIC Educational Resources Information Center

    Anhalt, Cynthia Oropesa; Cortez, Ricardo

    2015-01-01

    Mathematical modeling, in which students use mathematics to explain or interpret physical, social, or scientific phenomena, is an essential component of the high school curriculum. The Common Core State Standards for Mathematics (CCSSM) classify modeling as a K-12 standard for mathematical practice and as a conceptual category for high school…

  14. Application of reliability-centered-maintenance to BWR ECCS motor operator valve performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.; Choi, Y.A.

    1993-01-01

    This paper describes the application of reliability-centered maintenance (RCM) methods to plant probabilistic risk assessment (PRA) and safety analyses for four boiling water reactor emergency core cooling systems (ECCSs): (1) high-pressure coolant injection (HPCI); (2) reactor core isolation cooling (RCIC); (3) residual heat removal (RHR); and (4) core spray systems. Reliability-centered maintenance is a system function-based technique for improving a preventive maintenance program that is applied on a component basis. Those components that truly affect plant function are identified, and maintenance tasks are focused on preventing their failures. The RCM evaluation establishes the relevant criteria that preserve system function so that an RCM-focused approach can be flexible and dynamic.

  15. Proposed changes in personality and personality disorder assessment and diagnosis for DSM-5 Part I: Description and rationale.

    PubMed

    Skodol, Andrew E; Clark, Lee Anna; Bender, Donna S; Krueger, Robert F; Morey, Leslie C; Verheul, Roel; Alarcon, Renato D; Bell, Carl C; Siever, Larry J; Oldham, John M

    2011-01-01

    A major reconceptualization of personality psychopathology has been proposed for DSM-5 that identifies core impairments in personality functioning, pathological personality traits, and prominent pathological personality types. A comprehensive personality assessment consists of four components: levels of personality functioning, personality disorder types, pathological personality trait domains and facets, and general criteria for personality disorder. This four-part assessment focuses attention on identifying personality psychopathology with increasing degrees of specificity, based on a clinician's available time, information, and expertise. In Part I of this two-part article, we describe the components of the new model and present brief theoretical and empirical rationales for each. In Part II, we will illustrate the clinical application of the model with vignettes of patients with varying degrees of personality psychopathology, to show how assessments might be conducted and diagnoses reached.

  16. CONFIG: Qualitative simulation tool for analyzing behavior of engineering devices

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Basham, Bryan D.; Harris, Richard A.

    1987-01-01

    To design failure management expert systems, engineers mentally analyze the effects of failures and procedures as they propagate through device configurations. CONFIG is a generic device modeling tool for use in discrete event simulation, to support such analyses. CONFIG permits graphical modeling of device configurations and qualitative specification of local operating modes of device components. Computation requirements are reduced by focussing the level of component description on operating modes and failure modes, and specifying qualitative ranges of variables relative to mode transition boundaries. Simulation processing occurs only when modes change or variables cross qualitative boundaries. Device models are built graphically, using components from libraries. Components are connected at ports by graphical relations that define data flow. The core of a component model is its state transition diagram, which specifies modes of operation and transitions among them.
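
The mode-centered component model at the core of CONFIG can be sketched as a state-transition table that only triggers simulation processing when a mode actually changes. The class and the valve/mode names below are invented for illustration and are not CONFIG's actual representation.

```python
# A minimal sketch of a CONFIG-style component: operating and failure modes in
# a transition table, with processing only on mode changes.
class Component:
    def __init__(self, name, transitions, mode):
        self.name = name
        self.transitions = transitions  # {(mode, event): next_mode}
        self.mode = mode

    def handle(self, event):
        """Apply an event; return True only if the mode actually changed."""
        nxt = self.transitions.get((self.mode, event))
        if nxt is None or nxt == self.mode:
            return False          # no mode change -> no simulation processing
        self.mode = nxt
        return True

valve = Component(
    "isolation_valve",
    transitions={
        ("closed", "open_cmd"): "open",
        ("open", "close_cmd"): "closed",
        ("open", "motor_fault"): "stuck_open",   # failure mode
    },
    mode="closed",
)

valve.handle("open_cmd")     # closed -> open
valve.handle("motor_fault")  # open -> stuck_open; failures propagate from here
```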

  17. Toward a more efficient and scalable checkpoint/restart mechanism in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Anantharaj, Valentine

    2015-04-01

    The number of cores (both CPU and accelerator) in large-scale systems has been increasing rapidly over the past several years. In 2008, there were only 5 systems in the Top500 list that had over 100,000 total cores (including accelerator cores), whereas the number of systems with such capability had jumped to 31 by Nov 2014. This growth, however, has also increased hardware failure rates, necessitating the implementation of fault tolerance mechanisms in applications. The checkpoint and restart (C/R) approach is commonly used to save the state of the application and restart at a later time, either after failure or to continue execution of experiments. The implementation of an efficient C/R mechanism will make it more affordable to output the necessary C/R files more frequently. The availability of larger systems (more nodes, memory and cores) has also facilitated the scaling of applications. Nowadays, it is common to conduct coupled global climate simulation experiments at 1 deg horizontal resolution (atmosphere), often requiring about 10^3 cores. At the same time, a few climate modeling teams that have access to a dedicated cluster and/or large-scale systems are involved in modeling experiments at 0.25 deg horizontal resolution (atmosphere) and 0.1 deg resolution for the ocean. These ultrascale configurations require on the order of 10^4 to 10^5 cores. It is necessary not only for the numerical algorithms to scale efficiently but also for the input/output (IO) mechanism to scale accordingly. An ongoing series of ultrascale climate simulations, using the Titan supercomputer at the Oak Ridge Leadership Computing Facility (ORNL), is based on the spectral element dynamical core of the Community Atmosphere Model (CAM-SE), which is a component of the Community Earth System Model and the DOE Accelerated Climate Model for Energy (ACME). The CAM-SE dynamical core for a 0.25 deg configuration has been shown to scale efficiently across 100,000 CPU cores.
At this scale, there is an increased risk that the simulation could be terminated due to hardware failures, resulting in a loss that could be as high as 10^5 to 10^6 Titan core-hours. Increasing the frequency of the output of C/R files could mitigate this loss, but at the cost of additional C/R overhead. We are testing a more efficient C/R mechanism in CAM-SE. Our early implementation has demonstrated a nearly 3X performance improvement for a 1 deg CAM-SE (with CAM5 physics and MOZART chemistry) configuration using nearly 10^3 cores. We are in the process of scaling our implementation to 10^5 cores. This would allow us to run ultrascale simulations with more sophisticated physics and chemistry options while making better utilization of resources.
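
The trade-off described above, between more frequent checkpoints and added C/R overhead, is classically captured by Young's approximation for the optimal checkpoint interval, t_opt = sqrt(2·C·M), where C is the cost of writing one checkpoint and M the mean time between failures. This is a standard formula, not something specific to the CAM-SE work; the numbers below are hypothetical.

```python
# Young's approximation for the optimal compute time between checkpoints.
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Optimal compute time between checkpoints (seconds)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Hypothetical numbers: a 60 s checkpoint write and a 24 h system MTBF.
t_opt = young_interval(60.0, 24 * 3600.0)
print(f"checkpoint every {t_opt / 3600.0:.2f} h")
```

A cheaper C/R mechanism lowers C, which shrinks t_opt and makes frequent checkpointing affordable, exactly the motivation stated in the abstract.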

  18. Environmental history of Lake Hovsgul from physical interpretation of remanent magnetization endmember analysis

    NASA Astrophysics Data System (ADS)

    Kosareva, Lina; Fabian, Karl; Shcherbakov, Valera; Nurgaliev, Danis

    2016-04-01

    The environmental history of Lake Hovsgul (Mongolia) is studied based on magnetic measurements of the core KDP-01. The drill hole reached a maximum depth of 53 m, from which sediment cores with a total length of 48 m were recovered; coring gaps are due to the applied drilling technology. Following the approach of Heslop and Dillon (2007), we develop a way to decompose the total magnetic fraction of a sample into three distinct real, rather than virtual, mineralogical components. For this, we first apply end-member non-negative matrix factorization (NMF) modeling to unmix the magnetic remanence curves. With these results in hand, we decompose the hysteresis loops, backfield curves, and strong-field thermomagnetic curves into components that can now be interpreted as specific mineralogical fractions. The likely interpretation of the components obtained is as follows. The soft component is a coarse-grained magnetite fraction, as typically results from terrigenous influx via fluvial transport. The second component is a sharply defined magnetite grain-size fraction in the 30-100 nm range, which in lake environments is related to magnetosome chains of magnetotactic bacteria. It apparently covaries with a diamagnetic mineral, most likely carbonate, which indicates a link to organic authigenic fractions and fits biogenic magnetite from magnetotactic bacteria. The third component has a very high coercivity of around 85 mT and is identified as a mixture of biogenic and abiotic greigite, common in suboxic/anoxic sediments. The results of this combined study are used to infer paleoclimatic and paleogeographic conditions in the Lake Hovsgul area over the last million years. A correlation between outbursts of biogenic magnetite and greigite content and warm periods is found.
In some parts of the core, the greigite contribution dominates the magnetic signal, which we link to the onset of icy anoxic environmental conditions. The work was carried out under the Russian Government's Program of Competitive Growth of Kazan Federal University, supported by the grant provided to Kazan State University for performing the state program in the field of scientific research, and partially supported by the Russian Foundation for Basic Research (grant no. 14_05_00785).
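
The end-member unmixing step can be sketched with a plain-NumPy non-negative matrix factorization (Lee-Seung multiplicative updates): measured remanence curves V (samples × field steps) are factored as V ≈ W·H into mixing coefficients W and end-member curves H. The two synthetic acquisition curves below are invented stand-ins for real magnetic data.

```python
# Tiny NMF by multiplicative updates, as a stand-in for the end-member
# modelling used in the study to unmix magnetic remanence curves.
import numpy as np

def nmf(V, k, iters=800, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update end-member curves
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing coefficients
    return W, H

# Two synthetic end-member curves mixed in known proportions.
x = np.linspace(0, 1, 50)
soft, hard = 1 - np.exp(-8 * x), x ** 2
V = np.array([0.7 * soft + 0.3 * hard,
              0.2 * soft + 0.8 * hard,
              0.5 * soft + 0.5 * hard])
W, H = nmf(V, k=2)
print("max reconstruction error:", float(np.max(np.abs(W @ H - V))))
```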

  19. Turbulence Measurements in the Near Field of a Wingtip Vortex

    NASA Technical Reports Server (NTRS)

    Chow, Jim; Zilliac, Greg; Bradshaw, Peter

    1997-01-01

    The roll-up of a wingtip vortex, at a chord-based Reynolds number of 4.6 million, was studied with an emphasis on suction-side and near-wake measurements. The research was conducted in a 32 in. x 48 in. low-speed wind tunnel. The half-wing model had a semi-span of 36 in., a chord of 48 in., and a rounded tip. Seven-hole pressure probe measurements of the velocity field surrounding the wingtip showed that a large axial velocity of up to 1.77 U(sub infinity) developed in the vortex core. This level of axial velocity had not been previously measured. Triple-wire probes were used to measure all components of the Reynolds stress tensor. It was determined from correlation measurements that meandering of the vortex was small and did not appreciably contribute to the turbulence measurements. The flow was found to be turbulent in the near field (as high as 24 percent RMS w-velocity on the edge of the core), and the turbulence decayed quickly with streamwise distance because of the nearly solid-body rotation of the vortex core mean flow. A streamwise variation of the location of peak levels of turbulence, relative to the core centerline, was also found. Close to the trailing edge of the wing, the peak shear stress levels were found at the edge of the vortex core, whereas in the most downstream wake planes they occurred at a radius roughly equal to one-third of the vortex core radius. The Reynolds shear stresses were not aligned with the mean strain rate, indicating that an isotropic-eddy-viscosity based prediction method cannot accurately model the turbulence in the vortex. In cylindrical coordinates, with the origin at the vortex centerline, the radial normal stress was found to be larger than the circumferential one.

  20. 3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"

    NASA Technical Reports Server (NTRS)

    Douglass, A.

    2005-01-01

    The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
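
The "numerical transport scheme" of the second lecture can be illustrated with the simplest stable grid-point scheme: first-order upwind advection of a tracer q with constant wind u > 0, q_i(n+1) = q_i - c·(q_i - q_{i-1}), where c = u·Δt/Δx is the Courant number (c ≤ 1 for stability). Shock-capturing schemes add flux limiters on top of this idea; the toy setup below is illustrative, not from the lectures.

```python
# First-order upwind advection on a periodic 1-D grid.
def upwind_step(q, c):
    """One upwind step; c is the Courant number (0 <= c <= 1)."""
    n = len(q)
    return [q[i] - c * (q[i] - q[i - 1]) for i in range(n)]  # q[-1] wraps

# Advect a top-hat tracer one full revolution around the periodic domain.
n, c = 100, 0.5
q = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]
total0 = sum(q)
for _ in range(int(n / c)):   # n/c steps moves the profile once around
    q = upwind_step(q, c)
# The scheme conserves tracer mass exactly, but its numerical diffusion
# smears the top-hat -- the motivation for higher-order shock-capturing schemes.
```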

  1. Conceptual model of knowledge base system

    NASA Astrophysics Data System (ADS)

    Naykhanova, L. V.; Naykhanova, I. V.

    2018-05-01

    This article presents a conceptual model of a knowledge-based system of the production-system type. The production system is intended to automate problems whose solution is rigidly conditioned by legislation. A core component of the system is the knowledge base, which consists of a set of facts, a set of rules, a cognitive map and an ontology. The cognitive map implements the control strategy, and the ontology implements the explanation mechanism. Representing situation-recognition knowledge in the form of rules makes it possible to describe knowledge of the pension legislation. This approach provides flexibility, originality and scalability: when the legislation changes, only the rule set needs to change, so a change of the legislation is not a big problem. The main advantage of the system is that it can easily be adapted to changes in the legislation.
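
The fact-set-plus-rule-set core described above can be sketched as a minimal forward-chaining production system. The pension-rule content below is invented for illustration, not taken from the article.

```python
# Minimal production system: fire rules against a fact set until fixpoint.
def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) with premises as a set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"age>=65"}, "age_condition_met"),
    ({"insured_30_years"}, "service_condition_met"),
    ({"age_condition_met", "service_condition_met"}, "pension_granted"),
]
result = forward_chain({"age>=65", "insured_30_years"}, rules)
# Changing the law means editing `rules`, not the engine -- the flexibility
# the abstract claims for the rule-based representation.
```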

  2. The development of learning materials based on core model to improve students’ learning outcomes in topic of Chemical Bonding

    NASA Astrophysics Data System (ADS)

    Avianti, R.; Suyatno; Sugiarto, B.

    2018-04-01

    This study aims to create appropriate learning materials based on the CORE (Connecting, Organizing, Reflecting, Extending) model to improve students' learning achievement in the topic of chemical bonding. The study used the 4-D model as its research design and a one-group pretest-posttest design for the treatment of the materials. The subjects were teaching materials based on the CORE model, tried out with 30 students of a grade-10 science class. Data collection involved validation, observation, tests, and questionnaires. The findings were that (1) all the contents were valid, and (2) the practicality and effectiveness of all the contents were good. The conclusion of this research is that the CORE model is appropriate for improving students' learning outcomes in studying chemical bonding.

  3. Modeling analysis of pulsed magnetization process of magnetic core based on inverse Jiles-Atherton model

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang

    2018-05-01

    The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
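
The parameter-identification step mentioned above can be sketched with a bare-bones particle swarm optimizer (PSO) minimizing a fitting error, as the study does for the inverse J-A parameters. The objective here is a simple quadratic stand-in, not the J-A model itself; all coefficients are conventional PSO defaults assumed for illustration.

```python
# Minimal particle swarm optimization: inertia + cognitive + social terms.
import random

def pso(objective, bounds, n_particles=20, iters=100, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = min(zip(pbest_val, pbest))[1][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest

# Stand-in objective: recover (a, k) = (2.0, 0.5) from a quadratic error bowl.
best = pso(lambda p: (p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2,
           bounds=[(0, 5), (0, 5)])
```

In the study's setting, the objective would be the misfit between measured and simulated hysteresis curves as a function of the inverse J-A parameters.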

  4. Fuzzy cognitive mapping in support of integrated ecosystem assessments: Developing a shared conceptual model among stakeholders.

    PubMed

    Vasslides, James M; Jensen, Olaf P

    2016-01-15

    Ecosystem-based approaches, including integrated ecosystem assessments, are a popular methodology being used to holistically address management issues in social-ecological systems worldwide. In this study we utilized fuzzy logic cognitive mapping to develop conceptual models of a complex estuarine system among four stakeholder groups. The average number of categories in an individual map was not significantly different among groups, and there were no significant differences between the groups in the average complexity or density indices of the individual maps. When ordered by their complexity scores, eight categories contributed to the top four rankings of the stakeholder groups, with six of the categories shared by at least half of the groups. While non-metric multidimensional scaling (nMDS) analysis displayed a high degree of overlap between the individual models across groups, there was also diversity within each stakeholder group. These findings suggest that while all of the stakeholders interviewed perceive the subject ecosystem as a complex series of social and ecological interconnections, there is a core set of components, present in most of the groups' models, that is crucial in managing the system towards some desired outcome. However, the variability in the connections between these core components and the rest of the categories influences the exact nature of these outcomes. Understanding the reasons behind these differences will be critical to developing a shared conceptual model that will be acceptable to all stakeholder groups and can serve as the basis for an integrated ecosystem assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.
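
Once a fuzzy cognitive map is built, it is typically run by iterating concept activations through the signed weight matrix with a squashing function until the state settles. The concepts and weights below are invented estuary-flavoured examples, not the study's actual maps.

```python
# One common FCM update rule: s_i <- sigmoid(s_i + sum_j w_ji * s_j).
import math

def fcm_step(state, weights, lam=1.0):
    n = len(state)
    nxt = []
    for i in range(n):
        total = state[i] + sum(weights[j][i] * state[j] for j in range(n))
        nxt.append(1.0 / (1.0 + math.exp(-lam * total)))  # squash to (0, 1)
    return nxt

# Concepts: 0 nutrient load, 1 algal blooms, 2 fish abundance.
W = [
    [0.0,  0.8,  0.0],   # nutrients promote blooms
    [0.0,  0.0, -0.7],   # blooms suppress fish
    [0.0,  0.0,  0.0],
]
state = [0.9, 0.1, 0.8]      # scenario: high nutrient load
for _ in range(20):          # iterate toward a fixed point
    state = fcm_step(state, W)
```

Comparing such fixed points across the four stakeholder groups' maps is one way the variability in connections translates into different predicted outcomes.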

  5. Advancing psychotherapy and evidence-based psychological interventions.

    PubMed

    Emmelkamp, Paul M G; David, Daniel; Beckers, Tom; Muris, Peter; Cuijpers, Pim; Lutz, Wolfgang; Andersson, Gerhard; Araya, Ricardo; Banos Rivera, Rosa M; Barkham, Michael; Berking, Matthias; Berger, Thomas; Botella, Christina; Carlbring, Per; Colom, Francesc; Essau, Cecilia; Hermans, Dirk; Hofmann, Stefan G; Knappe, Susanne; Ollendick, Thomas H; Raes, Filip; Rief, Winfried; Riper, Heleen; Van Der Oord, Saskia; Vervliet, Bram

    2014-01-01

    Psychological models of mental disorders guide research into psychological and environmental factors that elicit and maintain mental disorders as well as interventions to reduce them. This paper addresses four areas. (1) Psychological models of mental disorders have become increasingly transdiagnostic, focusing on core cognitive endophenotypes of psychopathology from an integrative cognitive psychology perspective rather than offering explanations for unitary mental disorders. It is argued that psychological interventions for mental disorders will increasingly target specific cognitive dysfunctions rather than symptom-based mental disorders as a result. (2) Psychotherapy research still lacks a comprehensive conceptual framework that brings together the wide variety of findings, models and perspectives. Analysing the state of the art in psychotherapy treatment research, "component analyses" aiming at an optimal identification of core ingredients and the mechanisms of change are highlighted as the core need towards improved efficacy and effectiveness of psychotherapy, and improved translation to routine care. (3) In order to provide more effective psychological interventions to children and adolescents, there is a need to develop new and/or improved psychotherapeutic interventions on the basis of developmental psychopathology research taking into account knowledge of mediators and moderators. Developmental neuroscience research might be instrumental to uncover associated aberrant brain processes in children and adolescents with mental health problems and to better examine mechanisms of their correction by means of psychotherapy and psychological interventions. (4) Psychotherapy research needs to broaden in terms of adoption of large-scale public health strategies and treatments that can be applied to more patients in a simpler and cost-effective way. Increased research on efficacy and moderators of Internet-based treatments and e-mental health tools (e.g. 
to support "real time" clinical decision-making to prevent treatment failure or relapse) might be one promising way forward. Copyright © 2013 John Wiley & Sons, Ltd.

  6. GeauxDock: Accelerating Structure-Based Virtual Screening with Heterogeneous Computing

    PubMed Central

    Fang, Ye; Ding, Yun; Feinstein, Wei P.; Koppelman, David M.; Moreno, Juana; Jarrell, Mark; Ramanujam, J.; Brylinski, Michal

    2016-01-01

    Computational modeling of drug binding to proteins is an integral component of direct drug design. Particularly, structure-based virtual screening is often used to perform large-scale modeling of putative associations between small organic molecules and their pharmacologically relevant protein targets. Because of a large number of drug candidates to be evaluated, an accurate and fast docking engine is a critical element of virtual screening. Consequently, highly optimized docking codes are of paramount importance for the effectiveness of virtual screening methods. In this communication, we describe the implementation, tuning and performance characteristics of GeauxDock, a recently developed molecular docking program. GeauxDock is built upon the Monte Carlo algorithm and features a novel scoring function combining physics-based energy terms with statistical and knowledge-based potentials. Developed specifically for heterogeneous computing platforms, the current version of GeauxDock can be deployed on modern, multi-core Central Processing Units (CPUs) as well as massively parallel accelerators, Intel Xeon Phi and NVIDIA Graphics Processing Unit (GPU). First, we carried out a thorough performance tuning of the high-level framework and the docking kernel to produce a fast serial code, which was then ported to shared-memory multi-core CPUs yielding a near-ideal scaling. Further, using Xeon Phi gives 1.9× performance improvement over a dual 10-core Xeon CPU, whereas the best GPU accelerator, GeForce GTX 980, achieves a speedup as high as 3.5×. On that account, GeauxDock can take advantage of modern heterogeneous architectures to considerably accelerate structure-based virtual screening applications. GeauxDock is open-sourced and publicly available at www.brylinski.org/geauxdock and https://figshare.com/articles/geauxdock_tar_gz/3205249. PMID:27420300
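
GeauxDock is built upon the Monte Carlo algorithm; the acceptance step at its heart is the standard Metropolis criterion, sketched here on a toy 1-D "energy" landscape. The real code perturbs ligand poses and scores them with its composite physics/knowledge-based function; this quadratic objective and the parameter values are illustrative assumptions only.

```python
# Metropolis Monte Carlo minimization on a toy energy landscape.
import math, random

def metropolis_minimize(energy, x0, step=0.5, temp=1.0, iters=5000, seed=7):
    rng = random.Random(seed)
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # perturb the "pose"
        e_cand = energy(cand)
        # Metropolis criterion: accept downhill always, uphill with prob e^(-dE/T).
        if e_cand <= e or rng.random() < math.exp(-(e_cand - e) / temp):
            x, e = cand, e_cand
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

best_x, best_e = metropolis_minimize(lambda x: (x - 3.0) ** 2, x0=0.0)
```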

  8. Space Launch System Base Heating Test: Environments and Base Flow Physics

    NASA Technical Reports Server (NTRS)

    Mehta, Manish; Knox, Kyle S.; Seaford, C. Mark; Dufrene, Aaron T.

    2016-01-01

    The NASA Space Launch System (SLS) vehicle is composed of four RS-25 liquid oxygen-hydrogen rocket engines in the core-stage and two 5-segment solid rocket boosters, and as a result six hot supersonic plumes interact within the aft section of the vehicle during flight. Due to the complex nature of rocket plume-induced flows within the launch vehicle base during ascent and a new vehicle configuration, sub-scale wind tunnel testing is required to reduce SLS base convective environment uncertainty and design risk levels. This hot-fire test program was conducted at the CUBRC Large Energy National Shock (LENS) II short-duration test facility to simulate flight from altitudes of 50 kft to 210 kft. The test program is a challenging and innovative effort that has not been attempted in 40+ years for a NASA vehicle. This presentation discusses the various trends of base convective heat flux and pressure as a function of altitude at various locations within the core-stage and booster base regions of the two-percent SLS wind tunnel model. In-depth understanding of the base flow physics is presented using the test data, infrared high-speed imaging and theory. The normalized test design environments are compared to various NASA semi-empirical numerical models to determine exceedance and conservatism of the flight-scaled test-derived base design environments. Brief discussion of thermal impact to the launch vehicle base components is also presented.

  10. A new method for teaching physical examination to junior medical students

    PubMed Central

    Sayma, Meelad; Williams, Hywel Rhys

    2016-01-01

    Introduction Teaching effective physical examination is a key component in the education of medical students. Preclinical medical students often have insufficient clinical knowledge to apply to physical examination recall, which may hinder their learning when taught through certain understanding-based models. This pilot project aimed to develop a method to teach physical examination to preclinical medical students using “core clinical cases”, overcoming the need for “rote” learning. Methods This project was developed utilizing three cycles of planning, action, and reflection. Thematic analysis of feedback was used to improve this model, and ensure it met student expectations. Results and discussion A model core clinical case developed in this project is described, with gout as the basis for a “foot and ankle” examination. Key limitations and difficulties encountered on implementation of this pilot are discussed for future users, including the difficulty encountered in “content overload”. Conclusion This approach aims to teach junior medical students physical examination through understanding, using a simulated patient environment. Robust research is now required to demonstrate efficacy and repeatability in the physical examination of other systems. PMID:26937208

  11. Gas hydrate saturations estimated from pore- and fracture-filling gas hydrate reservoirs in the Qilian Mountain permafrost, China.

    PubMed

    Xiao, Kun; Zou, Changchun; Lu, Zhenquan; Deng, Juzhi

    2017-11-24

    Accurate calculation of gas hydrate saturation is an important aspect of gas hydrate resource evaluation. The effective medium theory (EMT model), the velocity model based on two-phase medium theory (TPT model), and the two-component laminated media model (TCLM model) are adopted to investigate the characteristics of acoustic velocity and gas hydrate saturation of pore- and fracture-filling reservoirs in the Qilian Mountain permafrost, China. The compressional wave (P-wave) velocity simulated by the EMT model is more consistent with actual log data than that of the TPT model in the pore-filling reservoir. The gas hydrate saturation of the typical pore-filling reservoir in hole DKXX-13 ranges from 13.0% to 85.0%, with an average of 61.9%, which is in accordance with the results of the standard Archie equation and actual core tests. The P-wave phase velocity simulated by the TCLM model can be transformed directly into the P-wave transverse velocity in a fracture-filling reservoir. The gas hydrate saturation of the typical fracture-filling reservoir in hole DKXX-19 ranges from 14.1% to 89.9%, with an average of 69.4%, which is in accordance with actual core test results.
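    The cross-check against the standard Archie equation mentioned above can be sketched in a few lines. The resistivity and porosity values below are illustrative assumptions, not readings from the DKXX holes:

```python
def archie_water_saturation(Rt, Rw, phi, a=1.0, m=2.0, n=2.0):
    """Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n)."""
    return ((a * Rw) / (phi ** m * Rt)) ** (1.0 / n)

def hydrate_saturation(Rt, Rw, phi, **kwargs):
    """Hydrate occupies the pore space not filled by water: Sh = 1 - Sw."""
    return 1.0 - archie_water_saturation(Rt, Rw, phi, **kwargs)

# Illustrative log readings (hypothetical, not from the paper):
Sh = hydrate_saturation(Rt=50.0, Rw=0.5, phi=0.3)
```

    With these inputs the water saturation comes out to 1/3, so the estimated hydrate saturation is about 0.67, the same order as the averages quoted for the pore-filling reservoir.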

  12. Energy efficient engine. Core engine bearings, drives and configuration: Detailed design report

    NASA Technical Reports Server (NTRS)

    Broman, C. L.

    1981-01-01

    The detailed design of the forward and aft sumps, the accessory drive system, the lubrication system, and the piping/manifold configuration to be employed in the core engine test of the Energy Efficient Engine is addressed. The design goals for the above components were established based on the requirements of the test cell engine.

  13. Can the ICF osteoarthritis core set represent a future clinical tool in measuring functioning in persons with osteoarthritis undergoing hip and knee joint replacement?

    PubMed

    Alviar, Maria Jenelyn; Olver, John; Pallant, Julie F; Brand, Caroline; de Steiger, Richard; Pirpiris, Marinis; Bucknill, Andrew; Khan, Fary

    2012-11-01

    To determine the dimensionality, reliability, model fit, adequacy of the qualifier levels, response patterns across different factors, and targeting of the International Classification of Functioning, Disability and Health (ICF) osteoarthritis core set categories in people with osteoarthritis undergoing hip and knee arthroplasty. The osteoarthritis core set was rated in 316 persons with osteoarthritis who were either in the pre-operative or within one year post-operative stage. Rasch analyses were performed using the RUMM 2030 program. Twelve of the 13 body functions categories and 13 of the 19 activity and participation categories had good model fit. The qualifiers displayed disordered thresholds necessitating rescoring. There was uneven spread of ICF categories across the full range of the patients' scores indicating off--targeting. Subtest analysis of the reduced ICF categories of body functions and activity and participation showed that the two components could be integrated to form one measure. The results suggest that it is possible to measure functioning using a unidimensional construct based on ICF osteoarthritis core set categories of body functions and activity and participation in this population. However, omission of some categories and reduction in qualifier levels are necessary. Further studies are needed to determine whether better targeting is achieved, particularly during the pre-operative and during the sub-acute care period.

  14. Adaptive control method for core power control in TRIGA Mark II reactor

    NASA Astrophysics Data System (ADS)

    Sabri Minhat, Mohd; Selamat, Hazlina; Subha, Nurul Adilla Mohd

    2018-01-01

    The 1MWth Reactor TRIGA PUSPATI (RTP) Mark II type has undergone more than 35 years of operation. The existing core power control uses a feedback control algorithm (FCA). Due to the sensitivity of nuclear research reactor operation, it is challenging to keep the core power stable at the desired value within acceptable error bands to meet the safety demands of RTP. The current power tracking performance is unsatisfactory and can be improved. A new core power control design is therefore important to improve tracking performance and to regulate reactor power by controlling the movement of the control rods. In this paper, adaptive controllers, specifically Model Reference Adaptive Control (MRAC) and Self-Tuning Control (STC), were applied to core power control. The core power control model was based on mathematical models of the reactor core, an adaptive controller model, and control rod selection programming. The mathematical models of the reactor core comprised point kinetics, thermal hydraulic, and reactivity models. The adaptive control law was derived using the Lyapunov method to ensure a stable closed-loop system, while the STC Generalised Minimum Variance (GMV) controller does not require the exact plant transfer function when designing the core power control. The performance of the proposed adaptive controllers and the FCA is compared via computer simulation; the simulation results demonstrate the effectiveness and good performance of the proposed control methods for core power control.
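    The MRAC idea referenced in this record can be illustrated on a first-order plant: Lyapunov-style adaptation laws adjust the controller gains until the closed loop matches a chosen reference model. All plant, model, and gain values below are illustrative assumptions, not RTP parameters:

```python
# Plant dy/dt = -a*y + b*u tracked against reference model
# dym/dt = -am*ym + bm*r via adaptive gains (Lyapunov adaptation laws,
# assuming the sign of b is known and positive).
a, b = 1.0, 0.5          # "unknown" plant (illustrative)
am, bm = 2.0, 2.0        # reference model: ym -> r in steady state
gamma, dt, r = 2.0, 0.001, 1.0
y = ym = th_r = th_y = 0.0
for _ in range(int(50.0 / dt)):
    u = th_r * r + th_y * y      # adaptive control law
    e = y - ym                   # tracking error
    th_r -= dt * gamma * e * r   # adaptation laws drive e -> 0
    th_y -= dt * gamma * e * y
    y += dt * (-a * y + b * u)
    ym += dt * (-am * ym + bm * r)
```

    The guarantee from the Lyapunov argument is that the tracking error converges even though the plant parameters a and b are never identified explicitly.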

  15. Phasemeter core for intersatellite laser heterodyne interferometry: modelling, simulations and experiments

    NASA Astrophysics Data System (ADS)

    Gerberding, Oliver; Sheard, Benjamin; Bykov, Iouri; Kullmann, Joachim; Esteban Delgado, Juan Jose; Danzmann, Karsten; Heinzel, Gerhard

    2013-12-01

    Intersatellite laser interferometry is a central component of future space-borne gravity instruments like Laser Interferometer Space Antenna (LISA), evolved LISA, NGO and future geodesy missions. The inherently small laser wavelength allows us to measure distance variations with extremely high precision by interfering a reference beam with a measurement beam. The readout of such interferometers is often based on tracking phasemeters, which are able to measure the phase of an incoming beatnote with high precision over a wide range of frequencies. The implementation of such phasemeters is based on all digital phase-locked loops (ADPLL), hosted in FPGAs. Here, we present a precise model of an ADPLL that allows us to design such a readout algorithm and we support our analysis by numerical performance measurements and experiments with analogue signals.

  16. The College Core: Why a Valuable Curricular Component Can Be a Challenge to the Provision of Services That Enhance School of Business Student Outcomes

    ERIC Educational Resources Information Center

    Kopp, Thomas J.; Rosetti, Joseph L.

    2015-01-01

    Out-of-class faculty services, such as advising, career advice, and lecture series, stimulate student interest, retention, and graduation rates. Through modeling the interrelationships between the allocation of faculty lines and a college's general education core requirements, its impact on the provision of out-of-class faculty services is…

  17. Simultaneous UV and X-ray Spectroscopy of the Seyfert 1 Galaxy NGC 5548. I: Physical Conditions in the UV Absorbers

    NASA Technical Reports Server (NTRS)

    Crenshaw, D. M.; Kraemer, S. B.; Gabel, J. R.; Kaastra, J. S.; Steenbrugge, K. C.; Brinkman, A. C.; Dunn, J. P.; George, I. M.; Liedahl, D. A.; Paerels, F. B. S.

    2003-01-01

    We present new UV spectra of the nucleus of the Seyfert 1 galaxy NGC 5548, which we obtained with the Space Telescope Imaging Spectrograph at high spectral resolution, in conjunction with simultaneous Chandra X-ray Observatory spectra. Taking advantage of the low UV continuum and broad emission-line fluxes, we have determined that the deepest UV absorption component covers at least a portion of the inner, high-ionization narrow-line region (NLR). We find nonunity covering factors in the cores of several kinematic components, which increase the column density measurements of N V and C IV by factors of 1.2 to 1.9 over the full-covering case; however, the revised columns have only a minor effect on the parameters derived from our photoionization models. For the first time, we have simultaneous N V and C IV columns for component 1 (at -1040 km/s), and find that this component cannot be an X-ray warm absorber, contrary to our previous claim based on nonsimultaneous observations. We find that models of the absorbers based on solar abundances severely overpredict the O VI columns previously obtained with the Far Ultraviolet Spectrograph, and present arguments that this is not likely due to variability. However, models that include either enhanced nitrogen (twice solar) or dust, with strong depletion of carbon in either case, are successful in matching all of the observed ionic columns. These models result in substantially lower ionization parameters and total column densities compared to dust-free solar-abundance models, and produce little O VII or O VIII, indicating that none of the UV absorbers are X-ray warm absorbers.
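    The column-density correction for nonunity covering factors follows from the standard partial-covering relation I = 1 - C*(1 - exp(-tau)). The residual intensity and covering factor below are illustrative, not the measured NGC 5548 values:

```python
import math

def optical_depth(I_norm, C):
    """Invert the partial-covering relation I = 1 - C*(1 - exp(-tau));
    C = 1 recovers the usual full-covering optical depth -ln(I)."""
    return -math.log(1.0 - (1.0 - I_norm) / C)

# A line core at 15% normalized residual intensity (illustrative):
tau_full = optical_depth(0.15, 1.0)   # assuming full covering
tau_part = optical_depth(0.15, 0.9)   # with a 90% covering factor
boost = tau_part / tau_full           # column density scales with tau
```

    Even a covering factor modestly below unity raises the inferred optical depth, and hence the column, by tens of percent, the same order as the factors of 1.2 to 1.9 quoted above.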

  18. AGC-2 Irradiation Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohrbaugh, David Thomas; Windes, William; Swank, W. David

    The Next Generation Nuclear Plant (NGNP) will be a helium-cooled, very high temperature reactor (VHTR) with a large graphite core. In past applications, graphite has been used effectively as a structural and moderator material in both research and commercial high temperature gas cooled reactor (HTGR) designs. Nuclear graphite H-451, used previously in the United States for nuclear reactor graphite components, is no longer available. New nuclear graphites have been developed and are considered suitable candidates for the new NGNP reactor design. To support the design and licensing of NGNP core components within a commercial reactor, a complete properties database must be developed for these current grades of graphite. Quantitative data on in-service material performance are required for the physical, mechanical, and thermal properties of each graphite grade, with a specific emphasis on data related to the life-limiting effects of irradiation creep on key physical properties of the NGNP candidate graphites. Based on experience with previous graphite core components, the phenomenon of irradiation-induced creep within the graphite has been shown to be critical to the total useful lifetime of graphite components. Irradiation-induced creep occurs under the simultaneous application of high temperatures, neutron irradiation, and applied stresses within the graphite components. Significant internal stresses within the graphite components can result from a second phenomenon, irradiation-induced dimensional change: the graphite physically changes, first shrinking and then expanding with increasing neutron dose. This disparity in material volume change can induce significant internal stresses within graphite components. Irradiation-induced creep relaxes these large internal stresses, thus reducing the risk of crack formation and component failure. Higher irradiation creep levels tend to relieve more internal stress, thus allowing the components longer useful lifetimes within the core. Determining the irradiation creep rates of nuclear grade graphites is critical for determining the useful lifetime of graphite components and is a major component of the Advanced Graphite Creep (AGC) experiment.

  19. A General Strategy for Nanohybrids Synthesis via Coupled Competitive Reactions Controlled in a Hybrid Process

    PubMed Central

    Wang, Rongming; Yang, Wantai; Song, Yuanjun; Shen, Xiaomiao; Wang, Junmei; Zhong, Xiaodi; Li, Shuai; Song, Yujun

    2015-01-01

    A new methodology based on core alloying and shell gradient-doping is developed for the synthesis of nanohybrids, realized by coupled competitive reactions, or sequenced reducing-nucleation and co-precipitation reaction of mixed metal salts in a microfluidic and batch-cooling process. The latent time of nucleation and the growth of nanohybrids can be well controlled due to the formation of controllable intermediates in the coupled competitive reactions. Thus, spatiotemporally resolved synthesis can be realized by the hybrid process, which enables us to investigate nanohybrid formation at each stage through their solution color changes and TEM images. By adjusting the bi-channel solvents and kinetic parameters of each stage, the primary components of alloyed cores and the second components of transition metal doping ZnO or Al2O3 as surface coatings can be successively formed. The core alloying and shell gradient-doping strategy can efficiently eliminate the crystal lattice mismatch in different components. Consequently, varieties of gradient core-shell nanohybrids can be synthesized using CoM, FeM, AuM, AgM (M = Zn or Al) alloys as cores and transition metal gradient-doping ZnO or Al2O3 as shells, endowing these nanohybrids with unique magnetic and optical properties (e.g., high temperature ferromagnetic property and enhanced blue emission). PMID:25818342

  20. Long-term millimeter VLBI monitoring of M 87 with KVN at milliarcsecond resolution: nuclear spectrum

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Young; Lee, Sang-Sung; Hodgson, Jeffrey A.; Algaba, Juan-Carlos; Zhao, Guang-Yao; Kino, Motoki; Byun, Do-Young; Kang, Sincheol

    2018-02-01

    We study the centimeter- to millimeter-wavelength synchrotron spectrum of the core of the radio galaxy M 87 at ≲0.8 mas (∼110 Rs) spatial scales using four years of fully simultaneous, multi-frequency VLBI data obtained by the Korean VLBI Network (KVN). We find a core spectral index α of ≳−0.37 (S ∝ ν^α) between 22 and 129 GHz. By combining resolution-matched flux measurements from the Very Long Baseline Array (VLBA) at 15 GHz and taking the Event Horizon Telescope (EHT) 230 GHz core flux measurements in epochs 2009 and 2012 as lower limits, we find evidence of a nearly flat core spectrum across 15 and 129 GHz, which could naturally connect the 230 GHz VLBI core flux. The extremely flat spectrum is a strong indication that the jet base does not consist of a simple homogeneous plasma, but of inhomogeneous multi-energy components, with at least one component with the turn-over frequency ≳100 GHz. The spectral shape can be qualitatively explained if both the strongly (compact, optically thick at >100 GHz) and the relatively weakly magnetized (more extended, optically thin at <100 GHz) plasma components are colocated in the footprint of the relativistic jet.

  1. Optical properties of core-shell and multi-shell nanorods

    NASA Astrophysics Data System (ADS)

    Mokkath, Junais Habeeb; Shehata, Nader

    2018-05-01

    We report a first-principles time-dependent density functional theory study of the optical response modulations in bimetallic core-shell (Na@Al and Al@Na) and multi-shell (Al@Na@Al@Na and Na@Al@Na@Al: concentric shells of Al and Na alternate) nanorods. All of the core-shell and multi-shell configurations display highly enhanced absorption intensity with respect to the pure Al and Na nanorods, showing sensitivity to both composition and chemical ordering. Remarkably large spectral intensity enhancements were found in a couple of core-shell configurations, indicating that the optical response of bimetallic core-shell nanorods cannot always be treated as an average over the individual components. We believe that our theoretical results will be useful for promising applications that rely on aluminum-based plasmonic materials, such as solar cells and sensors.

  2. Implementation of 5-layer thermal diffusion scheme in weather research and forecasting model with Intel Many Integrated Cores

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.

    2014-10-01

    For weather forecasting and research, the Weather Research and Forecasting (WRF) model has been developed, consisting of several components such as dynamic solvers and physical simulation modules. WRF includes several Land-Surface Models (LSMs). The LSMs use atmospheric information, the radiative and precipitation forcing from the surface layer scheme, the radiation scheme, and the microphysics/convective scheme, together with the land's state variables and land-surface properties, to provide heat and moisture fluxes over land and sea-ice points. The WRF 5-layer thermal diffusion simulation is an LSM based on the MM5 5-layer soil temperature model with an energy budget that includes radiation, sensible, and latent heat flux. The WRF LSMs are very suitable for massively parallel computation as there are no interactions among horizontal grid points. The efficient parallelization and vectorization features of the Intel Many Integrated Core (MIC) architecture allow us to optimize this WRF 5-layer thermal diffusion scheme. In this work, we present the results of the computing performance of this scheme on the Intel MIC architecture. Our results show that the MIC-based optimization improved the performance of the first version of the multi-threaded code on the Xeon Phi 5110P by a factor of 2.1x. The same CPU-based optimizations improved the performance on the Intel Xeon E5-2603 by a factor of 1.6x as compared to the first version of the multi-threaded code.
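    The absence of horizontal coupling is what makes the scheme embarrassingly parallel: every soil column can be updated independently. A minimal sketch of one explicit vertical diffusion step, vectorized over all columns at once (the layer count and the fixed boundary treatment are simplifying assumptions, not the WRF code):

```python
import numpy as np

def step_soil_columns(T, dz, dt, kappa):
    """One explicit step of dT/dt = kappa * d2T/dz2 for every column.
    T has shape (ncolumns, nlayers); columns never interact, so the
    update vectorizes (or threads) trivially across the horizontal
    grid.  Top and bottom layers are held fixed here for simplicity."""
    Tn = T.copy()
    Tn[:, 1:-1] += dt * kappa * (T[:, 2:] - 2.0 * T[:, 1:-1] + T[:, :-2]) / dz**2
    return Tn

# A linear vertical profile is a steady state of pure diffusion,
# so one step should leave it unchanged:
cols = np.tile(np.linspace(280.0, 300.0, 5), (4, 1))
stepped = step_soil_columns(cols, dz=0.1, dt=1.0, kappa=1e-3)
```

    Because the inner update is a pure array expression over independent columns, it maps directly onto the wide vector units and many threads of a MIC-class processor.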

  3. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE PAGES

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.; ...

    2017-07-12

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  4. Self-consistent core-pedestal transport simulations with neural network accelerated models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meneghini, Orso; Smith, Sterling P.; Snyder, Philip B.

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. Finally, the NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.

  5. Self-consistent core-pedestal transport simulations with neural network accelerated models

    NASA Astrophysics Data System (ADS)

    Meneghini, O.; Smith, S. P.; Snyder, P. B.; Staebler, G. M.; Candy, J.; Belli, E.; Lao, L.; Kostuk, M.; Luce, T.; Luda, T.; Park, J. M.; Poli, F.

    2017-08-01

    Fusion whole device modeling simulations require comprehensive models that are simultaneously physically accurate, fast, robust, and predictive. In this paper we describe the development of two neural-network (NN) based models as a means to perform a non-linear multivariate regression of theory-based models for the core turbulent transport fluxes, and the pedestal structure. Specifically, we find that a NN-based approach can be used to consistently reproduce the results of the TGLF and EPED1 theory-based models over a broad range of plasma regimes, and with a computational speedup of several orders of magnitude. These models are then integrated into a predictive workflow that allows prediction with self-consistent core-pedestal coupling of the kinetic profiles within the last closed flux surface of the plasma. The NN paradigm is capable of breaking the speed-accuracy trade-off that is expected of traditional numerical physics models, and can provide the missing link towards self-consistent coupled core-pedestal whole device modeling simulations that are physically accurate and yet take only seconds to run.
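    The surrogate idea behind these records can be shown at toy scale: sample an expensive "theory-based model", fit a small network to the samples, and evaluate the cheap surrogate instead. The stand-in target function, network size, and training settings below are illustrative assumptions, not TGLF or EPED1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive theory-based model (illustrative):
def theory_model(x):
    return np.tanh(2.0 * x)

# Training data sampled over the regime of interest
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
Y = theory_model(X)

# One-hidden-layer NN trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)       # hidden activations
    P = H @ W2 + b2                # surrogate prediction
    G = 2.0 * (P - Y) / len(X)     # dMSE/dP
    gW2 = H.T @ G; gb2 = G.sum(0)
    GH = (G @ W2.T) * (1 - H**2)   # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```

    Once trained, evaluating the surrogate costs two small matrix products, which is the source of the orders-of-magnitude speedup over repeatedly running the underlying physics code.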

  6. FEM analysis of magnetic flake composites

    NASA Astrophysics Data System (ADS)

    Claassen, J. H.

    2009-07-01

    A composite comprised of layered flake-like magnetic particles embedded in an insulating medium has been proposed as a low permeability, low loss core material. This would be an alternative to "distributed air gap" compressed powder cores that are widely used for inductors in power applications. Since the lowest loss metallic materials are manufactured in the form of very thin sheets, the particles after pulverizing would be in the form of flakes. The effective permeability and average core loss have been computed for model systems of flake composites in a two-dimensional approximation. The core loss is modeled by eddy current dissipation in the low-frequency limit, where the conductor thickness is much less than the skin depth. It is found that useful values of permeability should be obtained for a modest filling fraction of magnetic material, in contrast to the powder cores which require a value close to unity. The core loss will scale as the inverse of filling fraction, with a small additional enhancement due to perpendicular field components. It is thus expected that useful core materials may be attainable without the necessity of large compaction forces.
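    The quoted scalings can be made concrete with the classical thin-sheet eddy-current formula (a textbook low-frequency result, not the paper's FEM model); the flux-concentration argument for the 1/fill scaling is likewise a simplifying assumption:

```python
import math

def eddy_loss_density(f, B_peak, t, resistivity):
    """Low-frequency eddy-current loss per unit conductor volume for a
    thin sheet (thickness t much less than the skin depth):
        P = (pi * f * B_peak * t)**2 / (6 * resistivity)"""
    return (math.pi * f * B_peak * t) ** 2 / (6.0 * resistivity)

def composite_loss(f, B_avg, t, resistivity, fill):
    """At fixed average flux density the flux concentrates in the
    magnetic fraction (B_flake = B_avg / fill), so the loss per unit
    composite volume goes as 1/fill."""
    return fill * eddy_loss_density(f, B_avg / fill, t, resistivity)

# Halving the filling fraction doubles the loss density:
p_half = composite_loss(1e5, 0.1, 25e-6, 1.3e-6, fill=0.5)
p_quarter = composite_loss(1e5, 0.1, 25e-6, 1.3e-6, fill=0.25)
```

    The f², B², and t² dependences also show why thin flakes of a low-loss sheet material are attractive: loss falls quadratically with flake thickness.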

  7. Printing Space: Using 3D Printing of Digital Terrain Models in Geosciences Education and Research

    ERIC Educational Resources Information Center

    Horowitz, Seth S.; Schultz, Peter H.

    2014-01-01

    Data visualization is a core component of every scientific project; however, generation of physical models previously depended on expensive or labor-intensive molding, sculpting, or laser sintering techniques. Physical models have the advantage of providing not only visual but also tactile modes of inspection, thereby allowing easier visual…

  8. Online Professional Development: Combining Best Practices from Teacher, Technology and Distance Education

    ERIC Educational Resources Information Center

    Signer, Barbara

    2008-01-01

    This article provides a model of online professional development that is consistent with recommendations from the fields of teacher education, technology staff development and online learning. A graduate mathematics education course designed and implemented using the model is presented to exemplify the model's core components and interactions. The…

  9. Study and simulation of the TTEthernet protocol on a flight management subsystem, and adaptation of task scheduling for simulation purposes

    NASA Astrophysics Data System (ADS)

    Abidi, Dhafer

    TTEthernet is a deterministic network technology that makes enhancements to Layer 2 Quality-of-Service (QoS) for Ethernet. The components that implement its services enrich the Ethernet functionality with distributed fault-tolerant synchronization, robust temporal partitioning of bandwidth, and synchronous communication with fixed latency and low jitter. TTEthernet services can facilitate the design of scalable, robust, less complex distributed systems and architectures tolerant to faults. Simulation is nowadays an essential step in the critical systems design process and represents a valuable support for validation and performance evaluation. CoRE4INET is a project bringing together all TTEthernet simulation models currently available; it is based on extensions of models from the OMNeT++ INET framework. Our objective is to study and simulate the TTEthernet protocol on a flight management subsystem (FMS). The idea is to use CoRE4INET to design the simulation model of the target system. The problem is that CoRE4INET does not offer a task scheduling tool for TTEthernet networks. To overcome this problem, we propose an adaptation, for simulation purposes, of a task scheduling approach based on a formal specification of network constraints. The use of the Yices solver allowed the formal specification to be translated into an executable program that generates the desired transmission plan. A case study finally allowed us to assess the impact of the arrangement of Time-Triggered frame offsets on the performance of each type of traffic in the system.
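    The scheduling problem handed to the solver can be illustrated at toy scale: pick per-frame offsets so that no two time-triggered transmissions overlap anywhere in the hyperperiod. This exhaustive backtracking sketch (the frame set and integer time slots are illustrative) stands in for the SMT formulation solved by Yices:

```python
from math import gcd

def schedule_offsets(frames):
    """frames: list of (period, duration) in integer slots.  Return one
    offset per frame such that transmissions never collide within the
    hyperperiod, or None if no such assignment exists."""
    hyper = 1
    for p, _ in frames:
        hyper = hyper * p // gcd(hyper, p)
    busy = [False] * hyper
    offsets = []

    def mark(p, d, off, val):
        for k in range(hyper // p):
            for t in range(off + k * p, off + k * p + d):
                busy[t % hyper] = val

    def free(p, d, off):
        return all(not busy[t % hyper]
                   for k in range(hyper // p)
                   for t in range(off + k * p, off + k * p + d))

    def solve(i):
        if i == len(frames):
            return True
        p, d = frames[i]
        for off in range(p):
            if free(p, d, off):
                mark(p, d, off, True)
                offsets.append(off)
                if solve(i + 1):
                    return True
                offsets.pop()
                mark(p, d, off, False)
        return False

    return offsets if solve(0) else None
```

    For frames of periods 4, 4 and 8 slots this finds offsets 0, 1 and 2, while three unit-length frames of period 2 are correctly reported infeasible; an SMT solver handles the same constraints at realistic network scale.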

  10. Heat Pipe Reactor Dynamic Response Tests: SAFE-100 Reactor Core Prototype

    NASA Technical Reports Server (NTRS)

    Bragg-Sitton, Shannon M.

    2005-01-01

    The SAFE-100a test article at the NASA Marshall Space Flight Center was used to simulate a variety of potential reactor transients; the SAFE-100a is a resistively heated, stainless-steel heat-pipe (HP) reactor core segment, coupled to a gas-flow heat exchanger (HX). For these transients the core power was controlled by a point kinetics model with reactivity feedback based on core average temperature; the neutron generation time and the temperature feedback coefficient are provided as model inputs. This type of non-nuclear test is expected to provide a reasonable approximation of reactor transient behavior because reactivity feedback is very simple in a compact fast reactor (simple, negative, and relatively monotonic temperature feedback, caused mostly by thermal expansion) and calculations show there are no significant reactivity effects associated with fluid in the HP (the worth of the entire inventory of Na in the core is .
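    The control model described here, point kinetics plus a simple negative temperature feedback, fits in a few lines. All constants below are illustrative one-group values, not SAFE-100 data:

```python
# One-group point kinetics with linear temperature feedback:
#   dP/dt = ((rho - beta)/Lambda) * P + lam * C
#   dC/dt = (beta/Lambda) * P - lam * C
#   dT/dt = (P - h*(T - T0)) / mcp,   rho = rho_ext + alpha*(T - T0)
beta, Lambda, lam = 0.0065, 1e-4, 0.08  # illustrative kinetics data
alpha = -1e-4                           # negative feedback coefficient
h, mcp, T0 = 1.0, 5.0, 300.0            # simple lumped thermal model
rho_ext, dt = 0.003, 0.001              # step reactivity insertion
P, T = 1.0, T0
C = beta * P / (Lambda * lam)           # precursors start at equilibrium
for _ in range(int(600.0 / dt)):
    rho = rho_ext + alpha * (T - T0)
    dP = ((rho - beta) / Lambda) * P + lam * C
    dC = (beta / Lambda) * P - lam * C
    dT = (P - h * (T - T0)) / mcp
    P += dt * dP; C += dt * dC; T += dt * dT
# Feedback cancels the inserted reactivity: T - T0 -> rho_ext/|alpha| = 30,
# and steady power balances the heat sink: P -> h * 30 = 30.
```

    The monotonic, negative feedback is what makes the transient settle to a new steady power without control action, which is the behavior the resistively heated test article was designed to reproduce.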

  11. WETTABILITY AND PREDICTION OF OIL RECOVERY FROM RESERVOIRS DEVELOPED WITH MODERN DRILLING AND COMPLETION FLUIDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jill S. Buckley; Norman R. Morrow

    2006-01-01

    The objectives of this project are: (1) to improve understanding of the wettability alteration of mixed-wet rocks that results from contact with the components of synthetic oil-based drilling and completion fluids formulated to meet the needs of arctic drilling; (2) to investigate cleaning methods to reverse the wettability alteration of mixed-wet cores caused by contact with these SBM components; and (3) to develop new approaches to restoration of wetting that will permit the use of cores drilled with SBM formulations for valid studies of reservoir properties.

  12. Introduction to focus issue: Synchronization in large networks and continuous media—data, models, and supermodels

    NASA Astrophysics Data System (ADS)

    Duane, Gregory S.; Grabow, Carsten; Selten, Frank; Ghil, Michael

    2017-12-01

    The synchronization of loosely coupled chaotic systems has increasingly found applications to large networks of differential equations and to models of continuous media. These applications are at the core of the present Focus Issue. Synchronization between a system and its model, based on limited observations, gives a new perspective on data assimilation. Synchronization among different models of the same system defines a supermodel that can achieve partial consensus among models that otherwise disagree in several respects. Finally, novel methods of time series analysis permit a better description of synchronization in a system that is only observed partially and for a relatively short time. This Focus Issue discusses synchronization in extended systems or in components thereof, with particular attention to data assimilation, supermodeling, and their applications to various areas, from climate modeling to macroeconomics.
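    Synchronization-based data assimilation, as described above, can be demonstrated with two Lorenz-63 systems: a "truth" run and a model nudged toward the truth's x component only. The coupling strength, time step, and initial states are illustrative choices:

```python
def lorenz_step(s, dt, drive_x=None, k=0.0,
                sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of Lorenz-63; optionally nudge x toward an
    observed value drive_x with gain k (synchronization coupling)."""
    x, y, z = s
    dx = sigma * (y - x) + (k * (drive_x - x) if drive_x is not None else 0.0)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

dt = 0.002
truth = (1.0, 1.0, 1.0)
model = (-5.0, 4.0, 20.0)   # the model starts far from the truth
for _ in range(20000):
    truth_next = lorenz_step(truth, dt)
    model = lorenz_step(model, dt, drive_x=truth[0], k=20.0)
    truth = truth_next

gap = max(abs(a - b) for a, b in zip(model, truth))
```

    Although only x is observed, the unobserved y and z components also converge, which is the sense in which synchronization gives a new perspective on data assimilation.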

  13. Introduction to focus issue: Synchronization in large networks and continuous media-data, models, and supermodels.

    PubMed

    Duane, Gregory S; Grabow, Carsten; Selten, Frank; Ghil, Michael

    2017-12-01

    The synchronization of loosely coupled chaotic systems has increasingly found applications to large networks of differential equations and to models of continuous media. These applications are at the core of the present Focus Issue. Synchronization between a system and its model, based on limited observations, gives a new perspective on data assimilation. Synchronization among different models of the same system defines a supermodel that can achieve partial consensus among models that otherwise disagree in several respects. Finally, novel methods of time series analysis permit a better description of synchronization in a system that is only observed partially and for a relatively short time. This Focus Issue discusses synchronization in extended systems or in components thereof, with particular attention to data assimilation, supermodeling, and their applications to various areas, from climate modeling to macroeconomics.

  14. 186Os- 187Os systematics of Gorgona Island komatiites: implications for early growth of the inner core

    NASA Astrophysics Data System (ADS)

    Brandon, Alan D.; Walker, Richard J.; Puchtel, Igor S.; Becker, Harry; Humayun, Munir; Revillon, Sidonie

    2003-02-01

    The presence of coupled enrichments in 186Os/188Os and 187Os/188Os in some mantle-derived materials reflects long-term elevation of Pt/Os and Re/Os relative to the primitive upper mantle. New Os data for the 89 Ma Gorgona Island, Colombia komatiites indicate that these lavas are also variably enriched in 186Os and 187Os, with 186Os/188Os ranging between 0.1198397±22 and 0.1198470±38, and with γOs correspondingly ranging from +0.15 to +4.4. These data define a linear trend that converges, along with the previously reported linear trend generated from data for modern Hawaiian picritic lavas and a sample from the ca. 251 Ma Siberian plume, to a common component with a 186Os/188Os of approximately 0.119870 and γOs of +17.5. The convergence of these data to this Os isotopic composition may imply a single ubiquitous source in the Earth's interior that mixes with a variety of different mantle compositions distinguished by variations in γOs. The 187Os- and 186Os-enriched component may have been generated via early crystallization of the solid inner core and consequent increases in Pt/Os and Re/Os in the liquid outer core, with time leading to suprachondritic 186Os/188Os and γOs in the outer core. The presence of Os from the outer core in certain portions of the mantle would require a mechanism that could transfer Os from the outer core to the lower mantle, and thence to the surface. If this is the process that generated the isotopic enrichments in the mantle sources of these plume-derived systems, then the current understanding of solid metal-liquid metal partitioning of Pt, Re and Os requires that crystallization of the inner core began prior to 3.5 Ga. Thus, the Os isotopic data reported here provide a new source of data to better constrain the timing of inner core formation, complementing magnetic field paleo-intensity measurements as data sources that constrain models based on secular cooling of the Earth.
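The γOs notation used above can be made concrete with a few lines of arithmetic: γOs is conventionally the percent deviation of a sample's 187Os/188Os from a chondritic reference ratio. A minimal sketch, where the reference value 0.1270 is an illustrative assumption rather than a number taken from this record:

```python
def gamma_os(sample_187os_188os, chondritic_187os_188os=0.1270):
    """Percent deviation of a sample's 187Os/188Os from the chondritic
    reference -- the conventional definition of gamma-Os.
    The default reference 0.1270 is an assumed illustrative value."""
    return (sample_187os_188os / chondritic_187os_188os - 1.0) * 100.0

# A ratio 4.4% above the reference gives gamma-Os of about +4.4,
# matching the upper end of the Gorgona range quoted above.
print(round(gamma_os(0.1270 * 1.044), 2))  # → 4.4
```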

  15. Steady induction effects in geomagnetism. Part 1C: Geomagnetic estimation of steady surficial core motions: Application to the definitive geomagnetic reference field models

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    In the source-free mantle/frozen-flux core magnetic earth model, the non-linear inverse steady motional induction problem was solved using the method presented in Part 1B. We describe how that method was applied to estimate steady, broad-scale fluid velocity fields near the top of Earth's core that induce the secular change indicated by the Definitive Geomagnetic Reference Field (DGRF) models from 1945 to 1980. Special attention is given to the derivation of weight matrices for the DGRF models because the weights determine the apparent significance of the residual secular change. The derived weight matrices also enable estimation of the secular change signal-to-noise ratio characterizing the DGRF models. Two types of weights were derived in 1987-88: radial field weights for fitting the evolution of the broad-scale portion of the radial geomagnetic field component at Earth's surface implied by the DGRFs, and general weights for fitting the evolution of the broad-scale portion of the scalar potential specified by these models. The difference is non-trivial because not all the geomagnetic data represented by the DGRFs constrain the radial field component. For radial field weights (or general weights), a quantitatively acceptable explanation of broad-scale secular change relative to the 1980 Magsat epoch must account for 99.94271 percent (or 99.98784 percent) of the total weighted variance accumulated therein. Tolerable normalized root-mean-square weighted residuals of 2.394 percent (or 1.103 percent) are less than the 7 percent errors expected in the source-free mantle/frozen-flux core approximation.
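The two pairs of numbers quoted in this abstract are internally consistent under a simple reading: the tolerable normalized rms weighted residual equals the square root of the unexplained fraction of total weighted variance. A short check of that arithmetic (our interpretation, not code from the report):

```python
import math

def nrms_percent(variance_explained_percent):
    """Normalized rms residual (percent) if it is computed as the square
    root of the unexplained weighted-variance fraction -- our reading of
    how the quoted pairs of numbers relate, not a formula from the report."""
    unexplained = 1.0 - variance_explained_percent / 100.0
    return 100.0 * math.sqrt(unexplained)

# Explaining 99.94271% (radial field weights) or 99.98784% (general
# weights) of the weighted variance reproduces the quoted residuals.
print(round(nrms_percent(99.94271), 3))  # → 2.394
print(round(nrms_percent(99.98784), 3))  # → 1.103
```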

  16. Energy efficient engine fan component detailed design report

    NASA Technical Reports Server (NTRS)

    Halle, J. E.; Michael, C. J.

    1981-01-01

    The fan component which was designed for the energy efficient engine is an advanced high-performance, single-stage system based on technology advancements in aerodynamics and structural mechanics. Two fan components were designed, both meeting the integrated core/low spool engine efficiency goal of 84.5%. The primary configuration, envisioned for a future flight propulsion system, features a shroudless, hollow blade and offers a predicted efficiency of 87.3%. A more conventional blade was designed as a backup for the integrated core/low spool demonstrator engine. The alternate blade configuration has a predicted efficiency of 86.3% for the future flight propulsion system. Both fan configurations meet the goals established for efficiency, surge margin, structural integrity and durability.

  17. The moon: Composition determined by nebular processes

    USGS Publications Warehouse

    Morgan, J.W.; Hertogen, J.; Anders, E.

    1978-01-01

    The bulk composition of the Moon was determined by the conditions in the solar nebula during its formation, and may be quantitatively estimated from the premise that the terrestrial planets were formed by cosmochemical processes similar to those recorded in the chondrites. The calculations are based on the Ganapathy-Anders 7-component model using trace element indicators, but incorporate improved geophysical data and petrological constraints. A model Moon with 40 ppb U, a core 2% by weight (1.8% metal with ≈35% Ni and 0.2% FeS) and Mg/(Fe2+ + Mg) ≈ 0.75 meets the trace element restrictions, and has acceptable density, heat flow and moment of inertia ratio. The high Ni content of the core permits low-Ti mare basalts to equilibrate with metal, yet still retain substantial Ni. The silicate resembles the Taylor-Jakeš composition (and in some respects the Ganapathy-Anders Model 2a), but has lower SiO2. Minor modifications of the model composition (U = 30-35 ppb) yield a 50% melt approximating Apollo 15 green glass and a residuum of olivine plus 3 to 4% spinel; the low SiO2 favors spinel formation and, contrary to expectation, Cr is not depleted in the liquid. There may no longer be any inconsistency between the cosmochemical approach and arguments based on experimental petrology. © 1978 D. Reidel Publishing Company.

  18. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    NASA Astrophysics Data System (ADS)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    The schemes of cumulus parameterization are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. All of the schemes provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme that fulfils these purposes in the Weather Research and Forecasting (WRF) model. The National Centers for Environmental Prediction (NCEP) has sought to optimize the BMJ scheme for operational application. Because there are no interactions among horizontal grid points, the scheme is well suited to parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its efficient parallelization and vectorization support, allows us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves performance by 2.4x and 17.0x, respectively.
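The claim that BMJ parallelizes well because horizontal grid points do not interact can be illustrated with a toy column-wise adjustment. This is a hedged NumPy sketch, not the actual WRF/BMJ code: each column relaxes toward a reference profile independently, so the batched computation and a single-column computation agree exactly.

```python
import numpy as np

def adjust_columns(temp, ref_profile, tau=0.5):
    """Toy stand-in for a column-wise convective adjustment (not the
    real Betts-Miller-Janjic physics): relax every column toward a
    reference profile.  temp has shape (nlev, ncol); broadcasting
    applies the same per-column update with no horizontal coupling."""
    return temp + tau * (ref_profile[:, None] - temp)

rng = np.random.default_rng(0)
t = rng.normal(280.0, 5.0, size=(10, 1000))   # 10 levels, 1000 columns
ref = np.linspace(300.0, 220.0, 10)           # illustrative reference profile
t_new = adjust_columns(t, ref)

# Adjusting one column in isolation gives the same answer as the
# batched call, confirming the absence of horizontal data dependence.
one = adjust_columns(t[:, :1], ref)
print(np.allclose(t_new[:, :1], one))  # → True
```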

  19. Vertically aligned P(VDF-TrFE) core-shell structures on flexible pillar arrays

    PubMed Central

    Choi, Yoon-Young; Yun, Tae Gwang; Qaiser, Nadeem; Paik, Haemin; Roh, Hee Seok; Hong, Jongin; Hong, Seungbum; Han, Seung Min; No, Kwangsoo

    2015-01-01

    PVDF and P(VDF-TrFE) nano- and micro- structures have been widely used due to their potential applications in several fields, including sensors, actuators, vital sign transducers, and energy harvesters. In this study, we developed vertically aligned P(VDF-TrFE) core-shell structures using high modulus polyurethane acrylate (PUA) pillars as the support structure to maintain the structural integrity. In addition, we were able to improve the piezoelectric effect by 1.85 times from 40 ± 2 to 74 ± 2 pm/V when compared to the thin film counterpart, which contributes to the more efficient current generation under a given stress, by making an effective use of the P(VDF-TrFE) thin top layer as well as the side walls. We attribute the enhancement of piezoelectric effects to the contributions from the shell component and the strain confinement effect, which was supported by our modeling results. We envision that these organic-based P(VDF-TrFE) core-shell structures will be used widely as 3D sensors and power generators because they are optimized for current generations by utilizing all surface areas, including the side walls of core-shell structures. PMID:26040539

  20. Vertically aligned P(VDF-TrFE) core-shell structures on flexible pillar arrays

    DOE PAGES

    Choi, Yoon-Young; Yun, Tae Gwang; Qaiser, Nadeem; ...

    2015-06-04

    PVDF and P(VDF-TrFE) nano- and micro-structures are widely used due to their potential applications in several fields, including sensors, actuators, vital sign transducers, and energy harvesters. In this study, we developed vertically aligned P(VDF-TrFE) core-shell structures using high modulus polyurethane acrylate (PUA) pillars as the support structure to maintain the structural integrity. In addition, we were able to improve the piezoelectric effect by 1.85 times, from 40 ± 2 to 74 ± 2 pm/V, when compared to the thin film counterpart, which contributes to more efficient current generation under a given stress, by making effective use of the P(VDF-TrFE) thin top layer as well as the side walls. We attribute the enhancement of piezoelectric effects to the contributions from the shell component and the strain confinement effect, which was supported by our modeling results. We envision that these organic-based P(VDF-TrFE) core-shell structures will be used widely as 3D sensors and power generators because they are optimized for current generation by utilizing all surface areas, including the side walls of core-shell structures.

  1. Two-dimensional ice mapping of molecular cores

    NASA Astrophysics Data System (ADS)

    Noble, J. A.; Fraser, H. J.; Pontoppidan, K. M.; Craigon, A. M.

    2017-06-01

    We present maps of the column densities of H2O, CO2 and CO ices towards the molecular cores B 35A, DC 274.2-00.4, BHR 59 and DC 300.7-01.0. These ice maps, probing spatial distances in molecular cores as low as 2200 au, challenge the traditional hypothesis that the denser the region observed, the more ice is present, providing evidence that the relationships between solid molecular species are more varied than the generic picture we often adopt to model gas-grain chemical processes and explain feedback between solid phase processes and gas phase abundances. We present the first combined solid-gas maps of a single molecular species, based upon observations of both CO ice and gas phase C18O towards B 35A, a star-forming dense core in Orion. We conclude that molecular species in the solid phase are powerful tracers of 'small-scale' chemical diversity, prior to the onset of star formation. With a component analysis approach, we can probe the solid phase chemistry of a region at a level of detail greater than that provided by statistical analyses or generic conclusions drawn from single pointing line-of-sight observations alone.

  2. A novel medical image data-based multi-physics simulation platform for computational life sciences.

    PubMed

    Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels

    2013-04-06

    Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.

  3. Quark cluster model for deep-inelastic lepton-deuteron scattering

    NASA Astrophysics Data System (ADS)

    Yen, G.; Vary, J. P.; Harindranath, A.; Pirner, H. J.

    1990-10-01

    We evaluate the contribution of quasifree nucleon knockout and of inelastic lepton-nucleon scattering in inclusive electron-deuteron reactions at large momentum transfer. We examine the degree of quantitative agreement with deuteron wave functions from the Reid soft-core and Bonn realistic nucleon-nucleon interactions. For the range of data available there is strong sensitivity to the tensor correlations, which are distinctively different in these two deuteron models. At this stage of the analyses the Reid soft-core wave function provides a reasonable description of the data while the Bonn wave function does not. We then include a six-quark cluster component whose relative contribution is based on an overlap criterion and obtain a good description of all the data with both interactions. The critical separation at which overlap occurs (formation of six-quark clusters) is taken to be 1.0 fm, and the six-quark cluster probability is 4.7% for Reid and 5.4% for Bonn. As a consequence, the quark cluster model with either the Reid or the Bonn wave function describes the SLAC inclusive electron-deuteron scattering data equally well. We then show how additional data would be decisive in resolving which model is ultimately more correct.

  4. An MPI-based MoSST core dynamics model

    NASA Astrophysics Data System (ADS)

    Jiang, Weiyuan; Kuang, Weijia

    2008-09-01

    Distributed systems are among the main cost-effective and expandable platforms for high-end scientific computing. Therefore scalable numerical models are important for effective use of such systems. In this paper, we present an MPI-based numerical core dynamics model for simulation of geodynamo and planetary dynamos, and for simulation of core-mantle interactions. The model is developed based on MPI libraries. Two algorithms are used for node-node communication: a "master-slave" architecture and a "divide-and-conquer" architecture. The former is easy to implement but not scalable in communication. The latter is scalable in both computation and communication. The model scalability is tested on Linux PC clusters with up to 128 nodes. This model is also benchmarked with a published numerical dynamo model solution.
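The communication contrast drawn in this abstract can be sketched without any actual MPI calls. In a master-slave layout rank 0 serializes P-1 exchanges, while a hypercube-style divide-and-conquer reduction needs only log2(P) rounds with one partner per rank per round; the helper names below are our own illustration, not the MoSST code:

```python
from math import log2

def master_slave_rounds(p):
    # The master exchanges data with every other node in turn,
    # so communication at rank 0 grows linearly with p.
    return p - 1

def divide_and_conquer_schedule(p):
    """Partner of each rank in each round of a hypercube exchange
    (rank XOR 2^k).  Assumes p is a power of two.  Every rank is busy
    in every round, so the round count grows only as log2(p)."""
    rounds = int(log2(p))
    return [[rank ^ (1 << k) for rank in range(p)] for k in range(rounds)]

# With 128 nodes (the largest cluster size tested in the paper) the
# hypercube pattern needs 7 rounds versus 127 serialized master exchanges.
print(master_slave_rounds(128), len(divide_and_conquer_schedule(128)))  # → 127 7
```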

  5. Scattering of S waves diffracted at the core-mantle boundary: forward modelling

    NASA Astrophysics Data System (ADS)

    Emery, Valérie; Maupin, Valérie; Nataf, Henri-Claude

    1999-11-01

    The lowermost 200-300 km of the Earth's mantle, known as the D'' layer, is an extremely complex and heterogeneous region where transfer processes between the core and the mantle take place. Diffracted S waves propagate over large distances and are very sensitive to the velocity structure of this region. Strong variations of amplitudes and waveforms are observed on recordings from networks of broad-band seismic stations. We perform forward modelling of diffracted S waves in laterally heterogeneous structures in order to analyse whether or not these observations can be related to lateral inhomogeneities in D''. We combine the diffraction due to the core and the scattering due to small-scale volumetric heterogeneities (10-100 km) by coupling single scattering (Born approximation) with the Langer approximation, which describes Sdiff wave propagation. The influence of the CMB, as well as of possible tunnelling in the core or in D'', on both the direct and the scattered wavefields is fully accounted for. The SH and the SV components of the diffracted waves are analysed, as well as their coupling. The modelling is applied in heterogeneous models with different geometries: isolated heterogeneities, vertical cylinders, horizontal inhomogeneities and random media. Amplitudes of scattered waves are weak, and only velocity perturbations of the order of 10 per cent over a volume of 240 x 240 x 300 km3 produce visible effects on seismograms. The two polarizations of Sdiff have different radial sensitivities, the SH components being more sensitive to heterogeneities closer to the CMB. However, we do not observe significant time-shifts between the two components similar to those produced by anisotropy. The long-period Sdiff have a poor lateral resolution and average the velocity perturbations in their Fresnel zone.
Random small-scale heterogeneities with +/- 10 per cent velocity contrast in the layer therefore have little effect on Sdiff, in contrast to their effect on PKIKP.

  6. Manufacture of a four-sheet complex component from different titanium alloys by superplastic forming

    NASA Astrophysics Data System (ADS)

    Allazadeh, M. R.; Zuelli, N.

    2017-10-01

    A superplastic forming (SPF) technology process was deployed to form an eight-pocket complex component from a four-sheet sandwich panel sheetstock. Six sheetstock packs were composed of two core sheets made of Ti-6Al-4V or Ti-5Al-4Cr-4Mo-2Sn-2Zr titanium alloy and two skin sheets made of Ti-6Al-4V or Ti-6Al-2Sn-4Zr-2Mo titanium alloy, in three different combinations. The sheets were welded with two successive welding patterns over the core and skin sheets to produce the required component details. The welding methods applied were intermittent and continuous resistance seam welding, for bonding the core sheets to each other and the skin sheets over the core panel, respectively. The final component configuration was predicted based on the die drawings and finite element method (FEM) simulations for the sandwich panels. An SPF system set-up with two inlet gas feed pipes enabled the trials to deliver two simultaneously acting pressure-time load cycles, extracted from FEM analysis for the specified forming temperature and strain rate. The SPF pressure-time cycles were optimized via GOM scanning and visual inspection of sections of the packs in order to assess the level of core panel formation during inflation of the sheetstock. Two sets of GOM scan results were compared in GOM software to inspect the surface and internal features of the inflated multisheet packs. The results highlighted the capability of the tested SPF process to form complex components from a flat multisheet pack made of different titanium alloys.

  7. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Astrophysics Data System (ADS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-03-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960's and early 1970's and are highly unique to that design or are modifications of current liquid propulsion system design models. To date, NTP engine-based liquid design models lack integrated design of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated in the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. 
The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with NESS, presents an example problem, and compares the results to related NTP engine system designs. Initial installation instructions and program disks are in Volume 2 of the NESS Program User's Guide.

  8. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide

    NASA Technical Reports Server (NTRS)

    Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.

    1993-01-01

    A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960's and early 1970's and are highly unique to that design or are modifications of current liquid propulsion system design models. To date, NTP engine-based liquid design models lack integrated design of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated in the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. 
The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with NESS, presents an example problem, and compares the results to related NTP engine system designs. Initial installation instructions and program disks are in Volume 2 of the NESS Program User's Guide.

  9. Simultaneous Ultraviolet and X-Ray Spectroscopy of the Seyfert 1 Galaxy NGC 5548. I. Physical Conditions in the Ultraviolet Absorbers

    NASA Astrophysics Data System (ADS)

    Crenshaw, D. M.; Kraemer, S. B.; Gabel, J. R.; Kaastra, J. S.; Steenbrugge, K. C.; Brinkman, A. C.; Dunn, J. P.; George, I. M.; Liedahl, D. A.; Paerels, F. B. S.; Turner, T. J.; Yaqoob, T.

    2003-09-01

    We present new UV spectra of the nucleus of the Seyfert 1 galaxy NGC 5548, which we obtained with the Space Telescope Imaging Spectrograph at high spectral resolution, in conjunction with simultaneous Chandra X-Ray Observatory spectra. Taking advantage of the low UV continuum and broad emission-line fluxes, we have determined that the deepest UV absorption component covers at least a portion of the inner, high-ionization narrow-line region (NLR). We find nonunity covering factors in the cores of several kinematic components, which increase the column density measurements of N V and C IV by factors of 1.2-1.9 over the full-covering case; however, the revised columns have only a minor effect on the parameters derived from our photoionization models. For the first time, we have simultaneous N V and C IV columns for component 1 (at -1040 km s-1) and find that this component cannot be an X-ray warm absorber, contrary to our previous claim based on nonsimultaneous observations. We find that models of the absorbers based on solar abundances severely overpredict the O VI columns previously obtained with the Far Ultraviolet Spectroscopic Explorer and present arguments that this is not likely due to variability. However, models that include either enhanced nitrogen (twice solar) or dust, with strong depletion of carbon in either case, are successful in matching all of the observed ionic columns. These models result in substantially lower ionization parameters and total column densities compared to dust-free solar-abundance models and produce little O VII or O VIII, indicating that none of the UV absorbers are X-ray warm absorbers. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. These observations are associated with proposal 9279.

  10. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; et al.

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems.
BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.
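The block-adaptive grid idea behind BATL can be illustrated with a toy 2-D block tree (our sketch, not the BATL API): refining a block replaces it with four children covering the same area at twice the resolution, and the leaves of the tree form the active grid.

```python
class Block:
    """A square block of cells in a toy 2-D block-adaptive tree."""

    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []

    def refine(self):
        # Replace this block with four half-size children tiling it.
        h = self.size / 2
        self.children = [Block(self.x0 + i * h, self.y0 + j * h, h,
                               self.level + 1)
                         for j in (0, 1) for i in (0, 1)]

    def leaves(self):
        # The leaf blocks are the ones actually carrying solution data.
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

root = Block(0.0, 0.0, 1.0)
root.refine()                 # 4 blocks at level 1
root.children[0].refine()     # refine one of them again
print(len(root.leaves()))     # → 7  (3 coarse blocks + 4 fine blocks)
```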

  11. In Situ FTIR Microspectroscopy of Brain Tissue from a Transgenic Mouse Model of Alzheimer Disease

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rak,M.; Del Bigio, M.; Mai, S.

    2007-01-01

    Plaques composed of the Aβ peptide are the main pathological feature of Alzheimer's disease. Dense-core plaques are fibrillar deposits of Aβ, showing all the classical properties of amyloid including β-sheet secondary structure, while diffuse plaques are amorphous deposits. We studied both plaque types using synchrotron infrared (IR) microspectroscopy, a technique that allows the chemical composition and average protein secondary structure to be investigated in situ. We examined plaques in hippocampal, cortical and caudal tissue from 5- to 21-month-old TgCRND8 mice, a transgenic model expressing doubly mutant amyloid precursor protein and displaying impaired hippocampal function and robust pathology from an early age. Spectral analysis confirmed that the congophilic plaque cores were composed of protein in a β-sheet conformation. The amide I maximum of plaque cores was at 1623 cm-1, and unlike for in vitro Aβ fibrils, the high-frequency (1680-1690 cm-1) component attributed to antiparallel β-sheet was not observed. A significant elevation in phospholipids was found around dense-core plaques in TgCRND8 mice ranging in age from 5 to 21 months. In contrast, diffuse plaques were not associated with IR-detectable changes in protein secondary structure or relative concentrations of any other tissue components.

  12. A deep dynamo generating Mercury's magnetic field.

    PubMed

    Christensen, Ulrich R

    2006-12-21

    Mercury has a global magnetic field of internal origin and it is thought that a dynamo operating in the fluid part of Mercury's large iron core is the most probable cause. However, the low intensity of Mercury's magnetic field--about 1% the strength of the Earth's field--cannot be reconciled with an Earth-like dynamo. With the common assumption that Coriolis and Lorentz forces balance in planetary dynamos, a field thirty times stronger is expected. Here I present a numerical model of a dynamo driven by thermo-compositional convection associated with inner core solidification. The thermal gradient at the core-mantle boundary is subadiabatic, and hence the outer region of the liquid core is stably stratified with the dynamo operating only at depth, where a strong field is generated. Because of the planet's slow rotation the resulting magnetic field is dominated by small-scale components that fluctuate rapidly with time. The dynamo field diffuses through the stable conducting region, where rapidly varying parts are strongly attenuated by the skin effect, while the slowly varying dipole and quadrupole components pass to some degree. The model explains the observed structure and strength of Mercury's surface magnetic field and makes predictions that are testable with space missions both presently flying and planned.
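The skin-effect filtering described in this abstract follows from the standard electromagnetic skin depth δ = sqrt(2/(μ0σω)): a field component of angular frequency ω crossing a conducting layer of thickness d is attenuated by roughly exp(-d/δ), so short-period components are suppressed far more strongly than slowly varying ones. A numerical sketch with purely illustrative (not Mercury-specific) values:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def attenuation(period_s, thickness_m, sigma):
    """Skin-effect amplitude factor exp(-d/delta) for a field component
    of the given period crossing a conducting layer.  The layer
    thickness and conductivity below are illustrative assumptions."""
    omega = 2.0 * math.pi / period_s
    delta = math.sqrt(2.0 / (MU0 * sigma * omega))
    return math.exp(-thickness_m / delta)

year = 3.15e7  # seconds
# A rapidly varying (1-yr period) component is attenuated far more
# strongly than a slowly varying (10,000-yr) one across the same
# stable conducting layer -- the filtering the dynamo model relies on.
fast = attenuation(1 * year, 200e3, 1e5)
slow = attenuation(1e4 * year, 200e3, 1e5)
print(fast < slow)  # → True
```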

  13. On fast solid-body rotation of the solar core and differential (liquid-like) rotation of the solar surface

    NASA Astrophysics Data System (ADS)

    Pashitskii, E. A.

    2017-07-01

    On the basis of a two-component (two-fluid) hydrodynamic model, it is shown that the probable phenomenon of solar core rotation with a velocity higher than the average velocity of global rotation of the Sun, discovered by the SOHO mission, can be related to fast solid-body rotation of the light hydrogen component of the solar plasma, which is caused by thermonuclear fusion of hydrogen into helium inside the hot dense solar core. Thermonuclear fusion of four protons into a helium nucleus (α-particle) frees a large specific volume per particle because of the large difference between the densities of the solar plasma and nuclear matter. As a result, an efficient volumetric sink of one of the components of the solar substance, hydrogen, forms inside the solar core. Therefore, a steady-state radial proton flux converging to the center should exist inside the Sun, which maintains a constant concentration of hydrogen as it burns out in the solar core. It is demonstrated that such a converging flux of hydrogen plasma with the radial velocity v_r(r) = -βr creates convective (v_r ∂v_φ/∂r) and local Coriolis (v_r v_φ/r) nonlinear hydrodynamic forces in the solar plasma rotating with the azimuthal velocity v_φ. In the absence of dissipation, these forces would cause an exponential growth of the solid-body rotation velocity of the hydrogen component inside the solar core. However, friction between the hydrogen and helium components of the solar plasma due to Coulomb collisions of protons with α-particles results in a steady-state regime of rotation of the hydrogen component in the solar core with an angular velocity substantially exceeding the global rotational velocity of the Sun.
It is suggested that the observed differential (liquid-like) rotation of the visible surface of the Sun (photosphere), with the maximum angular velocity at the equator, is caused by solid-body rotation of the solar plasma in the radiation zone and strong turbulence in the tachocline layer, where the turbulent viscosity reaches its maximum value at the equator. There, the tachocline layer exerts the most efficient drag on the less dense outer layers of the solar plasma, which are slowed down by the interaction with the ambient space plasma (solar wind).
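For solid-body rotation v_φ = Ωr, the convective and local Coriolis terms quoted above combine to dΩ/dt = 2βΩ, i.e. exponential spin-up, while inter-component friction adds relaxation toward the global rate. A toy integration of that balance, with all coefficients invented purely for illustration:

```python
# Toy spin-up model for the hydrogen component: for solid-body rotation
# v_phi = Omega*r, the convective and local Coriolis terms with v_r = -beta*r
# give dOmega/dt = 2*beta*Omega, while Coulomb friction against the helium
# component relaxes Omega toward the global rate Omega_sun.
beta = 1.0e-3      # assumed sink-flow rate (arbitrary units)
gamma = 5.0e-3     # assumed friction rate; needs gamma > 2*beta for a steady state
omega_sun = 1.0    # global rotation rate (normalised)

omega = omega_sun
dt = 0.1
for _ in range(200000):
    omega += dt * (2.0 * beta * omega - gamma * (omega - omega_sun))

# Analytic fixed point of the linear ODE:
omega_steady = gamma * omega_sun / (gamma - 2.0 * beta)
print(omega, omega_steady)  # the core ends up rotating faster than the surface
```

The sketch only illustrates the qualitative mechanism (growth checked by friction); the actual rates depend on the Coulomb collision physics in the paper.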

  14. Preparing for Exascale: Towards convection-permitting, global atmospheric simulations with the Model for Prediction Across Scales (MPAS)

    NASA Astrophysics Data System (ADS)

    Heinzeller, Dominikus; Duda, Michael G.; Kunstmann, Harald

    2017-04-01

    With strong financial and political support from national and international initiatives, exascale computing is projected for the end of this decade. Energy requirements and physical limitations imply the use of accelerators and scaling out to orders of magnitude more cores than today to achieve this milestone. In order to fully exploit the capabilities of these exascale systems, existing applications need to undergo significant development. The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components consisting of an atmospheric core, an ocean core, a land-ice core and a sea-ice core. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address the shortcomings, with respect to parallel scalability, numerical accuracy and physical consistency, of global models on regular grids and of limited-area models nested in a forcing data set. Here, we present work towards the application of the atmospheric core (MPAS-A) on current and future high-performance computing systems for problems at extreme scale. In particular, we address the issue of massively parallel I/O by extending the model to support the highly scalable SIONlib library. Using global uniform meshes with a convection-permitting resolution of 2-3 km, we demonstrate the ability of MPAS-A to scale out to half a million cores while maintaining a high parallel efficiency. We also demonstrate the potential benefit of a hybrid parallelisation of the code (MPI/OpenMP) on the latest generation of Intel's Many Integrated Core architecture, the Intel Xeon Phi Knights Landing.
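Strong-scaling results like those quoted above are conventionally summarised as speed-up and parallel efficiency relative to a baseline core count. A minimal sketch of that bookkeeping, with made-up timings (not the paper's measurements):

```python
def strong_scaling(baseline_cores, baseline_time, runs):
    """Speed-up and parallel efficiency of each run relative to the baseline."""
    table = []
    for cores, time in runs:
        speedup = baseline_time / time
        efficiency = speedup * baseline_cores / cores
        table.append((cores, speedup, efficiency))
    return table

# Hypothetical wall-clock times (seconds per simulated hour), chosen only to
# illustrate the calculation:
runs = [(32768, 120.0), (131072, 33.0), (524288, 10.0)]
table = strong_scaling(32768, 120.0, runs)
for cores, s, e in table:
    print(f"{cores:>7} cores: speed-up {s:5.1f}, efficiency {e:4.2f}")
```

Efficiency below 1 at the largest count reflects the communication overheads (e.g. in I/O and halo exchange) that work like the SIONlib extension targets.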

  15. Design and analysis of a toroidal tester for the measurement of core losses under axial compressive stress

    NASA Astrophysics Data System (ADS)

    Alatawneh, Natheer; Rahman, Tanvir; Lowther, David A.; Chromik, Richard

    2017-06-01

    Electric machine cores are subjected to mechanical stresses due to manufacturing processes. These stresses include radial, circumferential and axial components that may have significant influences on the magnetic properties of the electrical steel and hence on the output and efficiency of electrical machines. Previously, most studies of iron losses due to mechanical stress have considered only the radial and circumferential components. In this work, an improved toroidal tester has been designed and developed to measure the core losses and the magnetic properties of electrical steel under a compressive axial stress. The shape of the toroidal ring has been verified using 3D stress analysis. Also, 3D electromagnetic simulations show a uniform flux density distribution in the specimen, with a variation of 0.03 T and a maximum average induction level of 1.5 T. The developed design has been prototyped, and measurements were carried out using a steel sample of grade 35WW300. Measurements show that applying a small mechanical stress normal to the sample thickness raises the measured core losses; the losses then decrease continuously as the stress increases. However, the drop in core losses at high stresses does not bring them below the stress-free level. Physical explanations for the observed trend of core losses as a function of stress are provided, based on separating the core loss into its hysteresis and eddy-current components. The experimental results also show that the effect of axial compressive stress on the magnetic properties of electrical steel becomes less pronounced at high induction levels.
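The loss-separation step mentioned at the end can be illustrated with the standard two-term model P = k_h·f·Bⁿ + k_e·(fB)², which is linear in the unknown hysteresis and eddy-current coefficients and therefore fittable by least squares. All coefficients below are invented to demonstrate the procedure, not measured values:

```python
import numpy as np

# Two-term core-loss separation: hysteresis term k_h*f*B**n plus
# eddy-current term k_e*(f*B)**2. Coefficients and the Steinmetz
# exponent n are illustrative only.
n = 2.0
B = 1.5                                    # peak induction (T)
freqs = np.array([50.0, 100.0, 200.0, 400.0])
k_h_true, k_e_true = 0.02, 1.0e-5
losses = k_h_true * freqs * B**n + k_e_true * (freqs * B) ** 2

# The model is linear in (k_h, k_e), so least squares separates the two parts:
A = np.column_stack([freqs * B**n, (freqs * B) ** 2])
k_h, k_e = np.linalg.lstsq(A, losses, rcond=None)[0]
print(k_h, k_e)  # recovers the generating coefficients
```

In practice the measured losses at several frequencies replace the synthetic `losses` vector, and the fitted components can then be examined separately as functions of stress.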

  16. Constraints on Earth’s inner core composition inferred from measurements of the sound velocity of hcp-iron in extreme conditions

    PubMed Central

    Sakamaki, Tatsuya; Ohtani, Eiji; Fukui, Hiroshi; Kamada, Seiji; Takahashi, Suguru; Sakairi, Takanori; Takahata, Akihiro; Sakai, Takeshi; Tsutsui, Satoshi; Ishikawa, Daisuke; Shiraishi, Rei; Seto, Yusuke; Tsuchiya, Taku; Baron, Alfred Q. R.

    2016-01-01

    Hexagonal close-packed iron (hcp-Fe) is a main component of Earth's inner core. The difference in density between hcp-Fe and the inner core in the Preliminary Reference Earth Model (PREM) shows a density deficit, which implies the existence of light elements in the core. Sound velocities therefore provide an important constraint on the amount and kind of light elements in the core. Although seismological observations provide density–sound velocity data for Earth's core, there are few measurements under controlled laboratory conditions for comparison. We report the compressional sound velocity (VP) of hcp-Fe up to 163 GPa and 3000 K, using inelastic x-ray scattering from a laser-heated sample in a diamond anvil cell. We propose a new high-temperature Birch's law for hcp-Fe, which gives us the VP of pure hcp-Fe up to core conditions. We find that Earth's inner core has a 4 to 5% smaller density and a 4 to 10% smaller VP than hcp-Fe. Our results demonstrate that components other than Fe in Earth's core are required to explain Earth's core density and velocity deficits compared to hcp-Fe. Assuming that the temperature effects on iron alloys are the same as those on hcp-Fe, we narrow down the light elements in the inner core in terms of the velocity deficit. Hydrogen is a good candidate; thus, Earth's core may be a hidden hydrogen reservoir. Silicon and sulfur are also possible candidates and could show good agreement with PREM if we consider the presence of some melt in the inner core, anelasticity, and/or a premelting effect. PMID:26933678
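Birch's law posits a linear relation between compressional velocity and density, and the quoted deficits are simple relative differences against the extrapolated pure-iron line. A sketch with illustrative coefficients (not the high-temperature fit reported in the paper):

```python
def birchs_law(rho, a=-3.0, b=1.2):
    """Linear Birch's-law form V_P = a + b*rho; the coefficients here are
    illustrative stand-ins, not the paper's fit."""
    return a + b * rho

def deficit_percent(observed, reference):
    """Relative deficit of an observed value against a reference, in percent."""
    return 100.0 * (reference - observed) / reference

rho_inner_core = 13.0                    # assumed density at core conditions (g/cm^3)
vp_hcp_fe = birchs_law(rho_inner_core)   # V_P of pure hcp-Fe under the toy fit (km/s)
vp_prem = 11.8                           # illustrative inner-core V_P (km/s)
vp_deficit = deficit_percent(vp_prem, vp_hcp_fe)
print(vp_deficit)  # percent by which the inner core is slower than pure Fe
```

Matching such a velocity deficit together with the density deficit is what constrains the light-element candidates (H, Si, S) discussed in the abstract.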

  17. Models, figures, and gravitational moments of Jupiter's satellites Io and Europa

    NASA Astrophysics Data System (ADS)

    Zharkov, V. N.; Karamurzov, B. S.

    2006-07-01

    Two types of trial three-layer models have been constructed for the satellites Io and Europa. In the models of the first type (Io1 and E1), the cores are assumed to consist of eutectic Fe-FeS melt with the densities ρ1 = 5.15 g cm⁻³ (Io1) and 5.2 g cm⁻³ (E1). In the models of the second type (Io3 and E3), the cores consist of FeS with an admixture of nickel and have the density ρ1 = 4.6 g cm⁻³. The approach used here differs from that used previously both in the chosen model chemical composition of these satellites and in the boundary conditions imposed on the models. The most important question to be answered by modeling the internal structure of the Galilean satellites is that of the condensate composition at the formation epoch of Jupiter's system, since Jupiter's core and the Galilean satellites were formed from the condensate. Ganymede and Callisto were formed fairly far from Jupiter, in zones with temperatures below the water condensation temperature; water was entirely incorporated into their bodies, and their modeling showed the mass ratio of the icy (I) component to the rock (R) component in them to be I/R ≈ 1. The R composition must be clarified by modeling Io and Europa. The models of the second type (Io3 and E3), in which the satellite cores consist of FeS, yield 25.2 (Io3) and 22.8 (E3) for the core masses (in weight %). In discussing the R composition, we note that, theoretically, the material of which the FeS+Ni core can consist accounts for ≈25.4% of the satellite mass in the R. In this case, such an important parameter as the iron saturation of the mantle silicates is Fe# = 0.265. The Io3 and E3 models agree well with this theoretical prediction. The models of the first and second types differ markedly in core radius; thus, in principle, the R composition in the formation zone of Jupiter's system can be clarified by geophysical studies.
Another problem studied here is the error made in modeling Io and Europa with the Radau-Darwin formula when passing from the Love number k_2 to the nondimensional polar moment of inertia C̄. For Io, the Radau-Darwin formula underestimates the true value of C̄ by one and a half units in the third decimal digit. For Europa, this effect is approximately a factor of 3 smaller, which roughly corresponds to the ratio of the small parameters of the satellites under consideration, α_Io/α_Europa ≈ 3.4. In modeling the internal structure of the satellites, the core radius depends strongly on both the mean moment of inertia I* and k_2. Therefore, the above discrepancy in C̄ for Io is appreciable.
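The Radau-Darwin approximation referred to above converts a fluid Love number k_f into the nondimensional moment of inertia C/(Ma²) in closed form; the paper's point is that this closed form is slightly biased for Io. A sketch of the standard formula (quantifying the bias itself requires full interior models):

```python
import math

def radau_darwin(k_f):
    """Nondimensional polar moment of inertia C/(M a^2) from the fluid Love
    number k_f via the Radau-Darwin approximation."""
    return (2.0 / 3.0) * (1.0 - 0.4 * math.sqrt((4.0 - k_f) / (1.0 + k_f)))

# Sanity check with an Earth-like fluid Love number k_f ~ 0.94:
c_over_ma2 = radau_darwin(0.94)
print(c_over_ma2)  # close to Earth's observed ~0.331
```

Larger k_f (more centrally condensed response) maps to a larger moment-of-inertia factor, which is why small errors in the formula propagate into the inferred core radius.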

  18. Higher Education Governance and Performance Based Funding as an Ecology of Games

    ERIC Educational Resources Information Center

    Nisar, Muhammad Azfar

    2015-01-01

    To address the problem of higher education affordability and literacy, President Obama recently outlined a new strategy to make colleges more affordable for the middle class. While this strategy includes many components, "Paying for Performance" is a core component of the new strategy. In recent years, states have…

  19. P-CSI v1.0, an accelerated barotropic solver for the high-resolution ocean model component in the Community Earth System Model v2.0

    NASA Astrophysics Data System (ADS)

    Huang, Xiaomeng; Tang, Qiang; Tseng, Yuheng; Hu, Yong; Baker, Allison H.; Bryan, Frank O.; Dennis, John; Fu, Haohuan; Yang, Guangwen

    2016-11-01

    In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component for high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate the high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.
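The Chebyshev-type iteration at the heart of the solver avoids the global inner products that make Krylov methods communication-bound at high core counts, at the price of requiring eigenvalue bounds for the (preconditioned) operator. A minimal dense-matrix sketch of the classical Chebyshev iteration (a textbook method, not the P-CSI code itself):

```python
import numpy as np

def chebyshev(A, b, lam_min, lam_max, iters=60):
    """Classical Chebyshev iteration for an SPD matrix whose spectrum lies in
    [lam_min, lam_max]. No inner products are needed, hence no global
    reductions per iteration -- the property exploited at scale."""
    d = 0.5 * (lam_max + lam_min)   # centre of the spectrum interval
    c = 0.5 * (lam_max - lam_min)   # half-width of the interval
    x = np.zeros_like(b)
    r = b - A @ x
    p = np.zeros_like(b)
    alpha = 0.0
    for i in range(iters):
        if i == 0:
            p = r.copy()
            alpha = 1.0 / d
        else:
            beta = 0.5 * (c * alpha) ** 2 if i == 1 else (0.5 * c * alpha) ** 2
            alpha = 1.0 / (d - beta / alpha)
            p = r + beta * p
        x = x + alpha * p
        r = r - alpha * (A @ p)
    return x

# Small SPD test system with exactly known spectrum bounds:
A = np.diag([1.0, 2.0, 3.0, 4.0])
b = np.ones(4)
x = chebyshev(A, b, 1.0, 4.0)
print(np.linalg.norm(A @ x - b))  # residual is tiny after 60 iterations
```

A block preconditioner, as in the paper, tightens the effective eigenvalue bounds and so cuts the iteration count further.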

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, Yuki; Matsui, Tetsuo; Ishima, Takumi

    We study the three-dimensional bosonic t-J model, that is, the t-J model of 'bosonic electrons' at finite temperatures. This model describes a system of an isotropic antiferromagnet with doped bosonic holes and is closely related to systems of two-component bosons in an optical lattice. The bosonic 'electron' operator B_xσ at the site x with a two-component spin σ (= 1, 2) is treated as a hard-core boson operator and represented by a composite of two slave particles: a spinon described by a Schwinger boson (CP¹ boson) z_xσ and a holon described by a hard-core boson field φ_x, as B_xσ = φ_x† z_xσ. By means of Monte Carlo simulations of this bosonic t-J model, we study its phase structure and possible phenomena such as the appearance of antiferromagnetic long-range order, Bose-Einstein condensation, phase separation, etc. The obtained results show that the bosonic t-J model has a phase diagram that suggests some interesting implications for high-temperature superconducting materials.

  1. A Core-Particle Model for Periodically Focused Ion Beams with Intense Space-Charge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lund, S M; Barnard, J J; Bukh, B

    2006-08-02

    A core-particle model is derived to analyze transverse orbits of test particles evolving in the presence of a core ion beam described by the KV distribution. The core beam has uniform density within an elliptical cross-section, and the model can be applied to both quadrupole- and solenoid-focused beams in periodic or aperiodic lattices. Efficient analytical descriptions of the electrostatic space-charge fields external to the beam core are derived to simplify the model equations. Image-charge effects are analyzed for an elliptical beam centered in a round, conducting pipe to estimate model corrections resulting from image-charge nonlinearities. Transformations are employed to remove coherent flutter motion associated with oscillations of the ion beam core due to rapidly varying, linear applied focusing forces. Diagnostics for particle trajectories, Poincaré phase-space projections, and single-particle emittances based on these transformations better illustrate the effects of nonlinear forces acting on particles evolving outside the core. A numerical code has been written based on this model, and example applications illustrate its characteristics. The core-particle model described here has recently been applied to identify physical processes leading to space-charge transport limits for an rms-matched beam in a periodic quadrupole focusing channel [Lund and Chawla, Nucl. Instr. and Meth. A 561, 203 (2006)]. Further characteristics of these processes are presented here.

  2. Binary black holes in nuclei of extragalactic radio sources

    NASA Astrophysics Data System (ADS)

    Roland, J.; Britzen, S.

    If the nuclei of extragalactic radio sources contain binary black hole (BBH) systems, both black holes can eject VLBI components, in which case two families of different VLBI trajectories will be observed. An important consequence of the presence of a BBH system is the following: the VLBI core is associated with one black hole, and if a VLBI component is ejected by the second black hole, one expects to be able to detect the offset of the origin of that component from the VLBI core. The ejection of VLBI components is perturbed by the precession of the accretion disk and by the motion of the black holes around the center of gravity of the BBH system. We modeled the ejection of a component taking both perturbations into account and obtained a method to fit the coordinates of a VLBI component and to deduce the characteristics of the BBH system: the ratio Tp/Tb, where Tp is the precession period of the accretion disk and Tb the orbital period of the BBH system; the mass ratio M1/M2; and the radius Rbin of the BBH system. We applied the method to component S1 of 1823+568 and to component C5 of 3C 279, which shows a large offset of the space origin from the VLBI core. We found that 1823+568 contains a BBH system of size Rbin ≈ 60 μas and that 3C 279 contains a BBH system of size Rbin ≈ 378 μas. We were able to deduce the separation of the two black holes and the coordinates of the second black hole relative to the VLBI core; this information will be important for linking the radio reference frame deduced from VLBI observations with the optical reference frame deduced from GAIA.

  3. Automatic tissue segmentation of breast biopsies imaged by QPI

    NASA Astrophysics Data System (ADS)

    Majeed, Hassaan; Nguyen, Tan; Kandel, Mikhail; Marcias, Virgilia; Do, Minh; Tangella, Krishnarao; Balla, Andre; Popescu, Gabriel

    2016-03-01

    The current tissue evaluation method for breast cancer would greatly benefit from higher throughput and less inter-observer variation. Since quantitative phase imaging (QPI) measures physical parameters of tissue, it can be used to find quantitative markers, eliminating observer subjectivity. Furthermore, since the pixel values in QPI remain the same regardless of the instrument used, classifiers can be built to segment various tissue components without need for color calibration. In this work we use a texton-based approach to segment QPI images of breast tissue into various tissue components (epithelium, stroma, or lumen). A tissue microarray comprising 900 unstained cores from 400 different patients was imaged using Spatial Light Interference Microscopy. The training data were generated by manually segmenting the images for 36 cores and labelling each pixel (epithelium, stroma, or lumen). For each pixel in the data, a response vector was generated by the Leung-Malik (LM) filter bank, and these responses were clustered using the k-means algorithm to find the centers (called textons). A random forest classifier was then trained to find the relationship between a pixel's label and the histogram of these textons in that pixel's neighborhood. The segmentation was carried out on the validation set by calculating the texton histogram in a pixel's neighborhood and generating a label based on the model learnt during training. Segmentation of the tissue into its various components is an important step toward efficiently computing parameters that are markers of disease. Automated segmentation, followed by diagnosis, can improve the accuracy and speed of analysis, leading to better health outcomes.
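The pipeline described above (filter-bank responses → k-means texton centres → per-neighbourhood texton histograms → random forest) can be sketched end-to-end on synthetic data. The tiny three-filter bank below is a stand-in for the Leung-Malik set, and the image and labels are fabricated for illustration:

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a QPI core image: left half smooth texture,
# right half high-variance texture, with ground-truth labels.
img = np.zeros((64, 64))
img[:, :32] = rng.normal(0.5, 0.02, (64, 32))
img[:, 32:] = rng.normal(0.5, 0.40, (64, 32))
labels = np.zeros((64, 64), dtype=int)
labels[:, 32:] = 1

# Tiny filter bank (a stand-in for the Leung-Malik bank used in the paper).
filters = [
    np.ones((3, 3)) / 9.0,                              # local mean
    np.array([[-1, 0, 1]] * 3, dtype=float) / 3.0,      # horizontal gradient
    np.array([[-1, 0, 1]] * 3, dtype=float).T / 3.0,    # vertical gradient
]
responses = np.stack([convolve(img, f) for f in filters], axis=-1)

# Textons: k-means centres of the per-pixel filter-response vectors.
k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0)
texton_map = km.fit_predict(responses.reshape(-1, len(filters))).reshape(img.shape)

# Feature for a pixel: histogram of texton labels in its 9x9 neighbourhood.
def texton_histogram(tmap, y, x, half=4):
    patch = tmap[y - half:y + half + 1, x - half:x + half + 1]
    return np.bincount(patch.ravel(), minlength=k)

ys, xs = np.meshgrid(np.arange(4, 60, 4), np.arange(4, 60, 4), indexing="ij")
feats = np.array([texton_histogram(texton_map, y, x)
                  for y, x in zip(ys.ravel(), xs.ravel())])
truth = labels[ys.ravel(), xs.ravel()]

# Random forest maps a neighbourhood's texton histogram to a tissue label.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(feats, truth)
print(clf.score(feats, truth))  # training accuracy on the toy image
```

The real system differs in scale (full LM bank, three tissue classes, held-out validation cores), but the data flow is the same.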

  4. "It Gave Me My Life Back": An Evaluation of a Specialist Legal Domestic Abuse Service.

    PubMed

    Lea, Susan J; Callaghan, Lynne

    2016-05-01

    Community-based advocacy services are important in enabling victims to escape domestic abuse and rebuild their lives. This study evaluated a domestic abuse service. Two phases of research were conducted following case-file analysis (n = 86): surveys (n = 22) and interviews (n = 12) with victims, and interviews with key individuals (n = 12) based in related statutory and community organizations. The findings revealed that the holistic model of legal, practical, mental health-related, and advocacy components resulted in a range of benefits to victims and enhanced interagency partnership working. Core elements of a successful needs-led, victim-centered service could be distilled. © The Author(s) 2015.

  5. Active non-volatile memory post-processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Sudarsun; Milojicic, Dejan S.; Talwar, Vanish

    A computing node includes an active Non-Volatile Random Access Memory (NVRAM) component which includes memory and a sub-processor component. The memory is to store data chunks received from a processor core, the data chunks comprising metadata indicating a type of post-processing to be performed on data within the data chunks. The sub-processor component is to perform post-processing of said data chunks based on said metadata.
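The division of labour described above, where the host tags data chunks with metadata and the NVRAM-side sub-processor dispatches on that metadata, can be mimicked in a few lines. The abstract specifies only the architecture; the tag names and the two post-processing operations below are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical post-processing operations, keyed by a chunk's metadata tag.
def checksum(data: bytes) -> int:
    return sum(data) % 256

def run_length_encode(data: bytes) -> list:
    """Trivial run-length stand-in for a real compressor."""
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out

POST_PROCESSORS = {"checksum": checksum, "rle": run_length_encode}

@dataclass
class Chunk:
    metadata: str   # names the post-processing the sub-processor should apply
    data: bytes

def sub_processor(chunks):
    """Role of the NVRAM-side sub-processor: dispatch on each chunk's metadata."""
    return [POST_PROCESSORS[c.metadata](c.data) for c in chunks]

results = sub_processor([Chunk("checksum", b"\x01\x02\x03"),
                         Chunk("rle", b"aaab")])
print(results)  # [6, [(97, 3), (98, 1)]]
```

The point of the architecture is that this dispatch loop runs next to the memory, so the data chunks never cross back to the host core for post-processing.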

  6. Pan- and core- network analysis of co-expression genes in a model plant

    DOE PAGES

    He, Fei; Maslov, Sergei

    2016-12-16

    Genome-wide gene expression experiments have been performed using the model plant Arabidopsis during the last decade. Some studies involved the construction of coexpression networks, a popular technique used to identify groups of co-regulated genes and to infer unknown gene functions. One approach is to construct a single coexpression network by combining multiple expression datasets generated in different labs. We advocate a complementary approach in which we construct a large collection of 134 coexpression networks based on expression datasets reported in individual publications. To this end we reanalyzed public expression data. To describe this collection of networks we introduced the concepts of ‘pan-network’ and ‘core-network’, representing the union and the intersection, respectively, of a sizeable fraction of the individual networks. Here, we showed that these two types of networks differ both in their topology and in the biological function of their interacting genes. For example, the modules of the pan-network are enriched in regulatory and signaling functions, while the modules of the core-network tend to include components of large macromolecular complexes such as ribosomes and the photosynthetic machinery. Our analysis is aimed at helping the plant research community to better explore the information contained within the existing vast collection of gene expression data in Arabidopsis.
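The pan/core construction, a union versus an intersection across a sizeable fraction of the individual networks, reduces to counting each edge's occurrences. A sketch with three toy coexpression networks; gene names and the two fraction thresholds are invented (the paper uses 134 networks):

```python
from collections import Counter

# Each network is a set of undirected coexpression edges (gene pairs).
networks = [
    {frozenset(("gA", "gB")), frozenset(("gB", "gC")), frozenset(("gA", "gD"))},
    {frozenset(("gA", "gB")), frozenset(("gB", "gC")), frozenset(("gC", "gE"))},
    {frozenset(("gA", "gB")), frozenset(("gA", "gD")), frozenset(("gC", "gE"))},
]

counts = Counter(edge for net in networks for edge in net)
n = len(networks)

# Pan-network: edges present in at least a small fraction of the networks;
# core-network: edges present in most of them (thresholds illustrative).
pan = {e for e, c in counts.items() if c >= 0.2 * n}
core = {e for e, c in counts.items() if c >= 0.8 * n}
print(len(pan), len(core))  # the core is always a subset of the pan
```

The topological contrast reported in the abstract (signaling modules in the pan, stable complexes in the core) emerges once these edge sets are clustered into modules.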

  7. Pan- and core- network analysis of co-expression genes in a model plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Fei; Maslov, Sergei

    Genome-wide gene expression experiments have been performed using the model plant Arabidopsis during the last decade. Some studies involved the construction of coexpression networks, a popular technique used to identify groups of co-regulated genes and to infer unknown gene functions. One approach is to construct a single coexpression network by combining multiple expression datasets generated in different labs. We advocate a complementary approach in which we construct a large collection of 134 coexpression networks based on expression datasets reported in individual publications. To this end we reanalyzed public expression data. To describe this collection of networks we introduced the concepts of ‘pan-network’ and ‘core-network’, representing the union and the intersection, respectively, of a sizeable fraction of the individual networks. Here, we showed that these two types of networks differ both in their topology and in the biological function of their interacting genes. For example, the modules of the pan-network are enriched in regulatory and signaling functions, while the modules of the core-network tend to include components of large macromolecular complexes such as ribosomes and the photosynthetic machinery. Our analysis is aimed at helping the plant research community to better explore the information contained within the existing vast collection of gene expression data in Arabidopsis.

  8. Basal metabolic rate of endotherms can be modeled using heat-transfer principles and physiological concepts: reply to "can the basal metabolic rate of endotherms be explained by biophysical modeling?".

    PubMed

    Roberts, Michael F; Lightfoot, Edwin N; Porter, Warren P

    2011-01-01

    Our recent article (Roberts et al. 2010) proposes a mechanistic model for the relation between basal metabolic rate (BMR) and body mass (M) in mammals. The model is based on heat-transfer principles in the form of an equation for distributed heat generation within the body. The model can also be written in the form of the allometric equation BMR = aM^b, in which a is the coefficient of the mass term and b is the allometric exponent. The model generates two interesting results: it predicts that b takes the value 2/3, indicating that BMR is proportional to surface area in endotherms, and it provides an explanation of the physiological components that make up a, that is, respiratory heat loss, core-skin thermal conductance, and the core-skin thermal gradient. Some of the ideas in our article have been questioned (Seymour and White 2011), and this is our response to those questions. We specifically address the following points: whether a heat-transfer model can explain the level of BMR in mammals, whether our test of the model is inadequate because it uses the same literature data that generated the values of the physiological variables, and whether geometry and empirical values combine in a "coincidence" that makes the model only appear to conform to real processes.
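The allometric form BMR = aM^b with b = 2/3 has a simple numerical signature: on log-log axes it is a straight line of slope 2/3, so doubling body mass multiplies BMR by 2^(2/3) ≈ 1.587. A sketch with an arbitrary coefficient a (the physiological decomposition of a is the paper's contribution, not reproduced here):

```python
a = 3.4          # illustrative coefficient; units absorb the physiological constants
b = 2.0 / 3.0    # allometric exponent predicted by the heat-transfer model

def bmr(mass):
    """Allometric basal metabolic rate, BMR = a * M**b."""
    return a * mass ** b

# Surface-area scaling: doubling the mass multiplies BMR by 2**(2/3) ~ 1.587,
# independent of the value of a.
ratio = bmr(2.0) / bmr(1.0)
print(ratio)
```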

  9. Resolving Magnetic Flux Patches at the Surface of the Core

    NASA Technical Reports Server (NTRS)

    OBrien, Michael S.

    1996-01-01

    The geomagnetic field at a given epoch can be used to partition the surface of the liquid outer core into a finite number of contiguous regions in which the radial component of the magnetic flux density, B_r, is of one sign. These flux patches are instrumental in providing detail to surface fluid flows inferred from the changing geomagnetic field and in evaluating the validity of the frozen-flux approximation on which such inferences rely. Most of the flux patches in models of the modern field are small and enclose little flux compared to the total unsigned flux emanating from the core. To demonstrate that such patches are not required to explain the most spatially complete and accurate data presently available, those from the Magsat mission, I have constructed a smooth core field model that fits the Magsat data but does not possess small flux patches. I conclude that our present knowledge of the geomagnetic field does not allow us to resolve these features reliably at the core-mantle boundary; thus we possess less information about core flow than previously believed.

  10. Radio jets in NGC 4151: where eMERLIN meets HST

    NASA Astrophysics Data System (ADS)

    Williams, D. R. A.; McHardy, I. M.; Baldi, R. D.; Beswick, R. J.; Argo, M. K.; Dullo, B. T.; Knapen, J. H.; Brinks, E.; Fenech, D. M.; Mundell, C. G.; Muxlow, T. W. B.; Panessa, F.; Rampadarath, H.; Westcott, J.

    2017-12-01

    We present high-sensitivity eMERLIN radio images of the Seyfert galaxy NGC 4151 at 1.51 GHz. We compare the new eMERLIN images to those from archival MERLIN observations in 1993 to determine the change in jet morphology in the 22 yr between observations. We report an increase by almost a factor of 2 in the peak flux density of the central core component, C4, thought to host the black hole, but a probable decrease in some other components, possibly due to adiabatic expansion. The core flux increase indicates an active galactic nucleus (AGN) that is currently active and feeding the jet. We detect no significant motion in 22 yr between C4 and the component C3, which is unresolved in the eMERLIN image. We present a spectral index image made within the 512 MHz band of the 1.51 GHz observations. The spectrum of the core, C4, is flatter than that of other components further out in the jet. We use HST emission-line images (H α, [O III] and [O II]) to study the connection between the jet and the emission-line region. Based on the changing emission-line ratios away from the core and comparison with the eMERLIN radio jet, we conclude that photoionization from the central AGN is responsible for the observed emission-line properties further than 4 arcsec (360 pc) from the core, C4. Within this region, a body of evidence (radio-line co-spatiality, low [O III]/H α and estimated fast shocks) suggests additional ionization from the jet.

  11. Development of an Empirically Based Learning Performances Framework for Third-Grade Students' Model-Based Explanations about Plant Processes

    ERIC Educational Resources Information Center

    Zangori, Laura; Forbes, Cory T.

    2016-01-01

    To develop scientific literacy, elementary students should engage in knowledge building of core concepts through scientific practice (Duschl, Schweingruber, & Shouse, 2007). A core scientific practice is engagement in scientific modeling to build conceptual understanding about discipline-specific concepts. Yet scientific modeling remains…

  12. A constructive Indian country response to the evidence-based program mandate.

    PubMed

    Walker, R Dale; Bigelow, Douglas A

    2011-01-01

    Over the last 20 years, governmental mandates for preferentially funding evidence-based "model" practices and programs have become doctrine in some legislative bodies, federal agencies, and state agencies. It was assumed that what works in small-sample, controlled settings would work in all community settings, substantially improving safety, effectiveness, and value for money. The evidence-based "model" programs mandate has imposed immutable "core components," fidelity testing, alien programming and program developers, loss of familiar programs, and resource capacity requirements upon tribes, while infringing upon their tribal sovereignty and consultation rights. Tribal response in one state (Oregon) went through three phases: shock and rejection; proposing an alternative approach using criteria of cultural appropriateness, aspiring to evaluability; and adopting logic modeling. The state heard and accepted the argument that the tribal way of knowing is different and valid. Currently, a state-authorized tribal logic model and a review panel process are used to approve tribal best practices for state funding. This constructive response to the evidence-based program mandate elevates tribal practices in the funding and regulatory world and facilitates continuing quality improvement and evaluation, while ensuring that practices and programs remain based on local community context and culture. This article provides details of a model that could well serve tribes facing evidence-based model program mandates throughout the country.

  13. The Effects of Core-Mantle Interactions on Earth Rotation, Surface Deformation, and Gravity Changes

    NASA Astrophysics Data System (ADS)

    Watkins, A.; Gross, R. S.; Fu, Y.

    2017-12-01

    The length-of-day (LOD) contains a 6-year signal, the cause of which is currently unknown. The signal remains after removing tidal and surface fluid effects, thus the cause is generally believed to be angular momentum exchange between the mantle and core. Previous work has established a theoretical relationship between pressure variations at the core-mantle boundary (CMB) and resulting deformation of the overlying mantle and crust. This study examines globally distributed GPS deformation data in search of this effect, and inverts the discovered global inter-annual component for the CMB pressure variations. The geostrophic assumption is then used to obtain fluid flow solutions at the edge of the core from the CMB pressure variations. Taylor's constraint is applied to obtain the flow deeper within the core, and the equivalent angular momentum and LOD changes are computed and compared to the known 6-year LOD signal. The amplitude of the modeled and measured LOD changes agree, but the degree of period and phase agreement is dependent upon the method of isolating the desired component in the GPS position data. Implications are discussed, and predictions are calculated for surface gravity field changes that would arise from the CMB pressure variations.

  14. Core and shell size dependences on strain in core@shell Prussian blue analogue (PBA) nanoparticles and the effect on photomagnetism.

    NASA Astrophysics Data System (ADS)

    Cain, J. M.; Ferreira, C. F.; Felts, A. C.; Locicero, S. A.; Liang, J.; Talham, D. R.; Meisel, M. W.

    RbxCo[Fe(CN)6]y@KaNi[Cr(CN)6]b core@shell heterostructures have been shown to exhibit a photoinduced decrease in magnetization that persists up to the Tc = 70 K of the KNiCr-PBA component, which is not photoactive as a single-phase material. A magnetomechanical effect can explain how the strain in the shell evolves from thermal and photoinduced changes in the volume of the core. Moreover, a simple model has been used to estimate the depth of the strained region of the shell, but only one core size (347 +/- 35 nm) had been studied. Since the strain depth in the shell is expected to depend on the size of the core, three distinct RbCoFe-PBA core sizes were synthesized, and on each, three different KNiCr-PBA shell thicknesses were grown. The magnetization of each core-shell combination was measured before and after irradiation with white light. Our results suggest that the strain depth, as expected, increases from 56 nm in heterostructures with a core size of 328 +/- 29 nm to more than 90 nm in heterostructures with a core size of 575 +/- 113 nm. The data from the smallest core size also show features indicating that the model may be too simple. Supported by NSF DMR-1405439 (DRT) and DMR-1202033 (MWM).

  15. Core Noise - Increasing Importance

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2011-01-01

    This presentation is a technical summary of and outlook for NASA-internal and NASA-sponsored external research on core (combustor and turbine) noise funded by the Fundamental Aeronautics Program Subsonic Fixed Wing (SFW) Project. Sections of the presentation cover: the SFW system-level noise metrics for the 2015, 2020, and 2025 timeframes; turbofan design trends and their aeroacoustic implications; the emerging importance of core noise and its relevance to the SFW Reduced-Perceived-Noise Technical Challenge; and the current research activities in the core-noise area, with additional details given about the development of a high-fidelity combustor-noise prediction capability as well as activities supporting the development of improved reduced-order, physics-based models for combustor-noise prediction. The need for benchmark data for validation of high-fidelity and modeling work and the value of a potential future diagnostic facility for testing of core-noise-reduction concepts are indicated. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The SFW Reduced-Perceived-Noise Technical Challenge aims to develop concepts and technologies to dramatically reduce the perceived aircraft noise outside of airport boundaries. This reduction of aircraft noise is critical to enabling the anticipated large increase in future air traffic. Noise generated in the jet engine core, by sources such as the compressor, combustor, and turbine, can be a significant contribution to the overall noise signature at low-power conditions, typical of approach flight. At high engine power during takeoff, jet and fan noise have traditionally dominated over core noise. However, current design trends and expected technological advances in engine-cycle design as well as noise-reduction methods are likely to reduce non-core noise even at engine-power points higher than approach. 
In addition, future low-emission combustor designs could increase the combustion-noise component. The trend towards high-power-density cores also means that the noise generated in the low-pressure turbine will likely increase. Consequently, the combined result from these emerging changes will be to elevate the overall importance of turbomachinery core noise, which will need to be addressed in order to meet future noise goals.

  16. In-situ composite formation of damage tolerant coatings utilizing laser

    DOEpatents

    Blue, Craig A [Knoxville, TN; Wong, Frank [Livermore, CA; Aprigliano, Louis F [Berlin, MD; Engleman, Peter G [Knoxville, TN; Peter, William H [Knoxville, TN; Rozgonyi, Tibor G [Golden, CO; Ozdemir, Levent [Golden, CO

    2011-05-10

    A coated steel component with a pattern of an iron-based matrix with crystalline particles metallurgically bound to the surface of a steel substrate, for use as disc cutters or other components with one or more abrading surfaces that can experience significant abrasive wear, high point loads, and large shear stresses during use. The coated component contains a pattern of features in the shape of freckles or stripes that are laser formed and fused to the steel substrate. The features can display an inner core that is harder than the steel substrate but generally softer than the matrix surrounding the core, providing toughness and wear resistance to the features. The features result from processing an amorphous alloy where the resulting matrix can be amorphous, partially devitrified or fully devitrified.

  17. In-situ composite formation of damage tolerant coatings utilizing laser

    DOEpatents

    Blue, Craig A; Wong, Frank; Aprigliano, Louis F; Engleman, Peter G; Rozgonyi, Tibor G; Ozdemir, Levent

    2014-03-18

    A coated steel component with a pattern of an iron-based matrix with crystalline particles metallurgically bound to the surface of a steel substrate, for use as disc cutters or other components with one or more abrading surfaces that can experience significant abrasive wear, high point loads, and large shear stresses during use. The coated component contains a pattern of features in the shape of freckles or stripes that are laser formed and fused to the steel substrate. The features can display an inner core that is harder than the steel substrate but generally softer than the matrix surrounding the core, providing toughness and wear resistance to the features. The features result from processing an amorphous alloy where the resulting matrix can be amorphous, partially devitrified or fully devitrified.

  18. In-situ composite formation of damage tolerant coatings utilizing laser

    DOEpatents

    Blue, Craig A.; Wong, Frank; Aprigliano, Louis F.; Engleman, Peter G.; Peter, William H.; Rozgonyi, Tibor G.; Ozdemir, Levent

    2016-05-24

    A coated steel component with a pattern of an iron-based matrix with crystalline particles metallurgically bound to the surface of a steel substrate, for use as disc cutters or other components with one or more abrading surfaces that can experience significant abrasive wear, high point loads, and large shear stresses during use. The coated component contains a pattern of features in the shape of freckles or stripes that are laser formed and fused to the steel substrate. The features can display an inner core that is harder than the steel substrate but generally softer than the matrix surrounding the core, providing toughness and wear resistance to the features. The features result from processing an amorphous alloy where the resulting matrix can be amorphous, partially devitrified or fully devitrified.

  19. Paleomagnetism of late Quaternary drift sediments off the west Antarctica Peninsula

    NASA Astrophysics Data System (ADS)

    Channell, J. E. T.; Xuan, C.; Hillenbrand, C. D.; Larter, R. D.

    2016-12-01

    Natural remanent magnetization of a series of piston cores (typically 10 m in length) collected during the JR298 Expedition (January-March 2015) to the western Antarctic Peninsula shows well-defined magnetic components (maximum angular deviations 1°-3°) that potentially record paleomagnetic changes at high southern latitudes. Rock magnetic experiments on the sediments conducted at room and high (up to 700°C) temperatures demonstrate the presence of a low- and a high-coercivity component (mean coercivities of 50-60 mT and 130-140 mT, respectively). Paleomagnetic directions from the piston cores are primarily carried by the low-coercivity detrital (titano)magnetite, and are affected by authigenic growth of the high-coercivity maghemite. Maghemitization in these sediments is attributed to the low concentrations of labile organic matter and lack of sulfate reduction in an extended oxic zone not penetrated by the piston cores. Despite the varying degree of maghemitization, some of the recovered cores yield relative paleointensity (RPI) records that can be matched to a reference RPI record constructed mainly from North Atlantic cores. The resulting age models yield mean sedimentation rates of 4-12 cm/kyr for the JR298 piston cores. RPI may serve as a stratigraphic tool to date sediment cores from the region, where traditional isotope stratigraphy is challenging due to the rarity of foraminiferal carbonate.

  20. Helical vortices: viscous dynamics and instability

    NASA Astrophysics Data System (ADS)

    Rossi, Maurice; Selcuk, Can; Delbende, Ivan; Ijlra-Upmc Team; Limsi-Cnrs Team

    2014-11-01

    Understanding the dynamical properties of helical vortices is of great importance for numerous applications such as wind turbines, helicopter rotors, and ship propellers. Locally these flows often display a helical symmetry: fields are invariant under a combined axial translation of distance Δz and rotation of angle θ = Δz / L around the same z-axis, where 2πL denotes the helix pitch. A DNS code with built-in helical symmetry has been developed in order to compute viscous quasi-steady basic states with one or multiple vortices. These states will be characterized (core structure, ellipticity, ...) as a function of the pitch, with or without an axial flow component. The instability modes growing on the above base flows and their growth rates are investigated by a linearized version of the DNS code coupled to an Arnoldi procedure. This analysis is complemented by a helical thin-cored vortex filament model. ANR HELIX.

  1. Sediment mineralogy based on visible and near-infrared reflectance spectroscopy

    USGS Publications Warehouse

    Jarrard, R.D.; Vanden Berg, M.D.; ,

    2006-01-01

    Visible and near-infrared spectroscopy (VNIS) can be used to measure reflectance spectra (wavelength 350-2500 nm) for sediment cores and samples. A local ground-truth calibration of spectral features to mineral percentages is calculated by measuring reflectance spectra for a suite of samples of known mineralogy. This approach has been tested on powders, core plugs and split cores, and we conclude that it works well on all three, unless pore water is present. Initial VNIS studies have concentrated on determination of relative proportions of carbonate, opal, smectite and illite in equatorial Pacific sediments. Shipboard VNIS-based determination of these four components was demonstrated on Ocean Drilling Program Leg 199. © The Geological Society of London 2006.
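    The ground-truth calibration step described above can be sketched as an ordinary least-squares fit of known mineral percentages to spectral features; the sample counts, feature values, and coefficients below are invented for illustration and are not from the study:

    ```python
    # Minimal sketch (assumed, not the authors' code) of a local ground-truth
    # calibration: regress known mineral percentages on reflectance features
    # measured for a training suite, then predict unknowns from their spectra.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training set: 30 samples x 5 spectral features
    # (e.g., band depths at diagnostic wavelengths) with known carbonate %.
    features = rng.uniform(0.0, 1.0, size=(30, 5))
    true_coef = np.array([40.0, -10.0, 5.0, 0.0, 20.0])   # assumed "truth"
    carbonate_pct = features @ true_coef + 15.0 + rng.normal(0.0, 0.5, 30)

    # Fit an affine model by least squares (design matrix with intercept column).
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, carbonate_pct, rcond=None)

    # Predict carbonate % for a new, uncalibrated sample spectrum.
    new_sample = np.array([0.5, 0.2, 0.1, 0.9, 0.3])
    pred = np.append(new_sample, 1.0) @ coef
    ```

    With enough well-characterized training samples, the fitted coefficients recover the feature-to-percentage mapping, which is the essence of the local calibration.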

  2. Historical Variations in Inner Core Rotation and Polar Motion at Decade Timescales

    NASA Astrophysics Data System (ADS)

    Dumberry, M.

    2005-12-01

    Exchanges of angular momentum between the mantle, the fluid core and the solid inner core result in changes in the Earth's rotation. Torques in the axial direction produce changes in amplitude, or changes in length of day, while torques in the equatorial direction lead to changes in orientation of the rotation vector with respect to the mantle, or polar motion. In this work, we explore the possibility that a combination of electromagnetic and gravitational torques on the inner core can reproduce the observed decadal variations in polar motion known as the Markowitz wobble. Torsional oscillations, which involve azimuthal motions in the fluid core with typical periods of decades, entrain the inner core by electromagnetic traction. When the inner core is axially rotated, its surfaces of constant density are no longer aligned with the gravitational potential from mantle density heterogeneities, and this results in a gravitational torque between the two. The axial component of this torque has been previously described and is believed to be partly responsible for decadal changes in length of day. In this work, we show that it has also an equatorial component, which produces a tilt of the inner core and results in polar motion. The polar motion produced by this mechanism depends on the density structure in the mantle, the rheology of the inner core, and the time-history of the angle of axial misalignment between the inner core and the mantle. We reconstruct the latter using a model of torsional oscillations derived from geomagnetic secular variation. From this time-history, and by using published models of mantle density structure, we show that we can reproduce the salient characteristics of the Markowitz wobble: an eccentric decadal polar motion of 30-50 milliarcsecs oriented along a specific longitude. 
We discuss the implications of this result, noting that a match in both amplitude and phase of the observed Markowitz wobble allows the recovery of the historical rotational variations of the inner core, and also provides constraints on structure, rheology and dynamics of the Earth's deep interior that cannot be observed directly.

  3. The design of multi-core DSP parallel model based on message passing and multi-level pipeline

    NASA Astrophysics Data System (ADS)

    Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong

    2017-10-01

    Currently, the design of embedded signal processing system is often based on a specific application, but this idea is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on multi-core DSP platform is designed, and it is mainly suitable for the complex algorithms which are composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and summarizes the advantages of the mainstream model of multi-core DSP (the Master-Slave model and the Data Flow model), so that it has better performance. This paper uses three-dimensional image generation algorithm to validate the efficiency of the proposed model by comparing with the effectiveness of the Master-Slave and the Data Flow model.
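    The combination of multi-level pipelining and message passing that the model summarizes can be sketched in a few lines; this is an assumed illustration in Python (with threads standing in for DSP cores), not the authors' implementation:

    ```python
    # Illustrative sketch (assumed; not the paper's code): a multi-stage pipeline
    # whose stages communicate only by message passing, so each stage could be
    # mapped onto its own DSP core. Threads stand in for cores here.
    from queue import Queue
    from threading import Thread

    SENTINEL = object()  # end-of-stream marker forwarded through the pipeline

    def stage(inbox, outbox, func):
        """Generic pipeline stage: apply func to each message and forward it."""
        while True:
            msg = inbox.get()
            if msg is SENTINEL:
                outbox.put(SENTINEL)  # propagate shutdown downstream
                return
            outbox.put(func(msg))

    # Two-stage pipeline, e.g. a "filter" module feeding a "transform" module.
    q0, q1, q2 = Queue(), Queue(), Queue()
    t1 = Thread(target=stage, args=(q0, q1, lambda x: x * 2))
    t2 = Thread(target=stage, args=(q1, q2, lambda x: x + 1))
    t1.start(); t2.start()

    for item in range(5):   # producer pushes raw data frames
        q0.put(item)
    q0.put(SENTINEL)

    results = []
    while (msg := q2.get()) is not SENTINEL:
        results.append(msg)
    t1.join(); t2.join()
    print(results)  # -> [1, 3, 5, 7, 9]
    ```

    Because stages share no state and exchange only messages, each stage can run concurrently on its own data frame, which is what allows throughput to scale with the number of cores.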

  4. Physcomitrella patens: a model for tip cell growth and differentiation.

    PubMed

    Vidali, Luis; Bezanilla, Magdalena

    2012-12-01

    The moss Physcomitrella patens has emerged as an excellent model system owing to its amenability to reverse genetics. The moss gametophyte has three filamentous tissues that grow by tip growth: chloronemata, caulonemata, and rhizoids. Because establishment of the moss plant relies on this form of growth, it is particularly suited for dissecting the molecular basis of tip growth. Recent studies demonstrate that a core set of actin cytoskeletal proteins is essential for tip growth. Additional actin cytoskeletal components are required for modulating growth to produce caulonemata and rhizoids. Differentiation into these cell types has previously been linked to auxin, light and nutrients. Recent studies have identified that core auxin signaling components as well as transcription factors that respond to auxin or nutrient levels are required for tip-growing cell differentiation. Future studies may establish a connection between the actin cytoskeleton and auxin or nutrient-induced cell differentiation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Early results from Magsat. [studies of near-earth magnetic fields

    NASA Technical Reports Server (NTRS)

    Langel, R. A.; Estes, R. H.; Mayhew, M. A.

    1981-01-01

    Papers presented at the May 27, 1981 meeting of the American Geophysical Union concerning early results from the Magsat satellite program, which was designed to study the near-earth magnetic fields originating in the core and lithosphere, are discussed. The satellite was launched on October 30, 1979 into a sun-synchronous (twilight) orbit, and re-entered the atmosphere on June 11, 1980. Instruments carried included a cesium vapor magnetometer to measure field magnitudes, a fluxgate magnetometer to measure field components, and an optical system to measure fluxgate magnetometer orientation. Early results concerned spherical harmonic models, fields due to ionospheric and magnetospheric currents, and the identification and interpretation of fields from lithospheric sources. The preliminary results confirm the possibility of separating the measured field into core, crustal and external components, and represent significant developments in analytical techniques in main-field modelling and the physics of the field sources.

  6. NASA's Earth Science Data Systems

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2015-01-01

    NASA's Earth Science Data Systems (ESDS) Program has evolved over the last two decades, and currently has several core and community components. Core components provide the basic operational capabilities to process, archive, manage and distribute data from NASA missions. Community components provide a path for peer-reviewed research in Earth Science Informatics to feed into the evolution of the core components. The Earth Observing System Data and Information System (EOSDIS) is a core component consisting of twelve Distributed Active Archive Centers (DAACs) and eight Science Investigator-led Processing Systems spread across the U.S. The presentation covers how the ESDS Program continues to evolve and benefits from as well as contributes to advances in Earth Science Informatics.

  7. Evolution of two periodic meteoroid streams: The Perseids and Leonids

    NASA Astrophysics Data System (ADS)

    Brown, Peter Gordon

    Observations and modelling of the Perseid and Leonid meteoroid streams are presented and discussed. The Perseid stream is found to consist of three components: a weak background component, a core component and an outburst component. The particle distribution is identical for the outburst and core populations. Original visual accounts of the Leonid stream from 1832-1997 are analyzed to determine the time and magnitude of the peak for 32 Leonid returns in this interval. Leonid storms are shown to follow a Gaussian flux profile, to occur after the perihelion passage of 55P/Tempel-Tuttle, and to have a width/particle-density relationship consistent with IRAS cometary trail results. Variations in the width of the 1966 Leonid storm as a function of meteoroid mass are as expected based on the Whipple ejection velocity formalism. Four major models of cometary meteoroid ejection are developed and used to simulate plausible starting conditions for the formation of the Perseid and Leonid streams. Initial ejection velocities strongly influence Perseid stream development for the first five revolutions after ejection, at which point planetary perturbations and radiation effects become important for further development. The minimum distance between the osculating orbit of 109P/Swift-Tuttle and the Earth was found to be the principal determinant of any subsequent delivery of meteoroids to Earth. Systematic shifts in the location of the outburst component of the Perseids were shown to be due to the changing age of the primary meteoroid population making up the outbursts. The outburst component is due to distant, direct planetary perturbations from Jupiter and Saturn shifting nodal points inward relative to the comet. The age of the core population of the stream is found to be (25 +/- 10) × 10^3 years, while the total age of the stream is in excess of 10^5 years. 
The primary sinks for the stream are hyperbolic ejection and attainment of sungrazing states due to perturbations from Jupiter and Saturn. Ejection velocities are found to range from tens of m/s up to of order one hundred m/s. Modelling of the Leonid stream has demonstrated that storms from the shower are produced by meteoroids less than a century in age and are due to trails from Tempel-Tuttle coming within (8 +/- 6) × 10^-4 AU of the Earth's orbit on average. Trails are perturbed to Earth-intersection through distant, direct perturbations, primarily from Jupiter. The stream decreases in flux by two to three orders of magnitude in the first hundred years of development. Ejection velocities are found to be <20 m/s and average ~5 m/s for storm meteoroids. Jupiter controls evolution of the stream after a century; radiation pressure and initial ejection velocities are significant factors only on shorter timescales. The age of the annual component of the stream is ~1000 years.

  8. The HTA Core Model®-10 Years of Developing an International Framework to Share Multidimensional Value Assessment.

    PubMed

    Kristensen, Finn Børlum; Lampe, Kristian; Wild, Claudia; Cerbo, Marina; Goettsch, Wim; Becla, Lidia

    2017-02-01

    The HTA Core Model® as a science-based framework for assessing dimensions of value was developed as part of the European network for Health Technology Assessment project in the period 2006 to 2008 to facilitate production and sharing of health technology assessment (HTA) information, such as evidence on efficacy and effectiveness and patient aspects, to inform decisions. It covers clinical value as well as organizational, economic, and patient aspects of technologies and has been field-tested in two consecutive joint actions in the period 2010 to 2016. A large number of HTA institutions were involved in the work. The model has undergone revisions and improvement after iterations of piloting and can be used in a local, national, or international context to produce structured HTA information that can be taken forward by users into their own frameworks to fit their specific needs when informing decisions on technology. The model has a broad scope and offers a common ground to various stakeholders by providing a standard structure and a transparent set of proposed HTA questions. It consists of three main components: 1) the HTA ontology, 2) methodological guidance, and 3) a common reporting structure. It covers domains such as effectiveness, safety, and economics, and also includes domains covering organizational, patient, social, and legal aspects. There is a full model and a focused rapid relative effectiveness assessment model, and a third joint action is to continue until 2020. The HTA Core Model is now available for everyone around the world as a framework for assessing value. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  9. A Weak Bar Potential and Massive Core in the Seyfert 2 Galaxy NGC 3079: CO(1--0) observations using the Nobeyama Millimeter Array

    NASA Astrophysics Data System (ADS)

    Koda, J.; Sofue, Y.; Kohno, K.; Okumura, S. K.; Irwin, Judith A.

    We present our recent 12CO (1-0) observations of the central molecular disk of the Hα/radio lobe galaxy NGC 3079 with the Nobeyama Millimeter Array. We show four kinematically distinct components in the observed molecular disk: a main disk, spiral arms, a nuclear disk and a nuclear core. We discuss their possible origins using a simple orbit-analysis model in a weak bar potential. We show that three of the four components are well understood as typical gaseous orbits in a weak bar, such as gaseous x1- and x2-orbits. The main disk and spiral arms are well understood as the gaseous x1-orbits and their associated crowding, respectively. The nuclear disk is naturally explained by the x2-orbits. However, the nuclear core, showing a high velocity of about 200 km/s at a radius of about 100 pc, cannot be explained by those gaseous orbits in a bar. Furthermore, no other bar-driven orbits can account for the nuclear core. Thus we argue that this component should be attributed to a central massive core with a dynamical mass of about 10^9 Msun within the central 100 pc radius. This mass is three orders of magnitude larger than that of the central black hole in this galaxy. More detailed descriptions are presented in Koda et al. (2002).

  10. Simulation of the present-day climate with the climate model INMCM5

    NASA Astrophysics Data System (ADS)

    Volodin, E. M.; Mortikov, E. V.; Kostrykin, S. V.; Galin, V. Ya.; Lykossov, V. N.; Gritsun, A. S.; Diansky, N. A.; Gusev, A. V.; Iakovlev, N. G.

    2017-12-01

    In this paper we present the fifth generation of the INMCM climate model, which is being developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences (INMCM5). The most important changes with respect to the previous version (INMCM4) were made in the atmospheric component of the model. Its vertical resolution was increased to resolve the upper stratosphere and the lower mesosphere. A more sophisticated parameterization of condensation and cloudiness formation was introduced as well. An aerosol module was incorporated into the model. The upgraded oceanic component has a modified dynamical core optimized for better implementation on parallel computers and has twice the resolution in both horizontal directions. Analysis of the present-day climatology of INMCM5 (based on the data of a historical run for 1979-2005) shows moderate improvements in the reproduction of basic circulation characteristics with respect to the previous version. Biases in the near-surface temperature and precipitation are slightly reduced compared with INMCM4, as are biases in oceanic temperature, salinity and sea surface height. The most notable improvement over INMCM4 is the capability of the new model to reproduce the equatorial stratospheric quasi-biennial oscillation and the statistics of sudden stratospheric warmings.

  11. Running and rotating: modelling the dynamics of migrating cell clusters

    NASA Astrophysics Data System (ADS)

    Copenhagen, Katherine; Gov, Nir; Gopinathan, Ajay

    Collective motion of cells is a common occurrence in many biological systems, including tissue development and repair, and tumor formation. Recent experiments have shown that cells in a chemical gradient form clusters, which display three different phases of motion: translational, rotational, and random. We present a model for cell clusters, based loosely on other models in the literature, that involves a Vicsek-like alignment as well as physical collisions and adhesions between cells. With this model we show that one mechanism for driving rotational motion in this kind of system is an increased motility of rim cells. Further, we examine the details of the relationship between rim and core cells, and find that the phases of the cluster as a whole are correlated with the creation and annihilation of topological defects in the tangential component of the velocity field.

  12. Spatially-Distributed Stream Flow and Nutrient Dynamics Simulations Using the Component-Based AgroEcoSystem-Watershed (AgES-W) Model

    NASA Astrophysics Data System (ADS)

    Ascough, J. C.; David, O.; Heathman, G. C.; Smith, D. R.; Green, T. R.; Krause, P.; Kipka, H.; Fink, M.

    2010-12-01

    The Object Modeling System 3 (OMS3), currently being developed by the USDA-ARS Agricultural Systems Research Unit and Colorado State University (Fort Collins, CO), provides a component-based environmental modeling framework which allows the implementation of single- or multi-process modules that can be developed and applied as custom-tailored model configurations. OMS3 as a “lightweight” modeling framework contains four primary foundations: modeling resources (e.g., components) annotated with modeling metadata; domain specific knowledge bases and ontologies; tools for calibration, sensitivity analysis, and model optimization; and methods for model integration and performance scalability. The core is able to manage modeling resources and development tools for model and simulation creation, execution, evaluation, and documentation. OMS3 is based on the Java platform but is highly interoperable with C, C++, and FORTRAN on all major operating systems and architectures. The ARS Conservation Effects Assessment Project (CEAP) Watershed Assessment Study (WAS) Project Plan provides detailed descriptions of ongoing research studies at 14 benchmark watersheds in the United States. In order to satisfy the requirements of CEAP WAS Objective 5 (“develop and verify regional watershed models that quantify environmental outcomes of conservation practices in major agricultural regions”), a new watershed model development approach was initiated to take advantage of OMS3 modeling framework capabilities. Specific objectives of this study were to: 1) disaggregate and refactor various agroecosystem models (e.g., J2K-S, SWAT, WEPP) and implement hydrological, N dynamics, and crop growth science components under OMS3, 2) assemble a new modular watershed scale model for fully-distributed transfer of water and N loading between land units and stream channels, and 3) evaluate the accuracy and applicability of the modular watershed model for estimating stream flow and N dynamics. 
The Cedar Creek watershed (CCW) in northeastern Indiana, USA was selected for application of the OMS3-based AgroEcoSystem-Watershed (AgES-W) model. AgES-W performance for stream flow and N loading was assessed using Nash-Sutcliffe model efficiency (ENS) and percent bias (PBIAS) model evaluation statistics. Comparisons of daily and average monthly simulated and observed stream flow and N loads for the 1997-2005 simulation period resulted in PBIAS and ENS values that were similar or better than those reported in the literature for SWAT stream flow and N loading predictions at a similar scale. The results show that the AgES-W model was able to reproduce the hydrological and N dynamics of the CCW with sufficient quality, and should serve as a foundation upon which to better quantify additional water quality indicators (e.g., sediment transport and P dynamics) at the watershed scale.
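    The two evaluation statistics used for AgES-W, Nash-Sutcliffe efficiency (ENS) and percent bias (PBIAS), follow standard definitions and can be computed directly; the observed/simulated series below are invented for illustration:

    ```python
    # Standard definitions of the two model-evaluation statistics named above.
    # The example flow series are illustrative, not CCW data.
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """ENS: 1 is a perfect fit; <= 0 means no better than the observed mean."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def pbias(obs, sim):
        """PBIAS (%); with this sign convention, positive means underestimation."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * np.sum(obs - sim) / np.sum(obs)

    obs = [3.0, 4.0, 5.0, 6.0]   # illustrative observed daily flows
    sim = [2.9, 4.2, 4.8, 6.1]   # illustrative simulated daily flows
    print(nash_sutcliffe(obs, sim), pbias(obs, sim))
    ```

    Note that the sign convention for PBIAS varies between references; the convention above treats positive values as net model underestimation.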

  13. What's My Math Course Got to Do with Biology?

    ERIC Educational Resources Information Center

    Burks, Robert; Lindquist, Joseph; McMurran, Shawnee

    2008-01-01

    At United States Military Academy, a unit on biological modeling applications forms the culminating component of the first semester core mathematics course for freshmen. The course emphasizes the use of problem-solving strategies and modeling to solve complex and ill-defined problems. Topic areas include functions and their shapes, data fitting,…

  14. SPOC Benchmark Case: SNRE Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishal Patel; Michael Eades; Claude Russel Joyner II

    The Small Nuclear Rocket Engine (SNRE) was modeled in the Center for Space Nuclear Research's (CSNR) Space Propulsion Optimization Code (SPOC). SPOC aims to create nuclear thermal propulsion (NTP) geometries quickly to perform parametric studies on design spaces of historic and new NTP designs. The SNRE geometry was modeled in SPOC and a critical core with a reasonable amount of criticality margin was found. The fuel, tie-tubes, reflector, and control drum masses were predicted rather well. These are all very important for neutronics calculations, so the active reactor geometries created with SPOC can continue to be trusted. Thermal calculations of the average and hot fuel channels agreed very well. The specific impulse calculations used historically and in SPOC disagree, so mass flow rates and impulses differed. Modeling peripheral and power balance components that do not affect the nuclear characteristics of the core is not a feature of SPOC, and as such, these components should continue to be designed using other tools. A full paper detailing the available SNRE data and comparisons with SPOC outputs will be submitted as a follow-up to this abstract.

  15. Genome-Assisted Prediction of Quantitative Traits Using the R Package sommer.

    PubMed

    Covarrubias-Pazaran, Giovanny

    2016-01-01

    Most traits of agronomic importance are quantitative in nature, and genetic markers have been used for decades to dissect such traits. Recently, genomic selection has earned attention as next-generation sequencing technologies became feasible for major and minor crops. Mixed models have become a key tool for fitting genomic selection models, but most current genomic selection software can include only a single variance component other than the error, making hybrid prediction using additive, dominance and epistatic effects unfeasible for species displaying heterotic effects. Moreover, likelihood-based software for fitting mixed models with multiple random effects that allows the user to specify the variance-covariance structure of random effects has not been fully exploited. A new open-source R package called sommer is presented to facilitate the use of mixed models for genomic selection and hybrid prediction purposes using more than one variance component and allowing specification of covariance structures. The use of sommer for genomic prediction is demonstrated through several examples using maize and wheat genotypic and phenotypic data. At its core, the program contains three algorithms for estimating variance components: Average Information (AI), Expectation-Maximization (EM) and Efficient Mixed Model Association (EMMA). Kernels for calculating the additive, dominance and epistatic relationship matrices are included, along with other useful functions for genomic analysis. Results from sommer were comparable to those of other software, but the analysis was faster than Bayesian counterparts by hours to days. In addition, the ability to deal with missing data, combined with greater flexibility and speed than other REML-based software, was achieved by putting together some of the most efficient algorithms to fit models in a gentle environment such as R.
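sommer itself is an R package; as a language-neutral illustration of the core computation, the following numpy sketch (function names are ours) builds an additive relationship kernel from a marker matrix and makes a BLUP-style genomic prediction with a *fixed* error-to-genetic variance ratio, where sommer would instead estimate the variance components by AI, EM or EMMA:

```python
import numpy as np

def additive_kernel(M):
    # M: n x p matrix of marker genotypes (e.g. coded -1/0/1).
    # Column-center and form a VanRaden-style additive relationship matrix.
    p = M.shape[1]
    Mc = M - M.mean(axis=0)
    return Mc @ Mc.T / p

def gblup(y, K, ratio=1.0):
    # Solve (K + ratio*I) alpha = y - mean and predict g = mean + K alpha.
    # `ratio` = sigma_e^2 / sigma_g^2 is FIXED here; real mixed-model software
    # estimates it from the data by REML-type algorithms.
    n = len(y)
    mu = y.mean()
    alpha = np.linalg.solve(K + ratio * np.eye(n), y - mu)
    return mu + K @ alpha
```

With the ratio fixed, this is ordinary kernel ridge regression; the package's contribution is estimating several such variance components (additive, dominance, epistatic) jointly.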

  16. Modeling cell-cycle synchronization during embryogenesis in Xenopus laevis

    NASA Astrophysics Data System (ADS)

    McIsaac, R. Scott; Huang, K. C.; Sengupta, Anirvan; Wingreen, Ned

    2010-03-01

    A widely conserved aspect of embryogenesis is the ability to synchronize nuclear divisions post-fertilization. How is synchronization achieved? Given a typical protein diffusion constant of 10 μm^2/s and an embryo length of 1 mm, it would take diffusion many hours to propagate a signal across the embryo. Therefore, synchrony cannot be attained by diffusion alone. We hypothesize that known autocatalytic reactions of cell-cycle components make the embryo an "active medium" in which waves propagate much faster than diffusion, enforcing synchrony. We report on robust spatial synchronization of components of the core cell-cycle circuit based on a mathematical model previously determined by in vitro experiments. In vivo, synchronized divisions are preceded by a rapid calcium wave that sweeps across the embryo. Experimental evidence supports the hypothesis that transient increases in calcium levels lead to derepression of a negative feedback loop, allowing cell divisions to start. Preliminary results indicate a novel relationship between the speed of the initial calcium wave and the ability to achieve synchronous cell divisions.
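The "many hours" figure follows from the standard diffusive scaling t ≈ L²/D; a quick back-of-the-envelope check:

```python
# Order-of-magnitude time for diffusion to carry a signal across the embryo: t ~ L^2 / D
D = 10.0          # protein diffusion constant, um^2/s (value quoted in the abstract)
L = 1000.0        # embryo length in um (1 mm)
t_seconds = L ** 2 / D        # 1e5 s
t_hours = t_seconds / 3600.0  # roughly 28 hours -- far too slow to coordinate divisions
```

This is why the abstract argues for a faster-than-diffusion "active medium" wave as the synchronization mechanism.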

  17. SRS modeling in high power CW fiber lasers for component optimization

    NASA Astrophysics Data System (ADS)

    Brochu, G.; Villeneuve, A.; Faucher, M.; Morin, M.; Trépanier, F.; Dionne, R.

    2017-02-01

    A CW kilowatt fiber laser numerical model has been developed that takes intracavity stimulated Raman scattering (SRS) into account. It uses the split-step Fourier method, applied iteratively over several cavity round trips. The gain distribution is re-evaluated after each iteration with a standard CW model using an effective FBG reflectivity that quantifies the non-linear spectral leakage. This model explains why spectrally narrow output couplers produce more SRS than wider FBGs, as recently reported by other authors, and constitutes a powerful tool for designing optimized and innovative fiber components to push back the onset of SRS for a given fiber core diameter.

  18. Drosophila model of Meier-Gorlin syndrome based on the mutation in a conserved C-Terminal domain of Orc6.

    PubMed

    Balasov, Maxim; Akhmetova, Katarina; Chesnokov, Igor

    2015-11-01

    Meier-Gorlin syndrome (MGS) is an autosomal recessive disorder characterized by microtia, primordial dwarfism, small ears, and skeletal abnormalities. Patients with MGS often carry mutations in the genes encoding components of the pre-replicative complex, such as the Origin Recognition Complex (ORC) subunits Orc1, Orc4, and Orc6, and the helicase loaders Cdt1 and Cdc6. Orc6 is an important component of ORC and has functions in both DNA replication and cytokinesis. A mutation in the conserved C-terminal motif of Orc6 associated with MGS impedes the interaction of Orc6 with core ORC. In order to study the effects of the MGS mutation in an animal model system, we introduced the MGS mutation into Orc6 and established a Drosophila model of MGS. Mutant flies die at the third instar larval stage with abnormal chromosomes and DNA replication defects. The lethality can be rescued by elevated expression of the mutant Orc6 protein. Rescued MGS flies are unable to fly and display multiple planar cell polarity defects. © 2015 Wiley Periodicals, Inc.

  19. Satellite Gravity Drilling the Earth

    NASA Technical Reports Server (NTRS)

    vonFrese, R. R. B.; Potts, L. V.; Leftwich, T. E.; Kim, H. R.; Han, S.-H.; Taylor, P. T.; Ashgharzadeh, M. F.

    2005-01-01

    Analysis of satellite-measured gravity and topography can provide crust-to-core mass variation models for new insight on the geologic evolution of the Earth. The internal structure of the Earth is mostly constrained by seismic observations and geochemical considerations. We suggest that these constraints may be augmented by gravity drilling that interprets satellite altitude free-air gravity observations for boundary undulations of the internal density layers related to mass flow. The approach involves separating the free-air anomalies into terrain-correlated and -decorrelated components based on the correlation spectrum between the anomalies and the gravity effects of the terrain. The terrain-decorrelated gravity anomalies are largely devoid of the long wavelength interfering effects of the terrain gravity and thus provide enhanced constraints for modeling mass variations of the mantle and core. For the Earth, subcrustal interpretations of the terrain-decorrelated anomalies are constrained by radially stratified densities inferred from seismic observations. These anomalies, with frequencies that clearly decrease as the density contrasts deepen, facilitate mapping mass flow patterns related to the thermodynamic state and evolution of the Earth's interior.

  20. Application of reliability-centered maintenance to boiling water reactor emergency core cooling systems fault-tree analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Y.A.; Feltus, M.A.

    1995-07-01

    Reliability-centered maintenance (RCM) methods are applied to boiling water reactor plant-specific emergency core cooling system probabilistic risk assessment (PRA) fault trees. RCM is a system function-based technique for improving a preventive maintenance (PM) program, applied on a component basis. Many PM programs are based on time-directed maintenance tasks, while RCM methods focus on component condition-directed maintenance tasks. Stroke time test data for motor-operated valves (MOVs) are used to address three aspects concerning RCM: (a) to determine if MOV stroke time testing was useful as a condition-directed PM task; (b) to determine the plant-specific MOV failure data from a broad RCM philosophy time period and compare it with a PM period and with generic industry MOV failure data; and (c) to determine the effects and impact of the plant-specific MOV failure data on core damage frequency (CDF) and system unavailabilities for these emergency systems. The MOV stroke time test data from four emergency core cooling systems [i.e., high-pressure coolant injection (HPCI), reactor core isolation cooling (RCIC), low-pressure core spray (LPCS), and residual heat removal/low-pressure coolant injection (RHR/LPCI)] were gathered from Philadelphia Electric Company's Peach Bottom Atomic Power Station Units 2 and 3 between 1980 and 1992. The analyses showed that MOV stroke time testing was not a predictor of imminent failure and should be considered a go/no-go test. The failure data from the broad RCM philosophy showed an improvement over the PM-period failure rates in the emergency core cooling system MOVs. Also, the plant-specific MOV failure rates for both maintenance philosophies were shown to be lower than the generic industry estimates.

  1. Components of Effective Cognitive-Behavioral Therapy for Pediatric Headache: A Mixed Methods Approach

    PubMed Central

    Law, Emily F.; Beals-Erickson, Sarah E.; Fisher, Emma; Lang, Emily A.; Palermo, Tonya M.

    2017-01-01

    Internet-delivered treatment has the potential to expand access to evidence-based cognitive-behavioral therapy (CBT) for pediatric headache, and has demonstrated efficacy in small trials for some youth with headache. We used a mixed methods approach to identify effective components of CBT for this population. In Study 1, component profile analysis identified common interventions delivered in published RCTs of effective CBT protocols for pediatric headache delivered face-to-face or via the Internet. We identified a core set of three treatment components that were common across face-to-face and Internet protocols: 1) headache education, 2) relaxation training, and 3) cognitive interventions. Biofeedback was identified as an additional core treatment component delivered in face-to-face protocols only. In Study 2, we conducted qualitative interviews to describe the perspectives of youth with headache and their parents on successful components of an Internet CBT intervention. Eleven themes emerged from the qualitative data analysis, which broadly focused on patient experiences using the treatment components and suggestions for new treatment components. In the Discussion, these mixed methods findings are integrated to inform the adaptation of an Internet CBT protocol for youth with headache. PMID:29503787

  2. Components of Effective Cognitive-Behavioral Therapy for Pediatric Headache: A Mixed Methods Approach.

    PubMed

    Law, Emily F; Beals-Erickson, Sarah E; Fisher, Emma; Lang, Emily A; Palermo, Tonya M

    2017-01-01

    Internet-delivered treatment has the potential to expand access to evidence-based cognitive-behavioral therapy (CBT) for pediatric headache, and has demonstrated efficacy in small trials for some youth with headache. We used a mixed methods approach to identify effective components of CBT for this population. In Study 1, component profile analysis identified common interventions delivered in published RCTs of effective CBT protocols for pediatric headache delivered face-to-face or via the Internet. We identified a core set of three treatment components that were common across face-to-face and Internet protocols: 1) headache education, 2) relaxation training, and 3) cognitive interventions. Biofeedback was identified as an additional core treatment component delivered in face-to-face protocols only. In Study 2, we conducted qualitative interviews to describe the perspectives of youth with headache and their parents on successful components of an Internet CBT intervention. Eleven themes emerged from the qualitative data analysis, which broadly focused on patient experiences using the treatment components and suggestions for new treatment components. In the Discussion, these mixed methods findings are integrated to inform the adaptation of an Internet CBT protocol for youth with headache.

  3. Design of material management system of mining group based on Hadoop

    NASA Astrophysics Data System (ADS)

    Xia, Zhiyuan; Tan, Zhuoying; Qi, Kuan; Li, Wen

    2018-01-01

    Against the background of a persistent slowdown in the mining market, improving the management level of a mining group has become key to improving the economic benefit of its mines. Oriented to the practical material management of a mining group, three core components of Hadoop are applied: the distributed file system HDFS, the distributed computing framework Map/Reduce, and the distributed database HBase. A material management system for the mining group is constructed from these three core components together with SSH framework technology. The system was found to strengthen collaboration between the mining group and its affiliated companies, solving problems such as inefficient management, server pressure, and hardware performance deficiencies that exist in traditional mining material-management systems; as a result, the group's materials management is optimized, management costs are reduced, and enterprise profit is increased.
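As a toy illustration of the Map/Reduce programming model the abstract names (not the actual system's code; the record fields below are made up), a map phase emits (key, value) pairs and a reduce phase aggregates them per key, e.g. totalling each material's issued quantity across subsidiaries:

```python
from collections import defaultdict

# Hypothetical material-issue records: (subsidiary, material, quantity).
records = [
    ("mine_a", "drill_bit", 40), ("mine_b", "drill_bit", 25),
    ("mine_a", "explosive", 12), ("mine_b", "conveyor_belt", 3),
]

def map_phase(record):
    _, material, qty = record
    return (material, qty)            # emit a (key, value) pair per record

def reduce_phase(pairs):
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value          # combine all values sharing a key
    return dict(totals)

totals = reduce_phase(map_phase(r) for r in records)
```

In the real system, Hadoop shards the map work across HDFS blocks and shuffles pairs to reducers by key; the per-key aggregation contract is the same.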

  4. SBML Level 3 package: Hierarchical Model Composition, Version 1 Release 3

    PubMed Central

    Smith, Lucian P.; Hucka, Michael; Hoops, Stefan; Finney, Andrew; Ginkel, Martin; Myers, Chris J.; Moraru, Ion; Liebermeister, Wolfram

    2017-01-01

    Summary Constructing a model in a hierarchical fashion is a natural approach to managing model complexity, and offers additional opportunities such as the potential to re-use model components. The SBML Level 3 Version 1 Core specification does not directly provide a mechanism for defining hierarchical models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Hierarchical Model Composition package for SBML Level 3 adds the necessary features to SBML to support hierarchical modeling. The package enables a modeler to include submodels within an enclosing SBML model, delete unneeded or redundant elements of that submodel, replace elements of that submodel with elements of the containing model, and replace elements of the containing model with elements of the submodel. In addition, the package defines an optional “port” construct, allowing a model to be defined with suggested interfaces between hierarchical components; modelers can choose to use these interfaces, but they are not required to do so and can still interact directly with model elements if they so choose. Finally, the SBML Hierarchical Model Composition package is defined in such a way that a hierarchical model can be “flattened” to an equivalent, non-hierarchical version that uses only plain SBML constructs, thus enabling software tools that do not yet support hierarchy to nevertheless work with SBML hierarchical models. PMID:26528566
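The "flattening" idea can be sketched generically: qualify every submodel element's ID with its submodel ID and merge everything into one flat namespace. The dict layout below is a hypothetical stand-in for illustration, not the actual SBML comp schema or its prefixing rules:

```python
# Sketch of flattening a hierarchical model into a single namespace.  Submodel
# element IDs are prefixed with the submodel ID (separator "__" is our choice)
# so the flat result has no ID collisions and no hierarchy.
def flatten(model, prefix=""):
    flat = {prefix + eid: elem for eid, elem in model.get("elements", {}).items()}
    for sub_id, submodel in model.get("submodels", {}).items():
        flat.update(flatten(submodel, prefix + sub_id + "__"))
    return flat

nested = {
    "elements": {"S1": "species"},
    "submodels": {"cell": {"elements": {"S1": "species", "R1": "reaction"}}},
}
flat = flatten(nested)
```

The real package also handles deletions and replacements before prefixing, which is why a conforming flattener must process those constructs first.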

  5. Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.

    PubMed

    Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L

    2012-01-01

    In this paper, we present a method that enables both the homology-based approach and the composition-based approach to further study the functional core (i.e., the microbial core and the gene core, correspondingly). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and to study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). The model considers each genome as a mixture of latent genetic patterns (latent topics), while each pattern is a weighted mixture of the “N-mer” features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
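The paper uses a generative (LDA-style) topic model; as a dependency-light stand-in with the same shape, non-negative matrix factorization also decomposes "sample = mixture of latent groups; group = weighted mixture of species". The toy abundance counts below are made up, and NMF here is our substitution, not the authors' method:

```python
import numpy as np

# Toy samples-by-taxa abundance matrix (rows = "documents", cols = "species").
X = np.array([
    [30., 25.,  1.,  0.],
    [28., 20.,  0.,  2.],
    [ 1.,  0., 35., 30.],
    [ 0.,  2., 33., 28.],
])

def nmf(X, rank, n_iter=500, seed=0):
    # Multiplicative-update NMF: X ~ W @ H with W, H >= 0.  Row i of W is
    # sample i's loading on each latent "functional group"; row r of H is
    # that group's weighting over species.
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], rank)) + 0.1
    H = rng.random((rank, X.shape[1])) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(X, rank=2)
theta = W / W.sum(axis=1, keepdims=True)   # per-sample mixture over latent groups
```

On this block-structured toy matrix the two latent groups recover the two sample clusters, which is the "distribution over latent functions in each sample" the abstract describes.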

  6. Lean management systems: creating a culture of continuous quality improvement.

    PubMed

    Clark, David M; Silvester, Kate; Knowles, Simon

    2013-08-01

    This is the first in a series of articles describing the application of Lean management systems to Laboratory Medicine. Lean is the term used to describe a principle-based continuous quality improvement (CQI) management system based on the Toyota production system (TPS) that has been evolving for over 70 years. Its origins go back much further and are heavily influenced by the work of W Edwards Deming and the scientific method that forms the basis of most quality management systems. Lean has two fundamental elements--a systematic approach to process improvement by removing waste in order to maximise value for the end-user of the service and a commitment to respect, challenge and develop the people who work within the service to create a culture of continuous improvement. Lean principles have been applied to a growing number of Healthcare systems throughout the world to improve the quality and cost-effectiveness of services for patients and a number of laboratories from all the pathology disciplines have used Lean to shorten turnaround times, improve quality (reduce errors) and improve productivity. Increasingly, models used to plan and implement large scale change in healthcare systems, including the National Health Service (NHS) change model, have evidence-based improvement methodologies (such as Lean CQI) as a core component. Consequently, a working knowledge of improvement methodology will be a core skill for Pathologists involved in leadership and management.

  7. Non-thermal emission in the core of Perseus: results from a long XMM-Newton observation

    NASA Astrophysics Data System (ADS)

    Molendi, S.; Gastaldello, F.

    2009-01-01

    We employ a long XMM-Newton observation of the core of the Perseus cluster to validate claims of a non-thermal component discovered with Chandra. From a meticulous analysis of our dataset, which includes a detailed treatment of systematic errors, we find the 2-10 keV surface brightness of the non-thermal component to be less than about 5 × 10^-16 erg cm^-2 s^-1 arcsec^-2. The most likely explanation for the discrepancy between the XMM-Newton and Chandra estimates is a problem in the effective area calibration of the latter. Our EPIC-based magnetic field lower limits do not disagree with Faraday rotation measure estimates on a few cool cores and with a minimum energy estimate on Perseus. In the not too distant future, Simbol-X may allow detection of non-thermal components with intensities more than 10 times lower than those that can be measured with EPIC; nonetheless, even the exquisite sensitivity within reach for Simbol-X might be insufficient to detect the IC emission from Perseus.

  8. Atomically informed nonlocal semi-discrete variational Peierls-Nabarro model for planar core dislocations

    PubMed Central

    Liu, Guisen; Cheng, Xi; Wang, Jian; Chen, Kaiguo; Shen, Yao

    2017-01-01

    Prediction of the Peierls stress associated with dislocation glide is of fundamental concern in understanding and designing the plasticity and mechanical properties of crystalline materials. Here, we develop a nonlocal semi-discrete variational Peierls-Nabarro (SVPN) model by incorporating nonlocal atomic interactions into the semi-discrete variational Peierls framework. The nonlocal kernel is simplified by limiting the nonlocal atomic interaction to the nearest-neighbor region, and the nonlocal coefficient is computed directly from the dislocation core structure. Our model is capable of accurately predicting the displacement profile and the Peierls stress of planar-extended core dislocations in face-centered cubic structures. Our model could be extended to study more complicated planar-extended core dislocations, such as <110> {111} dislocations in Al-based and Ti-based intermetallic compounds. PMID:28252102

  9. The top of the Olduvai subchron in a high-resolution magnetostratigraphy from the West Turkana core WTK13, Hominin Sites and Paleolakes Drilling Project (HSPDP)

    NASA Astrophysics Data System (ADS)

    Sier, Mark; Langereis, Cor; Dupont-Nivet, Guillaume; Feibel, Craig; Jordeens, Jose; van der Lubbe, Jeroen; Beck, Catherine; Olago, Daniel; Cohen, Andrew

    2017-04-01

    One of the major challenges in understanding the evolution of our own species is identifying the role climate change has played in the evolution of earlier hominin species. To clarify the influence of climate, we need long and continuous high-resolution paleoclimate records, preferably obtained from hominin-bearing sediments, that are well-dated by tephro- and magnetostratigraphy and other methods. This is hindered, however, by the fact that fossil-bearing sediments are often discontinuous, and subject to weathering, which may lead to oxidation and remagnetization. To obtain fresh, unweathered sediments, the Hominin Sites and Paleolakes Drilling Project (HSPDP) collected a 216-meter core (WTK13) in 2013 from deposits of Early Pleistocene paleolake Lorenyang in the western Turkana Basin (Kenya). Here, we present the magnetostratigraphy of the core. Rock magnetic analyses reveal the presence of iron sulphides carrying the remanent magnetizations. To recover polarity orientation from the near-equatorial WTK13 core drilled at 5°N, we developed and successfully applied two independent drill-core reorientation methods taking advantage of (1) the sedimentary fabric as expressed in the Anisotropy of Magnetic Susceptibility (AMS) and (2) the occurrence of a viscous component oriented in the present day field. The reoriented directions reveal a normal to reversed polarity reversal identified as the top of the Olduvai subchron. From this excellent record, we find no evidence for the 'Vrica subchron' previously reported in the area. We suggest that outcrop-based interpretations supporting the presence of the Vrica subchron have been affected by the oxidation of iron sulphides initially present in the sediments as evident in the core record, and by subsequent remagnetization. Based on our new high-resolution magnetostratigraphy and stratigraphic markers, we provide constraints for an initial age model of the WTK13 core.
We discuss the implications of the observed geomagnetic record for human evolution studies.

  10. Phase Equilibrium Experiments on Potential Lunar Core Compositions: Extension of Current Knowledge to Multi-Component (Fe-Ni-Si-S-C) Systems

    NASA Technical Reports Server (NTRS)

    Righter, K.; Pando, K.; Danielson, L.

    2014-01-01

    Numerous geophysical and geochemical studies have suggested the existence of a small metallic lunar core, but the composition of that core is not known. Knowledge of the composition can have a large impact on the thermal evolution of the core, its possible early dynamo creation, and its overall size and fraction of solid and liquid. Thermal models predict that the current temperature at the core-mantle boundary of the Moon is near 1650 K. Re-evaluation of Apollo seismic data has highlighted the need for new data on a broader range of bulk core compositions in the P-T range of the lunar core. Geochemical measurements have suggested a more volatile-rich Moon than previously thought. And GRAIL mission data may allow much better constraints on the physical nature of the lunar core. All of these factors have led us to undertake new phase equilibrium experiments in the Fe-Ni-S-C-Si system in the relevant P-T range of the lunar core that will help constrain the composition of the Moon's core.

  11. Nanoporous silica-based protocells at multiple scales for designs of life and nanomedicine

    DOE PAGES

    Sun, Jie; Jakobsson, Eric; Wang, Yingxiao; ...

    2015-01-19

    In this study, various protocell models have been constructed de novo with the bottom-up approach. Here we describe a silica-based protocell composed of a nanoporous amorphous silica core encapsulated within a lipid bilayer built by self-assembly that provides for independent definition of cell interior and the surface membrane. In this review, we will first describe the essential features of this architecture and then summarize the current development of silica-based protocells at both micro- and nanoscale with diverse functionalities. As the structure of the silica is relatively static, silica-core protocells do not have the ability to change shape, but their interior structure provides a highly crowded and, in some cases, authentic scaffold upon which biomolecular components and systems could be reconstituted. In basic research, the larger protocells based on precise silica replicas of cells could be developed into geometrically realistic bioreactor platforms to enable cellular functions like coupled biochemical reactions, while in translational research smaller protocells based on mesoporous silica nanoparticles are being developed for targeted nanomedicine. Ultimately we see two different motivations for protocell research and development: (1) to emulate life in order to understand it; and (2) to use biomimicry to engineer desired cellular interactions.

  12. Nanoporous silica-based protocells at multiple scales for designs of life and nanomedicine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Jie; Jakobsson, Eric; Wang, Yingxiao

    In this study, various protocell models have been constructed de novo with the bottom-up approach. Here we describe a silica-based protocell composed of a nanoporous amorphous silica core encapsulated within a lipid bilayer built by self-assembly that provides for independent definition of cell interior and the surface membrane. In this review, we will first describe the essential features of this architecture and then summarize the current development of silica-based protocells at both micro- and nanoscale with diverse functionalities. As the structure of the silica is relatively static, silica-core protocells do not have the ability to change shape, but their interior structure provides a highly crowded and, in some cases, authentic scaffold upon which biomolecular components and systems could be reconstituted. In basic research, the larger protocells based on precise silica replicas of cells could be developed into geometrically realistic bioreactor platforms to enable cellular functions like coupled biochemical reactions, while in translational research smaller protocells based on mesoporous silica nanoparticles are being developed for targeted nanomedicine. Ultimately we see two different motivations for protocell research and development: (1) to emulate life in order to understand it; and (2) to use biomimicry to engineer desired cellular interactions.

  13. Circuit models and three-dimensional electromagnetic simulations of a 1-MA linear transformer driver stage

    NASA Astrophysics Data System (ADS)

    Rose, D. V.; Miller, C. L.; Welch, D. R.; Clark, R. E.; Madrid, E. A.; Mostrom, C. B.; Stygar, W. A.; Lechien, K. R.; Mazarakis, M. A.; Langston, W. L.; Porter, J. L.; Woodworth, J. R.

    2010-09-01

    A 3D fully electromagnetic (EM) model of the principal pulsed-power components of a high-current linear transformer driver (LTD) has been developed. LTD systems are a relatively new modular and compact pulsed-power technology based on high-energy density capacitors and low-inductance switches located within a linear-induction cavity. We model 1-MA, 100-kV, 100-ns rise-time LTD cavities [A. A. Kim, Phys. Rev. ST Accel. Beams 12, 050402 (2009)] which can be used to drive z-pinch and material dynamics experiments. The model simulates the generation and propagation of electromagnetic power from individual capacitors and triggered gas switches to a radially symmetric output line. Multiple cavities, combined to provide voltage addition, drive a water-filled coaxial transmission line. A 3D fully EM model of a single 1-MA 100-kV LTD cavity driving a simple resistive load is presented and compared to electrical measurements. A new model of the current loss through the ferromagnetic cores is developed for use both in circuit representations of an LTD cavity and in the 3D EM simulations. Good agreement between the measured core current, a simple circuit model, and the 3D simulation model is obtained. A 3D EM model of an idealized ten-cavity LTD accelerator is also developed. The model results demonstrate efficient voltage addition when driving a matched impedance load, in good agreement with an idealized circuit model.

  14. A 21 000-year record of fluorescent organic matter markers in the WAIS Divide ice core

    NASA Astrophysics Data System (ADS)

    D'Andrilli, Juliana; Foreman, Christine M.; Sigl, Michael; Priscu, John C.; McConnell, Joseph R.

    2017-05-01

    Englacial ice contains a significant reservoir of organic material (OM), preserving a chronological record of materials from Earth's past. Here, we investigate whether OM composition surveys in ice core research can provide paleoecological information on the dynamic nature of our Earth through time. Temporal trends in OM composition from the early Holocene extending back to the Last Glacial Maximum (LGM) of the West Antarctic Ice Sheet Divide (WD) ice core were measured by fluorescence spectroscopy. Multivariate parallel factor (PARAFAC) analysis is widely used to isolate the chemical components that best describe the observed variation across three-dimensional fluorescence spectroscopy (excitation-emission matrices; EEMs) assays. Fluorescent OM markers identified by PARAFAC modeling of the EEMs from the LGM (27.0-18.0 kyr BP; before present 1950) through the last deglaciation (LD; 18.0-11.5 kyr BP) to the mid-Holocene (11.5-6.0 kyr BP) provided evidence of different types of fluorescent OM composition and origin in the WD ice core over 21.0 kyr. The low excitation-emission wavelength fluorescent PARAFAC component one (C1), associated with chemical species similar to simple lignin phenols, was the greatest contributor throughout the ice core, suggesting a strong signature of terrestrial OM in all climate periods. The component two (C2) OM marker encompassed distinct variability in the ice core, describing chemical species similar to tannin- and phenylalanine-like material. Component three (C3), associated with humic-like terrestrial material further resistant to biodegradation, was characteristic only of the Holocene, suggesting that more complex organic polymers such as lignins or tannins may be an ecological marker of warmer climates.
We suggest that fluorescent OM markers observed during the LGM were the result of greater continental dust loading of lignin precursor (monolignol) material in a drier climate, with lower marine influences when sea ice extent was higher and continents had more expansive tundra cover. As the climate warmed, the record of OM markers in the WD ice core changed, reflecting shifts in carbon productivity as a result of global ecosystem response.
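PARAFAC (the CP tensor decomposition) is generic; a minimal alternating-least-squares sketch in numpy for a three-way EEM-like array follows (function names are ours; real EEM analyses typically add non-negativity constraints, convergence checks, and split-half validation):

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product: column r is kron(U[:, r], V[:, r]).
    return np.einsum('jr,kr->jkr', U, V).reshape(U.shape[0] * V.shape[0], -1)

def parafac_als(T, rank, n_iter=500, seed=0):
    # Minimal alternating-least-squares CP/PARAFAC for a 3-way array T (I x J x K):
    # T[i, j, k] ~ sum_r A[i, r] * B[j, r] * C[k, r].
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    T0 = T.reshape(I, J * K)                    # mode-0 unfolding
    T1 = T.transpose(1, 0, 2).reshape(J, I * K) # mode-1 unfolding
    T2 = T.transpose(2, 0, 1).reshape(K, I * J) # mode-2 unfolding
    for _ in range(n_iter):
        A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```

For an EEM stack, the three modes are samples × excitation × emission, so each rank-one component couples a sample-loading vector (the "marker" intensity down-core) with one excitation and one emission spectrum.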

  15. Research References Related to Indoor Air Quality in Schools

    EPA Pesticide Factsheets

    A healthy school environment is one of the keys to keeping young minds and bodies strong. In fact, a healthy school environment is one of eight core components in the Centers for Disease Control and Prevention (CDC) model Healthy Youth!

  16. Frequently Asked Questions about Improved Academic Performance

    EPA Pesticide Factsheets

    A healthy school environment is one of the keys to keeping young minds and bodies strong. In fact, a healthy school environment is one of eight core components in the Centers for Disease Control and Prevention (CDC) model “Healthy Youth! Coordinated

  17. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    PubMed

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and the information needed for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Reconstruction and division of clay minerals play a vital role in digital core reconstruction, and two-dimensional data-based reconstruction methods are well suited to simulating the microstructure of sandstone reservoirs. However, reconstructing the various clay minerals in digital cores remains a research challenge. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After application of the hybrid method, and compared with the model reconstructed by the process-based method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture. The statistics and geometry of the reconstructed model were similar to those of the reference model. In addition, the Hoshen-Kopelman algorithm was used to label the connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. The K-means clustering algorithm was then applied to divide the labeled, large connected clusters into smaller ones on the basis of differences in cluster characteristics. According to the clay minerals' characteristics, such as type, texture, and distribution, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for simulating different clay minerals in digital cores.
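    The labeling step can be illustrated with a small sketch. This is a generic union-find (Hoshen-Kopelman-style) labeler on a hypothetical 2D binary grid, not the authors' 3D implementation, and the K-means subdivision step is omitted:

    ```python
    import numpy as np

    def label_clusters(grid):
        """Union-find (Hoshen-Kopelman-style) labeling of 4-connected
        clusters in a binary occupancy grid, e.g. cells flagged as clay."""
        parent = {}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra

        rows, cols = grid.shape
        labels = np.zeros_like(grid, dtype=int)
        next_label = 1
        for i in range(rows):
            for j in range(cols):
                if not grid[i, j]:
                    continue
                up = labels[i - 1, j] if i > 0 else 0
                left = labels[i, j - 1] if j > 0 else 0
                if up == 0 and left == 0:
                    labels[i, j] = next_label     # start a new cluster
                    parent[next_label] = next_label
                    next_label += 1
                else:
                    labels[i, j] = max(up, left)  # join a neighbor's cluster
                    if up and left:
                        union(up, left)           # record the equivalence
        # Second pass: resolve label equivalences to root labels
        for i in range(rows):
            for j in range(cols):
                if labels[i, j]:
                    labels[i, j] = find(labels[i, j])
        return labels

    grid = np.array([[1, 1, 0, 0],
                     [0, 1, 0, 1],
                     [0, 0, 0, 1],
                     [1, 0, 0, 1]], dtype=bool)
    labels = label_clusters(grid)
    sizes = {int(l): int((labels == l).sum()) for l in np.unique(labels) if l}
    print(sorted(sizes.values()))  # -> [1, 3, 3]
    ```

    In the paper's workflow the recorded cluster sizes then feed a K-means step that splits overly large connected clusters; here only the labeling half is shown.
    
    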

  18. 10 CFR 55.41 - Written examination: Operators.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... coefficients, and poison effects. (2) General design features of the core, including core structure, fuel elements, control rods, core instrumentation, and coolant flow. (3) Mechanical components and design... changes, and operating limitations and reasons for these operating characteristics. (6) Design, components...

  19. 10 CFR 55.41 - Written examination: Operators.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... coefficients, and poison effects. (2) General design features of the core, including core structure, fuel elements, control rods, core instrumentation, and coolant flow. (3) Mechanical components and design... changes, and operating limitations and reasons for these operating characteristics. (6) Design, components...

  20. 10 CFR 55.41 - Written examination: Operators.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... coefficients, and poison effects. (2) General design features of the core, including core structure, fuel elements, control rods, core instrumentation, and coolant flow. (3) Mechanical components and design... changes, and operating limitations and reasons for these operating characteristics. (6) Design, components...

  1. The nebular spectra of the transitional Type Ia Supernovae 2007on and 2011iv: broad, multiple components indicate aspherical explosion cores

    NASA Astrophysics Data System (ADS)

    Mazzali, P. A.; Ashall, C.; Pian, E.; Stritzinger, M. D.; Gall, C.; Phillips, M. M.; Höflich, P.; Hsiao, E.

    2018-05-01

    The nebular-epoch spectrum of the rapidly declining, `transitional' Type Ia supernova (SN) 2007on showed double emission peaks, which have been interpreted as indicating that the SN was the result of the direct collision of two white dwarfs. The spectrum can be reproduced using two distinct emission components, one redshifted and one blueshifted. These components are similar in mass but have slightly different degrees of ionization. They recede from one another at a line-of-sight speed larger than the sum of the expansion velocities of their emitting cores, thereby acting as two independent nebulae. While this configuration appears to be consistent with the scenario of two white dwarfs colliding, it may also indicate an off-centre delayed-detonation explosion of a near-Chandrasekhar-mass white dwarf. In either case, broad emission line widths and a rapidly evolving light curve can be expected given the bolometric luminosity of the SN. This is the case for both SNe 2007on and 2011iv, also a transitional SN Ia, which exploded in the same elliptical galaxy, NGC 1404. Although SN 2011iv does not show double-peaked emission line profiles, the width of its emission lines is such that a two-component model yields somewhat better results than a single-component model. Most of the mass ejected is in one component, however, which suggests that SN 2011iv was the result of the off-centre ignition of a Chandrasekhar-mass white dwarf.

  2. An Intelligent Architecture Based on Field Programmable Gate Arrays Designed to Detect Moving Objects by Using Principal Component Analysis

    PubMed Central

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high-rate background segmentation of images. The classical sequential execution of the different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA and based on the developed PCA core. It consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image in the PCA linear subspace previously obtained as a background model. The proposal achieves a high frame rate (up to 120 processed frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406

  3. A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

    NASA Astrophysics Data System (ADS)

    Kitayama, Masashi

    Recently, customers who need electricity of higher quality have been installing co-generation facilities; by supplying electricity to important loads from their own generators, they can ride through voltage sags and other distribution-system disturbances. Another example is FRIENDS, a highly reliable distribution system based on power-electronics technology, using semiconductor switches and storage devices. These examples illustrate the growing demand for high reliability in distribution systems, and fast relaying algorithms are indispensable to realizing such systems. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT), which provides a means of detecting discontinuities in the current waveform. Inrush current occurs when the transformer core saturates. The proposed method detects spikes in the DWT components caused by the discontinuity of the current waveform at both the beginning and the end of the inrush; wavelet thresholding, a wavelet-based statistical modeling technique, is applied to detect these spikes. The proposed method is verified using experimental data from a single-phase transformer and proves effective.
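    The detection idea can be sketched as follows. This is a hedged, simplified illustration: a level-1 Haar DWT (a stand-in for whatever mother wavelet the author used), a robust MAD-based threshold with an assumed tuning constant, and an artificial test waveform, not relay-grade code:

    ```python
    import numpy as np

    def haar_detail(signal):
        """Level-1 Haar wavelet detail coefficients; large values flag
        discontinuities such as the onset/end of magnetizing inrush."""
        s = np.asarray(signal, dtype=float)
        if len(s) % 2:
            s = s[:-1]
        return (s[0::2] - s[1::2]) / np.sqrt(2.0)

    def detect_discontinuities(signal, k=5.0):
        """Wavelet thresholding: flag detail coefficients exceeding k
        robust (MAD-based) standard deviations. k = 5 is an assumed
        tuning choice, not a value from the paper."""
        d = haar_detail(signal)
        mad = np.median(np.abs(d - np.median(d))) + 1e-12
        return np.flatnonzero(np.abs(d) > k * 1.4826 * mad)

    # Smooth "current" sinusoid with an abrupt jump at sample 101,
    # standing in for the waveform discontinuity when the core saturates.
    t = np.arange(256)
    x = np.sin(2 * np.pi * t / 64.0)
    x[101:] += 2.0
    spikes = detect_discontinuities(x)
    print(spikes)  # -> [50]: the coefficient pair straddling the jump
    ```

    A real relay would run this over a sliding window and distinguish inrush spikes from fault transients; only the spike-detection kernel is shown.
    
    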

  4. An intelligent architecture based on Field Programmable Gate Arrays designed to detect moving objects by using Principal Component Analysis.

    PubMed

    Bravo, Ignacio; Mazo, Manuel; Lázaro, José L; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel

    2010-01-01

    This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high-rate background segmentation of images. The classical sequential execution of the different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA and based on the developed PCA core. It consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image in the PCA linear subspace previously obtained as a background model. The proposal achieves a high frame rate (up to 120 processed frames per second) and high-quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices.
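    The background-subtraction scheme the paper accelerates can be sketched in software. This is a hedged NumPy illustration of the same idea (a PCA subspace as the background model, residual thresholding for motion); the array sizes, synthetic scene, and threshold are assumptions, and the FPGA parallelization is of course not captured:

    ```python
    import numpy as np

    def fit_background_subspace(frames, n_components=8):
        """Fit a PCA linear subspace to background frames (each frame
        flattened). Software sketch of the hardware pipeline:
        correlation/Gram matrix -> eigendecomposition -> projection."""
        X = frames.reshape(len(frames), -1).astype(float)
        mean = X.mean(axis=0)
        Xc = X - mean
        # Eigendecomposition of the small frame-by-frame Gram matrix,
        # standing in for the Jacobi diagonalization used on the FPGA
        gram = Xc @ Xc.T / len(X)
        vals, vecs = np.linalg.eigh(gram)
        order = np.argsort(vals)[::-1][:n_components]
        basis = Xc.T @ vecs[:, order]
        basis /= np.linalg.norm(basis, axis=0) + 1e-12
        return mean, basis

    def motion_mask(frame, mean, basis, thresh=30.0):
        """Reconstruct the frame from the background subspace and
        threshold the residual; large residuals mark moving objects."""
        x = frame.ravel().astype(float) - mean
        recon = basis @ (basis.T @ x)
        return (np.abs(x - recon) > thresh).reshape(frame.shape)

    rng = np.random.default_rng(0)
    background = rng.integers(0, 30, size=(16, 16))
    frames = np.array([background + rng.integers(-2, 3, size=(16, 16))
                       for _ in range(20)])
    mean, basis = fit_background_subspace(frames)

    scene = background.copy()
    scene[4:8, 4:8] += 120  # a bright "moving object" patch
    mask = motion_mask(scene, mean, basis)
    print(int(mask.sum()))
    ```

    The dynamic thresholding of the paper is reduced here to a fixed residual threshold; adapting it per frame is a straightforward extension.
    
    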

  5. DyNAvectors: dynamic constitutional vectors for adaptive DNA transfection.

    PubMed

    Clima, Lilia; Peptanariu, Dragos; Pinteala, Mariana; Salic, Adrian; Barboiu, Mihail

    2015-12-25

    Dynamic constitutional frameworks, based on squalene, PEG and PEI components, reversibly connected to core centers, allow the efficient identification of adaptive vectors for good DNA transfection efficiency and are well tolerated by mammalian cells.

  6. Numerical modeling of fluid flow in a fault zone: a case of study from Majella Mountain (Italy).

    NASA Astrophysics Data System (ADS)

    Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.

    2017-04-01

    The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired in Majella Mountain, in the Central Apennines (Italy). This fault zone is considered a good analogue because of the massive fluid migration, in the form of tar, preserved within it. Faults are mechanical features that cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can cause the fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., type of fractures, statistical properties, geometrical and petro-physical characteristics). In our model the damage zones of the fault are described as a discretely fractured medium, and the core of the fault as a porous medium. Our model utilizes the dfnWorks code, a parallelized computational suite developed at Los Alamos National Laboratory (LANL), which generates a three-dimensional discrete fracture network (DFN) of the fault's damage zones and characterizes their hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum domain of the core. The field investigations and the basic computational workflow will be described, along with preliminary results of fluid flow simulation at the scale of the fault.

  7. Rise in central west Greenland surface melt unprecedented over the last three centuries

    NASA Astrophysics Data System (ADS)

    Trusel, Luke; Das, Sarah; Osman, Matthew; Evans, Matthew; Smith, Ben; McConnell, Joe; Noël, Brice; van den Broeke, Michiel

    2017-04-01

    Greenland Ice Sheet surface melting has intensified and expanded over the last several decades and is now a leading component of ice sheet mass loss. Here, we constrain the multi-century temporal evolution of surface melt across central west Greenland by quantifying layers of refrozen melt within well-dated firn and ice cores collected in 2014 and 2015, as well as from a core collected in 2004. We find significant agreement among ice core, satellite, and regional climate model melt datasets over recent decades, confirming the fidelity of the ice core melt stratigraphy as a reliable record of past variability in the magnitude of surface melt. We also find a significant correlation between the melt records derived from our new 100-m GC-2015 core (2436 m.a.s.l.) and the older (2004) 150-m D5 core (2472 m.a.s.l.) located 50 km to the southeast. This agreement demonstrates the robustness of the ice core-derived melt histories and the potential for reconstructing regional melt evolution from a single site, despite local variability in melt percolation and refreeze processes. Our array of upper percolation zone cores reveals that although the overall frequency of melt at these sites has not increased, the intensification of melt over the last three decades is unprecedented within at least the last 365 years. Utilizing the regional climate model RACMO 2.3, we show that this melt intensification is a nonlinear response to warming summer air temperatures, thus underscoring the heightened sensitivity of this sector of Greenland to further climate warming. Finally, we examine spatial correlations between the ice core melt records and modeled melt fields across the ice sheet to assess the broader representation of each ice core record. This analysis reveals wide-ranging significant correlations, including with modeled meltwater runoff.
As such, our ice core melt records may furthermore offer unique, observationally-constrained insights into past variability in ice sheet mass loss.

  8. Automated technologies needed to prevent radioactive materials from reentering the atmosphere

    NASA Astrophysics Data System (ADS)

    Buden, David; Angelo, Joseph A., Jr.

    Project SIREN (Search, Intercept, Retrieve, Expulsion Nuclear) has been created to identify and evaluate the technologies and operational strategies needed to rendezvous with and capture aerospace radioactive materials (e.g., a distressed or spent space reactor core) before such materials can reenter the terrestrial atmosphere, and then to safely move these captured materials to an acceptable space destination for proper disposal. A major component of the current Project SIREN effort is the development of an interactive technology model (including a computerized database) that explores in building-block fashion the interaction of the technologies and procedures needed to successfully accomplish a SIREN mission. This SIREN model will include appropriate national and international technology elements, both contemporary and projected into the next century. To permit maximum flexibility and use, the SIREN technology database is being programmed for use on 386-class PCs.

  9. The Quartet does not play alone. Comment on "The quartet theory of human emotions: An integrative and neurofunctional model" by S. Koelsch et al.

    NASA Astrophysics Data System (ADS)

    Marco-Pallarés, Josep; Mas-Herrero, Ernest

    2015-06-01

    The study of emotions has been an important topic in cognitive and affective neuroscience in recent decades. In the present manuscript, Koelsch et al. [1] propose a new neurobiological framework based on four emotional core systems (the Quartet) involved in different aspects of human emotion processing. This is an interesting theory that goes beyond classical emotion classification to describe the emotional experience based on four main cerebral components (brainstem, diencephalon, hippocampus, and orbitofrontal cortex). This approach allows the description of different classes of affects, including those unique to humans, such as emotional responses to abstract stimuli (for example, aesthetic stimuli such as art and music).

  10. Psychotherapy training: Suggestions for core ingredients and future research.

    PubMed

    Boswell, James F; Castonguay, Louis G

    2007-12-01

    Despite our considerable depth and breadth of empirical knowledge on psychotherapy process and outcome, research on psychotherapy training is somewhat lacking. We would argue, however, that the scientist-practitioner model should not only guide practice, but also the way our field approaches training. In this paper we outline our perspective on the crucial elements of psychotherapy training based on available evidence, theory, and clinical experience, focusing specifically on the structure, key components, and important skills to be learned in a successful training program. In addition, we derive specific research directions based on the crucial elements of our proposed training perspective, and offer general considerations for research on training, including method and measurement issues. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  11. Experience of Delphi technique in the process of establishing consensus on core competencies.

    PubMed

    Raghav, Pankaja Ravi; Kumar, Dewesh; Bhardwaj, Pankaj

    2016-01-01

    The Department of Community Medicine and Family Medicine (CMFM) has been started as a new model for imparting the components of family medicine and delivering health-care services at primary and secondary levels in all six newly established All India Institute of Medical Sciences (AIIMS), but there is no competency-based curriculum for it. This paper aims to share the experience of using the Delphi method to develop consensus on the core competencies of the new CMFM model at AIIMS for undergraduate medical students in India. The study adopted different approaches and methods, of which Delphi was the most critical. In Delphi, the experts were contacted by e-mail and their feedback was analyzed. Two rounds of Delphi were conducted: 150 participants were contacted in Delphi-I, but only 46 responded; in Delphi-II, 26 participants responded, and their responses were considered for the final analysis. Three core competencies, namely clinician, primary-care physician, and professionalism, were agreed upon by all participants; the least agreement was observed for the epidemiologist and medical-teacher competencies. More-experienced experts were less consistent: responses changed from agree to disagree for more than 15% of participants, and from disagree to agree for 6%. Within the given constraints, the final list of competencies and skills for the discipline of CMFM, compiled after the Delphi process, will provide useful insight into the development of a competency-based curriculum for the subject.

  12. Evaluation of Enhanced Risk Monitors for Use on Advanced Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Veeramany, Arun; Bonebrake, Christopher A.

    This study provides an overview of the methodology for integrating time-dependent failure probabilities into nuclear power reactor risk monitors. This prototypic enhanced risk monitor (ERM) methodology was evaluated using a hypothetical probabilistic risk assessment (PRA) model, generated using a simplified design of a liquid-metal-cooled advanced reactor (AR). Component failure data from industry compilations of failures of components similar to those in the simplified AR model were used to initialize the PRA model. Core damage frequency (CDF) over time was computed and analyzed. In addition, a study of alternative risk metrics for ARs was conducted. Risk metrics that quantify the normalized cost of repairs, replacements, or other operations and management (O&M) actions were defined and used, along with an economic model, to compute the likely economic risk of future actions such as deferred maintenance, based on the anticipated change in CDF due to current component condition and future anticipated degradation. Such integration of conventional risk metrics with alternate risk metrics provides a convenient mechanism for assessing the impact of O&M decisions on the safety and economics of the plant. It is expected that, when integrated with supervisory control algorithms, such integrated risk monitors will provide a mechanism for real-time control decision-making that ensures safety margins are maintained while operating the plant in an economically viable manner.
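    A toy version of the core idea, time-dependent component failure probabilities rolled up into a core-damage probability, can be sketched as follows. The cut sets, component names, and Weibull aging parameters are entirely hypothetical, and a real PRA works with frequencies and far larger models:

    ```python
    import math

    # Hypothetical minimal cut sets for a simplified reactor PRA model:
    # core damage occurs if every basic event in some cut set fails.
    CUT_SETS = [("pump_a", "pump_b"), ("valve", "pump_b"), ("controller",)]

    # Assumed Weibull aging parameters (scale eta in years; shape beta > 1
    # means the failure rate grows as the component degrades).
    WEIBULL = {"pump_a": (30.0, 2.0), "pump_b": (25.0, 2.2),
               "valve": (40.0, 1.8), "controller": (60.0, 1.5)}

    def failure_prob(component, t):
        """Time-dependent failure probability F(t) = 1 - exp(-(t/eta)^beta)."""
        eta, beta = WEIBULL[component]
        return 1.0 - math.exp(-((t / eta) ** beta))

    def core_damage_prob(t):
        """Combine cut-set probabilities assuming independent cut sets
        (a standard simplifying approximation in this kind of sketch)."""
        p = 1.0
        for cut in CUT_SETS:
            q = 1.0
            for comp in cut:
                q *= failure_prob(comp, t)
            p *= (1.0 - q)
        return 1.0 - p

    for year in (1, 10, 20):
        print(f"year {year:2d}: core damage probability "
              f"{core_damage_prob(year):.3e}")
    ```

    The ERM idea layers an economic model on top: the same time-dependent probabilities feed a cost metric for deferred maintenance, which is omitted here.
    
    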

  13. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  14. Sorbent, Sublimation, and Icing Modeling Methods: Experimental Validation and Application to an Integrated MTSA Subassembly Thermal Model

    NASA Technical Reports Server (NTRS)

    Bower, Chad; Padilla, Sebastian; Iacomini, Christie; Paul, Heather L.

    2010-01-01

    This paper details the validation of modeling methods for the three core components of a Metabolic heat regenerated Temperature Swing Adsorption (MTSA) subassembly, developed for use in a Portable Life Support System (PLSS). The first core component in the subassembly is a sorbent bed, used to capture and reject metabolically produced carbon dioxide (CO2). The sorbent bed performance can be augmented with a temperature swing driven by a liquid CO2 (LCO2) sublimation heat exchanger (SHX) for cooling the sorbent bed, and a condensing, icing heat exchanger (CIHX) for warming the sorbent bed. As part of the overall MTSA effort, scaled design validation test articles for each of these three components have been independently tested in laboratory conditions. Previously described modeling methodologies developed for implementation in Thermal Desktop and SINDA/FLUINT are reviewed and updated, their application in test article models outlined, and the results of those model correlations relayed. An assessment is given of the applicability of each modeling methodology to the challenge of simulating the response of the test articles, and of their extensibility to a full-scale integrated subassembly model. The independently verified and validated modeling methods are applied to the development of a MTSA subassembly prototype model and predictions of the subassembly performance are given. These models and modeling methodologies capture simulation of several challenging and novel physical phenomena in the Thermal Desktop and SINDA/FLUINT software suite. Novel methodologies include CO2 adsorption front tracking and associated thermal response in the sorbent bed, heat transfer associated with sublimation of entrained solid CO2 in the SHX, and water mass transfer in the form of ice as low as 210 K in the CIHX.

  15. A new discrete dynamic model of ABA-induced stomatal closure predicts key feedback loops

    PubMed Central

    Acharya, Biswa R.; Jeon, Byeong Wook; Zañudo, Jorge G. T.; Zhu, Mengmeng; Osman, Karim; Assmann, Sarah M.

    2017-01-01

    Stomata, microscopic pores in leaf surfaces through which water loss and carbon dioxide uptake occur, are closed in response to drought by the phytohormone abscisic acid (ABA). This process is vital for drought tolerance and has been the topic of extensive experimental investigation in the last decades. Although a core signaling chain has been elucidated consisting of ABA binding to receptors, which alleviates negative regulation by protein phosphatases 2C (PP2Cs) of the protein kinase OPEN STOMATA 1 (OST1) and ultimately results in activation of anion channels, osmotic water loss, and stomatal closure, over 70 additional components have been identified, yet their relationships with each other and the core components are poorly elucidated. We integrated and processed hundreds of disparate observations regarding ABA signal transduction responses underlying stomatal closure into a network of 84 nodes and 156 edges and, as a result, established those relationships, including identification of a 36-node, strongly connected (feedback-rich) component as well as its in- and out-components. The network’s domination by a feedback-rich component may reflect a general feature of rapid signaling events. We developed a discrete dynamic model of this network and elucidated the effects of ABA plus knockout or constitutive activity of 79 nodes on both the outcome of the system (closure) and the status of all internal nodes. The model, with more than 10^24 system states, is far from fully determined by the available data, yet model results agree with existing experiments in 82 cases and disagree in only 17 cases, a validation rate of 75%. Our results reveal nodes that could be engineered to impact stomatal closure in a controlled fashion and also provide over 140 novel predictions for which experimental data are currently lacking.
Noting the paucity of wet-bench data regarding combinatorial effects of ABA and internal node activation, we experimentally confirmed several predictions of the model with regard to reactive oxygen species, cytosolic Ca2+ (Ca2+c), and heterotrimeric G-protein signaling. We analyzed dynamics-determining positive and negative feedback loops, thereby elucidating the attractor (dynamic behavior) repertoire of the system and the groups of nodes that determine each attractor. Based on this analysis, we predict the likely presence of a previously unrecognized feedback mechanism dependent on Ca2+c. This mechanism would provide model agreement with 10 additional experimental observations, for a validation rate of 85%. Our research underscores the importance of feedback regulation in generating robust and adaptable biological responses. The high validation rate of our model illustrates the advantages of discrete dynamic modeling for complex, nonlinear systems common in biology. PMID:28937978
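    The modeling style, a discrete (Boolean) dynamic model iterated to its attractors, can be illustrated with a toy network. The nodes below echo the named core components (ABA, PP2C, OST1, anion channels), but the rules and the Ca2+ feedback are invented for illustration, not taken from the paper's 84-node network:

    ```python
    # A toy Boolean network in the spirit of the ABA signaling model:
    # each node's next state is a logical function of current states.
    RULES = {
        "ABA":     lambda s: s["ABA"],                 # input, held fixed
        "PP2C":    lambda s: not s["ABA"],             # ABA inhibits PP2C
        "OST1":    lambda s: not s["PP2C"],            # PP2C inhibits OST1
        "anion":   lambda s: s["OST1"] and s["Ca2+"],  # channel activation
        "Ca2+":    lambda s: s["OST1"] or s["anion"],  # assumed feedback loop
        "closure": lambda s: s["anion"],
    }

    def step(state):
        """Synchronous update: every node evaluated on the current state."""
        return {n: bool(f(state)) for n, f in RULES.items()}

    def attractor(state, max_steps=64):
        """Iterate until a previously seen state recurs; return the cycle
        (length 1 means a fixed point)."""
        seen = []
        for _ in range(max_steps):
            key = tuple(sorted(state.items()))
            if key in seen:
                return seen[seen.index(key):]
            seen.append(key)
            state = step(state)
        raise RuntimeError("no attractor found")

    start = {n: False for n in RULES}
    start["ABA"] = True  # ABA signal present
    cycle = attractor(start)
    final = dict(cycle[0])
    print(final["closure"])  # -> True
    ```

    Knockouts are modeled the same way, by pinning a node's rule to a constant; the paper enumerates the effect of doing this for 79 nodes.
    
    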

  16. GALFIT-CORSAIR: Implementing the Core-Sérsic Model Into GALFIT

    NASA Astrophysics Data System (ADS)

    Bonfini, Paolo

    2014-10-01

    We introduce GALFIT-CORSAIR: a publicly available, fully backward-compatible modification of the 2D fitting software GALFIT (v.3) that adds an implementation of the core-Sérsic model. We demonstrate the software by fitting the images of NGC 5557 and NGC 5813, which have previously been identified as core-Sérsic galaxies from their 1D radial light profiles. These two examples are representative of different dust-obscuration conditions and of bulge/disk decomposition. To perform the analysis, we obtained deep Hubble Legacy Archive (HLA) mosaics in the F555W filter (~V-band). We successfully reproduce the results of the previous 1D analysis, modulo the intrinsic differences between the 1D and the 2D fitting procedures. The code and the analysis procedure described here have been developed for the first coherent 2D analysis of a sample of core-Sérsic galaxies, which will be presented in a forthcoming paper. As the 2D analysis provides better constraints on multi-component fitting and is fully seeing-corrected, it will yield complementary constraints on the missing mass in depleted galaxy cores.
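    For reference, the core-Sérsic law being fit combines an inner power law with an outer Sérsic profile. A hedged Python sketch of the profile follows, using the Graham et al. (2003) parameterization and a common approximation for the Sérsic b(n) constant; the parameter values are illustrative, not fits to either galaxy:

    ```python
    import math

    def b_n(n):
        """Common linear approximation to the Sersic normalization b(n),
        reasonable for roughly 0.5 < n < 10."""
        return 1.9992 * n - 0.3271

    def core_sersic(r, I_b, r_b, r_e, n, alpha, gamma):
        """Core-Sersic surface-brightness profile: an inner power law of
        slope gamma breaking (with sharpness alpha) at radius r_b into an
        outer Sersic profile with index n and effective radius r_e.
        Normalized so that I(r_b) = I_b."""
        b = b_n(n)
        I_prime = I_b * 2.0 ** (-gamma / alpha) * math.exp(
            b * (2.0 ** (1.0 / alpha) * r_b / r_e) ** (1.0 / n))
        return (I_prime
                * (1.0 + (r_b / r) ** alpha) ** (gamma / alpha)
                * math.exp(-b * ((r ** alpha + r_b ** alpha) / r_e ** alpha)
                           ** (1.0 / (alpha * n))))

    # Illustrative parameters (not fits to NGC 5557 or NGC 5813)
    params = dict(I_b=100.0, r_b=0.5, r_e=30.0, n=4.0, alpha=5.0, gamma=0.1)
    for r in (0.1, 0.5, 5.0, 30.0):
        print(f"r = {r:5.1f}  I = {core_sersic(r, **params):12.4f}")
    ```

    GALFIT-CORSAIR fits this radial law as a full 2D model (with ellipticity and PSF convolution); the sketch shows only the 1D radial form.
    
    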

  17. Cscibox: A Software System for Age-Model Construction and Evaluation

    NASA Astrophysics Data System (ADS)

    Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.

    2014-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross-dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords, and this project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to do analysis on the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug-ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, clusters, or even the cloud.
The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more thorough exploration of plausible age models and cross-dating scenarios.
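    One of the simplest components in such a chain, piecewise-linear interpolation between dated horizons, can be sketched directly. The depths and calibrated ages below are invented for illustration; this is not CSciBox code:

    ```python
    import numpy as np

    # Hypothetical dated horizons in a sediment core:
    # depth (cm) vs. calibrated age (yr BP). Values are illustrative only.
    depths = np.array([0.0, 50.0, 120.0, 200.0])
    ages = np.array([0.0, 1200.0, 3500.0, 7800.0])

    def age_at(depth):
        """Piecewise-linear interpolation between dated tie points --
        the simplest of the interpolation components an age-model tool
        chains after calibration and reservoir correction."""
        return np.interp(depth, depths, ages)

    sample_depths = np.array([25.0, 85.0, 160.0])
    print(age_at(sample_depths))  # -> [ 600. 2350. 5650.]
    ```

    In a full workflow the tie-point ages would carry calibrated uncertainty distributions, and tools like BACON replace the straight-line segments with Bayesian accumulation models.
    
    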

  18. Are All Program Elements Created Equal? Relations Between Specific Social and Emotional Learning Components and Teacher-Student Classroom Interaction Quality.

    PubMed

    Abry, Tashia; Rimm-Kaufman, Sara E; Curby, Timothy W

    2017-02-01

    School-based social and emotional learning (SEL) programs are presented to educators with little understanding of the program components that have the greatest leverage for improving targeted outcomes. Conducted in the context of a randomized controlled trial, the present study used variation in treatment teachers' (N = 143) implementation of four core components of the Responsive Classroom approach to examine relations between each component and the quality of teachers' emotional, organizational, and instructional interactions in third, fourth, and fifth grade classrooms (controlling for pre-intervention interaction quality and other covariates). We also examined the extent to which these relations varied as a function of teachers' baseline levels of interaction quality. Indices of teachers' implementation of Morning Meeting, Rule Creation, Interactive Modeling, and Academic Choice were derived from a combination of teacher-reported surveys and classroom observations. Ratings of teacher-student classroom interactions were aggregated across five observations conducted throughout the school year. Structural path models indicated that teachers' use of Morning Meeting and Academic Choice related to higher levels of emotionally supportive interactions; Academic Choice also related to higher levels of instructional interactions. In addition, teachers' baseline interaction quality moderated several associations such that the strongest relations between RC component use and interaction quality emerged for teachers with the lowest baseline interaction quality. Results highlight the value of examining individual program components toward the identification of program active ingredients that can inform intervention optimization and teacher professional development.

  19. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    Most of today’s visualization libraries and applications are based on what is known as the visualization pipeline. In the visualization pipeline model, algorithms are encapsulated as “filtering” components with inputs and outputs. These components can be combined by connecting the outputs of one filter to the inputs of another filter. The visualization pipeline model is popular because it provides a convenient abstraction that allows users to combine algorithms in powerful ways. Unfortunately, the visualization pipeline cannot run effectively on exascale computers. Experts agree that the exascale machine will comprise processors that contain many cores. Furthermore, physical limitations will prevent data movement in and out of the chip (that is, between main memory and the processing cores) from keeping pace with improvements in overall compute performance. To use these processors to their fullest capability, it is essential to carefully consider memory access. This is where the visualization pipeline fails. Each filtering component in the visualization library is expected to take a data set in its entirety, perform some computation across all of the elements, and output the complete results. The process of iterating over all elements must be repeated in each filter, which is one of the worst possible ways to traverse memory when trying to maximize the number of executions per memory access. This project investigates a new type of visualization framework that exhibits the pervasive parallelism necessary to run on exascale machines. Our framework achieves this by defining algorithms in terms of functors, which are localized, stateless operations. Functors can be composited in much the same way as filters in the visualization pipeline, but their design allows them to run concurrently on massive numbers of lightweight threads. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale computer. This project concludes with a functional prototype containing pervasively parallel algorithms that perform demonstratively well on many-core processors. These algorithms are fundamental for performing data analysis and visualization at extreme scale.
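    The functor idea can be sketched in a few lines: composing stateless per-element operations fuses them, so the data is traversed once rather than once per pipeline filter. This is an illustrative toy (the `scale` and `clamp` functors and the fusion scheme are hypothetical, not the project's actual framework):

```python
from concurrent.futures import ThreadPoolExecutor

def compose(*functors):
    """Fuse stateless functors into a single per-element operation."""
    def fused(x):
        for f in functors:
            x = f(x)
        return x
    return fused

# Two toy "filters" expressed as functors (names are illustrative only).
scale = lambda x: x * 2.0
clamp = lambda x: min(x, 10.0)

fused = compose(scale, clamp)
data = [1.0, 3.0, 7.0]

# Map the fused functor over the data with lightweight concurrency; each
# element is processed independently, so no shared state is needed.
with ThreadPoolExecutor() as pool:
    result = list(pool.map(fused, data))

print(result)  # [2.0, 6.0, 10.0]
```

    Because each functor is stateless, the same fused operation could run on any number of threads without synchronization, which is the property the abstract emphasizes.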

  20. An adaptive management process for forest soil conservation.

    Treesearch

    Michael P. Curran; Douglas G. Maynard; Ronald L. Heninger; Thomas A. Terry; Steven W. Howes; Douglas M. Stone; Thomas Niemann; Richard E. Miller; Robert F. Powers

    2005-01-01

    Soil disturbance guidelines should be based on comparable disturbance categories adapted to specific local soil conditions, validated by monitoring and research. Guidelines, standards, and practices should be continually improved based on an adaptive management process, which is presented in this paper. Core components of this process include: reliable monitoring...

  1. The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. II. Analysis of Source Properties

    NASA Astrophysics Data System (ADS)

    Lister, M. L.; Tingay, S. J.; Preston, R. A.

    2001-06-01

    We have performed a multidimensional correlation analysis on the observed properties of a statistically complete core-selected sample of compact radio-loud active galactic nuclei based on data from the VLBI Space Observing Programme (Paper I) and previously published studies. Our sample is drawn from the well-studied Pearson-Readhead (PR) survey and is ideally suited for investigating the general effects of relativistic beaming in compact radio sources. In addition to confirming many previously known correlations, we have discovered several new trends that lend additional support to the beaming model. These trends suggest that the most highly beamed sources in core-selected samples tend to have (1) high optical polarizations; (2) large parsec- to kiloparsec-scale jet misalignments; (3) prominent VLBI core components; (4) one-sided, core, or halo radio morphology on kiloparsec scales; (5) narrow emission line equivalent widths; and (6) a strong tendency for intraday variability at radio wavelengths. We have used higher resolution space and ground-based VLBI maps to confirm the bimodality of the jet misalignment distribution for the PR survey and find that the sources with aligned parsec- and kiloparsec-scale jets generally have arcsecond-scale radio emission on both sides of the core. The aligned sources also have broader emission line widths. We find evidence that the BL Lacertae objects in the PR survey are all highly beamed and have very similar properties to the high optically polarized quasars, with the exception of smaller redshifts. A cluster analysis on our data shows that after partialing out the effects of redshift, the luminosities of our sample objects in various wave bands are generally well correlated with each other but not with other source properties.
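    The beaming trends above follow from the standard relativistic Doppler factor. A quick sketch using the textbook relations (not code or data from Paper I); the example Lorentz factor and angles are assumptions:

```python
import math

def doppler_factor(gamma, theta_deg):
    """Relativistic Doppler factor delta = 1 / (Gamma * (1 - beta*cos(theta)))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    theta = math.radians(theta_deg)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

def boost(gamma, theta_deg, alpha=0.0):
    """Flux boosting of a discrete jet component, S_obs = S_emit * delta**(3 + alpha),
    where alpha is the spectral index (S ~ nu**-alpha)."""
    return doppler_factor(gamma, theta_deg) ** (3.0 + alpha)

# A jet viewed nearly end-on (theta -> 0) approaches delta ~ 2*Gamma...
print(round(doppler_factor(10.0, 0.0), 2))   # ~19.95
# ...while the same jet at 30 degrees is strongly de-boosted (delta < 1):
print(round(doppler_factor(10.0, 30.0), 2))
```

    The steep dependence of the boost on viewing angle is why core-selected samples preferentially contain highly beamed sources.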

  2. Bose-Einstein condensate & degenerate Fermi cored dark matter halos

    NASA Astrophysics Data System (ADS)

    Chung, W.-J.; Nelson, L. A.

    2018-06-01

    There has been considerable interest in the last several years in support of the idea that galaxies and clusters could have highly condensed cores of dark matter (DM) within their central regions. In particular, it has been suggested that dark matter could form Bose-Einstein condensates (BECs) or degenerate Fermi cores. We examine these possibilities under the assumption that the core consists of highly condensed DM (either bosons or fermions) that is embedded in a diffuse envelope (e.g., isothermal sphere). The novelty of our approach is that we invoke composite polytropes to model spherical collisionless structures in a way that is physically intuitive and can be generalized to include other equations of state (EOSs). Our model is very amenable to the analysis of BEC cores (composed of ultra-light bosons) that have been proposed to resolve small-scale CDM anomalies. We show that the analysis can readily be applied to bosons with or without small repulsive self-interactions. With respect to degenerate Fermi cores, we confirm that fermionic particle masses between 1 and 1000 keV are not excluded by the observations. Finally, we note that this approach can be extended to include a wide range of EOSs in addition to multi-component collisionless systems.
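    Polytrope construction of the kind invoked above rests on solutions of the Lane-Emden equation, theta'' + (2/xi) theta' = -theta^n. A minimal integration sketch (simple midpoint stepping, not the authors' code) that recovers the known n = 1 surface radius xi_1 = pi:

```python
import math

def lane_emden_first_zero(n, h=1e-4):
    """Integrate the Lane-Emden equation theta'' + (2/xi)*theta' = -theta**n
    outward from the center until theta crosses zero; returns xi_1, the
    dimensionless surface radius of an index-n polytrope."""
    def rhs(x, th, dth):
        # max() guards fractional powers of slightly negative theta near the zero
        return dth, -max(th, 0.0) ** n - 2.0 * dth / x

    # Series expansion near the center: theta ~ 1 - xi^2/6, theta' ~ -xi/3.
    xi, theta, dtheta = h, 1.0 - h * h / 6.0, -h / 3.0
    while theta > 0.0:
        k1t, k1d = rhs(xi, theta, dtheta)
        k2t, k2d = rhs(xi + h / 2, theta + h / 2 * k1t, dtheta + h / 2 * k1d)
        theta += h * k2t
        dtheta += h * k2d
        xi += h
    return xi

# For n = 1 the analytic solution is theta = sin(xi)/xi, so xi_1 = pi.
print(round(lane_emden_first_zero(1.0), 3))  # ~3.142
```

    A composite polytrope joins two such solutions (different n for core and envelope) at a matching radius where pressure and mass are continuous.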

  3. Dynamo Tests for Stratification Below the Core-Mantle Boundary

    NASA Astrophysics Data System (ADS)

    Olson, P.; Landeau, M.

    2017-12-01

    Evidence from seismology, mineral physics, and core dynamics points to a layer with an overall stable stratification in the Earth's outer core, possibly thermal in origin, extending below the core-mantle boundary (CMB) for several hundred kilometers. In contrast, energetic deep mantle convection with elevated heat flux implies locally unstable thermal stratification below the CMB in places, consistent with interpretations of non-dipole geomagnetic field behavior that favor upwelling flows below the CMB. Here, we model the structure of convection and magnetic fields in the core using numerical dynamos with laterally heterogeneous boundary heat flux in order to rationalize this conflicting evidence. Strongly heterogeneous boundary heat flux generates localized convection beneath the CMB that coexists with an overall stable stratification there. Partially stratified dynamos have distinctive time average magnetic field structures. Without stratification or with stratification confined to a thin layer, the octupole component is small and the CMB magnetic field structure includes polar intensity minima. With more extensive stratification, the octupole component is large and the magnetic field structure includes intense patches or high intensity lobes in the polar regions. Comparisons with the time-averaged geomagnetic field are generally favorable for partial stratification in a thin layer but unfavorable for stratification in a thick layer beneath the CMB.

  4. Judging Alignment of Curriculum-Based Measures in Mathematics and Common Core Standards

    ERIC Educational Resources Information Center

    Morton, Christopher

    2013-01-01

    Measurement literature supports the utility of alignment models for application with state standards and large-scale assessments. However, the literature is lacking in the application of these models to curriculum-based measures (CBMs) and common core standards. In this study, I investigate the alignment of CBMs and standards, with specific…

  5. A Turbulent Origin for the Complex Envelope Kinematics in the Young Low-mass Core Per-bolo 58

    NASA Astrophysics Data System (ADS)

    Maureira, María José; Arce, Héctor G.; Offner, Stella S. R.; Dunham, Michael M.; Pineda, Jaime E.; Fernández-López, Manuel; Chen, Xuepeng; Mardones, Diego

    2017-11-01

    We use CARMA 3 mm continuum and molecular lines (NH2D, N2H+, HCO+, HCN, and CS) at ~1000 au resolution to characterize the structure and kinematics of the envelope surrounding the deeply embedded first core candidate Per-bolo 58. The line profile of the observed species shows two distinct peaks separated by 0.4-0.6 km s-1, which most likely arise from two different optically thin velocity components rather than the product of self-absorption in an optically thick line. The two velocity components, each with a mass of ~0.5-0.6 M⊙, overlap spatially at the position of the continuum emission and produce a general gradient along the outflow direction. We investigate whether these observations are consistent with infall in a turbulent and magnetized envelope. We compare the morphology and spectra of the N2H+ (1-0) with synthetic observations of an MHD simulation that considers the collapse of an isolated core that is initially perturbed with a turbulent field. The proposed model matches the data in the production of two velocity components, traced by the isolated hyperfine line of the N2H+ (1-0) spectra, and shows a general agreement in morphology and velocity field. We also use large maps of the region to compare the kinematics of the core with that of the surrounding large-scale filamentary structure and find that accretion from the large-scale filament could also explain the complex kinematics exhibited by this young dense core.

  6. Linking hygroscopicity and the surface microstructure of model inorganic salts, simple and complex carbohydrates, and authentic sea spray aerosol particles.

    PubMed

    Estillore, Armando D; Morris, Holly S; Or, Victor W; Lee, Hansol D; Alves, Michael R; Marciano, Meagan A; Laskina, Olga; Qin, Zhen; Tivanski, Alexei V; Grassian, Vicki H

    2017-08-09

    Individual airborne sea spray aerosol (SSA) particles show diversity in their morphologies and water uptake properties that are highly dependent on the biological, chemical, and physical processes within the sea subsurface and the sea surface microlayer. In this study, hygroscopicity data for model systems of organic compounds of marine origin mixed with NaCl are compared to data for authentic SSA samples collected in an ocean-atmosphere facility providing insights into the SSA particle growth, phase transitions and interactions with water vapor in the atmosphere. In particular, we combine single particle morphology analyses using atomic force microscopy (AFM) with hygroscopic growth measurements in order to provide important insights into particle hygroscopicity and the surface microstructure. For model systems, a range of simple and complex carbohydrates were studied including glucose, maltose, sucrose, laminarin, sodium alginate, and lipopolysaccharides. The measured hygroscopic growth was compared with predictions from the Extended-Aerosol Inorganics Model (E-AIM). It is shown here that the E-AIM model describes well the deliquescence transition and hygroscopic growth at low mass ratios but not as well for high ratios, most likely due to a high organic volume fraction. AFM imaging reveals that the equilibrium morphology of these single-component organic particles is amorphous. When NaCl is mixed with the organics, the particles adopt a core-shell morphology with a cubic NaCl core and the organics forming a shell similar to what is observed for the authentic SSA samples. The observation of such core-shell morphologies is found to be highly dependent on the salt to organic ratio and varies depending on the nature and solubility of the organic component. 
    Additionally, single-particle organic volume fraction AFM analysis of NaCl : glucose and NaCl : laminarin mixtures shows that the salt-to-organic ratio of individual particles does not correspond exactly to that of the parent solution, revealing diversity within the ensemble of particles produced even for a simple two-component system.
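    E-AIM is a full thermodynamic model; as a much simpler point of comparison, single-parameter (kappa-Köhler) theory captures the qualitative effect described above, that mixing low-hygroscopicity organics with salt lowers water uptake. A sketch with approximate literature kappa values (this is not E-AIM, and the Kelvin effect is neglected):

```python
def growth_factor(kappa, rh):
    """Hygroscopic diameter growth factor from single-parameter (kappa-Kohler)
    theory: GF = (1 + kappa * aw / (1 - aw))**(1/3), with water activity aw
    approximated by the relative humidity (Kelvin effect neglected)."""
    aw = rh
    return (1.0 + kappa * aw / (1.0 - aw)) ** (1.0 / 3.0)

# kappa ~ 1.28 for NaCl and ~0.1-0.2 for many organics (approximate
# literature values); organics dilute the salt's growth at fixed RH.
print(round(growth_factor(1.28, 0.90), 2))  # ~2.32
print(round(growth_factor(0.15, 0.90), 2))
```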

  7. Developing a patient-led electronic feedback system for quality and safety within Renal PatientView.

    PubMed

    Giles, Sally J; Reynolds, Caroline; Heyhoe, Jane; Armitage, Gerry

    2017-03-01

    It is increasingly acknowledged that patients can provide direct feedback about the quality and safety of their care through patient reporting systems. The aim of this study was to explore the feasibility of patients, healthcare professionals and researchers working in partnership to develop a patient-led quality and safety feedback system within an existing electronic health record (EHR), known as Renal PatientView (RPV). Phase 1 (inception) involved focus groups (n = 9) and phase 2 (requirements) involved cognitive walkthroughs (n = 34) and 1:1 qualitative interviews (n = 34) with patients and healthcare professionals. A Joint Services Expert Panel (JSP) was convened to review the findings from phase 1 and agree the core principles and components of the system prototype. Phase 1 data were analysed using a thematic approach. Data from phase 1 were used to inform the design of the initial system prototype. Phase 2 data were analysed using the components of heuristic evaluation, resulting in a list of core principles and components for the final system prototype. Phase 1 identified four main barriers and facilitators to patients feeding back on quality and safety concerns. In phase 2, the JSP agreed that the system should be based on seven core principles and components. Stakeholders were able to work together to identify core principles and components for an electronic patient quality and safety feedback system in renal services. Tensions arose due to competing priorities, particularly around anonymity and feedback. Careful consideration should be given to the feasibility of integrating a novel element with differing priorities into an established system with existing functions and objectives. © 2016 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  8. Second- and third-harmonic generation in metal-based structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scalora, M.; Akozbek, N.; Bloemer, M. J.

    We present a theoretical approach to the study of second- and third-harmonic generation from metallic structures and nanocavities filled with a nonlinear material in the ultrashort pulse regime. We model the metal as a two-component medium, using the hydrodynamic model to describe free electrons and Lorentz oscillators to account for core electron contributions to both the linear dielectric constant and harmonic generation. The active nonlinear medium that may fill a metallic nanocavity, or be positioned between metallic layers in a stack, is also modeled using Lorentz oscillators and surface phenomena due to symmetry breaking are taken into account. We study the effects of incident TE- and TM-polarized fields and show that a simple reexamination of the basic equations reveals additional, exploitable dynamical features of nonlinear frequency conversion in plasmonic nanostructures.

  9. 57Fe Mössbauer spectroscopy and electron paramagnetic resonance studies of human liver ferritin, Ferrum Lek and Maltofer®

    NASA Astrophysics Data System (ADS)

    Alenkina, I. V.; Oshtrakh, M. I.; Klencsár, Z.; Kuzmann, E.; Chukin, A. V.; Semionkin, V. A.

    2014-09-01

    Human liver ferritin and commercial Ferrum Lek and Maltofer® samples were studied using Mössbauer spectroscopy and electron paramagnetic resonance. Two Mössbauer spectrometers have been used: (i) one with high velocity resolution (4096 channels), at 90 and 295 K, and (ii) one with low velocity resolution (250 channels), at 20 and 40 K. It is shown that the three studied materials have different superparamagnetic features at various temperatures. This may be caused by different magnetic anisotropy energy barriers, sizes (volume), structures and compositions of the iron cores. The electron paramagnetic resonance spectra of the ferritin, Ferrum Lek and Maltofer® were decomposed into multiple spectral components demonstrating the presence of minor ferro- or ferrimagnetic phases along with revealing marked differences among the studied substances. Mössbauer spectroscopy provides evidence of several components in the measured spectra, which could be related to different regions, layers, nanocrystallites, etc., in the iron cores, consistent with heterogeneous and multiphase models for the ferritin iron cores.
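    The link drawn above between anisotropy energy barriers, core volume, and superparamagnetic behavior can be illustrated with the Néel-Arrhenius relation tau = tau0 * exp(KV / kB T). All numerical values below are hypothetical, chosen only to show the exponential sensitivity to core size:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def neel_relaxation_time(K, d_nm, T, tau0=1e-10):
    """Neel-Arrhenius relaxation time tau = tau0 * exp(K*V / (k_B*T)) for a
    spherical nanoparticle core of diameter d_nm (nm) with magnetic
    anisotropy energy density K (J/m^3) at temperature T (K)."""
    r = d_nm * 1e-9 / 2.0
    V = 4.0 / 3.0 * math.pi * r**3  # core volume, m^3
    return tau0 * math.exp(K * V / (K_B * T))

# Illustrative (hypothetical) numbers: a modest change in core diameter has
# an exponential effect on the relaxation time at fixed temperature, which
# is why cores of different size show different superparamagnetic features.
for d in (5.0, 7.0):
    print(d, neel_relaxation_time(K=2.5e4, d_nm=d, T=90.0))
```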

  10. Few-body semiclassical approach to nucleon transfer and emission reactions

    NASA Astrophysics Data System (ADS)

    Sultanov, Renat A.; Guster, D.

    2014-04-01

    A three-body semiclassical model is proposed to describe the nucleon transfer and emission reactions in a heavy-ion collision. In this model the two heavy particles, i.e. nuclear cores A1(ZA1, MA1) and A2(ZA2, MA2), move along classical trajectories R1(t) and R2(t) respectively, while the dynamics of the lighter neutron (n) is considered from a quantum mechanical point of view. Here, Mi are the nucleon masses and Zi are the Coulomb charges of the heavy nuclei (i = 1, 2). A Faddeev-type semiclassical formulation using realistic paired nuclear-nuclear potentials is applied so that all three channels (elastic, rearrangement and break-up) are described in a unified manner. In order to solve the time-dependent equations the Faddeev components of the total three-body wave-function are expanded in terms of the input and output channel target eigenfunctions. In the special case, when the nuclear cores are identical (A1 ≡ A2) and also the two-level approximation in the expansion over the target (subsystem) functions is used, the time-dependent semiclassical Faddeev equations are resolved in an explicit way. To determine the realistic R1(t) and R2(t) trajectories of the nuclear cores, a self-consistent approach based on the Feynman path integral theory is applied.

  11. Late Quaternary carbonate accumulation along eastern South Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Crabill, K.; Slowey, N. C.; Foreman, A. D.; Charles, C.

    2016-12-01

    Water masses originating from both the North Atlantic Ocean and the Southern Ocean intersect the Walvis Ridge and Namibian margin of southwest Africa. Changes in the distribution and properties of these water masses through time are reflected by variations in the nature of the sediments accumulating along this margin. A suite of piston and gravity cores that possess sediment records corresponding to the most recent glacial-interglacial cycles were collected from the water depth range of 550 to 3700 meters. Sediment dry bulk density, XRF analyses and the concentration of CaCO3 were precisely determined at regular depth intervals in these cores. Foraminiferal δ18O along with XRF Fe/Ca data provide an age-depth model for key cores. The age-depth model and dry bulk density will be used with the calcium carbonate contents to calculate the accumulation rates of CaCO3 during each of MIS 1-5e. The spatial and temporal variability in both the CaCO3 content and the CaCO3 mass accumulation rates along the Namibian continental slope will be described. Based on comparisons of these two parameters, inferences will be made about how variations of CaCO3 production, dilution by non-CaCO3 sediment components, and dissolution of CaCO3 due to changes in ocean circulation/climate have occurred during intervals of the last glacial-interglacial cycle.

  12. Finite Element Development and Specifications of a Patched, Recessed Nomex Core Honeycomb Panel for Increased Sound Transmission Loss

    NASA Technical Reports Server (NTRS)

    Grosveld, Ferdinand W.

    2007-01-01

    This informal report summarizes the development and the design specifications of a recessed Nomex core honeycomb panel in fulfillment of the deliverable in Task Order 13RBE, Revision 10, Subtask 17. The honeycomb panel, with 0.020-inch thick aluminum face sheets, has 0.016-inch thick aluminum patches applied to twenty-five 6 by 6 inch, quarter-inch-thick recessed cores. A 10 dB higher transmission loss over the frequency range 250 - 1000 Hz was predicted by an MSC/NASTRAN finite element model when compared with the transmission loss of the base Nomex core honeycomb panel. The static displacement, due to a unit force applied at either the core or recessed core area, was of the same order of magnitude as the static displacement of the base honeycomb panel when exposed to the same unit force. The mass of the new honeycomb design is 5.1% more than that of the base honeycomb panel. A physical model was constructed and is being tested.

  13. Face Sheet/Core Disbond Growth in Honeycomb Sandwich Panels Subjected to Ground-Air-Ground Pressurization and In-Plane Loading

    NASA Technical Reports Server (NTRS)

    Chen, Zhi M.; Krueger, Ronald; Rinker, Martin

    2015-01-01

    Typical damage modes in light honeycomb sandwich structures include face sheet/core disbonding and core fracture, both of which can pose a threat to the structural integrity of a component. These damage modes are of particular interest to aviation certification authorities since several in-service occurrences, such as rudder structural failure and other control surface malfunctions, have been attributed to face sheet/core disbonding. Extensive studies have shown that face sheet/core disbonding and core fracture can lead to damage propagation caused by internal pressure changes in the core. The increasing use of composite sandwich construction in aircraft applications makes it vitally important to understand the effect of ground-air-ground (GAG) cycles and conditions such as maneuver and gust loads on face sheet/core disbonding. The objective of the present study was to use a fracture mechanics based approach developed earlier to evaluate the loading at the disbond front caused by ground-air-ground pressurization and in-plane loading. A honeycomb sandwich panel containing a circular disbond at one face sheet/core interface was modeled with three-dimensional (3D) solid finite elements. The disbond was modeled as a discrete discontinuity and the strain energy release rate along the disbond front was computed using the Virtual Crack Closure Technique (VCCT). Special attention was paid to the pressure-deformation coupling which can decrease the pressure load within the disbonded sandwich section significantly when the structure is highly deformed. The commercial finite element analysis software, Abaqus/Standard, was used for the analyses. The recursive pressure-deformation coupling problem was solved by representing the entrapped air in the honeycomb cells as filled cavities in Abaqus/Standard. The results show that disbond size, face sheet thickness and core thickness are important parameters that determine crack tip loading at the disbond front. 
    Further, the pressure-deformation coupling was found to have an important load-decreasing effect [6]. In this paper, a detailed problem description is provided first. Second, the analysis methodology is presented. The fracture mechanics approach used is described and the specifics of the finite element model, including the fluid-filled cavities, are introduced. Third, the initial model verification and validation are discussed. Fourth, the findings from a closely related earlier study [6] are summarized. These findings provided the basis for the current investigation. Fifth, an aircraft ascent scenario from 0 to 12192 m (0 to 40000 ft) is considered and the resulting crack tip loading at the disbond front is determined. In-plane loading to simulate maneuvers and gust conditions is also considered. Sixth, the results are shown for a curved panel, which was used to simulate potential fuselage applications. Finally, a brief summary of observations is presented and recommendations for improvement are provided.
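    The ground-air-ground loading can be sketched with back-of-envelope arithmetic: sea-level air entrapped in the cells, a standard-atmosphere ambient pressure at cruise, and isothermal (Boyle's-law) coupling when the disbonded face sheet bulges. This is an illustration of the mechanism, not the paper's Abaqus/VCCT analysis; the 20% volume-growth figure is an assumption:

```python
# Back-of-envelope sketch of the GAG pressure differential on a disbond.
P_SEA_LEVEL = 101.325e3   # Pa, entrapped in the honeycomb cells on the ground
P_CRUISE = 18.8e3         # Pa, approx. standard atmosphere at 12192 m (40000 ft)

def net_pressure(volume_growth):
    """Net outward pressure on the disbonded face sheet at cruise, assuming
    isothermal (Boyle's-law) expansion of the entrapped air: as the face
    sheet bulges, cavity volume grows and internal pressure drops."""
    p_internal = P_SEA_LEVEL / volume_growth
    return p_internal - P_CRUISE

# Rigid structure (no bulging) vs. 20% cavity-volume growth from deformation:
print(round(net_pressure(1.00) / 1e3, 1))  # ~82.5 kPa
print(round(net_pressure(1.20) / 1e3, 1))  # coupling reduces the load
```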

  14. Generalized model for k -core percolation and interdependent networks

    NASA Astrophysics Data System (ADS)

    Panduranga, Nagendra K.; Gao, Jianxi; Yuan, Xin; Stanley, H. Eugene; Havlin, Shlomo

    2017-09-01

    Cascading failures in complex systems have been studied extensively using two different models: k -core percolation and interdependent networks. We combine the two models into a general model, solve it analytically, and validate our theoretical results through extensive simulations. We also study the complete phase diagram of the percolation transition as we tune the average local k -core threshold and the coupling between networks. We find that the phase diagram of the combined processes is very rich and includes novel features that do not appear in the models studying each of the processes separately. For example, the phase diagram consists of first- and second-order transition regions separated by two tricritical lines that merge and enclose a two-stage transition region. In the two-stage transition, the size of the giant component undergoes a first-order jump at a certain occupation probability followed by a continuous second-order transition at a lower occupation probability. Furthermore, at certain fixed interdependencies, the percolation transition changes from first-order → second-order → two-stage → first-order as the k -core threshold is increased. The analytic equations describing the phase boundaries of the two-stage transition region are set up, and the critical exponents for each type of transition are derived analytically.
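    Plain k-core percolation (without the interdependent-network coupling studied in the paper) reduces to iterative pruning of low-degree nodes; a minimal sketch of that threshold process on a toy graph:

```python
from collections import deque

def k_core(adjacency, k):
    """Return the k-core of an undirected graph: repeatedly delete nodes with
    fewer than k remaining neighbours until no such nodes are left."""
    adj = {u: set(vs) for u, vs in adjacency.items()}
    queue = deque(u for u, vs in adj.items() if len(vs) < k)
    while queue:
        u = queue.popleft()
        if u not in adj:
            continue  # already removed
        for v in adj.pop(u):
            adj[v].discard(u)
            if len(adj[v]) < k:
                queue.append(v)
    return set(adj)

# A tetrahedron (nodes 0-3) with a pendant chain 3-4-5 attached: the chain
# cannot survive 3-core pruning, while the tetrahedron can.
graph = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3},
         3: {0, 1, 2, 4}, 4: {3, 5}, 5: {4}}
print(sorted(k_core(graph, 3)))  # [0, 1, 2, 3]
```

    Cascades in the combined model arise because each pruning step here can be triggered not only by lost neighbours but also by failures in a coupled network.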

  15. Exploring international clinical education in US-based programs: identifying common practices and modifying an existing conceptual model of international service-learning.

    PubMed

    Pechak, Celia M; Black, Jill D

    2014-02-01

    Increasingly, physical therapist students complete part of their clinical training outside of their home country. This trend is understudied. The purposes of this study were to: (1) explore, in depth, various international clinical education (ICE) programs; and (2) determine whether the Conceptual Model of Optimal International Service-Learning (ISL) could be applied or adapted to represent ICE. Qualitative content analysis was used to analyze ICE programs and consider modification of an existing ISL conceptual model for ICE. Fifteen faculty in the United States currently involved in ICE were interviewed. The interview transcriptions were systematically analyzed by two researchers. Three models of ICE practices emerged: (1) a traditional clinical education model where local clinical instructors (CIs) focus on the development of clinical skills; (2) a global health model where US-based CIs provide the supervision in the international setting, and learning outcomes emphasized global health and cultural competency; and (3) an ICE/ISL hybrid where US-based CIs supervise the students, and the foci include community service. Additionally, the data supported revising the ISL model's essential core conditions, components and consequence for ICE. The ICE conceptual model may provide a useful framework for future ICE program development and research.

  16. A Student Selected Component (or Special Study Module) in Forensic and Legal Medicine: Design, delivery, assessment and evaluation of an optional module as an addition to the medical undergraduate core curriculum.

    PubMed

    Kennedy, Kieran M; Wilkinson, Andrew

    2018-01-01

    The General Medical Council (United Kingdom) advocates development of non-core curriculum Student Selected Components and their inclusion in all undergraduate medical school curricula. This article describes a rationale for the design, delivery, assessment and evaluation of Student Selected Components in Forensic and Legal Medicine. Reference is made to the available evidence based literature pertinent to the delivery of undergraduate medical education in the subject area. A Student Selected Component represents an opportunity to highlight the importance of the legal aspects of medical practice, to raise the profile of the discipline of Forensic and Legal Medicine amongst undergraduate medical students and to introduce students to the possibility of a future career in the area. The authors refer to their experiences of design, delivery, assessment and evaluation of Student Selected Components in Forensic and Legal Medicine at their respective Universities in the Republic of Ireland (Galway) and in the United Kingdom (Oxford). Copyright © 2017. Published by Elsevier Ltd.

  17. Graphics Processing Unit (GPU) Acceleration of the Goddard Earth Observing System Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Putnam, Williama

    2011-01-01

    The Goddard Earth Observing System 5 (GEOS-5) is the atmospheric model used by the Global Modeling and Assimilation Office (GMAO) for a variety of applications, from long-term climate prediction at relatively coarse resolution, to data assimilation and numerical weather prediction, to very high-resolution cloud-resolving simulations. GEOS-5 is being ported to a graphics processing unit (GPU) cluster at the NASA Center for Climate Simulation (NCCS). By utilizing GPU co-processor technology, we expect to increase the throughput of GEOS-5 by at least an order of magnitude and accelerate the process of scientific exploration across all scales of global modeling, including: the large-scale, high-end application of non-hydrostatic, global, cloud-resolving modeling at 10- to 1-kilometer (km) global resolutions; intermediate-resolution seasonal climate and weather prediction at 50- to 25-km on small clusters of GPUs; and long-range, coarse-resolution climate modeling, enabled on a small box of GPUs for the individual researcher. After being ported to the GPU cluster, the primary physics components and the dynamical core of GEOS-5 have demonstrated a potential speedup of 15-40 times over conventional processor cores. Performance improvements of this magnitude reduce the required scalability of 1-km, global, cloud-resolving models from an unfathomable 6 million cores to an attainable 200,000 GPU-enabled cores.
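    The closing scalability claim is simple arithmetic under an idealized perfect-scaling assumption (no communication or memory overheads, which real models do incur):

```python
def required_cores(baseline_cores, per_core_speedup):
    """Cores needed to reach the same aggregate throughput when each core
    becomes faster by `per_core_speedup` (idealized: perfect scaling)."""
    return baseline_cores / per_core_speedup

# The abstract's 6-million-core requirement for 1-km global cloud-resolving
# runs, at a 30x per-core speedup (within the reported 15-40x range):
print(int(required_cores(6_000_000, 30)))  # 200000
```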

  18. 3D printed microfluidic circuitry via multijet-based additive manufacturing†

    PubMed Central

    Sochol, R. D.; Sweet, E.; Glick, C. C.; Venkatesh, S.; Avetisyan, A.; Ekman, K. F.; Raulinaitis, A.; Tsai, A.; Wienkers, A.; Korner, K.; Hanson, K.; Long, A.; Hightower, B. J.; Slatton, G.; Burnett, D. C.; Massey, T. L.; Iwai, K.; Lee, L. P.; Pister, K. S. J.; Lin, L.

    2016-01-01

    The miniaturization of integrated fluidic processors affords extensive benefits for chemical and biological fields, yet traditional, monolithic methods of microfabrication present numerous obstacles for the scaling of fluidic operators. Recently, researchers have investigated the use of additive manufacturing or “three-dimensional (3D) printing” technologies – predominantly stereolithography – as a promising alternative for the construction of submillimeter-scale fluidic components. One challenge, however, is that current stereolithography methods lack the ability to simultaneously print sacrificial support materials, which limits the geometric versatility of such approaches. In this work, we investigate the use of multijet modelling (alternatively, polyjet printing) – a layer-by-layer, multi-material inkjetting process – for 3D printing geometrically complex, yet functionally advantageous fluidic components comprised of both static and dynamic physical elements. We examine a fundamental class of 3D printed microfluidic operators, including fluidic capacitors, fluidic diodes, and fluidic transistors. In addition, we evaluate the potential to advance on-chip automation of integrated fluidic systems via geometric modification of component parameters. Theoretical and experimental results for 3D fluidic capacitors demonstrated that transitioning from planar to non-planar diaphragm architectures improved component performance. Flow rectification experiments for 3D printed fluidic diodes revealed a diodicity of 80.6 ± 1.8. Geometry-based gain enhancement for 3D printed fluidic transistors yielded pressure gain of 3.01 ± 0.78. Consistent with additional additive manufacturing methodologies, the use of digitally-transferrable 3D models of fluidic components combined with commercially-available 3D printers could extend the fluidic routing capabilities presented here to researchers in fields beyond the core engineering community. PMID:26725379
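    Diodicity, the figure of merit reported above, is commonly defined in the fluidic-diode literature as the ratio of reverse to forward pressure drop at equal volumetric flow rate. A sketch with hypothetical measurements (not the paper's data):

```python
def diodicity(dp_reverse, dp_forward):
    """Diodicity Di = (reverse pressure drop) / (forward pressure drop) at
    equal volumetric flow rate -- a common figure of merit for fluidic
    diodes. Di = 1 means no rectification; larger Di, stronger rectification."""
    return dp_reverse / dp_forward

# Hypothetical pressure-drop measurements (kPa) at one flow rate:
print(diodicity(dp_reverse=161.2, dp_forward=2.0))  # 80.6
```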

  19. Molecular structure of the pyruvate dehydrogenase complex from Escherichia coli K-12.

    PubMed

    Vogel, O; Hoehn, B; Henning, U

    1972-06-01

    The pyruvate dehydrogenase core complex from E. coli K-12, defined as the multienzyme complex that can be obtained with a unique polypeptide chain composition, has a molecular weight of 3.75 x 10(6). All results obtained agree with the following numerology. The core complex consists of 48 polypeptide chains. There are 16 chains (molecular weight = 100,000) of the pyruvate dehydrogenase component, 16 chains (molecular weight = 80,000) of the dihydrolipoamide transacetylase component, and 16 chains (molecular weight = 56,000) of the dihydrolipoamide dehydrogenase component. Usually, but not always, pyruvate dehydrogenase complex is produced in vivo containing at least 2-3 mol more of dimers of the pyruvate dehydrogenase component than the stoichiometric ratio with respect to the core complex. This "excess" component is bound differently than are the eight dimers in the core complex.
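    The reported numerology is easy to verify arithmetically. A quick check, using the conventional E1/E2/E3 component labels from standard biochemistry (the labels are not quoted from the abstract):

```python
# Subunit counts and chain molecular weights as reported in the abstract
components = {
    "E1 (pyruvate dehydrogenase)":          (16, 100_000),
    "E2 (dihydrolipoamide transacetylase)": (16, 80_000),
    "E3 (dihydrolipoamide dehydrogenase)":  (16, 56_000),
}

total_chains = sum(n for n, _ in components.values())
total_mass = sum(n * mw for n, mw in components.values())
print(total_chains)  # 48 chains
print(total_mass)    # 3,776,000 -- consistent with the reported 3.75 x 10^6
```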

  20. Effect of a core-softened O-O interatomic interaction on the shock compression of fused silica

    NASA Astrophysics Data System (ADS)

    Izvekov, Sergei; Weingarten, N. Scott; Byrd, Edward F. C.

    2018-03-01

    Isotropic soft-core potentials have attracted considerable attention due to their ability to reproduce thermodynamic, dynamic, and structural anomalies observed in tetrahedral network-forming compounds such as water and silica. The aim of the present work is to assess the relevance of effective core-softening pertinent to the oxygen-oxygen interaction in silica to the thermodynamics and phase change mechanisms that occur in shock compressed fused silica. We utilize the MD simulation method with a recently published numerical interatomic potential derived from an ab initio MD simulation of liquid silica via force-matching. The resulting potential indicates an effective shoulder-like core-softening of the oxygen-oxygen repulsion. To better understand the role of the core-softening we analyze two derivative force-matching potentials in which the soft-core is replaced with a repulsive core either in the three-body potential term or in all the potential terms. Our analysis is further augmented by a comparison with several popular empirical models for silica that lack an explicit core-softening. The first outstanding feature of shock compressed glass reproduced with the soft-core models but not with the other models is that the shock compression values at pressures above 20 GPa are larger than those observed under hydrostatic compression (an anomalous shock Hugoniot densification). Our calculations indicate the occurrence of a phase transformation along the shock Hugoniot that we link to the O-O repulsion core-softening. The phase transformation is associated with a Hugoniot temperature reversal similar to that observed experimentally. With the soft-core models, the phase change is an isostructural transformation between amorphous polymorphs with no associated melting event. We further examine the nature of the structural transformation by comparing it to the Hugoniot calculations for stishovite. 
For stishovite, the Hugoniot exhibits temperature reversal and associated phase transformation, which is a transition to a disordered phase (liquid or dense amorphous), regardless of whether or not the model accounts for core-softening. The onset pressures of the transformation predicted by different models show a wide scatter within 60-110 GPa; for potentials without core-softening, the onset pressure is much higher than 110 GPa. Our results show that the core-softening of the interaction in the oxygen subsystem of silica is the key mechanism for the structural transformation and thermodynamics in shock compressed silica. These results may provide an important contribution to a unified picture of anomalous response to shock compression observed in other network-forming oxides and single-component systems with core-softening of effective interactions.
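    The shock states traced along a Hugoniot follow from the Rankine-Hugoniot jump conditions across a steady planar shock. A minimal sketch of the mass- and momentum-conservation relations (the input values below are hypothetical round numbers for fused silica, not results from the paper):

```python
def hugoniot_state(rho0, us, up, p0=0.0):
    """Rankine-Hugoniot jump conditions for a steady planar shock:
    mass:      rho = rho0 * us / (us - up)
    momentum:  p   = p0 + rho0 * us * up
    rho0 in kg/m^3, shock speed us and particle speed up in m/s."""
    rho = rho0 * us / (us - up)
    p = p0 + rho0 * us * up
    return rho, p

# Illustrative inputs: rho0 ~ 2200 kg/m^3 for fused silica,
# hypothetical shock speed 6 km/s and particle speed 2 km/s
rho, p = hugoniot_state(2200.0, 6000.0, 2000.0)
print(rho, p / 1e9)  # compressed density (kg/m^3) and pressure (GPa)
```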

  1. Evaluating a De-Centralized Regional Delivery System for Breast Cancer Screening and Patient Navigation for the Rural Underserved.

    PubMed

    Inrig, Stephen J; Tiro, Jasmin A; Melhado, Trisha V; Argenbright, Keith E; Craddock Lee, Simon J

    2014-01-01

    Providing breast cancer screening services in rural areas is challenging due to the fractured nature of healthcare delivery systems and complex reimbursement mechanisms that create barriers to access for the under- and uninsured. Interventions that reduce structural barriers to mammography, like patient navigation programs, are effective and recommended, especially for minority and underserved women. Although the literature on rural healthcare is significant, the field lacks studies of adaptive service delivery models and rigorous evaluation of evidence-based programs that facilitate routine screening and appropriate follow-up across large geographic areas. To better understand how to implement a decentralized regional delivery "hub & spoke" model for rural breast cancer screening and patient navigation, we have designed a rigorous, structured, multi-level and mixed-methods evaluation based on Glasgow's RE-AIM model (Reach, Effectiveness, Adoption, Implementation, and Maintenance). The program comprises three core components: 1) Outreach to underserved women by partnering with county organizations; 2) Navigation to guide patients through screening and appropriate follow-up; and 3) Centralized Reimbursement to coordinate funding for screening services through a central contract with Medicaid Breast and Cervical Cancer Services (BCCS). Using Glasgow's RE-AIM model, we will: 1) assess which counties have the resources and capacity to implement outreach and/or navigation components, 2) train partners in each county on how to implement components, and 3) monitor process and outcome measures in each county at regular intervals, providing booster training when needed. This evaluation strategy will elucidate how the heterogeneity of rural county infrastructure impacts decentralized service delivery as a navigation program expands.
In addition to increasing breast cancer screening access, our model improves and maintains time to diagnostic resolution and facilitates timely referral to local cancer treatment services. We offer this evaluation approach as an exemplar for scientific methods to evaluate the translation of evidence-based federal policy into sustainable health services delivery in a rural setting.

  2. Evaluating a De-Centralized Regional Delivery System for Breast Cancer Screening and Patient Navigation for the Rural Underserved

    PubMed Central

    Inrig, Stephen J.; Tiro, Jasmin A.; Melhado, Trisha V.; Argenbright, Keith E.; Craddock Lee, Simon J.

    2017-01-01

    Providing breast cancer screening services in rural areas is challenging due to the fractured nature of healthcare delivery systems and complex reimbursement mechanisms that create barriers to access for the under- and uninsured. Interventions that reduce structural barriers to mammography, like patient navigation programs, are effective and recommended, especially for minority and underserved women. Although the literature on rural healthcare is significant, the field lacks studies of adaptive service delivery models and rigorous evaluation of evidence-based programs that facilitate routine screening and appropriate follow-up across large geographic areas. Objectives: To better understand how to implement a decentralized regional delivery “hub & spoke” model for rural breast cancer screening and patient navigation, we have designed a rigorous, structured, multi-level and mixed-methods evaluation based on Glasgow’s RE-AIM model (Reach, Effectiveness, Adoption, Implementation, and Maintenance). Methods and Design: The program comprises three core components: 1) Outreach to underserved women by partnering with county organizations; 2) Navigation to guide patients through screening and appropriate follow-up; and 3) Centralized Reimbursement to coordinate funding for screening services through a central contract with Medicaid Breast and Cervical Cancer Services (BCCS). Using Glasgow’s RE-AIM model, we will: 1) assess which counties have the resources and capacity to implement outreach and/or navigation components, 2) train partners in each county on how to implement components, and 3) monitor process and outcome measures in each county at regular intervals, providing booster training when needed. Discussion: This evaluation strategy will elucidate how the heterogeneity of rural county infrastructure impacts decentralized service delivery as a navigation program expands.
In addition to increasing breast cancer screening access, our model improves and maintains time to diagnostic resolution and facilitates timely referral to local cancer treatment services. We offer this evaluation approach as an exemplar for scientific methods to evaluate the translation of evidence-based federal policy into sustainable health services delivery in a rural setting. PMID:28713882

  3. A passage retrieval method based on probabilistic information retrieval model and UMLS concepts in biomedical question answering.

    PubMed

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-04-01

    Passage retrieval, the identification of top-ranked passages that may contain the answer for a given biomedical question, is a crucial component for any biomedical question answering (QA) system. Passage retrieval in open-domain QA is a longstanding challenge widely studied over the last decades. However, it still requires further efforts in biomedical QA. In this paper, we present a new biomedical passage retrieval method based on Stanford CoreNLP sentence/passage length, probabilistic information retrieval (IR) model and UMLS concepts. In the proposed method, we first use our document retrieval system based on PubMed search engine and UMLS similarity to retrieve documents relevant to a given biomedical question. We then take the abstracts from the retrieved documents and use the Stanford CoreNLP sentence splitter to produce a set of sentences, i.e., candidate passages. Using stemmed words and UMLS concepts as features for the BM25 model, we finally compute the similarity scores between the biomedical question and each of the candidate passages and keep the N top-ranked ones. Experimental evaluations performed on large standard datasets, provided by the BioASQ challenge, show that the proposed method achieves good performance compared with the current state-of-the-art methods. The proposed method significantly outperforms the current state-of-the-art methods by an average of 6.84% in terms of mean average precision (MAP). We have proposed an efficient passage retrieval method which can be used to retrieve relevant passages in biomedical QA systems with high mean average precision. Copyright © 2017 Elsevier Inc. All rights reserved.
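    The BM25 scoring step described above can be sketched with a minimal Okapi BM25 ranker over tokenized candidate passages. This toy implementation and its token lists are illustrative only; the paper's actual system additionally uses stemming and UMLS concepts as features:

```python
import math
from collections import Counter

def bm25_scores(query_terms, passages, k1=1.2, b=0.75):
    """Minimal Okapi BM25: score each tokenized passage against a query."""
    N = len(passages)
    avgdl = sum(len(p) for p in passages) / N  # average passage length
    df = Counter()                             # document frequency per term
    for p in passages:
        df.update(set(p))
    scores = []
    for p in passages:
        tf = Counter(p)                        # term frequency in this passage
        s = 0.0
        for t in query_terms:
            if tf[t] == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(p) / avgdl)
            s += idf * tf[t] * (k1 + 1) / norm
        scores.append(s)
    return scores

# Hypothetical mini-corpus of candidate passages
passages = [["pyruvate", "dehydrogenase", "complex"],
            ["breast", "cancer", "screening"],
            ["dehydrogenase", "activity", "assay"]]
print(bm25_scores(["dehydrogenase"], passages))
```

Passages containing the query term score positively; the middle passage, which lacks it, scores zero.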

  4. Warm-Core Intensification Through Horizontal Eddy Heat Transports into the Eye

    NASA Technical Reports Server (NTRS)

    Braun, Scott A.; Montgomery, Michael T.; Fulton, John; Nolan, David S.; Starr, David OC (Technical Monitor)

    2001-01-01

    A simulation of Hurricane Bob (1991) using the PSU/NCAR MM5 mesoscale model with a finest mesh spacing of 1.3 km is used to diagnose the heat budget of the hurricane. Heat budget terms, including latent and radiative heating, boundary layer forcing, and advection terms, were output directly from the model for a 6-h period with 2-min frequency. Previous studies of warm core formation have emphasized the warming associated with gentle subsidence within the eye. The simulation of Hurricane Bob confirms subsidence warming as a major factor for eye warming, but also shows a significant contribution from horizontal advective terms. When averaged over the area of the eye, subsidence is found to strongly warm the mid-troposphere (2-9 km) while horizontal advection warms the mid to upper troposphere (5-13 km) with about equal magnitude. Partitioning of the horizontal advective terms into azimuthal mean and eddy components shows that the mean radial circulation does not, as expected, generally contribute to this warming, but that it is produced almost entirely by the horizontal eddy transport of heat into the eye. A further breakdown of the eddy components into azimuthal wave numbers 1, 2, and higher indicates that the warming is dominated by wave number 1 asymmetries, with smaller contributions from higher wave numbers. Warming by horizontal eddy transport is consistent with idealized modeling of vortex Rossby waves and work is in progress to identify and clarify the role of vortex Rossby waves in warm-core intensification in both the full-physics model and idealized models.
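    The partitioning into an azimuthal mean and wavenumber components amounts to a discrete Fourier decomposition of a field sampled around a ring. A sketch with a synthetic anomaly field (the amplitudes below are made up for illustration, not budget values from the simulation):

```python
import numpy as np

# Sample a field (e.g., a temperature anomaly) at n points around a ring
n = 64
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
# Synthetic field: azimuthal mean + wavenumber-1 + wavenumber-2 parts
field = 2.0 + 1.5 * np.cos(theta) + 0.3 * np.cos(2 * theta)

coeffs = np.fft.rfft(field) / n
mean = coeffs[0].real            # azimuthal mean (wavenumber 0)
amp = 2 * np.abs(coeffs[1:])     # amplitudes of wavenumbers 1, 2, ...
print(round(mean, 3), round(amp[0], 3), round(amp[1], 3))
```

The decomposition recovers the mean (2.0) and the wavenumber-1 (1.5) and wavenumber-2 (0.3) amplitudes, mirroring how the eddy heat transport is broken down by wavenumber in the budget analysis.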

  5. Systems biology definition of the core proteome of metabolism and expression is consistent with high-throughput data.

    PubMed

    Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O

    2015-08-25

    Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.

  6. Embedded binaries and their dense cores

    NASA Astrophysics Data System (ADS)

    Sadavoy, Sarah I.; Stahler, Steven W.

    2017-08-01

    We explore the relationship between young, embedded binaries and their parent cores, using observations within the Perseus Molecular Cloud. We combine recently published Very Large Array observations of young stars with core properties obtained from Submillimetre Common-User Bolometer Array 2 observations at 850 μm. Most embedded binary systems are found towards the centres of their parent cores, although several systems have components closer to the core edge. Wide binaries, defined as those systems with physical separations greater than 500 au, show a tendency to be aligned with the long axes of their parent cores, whereas tight binaries show no preferred orientation. We test a number of simple, evolutionary models to account for the observed populations of Class 0 and I sources, both single and binary. In the model that best explains the observations, all stars form initially as wide binaries. These binaries either break up into separate stars or else shrink into tighter orbits. Under the assumption that both stars remain embedded following binary break-up, we find a total star formation rate of 168 Myr⁻¹. Alternatively, one star may be ejected from the dense core due to binary break-up. This latter assumption results in a star formation rate of 247 Myr⁻¹. Both production rates are in satisfactory agreement with current estimates from other studies of Perseus. Future observations should be able to distinguish between these two possibilities. If our model continues to provide a good fit to other star-forming regions, then the mass fraction of dense cores that becomes stars is double what is currently believed.

  7. Nuclear Power Plant Mechanical Component Flooding Fragility Experiments Status

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, C. L.; Savage, B.; Johnson, B.

    This report describes progress on Nuclear Power Plant mechanical component flooding fragility experiments and supporting research. The progress includes execution of full scale fragility experiments using hollow-core doors, design of improvements to the Portal Evaluation Tank, equipment procurement and initial installation of PET improvements, designation of experiments exploiting the improved PET capabilities, fragility mathematical model development, Smoothed Particle Hydrodynamic simulations, wave impact simulation device research, and pipe rupture mechanics research.

  8. Electrosprayed core-shell polymer-lipid nanoparticles for active component delivery

    NASA Astrophysics Data System (ADS)

    Eltayeb, Megdi; Stride, Eleanor; Edirisinghe, Mohan

    2013-11-01

    A key challenge in the production of multicomponent nanoparticles for healthcare applications is obtaining reproducible monodisperse nanoparticles with the minimum number of preparation steps. This paper focuses on the use of electrohydrodynamic (EHD) techniques to produce core-shell polymer-lipid structures with a narrow size distribution in a single-step process. These nanoparticles are composed of a hydrophilic core for active component encapsulation and a lipid shell. It was found that core-shell nanoparticles with a tunable size range between 30 and 90 nm and a narrow size distribution could be reproducibly manufactured. The results indicate that the lipid component (stearic acid) stabilizes the nanoparticles against collapse and aggregation and improves entrapment of active components, in this case vanillin, ethylmaltol and maltol. The overall structure of the nanoparticles produced was examined by multiple methods, including transmission electron microscopy and differential scanning calorimetry, to confirm that they were of core-shell form.

  9. Runtime Performance and Virtual Network Control Alternatives in VM-Based High-Fidelity Network Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoginath, Srikanth B; Perumalla, Kalyan S; Henz, Brian J

    2012-01-01

    In prior work (Yoginath and Perumalla, 2011; Yoginath, Perumalla and Henz, 2012), the motivation, challenges and issues were articulated in favor of virtual time ordering of Virtual Machines (VMs) in network simulations hosted on multi-core machines. Two major components in the overall virtualization challenge are (1) virtual timeline establishment and scheduling of VMs, and (2) virtualization of inter-VM communication. Here, we extend prior work by presenting scaling results for the first component, with experiment results on up to 128 VMs scheduled in virtual time order on a single 12-core host. We also explore the solution space of design alternatives for the second component, and present performance results from a multi-threaded, multi-queue implementation of inter-VM network control for synchronized execution with VM scheduling, incorporated in our NetWarp simulation system.

  10. A Mathematical Evaluation of the Core Conductor Model

    PubMed Central

    Clark, John; Plonsey, Robert

    1966-01-01

    This paper is a mathematical evaluation of the core conductor model where its three dimensionality is taken into account. The problem considered is that of a single, active, unmyelinated nerve fiber situated in an extensive, homogeneous, conducting medium. Expressions for the various core conductor parameters have been derived in a mathematically rigorous manner according to the principles of electromagnetic theory. The purpose of employing mathematical rigor in this study is to bring to light the inherent assumptions of the one dimensional core conductor model, providing a method of evaluating the accuracy of this linear model. Based on the use of synthetic squid axon data, the conclusion of this study is that the linear core conductor model is a good approximation for internal but not external parameters. PMID:5903155
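    The one-dimensional core conductor (cable) model evaluated above reduces, in the passive steady state, to an exponential voltage decay governed by a length constant set by the membrane and axial resistances. A minimal sketch (parameter values are illustrative, not the paper's squid axon data):

```python
import math

def length_constant(rm, ri, re=0.0):
    """Steady-state length constant of the 1D core conductor model.
    rm: membrane resistance times unit length (ohm*cm);
    ri, re: internal/external axial resistances per unit length (ohm/cm)."""
    return math.sqrt(rm / (ri + re))

def v_decay(x, v0, lam):
    """Passive voltage decay with distance x from a point held at v0."""
    return v0 * math.exp(-abs(x) / lam)

lam = length_constant(rm=1.6e5, ri=1.0e5)   # illustrative values only
print(round(lam, 3))                         # length constant in cm
print(round(v_decay(lam, 1.0, lam), 3))      # falls to 1/e at one length constant
```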

  11. Mantle-driven geodynamo features - accounting for non-thermal lower mantle features

    NASA Astrophysics Data System (ADS)

    Choblet, G.; Amit, H.

    2011-12-01

    Lower mantle heterogeneity responsible for spatial variations of the CMB heat flux could control long-term geodynamo properties such as deviations from axial symmetry in the magnetic field and the core flow, frequency of geomagnetic reversals and anisotropic growth of the inner core. In this context, a classical interpretation of tomographic mapping of the lowermost mantle is to correlate linearly seismic velocities to heat flux anomalies. This implicitly assumes that temperature alone controls the tomographic anomalies. In addition, the limited spatial resolution of tomographic images precludes modeling sharp CMB heat flux structures. There has been growing evidence, however, that non-thermal origins are also to be expected for seismic velocity anomalies: the three main additional control parameters are (i) compositional anomalies possibly associated with the existence of a deep denser layer, (ii) the phase transition in magnesium perovskite believed to occur in the lowermost mantle and (iii) the possible presence of partial melts. Numerical models of mantle dynamics have illustrated how the first two parameters could distort the linear relationship between shear wave velocity anomalies and CMB heat flux (Nakagawa and Tackley, 2008). In this presentation we will consider the effect of such alternative interpretations of seismic velocity anomalies in order to prescribe CMB heat flux as an outer boundary for dynamo simulations. We first focus on the influence of post-perovskite. Taking into account this complexity could result in an improved agreement between the long-term average properties of simulated dynamos and geophysical observations, including the Atlantic/Pacific hemispherical dichotomy in core flow activity, the single intense paleomagnetic field structure in the southern hemisphere, and possibly degree 1 dominant mode of inner-core seismic heterogeneity. We then account for sharp anomalies that are not resolved by the global tomographic probe.
For instance, Ultra Low Velocity Zones (ULVZs) have been identified by dedicated seismic tools that cannot be observed by global tomographic models. These are likely associated with the hottest regions in the lowermost mantle. We thus model anomalies of the CMB heat flux where narrow ridges with low heat flux are juxtaposed with a large-scale degree 2 pattern which represents the dominant component of tomographic observations. We find that hot ridges located within a large-scale positive heat flux anomaly to the east produce a time-average narrow elongated upwelling which acts as a flow barrier at the top of the core and results in intensified low-latitude magnetic flux patches. This is found to have a clear signature on the meridional component of the thermal wind balance. Based on the lower mantle seismic tomography pattern, time-average intense geomagnetic flux patches are expected below east Asia and Oceania and below the Americas.

  12. Vulnerable Atherosclerotic Plaque Elasticity Reconstruction Based on a Segmentation-Driven Optimization Procedure Using Strain Measurements: Theoretical Framework

    PubMed Central

    Le Floc’h, Simon; Tracqui, Philippe; Finet, Gérard; Gharib, Ahmed M.; Maurice, Roch L.; Cloutier, Guy; Pettigrew, Roderic I.

    2016-01-01

    It is now recognized that prediction of the vulnerable coronary plaque rupture requires not only an accurate quantification of fibrous cap thickness and necrotic core morphology but also a precise knowledge of the mechanical properties of plaque components. Indeed, such knowledge would allow a precise evaluation of the peak cap-stress amplitude, which is known to be a good biomechanical predictor of plaque rupture. Several studies have been performed to reconstruct a Young’s modulus map from strain elastograms. The main issue in improving such methods appears to lie not in the optimization algorithm itself, but rather in the preconditioning step, which requires the best possible estimate of the plaque components’ contours. The present theoretical study was therefore designed to develop: 1) a preconditioning model to extract the plaque morphology in order to initiate the optimization process, and 2) an approach combining a dynamic segmentation method with an optimization procedure to highlight the modulogram of the atherosclerotic plaque. This methodology, based on the continuum mechanics theory prescribing the strain field, was successfully applied to seven intravascular ultrasound coronary lesion morphologies. The reconstructed cap thickness, necrotic core area, calcium area, and the Young’s moduli of the calcium, necrotic core, and fibrosis were obtained with mean relative errors of 12%, 4% and 1%, 43%, 32%, and 2%, respectively. PMID:19164080

  13. Geologic columns for the ICDP-USGS Eyreville A and B cores, Chesapeake Bay impact structure: Sediment-clast breccias, 1096 to 444 m depth

    USGS Publications Warehouse

    Edwards, L.E.; Powars, D.S.; Gohn, G.S.; Dypvik, H.

    2009-01-01

    The Eyreville A and B cores, recovered from the "moat" of the Chesapeake Bay impact structure, provide a thick section of sediment-clast breccias and minor stratified sediments from 1095.74 to 443.90 m. This paper discusses the components of these breccias, presents a geologic column and descriptive lithologic framework for them, and formalizes the Exmore Formation. From 1095.74 to ∼867 m, the cores consist of nonmarine sediment boulders and sand (rare blocks up to 15.3 m intersected diameter). A sharp contact in both cores at ∼867 m marks the lowest clayey, silty, glauconitic quartz sand that constitutes the base of the Exmore Formation and its lower diamicton member. Here, material derived from the upper sediment target layers, as well as some impact ejecta, occurs. The block-dominated member of the Exmore Formation, from ∼855-618.23 m, consists of nonmarine sediment blocks and boulders (up to 45.5 m) that are juxtaposed complexly. Blocks of oxidized clay are an important component. Above 618.23 m, which is the base of the informal upper diamicton member of the Exmore Formation, the glauconitic matrix is a consistent component in diamicton layers between nonmarine sediment clasts that decrease in size upward in the section. Crystalline-rock clasts are not randomly distributed but rather form local concentrations. The upper part of the Exmore Formation consists of crudely fining-upward sandy packages capped by laminated silt and clay. The overlap interval of Eyreville A and B (940-∼760 m) allows recognition of local similarities and differences in the breccias. © 2009 The Geological Society of America.

  14. Majority of Solar Wind Intervals Support Ion-Driven Instabilities

    NASA Astrophysics Data System (ADS)

    Klein, K. G.; Alterman, B. L.; Stevens, M. L.; Vech, D.; Kasper, J. C.

    2018-05-01

    We perform a statistical assessment of solar wind stability at 1 AU against ion sources of free energy using Nyquist's instability criterion. In contrast to typically employed threshold models which consider a single free-energy source, this method includes the effects of proton and He²⁺ temperature anisotropy with respect to the background magnetic field as well as relative drifts between the proton core, proton beam, and He²⁺ components on stability. Of 309 randomly selected spectra from the Wind spacecraft, 53.7% are unstable when the ion components are modeled as drifting bi-Maxwellians; only 4.5% of the spectra are unstable to long-wavelength instabilities. A majority of the instabilities occur for spectra where a proton beam is resolved. Nearly all observed instabilities have growth rates γ slower than instrumental and ion-kinetic-scale timescales. Unstable spectra are associated with relatively large He²⁺ drift speeds and/or a departure of the core proton temperature from isotropy; other parametric dependencies of unstable spectra are also identified.

  15. Strain field mapping of dislocations in a Ge/Si heterostructure.

    PubMed

    Liu, Quanlong; Zhao, Chunwang; Su, Shaojian; Li, Jijun; Xing, Yongming; Cheng, Buwen

    2013-01-01

    Ge/Si heterostructure with fully strain-relaxed Ge film was grown on a Si (001) substrate using a two-step ultra-high vacuum chemical vapor deposition process. The dislocations in the Ge/Si heterostructure were experimentally investigated by high-resolution transmission electron microscopy (HRTEM). The dislocations at the Ge/Si interface were identified to be 90° full-edge dislocations, which are the most efficient way of obtaining a fully relaxed Ge film. The only defect found in the Ge epitaxial film was a 60° dislocation. The nanoscale strain field of the dislocations was mapped by the geometric phase analysis technique from the HRTEM image. The strain field around the edge component of the 60° dislocation core was compared with those of the Peierls-Nabarro and Foreman dislocation models. The comparison shows that the Foreman model with a = 1.5 can appropriately describe the strain field around the edge component of a 60° dislocation core in a relaxed Ge film on a Si substrate.

  16. SESNPCA: Principal Component Analysis Applied to Stripped-Envelope Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Williamson, Marc; Bianco, Federica; Modjaz, Maryam

    2018-01-01

    In the new era of time-domain astronomy, it will become increasingly important to have rigorous, data driven models for classifying transients, including supernovae (SNe). We present the first application of principal component analysis (PCA) to stripped-envelope core-collapse supernovae (SESNe). Previous studies of SNe types Ib, IIb, Ic, and broad-line Ic (Ic-BL) focus only on specific spectral features, while our PCA algorithm uses all of the information contained in each spectrum. We use one of the largest compiled datasets of SESNe, containing over 150 SNe, each with spectra taken at multiple phases. Our work focuses on 49 SNe with spectra taken 15 ± 5 days after maximum V-band light where better distinctions can be made between SNe type Ib and Ic spectra. We find that spectra of SNe type IIb and Ic-BL are separable from the other types in PCA space, indicating that PCA is a promising option for developing a purely data driven model for SESNe classification.
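    The PCA step above amounts to finding the principal directions of a mean-centered spectra matrix. A minimal sketch via singular value decomposition, using a synthetic random matrix in place of real supernova spectra (all numbers below are made up; the real pipeline would also preprocess the spectra):

```python
import numpy as np

# Synthetic stand-in for a spectra matrix: rows = SNe, columns = wavelength bins
rng = np.random.default_rng(0)
n_sne, n_bins = 49, 200
spectra = rng.normal(size=(n_sne, n_bins))

# PCA via SVD of the mean-centered data
centered = spectra - spectra.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
components = vt                      # principal directions in wavelength space
projection = centered @ vt[:2].T     # each SN as a point in 2-D PCA space
explained = s**2 / np.sum(s**2)      # fraction of variance per component

print(projection.shape)              # one 2-D coordinate per supernova
```

Plotting the rows of `projection` and coloring by spectroscopic type is the kind of low-dimensional view in which separable classes (here, IIb and Ic-BL) would appear as distinct clusters.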

  17. Majority of Solar Wind Intervals Support Ion-Driven Instabilities.

    PubMed

    Klein, K G; Alterman, B L; Stevens, M L; Vech, D; Kasper, J C

    2018-05-18

    We perform a statistical assessment of solar wind stability at 1 AU against ion sources of free energy using Nyquist's instability criterion. In contrast to typically employed threshold models which consider a single free-energy source, this method includes the effects of proton and He^{2+} temperature anisotropy with respect to the background magnetic field as well as relative drifts between the proton core, proton beam, and He^{2+} components on stability. Of 309 randomly selected spectra from the Wind spacecraft, 53.7% are unstable when the ion components are modeled as drifting bi-Maxwellians; only 4.5% of the spectra are unstable to long-wavelength instabilities. A majority of the instabilities occur for spectra where a proton beam is resolved. Nearly all observed instabilities have growth rates γ slower than instrumental and ion-kinetic-scale timescales. Unstable spectra are associated with relatively large He^{2+} drift speeds and/or a departure of the core proton temperature from isotropy; other parametric dependencies of unstable spectra are also identified.

  18. Building Capacity for Workplace Health Promotion: Findings From the Work@Health® Train-the-Trainer Program

    PubMed Central

    Lang, Jason; Cluff, Laurie; Rineer, Jennifer; Brown, Darigg; Jones-Jack, Nkenge

    2017-01-01

    Small- and mid-sized employers are less likely to have expertise, capacity, or resources to implement workplace health promotion programs, compared with large employers. In response, the Centers for Disease Control and Prevention developed the Work@Health® employer training program to determine the best way to deliver skill-based training to employers of all sizes. The core curriculum was designed to increase employers’ knowledge of the design, implementation, and evaluation of workplace health strategies. The first arm of the program was direct employer training. In this article, we describe the results of the second arm—the program’s train-the-trainer (T3) component, which was designed to prepare new certified trainers to provide core workplace health training to other employers. Of the 103 participants who began the T3 program, 87 fully completed it and delivered the Work@Health core training to 233 other employers. Key indicators of T3 participants’ knowledge and attitudes significantly improved after training. The curriculum delivered through the T3 model has the potential to increase the health promotion capacity of employers across the nation, as well as organizations that work with employers, such as health departments and business coalitions. PMID:28829622

  19. Building Capacity for Workplace Health Promotion: Findings From the Work@Health® Train-the-Trainer Program.

    PubMed

    Lang, Jason; Cluff, Laurie; Rineer, Jennifer; Brown, Darigg; Jones-Jack, Nkenge

    2017-11-01

    Small- and mid-sized employers are less likely to have expertise, capacity, or resources to implement workplace health promotion programs, compared with large employers. In response, the Centers for Disease Control and Prevention developed the Work@Health ® employer training program to determine the best way to deliver skill-based training to employers of all sizes. The core curriculum was designed to increase employers' knowledge of the design, implementation, and evaluation of workplace health strategies. The first arm of the program was direct employer training. In this article, we describe the results of the second arm-the program's train-the-trainer (T3) component, which was designed to prepare new certified trainers to provide core workplace health training to other employers. Of the 103 participants who began the T3 program, 87 fully completed it and delivered the Work@Health core training to 233 other employers. Key indicators of T3 participants' knowledge and attitudes significantly improved after training. The curriculum delivered through the T3 model has the potential to increase the health promotion capacity of employers across the nation, as well as organizations that work with employers, such as health departments and business coalitions.

  20. 12 CFR 567.5 - Components of capital.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Requirements § 567.5 Components of capital. (a) Core Capital. (1) The following elements, 3 less the amount of any deductions pursuant to paragraph (a)(2) of this section, comprise a savings association's core... includable in core capital. (iii) Minority interests in the equity accounts of the subsidiaries that are...

  1. Using SAFRAN Software to Assess Radiological Hazards from Dismantling of Tammuz-2 Reactor Core at Al-tuwaitha Nuclear Site

    NASA Astrophysics Data System (ADS)

    Abed Gatea, Mezher; Ahmed, Anwar A.; jundee kadhum, Saad; Ali, Hasan Mohammed; Hussein Muheisn, Abbas

    2018-05-01

    The Safety Assessment Framework (SAFRAN) software was applied here for radiological safety analysis, to verify that the dose acceptance criteria and safety goals are met with a high degree of confidence for the dismantling of the Tammuz-2 reactor core at the Al-tuwaitha nuclear site. Characterization, dismantling, and packaging activities were carried out to manage the generated radioactive waste. Dose to the worker was considered as an endpoint scenario, while dose to the public was neglected because the Tammuz-2 facility is located in a restricted zone and a 30 m berm surrounds the Al-tuwaitha site. The safety assessment for the dismantling-worker endpoint scenario was based on the maximum external dose at the component position level in the reactor pool plus the internal dose via airborne activity, whereas the characterization- and packaging-worker endpoint scenarios considered external dose only, because there was no evidence of airborne radioactivity hazards outside the reactor pool. In-situ measurements confirmed that the reactor core components are radiologically activated by the Co-60 radioisotope. SAFRAN results showed that the maximum doses received by workers are 1.85, 0.64, and 1.3 mSv/y for the dismantling, characterization, and packaging of reactor core components, respectively. Hence, the radiological hazards remain below the low-level hazard threshold and within the acceptable annual dose for workers in the radiation field.

  2. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing based on embedded systems will enhance the stereo-imaging capability of LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late, compared with that for PC computers. In this paper, a parallel model for stereo imaging on an embedded multi-core processing platform is studied and verified. After analyzing the computational load, throughput capacity, and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
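
    A two-stage pipeline based on message transmission can be sketched with ordinary threads and a bounded queue: while stage 2 processes frame i, stage 1 is already working on frame i+1. The stage functions here are hypothetical stand-ins for the real imaging steps, not the paper's DSP implementation:

```python
import queue
import threading

def run_pipeline(frames, stage1, stage2):
    """Two-stage pipeline: the stages run concurrently and communicate
    by message passing through a bounded queue."""
    q = queue.Queue(maxsize=4)   # buffering between the stages
    out = []

    def worker1():
        for f in frames:
            q.put(stage1(f))
        q.put(None)              # end-of-stream marker

    def worker2():
        while (item := q.get()) is not None:
            out.append(stage2(item))

    t1 = threading.Thread(target=worker1)
    t2 = threading.Thread(target=worker2)
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    return out

# Hypothetical stand-ins for the real imaging stages.
rectify = lambda f: f * 2      # stage 1: e.g. preprocessing/rectification
match = lambda f: f + 1        # stage 2: e.g. matching/positioning
print(run_pipeline([1, 2, 3], rectify, match))  # [3, 5, 7]
```

    With balanced stage costs, throughput approaches one frame per stage-time, which is the mechanism behind pipeline speed-up on a multi-core part.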

  3. A strong radio brightening at the jet base of M 87 during the elevated very high energy gamma-ray state in 2012

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hada, K.; Giroletti, M.; Giovannini, G.

    2014-06-20

    We report our intensive, high angular resolution radio monitoring observations of the jet in M 87 with the VLBI Exploration of Radio Astrometry (VERA) and the European VLBI Network (EVN) from 2011 February to 2012 October, together with contemporaneous high-energy (100 MeV-100 GeV) γ rays by VERITAS. We detected a remarkable (up to ∼70%) increase of the radio flux density from the unresolved jet base (radio core) with VERA at 22 and 43 GHz coincident with the VHE activity. Meanwhile, we confirmed with EVN at 5 GHz that the peculiar knot, HST-1, which is an alternative favored γ-ray production site located at ≳120 pc from the nucleus, remained quiescent in terms of its flux density and structure. These results in the radio bands strongly suggest that the VHE γ-ray activity in 2012 originates in the jet base within 0.03 pc or 56 Schwarzschild radii (the VERA spatial resolution of 0.4 mas at 43 GHz) from the central supermassive black hole. We further conducted VERA astrometry for the M 87 core at six epochs during the flaring period, and detected core shifts between 22 and 43 GHz, a mean value of which is similar to that measured in the previous astrometric measurements. We also discovered a clear frequency-dependent evolution of the radio core flare at 43, 22, and 5 GHz; the radio flux density increased more rapidly at higher frequencies with a larger amplitude, and the light curves clearly showed a time-lag between the peaks at 22 and 43 GHz, the value of which is constrained to be within ∼35-124 days. This indicates that a new radio-emitting component was created near the black hole in the period of the VHE event, and then propagated outward with progressively decreasing synchrotron opacity. By combining the obtained core shift and time-lag, we estimated an apparent speed of the newborn component propagating through the opaque region between the cores at 22 and 43 GHz. We derived a sub-luminal speed (less than ∼0.2c) for this component. This value is significantly slower than the super-luminal (∼1.1c) features that appeared from the core during the prominent VHE flaring event in 2008, suggesting that stronger VHE activity can be associated with the production of a higher Lorentz factor jet in M 87.
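
    The apparent-speed estimate is, at heart, a projected separation divided by a time lag, converted to units of c. The distance and lag below are illustrative placeholders, not the paper's measured core shift or peak lag:

```python
# Apparent speed of the newborn component from core shift and time lag.
PC_M = 3.0857e16          # metres per parsec
C_MS = 2.9979e8           # speed of light, m/s

core_shift_pc = 0.01      # projected 22-43 GHz core separation (placeholder)
time_lag_days = 80.0      # lag between the 43 and 22 GHz peaks (placeholder)

speed_ms = core_shift_pc * PC_M / (time_lag_days * 86400.0)
print(round(speed_ms / C_MS, 2))   # apparent speed in units of c
```

    With these example numbers the result is sub-luminal (about 0.15c), in the same regime as the < ∼0.2c value reported above.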

  4. ECLS systems for a lunar base - A baseline and some alternate concepts

    NASA Technical Reports Server (NTRS)

    Hypes, Warren D.; Hall, John B., Jr.

    1988-01-01

    A baseline ECLS system for a lunar base manned intermittently by four crewmembers and later permanently occupied by eight crewmembers has been designed. A summary of the physical characteristics for the intermittently manned and the continuously manned bases is given. Since Space Station inheritance is a key assumption in the mission models, the ECLS system components are distributed within Space Station modules and nodes. A 'core assembly' concept is then developed to meet the objectives of both phases of the ECLS system. A supplementary study is discussed which assessed tankage requirements, penalties incurred by adding subsystem redundancy and by pressurizing large surface structures, and difficulties imposed by intermittent occupancy. Alternate concepts using lunar-derived oxygen, the gravitational field as a design aid, and a city utility-type ECLS system are also discussed.

  5. ANALYSIS OF THE HERSCHEL /HEXOS SPECTRAL SURVEY TOWARD ORION SOUTH: A MASSIVE PROTOSTELLAR ENVELOPE WITH STRONG EXTERNAL IRRADIATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tahani, K.; Plume, R.; Bergin, E. A.

    2016-11-20

    We present results from a comprehensive submillimeter spectral survey toward the source Orion South, based on data obtained with the Heterodyne Instrument for the Far-Infrared on board the Herschel Space Observatory, covering the frequency range of 480 to 1900 GHz. We detect 685 spectral lines with signal-to-noise ratios (S/Ns) > 3σ, originating from 52 different molecular and atomic species. We model each of the detected species assuming conditions of Local Thermodynamic Equilibrium. This analysis provides an estimate of the physical conditions of Orion South (column density, temperature, source size, and V_LSR). We find evidence for three different cloud components: a cool (T_ex ∼ 20–40 K), spatially extended (>60″), and quiescent (ΔV_FWHM ∼ 4 km s^-1) component; a warmer (T_ex ∼ 80–100 K), less spatially extended (∼30″), and dynamic (ΔV_FWHM ∼ 8 km s^-1) component, which is likely affected by embedded outflows; and a kinematically distinct region (T_ex > 100 K; V_LSR ∼ 8 km s^-1), dominated by emission from species that trace ultraviolet irradiation, likely at the surface of the cloud. We find little evidence for the existence of a chemically distinct “hot-core” component, likely due to the small filling factor of the hot core or hot cores within the Herschel beam. We find that the chemical composition of the gas in the cooler, quiescent component of Orion South more closely resembles that of the quiescent ridge in Orion-KL. The gas in the warmer, dynamic component, however, more closely resembles that of the Compact Ridge and Plateau regions of Orion-KL, suggesting that higher temperatures and shocks also have an influence on the overall chemistry of Orion South.

  6. Excellence and evidence in staffing: a data-driven model for excellence in staffing (2nd edition).

    PubMed

    Baggett, Margarita; Batcheller, Joyce; Blouin, Ann Scott; Behrens, Elizabeth; Bradley, Carol; Brown, Mary J; Brown, Diane Storer; Bolton, Linda Burnes; Borromeo, Annabelle R; Burtson, Paige; Caramanica, Laura; Caspers, Barbara A; Chow, Marilyn; Christopher, Mary Ann; Clarke, Sean P; Delucas, Christine; Dent, Robert L; Disser, Tony; Eliopoulos, Charlotte; Everett, Linda Q; Garcia, Amy; Glassman, Kimberly; Goodwin, Susan; Haagenson, Deb; Harper, Ellen; Harris, Kathy; Hoying, Cheryl L; Hughes-Rease, Marsha; Kelly, Lesly; Kiger, Anna J; Kobs-Abbott, Ann; Krueger, Janelle; Larson, Jackie; March, Connie; Martin, Deborah Maust; Mazyck, Donna; Meenan, Penny; McGaffigan, Patricia; Myers, Karen K; Nell, Kate; Newcomer, Britta; Cathy, Rick; O'Rourke, Maria; Rosa, Billy; Rose, Robert; Rudisill, Pamela; Sanford, Kathy; Simpson, Roy L; Snowden, Tami; Strickland, Bob; Strohecker, Sharon; Weems, Roger B; Welton, John; Weston, Marla; Valentine, Nancy M; Vento, Laura; Yendro, Susan

    2014-01-01

    The Patient Protection and Affordable Care Act (PPACA, 2010) and the Institute of Medicine's (IOM, 2011) Future of Nursing report have prompted changes in the U.S. health care system. This has also stimulated a new direction of thinking for the profession of nursing. New payment and priority structures, where value is placed ahead of volume in care, will start to define our health system in new and unknown ways for years. One thing we all know for sure: we cannot afford the same inefficient models and systems of care of yesterday any longer. The Data-Driven Model for Excellence in Staffing was created as the organizing framework to lead the development of best practices for nurse staffing across the continuum through research and innovation. Regardless of the setting, nurses must integrate multiple concepts with the value of professional nursing to create new care and staffing models. Traditional models demonstrate that nurses are a commodity. If the profession is to make any significant changes in nurse staffing, it is through the articulation of the value of our professional practice within the overall health care environment. This position paper is organized around the concepts from the Data-Driven Model for Excellence in Staffing. The main concepts are: Core Concept 1: Users and Patients of Health Care, Core Concept 2: Providers of Health Care, Core Concept 3: Environment of Care, Core Concept 4: Delivery of Care, Core Concept 5: Quality, Safety, and Outcomes of Care. This position paper provides a comprehensive view of those concepts and components, why those concepts and components are important in this new era of nurse staffing, and a 3-year challenge that will push the nursing profession forward in all settings across the care continuum. There are decades of research supporting various changes to nurse staffing. Yet little has been done to move that research into practice and operations. 
While the primary goal of this position paper is to generate research and innovative thinking about nurse staffing across all health care settings, a second goal is to stimulate additional publications. This includes a goal of at least 20 articles in Nursing Economic$ on best practices in staffing and care models from across the continuum over the next 3 years.

  7. Introducing Gross Pathology to Undergraduate Medical Students in the Dissecting Room

    ERIC Educational Resources Information Center

    Wood, Andrew; Struthers, Kate; Whiten, Susan; Jackson, David; Herrington, C. Simon

    2010-01-01

    Pathology and anatomy are both sciences that contribute to the foundations of a successful medical career. In the past decade, medical education has undergone profound changes with the development of a core curriculum combined with student selected components. There has been a shift from discipline-based teaching towards problem-based learning.…

  8. Why Do We Need Future Ready Librarians? That Kid.

    ERIC Educational Resources Information Center

    Ray, Mark

    2018-01-01

    In this article, the author examines the need of the Future Ready Librarians (FRL) initiative. The FRL Framework helps define how librarians might lead, teach, and support schools based on the core research-based components defined by Future Ready. The framework and initiative are intended to be ways to change the conversation about school…

  9. Work-Based Learning through Civic Engagement

    ERIC Educational Resources Information Center

    McKoy, Deborah L.; Stern, David; Bierbaum, Ariel H.

    2011-01-01

    Work-based learning (WBL), an important part of the 1990s "School to Work" movement, is a core component of the Linked Learning strategy which is now shaping efforts to improve secondary education in California and around the nation in cities such as Detroit, New York, and Philadelphia. WBL can include not only classic internships and…

  10. Inclusive Instruction: Evidence-Based Practices for Teaching Students with Disabilities. What Works for Special-Needs Learners Series

    ERIC Educational Resources Information Center

    Brownell, Mary T.; Smith, Sean J.; Crockett, Jean B.; Griffin, Cynthia C.

    2012-01-01

    This accessible book presents research-based strategies for supporting K-8 students with high-incidence disabilities to become accomplished learners. The authors clearly describe the core components of effective inclusive instruction, showing how to recognize and respond to individual students' needs quickly and appropriately. Teachers are…

  11. Palaeomagnetic constraints on the evolution of the Atlantis Massif oceanic core complex (Mid-Atlantic Ridge, 30°N)

    NASA Astrophysics Data System (ADS)

    Morris, A.; Pressling, N.; Gee, J. S.

    2012-04-01

    Oceanic core complexes expose lower crustal and upper mantle rocks on the seafloor by tectonic unroofing in the footwalls of large-slip detachment faults. They represent a fundamental component of the seafloor spreading system at slow and ultraslow axes. One of the most extensively studied oceanic core complexes is Atlantis Massif, located at 30°N at the intersection of the Atlantis Transform Fault and the Mid Atlantic Ridge (MAR). The central dome of the massif exposes the corrugated detachment fault surface and was drilled during IODP Expedition 304/305 (Hole U1309D). This sampled a 1.4 km faulted and complexly layered footwall section dominated by gabbroic lithologies with minor ultramafic rocks. Palaeomagnetic analyses demonstrate that the gabbroic sequences at Atlantis Massif carry highly stable remanent magnetizations that provide valuable information on the evolution of the section. Thermal demagnetization experiments recover high unblocking temperature components of reversed polarity (R1) throughout the gabbroic sequences. Correlation of structures observed on oriented borehole (FMS) images and those recorded on unoriented core pieces allows reorientation of R1 remanences. The mean remanence direction in true geographic coordinates constrains the tectonic rotation experienced by the Atlantis Massif footwall, indicating a 46°±6° counterclockwise rotation around a MAR-parallel horizontal axis trending 011°±6°. The detachment fault therefore initiated at a steep dip of >50° and then rotated flexurally to its present-day low-angle geometry (consistent with a 'rolling-hinge' model for detachment evolution). In a number of intervals, the gabbros exhibit a complex remanence structure with the presence of additional intermediate temperature normal (N1) and lower temperature reversed (R2) polarity components, suggesting an extended period of remanence acquisition during different polarity intervals. 
Sharp break-points between different polarity components suggest that they were acquired by a thermal mechanism. There appears to be no correlation between remanence structure and either the igneous stratigraphy or the distribution of alteration in the core. Instead, the remanence data are consistent with a model in which the lower crustal section acquired magnetizations of different polarity during a protracted cooling history spanning two geomagnetic reversals. The crystallization age of the section (1.2 Ma; derived from Pb/U zircon dating) suggests that the R1 component was acquired during geomagnetic polarity chron C1r.2r, N1 during chron C1r.1n (Jaramillo) and R2 during chron C1r.1r. By considering the maximum time intervals available for acquisition of the N1 and R2 components and correcting laboratory unblocking temperatures accordingly, the data provide additional constraints on the thermal evolution of the Atlantis Massif footwall.

  12. Palaeomagnetic constraints on the evolution of the Atlantis Massif oceanic core complex (Mid-Atlantic Ridge, 30°N)

    NASA Astrophysics Data System (ADS)

    Morris, A.; Pressling, N.; Gee, J. S.

    2011-12-01

    Oceanic core complexes expose lower crustal and upper mantle rocks on the seafloor by tectonic unroofing in the footwalls of large-slip detachment faults. They represent a fundamental component of the seafloor spreading system at slow and ultraslow axes. One of the most extensively studied oceanic core complexes is Atlantis Massif, located at 30°N at the intersection of the Atlantis Transform Fault and the Mid Atlantic Ridge (MAR). The central dome of the massif exposes the corrugated detachment fault surface and was drilled during IODP Expedition 304/305 (Hole U1309D). This sampled a 1.4 km faulted and complexly layered footwall section dominated by gabbroic lithologies with minor ultramafic rocks. Palaeomagnetic analyses demonstrate that the gabbroic sequences at Atlantis Massif carry highly stable remanent magnetizations that provide valuable information on the evolution of the section. Thermal demagnetization experiments recover high unblocking temperature components of reversed polarity (R1) throughout the gabbroic sequences. Correlation of structures observed on oriented borehole (FMS) images and those recorded on unoriented core pieces allows reorientation of R1 remanences. The mean remanence direction in true geographic coordinates constrains the tectonic rotation experienced by the Atlantis Massif footwall, indicating a 46°±6° counterclockwise rotation around a MAR-parallel horizontal axis trending 011°±6°. The detachment fault therefore initiated at a steep dip of >50° and then rotated flexurally to its present-day low-angle geometry (consistent with a 'rolling-hinge' model for detachment evolution). In a number of intervals, the gabbros exhibit a complex remanence structure with the presence of additional intermediate temperature normal (N1) and lower temperature reversed (R2) polarity components, suggesting an extended period of remanence acquisition during different polarity intervals. 
Sharp break-points between different polarity components suggest that they were acquired by a thermal mechanism. There appears to be no correlation between remanence structure and either the igneous stratigraphy or the distribution of alteration in the core. Instead, the remanence data are consistent with a model in which the lower crustal section acquired magnetizations of different polarity during a protracted cooling history spanning two geomagnetic reversals. The crystallization age of the section (1.2 Ma; derived from Pb/U zircon dating) suggests that the R1 component was acquired during geomagnetic polarity chron C1r.2r, N1 during chron C1r.1n (Jaramillo) and R2 during chron C1r.1r. By considering the maximum time intervals available for acquisition of the N1 and R2 components and correcting laboratory unblocking temperatures accordingly, the data provide additional constraints on the thermal evolution of the Atlantis Massif footwall.

  13. [Core factors of schizophrenia structure based on PANSS and SAPS/SANS results. Discerning and head-to-head comparison of PANSS and SAPS/SANS validity].

    PubMed

    Masiak, Marek; Loza, Bartosz

    2004-01-01

    Many inconsistencies across dimensional studies of schizophrenia are being unveiled. These problems are strongly related to the methodological aspects of data collection and the specific statistical analyses used. Psychiatrists have developed numerous psychopathological models derived from analytic studies based on SAPS/SANS (the Scale for the Assessment of Positive Symptoms/the Scale for the Assessment of Negative Symptoms) and PANSS (the Positive and Negative Syndrome Scale). A validation of two parallel, independent factor models, ascribed to the same illness and based on different diagnostic scales, was performed to investigate indirect methodological causes of clinical discrepancies. 100 newly admitted patients (mean age 33.5, range 18-45; 64 males, 36 females; hospitalised on average 5.15 times) with paranoid schizophrenia (according to ICD-10) were scored and analysed using PANSS and SAPS/SANS during psychotic exacerbation. All patients were treated with neuroleptics of various kinds, at 410 mg equivalents of chlorpromazine (atypicals:typicals = 41:59). Factor analyses (principal component analysis with normalised varimax rotation) were applied to the raw results. To investigate cross-model validity, canonical analysis was applied. Models of schizophrenia varied from 3 to 5 factors. The PANSS model included positive, negative, disorganisation, cognitive, and depressive components, while the SAPS/SANS model was dominated by positive, negative, and disorganisation factors. The SAPS/SANS accounted for merely 48% of the PANSS common variance. The combined SAPS/SANS measurement preferentially (67% of canonical variance) targeted the positive-negative dichotomy. Respectively, PANSS shared positive-negative phenomenology in 35% of its own variance. The general concept of five-dimensionality in paranoid schizophrenia appears clinically more heuristic and statistically more stable.
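
    The analysis named in the abstract, principal components followed by normalised varimax rotation, can be sketched in numpy on mock rating data. This illustrates the method only, not the study's data or results:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a (p items x k factors) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        )
        R = u @ vt
        d_new = np.sum(s)
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ R

rng = np.random.default_rng(1)
# Mock ratings: 100 patients x 30 items driven by two latent symptom factors.
F = rng.normal(size=(100, 2))
W = rng.normal(size=(2, 30))
X = F @ W + 0.3 * rng.normal(size=(100, 30))

# Principal component loadings from the correlation matrix.
Xs = (X - X.mean(0)) / X.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])
rotated = varimax(loadings)

print(rotated.shape)   # each of the 30 items loads on 2 rotated components
```

    Because the rotation is orthogonal, each item's communality (sum of squared loadings) is unchanged; only the axes are turned toward a simpler loading pattern.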

  14. A three-dimensional nonlinear Timoshenko beam based on the core-congruential formulation

    NASA Technical Reports Server (NTRS)

    Crivelli, Luis A.; Felippa, Carlos A.

    1992-01-01

    A three-dimensional, geometrically nonlinear two-node Timoshenko beam element based on the total Lagrangian description is derived. The element behavior is assumed to be linear elastic, but no restrictions are placed on the magnitude of finite rotations. The resulting element has twelve degrees of freedom: six translational components and six rotational-vector components. The formulation uses the Green-Lagrange strains and second Piola-Kirchhoff stresses as energy-conjugate variables and accounts for the bending-stretching and bending-torsional coupling effects without special provisions. The core-congruential formulation (CCF) is used to derive the discrete equations in a staged manner. Core equations involving the internal force vector and tangent stiffness matrix are developed at the particle level. A sequence of matrix transformations carries these equations to beam cross-sections and finally to the element nodal degrees of freedom. The choice of finite rotation measure is made in the next-to-last transformation stage, and the choice of over-the-element interpolation in the last one. The tangent stiffness matrix is found to retain symmetry if the rotational vector is chosen to measure finite rotations. An extensive set of numerical examples is presented to test and validate the present element.

  15. Technology for Transient Simulation of Vibration during Combustion Process in Rocket Thruster

    NASA Astrophysics Data System (ADS)

    Zubanov, V. M.; Stepanov, D. V.; Shabliy, L. S.

    2018-01-01

    The article describes a technology for the simulation of transient combustion processes in a rocket thruster, used to determine the vibration frequencies that occur during combustion. The engine operates on gaseous propellants: oxygen and hydrogen. Combustion simulation was performed using the ANSYS CFX software. Three reaction mechanisms for the stationary mode were considered and described in detail. An approach for obtaining quick CFD results with intermediate combustion components using an EDM model was found, and a way to generate the Flamelet library with CFX-RIF was described. A technique for modeling transient combustion processes in the rocket thruster was proposed based on the Flamelet library. A cyclic irregularity of the temperature field, resembling vortex core precession, was detected in the chamber. The frequency of the flame precession was obtained with the proposed simulation technique.

  16. Failover in Cellular Automata

    NASA Astrophysics Data System (ADS)

    Kumar, Shailesh; Rao, Shrisha

    This paper studies a phenomenon called failover, and shows that this phenomenon (in particular, stateless failover) can be modeled by Game of Life cellular automata. This is the first time that this sophisticated real-life system behavior has been modeled in abstract terms. A cellular automaton (CA) configuration is constructed that exhibits emergent failover. The configuration is based on standard Game of Life rules. Gliders and glider guns form the core messaging structure in the configuration. The blinker is represented as the basic computational unit, and it is shown how it can be recreated in case of a failure. Stateless failover using the primary-backup mechanism is demonstrated. The details of the CA components used in the configuration and their working are described, and a simulation of the complete configuration is also presented.
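
    The blinker dynamics the paper builds on follow directly from the standard Game of Life update rule (birth on 3 neighbours, survival on 2 or 3). A minimal numpy sketch, not the paper's full failover configuration:

```python
import numpy as np

def life_step(grid):
    """One Game of Life update: birth on 3 neighbours, survival on 2 or 3."""
    # Count the eight neighbours by summing shifted copies of the grid.
    n = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# The blinker, the paper's basic computational unit, oscillates with
# period 2: a vertical triple becomes horizontal and back again.
grid = np.zeros((5, 5), dtype=int)
grid[1:4, 2] = 1                     # vertical blinker
after_one = life_step(grid)          # horizontal blinker
after_two = life_step(after_one)
print(bool((after_two == grid).all()))   # True: back to the initial state
```

    The paper's contribution is to arrange gliders and glider guns around such a unit so that, when the blinker is destroyed, the surrounding configuration recreates it, which is the failover behavior.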

  17. Developing, delivering and evaluating primary mental health care: the co-production of a new complex intervention.

    PubMed

    Reeve, Joanne; Cooper, Lucy; Harrington, Sean; Rosbottom, Peter; Watkins, Jane

    2016-09-06

    Health services face the challenges created by complex problems, and so need complex intervention solutions. However, they also experience ongoing difficulties in translating findings from research in this area into quality improvement changes on the ground. BounceBack was a service development innovation project which sought to examine this issue through the implementation and evaluation, in a primary care setting, of a novel complex intervention. The project was a collaboration between a local mental health charity, an academic unit, and GP practices. The aim was to translate the charity's model of care into practice-based evidence describing delivery and impact. Normalisation Process Theory (NPT) was used to support the implementation of the new model of primary mental health care into six GP practices. An integrated process evaluation assessed the process and impact of care. Implementation quickly stalled as we identified problems with the described model of care when applied in a changing and variable primary care context. The team therefore switched to using the NPT framework to support the systematic identification and modification of the components of the complex intervention: including the core components that made it distinct (the consultation approach) and the variable components (organisational issues) that made it work in practice. The extra work significantly reduced the time available for outcome evaluation. However, findings demonstrated moderately successful implementation of the model and a suggestion of hypothesised changes in outcomes. The BounceBack project demonstrates the development of a complex intervention from practice. It highlights the use of Normalisation Process Theory to support the development, and not just the implementation, of a complex intervention; and describes the use of the research process in the generation of practice-based evidence. 
Implications for future translational complex intervention research supporting practice change through scholarship are discussed.

  18. An introductory pharmacy practice experience based on a medication therapy management service model.

    PubMed

    Agness, Chanel F; Huynh, Donna; Brandt, Nicole

    2011-06-10

    To implement and evaluate an introductory pharmacy practice experience (IPPE) based on the medication therapy management (MTM) service model. Patient Care 2 is an IPPE that introduces third-year pharmacy students to the MTM service model. Students interacted with older adults to identify medication-related problems and develop recommendations using core MTM elements. Course outcome evaluations were based on the number of documented medication-related problems, recommendations, and student reviews. Fifty-seven older adults participated in the course. Students identified 52 medication-related problems and 66 medical problems, and documented 233 recommendations relating to health maintenance and wellness, pharmacotherapy, referrals, and education. Students reported having adequate experience performing core MTM elements. Patient Care 2 may serve as an experiential learning model for pharmacy schools to teach the core elements of MTM and provide patient care services to the community.

  19. GERICOS: A Generic Framework for the Development of On-Board Software

    NASA Astrophysics Data System (ADS)

    Plasson, P.; Cuomo, C.; Gabriel, G.; Gauthier, N.; Gueguen, L.; Malac-Allain, L.

    2016-08-01

    This paper presents an overview of the GERICOS framework (GEneRIC Onboard Software), its architecture, its various layers and its future evolutions. The GERICOS framework, developed and qualified by LESIA, offers a set of generic, reusable and customizable software components for the rapid development of payload flight software. The GERICOS framework has a layered structure. The first layer (GERICOS::CORE) implements the concept of active objects and forms an abstraction layer over the top of real-time kernels. The second layer (GERICOS::BLOCKS) offers a set of reusable software components for building flight software based on generic solutions to recurrent functionalities. The third layer (GERICOS::DRIVERS) implements software drivers for several COTS IP cores of the LEON processor ecosystem.

  20. Empirical research on complex networks modeling of combat SoS based on data from real war-game, Part I: Statistical characteristics

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kou, Yingxin; Li, Zhanwu; Xu, An; Wu, Cheng

    2018-01-01

    We build a complex-network model of a combat System-of-Systems (SoS) based on empirical data from a real war-game. The model combines a command & control (C2) subnetwork, a sensors subnetwork, an influencers subnetwork, and a logistical-support subnetwork, each with its own components and statistical characteristics. The C2 subnetwork is the core of the whole combat SoS; it has a hierarchical structure with no modularity, and its robustness is strong enough to maintain normal operation after any two nodes are destroyed. The sensors and influencers subnetworks are like the sense organs and limbs of the combat SoS; both are flat modular networks whose degree distributions obey a GEV distribution and a power-law distribution, respectively. The communication network is the combination of all subnetworks. It is an assortative small-world network with a core-periphery structure: the Intelligence & Communication Stations/Command Center integrated with C2 nodes in the first three levels act as hub nodes, while the fourth-level C2 nodes, sensors, influencers, and logistical-support nodes, all of which have communication capability, act as periphery nodes. Its degree distribution obeys an exponential distribution in the beginning, a Gaussian distribution in the middle, and a power-law distribution in the end, and its path length obeys a GEV distribution. The betweenness centrality, closeness centrality, and eigenvector centrality distributions were also analyzed to measure the vulnerability of nodes.
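    The centrality measures this record reports can be reproduced on any graph with standard tools. A minimal sketch, assuming the `networkx` package and a hypothetical toy command-and-control topology (not the paper's war-game data):

```python
import networkx as nx

# Toy sketch (not the paper's war-game data): a small hierarchical
# command-and-control tree plus sensor/influencer leaves, used to
# illustrate the centrality measures the study reports.
G = nx.Graph()
G.add_edges_from([
    ("HQ", "C2-1"), ("HQ", "C2-2"),      # hierarchical C2 layer
    ("C2-1", "S1"), ("C2-1", "I1"),      # sensors / influencers
    ("C2-2", "S2"), ("C2-2", "I2"),
    ("S1", "I1"), ("S2", "I2"),          # cross links within each branch
])

bet = nx.betweenness_centrality(G)
clo = nx.closeness_centrality(G)
eig = nx.eigenvector_centrality(G, max_iter=1000)

# In a tree-like C2 layer the hub ("HQ") dominates betweenness, since
# all paths between the two branches pass through it.
top = max(bet, key=bet.get)
print(top)   # → HQ
```

    On real data, the same three dictionaries would be histogrammed to obtain the centrality distributions the abstract describes.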

  1. Molecular Structure of the Pyruvate Dehydrogenase Complex from Escherichia coli K-12

    PubMed Central

    Vogel, Otto; Hoehn, Barbara; Henning, Ulf

    1972-01-01

    The pyruvate dehydrogenase core complex from E. coli K-12, defined as the multienzyme complex that can be obtained with a unique polypeptide chain composition, has a molecular weight of 3.75 × 10^6. All results obtained agree with the following numerology. The core complex consists of 48 polypeptide chains. There are 16 chains (molecular weight = 100,000) of the pyruvate dehydrogenase component, 16 chains (molecular weight = 80,000) of the dihydrolipoamide transacetylase component, and 16 chains (molecular weight = 56,000) of the dihydrolipoamide dehydrogenase component. Usually, but not always, the pyruvate dehydrogenase complex is produced in vivo containing at least 2-3 mol more of dimers of the pyruvate dehydrogenase component than the stoichiometric ratio with respect to the core complex. This “excess” component is bound differently than are the eight dimers in the core complex. PMID:4556465
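    The stated stoichiometry can be checked by simple arithmetic: 16 chains each of the three components at the quoted chain weights should roughly reproduce the reported core-complex weight.

```python
# Consistency check of the stated stoichiometry: 16 chains each of the
# three components at the quoted chain molecular weights should roughly
# reproduce the reported core-complex weight of 3.75e6.
chains = 16
weights = [100_000, 80_000, 56_000]   # E1, E2, E3 chain molecular weights
total = chains * sum(weights)
print(total)                          # → 3776000, within ~1% of 3.75e6
assert abs(total - 3.75e6) / 3.75e6 < 0.01
```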

  2. Constitutive Modeling of the Facesheet to Core Interface in Honeycomb Sandwich Panels Subject to Mode I Delamination

    NASA Technical Reports Server (NTRS)

    Hoewer, Daniel; Lerch, Bradley A.; Bednarcyk, Brett A.; Pineda, Evan Jorge; Reese, Stefanie; Simon, Jaan-Willem

    2017-01-01

    A new cohesive zone traction-separation law, which includes the effects of fiber bridging, has been developed, implemented with a finite element (FE) model, and applied to simulate the delamination between the facesheet and core of a composite honeycomb sandwich panel. The proposed traction-separation law includes a standard initial cohesive component, which accounts for the initial interfacial stiffness and energy release rate, along with a new component to account for the fiber bridging contribution to the delamination process. Single cantilever beam tests on aluminum honeycomb sandwich panels with carbon fiber reinforced polymer facesheets were used to characterize and evaluate the new formulation and its finite element implementation. These tests, designed to evaluate the mode I toughness of the facesheet to core interface, exhibited significant fiber bridging and large crack process zones, giving rise to a concave-downward to concave-upward pre-peak shape in the load-displacement curve. Unlike standard cohesive formulations, the proposed formulation captures this observed shape, and its results have been shown to be in excellent quantitative agreement with experimental load-displacement and apparent critical energy release rate results, representative of a payload fairing structure, as well as local strain fields measured with digital image correlation.

  3. Pyrene-Labeled Amphiphiles: Dynamic And Structural Probes Of Membranes And Lipoproteins

    NASA Astrophysics Data System (ADS)

    Pownall, Henry J.; Homan, Reynold; Massey, John B.

    1987-01-01

    Lipids and proteins are important functional and structural components of living organisms. Although proteins are frequently found as soluble components of plasma or the cell cytoplasm, many lipids are much less soluble and separate into complex assemblies that usually contain proteins. Cell membranes and plasma lipoproteins are two important macromolecular assemblies that contain both lipids and proteins. Cell membranes are composed of a variety of lipids and proteins that form an insoluble bilayer array with relatively little curvature over distances of several nm. Plasma lipoproteins are different in that they are much smaller, water-soluble, and have highly curved surfaces. A model of a high density lipoprotein (HDL) is shown in Figure 1. This model (d ≈ 10 nm) contains a surface of polar lipids and proteins that surrounds a small core of insoluble lipids, mostly triglycerides and cholesteryl esters. The low density (LDL) (d ≈ 25 nm) and very low density (VLDL) (d ≈ 90 nm) lipoproteins have similar architectures, except that the former has a cholesteryl ester core and the latter a core that is almost exclusively triglyceride (Figure 1). The surface proteins of HDL are amphiphilic and water soluble; the single protein of LDL is insoluble, whereas VLDL contains both soluble and insoluble proteins. The primary structures of all of these proteins are known.

  4. Using Intel Xeon Phi to accelerate the WRF TEMF planetary boundary layer scheme

    NASA Astrophysics Data System (ADS)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen

    2014-05-01

    The Weather Research and Forecasting (WRF) model is designed for numerical weather prediction and atmospheric research. The WRF software infrastructure consists of several components, such as dynamic solvers and physics schemes. Numerical models are used to resolve the large-scale flow, while subgrid-scale parameterizations estimate small-scale properties (e.g., boundary layer turbulence and convection, clouds, radiation), which have a significant influence on the resolved scale due to the complex nonlinear nature of the atmosphere. For the cloudy planetary boundary layer (PBL), it is fundamental to parameterize vertical turbulent fluxes and subgrid-scale condensation in a realistic manner. A parameterization based on the Total Energy - Mass Flux (TEMF) approach, which unifies turbulence and moist convection components, produces better results than the other PBL schemes. For that reason, the TEMF scheme was chosen as the PBL scheme to optimize for the Intel Many Integrated Core (MIC) architecture, which ushers in a new era of supercomputing speed, performance, and compatibility, allowing developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our optimization results for the TEMF planetary boundary layer scheme. The optimizations performed were quite generic in nature: they included vectorization of the code to utilize the vector units inside each CPU, and improved memory access through scalarization of some of the intermediate arrays. The results show that the optimizations improved MIC performance by 14.8x. Furthermore, they increased CPU performance by 2.6x compared to the original multi-threaded code on a quad-core Intel Xeon E5-2603 running at 1.8 GHz. Compared to the optimized code running on a single CPU socket, the optimized MIC code is 6.2x faster.
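    The two generic optimizations named in the abstract, vectorization and removal of intermediate arrays, can be illustrated outside WRF. A minimal NumPy sketch with a hypothetical column computation (not the actual TEMF Fortran code):

```python
import numpy as np

# Toy illustration (not the actual TEMF scheme): a per-level loop that
# fills an intermediate array versus a vectorized version that fuses
# the work into one expression with no explicit per-level loop.
def flux_loop(theta, dz):
    # naive version: explicit loop plus an intermediate gradient array
    grad = np.empty(len(theta) - 1)
    for k in range(len(theta) - 1):
        grad[k] = (theta[k + 1] - theta[k]) / dz
    return -0.1 * grad                 # toy eddy-diffusivity flux

def flux_vec(theta, dz):
    # vectorized version: one fused expression over the whole column
    return -0.1 * np.diff(theta) / dz

theta = np.linspace(290.0, 300.0, 64)  # toy potential-temperature column
assert np.allclose(flux_loop(theta, 25.0), flux_vec(theta, 25.0))
```

    The same idea, applied inside compiled loops, is what lets a compiler map the arithmetic onto the wide vector units of MIC and Xeon CPUs.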

  5. A Fault Oblivious Extreme-Scale Execution Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKie, Jim

    The FOX project, funded under the ASCR X-stack I program, developed systems software and runtime libraries for a new approach to the data and work distribution for massively parallel, fault oblivious application execution. Our work was motivated by the premise that exascale computing systems will provide a thousand-fold increase in parallelism and a proportional increase in failure rate relative to today’s machines. To deliver the capability of exascale hardware, the systems software must provide the infrastructure to support existing applications while simultaneously enabling efficient execution of new programming models that naturally express dynamic, adaptive, irregular computation; coupled simulations; and massive data analysis in a highly unreliable hardware environment with billions of threads of execution. Our OS research has prototyped new methods to provide efficient resource sharing, synchronization, and protection in a many-core compute node. We have experimented with alternative task/dataflow programming models and shown scalability in some cases to hundreds of thousands of cores. Much of our software is in active development through open source projects. Concepts from FOX are being pursued in next generation exascale operating systems. Our OS work focused on adaptive, application-tailored OS services optimized for multi- to many-core processors. We developed a new operating system, NIX, that supports role-based allocation of cores to processes, which was released to open source. We contributed to the IBM FusedOS project, which promoted the concept of latency-optimized and throughput-optimized cores. We built a task queue library based on a distributed, fault tolerant key-value store and identified scaling issues. A second fault tolerant task parallel library was developed, based on the Linda tuple space model, that used low level interconnect primitives for optimized communication.
We designed fault tolerance mechanisms for task parallel computations employing work stealing for load balancing that scaled to the largest existing supercomputers. Finally, we implemented the Elastic Building Blocks runtime, a library to manage object-oriented distributed software components. To support the research, we won two INCITE awards for time on Intrepid (BG/P) and Mira (BG/Q). Much of our work has had impact in the OS and runtime community through the ASCR Exascale OS/R workshop and report, leading to the research agenda of the Exascale OS/R program. Our project was, however, also affected by attrition of multiple PIs. While the PIs continued to participate and offer guidance as time permitted, losing these key individuals was unfortunate both for the project and for the DOE HPC community.
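    The Linda tuple-space model mentioned above coordinates workers through a shared associative store. A minimal single-process sketch in Python threads (illustrative only; the `TupleSpace` class and its `out`/`take` methods are hypothetical, not the FOX library's API):

```python
import threading
from collections import deque

# Minimal sketch of the Linda tuple-space coordination model: workers
# take ("task", n) tuples out of a shared space and deposit
# ("result", n*n) tuples back. Illustrative only, not the FOX library.
class TupleSpace:
    def __init__(self):
        self._tuples = deque()
        self._cond = threading.Condition()

    def out(self, tup):                  # Linda "out": deposit a tuple
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def take(self, tag):                 # Linda "in": blocking removal of a match
        with self._cond:
            while True:
                for t in self._tuples:
                    if t[0] == tag:
                        self._tuples.remove(t)
                        return t
                self._cond.wait()

space = TupleSpace()

def worker():
    _, n = space.take("task")
    space.out(("result", n * n))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(4):
    space.out(("task", n))
for t in threads:
    t.join()

results = sorted(space.take("result")[1] for _ in range(4))
print(results)   # → [0, 1, 4, 9]
```

    A distributed implementation would replace the in-process deque with the fault-tolerant key-value store the abstract describes, but the out/in coordination pattern is the same.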

  6. The study on the core personality trait words of Chinese medical university students based on social network analysis

    PubMed Central

    Wu, Ying; Xue, Yunzhen; Xue, Zhanling

    2017-01-01

    Medical university students in China, whose schoolwork is relatively heavy and whose educational programme is long, are a special professional group, and many have some degree of psychological problems. Understanding their personality characteristics can therefore provide a scientific basis for psychological health interventions. We selected the top 30 personality trait words in order of frequency, and applied methods such as social network analysis (SNA) and the visualization technology of mapping knowledge domains. Among these core personality trait words, “family conscious” had the three highest centralities and possessed the largest core status and influence. The analysis of the core-periphery structure showed that a polarized core-periphery structure was quite obvious. The K-plex analysis found 588 “K-2” K-plexes in total, and the principal component analysis selected 11 principal components. This study of personality can not only help prevent disease but also provide a scientific basis for students’ psychological health education. In addition, we adopted SNA to pay closer attention to the relationships between personality trait words and the connections among personality dimensions. This study may provide new ideas and methods for research on personality structure. PMID:28906409

  7. The study on the core personality trait words of Chinese medical university students based on social network analysis.

    PubMed

    Wu, Ying; Xue, Yunzhen; Xue, Zhanling

    2017-09-01

    Medical university students in China, whose schoolwork is relatively heavy and whose educational programme is long, are a special professional group, and many have some degree of psychological problems. Understanding their personality characteristics can therefore provide a scientific basis for psychological health interventions. We selected the top 30 personality trait words in order of frequency, and applied methods such as social network analysis (SNA) and the visualization technology of mapping knowledge domains. Among these core personality trait words, "family conscious" had the three highest centralities and possessed the largest core status and influence. The analysis of the core-periphery structure showed that a polarized core-periphery structure was quite obvious. The K-plex analysis found 588 "K-2" K-plexes in total, and the principal component analysis selected 11 principal components. This study of personality can not only help prevent disease but also provide a scientific basis for students' psychological health education. In addition, we adopted SNA to pay closer attention to the relationships between personality trait words and the connections among personality dimensions. This study may provide new ideas and methods for research on personality structure.

  8. Fabrication and characterization of optical sensors using metallic core-shell thin film nanoislands for ozone detection

    NASA Astrophysics Data System (ADS)

    Addanki, Satish; Nedumaran, D.

    2017-07-01

    Core-shell nanostructures play a vital role in the sensor field owing to their performance improvements in sensing characteristics and well-established synthesis procedures. These nanostructures can be ingeniously tuned to achieve tailored properties for a particular application of interest. In this work, Ag-Au core-shell thin-film nanoislands with APTMS (3-aminopropyl trimethoxysilane) and PVA (polyvinyl alcohol) binding agents were modeled, synthesized, and characterized. The simulation results were used to fabricate the sensor through a chemical route. The results of this study confirmed that the APTMS-based Ag-Au core-shell thin-film nanoislands offered better performance than the PVA-based Ag-Au core-shell thin-film nanoislands, and also exhibited better sensitivity towards ozone sensing than the other types, viz., APTMS/PVA-based Au-Ag core-shell and standalone Au/Ag thin-film nanoislands.

  9. LIBS data analysis using a predictor-corrector based digital signal processor algorithm

    NASA Astrophysics Data System (ADS)

    Sanders, Alex; Griffin, Steven T.; Robinson, Aaron

    2012-06-01

    There are many accepted sensor technologies for generating spectra for material classification. Once the spectra are generated, communication bandwidth limitations favor local material classification, with its attendant reduction in data transfer rates and power consumption. Transferring sensor technologies such as Cavity Ring-Down Spectroscopy (CRDS) and Laser Induced Breakdown Spectroscopy (LIBS) requires effective material classifiers. A result of recent efforts has been an emphasis on Partial Least Squares - Discriminant Analysis (PLS-DA) and Principal Component Analysis (PCA). Implementation of these via general-purpose computers is difficult in small portable sensor configurations. This paper addresses the creation of a low-mass, low-power, robust hardware spectral classifier for a limited set of predetermined materials in an atmospheric matrix. Crucial to this is the incorporation of PCA or PLS-DA classifiers into a predictor-corrector style implementation. The system configuration guarantees rapid convergence. Software running on a multi-core Digital Signal Processor (DSP) simulates a streamlined plasma-physics model estimator, reducing Analog-to-Digital Converter (ADC) power requirements. This paper presents the results of a predictor-corrector model implemented on a low-power multi-core DSP to perform substance classification. This configuration emphasizes the hardware system and software design via a predictor-corrector model that simultaneously decreases the sample rate while performing the classification.
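    The PCA-plus-classifier idea can be sketched with plain NumPy: project spectra onto the leading principal components, then classify by nearest class centroid. The synthetic spectra below are hypothetical, not the authors' LIBS data or DSP code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy PCA-based spectral classifier in the spirit of the abstract (not
# the authors' implementation): synthetic one-line "spectra" for two
# hypothetical materials, projected onto 3 principal components.
def make_spectra(center, n=40, bins=64):
    x = np.arange(bins)
    line = np.exp(-0.5 * ((x - center) / 2.0) ** 2)   # one emission line
    return line + 0.05 * rng.standard_normal((n, bins))

A = make_spectra(20)          # "material A": emission line at bin 20
B = make_spectra(45)          # "material B": emission line at bin 45
X = np.vstack([A, B])
y = np.array([0] * len(A) + [1] * len(B))

# PCA via SVD of the mean-centered data matrix
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
scores = (X - mean) @ Vt[:3].T            # keep 3 principal components

centroids = np.array([scores[y == c].mean(axis=0) for c in (0, 1)])

def classify(spectrum):
    # project a new spectrum and pick the nearest class centroid
    s = (spectrum - mean) @ Vt[:3].T
    return int(np.argmin(np.linalg.norm(centroids - s, axis=1)))

print(classify(make_spectra(20, n=1)[0]))   # → 0
print(classify(make_spectra(45, n=1)[0]))   # → 1
```

    On a DSP, the projection and nearest-centroid test reduce to a handful of dot products per spectrum, which is what makes this kind of classifier attractive for low-power hardware.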

  10. Modelling the radiolysis of RSG-GAS primary cooling water

    NASA Astrophysics Data System (ADS)

    Butarbutar, S. L.; Kusumastuti, R.; Subekti, M.; Sunaryo, G. R.

    2018-02-01

    Water chemistry control for a light-water-cooled reactor requires a reliable understanding of radiolysis effects in mitigating corrosion and degradation of reactor structural materials. It is known that oxidizing products can promote corrosion, cracking, and hydrogen pickup both in the core and in the associated piping components of the reactor. The objective of this work is to provide a radiolysis model of the RSG-GAS cooling water and, furthermore, to predict the concentrations of oxidizing species that can lead to corrosion of reactor materials. Direct observations or measurements of the chemistry in and around the high-flux core region of a nuclear reactor are difficult due to the extreme conditions of high temperature, pressure, and mixed radiation fields. For this reason, chemical models and computer simulations of the radiolysis of water under these conditions are an important route of investigation. The FACSIMILE code was used to calculate the concentration of O2 formed at relatively long times by γ and neutron irradiation of pure water (pH = 7) at temperatures between 25 and 50 °C. The simulation is based on a complex chemical reaction kinetics scheme, in which 300 MeV protons were used to mimic γ-ray radiolysis, together with 2 MeV fast neutrons. Concentrations of O2 were calculated over the 10^-6 to 10^6 s time scale.
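    The rate-equation approach behind such radiolysis codes can be illustrated with a toy two-step scheme integrated by explicit Euler (the rates `g` and `k` below are hypothetical, not the RSG-GAS reaction set):

```python
# Toy rate-equation sketch of the radiolysis modelling approach (not the
# actual FACSIMILE reaction set): a constant radiolytic production term
# feeding H2O2, which decomposes to O2 (2 H2O2 -> 2 H2O + O2).
g = 1.0e-7       # hypothetical production rate, mol L^-1 s^-1
k = 1.0e-3       # hypothetical first-order decomposition rate, s^-1

h2o2, o2 = 0.0, 0.0
dt = 1.0
for _ in range(100_000):          # integrate 1e5 s by explicit Euler
    dh = g - k * h2o2             # d[H2O2]/dt
    h2o2 += dt * dh
    o2 += dt * 0.5 * k * h2o2     # d[O2]/dt = 0.5 k [H2O2]

# At steady state [H2O2] -> g/k, while O2 keeps accumulating.
print(round(h2o2 / (g / k), 3))   # → 1.0 once steady state is reached
```

    Real codes such as FACSIMILE solve dozens of coupled stiff rate equations with implicit integrators, but the structure is the same: one ODE per species, with production and loss terms from each reaction.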

  11. HDL-mimetic PLGA nanoparticle to target atherosclerosis plaque macrophages.

    PubMed

    Sanchez-Gaytan, Brenda L; Fay, Francois; Lobatto, Mark E; Tang, Jun; Ouimet, Mireille; Kim, YongTae; van der Staay, Susanne E M; van Rijs, Sarian M; Priem, Bram; Zhang, Liangfang; Fisher, Edward A; Moore, Kathryn J; Langer, Robert; Fayad, Zahi A; Mulder, Willem J M

    2015-03-18

    High-density lipoprotein (HDL) is a natural nanoparticle that exhibits an intrinsic affinity for atherosclerotic plaque macrophages. Its natural targeting capability as well as the option to incorporate lipophilic payloads, e.g., imaging or therapeutic components, in both the hydrophobic core and the phospholipid corona make the HDL platform an attractive nanocarrier. To realize controlled release properties, we developed a hybrid polymer/HDL nanoparticle composed of a lipid/apolipoprotein coating that encapsulates a poly(lactic-co-glycolic acid) (PLGA) core. This novel HDL-like nanoparticle (PLGA-HDL) displayed natural HDL characteristics, including preferential uptake by macrophages and a good cholesterol efflux capacity, combined with a typical PLGA nanoparticle slow release profile. In vivo studies carried out with an ApoE knockout mouse model of atherosclerosis showed clear accumulation of PLGA-HDL nanoparticles in atherosclerotic plaques, which colocalized with plaque macrophages. This biomimetic platform integrates the targeting capacity of HDL biomimetic nanoparticles with the characteristic versatility of PLGA-based nanocarriers.

  12. HDL-Mimetic PLGA Nanoparticle To Target Atherosclerosis Plaque Macrophages

    PubMed Central

    Sanchez-Gaytan, Brenda L.; Fay, Francois; Lobatto, Mark E.; Tang, Jun; Ouimet, Mireille; Kim, YongTae; van der Staay, Susanne E. M.; van Rijs, Sarian M.; Priem, Bram; Zhang, Liangfang; Fisher, Edward A; Moore, Kathryn J.; Langer, Robert; Fayad, Zahi A.; Mulder, Willem J M

    2015-01-01

    High-density lipoprotein (HDL) is a natural nanoparticle that exhibits an intrinsic affinity for atherosclerotic plaque macrophages. Its natural targeting capability as well as the option to incorporate lipophilic payloads, e.g., imaging or therapeutic components, in both the hydrophobic core and the phospholipid corona make the HDL platform an attractive nanocarrier. To realize controlled release properties, we developed a hybrid polymer/HDL nanoparticle composed of a lipid/apolipoprotein coating that encapsulates a poly(lactic-co-glycolic acid) (PLGA) core. This novel HDL-like nanoparticle (PLGA–HDL) displayed natural HDL characteristics, including preferential uptake by macrophages and a good cholesterol efflux capacity, combined with a typical PLGA nanoparticle slow release profile. In vivo studies carried out with an ApoE knockout mouse model of atherosclerosis showed clear accumulation of PLGA–HDL nanoparticles in atherosclerotic plaques, which colocalized with plaque macrophages. This biomimetic platform integrates the targeting capacity of HDL biomimetic nanoparticles with the characteristic versatility of PLGA-based nanocarriers. PMID:25650634

  13. School Climate Factors Relating to Teacher Burnout: A Mediator Model

    ERIC Educational Resources Information Center

    Grayson, Jessica L.; Alvarez, Heather K.

    2008-01-01

    The present study investigated components of school climate (i.e. parent/community relations, administration, student behavioral values) and assessed their influence on the core burnout dimensions of Emotional Exhaustion, Depersonalization, and feelings of low Personal Accomplishment. The study weighed the relative contributions of demographic…

  14. State Injury Programs’ Response to the Opioid Epidemic: The Role of CDC’s Core Violence and Injury Prevention Program

    PubMed Central

    Deokar, Angela J.; Dellapenna, Alan; DeFiore-Hyrmer, Jolene; Laidler, Matt; Millet, Lisa; Morman, Sara; Myers, Lindsey

    2018-01-01

    The Centers for Disease Control and Prevention’s (CDC’s) Core Violence and Injury Prevention Program (Core) supports capacity of state violence and injury prevention programs to implement evidence-based interventions. Several Core-funded states prioritized prescription drug overdose (PDO) and leveraged their systems to identify and respond to the epidemic before specific PDO prevention funding was available through CDC. This article describes activities employed by Core-funded states early in the epidemic. Four case examples illustrate states’ approaches within the context of their systems and partners. While Core funding is not sufficient to support a comprehensive PDO prevention program, having Core in place at the beginning of the emerging epidemic had critical implications for identifying the problem and developing systems that were later expanded as additional resources became available. Important components included staffing support to bolster programmatic and epidemiological capacity; diverse and collaborative partnerships; and use of surveillance and evidence-informed best practices to prioritize decision-making. PMID:29189501

  15. State Injury Programs' Response to the Opioid Epidemic: The Role of CDC's Core Violence and Injury Prevention Program.

    PubMed

    Deokar, Angela J; Dellapenna, Alan; DeFiore-Hyrmer, Jolene; Laidler, Matt; Millet, Lisa; Morman, Sara; Myers, Lindsey

    The Centers for Disease Control and Prevention's (CDC's) Core Violence and Injury Prevention Program (Core) supports capacity of state violence and injury prevention programs to implement evidence-based interventions. Several Core-funded states prioritized prescription drug overdose (PDO) and leveraged their systems to identify and respond to the epidemic before specific PDO prevention funding was available through CDC. This article describes activities employed by Core-funded states early in the epidemic. Four case examples illustrate states' approaches within the context of their systems and partners. While Core funding is not sufficient to support a comprehensive PDO prevention program, having Core in place at the beginning of the emerging epidemic had critical implications for identifying the problem and developing systems that were later expanded as additional resources became available. Important components included staffing support to bolster programmatic and epidemiological capacity; diverse and collaborative partnerships; and use of surveillance and evidence-informed best practices to prioritize decision-making.

  16. SU-F-207-05: Excess Heat Corrections in a Prototype Calorimeter for Direct Realization of CT Absorbed Dose to Phantom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen-Mayer, H; Tosh, R

    2015-06-15

    Purpose: To reconcile air kerma and calorimetry measurements in a prototype calorimeter for obtaining absorbed dose in diagnostic CT beams. While corrections for thermal artifacts are routine and generally small in calorimetry of radiotherapy beams, large differences in the relative stopping powers of calorimeter materials at the lower energies typical of CT beams greatly magnify their effects. Work to date on the problem attempts to reconcile laboratory measurements with modeling output from Monte Carlo and finite-element analysis of heat transfer. Methods: Small thermistor beads were embedded in a polystyrene (PS) core element of 1 cm diameter, which was inserted into a cylindrical HDPE phantom of 30 cm diameter and subjected to radiation in a diagnostic CT x-ray imaging system. Resistance changes in the thermistors due to radiation heating were monitored via lock-in amplifier. Multiple 3-second exposures were recorded at 8 different dose-rates from the CT system, and least-squares fits to experimental data were compared to an expected thermal response obtained by finite-element analysis incorporating source terms based on semi-empirical modeling and Monte Carlo simulation. Results: Experimental waveforms exhibited large thermal artifacts with fast time constants, associated with excess heat in wires and glass, and smaller steps attributable to radiation heating of the core material. Preliminary finite-element analysis follows the transient component of the signal qualitatively, but predicts a slower decay of temperature spikes. This was supplemented by non-linear least-squares fits incorporating semi-empirical formulae for heat transfer, which were used to obtain dose-to-PS in reasonable agreement with the output of Monte Carlo calculations converting air kerma to absorbed dose.
Conclusion: Discrepancies between the finite-element analysis and our experimental data testify to the very significant heat transfer correction required for absorbed dose calorimetry of diagnostic CT beams. The results obtained here are being used to refine both simulations and the design of calorimeter core components.

  17. Universal core model for multiple-gate field-effect transistors with short channel and quantum mechanical effects

    NASA Astrophysics Data System (ADS)

    Shin, Yong Hyeon; Bae, Min Soo; Park, Chuntaek; Park, Joung Won; Park, Hyunwoo; Lee, Yong Ju; Yun, Ilgu

    2018-06-01

    A universal core model for multiple-gate (MG) field-effect transistors (FETs) with short-channel effects (SCEs) and quantum mechanical effects (QMEs) is proposed. Using a Young's-approximation-based solution of the one-dimensional Poisson equation, the total inversion charge density (Q_inv) in the channel is modeled for double-gate (DG) and surrounding-gate (SG) FETs, after which a universal charge model is derived based on the similarity of the solutions, including for quadruple-gate (QG) FETs. For triple-gate (TG) FETs, the average of the DG and QG FET models is used. An SCE model is also proposed that considers the potential difference between the channel's surface and center. Finally, a QME model for MG FETs is developed using the quantum-correction compact model. The proposed universal core model is validated against commercially available three-dimensional ATLAS numerical simulations.

  18. A review on prognostics approaches for remaining useful life of lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Su, C.; Chen, H. J.

    2017-11-01

    Lithium-ion (Li-ion) batteries are a core component of various industrial systems, including satellites, spacecraft, and electric vehicles. The mechanism of performance degradation and the estimation of remaining useful life (RUL) correlate closely with the operating state and reliability of such systems. Furthermore, RUL prediction of Li-ion batteries is crucial for operation scheduling, spare-parts management, and maintenance decisions. In recent years, performance-degradation prognostics and RUL estimation have become a focus of Li-ion battery research. This paper summarizes the approaches used in Li-ion battery RUL estimation, classified into three categories: model-based, data-based, and hybrid approaches. Key issues and future trends in battery RUL estimation are also discussed.
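    A data-based RUL estimate of the kind the review surveys can be sketched in a few lines: fit a capacity-fade trend to observed cycles and extrapolate to an end-of-life threshold (here the common 80% of nominal; the fade data below are synthetic, not from any real cell):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch of a "data-based" RUL approach (not a specific published
# model): fit a linear capacity-fade trend to 300 observed cycles, then
# extrapolate to the usual 80%-of-nominal end-of-life threshold.
cycles = np.arange(0, 300)
capacity = 1.0 - 4e-4 * cycles + 0.002 * rng.standard_normal(cycles.size)

slope, intercept = np.polyfit(cycles, capacity, 1)
eol_cycle = (0.8 - intercept) / slope      # cycle where the fit hits 80%
rul = eol_cycle - cycles[-1]               # remaining useful life in cycles

print(round(eol_cycle))   # ≈ 500 for the true fade rate of 4e-4 per cycle
```

    Published data-based methods replace the linear fit with particle filters, Gaussian process regression, or neural networks, but the prediction step, extrapolating a degradation indicator to a failure threshold, is the same.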

  19. A soft X-ray map of the Perseus cluster of galaxies

    NASA Technical Reports Server (NTRS)

    Cash, W.; Malina, R. F.; Wolff, R. S.

    1976-01-01

    A 0.5-3-keV X-ray map of the Perseus cluster of galaxies is presented. The map shows a region of strong emission centered near NGC 1275 plus a highly elongated emission region which lies along the line of bright galaxies that dominates the core of the cluster. The data are compared with various models that include point and diffuse sources. One model which adequately represents the data is the superposition of a point source at NGC 1275 and an isothermal ellipsoid resulting from the bremsstrahlung emission of cluster gas. The ellipsoid has a major core radius of 20.5 arcmin and a minor core radius of 5.5 arcmin, consistent with the values obtained from galaxy counts. All acceptable models provide evidence for a compact source (less than 3 arcmin FWHM) at NGC 1275 containing about 25% of the total emission. Since the diffuse X-ray and radio components have radically different morphologies, it is unlikely that the emissions arise from a common source, as proposed in inverse-Compton models.

  20. Development and Parameters of a Non-Self-Similar CME Caused by the Eruption of a Quiescent Prominence

    NASA Astrophysics Data System (ADS)

    Kuzmenko, I. V.; Grechnev, V. V.

    2017-10-01

    The eruption of a large quiescent prominence on 17 August 2013 and an associated coronal mass ejection (CME) were observed from different vantage points by the Solar Dynamics Observatory (SDO), the Solar-Terrestrial Relations Observatory (STEREO), and the Solar and Heliospheric Observatory (SOHO). Screening of the quiet Sun by the prominence produced an isolated negative microwave burst. We estimated the parameters of the erupting prominence from a radio absorption model and measured them from 304 Å images. The variations of the parameters as obtained by these two methods are similar and agree within a factor of two. The CME development was studied from the kinematics of the front and different components of the core and their structural changes. The results were verified using movies in which the CME expansion was compensated for according to the measured kinematics. We found that the CME mass (3.6 × 10^{15} g) was mainly supplied by the prominence (≈ 6 × 10^{15} g), while a considerable part drained back. The mass of the coronal-temperature component did not exceed 10^{15} g. The CME was initiated by the erupting prominence, which constituted its core and remained active. The structural and kinematical changes started in the core and propagated outward. The CME structures continued to form during expansion, which did not become self-similar up to 25 R_{⊙}. The aerodynamic drag was insignificant. The core formed during the CME rise to 4 R_{⊙} and possibly beyond. Some of its components were observed to straighten and stretch outward, indicating the transformation of tangled structures of the core into a simpler flux rope, which grew and filled the cavity as the CME expanded.

  1. Finite Element Modelling and Analysis of Damage Detection Methodology in Piezo Electric Sensor and Actuator Integrated Sandwich Cantilever Beam

    NASA Astrophysics Data System (ADS)

    Pradeep, K. R.; Thomas, A. M.; Basker, V. T.

    2018-03-01

    Structural health monitoring (SHM) is an essential component of futuristic civil, mechanical, and aerospace structures. It detects damage in a system or warns of structural degradation by evaluating performance parameters, which is achieved by integrating sensors and actuators into the structure. This paper studies the damage detection process in a sandwich cantilever beam with integrated piezoelectric sensors and actuators. A possible skin-core debond at the root of the cantilever beam is simulated and compared with the undamaged case. The beam is actuated using piezoelectric actuators, and performance differences are evaluated using polyvinylidene fluoride (PVDF) sensors. The methodology compares the voltage/strain response of the damaged versus undamaged beam under transient actuation. A finite element model of the piezo-beam is built in ANSYS using an 8-noded coupled-field element whose nodal degrees of freedom are translations in the x and y directions and voltage. An aluminium sandwich beam with a length of 800 mm, core thickness of 22.86 mm, and skin thickness of 0.3 mm is considered. Skin-core debond is simulated in the model as unmerged nodes. The reduction in the fundamental frequency of the damaged beam is found to be negligible, but the voltage response of the PVDF sensor under transient excitation shows a clearly visible change indicating the debond. A piezoelectric-based damage detection system is thus an effective tool for damage detection in aerospace and civil structural systems with inaccessible or critical locations, and its minimal power requirement enables online monitoring.

  2. A novel computer-aided method to fabricate a custom one-piece glass fiber dowel-and-core based on digitized impression and crown preparation data.

    PubMed

    Chen, Zhiyu; Li, Ya; Deng, Xuliang; Wang, Xinzhi

    2014-06-01

    Fiber-reinforced composite dowels have been widely used for their superior biomechanical properties; however, their preformed shape cannot fit irregularly shaped root canals. This study aimed to describe a novel computer-aided method to create a custom-made one-piece dowel-and-core based on the digitization of impressions and clinical standard crown preparations. A standard maxillary die stone model containing three prepared teeth (maxillary lateral incisor, canine, and premolar), each requiring dowel restorations, was made. It was then mounted on an average-value articulator with the mandibular stone model to simulate natural occlusion. Impressions of each tooth were obtained using vinylpolysiloxane with a sectional dual-arch tray and digitized with an optical scanner. The dowel-and-core virtual model was created by splicing 3D dowel data from the impression digitization with core data selected from a standard crown preparation database of 107 records collected from clinics and digitized. The position of the chosen digital core was manually adjusted to coordinate with the adjacent teeth and fulfill the crown restorative requirements. Based on the virtual models, one-piece custom dowel-and-cores for the three experimental teeth were milled from a glass fiber block with computer-aided manufacturing techniques. The one-piece glass fiber dowel-and-cores fulfilled the clinical requirements for dowel restorations, and two patients were treated to validate the practicality of the technique. This novel computer-aided method to create a custom one-piece glass fiber dowel-and-core proved to be practical and efficient. © 2013 by the American College of Prosthodontists.

  3. Nonspecific Organelle-Targeting Strategy with Core-Shell Nanoparticles of Varied Lipid Components/Ratios.

    PubMed

    Zhang, Lu; Sun, Jiashu; Wang, Yilian; Wang, Jiancheng; Shi, Xinghua; Hu, Guoqing

    2016-07-19

    We report a nonspecific organelle-targeting strategy through one-step microfluidic fabrication and screening of a library of surface charge- and lipid components/ratios-varied lipid shell-polymer core nanoparticles. Different from the common strategy relying on the use of organelle-targeted moieties conjugated onto the surface of nanoparticles, here, we program the distribution of hybrid nanoparticles in lysosomes or mitochondria by tuning the lipid components/ratios in shell. Hybrid nanoparticles with 60% 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP) and 20% 1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (DOPE) can intracellularly target mitochondria in both in vitro and in vivo models. While replacing DOPE with the same amount of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC), the nanoparticles do not show mitochondrial targeting, indicating an incremental effect of cationic and fusogenic lipids on lysosomal escape which is further studied by molecular dynamics simulations. This work unveils the lipid-regulated subcellular distribution of hybrid nanoparticles in which target moieties and complex synthetic steps are avoided.

  4. First-principles prediction of Si-doped Fe carbide as one of the possible constituents of Earth's inner core

    NASA Astrophysics Data System (ADS)

    Das, Tilak; Chatterjee, Swastika; Ghosh, Sujoy; Saha-Dasgupta, Tanusri

    2017-09-01

    We perform a computational study based on first-principles calculations to investigate the relative stability and elastic properties of doped and undoped Fe carbide compounds at 200-364 GPa. We find that upon doping a few weight percent of Si impurities at the carbon sites in Fe7C3 carbide phases, the values of Poisson's ratio and density increase while V_P and V_S decrease compared with their undoped counterparts. This leads to a marked improvement in the agreement of seismic parameters such as P-wave and S-wave velocity, Poisson's ratio, and density with the Preliminary Reference Earth Model (PREM) data. The agreement with PREM data is found to be better for the orthorhombic phase of iron carbide (o-Fe7C3) than for the hexagonal phase (h-Fe7C3). Our theoretical analysis indicates that Fe carbide containing Si impurities can be a possible constituent of the Earth's inner core. Since the density of undoped Fe7C3 is low compared with that of the inner core, as discussed in a recent theoretical study, our proposed Si-doped Fe7C3 can provide an alternative solution as an important component of the Earth's inner core.
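    The seismic comparison with PREM rests on standard isotropic elasticity relations; for example, Poisson's ratio follows directly from the P- and S-wave speeds. The sketch below uses approximate PREM values near Earth's center as illustrative inputs; these are not the paper's computed results.

    ```python
    def poisson_ratio(vp, vs):
        """Poisson's ratio from P- and S-wave velocities (isotropic elasticity)."""
        return (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))

    # Approximate PREM values near Earth's center, in km/s.
    nu = poisson_ratio(11.26, 3.67)  # ≈ 0.44, the high value characteristic of the inner core
    ```

    This characteristically high inner-core Poisson's ratio (about 0.44) is one of the quantities against which candidate core compositions such as Si-doped Fe7C3 are judged.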

  5. Are annual layers preserved in NorthGRIP Eemian ice?

    NASA Astrophysics Data System (ADS)

    Kettner, E.; Bigler, M.; Nielsen, M. E.; Steffensen, J. P.; Svensson, A.

    2009-04-01

    A newly developed setup for continuous flow analysis (CFA) of ice cores in Copenhagen is optimized for high resolution analysis of four components: Soluble sodium (mainly deriving from sea salt), soluble ammonium (related to biological processes and biomass burning events), insoluble dust particles (basically transported from Asian deserts to Greenland), and the electrolytic melt water conductivity (which is a bulk signal for all ionic constituents). Furthermore, we are for the first time implementing a flow cytometer to obtain high quality dust concentration and size distribution profiles based on individual dust particle measurements. Preliminary measurements show that the setup is able to resolve annual layers of 1 cm thickness. Ice flow models predict that annual layers in the Eemian section of the Greenland NorthGRIP ice core (130-115 ka BP) have a thickness of around 1 cm. However, the visual stratigraphy of the ice core indicates that the annual layering in the Eemian section may be disturbed by micro folds and rapid crystal growth. In this case study we will measure the impurity content of an Eemian segment of the NorthGRIP ice core with the new CFA setup. This will allow for a comparison to well-known impurity levels of the Holocene in both Greenland and Antarctic ice and we will attempt to determine if annual layers are still present in the ice.

  6. Dark Matter Profiles in Dwarf Galaxies: A Statistical Sample Using High-Resolution Hα Velocity Fields from PCWI

    NASA Astrophysics Data System (ADS)

    Relatores, Nicole C.; Newman, Andrew B.; Simon, Joshua D.; Ellis, Richard; Truong, Phuongmai N.; Blitz, Leo

    2018-01-01

    We present high quality Hα velocity fields for a sample of nearby dwarf galaxies (log M/M⊙ = 8.4-9.8) obtained as part of the Dark Matter in Dwarf Galaxies survey. The purpose of the survey is to investigate the cusp-core discrepancy by quantifying the variation of the inner slope of the dark matter distributions of 26 dwarf galaxies, which were selected as likely to have regular kinematics. The data were obtained with the Palomar Cosmic Web Imager, located on the Hale 5m telescope. We extract rotation curves from the velocity fields and use optical and infrared photometry to model the stellar mass distribution. We model the total mass distribution as the sum of a generalized Navarro-Frenk-White dark matter halo along with the stellar and gaseous components. We present the distribution of inner dark matter density profile slopes derived from this analysis. For a subset of galaxies, we compare our results to an independent analysis based on CO observations. In future work, we will compare the scatter in inner density slopes, as well as their correlations with galaxy properties, to theoretical predictions for dark matter core creation via supernovae feedback.
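    The generalized Navarro-Frenk-White profile used in such fits makes the cusp-core question concrete: the inner logarithmic slope is a free parameter. A minimal sketch of the density profile follows; the parameter values are hypothetical, not from the survey.

    ```python
    import numpy as np

    def gnfw_density(r, rho_s, r_s, gamma):
        """Generalized NFW profile: inner slope -gamma, outer slope -3."""
        x = r / r_s
        return rho_s / (x**gamma * (1.0 + x)**(3.0 - gamma))

    r = np.logspace(-2, 1, 100)               # radii in units of r_s (hypothetical scaling)
    cusp = gnfw_density(r, 1.0, 1.0, 1.0)     # gamma = 1: NFW-like cusp
    core = gnfw_density(r, 1.0, 1.0, 0.0)     # gamma = 0: constant-density core

    # The inner logarithmic slope d(log rho)/d(log r) distinguishes the two cases.
    cusp_slope = np.gradient(np.log(cusp), np.log(r))[0]   # → about -1
    core_slope = np.gradient(np.log(core), np.log(r))[0]   # → about 0
    ```

    Fitting gamma to the rotation-curve-derived mass profiles of many galaxies is what yields the distribution of inner slopes the survey reports.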

  7. A method of Modelling and Simulating the Back-to-Back Modular Multilevel Converter HVDC Transmission System

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Fan, Youping; Zhang, Dai; Ge, Mengxin; Zou, Xianbin; Li, Jingjiao

    2017-09-01

    This paper proposes a method to simulate a back-to-back modular multilevel converter (MMC) HVDC transmission system. Equivalent networks are used to simulate the dynamic power system, and models of the converter station's core components provide a basic simulation model that captures converter-station performance. The proposed method is applied to an equivalent model of a real power system.

  8. Integrated healthy workplace model: An experience from North Indian industry

    PubMed Central

    Thakur, Jarnail Singh; Bains, Puneet; Kar, Sitanshu Sekhar; Wadhwa, Sanjay; Moirangthem, Prabha; Kumar, Rajesh; Wadwalker, Sanjay; Sharma, Yashpal

    2012-01-01

    Background: In view of rapid industrialization and the growing Indian economy, there has been a substantial increase in the workforce in India, yet there is currently no organized workplace model for promoting the health of industrial workers. Objective: To develop and implement a healthy workplace model in three industrial settings in North India. Materials and Methods: An operations research study was conducted for 12 months in three purposively selected industries of Chandigarh. In phase I, a multi-stakeholder workshop was conducted to finalize the components and tools for the healthy workplace model, and NCD risk factors were assessed in 947 employees across the three industries. In phase II, the healthy workplace model was implemented on a pilot basis for 12 months in these industries to finalize the model. Findings: A healthy workplace committee involving representatives of management, the labor union, and the research organization was formed in each of the three industries. Various tools were developed, including comprehensive and rapid healthy workplace assessment forms, an NCD work-lite format for risk factor surveillance, and a monitoring and evaluation format. The prevalence of tobacco use and of ever consuming alcohol was found to be 17.8% and 47%, respectively, and more than a quarter (28%) of employees complained of back pain in the past 12 months. A healthy workplace model focused on three key components (physical environment, psychosocial work environment, and promotion of healthy habits) was developed, implemented on a pilot basis, and finalized based on experience in the participating industries. A stepwise approach for the model, with core, expanded, and optional components, was also suggested, and an accreditation system is needed to promote the healthy workplace program. Conclusion: The integrated healthy workplace model is feasible and could be implemented in industrial settings in northern India; it now needs to be pilot tested in other parts of the country. PMID:23776318

  9. A Paleointensity-Based Test of the Geocentric Axial Dipole (GAD) Hypothesis

    NASA Astrophysics Data System (ADS)

    Heimpel, M. H.; Veikkolainen, T.; Evans, M. E.; Pesonen, L. J.; Korhonen, K.

    2016-12-01

    The GAD model is central to many aspects of geophysics, including plate tectonics and paleoclimate. However, significant departures from a GAD field over geologic time have not been ruled out, particularly for the Precambrian. Here, we investigate a test of the GAD model using published paleointensity data. Our goals are to determine if paleointensities can shed light on the validity of the GAD model, and hence to see if they provide constraints on the evolution of the geodynamo throughout earth history. Using numerical dynamo models, we show that intensity distributions can be fairly well characterized by the first three zonal Gauss coefficients (dipole, quadrupole and octupole), although time-averaging tends to broaden the range of intensities. The dynamo models indicate that the ancient core, prior to nucleation of the inner core, may have had a significant (up to 10%) contribution of the zonal octupole. We then investigate the connection between the measured paleointensities assembled in the PINT database and the GAD model by means of predicted theoretical frequency distributions for various simple models (GAD, GAD ± small zonal quadrupole or octupole components). Hitherto, paleointensities have often been analysed in terms of corresponding virtual dipole moments (VDMs). But this rather begs the question because a GAD model is assumed in order to derive a VDM. By using raw field values reported from each sampling site we eliminate dependence on the GAD hypothesis. We find that models consisting of one or two different GADs cannot explain the data, but 3- or 4-GAD models can fit the data surprisingly well, and adding a ±5% octupole significantly improves the fit.
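    The VDM conversion the authors deliberately avoid is the conventional one: under the GAD assumption, a site's paleointensity and magnetic latitude (derived from the inclination) determine a dipole moment. A hedged sketch of that standard conversion, with illustrative inputs rather than PINT data, clarifies why using raw field values sidesteps the assumption.

    ```python
    import math

    MU0 = 4 * math.pi * 1e-7      # vacuum permeability (T*m/A)
    R_EARTH = 6.371e6             # mean Earth radius (m)

    def virtual_dipole_moment(intensity_t, inclination_deg):
        """VDM (A*m^2) from paleointensity (tesla) and inclination, under GAD.

        Magnetic latitude follows from the dipole relation tan(I) = 2*tan(lam);
        the field of a dipole m at latitude lam is
        B = (mu0*m / 4*pi*r^3) * sqrt(1 + 3*sin(lam)^2), solved here for m.
        """
        lam = math.atan(0.5 * math.tan(math.radians(inclination_deg)))
        return (4 * math.pi * R_EARTH**3 * intensity_t
                / (MU0 * math.sqrt(1 + 3 * math.sin(lam)**2)))

    # Illustrative: 50 uT at 63.435 deg inclination (magnetic latitude 45 deg).
    vdm = virtual_dipole_moment(50e-6, 63.435)   # ≈ 8e22 A*m^2, near the present-day moment
    ```

    Every step after the first line presumes a geocentric axial dipole, which is exactly the circularity the raw-field-value analysis avoids.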

  10. Evaluation of AHRQ's on-time pressure ulcer prevention program: a facilitator-assisted clinical decision support intervention for nursing homes.

    PubMed

    Olsho, Lauren E W; Spector, William D; Williams, Christianna S; Rhodes, William; Fink, Rebecca V; Limcangco, Rhona; Hurd, Donna

    2014-03-01

    Pressure ulcers present serious health and economic consequences for nursing home residents. The Agency for Healthcare Research & Quality, in partnership with the New York State Department of Health, implemented the pressure ulcer module of On-Time Quality Improvement for Long Term Care (On-Time), a clinical decision support intervention to reduce pressure ulcer incidence rates. To evaluate the effectiveness of the On-Time program in reducing the rate of in-house-acquired pressure ulcers among nursing home residents. We employed an interrupted time-series design to identify impacts of 4 core On-Time program components on resident pressure ulcer incidence in 12 New York State nursing homes implementing the intervention (n=3463 residents). The sample was purposively selected to include nursing homes with high baseline prevalence and incidence of pressure ulcers and high motivation to reduce pressure ulcers. Differential timing and sequencing of 4 core On-Time components across intervention nursing homes and units enabled estimation of separate impacts for each component. Inclusion of a nonequivalent comparison group of 13 nursing homes not implementing On-Time (n=2698 residents) accounts for potential mean-reversion bias. Impacts were estimated via a random-effects Poisson model including resident-level and facility-level covariates. We find a large and statistically significant reduction in pressure ulcer incidence associated with the joint implementation of 4 core On-Time components (incidence rate ratio=0.409; P=0.035). Impacts vary with implementation of specific component combinations. On-Time implementation is associated with sizable reductions in pressure ulcer incidence.
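    The headline impact estimate is an incidence rate ratio from a random-effects Poisson model. The underlying quantity can be sketched as a crude, unadjusted rate comparison; the counts below are hypothetical, and the study's model additionally adjusts for resident- and facility-level covariates and random effects.

    ```python
    # Hypothetical counts: pressure-ulcer events and resident-months of exposure.
    events_intervention, exposure_intervention = 41, 10_000   # homes using On-Time components
    events_comparison, exposure_comparison = 100, 10_000      # comparison homes

    rate_intervention = events_intervention / exposure_intervention
    rate_comparison = events_comparison / exposure_comparison

    # Unadjusted incidence rate ratio: values below 1 indicate fewer
    # in-house-acquired ulcers per unit of resident exposure.
    irr = rate_intervention / rate_comparison
    ```

    The study's reported IRR of 0.409 corresponds to roughly a 59% reduction in incidence, after adjustment; the toy numbers above are constructed to give a similar ratio.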

  11. Application of 3-signal coherence to core noise transmission

    NASA Technical Reports Server (NTRS)

    Krejsa, E. A.

    1983-01-01

    A method for determining transfer functions across turbofan engine components and from the engine to the far-field is developed. The method is based on the three-signal coherence technique used previously to obtain far-field core noise levels. This method eliminates the bias error in transfer function measurements due to contamination of measured pressures by nonpropagating pressure fluctuations. Measured transfer functions from the engine to the far-field, across the tailpipe, and across the turbine are presented for three turbofan engines.
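    The three-signal technique referred to above estimates the power coherent across three sensors that share one source but carry mutually uncorrelated noise, using only pairwise cross-spectra: S11_coherent ≈ |G12*G13/G23|. The synthetic sketch below implements that standard estimator; the signal parameters are hypothetical and do not reproduce the engine measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, nseg = 4096, 64
    s = rng.standard_normal(n * nseg)             # common "core noise" source
    x1 = s + 0.5 * rng.standard_normal(n * nseg)  # three sensors share the source
    x2 = s + 0.5 * rng.standard_normal(n * nseg)  # but carry independent,
    x3 = s + 0.5 * rng.standard_normal(n * nseg)  # nonpropagating contamination

    def cross_spectrum(a, b):
        """Segment-averaged cross-spectrum estimate (unnormalized)."""
        A = np.fft.rfft(a.reshape(nseg, n), axis=1)
        B = np.fft.rfft(b.reshape(nseg, n), axis=1)
        return (np.conj(A) * B).mean(axis=0)

    G12 = cross_spectrum(x1, x2)
    G13 = cross_spectrum(x1, x3)
    G23 = cross_spectrum(x2, x3)

    # Three-signal estimate of the coherent auto-spectrum at sensor 1: the
    # uncorrelated noise terms average out of the pairwise cross-spectra,
    # removing the bias that contaminates single-pair transfer functions.
    S11_coherent = np.abs(G12 * G13 / G23)
    coherent_fraction = S11_coherent.sum() / cross_spectrum(x1, x1).real.sum()
    # True coherent fraction here is 1 / (1 + 0.25) = 0.8 by construction.
    ```

    The same cross-spectral building blocks, taken between sensor pairs on either side of a component, yield the bias-free transfer function magnitudes the paper reports.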

  12. Evolution dynamics modeling and simulation of logistics enterprise's core competence based on service innovation

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Tong, Yuting

    2017-04-01

    With the rapid development of the economy, logistics enterprises in China face major challenges: they generally lack core competitiveness, and their awareness of service innovation is weak. Previous studies of the core competence of logistics enterprises have mainly taken a static perspective rather than exploring its dynamic evolution. The authors therefore analyze the influencing factors and the evolution process of the core competence of logistics enterprises, use system dynamics to study the causes and effects of that evolution, and construct a system dynamics model of it, which is simulated with Vensim PLE. Analysis of the model's effectiveness and sensitivity indicates that it can fit the evolution process, reveal the underlying mechanism, and provide management strategies for improving the core competence of logistics enterprises. The construction and operation of the computer simulation model offers an effective method for studying this evolution.
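    A system dynamics model of this kind reduces to stock-and-flow equations integrated forward in time (Vensim uses Euler integration by default). The sketch below is a hypothetical one-stock analogue with a reinforcing service-innovation feedback, not the authors' actual model or parameters.

    ```python
    # Hypothetical stock-and-flow sketch: core competence C (index units) grows
    # with service-innovation investment plus a reinforcing effect of C itself,
    # and decays as capabilities become obsolete.
    dt, t_end = 0.25, 20.0
    innovation_rate, reinforcement, decay = 0.5, 0.08, 0.05

    C = 1.0                    # initial core-competence stock
    history = []
    t = 0.0
    while t < t_end:
        inflow = innovation_rate + reinforcement * C   # service-innovation feedback loop
        outflow = decay * C                            # capability obsolescence
        C += dt * (inflow - outflow)                   # Euler integration step
        history.append(C)
        t += dt
    ```

    Sensitivity analysis in such a model amounts to re-running the integration while varying parameters like `reinforcement` and `decay` and comparing the resulting trajectories.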

  13. A Study on Dielectric Properties of Cadmium Sulfide-Zinc Sulfide Core-Shell Nanocomposites for Application as Nanoelectronic Filter Component in the Microwave Domain

    NASA Astrophysics Data System (ADS)

    Devi, Jutika; Datta, Pranayee

    2018-07-01

    Complex permittivities of cadmium sulfide (CdS), zinc sulfide (ZnS), and cadmium sulfide-zinc sulfide (CdS/ZnS) core-shell nanoparticles embedded in a polyvinyl alcohol (PVA) matrix were measured in the liquid phase using a Vector Network Analyzer over the frequency range 500 MHz-10 GHz. These nanocomposites are modeled as embedded capacitors, and their electric field distribution and polarization have been studied using COMSOL Multiphysics software. By varying the thickness of the shell and the number of inclusions, the capacitance values were estimated. CdS, ZnS, and CdS/ZnS core-shell nanoparticles embedded in the PVA matrix all show capacitive behavior, which is strongly influenced by their dielectric properties; the capping matrix and the position and filling factor of the nanoinclusions also affect the capacitive behavior of the tested nanocomposites. Application of the CdS, ZnS, and CdS/ZnS core-shell nanocomposites as passive low-pass filter circuit components has also been investigated. From the present study, CdS/ZnS core-shell nanoparticles embedded in a PVA matrix are found to be potential structures for application as nanoelectronic filter components in different areas of communication.

  15. Z39.50 and GILS model. [Government Information Locator Service

    NASA Technical Reports Server (NTRS)

    Christian, Eliot

    1994-01-01

    The Government Information Locator System (GILS) is a component of the National Information Infrastructure (NII) which provides electronic access to sources of publicly accessible information maintained throughout the Federal Government. GILS is an internetworking information resource that identifies other information resources, describes the information available in the referenced resources, and provides assistance in how to obtain the information either directly or through intermediaries. The GILS core content which references each Federal information system holding publicly accessible data or information is described in terms of mandatory and optional core elements.

  16. 3-D thermal analysis using finite difference technique with finite element model for improved design of components of rocket engine turbomachines for Space Shuttle Main Engine SSME

    NASA Technical Reports Server (NTRS)

    Sohn, Kiho D.; Ip, Shek-Se P.

    1988-01-01

    Three-dimensional finite element models were generated and transferred into three-dimensional finite difference models to perform transient thermal analyses for the SSME high pressure fuel turbopump's first stage nozzles and rotor blades. STANCOOL was chosen to calculate the heat transfer characteristics (HTCs) around the airfoils, and endwall effects were included at the intersections of the airfoils and platforms for the steady-state boundary conditions. Free and forced convection due to rotation effects were also considered in hollow cores. Transient HTCs were calculated by taking ratios of the steady-state values based on the flow rates and fluid properties calculated at each time slice. Results are presented for both transient plots and three-dimensional color contour isotherm plots; they were also converted into universal files to be used for FEM stress analyses.
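    The transient-HTC scaling described above (ratios of steady-state values driven by flow rates and fluid properties) is commonly based on turbulent-convection correlations in which HTC varies with mass-flow rate to the 0.8 power. That exponent is the usual Dittus-Boelter assumption and is not stated in the abstract; the values below are illustrative.

    ```python
    def transient_htc(h_steady, mdot_transient, mdot_steady, exponent=0.8):
        """Scale a steady-state heat-transfer coefficient to a transient flow state.

        The 0.8 exponent follows the Dittus-Boelter form Nu ~ Re^0.8 and is an
        assumption here; the paper scales by flow rates and fluid properties
        evaluated at each time slice.
        """
        return h_steady * (mdot_transient / mdot_steady) ** exponent

    # Illustrative: a steady-state HTC of 1200 W/(m^2 K) at full flow,
    # evaluated at 60% flow during an engine start transient.
    h_t = transient_htc(1200.0, 0.6, 1.0)
    ```

    Applying such a scaling at every time slice converts one steady-state boundary-condition solution into a full transient HTC history without re-solving the flow field.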

  17. Distilling Common History and Practice Elements to Inform Dissemination: Hanf-Model BPT Programs as an Example

    PubMed Central

    Kaehler, Laura A.; Jacobs, Mary; Jones, Deborah J.

    2016-01-01

    There is a shift in evidence-based practice toward an understanding of the treatment elements that characterize empirically supported interventions in general and the core components of specific approaches in particular. The evidence base for Behavioral Parent Training (BPT), the standard of care for early-onset disruptive behavior disorders (Oppositional Defiant Disorder and Conduct Disorder), which frequently co-occur with Attention Deficit Hyperactivity Disorder, is well established; yet an ahistorical, program-specific lens tells little about how leaders, including Constance Hanf at the University of Oregon, shaped the common practice elements of contemporary evidence-based BPT. Accordingly, this review summarizes the formative work of Hanf, as well as the core elements, evolution, and extensions of her work, represented in Community Parent Education (COPE; Cunningham, Bremner, & Boyle, 1995; Cunningham, Bremner, Secord, & Harrison, 2009), Defiant Children (DC; Barkley, 1987; Barkley, 2013), Helping the Noncompliant Child (HNC; Forehand & McMahon, 1981; McMahon & Forehand, 2003), Parent-Child Interaction Therapy (PCIT; Eyberg & Robinson, 1982; Eyberg, 1988; Eyberg & Funderburk, 2011), and the Incredible Years (IY; Webster-Stratton, 1981; 1982; 2008). Our goal is not to provide an exhaustive review of the evidence base for the Hanf-model programs; rather, our intention is to provide a template of sorts from which agencies and clinicians can make informed choices about how and why they are using one program versus another, as well as how to make informed, flexible use of one program or a combination of practice elements across programs, to best meet the needs of child clients and their families. Clinical implications and directions for future work are discussed. PMID:27389606

  18. The "What", "Why" and "How" of Employee Well-Being: A New Model

    ERIC Educational Resources Information Center

    Page, Kathryn M.; Vella-Brodrick, Dianne A.

    2009-01-01

    This paper examines the "what", "why" and "how" of employee well-being. Beginning with the "what" of well-being, the construct of mental health was explored with the aim of building a model of employee well-being. It was proposed that employee well-being consists of three core components: (1) subjective well-being; (2) workplace well-being and (3)…

  19. Novel hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization estimation method for population pharmacokinetic data analysis.

    PubMed

    Ng, C M

    2013-10-01

    The development of a population PK/PD model, an essential component of model-based drug development, is both time- and labor-intensive. Graphics-processing-unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of a parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with dual Xeon 6-core E5690 CPUs and an NVIDIA Tesla C2070 GPU parallel computing card containing 448 stream processors. Two different PK models with rich/sparse sampling designs were used to simulate population data for assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimates and model computation times. A speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation times than the MCPEMCPU, offering more than 48-fold speedup with a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of next-generation modeling software for population PK/PD analysis.

  20. Development of the GPM Observatory Thermal Vacuum Test Model

    NASA Technical Reports Server (NTRS)

    Yang, Kan; Peabody, Hume

    2012-01-01

    A software-based thermal modeling process was documented for generating the thermal panel settings necessary to simulate worst-case on-orbit flight environments in an observatory-level thermal vacuum test setup. The method for creating such a thermal model involves four major steps: (1) determining the major thermal zones for test, as indicated by the major dissipating components on the spacecraft, and mapping the major heat flows between these components; (2) finding the flight-equivalent sink temperatures for these test thermal zones; (3) determining the thermal test ground support equipment (GSE) design and initial thermal panel settings based on the equivalent sink temperatures; and (4) adjusting the panel settings in the test model to match the heat flows and temperatures of the flight model. The observatory test thermal model developed from this process allows quick predictions of the performance of the thermal vacuum test design. In this work, the method was applied to the Global Precipitation Measurement (GPM) core observatory spacecraft, a joint project between NASA and the Japan Aerospace Exploration Agency (JAXA) which is currently being integrated at NASA Goddard Space Flight Center for launch in early 2014. Preliminary results show that the heat flows and temperatures of the test thermal model match fairly well with the flight thermal model, indicating that the test model can simulate on-orbit conditions fairly accurately. However, further analysis is needed to determine the best test configuration for validating the GPM thermal design before the start of environmental testing later this year. While this analysis method has been applied solely to GPM, the same process can be applied to any mission to develop an effective test setup and panel settings that accurately simulate on-orbit thermal environments.
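    Step (2), finding flight-equivalent sink temperatures, is typically done by lumping the radiative environments a test zone views into a single equivalent node. The formula below is the standard radiative-sink average; the coupling values and temperatures are hypothetical, not GPM numbers.

    ```python
    # Radiative couplings (hypothetical, arbitrary conductance units) from one
    # test thermal zone to the environments it views, and their temperatures (K).
    couplings = [0.8, 0.15, 0.05]        # e.g. deep space, Earth IR, adjacent panel
    temperatures = [3.0, 255.0, 290.0]

    # Equivalent sink temperature: the single temperature that produces the same
    # net radiative heat flow as the combined environments (T^4-weighted average).
    num = sum(g * t**4 for g, t in zip(couplings, temperatures))
    den = sum(couplings)
    t_sink = (num / den) ** 0.25
    ```

    Thermal panel set points in the vacuum test (step 3) are then chosen so that each zone sees approximately its `t_sink` instead of the real flight environment.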

  1. Mean-field and linear regime approach to magnetic hyperthermia of core-shell nanoparticles: can tiny nanostructures fight cancer?

    NASA Astrophysics Data System (ADS)

    Carrião, Marcus S.; Bakuzis, Andris F.

    2016-04-01

    The phenomenon of heat dissipation by magnetic materials interacting with an alternating magnetic field, known as magnetic hyperthermia, is an emergent and promising therapy for many diseases, mainly cancer. Here, a magnetic hyperthermia model for core-shell nanoparticles is developed. The theoretical calculation, different from previous models, highlights the importance of heterogeneity by identifying the role of surface and core spins on nanoparticle heat generation. We found that the most efficient nanoparticles should be obtained by selecting materials to reduce the surface to core damping factor ratio, increasing the interface exchange parameter and tuning the surface to core anisotropy ratio for each material combination. From our results we propose a novel heat-based hyperthermia strategy with the focus on improving the heating efficiency of small sized nanoparticles instead of larger ones. This approach might have important implications for cancer treatment and could help improving clinical efficacy. Electronic supplementary information (ESI) available: Unit cells per region calculation; core-shell Hamiltonian; magnetisation description functions; energy argument of Brillouin function; polydisperse models; details of experimental procedure; LRT versus core-shell model; model calculation software; and shell thickness study. See DOI: 10.1039/C5NR09093H
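    The "linear regime" in the title can be illustrated with the standard linear-response-theory (LRT) heating power for a monodisperse ferrofluid. This is a hedged sketch of that textbook baseline, not the paper's core-shell model; the static susceptibility and relaxation time below are illustrative values:

```python
import math

# Hedged sketch: linear-response-theory (LRT) heating power for a
# monodisperse ferrofluid, P = pi * mu0 * chi'' * f * H0^2, with the Debye
# out-of-phase susceptibility chi''(f) = chi0 * (2*pi*f*tau) / (1 + (2*pi*f*tau)^2).
# This is the linear-regime baseline the abstract compares against, NOT the
# paper's core-shell model; chi0 and tau are illustrative.

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def lrt_power(freq_hz, h0_am, chi0, tau_s):
    """Volumetric dissipated power (W/m^3) in the linear regime."""
    omega_tau = 2.0 * math.pi * freq_hz * tau_s
    chi_imag = chi0 * omega_tau / (1.0 + omega_tau ** 2)
    return math.pi * MU0 * chi_imag * freq_hz * h0_am ** 2

# chi'' peaks when 2*pi*f*tau = 1, i.e. at the effective (Neel/Brown)
# relaxation frequency -- one reason heating efficiency is so size-sensitive.
tau = 1e-6  # illustrative effective relaxation time, s
f_res = 1.0 / (2.0 * math.pi * tau)
print(lrt_power(f_res, 10e3, 5.0, tau) > lrt_power(f_res / 100, 10e3, 5.0, tau))
```

    The strong frequency and size dependence of this baseline is what makes the paper's point interesting: the core-shell treatment changes which particle sizes heat most efficiently.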

  2. MCNP-model for the OAEP Thai Research Reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallmeier, F.X.; Tang, J.S.; Primm, R.T. III

    An MCNP input was prepared for the Thai Research Reactor, making extensive use of the MCNP geometry's lattice feature, which allows a flexible and easy rearrangement of the core components and the adjustment of the control elements. The geometry was checked for overdefined or undefined zones by two-dimensional plots of cuts through the core configuration with the MCNP geometry plotting capabilities, and by a three-dimensional view of the core configuration with the SABRINA code. Cross sections were defined for a hypothetical core of 67 standard fuel elements and 38 low-enriched uranium fuel elements--all filled with fresh fuel. Three test calculations were performed with the MCNP4B code to obtain the multiplication factor for the cases with control elements fully inserted, fully withdrawn, and at a working position.

  3. Interactive Multimedia Learning: Innovating Classroom Education in a Malaysian University

    ERIC Educational Resources Information Center

    Leow, Fui-Theng; Neo, Mai

    2014-01-01

    This research study was conducted at INTI International University, and aimed at enhancing the quality of classroom learning for University students with three important emphases: Gagne's instructional model, multimedia, and student-centred learning. An Interactive Learning Module (ILM) was developed as the core component in forming the…

  4. Performance Evaluation of the Honeywell GG1308 Miniature Ring Laser Gyroscope

    DTIC Science & Technology

    1993-01-01

    information. The final display line provides the current DSB configuration status. An external strobe was established between the Contraves motion...components and systems. The core of the facility is a Contraves -Goerz Model 57CD 2-axis motion simulator capable of highly precise position, rate and

  5. Alternative Instructional Strategies in an IS Curriculum

    ERIC Educational Resources Information Center

    Parker, Kevin R.; LeRouge, Cynthia; Trimmer, Ken

    2005-01-01

    Systems Analysis and Design is a core component of an education in information systems. To appeal to a wider range of constituents and facilitate the learning process, the content of a traditional Systems Analysis and Design course has been supplemented with an alternative modeling approach. This paper presents an instructional design that…

  6. Compressed Sensing (CS) Imaging with Wide FOV and Dynamic Magnification

    DTIC Science & Technology

    2011-03-14

    Digital Micromirror Device (DMD) to implement the CS measurement patterns. The core component of the DMD is a 768(V)×1024(H) aluminum micromirror array...image has different curves and textures, thus has different statistical model parameters. The sampling 19 Table 2: Reconstruction of images in

  7. ON THE SUSTAINABILITY OF INTEGRATED MODEL SYSTEMS WITH INDUSTRIAL, ECOLOGICAL, AND MACROECONOMIC COMPONENTS

    EPA Science Inventory

    At its core, sustainability asks whether the planet will persist into the indefinite future in a regime which is amenable to human existence. The issue of sustainability has ever increasing amounts of natural resources and causing a host of environmental impacts. The management o...

  8. Impact of Guided-Inquiry-Based Instruction with a Writing and Reflection Emphasis on Chemistry Students' Critical Thinking Abilities

    ERIC Educational Resources Information Center

    Gupta, Tanya; Burke, K. A.; Mehta, Akash; Greenbowe, Thomas J.

    2015-01-01

    The Science Writing Heuristic (SWH) laboratory instruction approach has been used successfully over a decade to engage students in laboratory activities. SWH-based instruction emphasizes knowledge construction through individual writing and reflection, and collaborative learning as a group. In the SWH approach, writing is a core component of…

  9. Managing the Evolution of an Enterprise Architecture using a MAS-Product-Line Approach

    NASA Technical Reports Server (NTRS)

    Pena, Joaquin; Hinchey, Michael G.; Resinas, manuel; Sterritt, Roy; Rash, James L.

    2006-01-01

    We view an evolutionary system as being a software product line. The core architecture is the unchanging part of the system, and each version of the system may be viewed as a product from the product line. Each "product" may be described as the core architecture with some agent-based additions. The result is a multiagent system software product line. We describe such a Software Product Line-based approach using the MaCMAS Agent-Oriented methodology. The approach scales to enterprise architectures, as a multiagent system is an appropriate means of representing a changing enterprise architecture and the interaction between components in it.

  10. Design of a component-based integrated environmental modeling framework

    EPA Science Inventory

    Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...

  11. Photoluminescent silicon nanocrystal-based multifunctional carrier for pH-regulated drug delivery.

    PubMed

    Xu, Zhigang; Wang, Dongdong; Guan, Min; Liu, Xiaoyan; Yang, Yanjie; Wei, Dongfeng; Zhao, Chunyan; Zhang, Haixia

    2012-07-25

    A core-shell structured multifunctional carrier with nanocrystalline silicon (ncSi) as the core and a water-soluble block copolymer as the shell based on a poly(methacrylic acid) (PMAA) inner shell and polyethylene glycol (MPEG) outer shell (ncSi-MPM) was synthesized for drug delivery. The morphology, composition, and properties of the resulting ncSi-MPM were determined by comprehensive multianalytical characterization, including ¹H NMR spectroscopy, FTIR spectroscopy, XPS, TEM, DLS, and fluorescence spectroscopy analyses. The size of the resulting ncSi-MPM nanocarriers ranged from 40 to 110 nm under a simulated physiological environment. The loading efficiency of the model drug doxorubicin (DOX) was approximately 6.1-7.4 wt % for ncSi-MPM and the drug release was pH controlled. Cytotoxicity studies demonstrated that DOX-loaded ncSi-MPM showed high anticancer activity against HeLa cells. Hemolysis percentages (<2%) of ncSi-MPM were within the scope of safe values. Fluorescent imaging studies showed that the nanocarriers could be used as a tracker at the cellular level. Integration of the above functional components may result in ncSi-MPM becoming a promising multifunctional carrier for drug delivery and biomedical applications.

  12. In Vitro MRV-based Hemodynamic Study of Complex Helical Flow in a Patient-specific Jugular Model

    NASA Astrophysics Data System (ADS)

    Kefayati, Sarah; Acevedo-Bolton, Gabriel; Haraldsson, Henrik; Saloner, David

    2014-11-01

    Neurointerventional Radiologists are frequently requested to evaluate the venous side of the intracranial circulation for a variety of conditions including: Chronic Cerebrospinal Venous Insufficiency thought to play a role in the development of multiple sclerosis; sigmoid sinus diverticulum which has been linked to the presence of pulsatile tinnitus; and jugular vein distension which is related to cardiac dysfunction. Most approaches to evaluating these conditions rely on structural assessment or two dimensional flow analyses. This study was designed to investigate the highly complex jugular flow conditions using magnetic resonance velocimetry (MRV). A jugular phantom was fabricated based on the geometry of the dominant jugular in a tinnitus patient. Volumetric three-component time-resolved velocity fields were obtained using 4D PC-MRI -with the protocol enabling turbulence acquisition- and the patient-specific pulsatile waveform. Flow was highly complex exhibiting regions of jet, high swirling strength, and strong helical pattern with the core originating from the focal point of the jugular bulb. Specifically, flow was analyzed for helicity and the level of turbulence kinetic energy elevated in the core of helix and distally, in the post-narrowing region.

  13. Sedimentation rates and erosion changes recorded in recent sediments of Lake Piaseczno, south-eastern Poland

    NASA Astrophysics Data System (ADS)

    Tylmann, Wojciech; Turczyński, Marek; Kinder, Małgorzata

    2009-10-01

    This paper presents the dating results and basic analyses of recent sediments from Lake Piaseczno. The age of sediments was determined using the ²¹⁰Pb method and the constant flux : constant sedimentation (CF:CS) model. The estimated timescale was in agreement with the AMS ¹⁴C date from the base of the core. The mean sediment accumulation rate during the last 100 years was calculated as 0.025 g cm⁻² a⁻¹. Based on the radiocarbon date, the rate of sediment accumulation below the ²¹⁰Pb dating horizon was estimated as 0.066 g cm⁻² a⁻¹. The variability of main physical properties and sediment components along the core was analysed as well. The sediments were characterised by a very high water content (>80%). Carbonates were either not present or at a very low level (<1%). However, organic and minerogenic matter variability represents an interesting record of increasing erosion intensity in the catchment area. Analysis of archival cartographic materials demonstrated that the most likely reason for the enhanced transport of minerogenic matter to the lake was deforestation caused by human activity in the beginning of the 20th century.
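    The CF:CS age model used above reduces to fitting the slope of ln(activity) versus cumulative dry mass: unsupported 210Pb activity decays as A(m) = A0·exp(−λm/r), so the mass accumulation rate r follows from the fitted slope. A minimal sketch under that assumption, with a synthetic profile built to match the reported rate of 0.025 g cm⁻² a⁻¹:

```python
import math

# Hedged sketch of the constant flux : constant sedimentation (CF:CS) model:
# A(m) = A0 * exp(-lambda * m / r), where m is cumulative dry mass and r the
# mass accumulation rate, so r = -lambda / slope(ln A vs m). The profile
# below is synthetic, constructed to match the reported 0.025 g cm^-2 a^-1.

LAMBDA_PB210 = math.log(2) / 22.3  # 210Pb decay constant, a^-1

def cfcs_rate(masses, activities):
    """Least-squares slope of ln(A) vs cumulative mass; r = -lambda / slope."""
    logs = [math.log(a) for a in activities]
    n = len(masses)
    mbar = sum(masses) / n
    lbar = sum(logs) / n
    slope = (sum((mi - mbar) * (li - lbar) for mi, li in zip(masses, logs))
             / sum((mi - mbar) ** 2 for mi in masses))
    return -LAMBDA_PB210 / slope

# Synthetic core: cumulative dry mass (g cm^-2) and unsupported activity
true_r = 0.025
m = [0.0, 0.5, 1.0, 1.5, 2.0]
a = [100.0 * math.exp(-LAMBDA_PB210 * mi / true_r) for mi in m]
print(round(cfcs_rate(m, a), 4))  # recovers 0.025 g cm^-2 a^-1
```

    Real profiles additionally require subtracting the supported ²¹⁰Pb background before taking logarithms; that step is omitted here.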

  14. Warm-Core Intensification of a Hurricane Through Horizontal Eddy Heat Transports Inside the Eye

    NASA Technical Reports Server (NTRS)

    Braun, Scott A.; Montgomery, Michael T.; Fulton, John; Nolan, David S.

    2001-01-01

    A simulation of Hurricane Bob (1991) using the PSU/NCAR MM5 mesoscale model with a finest mesh spacing of 1.3 km is used to diagnose the heat budget of the hurricane. Heat budget terms, including latent and radiative heating, boundary layer forcing, and advection terms were output directly from the model for a 6-h period with 2-min frequency. Previous studies of warm core formation have emphasized the warming associated with gentle subsidence within the eye. The simulation of Hurricane Bob also identifies subsidence warming as a major factor for eye warming, but also shows a significant contribution from horizontal advective terms. When averaged over the area of the eye, excluding the eyewall (at least in an azimuthal mean sense), subsidence is found to strongly warm the mid-troposphere (2-9 km) while horizontal advection warms the mid to upper troposphere (5-13 km) with about equal magnitude. Partitioning of the horizontal advective terms into azimuthal mean and eddy components shows that the mean radial circulation cannot, as expected, generally contribute to this warming, but that it is produced almost entirely by the horizontal eddy transport of heat into the eye. A further breakdown of the eddy components into azimuthal wave numbers 1, 2, and higher indicates that the warming is dominated by wave number 1 asymmetries, with smaller contributions coming from higher wave numbers. Warming by horizontal eddy transport is consistent with idealized modeling of vortex Rossby waves and work is in progress to identify and clarify the role of vortex Rossby waves in warm-core intensification in both the full-physics model and idealized models.
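    The partitioning into azimuthal mean and eddy components described above amounts to a discrete Fourier decomposition of a field sampled on a ring around the storm center. A minimal sketch with a synthetic temperature ring; the amplitudes are illustrative, not values from the Hurricane Bob simulation:

```python
import math

# Hedged sketch of the azimuthal partitioning: a field on a ring is split
# into its azimuthal mean and eddy part, and the eddy part is projected onto
# wavenumbers 1, 2, ... by a discrete Fourier transform. Data are synthetic.

def azimuthal_decompose(values, max_wavenumber=2):
    """Return (mean, {k: (a_k, b_k)}) for f(theta) ~ mean + sum a_k cos(k t) + b_k sin(k t)."""
    n = len(values)
    mean = sum(values) / n
    coeffs = {}
    for k in range(1, max_wavenumber + 1):
        a_k = 2.0 / n * sum(v * math.cos(k * 2 * math.pi * i / n)
                            for i, v in enumerate(values))
        b_k = 2.0 / n * sum(v * math.sin(k * 2 * math.pi * i / n)
                            for i, v in enumerate(values))
        coeffs[k] = (a_k, b_k)
    return mean, coeffs

# Synthetic ring: mean anomaly of 2 K plus a dominant wavenumber-1 asymmetry
# of 1.5 K and a weaker wavenumber-2 asymmetry of 0.4 K.
n = 64
theta = [2 * math.pi * i / n for i in range(n)]
field = [2.0 + 1.5 * math.cos(t) + 0.4 * math.cos(2 * t) for t in theta]
mean, eddies = azimuthal_decompose(field)
print(round(mean, 3), round(eddies[1][0], 3), round(eddies[2][0], 3))
```

    Applied to the advective heat flux rather than temperature itself, the same projection separates the mean radial circulation from the wavenumber-1 eddy transport the study identifies as the dominant warming term.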

  15. CAD-centric Computation Management System for a Virtual TBM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakanth Munipalli; K.Y. Szema; P.Y. Huang

    HyPerComp Inc., in research collaboration with TEXCEL, has set out to build a Virtual Test Blanket Module (VTBM) computational system to address the need in contemporary fusion research for simulating the integrated behavior of the blanket, divertor and plasma-facing components in a fusion environment. Physical phenomena to be considered in a VTBM will include fluid flow, heat transfer, mass transfer, neutronics, structural mechanics and electromagnetics. We seek to integrate well-established (third-party) simulation software in the various disciplines mentioned above. The integrated modeling process will enable user groups to interoperate using a common modeling platform at various stages of the analysis. Since CAD is at the core of the simulation (as opposed to computational meshes, which differ for each problem), VTBM will have a well-developed CAD interface governing CAD model editing, cleanup, parameter extraction, model deformation (based on simulation), and CAD-based data interpolation. In Phase I, we built the CAD hub of the proposed VTBM and demonstrated its use in modeling a liquid breeder blanket module with coupled MHD and structural mechanics using HIMAG and ANSYS. A complete graphical user interface of the VTBM was created, which will form the foundation of any future development. Conservative data interpolation via CAD (as opposed to mesh-based transfer) and the regeneration of CAD models based upon computed deflections are among the other highlights of Phase I activity.

  16. The use of CORE model by metacognitive skill approach in developing characters junior high school students

    NASA Astrophysics Data System (ADS)

    Fisher, Dahlia; Yaniawati, Poppy; Kusumah, Yaya Sukjaya

    2017-08-01

    This study aims to analyze the character of students who were taught with the CORE learning model using a metacognitive approach. The study used a mixed-method design combining qualitative and quantitative research with a concurrent embedded strategy. The research was conducted on two groups: an experimental group of students taught with the CORE model using a metacognitive approach, and a control group of students taught by conventional learning. The subjects were seventh-grade students in one of the public junior high schools in Bandung. Based on this research, the characters developed by students in CORE model learning through the metacognitive approach are: honesty, hard work, curiosity, conscientiousness, creativity and communicativeness. Overall it can be concluded that the CORE learning model is good for developing the characters of junior high school students.

  17. Radiative Properties of Carriers in CdSe-CdS Core-Shell Heterostructured Nanocrystals of Various Geometries

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Dong, L.; Popov, S.; Friberg, A. T.

    2013-07-01

    We report a model of core-shell heterostructured nanocrystals with CdSe as the core and CdS as the shell. The model is based on the one-band Schrödinger equation. Three different geometries, nanodot, nanorod, and nanobone, are implemented. The carrier localization regimes with these structures are simulated, compared, and analyzed. Based on the electron and hole wave functions, the carrier overlap integral that has a great impact on stimulated emission is further investigated numerically by a novel approach. Furthermore, the relation between the nanocrystal size and electron-hole recombination energy is also examined.
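    The carrier overlap integral mentioned above can be sketched numerically as the squared overlap |⟨ψₑ|ψₕ⟩|² of normalized envelope functions on a 1D grid. In this hedged example, Gaussians stand in for the actual solutions of the one-band Schrödinger equation; the widths and offset are illustrative, not fitted to CdSe-CdS:

```python
import math

# Hedged sketch of the electron-hole overlap integral |<psi_e|psi_h>|^2,
# evaluated on a 1D grid by the trapezoidal rule. Normalized Gaussians stand
# in for the one-band Schrodinger solutions; parameters are illustrative.

def gaussian(x, center, width):
    """Normalized 1D Gaussian wave function (integral of |psi|^2 equals 1)."""
    norm = (1.0 / (math.pi * width ** 2)) ** 0.25
    return norm * math.exp(-((x - center) ** 2) / (2 * width ** 2))

def overlap_squared(psi_e, psi_h, xs):
    dx = xs[1] - xs[0]
    vals = [psi_e(x) * psi_h(x) for x in xs]
    integral = dx * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return integral ** 2

xs = [i * 0.01 - 10.0 for i in range(2001)]  # grid on [-10, 10]
# Co-located carriers versus a hole displaced from the electron (e.g. a
# shell-delocalized electron in a nanorod): overlap, and hence radiative
# recombination, drops as the carriers separate.
same = overlap_squared(lambda x: gaussian(x, 0.0, 1.0),
                       lambda x: gaussian(x, 0.0, 1.0), xs)
separated = overlap_squared(lambda x: gaussian(x, 0.0, 1.0),
                            lambda x: gaussian(x, 2.5, 1.0), xs)
print(round(same, 3), separated < same)  # identical states overlap to ~1
```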

  18. New quasi-geostrophic flow estimations for the Earth's core

    NASA Astrophysics Data System (ADS)

    Pais, M. Alexandra

    2014-05-01

    Quasi-geostrophic (QG) flows have been reported in numerical dynamo studies that simulate Boussinesq convection of an electrical conducting fluid inside a rapidly rotating spherical shell. In these cases, the required condition for columnar convection seems to be that inertial waves should propagate much faster in the medium than Alfvén waves. QG models are particularly appealing for studies where Earth's liquid core flows are assessed from information contained in geomagnetic data obtained at and above the Earth's surface. Here, they make the whole difference between perceiving only the core surface expression of the geodynamo or else assessing the whole interior core flow. The QG approximation has now been used in different studies to invert geomagnetic field models, providing a different kinematic interpretation of the observed geomagnetic field secular variation (SV). Under this new perspective, a large eccentric jet flowing westward under the Atlantic Hemisphere and a cyclonic column under the Pacific were pointed out as interesting features of the flow. A large eccentric jet with similar characteristics has been explained in recent numerical geodynamo simulations in terms of dynamical coupling between the solid core, the liquid core and the mantle. Nonetheless, it requires an inner core crystallization on the eastern hemisphere, contrary to what has been proposed in recent dynamical models for the inner core. Some doubts remain, as we see, concerning the dynamics that can explain the radial outward flow in the eastern core hemisphere, actually seen in inverted core flow models. This and other puzzling features justify a new assessment of core flows, taking full advantage of the recent geomagnetic field model COV-OBS and of experience, accumulated over the years, on flow inversion. Assuming the QG approximation already eliminates a large part of non-uniqueness in the inversion. 
Some important non-uniqueness still remains, inherent to the physical model, given our present inability to distinguish the small length scales of the internal geomagnetic field when measuring it at the Earth's surface and above. This can be dealt with in the form of a parameterization error. We recalculated flow models for the whole 1840-2010 period of COV-OBS, using the covariance matrices provided by the authors and an iterative estimation of the parameterization error. Results are compared with previous estimations. We then apply standard tools of Empirical Orthogonal Function / Principal Component Analysis to sort out variability modes that, hopefully, can also be identified with dynamical modes.
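    The EOF / Principal Component step mentioned at the end can be sketched as finding the dominant eigenvector of the spatial covariance matrix of the flow-model time series. A hedged sketch using power iteration on a tiny synthetic two-point record; the data are invented for illustration, not derived from COV-OBS:

```python
import math

# Hedged sketch of the EOF / principal-component step: the leading EOF of a
# (time x space) anomaly record is the dominant eigenvector of the spatial
# covariance matrix, found here by power iteration. Data are synthetic.

def leading_eof(data, iters=200):
    """data: list of time samples, each a list of spatial anomaly values."""
    n_space = len(data[0])
    # Spatial covariance matrix C[i][j] = mean over time of x_i * x_j
    cov = [[sum(row[i] * row[j] for row in data) / len(data)
            for j in range(n_space)] for i in range(n_space)]
    v = [1.0] * n_space
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n_space))
             for i in range(n_space)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Synthetic anomalies dominated by the pattern (1, 1)/sqrt(2), with weak
# noise on the orthogonal pattern (1, -1)/sqrt(2).
data = [[c + d, c - d]
        for c, d in [(3.0, 0.01), (-2.0, -0.02), (1.5, 0.005), (-2.5, 0.01)]]
eof1 = leading_eof(data)
print(round(abs(eof1[0]), 2), round(abs(eof1[1]), 2))  # ~0.71 0.71
```

    Projecting the record onto each EOF then yields the principal-component time series whose temporal behavior can be compared against candidate dynamical modes.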

  19. Overall Architecture of the Intraflagellar Transport (IFT)-B Complex Containing Cluap1/IFT38 as an Essential Component of the IFT-B Peripheral Subcomplex.

    PubMed

    Katoh, Yohei; Terada, Masaya; Nishijima, Yuya; Takei, Ryota; Nozaki, Shohei; Hamada, Hiroshi; Nakayama, Kazuhisa

    2016-05-20

    Intraflagellar transport (IFT) is essential for assembly and maintenance of cilia and flagella as well as ciliary motility and signaling. IFT is mediated by multisubunit complexes, including IFT-A, IFT-B, and the BBSome, in concert with kinesin and dynein motors. Under high salt conditions, purified IFT-B complex dissociates into a core subcomplex composed of at least nine subunits and at least five peripherally associated proteins. Using the visible immunoprecipitation assay, which we recently developed as a convenient protein-protein interaction assay, we determined the overall architecture of the IFT-B complex, which can be divided into core and peripheral subcomplexes composed of 10 and 6 subunits, respectively. In particular, we identified TTC26/IFT56 and Cluap1/IFT38, neither of which was included with certainty in previous models of the IFT-B complex, as integral components of the core and peripheral subcomplexes, respectively. Consistent with this, a ciliogenesis defect of Cluap1-deficient mouse embryonic fibroblasts was rescued by exogenous expression of wild-type Cluap1 but not by mutant Cluap1 lacking the binding ability to other IFT-B components. The detailed interaction map as well as comparison of subcellular localization of IFT-B components between wild-type and Cluap1-deficient cells provides insights into the functional relevance of the architecture of the IFT-B complex. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  20. Overall Architecture of the Intraflagellar Transport (IFT)-B Complex Containing Cluap1/IFT38 as an Essential Component of the IFT-B Peripheral Subcomplex*

    PubMed Central

    Katoh, Yohei; Terada, Masaya; Nishijima, Yuya; Takei, Ryota; Nozaki, Shohei; Hamada, Hiroshi; Nakayama, Kazuhisa

    2016-01-01

    Intraflagellar transport (IFT) is essential for assembly and maintenance of cilia and flagella as well as ciliary motility and signaling. IFT is mediated by multisubunit complexes, including IFT-A, IFT-B, and the BBSome, in concert with kinesin and dynein motors. Under high salt conditions, purified IFT-B complex dissociates into a core subcomplex composed of at least nine subunits and at least five peripherally associated proteins. Using the visible immunoprecipitation assay, which we recently developed as a convenient protein-protein interaction assay, we determined the overall architecture of the IFT-B complex, which can be divided into core and peripheral subcomplexes composed of 10 and 6 subunits, respectively. In particular, we identified TTC26/IFT56 and Cluap1/IFT38, neither of which was included with certainty in previous models of the IFT-B complex, as integral components of the core and peripheral subcomplexes, respectively. Consistent with this, a ciliogenesis defect of Cluap1-deficient mouse embryonic fibroblasts was rescued by exogenous expression of wild-type Cluap1 but not by mutant Cluap1 lacking the binding ability to other IFT-B components. The detailed interaction map as well as comparison of subcellular localization of IFT-B components between wild-type and Cluap1-deficient cells provides insights into the functional relevance of the architecture of the IFT-B complex. PMID:26980730

  1. The Role of Body Crystallization in Asteroidal Cores

    NASA Astrophysics Data System (ADS)

    Wasson, J. T.

    1993-07-01

    Large fractionations (factors of 2000-6000) in Ir/Ni and other ratios demonstrate that the magmatic groups of iron meteorites formed by fractional crystallization, and thus that the residual liquid remained well stirred during core crystallization. Past models have relied on solidification at the base or the top of the core, but body crystallization offers an attractive alternative. The simplest of the earlier models involved convective mixing induced by the liberation of heat and light elements (especially S) during upward crystallization from the center of the core. Other models involving downward crystallization from the core-mantle interface are based on the fact that temperatures at this location are slightly lower than those at the center; no whole-core stirring mechanism is provided by these models. Haack and Scott recently published a variant of the downward crystallization model involving the growth of giant (kilometer-scale) dendrites. Because crystallization creates a boundary layer enriched in S that does not participate in the convection, these models require several K of supercooling to induce crystallization (this undercooling is much greater than the temperature difference between the center of the core and the core-mantle interface). Buoyant forces will occasionally remove droplets of the basal boundary fluid; thus it was thinner and its degree of undercooling less than that at the ceiling of the magma chamber. Homogeneous nucleation of metals is difficult to achieve; generally 200-300 K of undercooling is required, much more than could possibly occur in an asteroidal core. Crystals could, however, nucleate in the magma body on chromite, probably the first liquidus phase (A. Kracher, personal communication, notes that this is required to explain why Cr behaved like a compatible element despite having a solid/liquid D < 1). 
In addition, some tiny, submillimeter dendrites that formed at the top of the core must have pinched off and fallen into the magma. Such seeds settle as a result of buoyant forces (thus stirring the magma) and, as a result, achieve very thin boundary layers and require low degrees of undercooling in order to crystallize. The rate of core crystallization is limited by the rate of heat transport across the core-mantle interface. If sufficient nuclei are available at different sites, the bulk of the crystallization occurs where undercooling is least. It is possible that a larger fraction of the total crystallization occurred in the body of the magma than at its base or ceiling.

  2. Hubble Space Telescope Astrometry of the Procyon System

    NASA Technical Reports Server (NTRS)

    Bond, Howard E.; Gilliland, Ronald L.; Schaefer, Gail H.; Demarque, Pierre; Girard, Terrence M.; Holberg, Jay B.; Gudehus, Donald; Mason, Brian D.; Kozhurina-Platais, Vera; Burleigh, Matthew R.

    2015-01-01

    The nearby star Procyon is a visual binary containing the F5 IV-V subgiant Procyon A, orbited in a 40.84-year period by the faint DQZ white dwarf (WD) Procyon B. Using images obtained over two decades with the Hubble Space Telescope, and historical measurements back to the 19th century, we have determined precise orbital elements. Combined with measurements of the parallax and the motion of the A component, these elements yield dynamical masses of 1.478 ± 0.012 M☉ and 0.592 ± 0.006 M☉ for A and B, respectively. The mass of Procyon A agrees well with theoretical predictions based on asteroseismology and its temperature and luminosity. The best agreement with a standard core-overshoot model requires a surprisingly high amount of core overshoot. Under these modeling assumptions, Procyon A's age is approximately 2.7 Gyr. Procyon B's location in the H-R diagram is in excellent agreement with theoretical cooling tracks for WDs of its dynamical mass. Its position in the mass-radius plane is also consistent with theory, assuming a carbon-oxygen core and a helium-dominated atmosphere. Its progenitor's mass was 1.9-2.2 M☉, depending on its amount of core overshoot. Several astrophysical puzzles remain. In the progenitor system, the stars at periastron were separated by only approximately AU, which might have led to tidal interactions and even mass transfer; yet there is no direct evidence that these have occurred. Moreover the orbital eccentricity has remained high (approximately 0.40). The mass of Procyon B is somewhat lower than anticipated from the initial-to-final-mass relation seen in open clusters. The presence of heavy elements in its atmosphere requires ongoing accretion, but the place of origin is uncertain.
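    The dynamical masses quoted above are tied to the visual orbit through Kepler's third law, M_A + M_B = a³/P² in solar units (masses in M☉, semi-major axis a in AU, period P in years). A small check using only the masses and 40.84-year period from the abstract; the resulting semi-major axis is a derived illustration, not a figure from the paper:

```python
# Hedged check: Kepler's third law in solar units, M_A + M_B = a^3 / P^2.
# Inverting it with the dynamical masses and period quoted in the abstract
# gives the relative orbit's semi-major axis as a derived illustration.

def semimajor_axis_au(m_total_msun, period_yr):
    """Semi-major axis (AU) of the relative orbit from total mass and period."""
    return (m_total_msun * period_yr ** 2) ** (1.0 / 3.0)

a_procyon = semimajor_axis_au(1.478 + 0.592, 40.84)
print(round(a_procyon, 1))  # ~15.1 AU
```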

  3. Eddy current position indicating apparatus for measuring displacements of core components of a liquid metal nuclear reactor

    DOEpatents

    Day, Clifford K.; Stringer, James L.

    1977-01-01

    Apparatus for measuring displacements of core components of a liquid metal fast breeder reactor by means of an eddy current probe. The active portion of the probe is located within a dry thimble which is supported on a stationary portion of the reactor core support structure. Split rings of metal, having a resistivity significantly different than sodium, are fixedly mounted on the core component to be monitored. The split rings are slidably positioned around, concentric with the probe and symmetrically situated along the axis of the probe so that motion of the ring along the axis of the probe produces a proportional change in the probe's electrical output.

  4. Experimental Study on Effects of Ground Roughness on Flow Characteristics of Tornado-Like Vortices

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Cao, Shuyang; Pang, Weichiang; Cao, Jinxin

    2017-02-01

    The three-dimensional wind velocity and dynamic pressure for stationary tornado-like vortices that developed over ground of different roughness categories were investigated to clarify the effects of ground roughness. Measurements were performed for various roughness categories and two swirl ratios. Variations of the vertical and horizontal distributions of velocity and pressure with roughness are presented, with the results showing that the tangential, radial, and axial velocity components increase inside the vortex core near the ground under rough surface conditions. Meanwhile, clearly decreased tangential components are found outside the core radius at low elevations. The high axial velocity inside the vortex core over rough ground surface indicates that roughness produces an effect similar to a reduced swirl ratio. In addition, the pressure drop accompanying a tornado is more significant at elevations closer to the ground under rough compared with smooth surface conditions. We show that the variations of the flow characteristics with roughness are dependent on the vortex-generating mechanism, indicating the need for appropriate modelling of tornado-like vortices.

  5. Evidence for a Second Component in the High-energy Core Emission from Centaurus A?

    NASA Astrophysics Data System (ADS)

    Sahakyan, N.; Yang, R.; Aharonian, F. A.; Rieger, F. M.

    2013-06-01

    We report on an analysis of Fermi Large Area Telescope data from four years of observations of the nearby radio galaxy Centaurus A (Cen A). The increased photon statistics results in a detection of high-energy (>100 MeV) gamma-rays up to 50 GeV from the core of Cen A, with a detection significance of about 44σ. The average gamma-ray spectrum of the core reveals evidence for a possible deviation from a simple power law. A likelihood analysis with a broken power-law model shows that the photon index becomes harder above Eb ~= 4 GeV, changing from Γ1 = 2.74 ± 0.03 below to Γ2 = 2.09 ± 0.20 above. This hardening could be caused by the contribution of an additional high-energy component beyond the common synchrotron self-Compton jet emission. No clear evidence for variability in the high-energy domain is seen. We compare our results with the spectrum reported by H.E.S.S. in the TeV energy range and discuss possible origins of the hardening observed.
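    The broken power-law model described above can be written down directly: dN/dE ∝ (E/E_b)^(−Γ), with Γ switching from Γ₁ to Γ₂ at the break and the spectrum continuous at E_b. A minimal sketch using the quoted indices and break energy; the normalization is arbitrary:

```python
# Hedged sketch of the broken power-law photon spectrum fitted above:
# dN/dE proportional to (E/E_b)^(-Gamma1) below the break and
# (E/E_b)^(-Gamma2) above it, continuous at E_b. The indices and break
# energy are the values quoted in the abstract; the normalization is not.

def broken_power_law(e_gev, e_break=4.0, gamma1=2.74, gamma2=2.09, norm=1.0):
    """Photon flux density dN/dE (arbitrary units) at energy e_gev (GeV)."""
    gamma = gamma1 if e_gev < e_break else gamma2
    return norm * (e_gev / e_break) ** (-gamma)

# "Hardening" above the break: the flux falls by a smaller factor per decade
# of energy above E_b than below it.
below = broken_power_law(0.4) / broken_power_law(4.0)    # one decade below
above = broken_power_law(4.0) / broken_power_law(40.0)   # one decade above
print(below > above)  # True: the spectrum is softer (steeper) below E_b
```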

  6. Modeling Core Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Mezzacappa, Anthony

    2017-01-01

    Core collapse supernovae, or the death throes of massive stars, are general relativistic, neutrino-magneto-hydrodynamic events. The core collapse supernova mechanism is still not in hand, though key components have been illuminated, and the potential for multiple mechanisms for different progenitors exists. Core collapse supernovae are the single most important source of elements in the Universe, and serve other critical roles in galactic chemical and thermal evolution, the birth of neutron stars, pulsars, and stellar mass black holes, the production of a subclass of gamma-ray bursts, and as potential cosmic laboratories for fundamental nuclear and particle physics. Given this, the so called ``supernova problem'' is one of the most important unsolved problems in astrophysics. It has been fifty years since the first numerical simulations of core collapse supernovae were performed. Progress in the past decade, and especially within the past five years, has been exponential, yet much work remains. Spherically symmetric simulations over nearly four decades laid the foundation for this progress. Two-dimensional modeling that assumes axial symmetry is maturing. And three-dimensional modeling, while in its infancy, has begun in earnest. I will present some of the recent work from the ``Oak Ridge'' group, and will discuss this work in the context of the broader work by other researchers in the field. I will then point to future requirements and challenges. Connections with other experimental, observational, and theoretical efforts will be discussed, as well.

  7. Virus-producing cells determine the host protein profiles of HIV-1 virion cores

    PubMed Central

    2012-01-01

    Background Upon HIV entry into target cells, viral cores are released and rearranged into reverse transcription complexes (RTCs), which support reverse transcription and also protect and transport viral cDNA to the site of integration. RTCs are composed of viral and cellular proteins that originate from both target and producer cells, the latter entering the target cell within the viral core. However, the proteome of HIV-1 viral cores in the context of the type of producer cells has not yet been characterized. Results We examined the proteomic profiles of the cores purified from HIV-1 NL4-3 virions assembled in Sup-T1 cells (T lymphocytes), PMA and vitamin D3 activated THP1 (model of macrophages, mMΦ), and non-activated THP1 cells (model of monocytes, mMN) and assessed potential involvement of identified proteins in the early stages of infection using gene ontology information and data from genome-wide screens on proteins important for HIV-1 replication. We identified 202 cellular proteins incorporated in the viral cores (T cells: 125, mMΦ: 110, mMN: 90) with the overlap between these sets limited to 42 proteins. The groups of RNA binding (29), DNA binding (17), cytoskeleton (15), cytoskeleton regulation (21), chaperone (18), vesicular trafficking-associated (12) and ubiquitin-proteasome pathway-associated proteins (9) were most numerous. Cores of the virions from SupT1 cells contained twice as many RNA binding proteins as cores of THP1-derived virus, whereas cores of virions from mMΦ and mMN were enriched in components of cytoskeleton and vesicular transport machinery, most probably due to differences in virion assembly pathways between these cells. Spectra of chaperones, cytoskeletal proteins and ubiquitin-proteasome pathway components were similar between viral cores from different cell types, whereas DNA-binding and especially RNA-binding proteins were highly diverse. 
Western blot analysis showed that within the group of overlapping proteins, the level of incorporation of some RNA binding (RHA and HELIC2) and DNA binding proteins (MCM5 and Ku80) in the viral cores from T cells was higher than in the cores from both mMΦ and mMN and did not correlate with the abundance of these proteins in virus producing cells. Conclusions Profiles of host proteins packaged in the cores of HIV-1 virions depend on the type of virus producing cell. The pool of proteins present in the cores of all virions is likely to contain factors important for viral functions. Incorporation ratio of certain RNA- and DNA-binding proteins suggests their more efficient, non-random packaging into virions in T cells than in mMΦ and mMN. PMID:22889230

  8. Energy efficient engine component development and integration program

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Accomplishments in the Energy Efficient Engine Component Development and Integration program during the period of April 1, 1981 through September 30, 1981 are discussed. The major topics considered are: (1) propulsion system analysis, design, and integration; (2) engine component analysis, design, and development; (3) core engine tests; and (4) integrated core/low spool testing.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Katherine H.; Cutler, Dylan S.; Olis, Daniel R.

    REopt is a techno-economic decision support model used to optimize energy systems for buildings, campuses, communities, and microgrids. The primary application of the model is for optimizing the integration and operation of behind-the-meter energy assets. This report provides an overview of the model, including its capabilities and typical applications; inputs and outputs; economic calculations; technology descriptions; and model parameters, variables, and equations. The model is highly flexible, and is continually evolving to meet the needs of each analysis. Therefore, this report is not an exhaustive description of all capabilities, but rather a summary of the core components of the model.
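As a toy illustration of the kind of behind-the-meter trade-off such a techno-economic model optimizes, the sketch below sizes a PV system by balancing annualized capital cost against avoided grid purchases. All numbers (load, solar profile, prices) are made-up placeholders, and the brute-force search stands in for what is, in the real model, a much richer mixed-integer program:

```python
# Toy behind-the-meter sizing problem: choose a PV capacity that
# minimizes annualized capital cost plus the cost of remaining grid
# purchases. All inputs are illustrative placeholders.
load = [30.0, 50.0, 40.0]             # kWh consumed per period
solar_per_kw = [0.2, 0.5, 0.3]        # kWh produced per kW of PV per period
capex_per_kw = 1.5                    # annualized capital cost, $/kW
grid_price = 2.0                      # $/kWh purchased from the grid

def total_cost(pv_kw):
    # Grid purchase = load not covered by PV (excess PV is not credited).
    grid_kwh = sum(max(l - pv_kw * s, 0.0) for l, s in zip(load, solar_per_kw))
    return capex_per_kw * pv_kw + grid_price * grid_kwh

# Brute-force search over candidate sizes in 1 kW steps.
best_kw = min(range(0, 201), key=total_cost)
```

The cost function is piecewise linear in PV size, so the optimum sits where the marginal grid savings drop below the marginal capital cost (here, once the largest load period is fully covered).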

  10. A Team-Based Approach to Improving Core Instructional Reading Practices within Response to Intervention

    ERIC Educational Resources Information Center

    Harlacher, Jason E.; Potter, Jon B.; Weber, Jill M.

    2015-01-01

    Core instruction is an important part of an effective response to intervention (RTI) model. To implement RTI effectively, school teams should regularly examine the effectiveness of their core instruction to determine if at least 80% of students meet the proficiency standard with core support alone. However, some educators may not have the skills…

  11. Phosphoproteomic Analysis of Protein Kinase C Signaling in Saccharomyces cerevisiae Reveals Slt2 Mitogen-activated Protein Kinase (MAPK)-dependent Phosphorylation of Eisosome Core Components*

    PubMed Central

    Mascaraque, Victoria; Hernáez, María Luisa; Jiménez-Sánchez, María; Hansen, Rasmus; Gil, Concha; Martín, Humberto; Cid, Víctor J.; Molina, María

    2013-01-01

    The cell wall integrity (CWI) pathway of the model organism Saccharomyces cerevisiae has been thoroughly studied as a paradigm of the mitogen-activated protein kinase (MAPK) pathway. It consists of a classic MAPK module comprising the Bck1 MAPK kinase kinase, two redundant MAPK kinases (Mkk1 and Mkk2), and the Slt2 MAPK. This module is activated under a variety of stimuli related to cell wall homeostasis by Pkc1, the only member of the protein kinase C family in budding yeast. Quantitative phosphoproteomics based on stable isotope labeling of amino acids in cell culture is a powerful tool for globally studying protein phosphorylation. Here we report an analysis of the yeast phosphoproteome upon overexpression of a PKC1 hyperactive allele that specifically activates CWI MAPK signaling in the absence of external stimuli. We found 82 phosphopeptides originating from 43 proteins that showed enhanced phosphorylation in these conditions. The MAPK S/T-P target motif was significantly overrepresented in these phosphopeptides. Hyperphosphorylated proteins provide putative novel targets of the Pkc1–cell wall integrity pathway involved in diverse functions such as the control of gene expression, protein synthesis, cytoskeleton maintenance, DNA repair, and metabolism. Remarkably, five components of the plasma-membrane-associated protein complex known as eisosomes were found among the up-regulated proteins. We show here that Pkc1-induced phosphorylation of the eisosome core components Pil1 and Lsp1 was not exerted directly by Pkc1, but involved signaling through the Slt2 MAPK module. PMID:23221999
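The S/T-P motif scan mentioned above can be sketched with a simple regular expression. The peptide strings below are hypothetical placeholders, not sequences from the study:

```python
import re

# Minimal sketch of scanning peptide sequences for the MAPK S/T-P target
# motif: a serine (S) or threonine (T) immediately followed by a proline
# (P). The peptide sequences are hypothetical placeholders.
MOTIF = re.compile(r"[ST]P")

def has_st_p_motif(peptide: str) -> bool:
    return MOTIF.search(peptide) is not None

peptides = ["LKSPQR", "AGTPEV", "LKSAQR", "TPTPGG"]
n_hits = sum(has_st_p_motif(p) for p in peptides)
```

Overrepresentation of the motif among upregulated phosphopeptides would then be assessed statistically (e.g. against the motif frequency in the background phosphoproteome), which this sketch does not attempt.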

  12. Core Noise: Overview of Upcoming LDI Combustor Test

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.

    2012-01-01

    This presentation is a technical summary of and outlook for NASA-internal and NASA-sponsored external research on core (combustor and turbine) noise funded by the Fundamental Aeronautics Program Fixed Wing Project. The presentation covers: the emerging importance of core noise due to turbofan design trends and its relevance to the NASA N+3 noise-reduction goal; the core noise components and the rationale for the current emphasis on combustor noise; and the current and planned research activities in the combustor-noise area. Two NASA-sponsored research programs, with particular emphasis on indirect combustor noise, "Acoustic Database for Core Noise Sources", Honeywell Aerospace (NNC11TA40T) and "Measurement and Modeling of Entropic Noise Sources in a Single-Stage Low-Pressure Turbine", U. Illinois/U. Notre Dame (NNX11AI74A) are briefly described. Recent progress in the development of CMC-based acoustic liners for broadband noise reduction suitable for turbofan-core application is outlined. Combustor-design trends and the potential impacts on combustor acoustics are discussed. A NASA GRC developed nine-point lean-direct-injection (LDI) fuel injector is briefly described. The modification of an upcoming thermo-acoustic instability evaluation of the GRC injector in a combustor rig to also provide acoustic information relevant to community noise is presented. The NASA Fundamental Aeronautics Program has the principal objective of overcoming today's national challenges in air transportation. The reduction of aircraft noise is critical to enabling the anticipated large increase in future air traffic. The Quiet Performance Research Theme of the Fixed Wing Project aims to develop concepts and technologies to dramatically reduce the perceived community noise attributable to aircraft with minimal impact on weight and performance.

  13. A Comparison Between Reported and Enacted Pedagogical Content Knowledge (PCK) About Graphs of Motion

    NASA Astrophysics Data System (ADS)

    Mazibe, Ernest N.; Coetzee, Corene; Gaigher, Estelle

    2018-04-01

    This paper reports a case study of four grade 10 physical sciences teachers' PCK about graphs of motion. We used three data collection strategies, namely teachers' written accounts, captured by the content representation (CoRe) tool, interviews and classroom observations. We conceptualised the PCK displayed in the CoRe tool and the interviews as reported PCK and the PCK demonstrated during teaching as enacted PCK. These two manifestations of PCK were compared to establish the extent of agreement between reported and enacted PCK. We adopted the topic-specific PCK (TSPCK) model as the framework that guided this study. This model describes TSPCK in terms of five components of teacher knowledge. Guided by the model, we designed two rubrics to assess these manifestations of TSPCK on a four-point scale. The results of this study indicated that the reported PCK was not necessarily a reflection of the PCK enacted during teaching. The levels of PCK in the components were seldom higher in the enacted PCK, but tended to be similar or lower than in the reported PCK. The study implies that the enactment of PCK should be emphasised in teacher education.

  14. Femtosecond laser processing of optical fibres for novel sensor development

    NASA Astrophysics Data System (ADS)

    Kalli, Kyriacos; Theodosiou, Antreas; Ioannou, Andreas; Lacraz, Amedee

    2017-04-01

    We present results of recent research where we have utilized a femtosecond laser to micro-structure silica and polymer optical fibres in order to realize versatile optical components such as diffractive optical elements on the fibre end face, the inscription of integrated waveguide circuits in the fibre cladding and novel optical fibre sensors designs based on Bragg gratings in the core. A major hurdle in tailoring or modifying the properties of optical fibres is the development of an inscription method that can prove to be a flexible and reliable process that is generally applicable to all optical fibre types; this requires careful matching of the laser parameters and optics in order to examine the spatial limits of direct laser writing, whether the application is structuring at the surface of the optical fibre or inscription in the core and cladding of the fibre. We demonstrate a variety of optical components such as two-dimensional grating structures, Bessel, Airy and vortex beam generators; moreover, optical bridging waveguides inscribed in the cladding of single-mode fibre as a means to selectively couple light from single-core to multi-core optical fibres, and demonstrate a grating based sensor; finally, we have developed a novel femtosecond laser inscription method for the precise inscription of tailored Bragg grating sensors in silica and polymer optical fibres. We also show that this novel fibre Bragg grating inscription technique can be used to modify and add versatility to an existing, encapsulated optical fibre pressure sensor.

  15. Probabilistic Graphical Model Representation in Phylogenetics

    PubMed Central

    Höhna, Sebastian; Heath, Tracy A.; Boussau, Bastien; Landis, Michael J.; Ronquist, Fredrik; Huelsenbeck, John P.

    2014-01-01

    Recent years have seen a rapid expansion of the model space explored in statistical phylogenetics, emphasizing the need for new approaches to statistical model representation and software development. Clear communication and representation of the chosen model is crucial for: (i) reproducibility of an analysis, (ii) model development, and (iii) software design. Moreover, a unified, clear and understandable framework for model representation lowers the barrier for beginners and nonspecialists to grasp complex phylogenetic models, including their assumptions and parameter/variable dependencies. Graphical modeling is a unifying framework that has gained in popularity in the statistical literature in recent years. The core idea is to break complex models into conditionally independent distributions. The strength lies in the comprehensibility, flexibility, and adaptability of this formalism, and the large body of computational work based on it. Graphical models are well-suited to teach statistical models, to facilitate communication among phylogeneticists and in the development of generic software for simulation and statistical inference. Here, we provide an introduction to graphical models for phylogeneticists and extend the standard graphical model representation to the realm of phylogenetics. We introduce a new graphical model component, tree plates, to capture the changing structure of the subgraph corresponding to a phylogenetic tree. We describe a range of phylogenetic models using the graphical model framework and introduce modules to simplify the representation of standard components in large and complex models. Phylogenetic model graphs can be readily used in simulation, maximum likelihood inference, and Bayesian inference using, for example, Metropolis–Hastings or Gibbs sampling of the posterior distribution. [Computation; graphical models; inference; modularization; statistical phylogenetics; tree plate.] PMID:24951559
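The core idea named in the abstract, breaking a complex model into conditionally independent distributions, can be shown with a toy three-node chain. The conditional probability tables below are hypothetical and carry no phylogenetic meaning:

```python
# Toy illustration of graphical-model factorization: the joint
# distribution over the chain A -> B -> C breaks into a product of
# per-node conditionals, P(a, b, c) = P(a) * P(b|a) * P(c|b).
# All probability tables are hypothetical.
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}
P_C_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    """P(A=a, B=b, C=c) from the factorized conditionals."""
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

# A valid factorization must still sum to 1 over all joint states.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
```

In a phylogenetic graphical model the nodes would instead be priors, rates, branch lengths, and tree structures (the paper's tree plates), but the factorization principle is the same.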

  16. Core-level photoemission investigation of atomic-fluorine adsorption on GaAs(110)

    NASA Astrophysics Data System (ADS)

    McLean, A. B.; Terminello, L. J.; McFeely, F. R.

    1989-12-01

    The adsorption of atomic F on the cleaved GaAs(110) surface has been studied with use of high-resolution core-level photoelectron spectroscopy by exposing the GaAs(110) surfaces to XeF2, which adsorbs dissociatively, leaving atomic F behind. This surface reaction produces two chemically shifted components in the Ga 3d core-level emission which are attributed to an interfacial monofluoride and a stable trifluoride reaction product, respectively. The As 3d core level develops only one chemically shifted component and from its exposure-dependent behavior it is attributed to an interfacial monofluoride. Least-squares analysis of the core-level line shapes revealed that (i) the F bonds to both the anion and the cation, (ii) the GaF3 component (characteristic of strong interfacial reaction) and the surface core-level shifted component (characteristic of a well-ordered, atomically clean surface) are present together over a relatively large range of XeF2 exposures, and (iii) it is the initial disruption of the GaAs(110) surface that is the rate-limiting step in this surface reaction. These results are compared with similar studies of Cl and O adsorption on GaAs(110).
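A minimal sketch of the component-decomposition step behind such a line-shape analysis: with the positions of the bulk and chemically shifted lines held fixed, the component amplitudes follow from linear least squares. Gaussian line shapes and all numbers here are illustrative stand-ins for the Voigt-type profiles and real spectra used in core-level fitting:

```python
import numpy as np

# Decompose a synthetic "core-level spectrum" into a bulk line plus one
# chemically shifted component. Peak positions, widths, the Gaussian
# line shape, and the 3:1 mixture are all illustrative assumptions.
E = np.linspace(-2.0, 3.0, 200)          # relative binding energy (eV)

def gauss(E, center, sigma=0.3):
    return np.exp(-0.5 * ((E - center) / sigma) ** 2)

bulk = gauss(E, 0.0)                      # unshifted bulk emission
shifted = gauss(E, 1.0)                   # chemically shifted line

# Synthetic "measured" spectrum: 3 parts bulk to 1 part shifted.
spectrum = 3.0 * bulk + 1.0 * shifted

# With fixed component shapes, amplitudes are a linear least-squares fit.
A = np.column_stack([bulk, shifted])
amps, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
```

Real analyses additionally fit positions, widths, and backgrounds nonlinearly; fixing the shapes reduces the problem to the linear step shown here.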

  17. GalaxyTBM: template-based modeling by building a reliable core and refining unreliable local regions.

    PubMed

    Ko, Junsu; Park, Hahnbeom; Seok, Chaok

    2012-08-10

    Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.
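The GDT-TS score used above to compare the servers can be sketched as follows. The distance list is illustrative, and the optimal model-to-native superposition that precedes the distance calculation is assumed, not performed:

```python
# Sketch of the GDT-TS score: the average, over distance cutoffs of
# 1, 2, 4 and 8 Angstroms, of the percentage of residues whose
# model/native C-alpha distance falls within the cutoff. Distances are
# assumed to come from an optimal superposition (not performed here);
# the distance list is illustrative.
def gdt_ts(distances):
    n = len(distances)
    fractions = [sum(d <= cut for d in distances) / n for cut in (1, 2, 4, 8)]
    return 100.0 * sum(fractions) / 4

ca_distances = [0.5, 0.9, 1.5, 3.0, 3.5, 6.0, 9.0, 12.0]  # Angstroms
score = gdt_ts(ca_distances)
```

A perfect model scores 100; the averages of 68.1 and 68.4 quoted in the abstract are on this same 0-100 scale.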

  18. Primitive components, crustal assimilation, and magmatic degassing of the 2008 Kilauea summit eruption

    USGS Publications Warehouse

    Rowe, Michael C.; Thornber, Carl R.; Orr, Tim R.

    2015-01-01

    Simultaneous summit and rift zone eruptions at Kīlauea starting in 2008 reflect a shallow eruptive plumbing system inundated by a burgeoning supply of new magma from depth. Olivine-hosted melt inclusions, host glass, and bulk lava compositions of magma erupted at both the summit and east rift zone demonstrate chemical continuity at both ends of a well-worn summit-to-rift pipeline. Analysis of glass within dense-cored lapilli erupted from the summit in March–August 2008 shows that these are not samplings of compositionally distinct magmas stored in the shallow summit magma reservoir, but instead result from remelting and assimilation of fragments from conduit wall and vent blocks. Summit pyroclasts show the predominant and most primitive component erupted to be a homogeneous, relatively trace-element-depleted melt that is compositionally indistinguishable from east rift lava. Based on a “top-down” model for the geochemical variation in east rift zone lava over the past 30 years, we suggest that the apparent absence of a 1982 enriched component in melt inclusions, as well as the proposed summit-rift zone connectivity based on sulfur and mineral chemistry, indicate that the last of the pre-1983 magma has been flushed out of the summit reservoir during the surge of mantle-derived magma from 2003-2007.

  19. Understanding Preservice Teachers' Technology Use through TPACK Framework

    ERIC Educational Resources Information Center

    Pamuk, S.

    2012-01-01

    This study discusses preservice teachers' achievement barriers to technology integration, using principles of technological pedagogical content knowledge (TPACK) as an evaluative framework. Technology-capable participants each freely chose a content area to comprise project. Data analysis based on interactions among core components of TPACK…

  20. Interaction and Instructed Second Language Acquisition

    ERIC Educational Resources Information Center

    Loewen, Shawn; Sato, Masatoshi

    2018-01-01

    Interaction is an indispensable component in second language acquisition (SLA). This review surveys the instructed SLA research, both classroom and laboratory-based, that has been conducted primarily within the interactionist approach, beginning with the core constructs of interaction, namely input, negotiation for meaning, and output. The review…

  1. Managing Substitute Teaching.

    ERIC Educational Resources Information Center

    Jones, Kevin R.

    1999-01-01

    This news brief presents information on managing substitute teaching. The information is based on issues discussed at a summit meeting which included public school administrators and personnel directors from around the nation. The main topics of concern focused around four core components related to the management of substitute teaching:…

  2. Tuning optical properties of water-soluble CdTe quantum dots for biological applications

    NASA Astrophysics Data System (ADS)

    Schulze, Anne S.; Tavernaro, Isabella; Machka, Friederike; Dakischew, Olga; Lips, Katrin S.; Wickleder, Mathias S.

    2017-02-01

    In this study, two different synthetic methods in aqueous solution are presented to tune the optical properties of CdTe and CdSe semiconductor nanoparticles. Additionally, the influence of different temperatures, pressures, precursor ratios, surface ligands, bases, and core components in the synthesis was investigated with regard to the particle sizes and optical properties. As a result, a red shift of the emission and absorption maxima with increasing reaction temperature (100 to 220°C), pressure (1 to 25 bar), and different ratios of core components of alloyed semiconductor nanoparticles could be observed without a change of the particle size. An increase in particle size from 2.5 to 5 nm was only achieved by variation of the mercaptocarboxylic acid ligands in combination with the reaction time and the base used. To get a first indication of the cytotoxic effects and cell uptake of the synthesized quantum dots, in vitro tests with mesenchymal stem cells (MSCs) were carried out.

  3. A new method to quantitatively compare focal ratio degradation due to different end termination techniques

    NASA Astrophysics Data System (ADS)

    Poppett, Claire; Allington-Smith, Jeremy

    2010-07-01

    We investigate the FRD performance of a 150 μm core fibre for its suitability to the SIDE project [1]. This work builds on our previous work [2] (Paper 1), where we examined the dependence of FRD on length in fibres with a core size of 100 μm and proposed a new multi-component model to explain the results. In order to predict the FRD characteristics of a fibre, the most commonly used model is an adaptation of the Gloge model [8] by Carrasco and Parry [3], which quantifies the number of scattering defects within an optical fibre using a single parameter, d0. The model predicts many trends which are seen experimentally, for example, a decrease in FRD as core diameter increases, and also as wavelength increases. However, the model also predicts a strong dependence of FRD on length that is not seen experimentally. By adapting the single-fibre model to include a second fibre, we can quantify the amount of FRD due to stress caused by the method of termination. By fitting the model to experimental data we find that polishing the fibre induces a small increase in stress at the end of the fibre compared to a simple cleave technique.

  4. Influence of precipitating light elements on stable stratification below the core/mantle boundary

    NASA Astrophysics Data System (ADS)

    O'Rourke, J. G.; Stevenson, D. J.

    2017-12-01

    Stable stratification below the core/mantle boundary is often invoked to explain anomalously low seismic velocities in this region. Diffusion of light elements like oxygen or, more slowly, silicon could create a stabilizing chemical gradient in the outermost core. Heat flow less than that conducted along the adiabatic gradient may also produce thermal stratification. However, reconciling either origin with the apparent longevity (>3.45 billion years) of Earth's magnetic field remains difficult. Sub-isentropic heat flow would not drive a dynamo by thermal convection before the nucleation of the inner core, which likely occurred less than one billion years ago and did not instantly change the heat flow. Moreover, an oxygen-enriched layer below the core/mantle boundary—the source of thermal buoyancy—could establish double-diffusive convection where motion in the bulk fluid is suppressed below a slowly advancing interface. Here we present new models that explain both stable stratification and a long-lived dynamo by considering ongoing precipitation of magnesium oxide and/or silicon dioxide from the core. Lithophile elements may partition into iron alloys under extreme pressure and temperature during Earth's formation, especially after giant impacts. Modest core/mantle heat flow then drives compositional convection—regardless of thermal conductivity—since their solubility is strongly temperature-dependent. Our models begin with bulk abundances for the mantle and core determined by the redox conditions during accretion. We then track equilibration between the core and a primordial basal magma ocean followed by downward diffusion of light elements. Precipitation begins at a depth that is most sensitive to temperature and oxygen abundance and then creates feedbacks with the radial thermal and chemical profiles. 
Successful models feature a stable layer with low seismic velocity (which mandates multi-component evolution, since a single light element typically increases seismic velocity) growing to its present-day size while allowing enough precipitation to drive compositional convection below. Crucially, this modeling offers unique constraints on Earth's accretion and the light element composition of the core compared to degenerate estimates derived from bulk density and seismic measurements.

  5. Variations of Strahl Properties with Fast and Slow Solar Wind

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo; Goldstein, Melvyn L.; Gurgiolo, Chris

    2008-01-01

    The interplanetary solar wind electron velocity distribution function generally shows three different populations. Two of the components, the core and halo, have been the most intensively analyzed and modeled populations using different theoretical models. The third component, the strahl, is usually seen at higher energies, is confined in pitch angle, and is highly field-aligned and skewed. This population has been more difficult to identify and to model in the solar wind. In this work we make use of the high angular, energy, and time resolution of the three-dimensional data from the Cluster/PEACE electron spectrometer to identify and analyze this component in the ambient solar wind during fast and slow solar wind conditions. The moment density and fluid velocity have been computed by a semi-numerical integration method. The variations of strahl density and drift velocity with the bulk solar wind speed could provide some insight into the source, origin, and evolution of the strahl.

  6. Parallelization of combinatorial search when solving knapsack optimization problem on computing systems based on multicore processors

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This scientific paper deals with a model of the knapsack optimization problem and a method of solving it based on directed combinatorial search in the boolean space. The author's specialized mathematical model for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the different cores of a multi-core processor, are also discussed. The paper provides an example of decomposing the search zone into several search spheres and distributing them across the cores of a quad-core processor. Finally, the author gives a formula for estimating the theoretical maximum computational acceleration that can be achieved by parallelizing the search zone into search spheres over an unlimited number of processor cores.
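The decomposition idea can be sketched for a small knapsack instance: fixing the first k item bits defines 2^k independent "search spheres", each of which could be handed to a separate processor core. Here the spheres are solved sequentially for clarity, and the instance (weights, values, capacity) is illustrative, not taken from the paper:

```python
from itertools import product

# Search-zone decomposition for 0/1 knapsack over the boolean space:
# fixing the first k item bits splits the 2**n assignments into 2**k
# independent search spheres. Instance data are illustrative.
weights = [3, 4, 5, 2]
values  = [4, 5, 6, 3]
capacity = 9
n, k = len(weights), 2                    # fix the first k bits per sphere

def best_in_sphere(prefix):
    """Exhaustively search one sphere: all completions of a fixed prefix."""
    best = 0
    for tail in product((0, 1), repeat=n - k):
        x = prefix + tail
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        if w <= capacity:
            best = max(best, sum(vi for vi, xi in zip(values, x) if xi))
    return best

spheres = list(product((0, 1), repeat=k))  # 2**k independent subproblems
best_value = max(best_in_sphere(p) for p in spheres)
```

Because the spheres share no state until the final `max`, they map naturally onto separate cores (e.g. via `multiprocessing.Pool.map`); the achievable speedup is bounded by the smaller of the core count and the number of spheres, a generic observation rather than the author's specific formula.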

  7. Research on Shock Responses of Three Types of Honeycomb Cores

    NASA Astrophysics Data System (ADS)

    Peng, Fei; Yang, Zhiguang; Jiang, Liangliang; Ren, Yanting

    2018-03-01

    The shock responses of three kinds of honeycomb cores have been investigated and analyzed based on explicit dynamics analysis. According to the real geometric configuration and the current main manufacturing methods of aluminum alloy honeycomb cores, finite element models of honeycomb cores with three different cellular configurations (conventional hexagonal honeycomb core, rectangular honeycomb core, and auxetic honeycomb core with negative Poisson's ratio) were established through an FEM parametric modeling method based on Python and Abaqus. To highlight the impact response characteristics of the three honeycomb cores, a 5 mm thick panel with the same mass and material was used for comparison. The analysis results showed that the peak values of the longitudinal acceleration history curves of the three honeycomb cores were lower than those of the aluminum alloy panel at all three reference points under the loading of a longitudinal pulse pressure with a peak value of 1 MPa and a pulse width of 1 μs. It can be concluded that, due to the complex reflection and diffraction of the shock-induced stress wave in honeycomb structures, the impact energy was redistributed, which led to a decrease in the peak values of the longitudinal acceleration at the measuring points of the honeycomb cores relative to the panel.

  8. Integration of Treatment Innovation Planning and Implementation: Strategic Process Models and Organizational Challenges

    PubMed Central

    Lehman, Wayne E. K.; Simpson, D. Dwayne; Knight, Danica K.; Flynn, Patrick M.

    2015-01-01

    Sustained and effective use of evidence-based practices in substance abuse treatment services faces both clinical and contextual challenges. Implementation approaches are reviewed that rely on variations of plan-do-study-act (PDSA) cycles, but most emphasize conceptual identification of core components for system change strategies. A 2-phase procedural approach is therefore presented based on the integration of TCU models and related resources for improving treatment process and program change. Phase 1 focuses on the dynamics of clinical services, including stages of client recovery (cross-linked with targeted assessments and interventions), as the foundations for identifying and planning appropriate innovations to improve efficiency and effectiveness. Phase 2 shifts to the operational and organizational dynamics involved in implementing and sustaining innovations (including the stages of training, adoption, implementation, and practice). A comprehensive system of TCU assessments and interventions for client and program-level needs and functioning are summarized as well, with descriptions and guidelines for applications in practical settings. PMID:21443294

  9. Structural Color Tuning: Mixing Melanin-Like Particles with Different Diameters to Create Neutral Colors.

    PubMed

    Kawamura, Ayaka; Kohri, Michinari; Yoshioka, Shinya; Taniguchi, Tatsuo; Kishikawa, Keiki

    2017-04-18

    We present the ability to tune structural colors by mixing colloidal particles. To produce high-visibility structural colors, melanin-like core-shell particles composed of a polystyrene (PSt) core and a polydopamine (PDA) shell were used as components. The results indicated that neutral structural colors could be successfully obtained by simply mixing two differently sized melanin-like PSt@PDA core-shell particles. In addition, the arrangements of the particles, which are important factors when forming structural colors, were investigated by mathematical processing using a 2D Fourier transform technique and Voronoi diagrams. These findings provide new insights for the development of structural color-based ink applications.
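
    A Voronoi analysis of the kind mentioned can be sketched with SciPy. The particle positions below are random stand-ins, not coordinates extracted from the paper's microscopy images; the idea is that the distribution of Voronoi cell side counts characterizes local packing order.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
# hypothetical 2D particle centres standing in for positions digitized
# from an electron-microscopy image of the particle assembly
points = rng.uniform(0.0, 10.0, size=(200, 2))
vor = Voronoi(points)

# count the sides of each bounded Voronoi cell; a narrow distribution
# around 6 would indicate a locally ordered (hexagonal) packing, while
# a broad one indicates the amorphous arrangements behind neutral colors
sides = [len(r) for r in vor.regions if r and -1 not in r]
print(np.mean(sides))
```

    For a 2D point pattern the mean number of sides of bounded cells is close to six regardless of order, so the width of the distribution, not its mean, is the useful order diagnostic.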

  10. The extreme blazar AO 0235+164 as seen by extensive ground and space radio observations

    NASA Astrophysics Data System (ADS)

    Kutkin, A. M.; Pashchenko, I. N.; Lisakov, M. M.; Voytsik, P. A.; Sokolovsky, K. V.; Kovalev, Y. Y.; Lobanov, A. P.; Ipatov, A. V.; Aller, M. F.; Aller, H. D.; Lahteenmaki, A.; Tornikoski, M.; Gurvits, L. I.

    2018-04-01

    Clues to the physical conditions in the radio cores of blazars come from measurements of brightness temperatures as well as from effects produced by intrinsic opacity. We study the properties of the ultra-compact blazar AO 0235+164 with the RadioAstron ground-space radio interferometer, multifrequency VLBA, EVN, and single-dish radio observations. We employ visibility modelling and image stacking to derive the structure and kinematics of the source, and use Gaussian process regression to find the relative multiband time delays of the flares. The multifrequency core size and time lags support prevailing synchrotron self-absorption. The intrinsic brightness temperature of the core derived from ground-based very long baseline interferometry (VLBI) is close to the equipartition value. At the same time, there is evidence for ultra-compact features smaller than 10 μas in the source, which might be responsible for the extreme apparent brightness temperatures of up to 10^14 K measured by RadioAstron. In 2007-2016 the VLBI components in the source at 43 GHz are found predominantly in two directions, suggesting a bend of the outflow from a southern to a northern direction. The apparent opening angle of the jet seen in the stacked image at 43 GHz is two times wider than that at 15 GHz, indicating a collimation of the flow within the central 1.5 mas. We estimate the Lorentz factor Γ = 14, the Doppler factor δ = 21, and the viewing angle θ = 1.7° of the apparent jet base, and derive the gradients of magnetic field strength and electron density in the outflow, as well as the distance between the jet apex and the core at each frequency.
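
    The quoted kinematic quantities are tied together by the standard relativistic beaming relations β_app = β sin θ / (1 − β cos θ) and δ = 1/[Γ(1 − β cos θ)]. The sketch below evaluates these relations for the quoted Γ and θ; the published δ = 21 comes from a fuller joint fit, so this simple point evaluation gives a nearby but not identical value.

```python
import math

def apparent_speed(gamma, theta_deg):
    """Apparent transverse speed (units of c) for Lorentz factor gamma
    and viewing angle theta: beta*sin(theta) / (1 - beta*cos(theta))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))

def doppler_factor(gamma, theta_deg):
    """Relativistic Doppler factor: 1 / (gamma * (1 - beta*cos(theta)))."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    th = math.radians(theta_deg)
    return 1.0 / (gamma * (1.0 - beta * math.cos(th)))

# values quoted in the abstract for the jet base of AO 0235+164
print(apparent_speed(14.0, 1.7), doppler_factor(14.0, 1.7))
```
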

  11. STRUCTURES OF THE VELA PULSAR AND THE GLITCH CRISIS FROM THE BRUECKNER THEORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, A.; Dong, J. M.; Wang, J. B.

    Detailed structures of the Vela pulsar (PSR B0833-45, with a period of 89.33 ms) are predicted by adopting a recently constructed unified treatment of all parts of neutron stars: the outer crust, the inner crust, and the core, based on modern microscopic Brueckner–Hartree–Fock calculations. Taking a pulsar mass in the range from 1.0 to 2.0 M⊙, we calculate the central density, the core/crust radii, the core/crustal mass, the core/crustal thickness, the moment of inertia, and the crustal moment of inertia. Among them, the crustal moment of inertia could be effectively constrained from the accumulated glitch observations, which has recently been the subject of great debate, known as the "glitch crisis." Namely, superfluid neutrons contained in the inner crust, which are regarded as the origin of the glitch in the standard two-component model, could be largely entrained in the nuclei lattices, and then there may not be enough superfluid neutrons (∼4/5 less than the previous value) to trigger the large glitches (Δν/ν₀ ∼ 10⁻⁶) in the Vela pulsar. By confronting the glitch observations with the theoretical calculations for the crustal moment of inertia, we find that despite some recent opposition to the crisis argument, the glitch crisis is still present, which means that besides the crustal superfluid neutrons, core neutrons might be necessary for explaining the large glitches of the Vela pulsar.

  12. Chemical Convection in the Lunar Core from Melting Experiments on the Iron-Sulfur System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J.; Liu, J.; Chen, B.

    2012-03-26

    By reanalyzing Apollo lunar seismograms using array-processing methods, a recent study suggests that the Moon has a solid inner core and a fluid outer core, much like the Earth. The volume fraction of the lunar inner core is 38%, compared with 4% for the Earth. The pressure at the Moon's core-mantle boundary is 4.8 GPa, and that at the inner core boundary (ICB) is 5.2 GPa. The partially molten state of the lunar core provides constraints on the thermal and chemical states of the Moon: the temperature at the ICB corresponds to the liquidus of the outer core composition, and the mass fraction of the solid core allows us to infer the bulk composition of the core from an estimated thermal profile. Moreover, knowledge of the extent of core solidification can be used to evaluate the role of chemical convection in the origin of the early lunar core dynamo. Sulfur is considered an antifreeze component in the lunar core. Here we investigate the melting behavior of the Fe-S system at the pressure conditions of the lunar core, using the multi-anvil apparatus together with synchrotron and laboratory-based analytical methods. Our goal is to understand compositionally driven convection in the lunar core and assess its role in generating an internal magnetic field in the early history of the Moon.

  13. GVIPS Models and Software

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Gendy, Atef; Saleeb, Atef F.; Mark, John; Wilt, Thomas E.

    2007-01-01

    Two reports discuss, respectively, (1) the generalized viscoplasticity with potential structure (GVIPS) class of mathematical models and (2) the Constitutive Material Parameter Estimator (COMPARE) computer program. GVIPS models are constructed within a thermodynamics- and potential-based theoretical framework, wherein one uses internal state variables and derives constitutive equations for both the reversible (elastic) and the irreversible (viscoplastic) behaviors of materials. Because of the underlying potential structure, GVIPS models not only capture a variety of material behaviors but also are very computationally efficient. COMPARE comprises (1) an analysis core and (2) a C++-language subprogram that implements a Windows-based graphical user interface (GUI) for controlling the core. The GUI relieves the user of the sometimes tedious task of preparing data for the analysis core, freeing the user to concentrate on the task of fitting experimental data and ultimately obtaining a set of material parameters. The analysis core consists of three modules: one for GVIPS material models, an analysis module containing a specialized finite-element solution algorithm, and an optimization module. COMPARE solves the problem of finding GVIPS material parameters in the manner of a design-optimization problem in which the parameters are the design variables.
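
    COMPARE's parameter search can be pictured as an ordinary nonlinear least-squares problem in which the material parameters are the design variables. The sketch below fits a deliberately simplified saturating stress-strain law (a stand-in, not an actual GVIPS constitutive model) to synthetic data with SciPy; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def model(params, eps):
    """Hypothetical saturating stress-strain response:
    sigma(eps) = A * (1 - exp(-eps / B)), a stand-in for a GVIPS model."""
    A, B = params
    return A * (1.0 - np.exp(-eps / B))

# synthetic "experimental" data generated with known parameters plus noise
eps = np.linspace(0.0, 0.05, 25)
true_params = (300.0, 0.01)
rng = np.random.default_rng(1)
data = model(true_params, eps) + rng.normal(0.0, 1.0, eps.size)

# treat the material parameters as design variables, as COMPARE does,
# and minimize the residual between model response and experimental data
fit = least_squares(lambda p: model(p, eps) - data, x0=(100.0, 0.05))
print(fit.x)  # recovers values close to (300, 0.01)
```

    COMPARE embeds the model evaluation in a finite-element solution rather than a closed-form curve, but the fit-parameters-as-design-variables structure is the same.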

  14. Comparative evaluation of the indigenous microbial diversity vs. drilling fluid contaminants in the NEEM Greenland ice core.

    PubMed

    Miteva, Vanya; Burlingame, Caroline; Sowers, Todd; Brenchley, Jean

    2014-08-01

    Demonstrating that the detected microbial diversity in nonaseptically drilled deep ice cores is truly indigenous is challenging because of potential contamination with exogenous microbial cells. The NEEM Greenland ice core project provided a first-time opportunity to determine the origin and extent of contamination throughout drilling. We performed multiple parallel cultivation and culture-independent analyses of five decontaminated ice core samples from different depths (100-2051 m), the drilling fluid and its components Estisol and Coasol, and the drilling chips collected during drilling. We created a collection of diverse bacterial and fungal isolates (84 from the drilling fluid and its components, 45 from decontaminated ice, and 66 from drilling chips). Their categorization as contaminants or intrinsic glacial ice microorganisms was based on several criteria, including phylogenetic analyses, genomic fingerprinting, phenotypic characteristics, and presence in drilling fluid, chips, and/or ice. Firmicutes and fungi comprised the dominant group of contaminants among isolates and cloned rRNA genes. Conversely, most Proteobacteria and Actinobacteria originating from the ice were identified as intrinsic. This study provides a database of potential contaminants useful for future studies of NEEM cores and can contribute toward developing standardized protocols for contamination detection and ensuring the authenticity of the microbial diversity in deep glacial ice. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.

  15. Three dielectric constants and orientation order parameters in nematic mesophases

    NASA Astrophysics Data System (ADS)

    Yoon, Hyung Guen; Jeong, Seung Yeon; Kumar, Satyendra; Park, Min Sang; Park, Jung Ok; Srinivasarao, M.; Shin, Sung Tae

    2011-03-01

    Temperature dependence of the three components ɛ1 , ɛ2 , and ɛ3 of dielectric constant and orientation order parameters in the nematic phase of mesogens with rod, banana, and zero-order dendritic shape were measured using the in-plane and vertical switching geometries, and micro-Raman technique. Results on the well-known uniaxial (Nu) nematogens, E7 and 5CB, revealed two components ɛ1 = ~ɛ| | and ɛ2 = ~ɛ3 = ~ɛ⊥ , as expected. The three dielectric constants were different for two azo substituted (A131 and A103) and an oxadiazole based (ODBP-Ph-C12) bent core mesogens, and a Ge core tetrapode. In some cases, two of the components became the same indicating a loss of biaxiality at temperatures coinciding with the previously reported Nu to biaxial nematic transition. This interpretation is substantiated by micro-Raman measurements of the uniaxial and biaxial nematic order parameters. Supported by the US Department of Energy, Basic Energy Sciences grant ER46572 and by Samsung Electronics Corporation.

  16. Group delay spread analysis of coupled-multicore fibers: A comparison between weak and tight bending conditions

    NASA Astrophysics Data System (ADS)

    Fujisawa, Takeshi; Saitoh, Kunimasa

    2017-06-01

    The group delay spread (GDS) of a coupled three-core fiber is investigated based on coupled-wave theory. The differences between the supermode and discrete core mode models are thoroughly investigated to reveal the applicability of both models under specific fiber bending conditions. Macrobending with random twisting is taken into account to model random modal mixing in the fiber. It is found that under weak bending, both the supermode and discrete core mode models are applicable. On the other hand, under strong bending, the discrete core mode model should be used to account for the increased differential modal group delay for a fiber without twisting and with short correlation length, which was experimentally observed recently. The results presented in this paper indicate that the discrete core mode model is superior to the supermode model for the analysis of coupled-multicore fibers under various bending conditions. Also, for estimating the GDS of a coupled-multicore fiber, it is critically important to take the fiber bending condition into account.
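
    The supermode picture contrasted with the discrete-core-mode model above can be illustrated for the simplest case: three identical, symmetrically coupled cores, where the supermode propagation constants are the eigenvalues of the coupling matrix. The numerical values below are hypothetical, not taken from the paper.

```python
import numpy as np

# identical cores with propagation constant beta0 and pairwise
# coupling coefficient kappa (both values illustrative)
beta0, kappa = 5.8e6, 2.0e2   # rad/m
C = beta0 * np.eye(3) + kappa * (np.ones((3, 3)) - np.eye(3))

# supermode propagation constants = eigenvalues of the coupling matrix;
# for the symmetric triangle they split as beta0 - kappa (doubly
# degenerate) and beta0 + 2*kappa
betas = np.linalg.eigvalsh(C)
print(betas - beta0)  # -> [-kappa, -kappa, 2*kappa]
```

    Bending perturbs the diagonal entries (detunes the cores), lifting this degeneracy; once the detuning exceeds kappa, the eigenvectors localize to individual cores and the discrete-core-mode description becomes the natural one.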

  17. A Thorough View of the Nuclear Region of NGC 253: Combined Herschel, SOFIA, and APEX Data Set

    NASA Astrophysics Data System (ADS)

    Pérez-Beaupuits, J. P.; Güsten, R.; Harris, A.; Requena-Torres, M. A.; Menten, K. M.; Weiß, A.; Polehampton, E.; van der Wiel, M. H. D.

    2018-06-01

    We present a large set of spectral lines detected in the 40″ central region of the starburst galaxy NGC 253. Observations were obtained with the three instruments SPIRE, PACS, and HIFI on board the Herschel Space Observatory, upGREAT on board the SOFIA airborne observatory, and the ground-based Atacama Pathfinder EXperiment telescope. Combining the spectral and photometry products of SPIRE and PACS, we model the dust continuum spectral energy distribution (SED) and the most complete 12CO line SED reported so far toward the nuclear region of NGC 253. The properties and excitation of the molecular gas were derived from a three-component non-LTE radiative transfer model, using the SPIRE 13CO lines and ground-based observations of the lower-J 13CO and HCN lines, to constrain the model parameters. Three dust temperatures were identified from the continuum emission, and three components are needed to fit the full CO line SED. Only the third CO component (fitting mostly the HCN and PACS 12CO lines) is consistent with a shock-/mechanical-heating scenario. A hot core chemistry is also argued as a plausible scenario to explain the high-J 12CO lines detected with PACS. The effect of enhanced cosmic-ray ionization rates, however, cannot be ruled out and is expected to play a significant role in the diffuse and dense gas chemistry. This is supported by the detection of ionic species like OH+ and H2O+, as well as the enhanced fluxes of the OH lines with respect to those of H2O lines detected in both PACS and SPIRE spectra.
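
    Dust continuum SED components like the three dust temperatures fitted here are commonly modeled as modified blackbodies ("graybodies"). The sketch below is a generic such function with illustrative parameters, not the paper's actual fit or fitted values.

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
K = 1.381e-23   # Boltzmann constant, J/K
C = 2.998e8     # speed of light, m/s

def graybody(nu, T, beta, tau0, nu0=1.2e12):
    """Modified blackbody flux shape often used for dust continuum SEDs:
    B_nu(T) * (1 - exp(-tau)), with tau = tau0 * (nu/nu0)**beta.
    T, beta, tau0, nu0 here are illustrative assumptions."""
    bnu = 2.0 * H * nu**3 / C**2 / (np.exp(H * nu / (K * T)) - 1.0)
    return bnu * (1.0 - np.exp(-tau0 * (nu / nu0)**beta))

# on the optically thin, low-frequency side the spectrum rises steeply
low, high = graybody(3.0e11, 35.0, 1.8, 0.1), graybody(6.0e11, 35.0, 1.8, 0.1)
print(high / low)
```

    A multi-temperature fit sums several such components and adjusts T, beta, and tau0 per component against the SPIRE/PACS photometry.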

  18. A Strong Radio Brightening at the Jet Base of M 87 during the Elevated Very High Energy Gamma-Ray State in 2012

    NASA Astrophysics Data System (ADS)

    Hada, K.; Giroletti, M.; Kino, M.; Giovannini, G.; D'Ammando, F.; Cheung, C. C.; Beilicke, M.; Nagai, H.; Doi, A.; Akiyama, K.; Honma, M.; Niinuma, K.; Casadio, C.; Orienti, M.; Krawczynski, H.; Gómez, J. L.; Sawada-Satoh, S.; Koyama, S.; Cesarini, A.; Nakahara, S.; Gurwell, M. A.

    2014-06-01

    We report our intensive, high angular resolution radio monitoring observations of the jet in M 87 with the VLBI Exploration of Radio Astrometry (VERA) and the European VLBI Network (EVN) from 2011 February to 2012 October, together with contemporaneous high-energy (100 MeV < E < 100 GeV) γ-ray observations and coincident very-high-energy (VHE) activity detected by VERITAS. We detected a remarkable (up to ~70%) increase of the radio flux density from the unresolved jet base (radio core) with VERA at 22 and 43 GHz coincident with the VHE activity. Meanwhile, we confirmed with EVN at 5 GHz that the peculiar knot, HST-1, which is an alternative favored γ-ray production site located at ≳120 pc from the nucleus, remained quiescent in terms of its flux density and structure. These results in the radio bands strongly suggest that the VHE γ-ray activity in 2012 originates in the jet base within 0.03 pc or 56 Schwarzschild radii (the VERA spatial resolution of 0.4 mas at 43 GHz) from the central supermassive black hole. We further conducted VERA astrometry for the M 87 core at six epochs during the flaring period, and detected core shifts between 22 and 43 GHz, the mean value of which is similar to that measured in previous astrometric measurements. We also discovered a clear frequency-dependent evolution of the radio core flare at 43, 22, and 5 GHz: the radio flux density increased more rapidly at higher frequencies with a larger amplitude, and the light curves clearly showed a time lag between the peaks at 22 and 43 GHz, the value of which is constrained to be within ~35-124 days. This indicates that a new radio-emitting component was created near the black hole in the period of the VHE event, and then propagated outward with progressively decreasing synchrotron opacity. By combining the obtained core shift and time lag, we estimated an apparent speed of the newborn component propagating through the opaque region between the cores at 22 and 43 GHz. We derived a sub-luminal speed (less than ~0.2c) for this component.
This value is significantly slower than the super-luminal (~1.1c) features that appeared from the core during the prominent VHE flaring event in 2008, suggesting that stronger VHE activity can be associated with the production of a higher Lorentz factor jet in M 87.
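
    The core shift + time lag argument can be sketched numerically: the angular core shift converts to a projected distance at the distance of M 87, and dividing by the lag gives an apparent speed. The shift and lag values below are hypothetical stand-ins (the lag is merely mid-range of the quoted ~35-124 days), and the distance to M 87 is assumed to be ~16.7 Mpc; none of these are the paper's measured numbers.

```python
import math

C = 2.998e8            # speed of light, m/s
MPC = 3.086e22         # metres per megaparsec
D_M87 = 16.7 * MPC     # assumed distance to M 87 (~16.7 Mpc)

def apparent_speed_c(core_shift_mas, lag_days):
    """Apparent speed (units of c) of a component crossing the projected
    separation between two frequency-dependent core positions."""
    shift_rad = math.radians(core_shift_mas / 1000.0 / 3600.0)  # mas -> rad
    distance_m = shift_rad * D_M87          # small-angle projected distance
    seconds = lag_days * 86400.0
    return distance_m / seconds / C

# hypothetical inputs: a 0.04 mas core shift crossed in an 80-day lag
print(apparent_speed_c(0.04, 80.0))  # a few percent of c: sub-luminal
```

    With plausible inputs the result lands well below 0.2c, illustrating how the sub-luminal speed quoted above follows directly from the two measured quantities.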

  19. A Strong Radio Brightening At The Jet Base Of M 87 During The Elevated Very High Energy Gamma-Ray State In 2012

    DOE PAGES

    Hada, K.; Giroletti, M.; Kino, M.; ...

    2014-06-04

    We report our intensive, high-angular-resolution radio monitoring observations of the jet in M 87 with the VLBI Exploration of Radio Astrometry (VERA) and the European VLBI Network (EVN) from February 2011 to October 2012, together with contemporaneous high-energy (HE; 100 MeV < E < 100 GeV) γ-ray light curves obtained by the Fermi Large Area Telescope (LAT). During this period (specifically from February 2012 to March 2012), an elevated level of the M 87 flux was reported at very-high-energy (VHE; E > 100 GeV) γ-rays by VERITAS. We detected a remarkable (up to ~70%) increase of the radio flux density from the unresolved jet base (radio core) with VERA at 22 and 43 GHz coincident with the VHE activity. Meanwhile, we confirmed with EVN at 5 GHz that the peculiar knot HST-1, which is an alternative favored γ-ray production site located at ≳120 pc from the nucleus, remained quiescent in terms of its flux density and structure. These results in the radio bands strongly suggest that the VHE γ-ray activity in 2012 originates in the jet base within 0.03 pc or 56 Schwarzschild radii (the VERA spatial resolution of 0.4 mas at 43 GHz) from the central supermassive black hole. We further conducted VERA astrometry for the M 87 core at six epochs during the flaring period, and detected core shifts between 22 and 43 GHz, the mean value of which is similar to that measured in previous astrometric measurements. We also discovered a clear frequency-dependent evolution of the radio core flare at 43, 22, and 5 GHz; the radio flux density increased more rapidly at higher frequencies with a larger amplitude, and the light curves clearly showed a time lag between the peaks at 22 and 43 GHz, the value of which is constrained to be within ~35-124 days. This indicates that a new radio-emitting component was created near the black hole in the period of the VHE event, and then propagated outward with progressively decreasing synchrotron opacity.
    By combining the obtained core shift and time lag, we estimated an apparent speed of the newborn component propagating through the opaque region between the cores at 22 and 43 GHz. We derived a sub-luminal speed (less than ~0.2c) for this component. This value is significantly slower than the super-luminal (~1.1c) features that appeared from the core during the prominent VHE flaring event in 2008, suggesting that the stronger VHE activity can be associated with the production of the higher-Lorentz-factor jet in M 87.

  20. Geochemical cycles in sediments deposited on the slopes of the Guaymas and Carmen Basins of the Gulf of California over the last 180 years

    USGS Publications Warehouse

    Dean, W.; Pride, C.; Thunell, R.

    2004-01-01

    Sediments deposited on the slopes of the Guaymas and Carmen Basins in the central Gulf of California were recovered in two box cores. Q-mode factor analyses identified detrital-clastic, carbonate, and redox associations in the elemental composition of these sediments. The detrital-clastic fraction appears to contain two source components: a more mafic component presumably derived from the Sierra Madre Occidental along the west coast of Mexico, and a more felsic component most likely derived from sedimentary rocks (mostly sandstones) of the Colorado Plateau and delivered by the Colorado River. The sediments also contain significant siliceous biogenic components and minor calcareous biogenic components, but those components were not quantified in this study. Redox associations were identified in both cores based on relatively high concentrations of molybdenum, which is indicative of deposition under conditions of sulfate reduction. Decreases in concentrations of molybdenum in younger sediments suggest that the bottom waters of the Gulf have become more oxygenated over the last 100 years. Many geochemical components in both box cores exhibit distinct cyclicity with periodicities of 10-20 years. The most striking are 20-year cycles in the more mafic components (e.g., titanium), particularly in sediments deposited during the 19th century. In that century, the titanium cycles are in very good agreement with warm phases of the Pacific Decadal Oscillation, implying that at times of greater influx of titanium-rich volcanic debris there were more El Niños and higher winter precipitation. The cycles are interpreted as reflecting greater and lesser riverine influx of volcanic rock debris from the Sierra Madre. There is also spectral evidence for periodicities of 4-8 and 8-16 years, suggesting that the delivery of detrital-clastic material responds to some multiannual (ENSO?) forcing.
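
    Cycle detection of the kind used here can be sketched with a simple periodogram. The record below is synthetic, with an assumed 20-year sine cycle plus noise standing in for the titanium time series; it is not the cores' data.

```python
import numpy as np

# synthetic annual "titanium" record with an assumed 20-year cycle
years = np.arange(1820, 2000)
rng = np.random.default_rng(2)
ti = np.sin(2.0 * np.pi * years / 20.0) + 0.3 * rng.normal(size=years.size)

# periodogram of the demeaned series; the dominant spectral peak
# should sit near a frequency of 1/20 yr^-1
freqs = np.fft.rfftfreq(years.size, d=1.0)
power = np.abs(np.fft.rfft(ti - ti.mean()))**2
dominant_period = 1.0 / freqs[np.argmax(power[1:]) + 1]
print(round(dominant_period))  # -> 20
```

    On real core data one would use an evenly resampled, detrended series and report significance against a red-noise background, but the spectral-peak logic is the same.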
